text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringclasses 91
values | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
|---|---|---|---|---|---|
On 04.08.2014 16:53, Andrey Borzenkov wrote: > В Mon, 04 Aug 2014 10:45:22 +0400 > Stanislav Kholmanskikh <address@hidden> пишет: > >> Hi! >> >> On 08/01/2014 07:40 PM, Vladimir 'φ-coder/phcoder' Serbinenko wrote: >>> On 01.08.2014 17:35, Andrey Borzenkov wrote: >>>> В Fri, 1 Aug 2014 16:15:56 +0400 >>>> Stanislav Kholmanskikh <address@hidden> пишет: >>>> >>>>> Early versions of binutils doesn't support --no-relax flag, so >>>>> commit 063f2a04d158ec1b275a925dfbae74b124708cde prevents building >>>>> with such versions. >>>>> >>>>> Signed-off-by: Stanislav Kholmanskikh <address@hidden> >>>>> --- >>>>> conf/Makefile.common | 8 ++++++++ >>>>> configure.ac | 10 ++++++++++ >>>>> 2 files changed, 18 insertions(+), 0 deletions(-) >>>>> >>>>> diff --git a/conf/Makefile.common b/conf/Makefile.common >>>>> index e4c301f..5bda66f 100644 >>>>> --- a/conf/Makefile.common >>>>> +++ b/conf/Makefile.common >>>>> @@ -8,11 +8,19 @@ unexport LC_ALL >>>>> # Platform specific options >>>>> if COND_sparc64_ieee1275 >>>>> CFLAGS_PLATFORM += -mno-app-regs >>>>> +if COND_LD_SUPPORTS_NO_RELAX >>>>> LDFLAGS_PLATFORM = -Wl,-melf64_sparc -Wl,--no-relax >>>>> +else >>>>> + LDFLAGS_PLATFORM = -Wl,-melf64_sparc -mno-relax >>>>> +endif >>>> >>>> TBO I think commit should simply be reverted. "Uniformity" is rather >>>> poor excuse for breaking existing systems. >>>> >>> This commit is needed for clang to compile for sparc64. Given that >>> sparc64 clang still doesn't really work I'm ok with reverting, at least >>> for now. >> >> But, it this case, maybe it would be better to consider >> reviewing/applying this patch? Just to not return to this issue after >> some time? >> >> Andrey, Vladimir, what do you think? >> > > Yes, commit message was pretty confusing. This leaves the question, > whether combination of clang and binutils that do not support > -Wl,--no-relax exists though :) Otherwise I agree, we should use this > patch. > I think we could try to push for clang to have -mno-relax. They're usually pretty responsive and we'll probably need few fixes for few other clang problems anyway. For now I just reverted it. We'll see how clang sparc64 goes. > _______________________________________________ > Grub-devel mailing list > address@hidden > >
signature.asc
Description: OpenPGP digital signature
|
https://lists.gnu.org/archive/html/grub-devel/2014-09/msg00047.html
|
CC-MAIN-2022-05
|
refinedweb
| 317
| 60.11
|
Things That Go Boom
A colleague told me the other day he didn't want his system to "go boom." I laughed and noted that many of the systems I've worked on over my career are supposed to go boom. You just want don't want them to go boom at the wrong time.
I think that makes things a little harder. It is easier to make a safe light bulb than to make a safe explosive. The explosive is supposed to be dangerous. So while testing is important across the board, it is especially important when you have systems that can cause significant injury or property loss.
Up until fairly recently, the kinds of things I built — especially the dangerous ones — rarely had GUIs in the traditional sense of the word. It was pretty easy to create test drivers that would exercise the system so you could simplify a large number of comprehensive tests. When you start throwing GUIs into the mix, that gets a little harder.
What you really need to produce those kind of tests (and, possibly, scripting for end users) is a way to fake keyboard and mouse events into the system. Usually, how you do this is highly system-dependent. I talked about a Windows solution for this way back in the 1990s. There are other ways to accomplish the same task under Windows, and on Linux you might resort to xdotool or some similar tools. You could also insert input events into X or into the underlying system using lots of different methods.
If you are just interested in testing, though, you might also want a more universal solution that frees you from worrying about exactly how the job gets done. I have a love/hate relationship with Java, but I will confess this is one area where Java is a great choice. Assuming, of course, that you have Java available. But if you have a system big enough to have a GUI, I'm guessing you have (or can easily get) Java to run on it.
The secret is to use the robot feature of the old Java AWT (Abstract Windows Toolkit) library. Unless you are an old-school Java programmer, you probably haven't used AWT since most people now use Swing or some other GUI library. But the robot feature works even if you don't use the rest of AWT.
Using the
Robot class is fairly simple. You need the
import, of course:
import java.awt.Robot;
You also need to create a
Robot object:
public class Action { private static Robot robot;
You can use the object's
delay method to pause for a given number of milliseconds, which is handy when dealing with keyboard and mouse handling. You can also use methods like
mousePress,
mouseMove,
mouseWheel,
keyPress, and
keyRelease to simulate input events:
… robot.mousePress(InputEvent.BUTTON1_MASK); robot.delay(200); robot.mouseRelease(InputEvent.BUTTON1_MASK); robot.delay(200); … robot.keyPress(65); // A robot.delay(25); robot.keyRelease(65);
You might prefer to set an automatic delay time for every event by calling
setAutoDelay. You can also wait for all events to finish by calling
waitForIdle.
If you really want to get fancy, you can even ask the robot to read the pixel color at a given screen location or do a screen capture. This allows for some very sophisticated scripting, and since most embedded systems have a very fixed screen size and font selections, it might even be more reasonable to code a test like this than it would be on a desktop system.
I've posted to this blog for about 5 years and it is with a heavy heart that I close this, the final entry. As you probably know by now, Dr. Dobb's will be frozen in January and while the 25 years of articles and postings I've contributed will still live on, there won't be any more, which is very sad for me both as a long time writer and an even longer-time reader.
I wish I could thank everyone that I learned from at DDJ over the years, but there's simply too many. Like so many, I owe Jon Erickson pretty much everything having to do with my writing career. But I also learned a lot from countless other people associated with DDJ over the years. I also learned a lot by interacting with our readers by mail (back in the day), e-mail, and at conferences. Our readers tended to be a cut above (or, maybe, several cuts above) and I was never above asking a job applicant if they read DDJ.
Perhaps I'll reorganize my personal blog, which never really recovered from a failed hacking attempt, or add some new articles to my embedded site. Maybe I'll use the time to write some more books. Or perhaps I'll find a new writing home somewhere. But I can't imagine a new home that has the history, breadth, and technical weight of Dr. Dobb's Journal.
Thanks for letting me share my projects and ideas with you over the years. I hope to see you somewhere soon where hardcore technical discourse finds a home.
|
http://www.drdobbs.com/embedded-systems/things-that-go-boom/240169445?cid=SBX_ddj_related_commentary__Other_embedded_systems&itc=SBX_ddj_related_commentary__Other_embedded_systems
|
CC-MAIN-2019-13
|
refinedweb
| 869
| 69.72
|
I'm trying to create N balanced random subsamples of my large unbalanced dataset. Is there a way to do this simply with scikit-learn / pandas or do I have to implement it myself? Any pointers to code that does this?
These subsamples should be random and can be overlapping as I feed each to separate classifier in a very large ensemble of classifiers.
In Weka there is tool called spreadsubsample, is there equivalent in sklearn?
(I know about weighting but that's not what I'm looking for.)
Here is my first version that seems to be working fine, feel free to copy or make suggestions on how it could be more efficient (I have quite a long experience with programming in general but not that long with python or numpy)
This function creates single random balanced subsample.
edit: The subsample size now samples down minority classes, this should probably be changed.
def balanced_subsample(x,y,subsample_size=1.0): class_xs = [] min_elems = None for yi in np.unique(y): elems = x[(y == yi)] class_xs.append((yi, elems)) if min_elems == None or elems.shape[0] < min_elems: min_elems = elems.shape[0] use_elems = min_elems if subsample_size < 1: use_elems = int(min_elems*subsample_size) xs = [] ys = [] for ci,this_xs in class_xs: if len(this_xs) > use_elems: np.random.shuffle(this_xs) x_ = this_xs[:use_elems] y_ = np.empty(use_elems) y_.fill(ci) xs.append(x_) ys.append(y_) xs = np.concatenate(xs) ys = np.concatenate(ys) return xs,ys
For anyone trying to make the above work with a Pandas DataFrame, you need to make a couple of changes:
Replace the
np.random.shuffle line with
this_xs = this_xs.reindex(np.random.permutation(this_xs.index))
Replace the
np.concatenate lines with
xs = pd.concat(xs)
ys = pd.Series(data=np.concatenate(ys),name='target')
|
https://codedump.io/share/a2kJoTl7EBZW/1/scikit-learn-balanced-subsampling
|
CC-MAIN-2018-09
|
refinedweb
| 294
| 68.47
|
In the web application some times you need to manage your files or allow your users to manage their files. In this time you need a file manager and creating a powerful solution is time consuming, but you don't have enough time. This was the story of my work when I needed it. I think if I write a powerful file manager that is simple to use and portable, did my best.
First step of any project get my requirements I need a file manager has this features:
- Create new directory
- Create new html/text file
- Edit html/text files
- Upload file
- Upload and unzip files in to destination directory
- Create Archive in server (in zip format)
- Show list of Files/Directory
- Show Attribute of Files
- Delete files
- Delete Directory
- Pure ASP.net (without any configuration)
- Skin able (change with CSS)
- User Friendly
- Security
- Easy to install
Using power file manager (in short PFM) is very simple to install. In first step copy files and directories in your web application then copy bin files to your bin directory next add link of PFM in your app.
Now your app has one PFM.
To change skin of PFM you have many customizations
PFM use MCE_tiny Editor to Create/Edit Html File. This is a powerful editor and you can change it with any Html Editor compatible with asp.net but this is not a limitation.
This is simple on PFM because PFM has simple structure and you can control it by few changes in code for examples if you need change user permission to avoid deleting file, you can hide delete file in command line and so one.
This is very simple but PFM on this Article doesn't have any control on user and its sample of PFM. For making it to your interest you should change it.
If you want to use it for multi users you have to change selection method. as a suggestion you can use cookie.
To create or Open Archive I use "ICSharpCode.SharpZipLib" Library. This API is cool and is simple to work with ZIP Archive (Under GNU).
My solution to show a list of files use an ASP Table, but if you know about asp table this is very simple .for this attention I wrote three classes to wrap and use easily
public class MyTableCell:TableCell //Describe new Table Cell public class MyLinkButton:Label //Describe Link button (text + image) public class MyImageButton:Label //Describe Image button
Now all things are ready to create a list of files and directories:
string p = Server.MapPath(startpath); if (!Directory.Exists(p)) { MultiView1.SetActiveView(View2); return; } string[] fns = Directory.GetFiles(p); string[] dns = Directory.GetDirectories(p); string uplevel = startpath; if (uplevel.LastIndexOf("\\") > 0) uplevel = uplevel.Substring(0, uplevel.LastIndexOf("\\")); Table1.Rows.Clear(); Table1.CssClass = "ms-list8-main"; TableRow tr = new TableRow();; tr.Cells.Add(head); Table1.Rows.Add(tr);); PrintList(dns, false); PrintList(fns, true); tr = new TableRow(); MyTableCell mtc = new MyTableCell(" ", "ms-list8-bottom"); mtc.ColumnSpan = 5; tr.Cells.Add(mtc); Table1.Rows.Add(tr); MultiView1.SetActiveView(View2);
At first we clear all table cells
Table1.Rows.Clear(); Table1.CssClass = "ms-list8-main";
Now create new Table row
TableRow tr = new TableRow();
So, create new Table Cells for command button bar;
Set column span to 5 to balance the table. After this add head to table row and table
tr.Cells.Add(head); Table1.Rows.Add(tr);
Now we must create new row for title of each cells);
In this time we have Table with command bar and title bar and can add list of directory's and files
PrintList(dns, false); // dns is list of directories name PrintList(fns, true); // fns is list of files name
Print list is a procedure with two parameters to show list in table
private void PrintList(string[] fns, bool isfiles)
Other source code of create list is very easy and don't need any more comment.
In this project we have many pages to create, delete, rename and etc….
On the other hand we must create project in one page because it must be portable, in this case my idea to solve this problem is using Multiview solution.
Multiview is an ASP.net tools to create many pages in one page.
How to handle views or user commands, when we use multi view?
To answer this question we have two choices
Event handling Or Query Handling (QH)
Event Handling is not very flexible on the other hand QH is very flexible.
QH is an old, but very powerful solution.
So, answer is QH.
if (!IsPostBack) { string w = Request["goto"]; if (w == null) ViewList(); else { switch (w) { case "addnew": MultiView1.SetActiveView(View1); Label4.Text = "Create Text File"; TextBox1.ReadOnly = false; Button7.Visible = false; break; case "addnew2": MultiView1.SetActiveView(View1); Label4.Text = "Create Text File"; TextBox1.ReadOnly = false; Button7.Visible = true; break; case "listview": ViewList(); break; case "edit": editThisfile(); break; case "ren": renamethisfile(); break; case "del": deleteThisfile(); break; case "info": information(); break; case "link": showlink(); break; case "newFolder": MultiView1.SetActiveView(View7); break; case "Upload": MultiView1.SetActiveView(View8); break; case "zip": CreateZip(); break; case "AddZip": AddtoZip(); break; case "RemoveZip": RemoveZip(); break; case "ReloadZip": ReloadZip(); break; case "DeletArch": DeletArch(); break; default: break; } } }
If you have a great project you will have huge errors! To solve this problem you must monitoring your errors and save them when users work with your application. Dr. Watson was a person who wrote any events. So now we have a class with this title.
DrWatson (our class) just writes thrown exceptions in a text file called "DrWatson.txt" with date/time and source of throw.
catch (Exception ex) { DrWatson.SaveError(ex); }
General
News
Question
Answer
Joke
Rant
Admin
|
http://www.codeproject.com/KB/aspnet/PFM1.aspx
|
crawl-002
|
refinedweb
| 951
| 64
|
A good and complete Continuous Integration (auto-build, auto-test, etc.) system should include Source Control Management (SCM) component and Issue Tracking component. And these components should be well integrated.
In our company, historically the following were used:
CruiseContril.NET integrates with VSS easily and smoothly. The missing part was integration between VSS and OnTime.
The solution described in the article provides for a convenient 2-way integration between VSS and OnTime.
Interface to OnTime is localized in the code and therefore the solution may be easily modified to provide for VSS bridging with any other Issue Tracking tool.
A typical workflow should be as follows:
OnTime is a software package from Axosoft that provides for:
and more. You may find information on the product here.
It has enough functionality be used as an Issue Tracker too. It has:
OnTime even has the SCM connectivity:
The following picture shows SCM tab on the item dialog:
So, theoretically, OnTime may be used as an Issue Tracker component in Continuous Integration system. But practically, it does not work.
Because it is one-way connectivity:
To overcome this deficiency, I have developed a custom solution – VSS add-in:
Plug-in intercepts VSS check-in event and invokes VSS-OnTime Bridge dialog:
The developer selects the OnTime item to associate the file with and clicks Select button.
Refresh button might be useful, if the developer does not see his work item in the list. It means the work item was not properly assigned to the developer. In that case, the developer
Let us verify how it worked. Check the file history of the associated file in VSS.
Note it has been labelled as D1091 (Defect 1091):
Now check the item in OnTime: association with specific version of the source file was added under SCM tab:
To include the item into the next build:
These are well described in this article.
After you have built and registered your add-in (Visual Studio will register it for you during the build, by the way), do not forget to place ssaddin.ini into the same folder where ssapi.dll resides.
Typically: C:\Program Files\Microsoft Visual SourceSafe Ssaddin.ini is a text file with just one line:
Microsoft.SourceSafe.EventHandlerGF = 1
The solution has just 3 classes:
OTAdapter
Scm2OtForm
IvssEventHandlerGF
public class IvssEventHandlerGF : IVSSEventHandler, IVSSEvents
which I have humbly embroidered with my initials – “GF”, knows nothing about OnTime. It just implements IVSSEventHandler interface and has just one additional method:
IVSSEventHandler
private int RequestLink(VSSItem vssItem)
This method is called every time the check-in or add-file event is intercepted.
This method just creates the instance of Scm2OtForm class and calls its AddLink() method.
Scm2OtForm
AddLink()
There are few more tricks around this call though.
Trick 1: That would be nice to persist the size of the dialog. Imagine you have carefully resized the dialog so that all details of the work items are visible in the datagrid. And on the next check-in, you are surprised that the dialog shrank back to the default size. Not nice. I am sure there are many smart ways and classes that persist the size of the dialog better then I did. Being a novice to C# and .NET stuff, I did it in a simple and straight forward way.
datagrid
Trick 2: That would be nice to persist your selection. Imagine you are checking many files in order to complete your work item. You are checking files one by one, and for each one you have to select the type of the work item (Defect, Task or Feature) and then select the item from the list. Much better if the last selected item would be selected by default. And all you have to do is just click Select button.
Trick 3: That would be nice to provide for a batch check-in. Imagine you are creating the whole folder in VSS and populating it with a bunch of files. And for each file, you will have to click the same Select button. There is a hard-coded (for simplicity) 2-seconds interval. If the time between events is less than the interval, the dialog is not shown and the same association label is reused for all batched check-ins.
Trick 4: When labelling the file, VSS creates a new version from the original one. And this creates the problem: OnTime version is now 1 less than the VSS version.
Workaround:
linkDlg.AddLink()
public class Scm2OtForm : Form
also does not know anything about OnTime.
OnTime
It has private member:
private
private OnTimeAdapter.OTAdapter _otAdapter;
which has all the knowledge about OnTime and may be easily replaced with another adapter to another Issue Tracker.
This is the pop-up dialog (see the picture above) that displays the grid of work items and prompts to select the item to associate the VSS item with.
public class OTAdapter
has all the knowledge about OnTime, including the SQL connection string.
I have mine hard-coded for simplicity. It is easy to make it read the settings from the config file or from the registry, if necessary.
I have requested from DBA to create the specific “vss2ontime” user and have created stored procedures “sp_GF_GetDefects()”, “sp_GF_GetTasks()”, etc. to retrieve the lists of OnTime items.
vss2ontime
sp_GF_GetDefects()
sp_GF_GetTasks()
If for any reason SPs are not welcome in OnTime DB, it is easy to rewrite the code to perform direct SQL queries.
The plug-in was written a couple of years before and, unfortunately, soon after that, it was decided to decommission OnTime and switch to another item tracking system.
The switching is still not complete though.
As a result, my code stays in its initial raw state without any major changes or improvements.
I still do believe it might be useful to dev teams that are using OnTime or similar trackers with VSS.
Guennadi Filkov
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Overview
Microsoft Visual Studio Team System 2008 Team Explorer can be used as a standalone rich client for accessing Visual Studio Team System 2008 Team Foundation Server. It enables users to join the software development life cycle with integrated support for source control, work item tracking, build management, and detailed reports showing bug trends, project velocity and quality indicators.
You must have a Team Foundation Server Client Access License (CAL) to use Team Explorer to access a Team Foundation Server.
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
|
http://www.codeproject.com/Articles/76002/VSS-OnTime-Bridge?fid=1570036&df=10000&mpp=10&sort=Position&spc=None&tid=3456469
|
CC-MAIN-2017-17
|
refinedweb
| 1,107
| 62.68
|
A VeryGood[tm] name for a VeryGood[tm] Algorithm.
In a previous document I described how bayesian models can recursively update, thus making them ideal as a starting point for designing streaming machine learning models. In this document I will describe a different method proposed by Crammer et al. which includes a passive agressive approach to model updates. I will focus on intuition first before moving to the mathy bits. At the end I will demo some sklearn code with implementations of these models.
Let’s say you’re doing a regression for a single data point \(d_i\). If you only have one datapoint then you don’t know what line is best, despite this fact we can come up with lines that will fit the data point perfectly. For example; The yellow line will pass through the point perfectly and so will the blue line. There are many lines we could come up with but we’ll keep these two in mind.
We can describe all the lines that will go through the line perfectly by describing the lines in terms of the slope and intercept. If we make a plot of the weight space of the linear regression (\(w_0\) is the constant and \(w_1\) is the slope) we can describe all the possible perfect fits with a line. You’ll note that our original blue line is now denoted with a blue dot, but we are referring to the same thing. Same for the yellow line and yellow dot.
Any point on that line is as good as far as \(d_i\) is concerned, so how might we go about selecting the optimal one? How about we compare it to the weights that the regression had before it came across this one point? Let’s call these weights \(w_{\text{orig}}\) and in the case of this being the first datapoint we might imagine that \(w_{\text{orig}}\) is in the origin. In that case the blue regression seems better than the yellow one but is it the best choice?
This is where we can use maths to find the coordinate on the line that is as close as our original weights \(w_{\text{orig}}\). In a very linear system you can get away with linear algebra but depending on what you are trying to do you may need to introduce more and more maths to keep the update rule consistent.
To prevent the system from becomming numerically unstable we may also choose to introduce a limit on how large the step size may be (no larger than \(C\)). This way, we don’t massively overfit to outliers. Also, we probably only want to update our model if our algorithm makes a very large mistake. We can then make an somewhat agressive update and remain passive at other times. Hence the name! Note that this approach will for slightly different for system that do linear classification but the idea of passive agressive updating can still be applied.
For those who are interested in the formal maths: check the appendix.
It should be relatively easy to implement this algorithm but you don’t need to because scikit learn has support for this algorithm. Scikit learn is great; be sure to thank people who contribute to the project.
For the regression task let’s compare how well the algorithm performs on some simulated data. We will compare it to a normal, batch oriented, linear regression.
import numpy as np import sklearn from sklearn.datasets import make_regression X, y, w = make_regression(n_features=3, n_samples=2000, random_state=42, coef=True, noise=1.0) mod_lm = sklearn.linear_model.LinearRegression() mod_lm.fit(X, y) batch_acc = np.abs(mod_lm.predict(X) - y).sum() start_c = <value> warm_c = <value> mod_pa = sklearn.linear_model.PassiveAggressiveRegressor(C=start_c, warm_start=True) acc = [] coefs = [] for i, x in enumerate(X): mod_pa.partial_fit([x], [y[i]]) acc.append(np.abs(mod_pa.predict(X) - y).sum()) coefs.append(mod_pa.coef_.flatten()) if i == 30: mod_pa.C = warm_c
You’ll notice in the code that I’ve added a starting value for \(C\) (
start_c) and a value for when it has partly converged (
warm_c). This code for illustration, as it is a strange assumption that the algorithm is “warm” after 30 iterations.
You can see in the plots below what the effect of this is.\[c_{\text{start}} = 1, c_{\text{warm}} = 1\]
The first plot shows the mean squared error over the entire set after the regression has seen more data. The orange line demonstrates the baseline performance of the batch algorithm. The second plot demonstrates how the weights change over time. In this case you can confirm that the MSE fluctuates quite a bit.
\[c_{\text{start}} = 0.1, c_{\text{warm}} = 0.1\]
The fluctuations are small, but the algorithm seems to need a lot of data before the regression starts to become sensible.
\[c_{\text{start}} = 3, c_{\text{warm}} = 0.1\]
You can now see that the fluctuations are still very small but the large steps that are allowed in the first few iterations ensure that the algorithm can converge a bit globally before it starts to limit itself to only local changes.
We can repeat this exercise for classification too.
import numpy as np import sklearn from sklearn.datasets import make_classification X, y = make_classification(n_samples=4000, n_features=2, n_redundant=0, random_state=42, n_clusters_per_class=1) mod_lmc = sklearn.linear_model.LogisticRegression() normal_acc = np.sum(mod_lmc.fit(X, y).predict(X) == y) start_c = <value> warm_c = <value> mod_pac = sklearn.linear_model.PassiveAggressiveClassifier(C=start_c) acc = [] coefs = [] for i, x in enumerate(X): mod_pac.partial_fit([x], [y[i]], classes=[0,1]) coefs.append(mod_pac.coef_.flatten()) acc.append(np.sum(mod_pac.predict(X) == y)) if i == 30: mod_pac.C = warm_c
The code is very similar, the only difference is that we are now working on a classification problem.
\[c_{\text{start}} = 1, c_{\text{warm}} = 1\]
The first plot denotes performance again but this time it is measured by accuracy. The second plot shows the weights again. In the current setting you see a lot of noise and the performance is all over the place. The effect seems greater than with the regression when \(c_{\text{start}} = 1, c_{\text{warm}} = 1\). This is because the optimal weights are a lot smaller. In the regression case they were around 30-40 while here they are around 2.5 and 1. This means that a maximum step size of 1 is relatively seen very large.
\[c_{\text{start}} = 0.1, c_{\text{warm}} = 0.1\]
The performance is much better, especially when you consider that the y-axes on the charts are different.
It seems like this \(C\) hyperparameter is something to keep an eye on if these sorts of algorithms are within your interest.
It is nice to see that we’re able to have a model work in a streaming setting. I would wonder how often this is useful though, mainly because you need a label to be available in streaming to actually be able to learn from it. One can wonder, if the label is available at streaming then why have the need to predict it?
Then again, all passive agressive linear models have a nice and simple update rule and this can be extended to include matrix factorisation models. For large webshops this is very interesting because it suggests a method where you may be able to update per click. For a nice hint at this, check out this presentation from flink forward.
The bayesian inside of me can’t help but see an analogy to something bayesian happening here too. We update our prior belief (weights at time \(n\)) if we see something that we did not expect. By doing this repeatedly we come to a somewhat recusive model that feels very similar to bayesian updating.
\[ p(w_{n+1}| D, d_{n+1}) \propto p(d_{n+1} | w_n) p(w_n)\]
Bayesian updating seems more appealing because we get to learn from every datapoint which also has a regularisation functionality (it removes both the passive and the agressive aspect from the update rule). The main downside will be the datastructure though, which is super easy with this passive aggressive approach.
Hopefully the intuition makes sense by now which means we can discuss some of the formal maths. Feel free to skip if it does not appeal to you. Again, you can find more details in the original paper found here.
We will now discuss linear regression and I’ll leave the classification case up to the paper, but the proofs are very similar. In these tasks we are looking for new weights \(\mathbf{w}\) while we currently have weights \(\mathbf{w}_t\). We will assume that the current timestep we have a true label \(y_t\) that belongs to the data we see \(x_t\).
We have a new point at time \(t+1\) that we’ve just observed and we’re interested in making the smallest step in the correct direction such that we perfectly fit the new point. In maths we’d like to find \(\mathbf{w}_{t+1}\) that is defined via;
\[ \mathbf{w}_{t+1} = \text{argmin} \frac{1}{2} ||\mathbf{w} - \mathbf{w}_t||^2 \]
We do want to make sure that we adhere to our constraint, this new datapoint needs to be fitted perfectly such that \(l(\mathbf{w}; (\mathbf{x}_t, y_t)) = 0\).
Here \(l(\mathbf{w}; (\mathbf{x}_t, y_t))\) is the loss function of the regression. Let’s define that more properly. The idea is that we only update our system when our prediction makes an error that is too big.
If the error margin is greater than \(\epsilon\);
\[ l_t^R = l(\mathbf{w}; (\mathbf{x}_t, y_t)) = |\mathbf{w} \mathbf{x}_t - y_t| - \epsilon \]
To optimise these systems we’re going to use a lagrangian trick. It means that we are going to introduce a parameter \(\tau\) which is meant to be a ‘punishment variable’. This will relieve us of our constraint because we will pull into the function that we are optimising. If we search in a region that we cannot use we will be punished by a value of \(\tau\) for every unit of constraint violation.
With this trick, we can turn our problem into this;
\[ L^R(\mathbf{w}, \tau) = \frac{1}{2} ||\mathbf{w} - \mathbf{w}_t||^2 + \tau l_t^R \]
Now in order to optimise we will start with a differentiation.
\[ \frac{\delta L^R}{\delta \mathbf{w}} = \mathbf{w} - \mathbf{w}_t + \tau \mathbf{x}_t \frac{\mathbf{w}\mathbf{x}_t - y_t}{|\mathbf{w}\mathbf{x}_t - y_t|} = 0 \]
\[ \mathbf{w} = \mathbf{w}_t - \tau \mathbf{x}_t \frac{\mathbf{w}\mathbf{x}_t - y_t}{|\mathbf{w}\mathbf{x}_t - y_t|} \]
We now have an optimum for \(\mathbf{w}\) but we still have a variable \(\tau\) hanging around. Let’s use our newfound \(\mathbf{w}\) and replace it in \(L^R(\mathbf{w}, \tau)\).
\[\begin{equation} \label{eq1} \begin{split} L^R(\tau) & = \frac{1}{2}\left(-\tau \mathbf{x}_t \frac{\mathbf{w}\mathbf{x}_t - y_t}{|\mathbf{w}\mathbf{x}_t - y_t|}\right)^2 + \tau |\mathbf{w} \mathbf{x}_t - y_t| \\ & = \frac{1}{2}\tau^2||\mathbf{x}_t||^2 + \tau |\mathbf{w} \mathbf{x}_t - y_t| \end{split} \end{equation}\]
Once again, we have something we can differentiate. If we differentiate and solve we get the optimal value for \(\tau\).
\[ \tau = -\frac{|\mathbf{w} \mathbf{x}_t - y_t|}{||\mathbf{x}_t||^2} = \frac{l_t^R}{||\mathbf{x}_t||^2}\]
You can even introduce a maximum stepsize \(C\) if you want.
\[\mathbf{w}^* = \mathbf{w}_t - \tau \mathbf{x}_t \frac{\mathbf{w}\mathbf{x}_t - y_t}{|\mathbf{w}\mathbf{x}_t - y_t|}\] \[ \tau_t^* = \text{min} \left(C, \frac{l_t^R}{||\mathbf{x}_t||^2} \right) \]
We have closed from solutions! For a more formal deep dive I’ll gladly refer you to the original work.
I think the proof might’ve been simpler with mere linear algebra but this is the proof the paper supplied.
|
https://koaning.io/posts/passive-agressive-algorithms/
|
CC-MAIN-2020-16
|
refinedweb
| 2,012
| 63.49
|
Opened 5 years ago
Closed 4 years ago
#7574 closed bug (fixed)
Register allocator chokes on certain branches with literals
Description (last modified by )
While running the test for #7571 (test is in #7573,) under WAY=normal instead of WAY=llvm, I encountered this bug in the native backend:
=====> T7571(normal) 6 of 6 [0, 0, 0] cd . && '/Users/a/code/haskell/ghc/inplace/bin/ghc-stage2' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history -c T7571.cmm -no-hs-main >T7571.comp.stderr 2>&1 Compile failed (status 256) errors were: ghc-stage2: panic! (the 'impossible' happened) (GHC version 7.7.20130113 for x86_64-apple-darwin): allocateRegsAndSpill: Cannot read from uninitialized register %vI_c7
The test in question is:
#include "Cmm.h" testLiteralBranch (W_ dst, W_ src) { if (1) { prim %memcpy(dst, src, 1024, 4); } else { prim %memcpy(dst, src, 512, 8); } return (); }
If you comment out the branch conditionals, the test passes, so clearly something fishy is going on here. The test also fails if you change the condition to
if (1 == 1)
I have absolutely no idea how this did not trip the profiling-based build in StgStdThunks.cmm, like in the LLVM build c.f. #7571
Change History (30)
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
comment:3 Changed 5 years ago by
comment:4 Changed 5 years ago by
comment:5 Changed 5 years ago by
The problem here is that the else-block becomes unreachable after
cmmStmtConFold optimises away the conditional, and the register allocator doesn't like unreachable code.
The right fix is to discard unreachable code before register allocation. It already does a strongly-connected-component analysis so this shouldn't be too hard, but IIRC when I tried to do this before it wasn't straightforward because
Digraph doesn't expose the right bits. We need an SCC pass that starts from a particular node, not the entire set of nodes.
comment:6 Changed 5 years ago by
comment:7 Changed 5 years ago by
comment:8 Changed 5 years ago by
comment:9 Changed 5 years ago by
This bug also affects the "statistics" library. To reproduce, try to install statistics-0.10.2.0 from hackage with profiling enabled using GHC HEAD. It fails with a similar message
[ 4 of 39] Compiling Statistics.Transform ( Statistics/Transform.hs, dist/build/Statistics/Transform.p_o ) ghc-stage2: panic! (the 'impossible' happened) (GHC version 7.7.20130407 for i386-unknown-linux): allocateRegsAndSpill: Cannot read from uninitialized register %vI_cdUj
comment:10 Changed 5 years ago by
So this ticket is looking more important than it was before!
Would anyone like to look at it?
Simon
comment:11 Changed 5 years ago by
comment:12 Changed 5 years ago by
comment:13 Changed 5 years ago by
comment:14 Changed 4 years ago by
I'm going to go ahead and assign this to myself since I'm looking at it today.
comment:15 Changed 4 years ago by
comment:16 Changed 4 years ago by
@thoughtpolice - I presume you got stalled? I looked into this myself and somehow couldn't get my fix to work properly, the NCG was throwing away code that it shouldn't have been. I shall try again when I get some time, but it definitely needs fixing before 7.8.1.
comment:17 Changed 4 years ago by
Yes, I did get a bit stalled here unfortunately and worked on other things, and I merely haven't returned to it. I think I may still have part of a lingering patch sitting around, I can try to look again soon.
(Also, perhaps we should bump this to highest priority for 7.8.1.)
comment:18 Changed 4 years ago by
comment:20 Changed 4 years ago by
comment:21 Changed 4 years ago by
comment:22 Changed 4 years ago by
comment:23 Changed 4 years ago by
This bug accounts for over 90% of the compilation failures in attempting to compile hackage with HEAD.
comment:24 Changed 4 years ago by
Claiming this. I have a half-finished patch to fix it, just need a bit of time to finish it off. Sounds like a good thing to do while travelling to ICFP on Saturday.
comment:25 Changed 4 years ago by
Thank you Simon. I'm quite busy at the moment - and I've also had a partially working patch (probably similar to yours,) but mine also threw away too much code when discarding unreachable blocks. I mean to brain-dump it to you, but I imagine you can manage :)
comment:26 Changed 4 years ago by
comment:27 Changed 4 years ago by
comment:28 Changed 4 years ago by
comment:30 Changed 4 years ago by
I believe this is fixed, as always reopen the ticket if there are still problems.
If we want to make a test for this like #7571 (or just reuse it,) we'll need the patch in #7573 too.
|
https://ghc.haskell.org/trac/ghc/ticket/7574
|
CC-MAIN-2017-43
|
refinedweb
| 844
| 59.03
|
This is a tutorial we are using for Django Girls workshops
I suspect this has something to do with CSS. Here's my blog.css file:
body {
padding-left: 15px;
}
.page-header {
background-color: #C25100;;
}
.save {
float: right;
}
.post-form textarea, .post-form input {
width: 100%;
}
.top-menu, .top-menu:hover, .top-menu:visited, {
color: #ffffff;
float: right;
font-size: 26pt;
margin-right: 20px;
}
margin-bottom: 70px;
}
.post h2 a, .post h2 a:visited {
color: #000000;
}
this is my base.html file
{% load static %}
<html>
<head>
<title>Sinks Canyon Avalanche Center<">
{% if user.is_authenticated %}
<a href="{% url 'post_new' %}" class="top-menu"><span class="glyphicon glyphicon-plus"></span></a>
{% endif %}
<h1><a href="/">Sinks Canyon Avalanche Center</a></h1>
</div>
<div class="content-container">
<div class="row">
<div class="col-md-8">
{% block content %}
{% endblock %}
</div>
</div>
</div>
</body>
</html>
and my post_detail.html file:
{% extends 'blog/base.html' %}
{% block content %}
<div class="post">
{% if post.publlished_date %}
<div class="date">
{{ post.published_date }}
</div>
{% endif %}
{% if user.is_authenticated %}
<a href="{% url 'post_edit' pk=post.pk %}" class="btn btn-default"><span class="glyphicon glyphicon pencil"></span></a>
{% endif %}
<h2>{{ post.title }}</h2>
<p>{{ post.text|linebreaksbr }}</p>
</div>
{% endblock %}
ice_bear69Is this tutorial good?
ice_bear69Its description says, "Django 3.0 crash course tutorials. Building a customer management app from start to finish covering all the basics of the django framework."
ice_bear69But some of the first videos were released before December 2, 2019
ice_bear69I think I get the basics
I have a new doubt. Any hints are appreciated. So I have a bunch of buttons(called as topics) and a drop-down. I want to basically map them to blog posts. For example:- if I click on a button that says "Horror" and then click on the week dropdown and select "Week 1", it should show only blog posts that have Horror and Week 1 mapped to it(this I already did and can be selected in the admin).
Any idea on how to approach this? I tried to Google before asking here obviously but can't find a specific answer to it.
@abhishekkoothur_twitter So if I understand you correctly, you essentially want to create a search view. First you have to create a url path that's something like
posts/<str:catname>/<int:weeknum>/'. Then you need a view which has a customget_queryset` method, using techniques similar to what's described in the search tutorial you went through earlier. E. g.
def get_queryset(self): return Post.objects.filter(category=self.catname).filter(week=self.week)
Then you want a form that uses the correct widgets for collecting inputs, as you want them. You can read about widgets here. I would guess you want RadioSelect and Select. Once you've done that, you write a 'search input' view, which uses the form and forwards the user to the custom queryset view (using the HTML
<form>'s
action attribute).
Something like that anyway :)
Hello. I'm having some trouble when submitting a form that has several fields.
I have a 'Notes' model, which has a text field and other different fields, but the one I want to edit for now is the text field only. I created a
ModelForm from the
Notes model and I'm looping over all the table rows that have a note that is related to a specific project. What I want is to display all the notes at once and be able to edit whichever I want (one or more at once) and then submit them. However, when I hit the submit button after editing any of them, nothing happens, they are not changed; except when I edit the last one, which updates the database table, but puts that value in all of those rows.
save(commit=False)because I'm not doing anything with the data before saving it, but saving directly didn't change anything. I also tried to change where the submit button is, but nothing happened either. Why is this only submitting the last text field and putting that text value in all of the rows in my table?
nameattribute (
note_text, I would guess). When the user POST's their data, only the last form's
note_textvalue is sent with the POST request, I think. It effectively "overwrites" all the other forms'
note_textvalue. From your view's perspective, the view receives a POST dictionary that has a
note_textkey, paired with whatever value that was input for the last form on the page. So when you feed that dictionary of key-value pairs into the NoteForm class call, each time the same value is used.
When the user POST's their data, only the last form's note_text value is sent with the POST request, I think. It effectively "overwrites" all the other forms' note_text value. From your view's perspective, the view receives a POST dictionary that has a note_text key, paired with whatever value that was input for the last form on the page. So when you feed that dictionary of key-value pairs into the NoteForm class call, each time the same value is used.
@datalowe That's exactly what's happening, and I don't really know how to access a specific element's data from my view. When modifying an existing model instance, I thought I could access it's pk, but I haven't been able to find how to do that—or if it would actually work. Although, when adding a functionality to add a new note, there would be no pk before it's saved to the database.
I'll check the formsets and see what I can do. Thank you very much. Great as always.
@datalowe I was able to make my forms work with formsets, specifically with model formsets. This is because I am editing model instances. My current task is very simple, but I—and anyone else reading this—can build from here to more complex things.
I'm not updating anything on my forms.py file because, with this approach I ended up not using it.
Here is my view
project_id; this is why I'm not using the
forms.pyfile and doing everything here in the view.
|
https://gitter.im/DjangoGirls/tutorial
|
CC-MAIN-2020-34
|
refinedweb
| 1,032
| 66.13
|
Right, that's what I thought it should be like. Still not working, even in a fresh test scenario. Here the steps to follow through:
1. So I generate a new app using sencha app generate...
2. I add the following items to the sample view:
Code:
{ xtype: 'filebutton', text: 'File Button' }, { xtype: 'filefield', fieldLabel: 'Photo', labelWidth: 50, buttonText: 'Select Photo...' }
Code:
Worked ok for me using those steps.
Screen Shot 2013-11-06 at 3.39.54 PM.png
Screen Shot 2013-11-06 at 3.40.03 PM.png
Are you using Cmd 4.0.0.203 or higher?
Maybe re-download 4.2.1? Even reinstall Cmd?Are you a Sencha products veteran who has wondered what it might be like to work at Sencha? If so, please reach out to our human resources manager: fabienne.bell@sencha.com
As it turns out it's necessary to load the class Ext.form.field.File and not Ext.form.field.FileButton, even though the css class is used on the FileButton, not the FileField.Seems strange when FileField is not used.
Also Cmd is very buggy and flaky when using the web server with multiple projects on the same machine at the same time. Generating a new app sets some global namespace variables that cannot be reversed by refresh, build or restart the machine.
I had to recreate the entire project to reset that namespace back to the main project. The error is e.g.
"The following classes are not declared even if their files have been loaded: "testproject.Application". Please check the source code of their corresponding files for possible typos: "app/Application".
But there is no reference in the project itself to "testproject".
|
https://www.sencha.com/forum/showthread.php?274942-Framework-css-class-missing&p=1009018&viewfull=1
|
CC-MAIN-2015-35
|
refinedweb
| 287
| 78.04
|
How to Make A Discord Bot - Part 4
Hey, everyone we are back with another Discord Bot Tutorial and judging from the number of upvotes I got for Part 3 Compared to Part 1, I decided to show you all today some really cool things.
Embeds
What is an embed?
It is a special and more cool way to display text, watch this.
COMPARED TO
The cooler one is called an embed.
Using the help function example we did last tutorial, let's turn that boring text into some cool box!!!
Lets look back at what we did last tutorial:
@client.command() async def help(ctx): ##Define Embed Here await ctx.send("..........")
Now instead of using ctx.send, we will have to send an embed instead, so do this
await ctx.send(embed=embed).
So instead of passing in some text, we are passing in an embed with the name embed.
It's a good practice to give your embed a good name, but if you see later on, it gets kinda annoying.
So now we need to define our embed, and here's where it gets abit CSSsy if you get it.
Here:
@client.command() async def help(ctx): embed = discord.Embed( title = 'Help', description = "Try out these commmands below", colour = discord.Colour.orange() ) #Embed Elements await ctx.send("..........")
We will have to initialize it as a embed object, by using discord.Embed (Capital 'E').
Next, we will have to give it 3 Main Things: Title, Description and Colour
The colour is the line on the left side of the embed.
Do note that the colour of the embed can only be assigned by a discord.Colour Class (Capital 'C'). Most of the basic colours can be used by doing:
discord.Colour.gold()
Remember to add your brackets at the start and end together with your commas after every line.
Now for the elements.
There are 3 types of elements mostly used: set_author,set_footer,add_field.
The first 2 is quite self-explanatory, so I'll be focusing mainly on add_field.
Author and Footer
embed.set_footer(text = "Made By @Fishball_Noodles. Bot Realeased Since June 01,2020") embed.set_author(name = "Moderation#2520")
As shown in the picture above, a field consist of 2 part the name and value.
The name is just the subtitle of the of that section, while the value is just the main text. You can add up to as many fields you want, by doing this
embed.add_field(name = "Date & Time", value = datetime.datetime.now())
As you can see, I used embed.add_field or embed.set_footer for adding elements to the embed. If your variable name was to be something like, help_embed, you would have to type help_embed.add_field and trust me on this when you want to send bigger embeds you wouldn't want to do this.
An example of how big my embed is:
Sneak Peek on Moderation 2.0
Role Check
Ok, so you don't want random people who join your server to just clear the chat using the |clear function we talked about in Part 2. Maybe only those with the Moderator Role can do it, sure. So how we are gonna do it is making use of a role check built in function that will check the user's role before the code is being run.
@client.command() @commands.has_role('Moderators') async def clear(ctx, amount = 10): await ctx.channel.purge(limit=amount+1)
This is very self-explanatory, @commands.has_role(#role_name)
where role_name is the name of the role required in stringed form.
Lastly Don't Forget to at the run command to get your bot online and working
client.run(token)
Well done guys, if you stuck with me throughout this whole tutorial do let me know in the comments section below. Also if you encountered any problems during the entire process, feel free to ask it in the comment section. I will try to answer all your questions. Hope you found the embed portion especially useful, stay tuned for Part 5 tomorrow, where we will be covering on how to accept keyword arguments. Till next time:)
i like your tutorials
@josysalt
Im glad you are inspired to do your own .js series
Also, please go read up Markdown to style your post, cos right now it's mostly text.
:)
|
https://replit.com/talk/learn/How-to-Make-A-Discord-Bot-Part-4/46337
|
CC-MAIN-2021-17
|
refinedweb
| 721
| 75.2
|
This data augmentation feature allows you to blur your images with certain probabilities. The intensity of the blur can also be adjusted by adjusting the "Blur Limit"
The blur limit sets the range of kernel used in the convolution process to blur the image.
The blur effect is more pronounced with large kernel.
This sets the probability of the images being blurred. With large probability, the expected number of blurred images goes high. With probability 1, all images are blurred.
import albumentations as albu from PIL import Image import numpy as np transform =albu.Blur(blur_limit=3, p=0.5) image = np.array(Image.open('/some/image/file/path')) image = transform(image=image)['image'] # Now the image is blurred and ready to be accepted by the model
Automate 90% of the work, reduce your time to deployment by 40%, and replace your whole ML software stack with our platform.
|
https://hasty.ai/docs/mp-wiki/augmentations/blur
|
CC-MAIN-2022-40
|
refinedweb
| 149
| 56.35
|
phoenix_live_reload alternatives and similar packages
Based on the "Framework Components" category.
Alternatively, view phoenix_live_reload alternatives based on common mentions on social networks and blogs.
plug10.0 7.5 phoenix_live_reload VS plugA specification and conveniences for composable modules between web applications
commanded9.8 7.5 phoenix_live_reload VS commandedUse Commanded to build Elixir CQRS/ES applications
surface9.8 9.2 phoenix_live_reload VS surfaceA server-side rendering component library for Phoenix
ex_admin9.7 0.0 phoenix_live_reload VS ex_adminExAdmin is an auto administration package for Elixir and the Phoenix Framework
torch9.3 7.3 phoenix_live_reload VS torchA rapid admin generator for Elixir & Phoenix
addict9.2 0.0 phoenix_live_reload VS addictUser management lib for Phoenix Framework
phoenix_htmlPhoenix.HTML functions for working with HTML strings and templates
scrivener9.0 3.9 phoenix_live_reload VS scrivenerPagination for the Elixir ecosystem
phoenix_ectoPhoenix and Ecto integration with support for concurrent acceptance testing
react_phoenixMake rendering React.js components in Phoenix easy
corsica8.7 4.1 phoenix_live_reload VS corsicaElixir library for dealing with CORS requests. 🏖
cors_plug8.7 3.2 phoenix_live_reload VS cors_plugAn Elixir Plug to add CORS.
absinthe_plugPlug support for Absinthe, the GraphQL toolkit for Elixir
Raxx8.6 0.4 phoenix_live_reload VS RaxxInterface for HTTP webservers, frameworks and clients
scrivener_htmlHTML view helpers for Scrivener
phoenix_slimePhoenix Template Engine for Slime
params8.2 0.0 phoenix_live_reload VS paramsEasy parameters validation/casting with Ecto.Schema, akin to Rails' strong parameters.
phoenix_pubsub_redisThe Redis PubSub adapter for the Phoenix framework
kerosene7.9 0.0 phoenix_live_reload VS kerosenePagination for Ecto and Pheonix.
rummage_ectoSearch, Sort and Pagination for ecto queries
dayron7.9 0.0 phoenix_live_reload VS dayronA repository `similar` to Ecto.Repo that maps to an underlying http client, sending requests to an external rest api instead of a database
passport7.8 0.0 phoenix_live_reload phoenix_live_reload VS recaptchaA simple reCaptcha 2 library for Elixir applications.
plug_graphqlPlug (Phoenix) integration for GraphQL Elixir
sentinel7.0 0.0 phoenix_live_reload VS sentinelDEPRECATED - Phoenix Authentication library that wraps Guardian for extra functionality
plugsnag6.9 2.5 phoenix_live_reload phoenix_live_reload VS ashesA code generation tool for the Phoenix web framework
webassemblyWeb DSL for Elixir
plug_auth6.0 0.0 phoenix_live_reload VSit5.5 0.0 phoenix_live_reload VS WhatwasitTrack changes to your Ecto models
plug_statsdSend connection response time and count to statsd
plug_rest5.0 0.0 phoenix_live_reload VS plug_restREST behaviour and Plug router for hypermedia web applications in Elixir
trailing_format_plugAn elixir plug to support legacy APIs that use a rails-like trailing format:
raygun4.8 0.0 phoenix_live_reload4.6 0.0 phoenix_live_reload VS plug_jwtPlug for JWT authentication
Votex4.6 0.0 phoenix_live_reload phoenix_live_reload or a related project?
README
A project for live-reload functionality for Phoenix during development.
Usage
You can use
phoenix_live_reload in your projects by adding it to your
mix.exs dependencies:
def deps do [{:phoenix_live_reload, "~> 1.2"}] end
You can configure the reloading interval in ms in your
config/dev.exs:
# Watch static and templates for browser reloading. config :my_app, MyAppWeb.Endpoint, live_reload: [ interval: 1000, patterns: [ ...
The default interval is 100ms.
Backends
This project uses
FileSystem as a dependency to watch your filesystem whenever there is a change and it supports the following operating systems:
- Linux via inotify (installation required)
- Windows via inotify-win (no installation required)
- Mac OS X via fsevents (no installation required)
- FreeBSD/OpenBSD/~BSD via inotify (installation required)
There is also a
:fs_poll backend that polls the filesystem and is available on all Operating Systems in case you don't want to install any dependency. You can configure the
:backend in your
config/config.exs:
config :phoenix_live_reload, backend: :fs_poll
By default the entire application directory is watched by the backend. However, with some environments and backends, this may be inefficient, resulting in slow response times to file modifications. To account for this, it's also possible to explicitly declare a list of directories for the backend to watch, and additional options for the backend:
config :phoenix_live_reload, dirs: [ "priv/static", "priv/gettext", "lib/example_web/live", "lib/example_web/views", "lib/example_web/templates", ], backend: :fs_poll, backend_opts: [ interval: 500 ]
Skipping remote CSS reload
All stylesheets are reloaded without a page refresh anytime a style is detected as having changed. In certain cases such as serving stylesheets from a remote host, you may wish to prevent unnecessary reload of these stylesheets during development. For this, you can include a
data-no-reload attribute on the link tag, ie:
<link rel="stylesheet" href="" data-no-reload>
Differences between Phoenix.CodeReloader
Phoenix.CodeReloader recompiles code in the lib directory. This means that if you change anything in the lib directory (such as a context) then the Elixir code will be reloaded and used on your next request.
In contrast, this project adds a plug which injects some JavaScript into your page with a WebSocket connection to the server. When you make a change to anything in your config for live_reload (JavaScript, stylesheets, templates and views by default) then the page will be reloaded in response to a message sent via the WebSocket. If the change was to an Elixir file then it will be recompiled and served when the page is reloaded. If it is JavaScript or CSS, then only assets are reloaded, without triggering a full page load.
License
*Note that all licence references and agreements mentioned in the phoenix_live_reload README section above are relevant to that project's source code only.
|
https://elixir.libhunt.com/phoenix_live_reload-alternatives
|
CC-MAIN-2021-43
|
refinedweb
| 874
| 56.25
|
Erwan Mba Mba2,711 Points
Hey, how do ou solve the challenge on multiply view: Add a view named multiply. Give multiply a route named /multiply.
Add a view named multiply. Give multiply a route named /multiply. Make multiply() return the product of 5 * 5. Remember, views have to return strings.
from flask import Flask app = Flask(__name__) @ app.route('/<multiply>/<int:num5>/<int:num5>') def multiply(num5, num5) return '{} * {} = {}'. format (num5, num5, num5*num5)
2 Answers
Jennifer NordellTreehouse Teacher
Hi there, Erwan Mba Mba! I can see that you've put a good deal of effort into this, but a few things are going on here. First, you have some syntax errors. Namely, you're missing a colon
: after the
def multiply, and there is a space between
@ and
app. You may not have the space, and you must have the colon.
Secondly, you're doing a little more than they're asking you to right now. Right now, they want a route named
multiply that takes no numbers and always returns the result of the multiplication of 5 times 5. So it's a little hard-coded at the moment, but you'll be building that out as you go through the challenge so that it'll be flexible. Also, it's just asking you to return
"25" not
"5 x 5 = 25". Remember that when these challenges are checking for strings, they must be exact matches.
from flask import Flask app = Flask(__name__) @app.route('/multiply') # define the route def multiply(): # define multiply with no parameters return '{}'.format(5*5) # always returns 25 as a string
Hope this helps!
|
https://teamtreehouse.com/community/hey-how-do-ou-solve-the-challenge-on-multiply-view-add-a-view-named-multiply-give-multiply-a-route-named-multiply
|
CC-MAIN-2021-25
|
refinedweb
| 272
| 76.42
|
appengine-mailer 0.1
AppEngine Email Proxyappengine-mailer is an RFC2822-over-HTTP proxy for running in Google App Engine.
appengine-mailer was written by Toby White <toby@timetric.com> (@tow21) for use on the Timetric) platform. It was heavily inspired by an earlier proof of concept by Mat Clayton (@matclayton)
Google App Engine includes an API for sending emails, which lets you send up to 7000 emails a day for free; above that it costs $0.0001 per email. (Prices correct as of July 2010).
appengine-mailer exposes this API behind a URL, so that a client can POST serialized email/mime-multipart messages, and have GAE send them on its behalf; in effect using GAE as a mail proxy.
Anti-spam measures
==================
To avoid spam, Google don't let you send arbitrary email messages. There are several important caveats:
1. The From address must be the email of a registered developer on the GAE app. That is - any From address you want to use, you have to register as a developer on the app. This limits you to 50 possible From addresses. (There are some limitations on which addresses may be used - see Known Issues)
2. Google don't allow you to set arbitrary headers. The list of headers you have control over can be seen at XXX
3. Google don't give you full control over the multipart structure of the email. You can have:
* plain text emails
* plain text/HTML multipart/alternative emails
* and an arbitrary number of attachments, all of which will be appended to the end of the message.
4. There's an upper limit on the number of To/Cc/Bcc addresses that may be used on a single message. It's not documented, but in practice seems to be about 100.
Authorization
=============
Authorization is handled by means of a shared secret between client and server. Each request is sent, signed by the client's secret, and the signature is checked on the server. The server can keep several valid secrets concurrently, allowing key changes to occur seamlessly.
Deploying the server
====================
* Change app.yaml to use an application name controlled by you.
* You need to supply at least one secret key. The server will look for these in a file called GMAIL_SECRET_KEY, one key on each line.
* Optionally, you can supply a default From address, to be used when the address on the email is unsuitable. The server will look for this in a file called GMAIL_DEFAULT_SENDER.
* Having added these files, deploy to GAE.
Deploying the client
====================
The client can be used in three ways:
* Direct from a Python program
* From the command line, like the sendmail command
* As an Exim transport.
In all cases, the client needs to know the URL of the server to talk to, and a secret to use for signing the message. Sometimes, this can be specified - if unspecified, it will always fall back to looking for the server URL first in the environment variable GMAIL_PROXY_URL, then in the file /etc/envdir/GMAIL_PROXY_URL. Similarly, it will look for the secret key first in the environment variable GMAIL_SECRET_KEY, then in the file /etc/envdir/GMAIL_SECRET_KEY.
Finally, since the interface is simple HTTP, you can write a client speaking to the documented interface.
Python client
=============
The Python client only needs the module "gmail.py".
# Using the standard Python email class
from email.message import Message
msg = Message()
msg['From'] = "Toby at work <toby@timetric.com>"
msg['To'] = "Toby at home <toby@eaddrinu.se>"
msg['Subject'] = "Testing"
msg.set_payload("This is the body of the message")
# And passing them message through to gmail
from gmail import GmailProxy, MessageSendingFailure
# You need to specify the SECRET_KEY and APPENGINE_PROXY_URL
gmail_proxy = GmailProxy(SECRET_KEY, EMAIL_APPENGINE_PROXY_URL, fail_silently=False)
try:
gmail_proxy.send_mail(msg)
except MessageSendingFailure, e:
print "Failed to send message!\n%s" % e
Django example
==============
As of Django 1.2, there is support for swapping out the default email backend. Add the whole appengine-mailer directory as an app to your Django project, and then update your settings to include:
# The SECRET_KEY used will be your Django project's SECRET_KEY
Alternatively, if your servers are configured to use Exim, you can configure Exim as below, and then use Django's unmodified email backend to send mail via GAE.
Command-line client
===================
The file "gmail.py" can be used as a partial replacement for /usr/bin/mail. It has no dependencies beyond the Python stdlib (tested with 2.6, should work with earlier versions). For details of options it understands, call it with "--help".
Exim transport
==============
Because gmail.py can be used instead of /usr/bin/mail, it can be dropped into place as a transport, so that all mail going through Exim will be routed via the GAE interface. (subject to Google's limited interface.) An example transport is shown in the file 50_exim4_config_gmail_pipe. If you're using a Debian system:
* Move gmail.py to /usr/bin/gmail, and ensure that it's executable.
* Configure Exim4 to send via a smarthost.
* Replace /etc/exim4/conf.d/transport/30_exim4-config_remote_smtp_smarthost with 50_exim4_config_gmail_pipe.
* Restart Exim.
Exim will not read environment variables, so this will only look in /etc/envdir/GMAIL_PROXY_URL and /etc/envdir/GMAIL_SECRET_KEY for the relevant settings.
HTTP Interface
==============
You can send messages via the HTTP interface. The server accepts POST on its root URL, and expects two parameters in the body of the POST request. One parameter is 'msg', and is the serialized email message; the other is 'signature', and is the signature of the message, according to the signing algorithm given in Python by:
base64.encodestring(hmac.new(SECRET_KEY, msg, hashlib.sha1).digest()).strip()
The server will respond with a 204 in case of success, a 400 for client error (missing parameters, or malformed signature) and a 500 otherwise. Clients should be prepared to deal with occasional server-side failures, due to GAE downtime; use of a local queue is recommended.
------------
Known issues
============
* The From address needs to be authorized - that is, it needs to be associated with a Google account, and listed as an administrator of the GAE app. For google mail accounts (or for google Apps For Your Domain accounts). The check seems to be done by strict string comparison, not by using any nicknames or aliases. This means that you *can't* send email from:
* A google address with a +suffix on the username.
* An address which is a "nickname" of another address.
* An address which resolves to a google group.
* Reply-To won't work for people reading the email in gmail: See
- Downloads (All Versions):
- 3 downloads in the last day
- 19 downloads in the last week
- 84 downloads in the last month
- Author: Mat Clayton
- License: MIT license, see LICENSE file
- Package Index Owner: matclayton
- DOAP record: appengine-mailer-0.1.xml
|
https://pypi.python.org/pypi/appengine-mailer
|
CC-MAIN-2015-27
|
refinedweb
| 1,132
| 65.01
|
In another thread I mentioned my Sudoku game. I'm trying to get it to work in Linux. After some lengthy effort I'm getting some errors. In the game. generator.so is loaded, but I can't extract the functions.
After some work, I've made a simple test program and can't get the so to load.
// yeah.cpp
#include <stdio.h>
int p = 0;
int yeah()
{
printf( "%d\n", ++p );
return 0;
}
Compiled with:g++ -o yeah.so -shared -fPIC yeah.cpp
Compiled with:g++ -o main main.cpp -ldl
yeah.so and main are created with permissions 777. But when main is ran as ./main, I get the 'Could not load: yeah.so'
So, what am I doing wrong? Any suggestions to help me.
Set your LDPATH (or was it LD_PATH?) to point to the directory where the .so file is. Otherwise, add it to /etc/ld.so.conf (or something similar) and run ldconfig. Or run ldconfig dir/where/so/is to temporarily add it to your linker path.
--RB光子「あたしただ…奪う側に回ろうと思っただけよ」Mitsuko's last words, Battle Royale
No, use absolute paths! For example:
char soName[] = "yeah.so"
char fullName[MAX_PATH];
replace_filename(fullName, argv[0], soName, MAX_PATH);
...
ReyBrujo: That's for dynamically loading plugins!
EDIT: Hmm, still doesn't work. Interesting, because I use very similar code and it works...
--sig used to be here
Oh, you are right. With the full path it loads the library but can't get the handle to the function.
(Edited: Got it working. The problem is that the function name is mangled in C++, thus you can't just load yeah, but instead _Z4yeahv (in my case). Some magic with extern "C" should make it work.
Set your LDPATH (or was it LD_PATH?)
LD_LIBRARY_PATH
Oh I forgot he didn't have extern "C".
So, final solution:
1. extern "C" in front of the functions you export2. load .so with full path
Ok I did that and the program runs fine. It wasn't loading the text correctly, but I fixed that also. I just need to compile it statically and add it to the download page.
Thanks Guys
|
https://www.allegro.cc/forums/thread/585627
|
CC-MAIN-2020-40
|
refinedweb
| 360
| 80.17
|
Drawing arc with Pygame is just like drawing ellipse, the pygame.draw.arc method takes these arguments.
1) The screen surface
2) The color of that arc
3) The rectangle object where the arc will fit into
4) The starting angle of the arc
5) The ending angle of the arc
6) An optional line width of the arc
Below script will draw the arc which will fit in the entire width and height of the screen.
import pygame from pygame.locals import * from sys import exit from math import pi pygame.init() screen = pygame.display.set_mode((640, 480), 0, 32) while True: for event in pygame.event.get(): if event.type == QUIT: exit() angle = 0.5*pi*2. screen.fill((255,255,255)) pygame.draw.arc(screen, (0,0,0), (0,0,639,479), 0, angle, 3) pygame.display.update()
The above script will produce the following outcome.
|
http://gamingdirectional.com/blog/tag/draw-arc-with-pygame/
|
CC-MAIN-2019-18
|
refinedweb
| 149
| 77.13
|
For many years now, being able to speak a command, and have your device fulfill the request has been a needed tool for many, and a play thing for others. For the former—and this list is in no way exhaustive—you may include users such as drivers, medical professions, and those who find keyboards unusable. With technology such as Cortana, we take voice recognition to a new level.
Now, there's far more to what Cortana can offer you, other than working out what you've said. The Cortana Intelligence Suite comes to mind when dealing with data analytics. But for now, we'll look at a much simpler use, and that is starting an UWP app using Cortana.
The App
The first thing you'll need is an empty UWP app. I'm using Visual Studio 2015 to create the project, and once created, your solution may look like this…
Figure 1: The generated solution from the empty universal app template
Before we jump into doing some code, we need to turn our attention to the Voice Command Definitions. The definitions we'll create shortly are a way of you telling Cortana what to listen for to start your app; to create these definitions, you'll need an XML file added to your project.
I've added one named ExampleDefinition.xml, which is in the root of my project folder, along with App.xaml and MainPage.xaml. To get things started, let's add the root element with the XML namespace defined to make things a little easier; via IntelliSense. And the element we'll add is going to be this…
<VoiceCommands xmlns=""> </VoiceCommands>
Inside this VoiceCommands element, we're going to add a required element named CommandSet, which has some attributes we need to take note of. The first is the xml:lang attribute, which needs to be defined; and secondly the Name attribute, which is optional. Name can just be some arbitrary string, and I added my CommandSet element like so…
<VoiceCommands xmlns=""> <CommandSet xml: </CommandSet> </VoiceCommands>
Now, we can jump into the meaty stuff. The first element we come to, which is optional but must appear first if you do opt to include it, is the AppName element. This element allows you to specify a more user-friendly name for your application, which can be different from its actual name, if for example it were too long to say. This name then will be listened for when the user is speaking the command needed to start your application; but we'll come to where it fits into the command shortly. Firstly, this is what this element looks like when added to what we already have…
<VoiceCommands xmlns=""> <CommandSet xml: <AppName>Cortana Test</AppName> </CommandSet> </VoiceCommands>
There is an alternative to the AppName element, called CommandPrefix, and they are mutually exclusive. Again, this is something we'll come back to in a moment.
Before we do, the next element we need to include after AppName is an Example element that allows you to give it an example of what the user needs to speak. I have something like this…
<Example>Show photos from Gavin</Example>
At this time, you will be missing some elements that are required and therefore this definition file isn't usable. But, I'll quickly show you what the above achieves when you have installed this definition file.
When I press the Cortana Icon and speak the words 'What can I say?', Cortana shows me a list of things I can do with Cortana. And, if we look at the bottom of the list, we can see our UWP app, and the example we specified under the app name we defined. Again, this won't work for you yet, but does give you an idea of where we're going with this article.
Figure 2: Our app name and example showing on the Cortana Canvas after speaking 'What can I say?'
If you were to click one of these items in the list, you'll be presented with another list showing examples of all the commands you can use for this app. So now, let's actually define a command Cortana will listen for.
After our Example element, we need to add the Command element and give it a name.
<Command Name="show"> </Command>
The following elements to be added will be children of the command element, starting with an example. The example that we'll define as a child of the command element will appear as we described earlier; which is when you click the app after saying the words 'What can I say?'. When added, the result looks like this…
<Command Name="show"> <Example>Show photos from Gavin</Example> </Command>
Following the example element, we now add the element that we've been waiting for, and that's the ListenFor element.
My ListenFor element is as follows…
<ListenFor RequireAppName="AfterPhrase"> Show [today's] {item} from {username} </ListenFor>
…and has a few points of note we need to talk about.
The first is the RequireAppName attribute. This attribute is used in conjunction with the AppName element we defined earlier. I have entered AfterPhrase above, and this means that after the user has spoken the phrase 'Show photos from {username}', they also need to say 'on Cortana Test', where 'Cortana Test' was defined in the AppName element at the start. There are a few options, such as BeforePhrase, which speaks for itself. But also recall I mentioned that the AppName element could be replaced with CommandPrefix. The CommandPrefix element does exactly the same was what we have here; it will append 'Cortana Test' after the phrase specified in the ListenFor element, but we will not need to use the RequireAppName attribute.
Moving on, you'll notice that some of the text in the phrase is wrapped in brackets, both square and curly. We come to [today's] first, and this one is simple. Using square brackets specifies that the part of the phrase inside those brackets is optional, and may or may not be spoken by the user. The next bracket, the curly brackets, are a reference to a PhraseList or PhraseTopic element you specify. Both of these elements are not children of Command, and we'll specify them now. However, we haven't finished with the children of the Command element yet, and we'll return to it in due course.
As a direct child of CommandSet, I have these elements defined…
<PhraseList Label="item"> <Item>images</Item> <Item>photos</Item> </PhraseList> <PhraseTopic Label="username"> <Subject>Person Names</Subject> </PhraseTopic>
From our phrase specified in ListenFor, we can see that 'item' and 'username' reference these two elements. The first, 'item', is a phrase list, and as such is a hardcoded list of possible words or phrases the user can speak in a given command. For this example, I have 'images' and 'photos' set as the two possibilities the user needs to speak for this command to execute. The second is a little more complicated, and possibly more so than there is time for in this article. That said, this allows you a way to look for specific words and phrases in a less hardcoded fashion than a phrase list provides. The children of the phrase topic, the Subject elements, provide a given set of options, such as 'Person Names' to help you refine and constrain the possible words or phrase a user might say.
Going back to the children of the CommandSet as promised, let's look at the last two items.
<Feedback>Showing {item} from {username}</Feedback> <Navigate/>
We have Feedback and Navigate. The former is the feedback Cortana will give to the user if the command is successful. For this occasion, I've specified I want Cortana to show or say something like 'Showing images from Mike'. As a note of caution: If you reference a phrase list or phrase topic in your feedback, you must include the same references in your ListenFor.
It is also worth knowing that you can't combine brackets in the phrase given to Cortana to listen for, or to feedback to the user. For example, [{username}] is not valid.
Now, finally, we come to our last element, which is Navigate. This has a very simple function, which is to point the command towards a specific page in your application. I've left this element blank for now.
Let's take a look at the full voice command definition XML file…
<?xml version="1.0" encoding="utf-8" ?> <VoiceCommands xmlns=""> <CommandSet xml: <AppName>Cortana Test</AppName> <Example>Show photos from Gavin</Example> <Command Name="show"> <Example>Show photos from Gavin</Example> <ListenFor RequireAppName="AfterPhrase"> Show [today's] {item} from {username} </ListenFor> <Feedback>Showing {item} from {username}</Feedback> <Navigate/> </Command> <PhraseList Label="item"> <Item>images</Item> <Item>photos</Item> </PhraseList> <PhraseTopic Label="username"> <Subject>Person Names</Subject> </PhraseTopic> </CommandSet> </VoiceCommands>
Giving the Voice Commands Definition to Cortana
Installing the definitions into Cortana is something we'll do on application start. Open your app.xaml.cs, and create an async void method—because the method to install the definitions is an asynchronous one and awaitable—with the following code added…
async void RegisterExampleCommands() { var voiceDefinitions = await Package.Current.InstalledLocation. GetFileAsync("ExampleDefinition.xml"); await Windows.ApplicationModel.VoiceCommands. VoiceCommandDefinitionManager. InstallCommandDefinitionsFromStorageFileAsync (voiceDefinitions); }
Then, call such a method from the constructor of the App class, run the app, and this should install your definitions to Cortana. If you now close your app down, you can use your voice command to start up your app. You also can say the command 'What can I say?' to Cortana, and look in the results to verify if your definitions were indeed installed.
Feedback from Cortana Given to Your App
When Cortana starts your app, your app starts in a specific way; and not via the normal route you would expect when you click the icon. If you go into your app.xaml.cs, you need to override the OnActivated method, like so…
protected override void OnActivated(IActivatedEventArgs args) { }
And the first thing we need to do is determine how the application was started. There are many ways the application could be started, which include—but not limited to, Search, File, Contact, and Wallet Action. But, the one we are interested in is VoiceCommand. Let's look for this type of activation with the following code in our OnActivated method.
if (args.Kind == ActivationKind.VoiceCommand) { }
Once you know that the application was started via a voice command, you know you can get at the data Cortana will give you. Consider the following code…
if (args.Kind == ActivationKind.VoiceCommand) { var commandArgs = args as VoiceCommandActivatedEventArgs; var speechResult = commandArgs.Result; var username = speechResult.SemanticInterpretation. Properties["username"]; }
From the above, we can grab hold of the username that the user spoke for the voice command. Be sure to match the key to select the item you want to the label given to your PhraseTopic or PhraseList.
I'll leave to your good self to explore, because the event args passed in a voice command offer quite a lot.
I find this is quite a fun thing to play with, but at the same time, there are practical applications of this tech that can be rewarding. My own experience comes from building software for the manufacturing industry, and it's certainly helped.
If you have any questions on this article, you can find me on Twitter @GLanata.
|
https://mobile.codeguru.com/columns/dotnet/how-to-start-a-universal-windows-platform-uwp-app-using-cortana.html
|
CC-MAIN-2019-18
|
refinedweb
| 1,908
| 59.43
|
Number of keywords in header
[nkeys,morekeys] = fits.getHdrSpace(fptr)
[nkeys,morekeys] = fits.getHdrSpace(fptr) returns
the number of existing keywords (not counting the END keyword) and
the amount of space currently available for more keywords. It returns
morekeys
= -1 if the header has not yet been closed. Note that the
CFITSIO library will dynamically add space if required when writing
new keywords to a header so in practice there is no limit to the number
of keywords that can be added to a header.
This function corresponds to the
fits_get_hdrspace
(ffghsp) function in the CFITSIO library C API.
import matlab.io.* fptr = fits.openFile('tst0012.fits'); [nkeys,morekeys] = fits.getHdrSpace(fptr); fits.closeFile(fptr);
|
https://www.mathworks.com/help/matlab/ref/matlab.io.fits.gethdrspace.html
|
CC-MAIN-2019-13
|
refinedweb
| 115
| 56.96
|
I got a ton of comments on my post on the new project templates, and rather than try
to answer them in the comments, I thought I 'd answer them here, so they're at least
somewhat coherent to read.
Oh, and thanks for all the comments.
One more note: Some of the things that I did in the console app do not apply to all
apps. For example, if you're writing a class library, you're going to get /// comments
in front of your class and constructor, because I'm going to assume that you want
somebody to actually use the library at some point.
Dude, the tabs are best set at 4
The templates are all defined with hard tabs, but when they're converted to real code,
they'll get massaged into what you want (if you have the option set in Tools->Options).
You should leave the command-line arguments in there
I think we can agree that experienced programmers will know what to do here, so it's
mostly an for inexperienced programmers. An important argument for not having them
is that if they are there, when you are teaching programming, if they are there, you
have to explain what they are. Since you haven't talked about what a string variable
or an array is, you would prefer to put off those sort of discussions until later.
That's pretty much verbatim a request I got from a professor who teaches C#.
I'm a big fan of the building-block approach - which you already know if you've read
my book or columns.
Having to talk about things before you want to is a problem.
This is not an easy decision to make, but in this case, I think simplicity is
the more important concern. I will, however, take your comments under advisement.
The namespace should stay
Namespaces are mostly about organizing your code into useful components. But in a
console app, that really doesn't apply, since it is very unlikely to be used by other
applications. The namespace just adds complexity that you don't need.
One other comment was along the lines of "I didn't know you could do that". Namespaces
are just a convenient way to give classes and other types long, hierarchical names.
The comments / TODOs should stay
I think there are some comments that can be useful - for example, the SQL templates
will have comments that explain what you need to do to implement an aggregate function
in C#. Useful guidance there is great.
But comments are not always useful, and the ones we had in the console app fall into
the "not useful" bucket. They don't help novices because novices don't know what "entry
points are". I guess you can argue that the TODO is helpful because it guides the
novice on where to start, but the difficulty in finding the right place to put your
code is minimal in comparison to the difficulty in figuring out how to actually write
code that works.
If you consistently have trouble finding Main(), I respectfully submit that C# programming
may not be for you.
Credit to the team for removing the cruft but, hey, they put it in there too.
Absolutely. Our bad.
Editing and Customizing Templates
The way in which project templates are stored is more than a little ugly, and it's
not terribly obvious what file you should edit. This is deliberate. In previous versions,
there were concerns about improper modifications to the project templates causing
problems, so we made it cryptic to make this a less likely occurence.
Any takers on that? Anyone?
No, the truth is that we didn't spend a lot of time thinking about how users might
edit or extend the templates, and therefore, not surprisingly, it's not very approachable.
In the long term, we're planning on fixing that. "Long term" is code for "don't be
surprised if it doesn't show up in the next version".
In the short term, I'm hoping on writing - or getting somebody else to write - something
that explains how the templates work, how you can modify existing templates, and how
to create new ones.
How much time do templates save?
In the case of the console template, it's true that you can write that much content
fairly quickly. But the template also gets you a live project that has the source
file in it. Compared to creating an empty project and adding a new class and then
adding a Main() to that class (or adding an empty file and typing the text), and then
setting the project properties to get what you want, there is some benefit there.
"If you consistently have trouble finding Main(), I respectfully submit that C# programming may not be for you." <– This is the funniest thing I have read all day.
One point to the XML doc comments: These are not only used to give guidance to someone using your library (as you seem to assume) but you might also use them for your internal documentation of your system. And in that case you do of course need them on every single class in your project. But then: Still the right decision to put them out by default, since if a team uses that approach, they can put them back in by hand. Two thumbs up for the changes!
FWIW, my book, has a whole chapter on how to customize the existing templates, and add new ones. So until there’s an official Microsoft document on how to do this, I’d obviously be more than happy for you to refer people to the book. 🙂 (Mastering Visual Studio .NET, from O’Reilly.)
I’m really pleased that you’re simplifying the templates. I was laughing the other day at a the XML conference in Boston how the first thing that every presenter did was delete the ///’s and //TODO’s, which is exactly what I do. In any case, I’d really like to see you make this customizable, based on an external template file. Then if people don’t like simple they can complicate it again.
Hi id like to register, how can i do so?
My personal opinion that this is not the best way to spend time
<a href=>worst biggest</a>
Good work. Keep it up.
regards
-jokes
Helpful For MBA Fans.
|
https://blogs.msdn.microsoft.com/ericgu/2003/08/19/comments-on-new-project-templates/
|
CC-MAIN-2017-51
|
refinedweb
| 1,076
| 69.21
|
Genotypic variation in drought stress response and subsequent recovery of wheat (Triticum aestivum L.)..
- [Show abstract] [Hide abstract]
ABSTRACT:.The Scientific World Journal 01/2013; 2013:548246. · 1.73 Impact Factor
Article: Comparative Physiological and Proteomic Analyses of Poplar (Populus yunnanensis) Plantlets Exposed to High Temperature and Drought.[Show abstract] [Hide abstract]
ABSTRACT:.PLoS ONE 01/2014; 9(9):e107605. · 3.53; · 3.66 Impact Factor
Page 1
AtOSA1, a Member of the Abc1-Like Family, as a New
Factor in Cadmium and Oxidative Stress Response1[W][OA]
Michal Jasinski2, Damien Sudre2, Gert Schansker, Maya Schellenberg, Signarbieux Constant,
Enrico Martinoia, and Lucien Bovet2,3*
University21treatment.21.
Heavy metals like Cu21, Zn21, and Mn21in trace
amounts play an essential role in many physiological
processes but can be toxic if accumulated at high
concentrations. In contrast, other heavy metals such as
Cd21and Pb21have no biological functions and can be
extremely toxic. Cadmium is a nonessential heavy
metal widespread in the environment, being an im-
portant pollutant and known to be toxic for plants not
only at the rootlevel where Cd21is taken up but also in
the aerial part. It can be transported from root to shoot
via the xylem (Salt et al., 1995; Verret et al., 2004).
Cadmium has been reported to interfere with micro-
nutrient homeostasis (Clemens, 2001; Cobbett and
Goldsbrough, 2002). It might replace Zn21in the active
site of some enzymes, resulting in the inactivation of
the enzymatic activity. Cadmium also strongly reacts
with protein thiols, potentially inactivating the corre-
sponding enzymes. To overcome this problem, cells
produce excess quantities of chelating compounds
containing thiols, such as small proteins called metal-
lothioneins (Cobbett and Goldsbrough, 2002) or pep-
tides like glutathione and phytochelatins (Clemens
et al., 2002), which limit the damage induced by Cd21.
In addition, several types of transport systems have
been shown to contribute to heavy metal resistance,
including P-type ATPases and ABC transporters. They
transport either free or ligand-bound heavy metals
across biological membranes, extruding them into the
apoplast or into the vacuole (Kim et al., 2007).
In response to heavy metals, diverse signal trans-
duction pathways are activated, including mitogen-
activated protein kinases, transcription factors, and
stress-induced proteins (Jonak et al., 2004). Our knowl-
edge concerning components of these pathways is
growing but still incomplete.
The Abc1 protein family originates from the Saccha-
romyces cerevisiae ABC1 gene, which has been isolated
as a suppressor of a cytochrome b mRNA translation
1This work was supported by the Bundesamt fu ¨r Bildung und
Wissenschaft (grant nos. 01.0599 and EU HPRNT–CT–2002–00269 to
E.M. and to L.B. under COST Action E28 [Genosylva: European
Forest Genomic Network] and COST 859 [Phytotechnologies to
promote sustainable land use management and improve food chain
safety]). M.J. was a Marie Curie fellow (HPRN–CT–2002–00269).
2These authors contributed equally to the article.
3Present address: Philip Morris Products S.A., PMI Research &
Development, Quai Jeanrenaud 56, 2000 Neucha ˆtel, Switzerland.
* Corresponding author; e-mail lucien.bovet@pmintl.com.
The author responsible for distribution of materials integral to the
findings presented in this article in accordance with the policy
described in the Instructions for Authors () is:
Lucien Bovet (lucien.bovet@pmintl.com).
[W]The online version of this article contains Web-only data.
[OA]Open Access articles can be viewed online without a sub-
scription.
Plant Physiology, June 2008, Vol. 147, pp. 719–731, ? 2008 American Society of Plant Biologists 719
Page 2
defect (Bousquet et al., 1991). The mitochondrial ABC1
in yeast was suggested to have a chaperone-like activ-
ity essential for a proper conformation of cytochrome b
complex III (Brasseur et al., 1997). However, more
recent data suggest that the ABC1 protein might be
implicated in the regulation of coenzyme Q biosyn-
thesis (Hsieh et al., 2004). The Abc1 family has also
been described as a new family of putative kinases
(Leonard et al., 1998), and it has been suggested that
the putative kinase function of Abc1-like proteins is
related to the regulation of the synthesis of ubiquinone
(Poon et al., 2000). Homologs of yeast ABC1 have
been isolated in higher eukaryotes. In Arabidopsis
(Arabidopsis thaliana), the only studied ABC1-like pro-
tein has been predicted to be localized in the mito-
chondria. It partially restored the respiratory complex
deficiency when expressed in S. cerevisiae (Cardazzo
et al., 1998). In humans, a homolog of the Abc1
proteins (CABC1) has been identified and it is possibly
involved in apoptosis (Iiizumi et al., 2002). The human
CABC1 protein has 47% and 46% similarity to ABC1 of
Arabidopsis and Schizosaccharomyces pombe, respec-
tively.
The data presented in this study suggest that the
chloroplast AtOSA1 (A. thaliana oxidative stress-
related Abc1-like protein), an Arabidopsis protein be-
longing to the Abc1 protein family, is implicated in the
plant response to oxidative stress that can be gener-
ated by Cd21, hydrogen peroxide (H2O2), and light.
Our results show that AtOSA1 is functioning differ-
entlyfromAbc1;hence, the proteinsof the Abc1 family
can fulfill diverse functions.
RESULTS
AtOSA1 Transcript Levels Change in Response to
Cadmium Exposure
The elucidation of the physiological functions of
gene products that transcript levels are up- or down-
regulated by Cd21in the model plant Arabidopsis is
of major interest to understand response of plants to
Cd21. Several transcriptomic analyses have been per-
formed using a subarray spotted with a large number
of different cDNA sequences. cDNA microarrays from
two independent experiments revealed that transcript
levels of AtOSA1 (At5g64940) were down-regulated
after the treatment with 0.2 mM CdCl2for 21 d (Fig. 1A).
The microarray data were confirmed by reverse tran-
scription (RT)-PCR using the same mRNA template
used for the microarray analyses and RNA isolated
from plants exposed to 0.5 and 1 mM CdCl2.After the
1 mM CdCl2treatment, the transcript level of AtOSA1
was up-regulated (Fig. 1B). Additionally,a time-course
experiment was carried out with 1-week-old plants
exposed to 0.5 mM CdCl2(Fig. 1C). The data showed
that AtOSA1 was up-regulated in the leaves after 5 d of
Cd21exposure, then stably expressed until day 12 and,
finally, down-regulated. In the absence of Cd21, an
increase in the expression of AtOSA1 was found to be
correlated with plant aging. The analysis of AtOSA1
transcript levels in the major plant organs of 6-week-
old flowering plants revealed that this gene is ex-
pressed particularly in leaves, but also in flowers and
slightly in stems (Fig. 1D). Under normal growth
conditions, we found only a very low level of AtOSA1
transcripts in roots. Expression of AtOSA1 is in all
likelihood related to the green tissues, because in this
experiment the flowers were not dissected and still
contained green sepals. However, we cannot exclude
that AtOSA1 is also expressed in petals, stamen, and
pistils. The data collected for the At5g64940 entry in
the digital northern program Genevestigator (www.
genevestigator.ethz.ch; Zimmermann et al., 2004) con-
firm predominant expression of AtOSA1 in leaves and
flowers and that the transcript level of AtOSA1, which
is age dependent (Fig. 1C), is also down-regulated in
the night (circadian rhythm dependencies) and senes-
cent leaves. We confirmed these two last results ex-
perimentally (data not shown).
AtOSA1 Has Homology to the Abc1-Like Protein Family
The protein sequence of AtOSA1 possesses a con-
served region of around 120 to 130 amino acids
(according to the Conserved Domain Database for
protein classification; Marchler-Bauer et al., 2005) that
is characteristic for the so-called Abc1 protein family
(Fig. 2; Supplemental Fig. S1). Using the Conserved
Domain Database search engine at the National Center
for Biotechnology Information (Marchler-Bauer et al.,
2003), putative kinase domains were detected within
the AtOSA1 protein sequence (Fig. 2). Similar domains
Figure 1. AtOSA1 gene expression in Arabidopsis. A, Analysis of the
transcript levels of AtOSA1 in leaves after exposure to 0.2 mM CdCl2for
3 weeks under hydroponic growth conditions using cDNA spotted
arrays. The data presented show the 1Cd to 2Cd ratio obtained from
spotted array replicates. B, Confirmation of the chip data and cadmium
dose-dependent experimentusing semiquantitative RT-PCR (35 cycles).
C, Time-dependent (days) regulation of AtOSA1 in leaves of Arabidop-
sis in the presence (1) or absence (2) of 0.5 mM CdCl2. D, RT-PCR
analysis of AtOSA1 in plant organs: leaf (L), root (R), flower (F), stem
(St), and silique (Si).
Jasinski et al.
720Plant Physiol. Vol. 147, 2008
Page 3
were found in phosphoinositide 4-kinase and Mn21-
dependent Ser/Thr protein kinase.
The hydropathy plot made with TMpred (Hofmann
and Stoffel, 1993) revealed the presence of two trans-
membrane spans within the C-terminal part of AtOSA1
(Supplemental Fig. S1). Similar results were obtained
usingtheDAStransmembranepredictionserver(http://;miklos/DAS/).
ThemembersofAbc1proteinfamilyhavebeenidenti-
fied in both pro- and eukaryota, for example, AarF from
Escherichia coli (Macinga et al., 1998) and ABC1 from
yeast (Bousquet et al., 1991). It is worth emphasizing
Figure 2. Schematic illustration of the AtOSA1 protein topology. Identified domains are depicted as follows: white box,
chloroplast targeting peptide; black box, ABC1; horizontal stripe box, Mn21-dependent Ser/Thr protein kinase (S/T K); dotted
box, phosphoinositide 4-kinase (PI4K); and shaded barrel, a region with predicted transmembrane spans.
Figure 3. Phylogenetic tree of Arabidopsis Abc1 proteins (accession nos. according to TAIR): rice
Os02g0575500 and Os09g0250700, S. cerevisiae ABC1 (CAA41759) and YLR253W, Ostreococcus tauri Q00Y32, Croco-
sphaera watsonii ZP_00517317, Trichodesmium erythraeum YP_722994, Anabaena variabilis YP_323883, Nostoc
NP4885555, and Synechocystis P73627. Protein sequences were aligned using the program DIALIGN (Morgenstern, 2004)
and the phylogenetic tree was drawn with the TreeView32 software (). Scale
bar indicates distance values of 0.01 substitutions per site.
AtOSA1 Responds to Cadmium, Oxidative Stress, and Light
Plant Physiol. Vol. 147, 2008721
Page 4
that the Abc1 protein family is not related to ABC
transporters despite the fact that AtOSA1 has been
previously described as an ABC transporter belonging
to the ATH (ABC2) subfamily (Sanchez Fernandez
et al., 2001). AtOSA1 does not possess any typical
features, including, for instance, the most characteristic
sequence of ABC transporters known as signature mo-
tif [LIVMFY]S[SG]GX3[RKA][LIVMYA]X[LIVFM][AG]
(Bairoch, 1992).
InArabidopsis,the
(At4g01660) studied so far has been predicted to be
localized in mitochondria and can partially restore the
respiratory complex deficiency when expressed in S.
cerevisiae (Cardazzo et al., 1998). This protein has 32%
amino acid identity with AtOSA1. The Arabidopsis
genome contains 17 putative Abc1-like genes. Based
on the aligned translated products, a phylogenetic tree
has been drawn (Fig. 3). The closest Arabidopsis
homolog is At3g07700, which shares 45% amino acid
identity with AtOSA1. To date, nothing is known
about the localization and potential function of both
gene products, although the expression of an apparent
homolog of At3g07700 in Brassica juncea (DT317667)
has also been found to be regulated by cadmium
(Fusco et al., 2005). Two translated gene products from
soleABC1-likeprotein
rice (Oryza sativa), Os02g0575500 and Os09g0250700,
share high homologies with AtOSA1. In prokaryotes,
the closest homologs of AtOSA1 are the members of
the Abc1 family found in different cyanobacteria like
Nostoc (NP_4885555) and Synechocystis sp. (P73627),
sharing, respectively, 45% and 44% identity at the
amino acid level. Prokaryotic Abc1 proteins also have
been detected in E. coli and Clostridium. Interestingly,
these organisms lack complex III (Trumpower, 1990;
Unden and Bongaerts, 1997), suggesting that the pos-
sible function for Abc1-like proteins may not be
exclusively linked to the transfer of electrons in mem-
branes.
Identification of the Abc1 domain within the AtOSA1
sequence prompted us to determine the functional
homology of AtOSA1 with Abc1 proteins. For this
purpose, we used the yeast S. cerevisiae deletion mu-
tant W303-1A abc1THIS3 deficient in the endogenous
ABC1 activity (Hsieh et al., 2004). Deletion of the ABC1
gene in yeast disturbs the function of the respiratory
chain and prevents growth of this mutant strain on
media containing nonfermentable carbon sources such
as glycerol (Bousquet et al., 1991). The expression of
the entire AtOSA1 gene, including its targeting pre-
sequence in the W303-1A abc1THIS3 strain, did not
restore growth of this mutant on glycerol-containing
media. Neither AtOSA1-EYFP nor AtOSA1 restored
growth. As a control, the growth of the same strain
was restored after complementation with yeast ABC1
gene (Fig. 4, A and B), suggesting functional diver-
gence between AtOSA1 and the yeast ABC1. We
Figure 4. Complementation test of the S. cerevisiae mutant W303-1A
abc1THIS3. Yeast strains S. cerevisiae ABC1 (pRS316 harboring S.
cerevisiae ABC1), pNEV (vector only), AtOSA1 (pNEV harboring
AtOSA1), and AtOSA1YFP (pNEV harboring AtOSA1 with YFP) were
streaked on plates containing minimal medium lacking uracil for
selection (A) and onto minimal medium containing glycerol as a sole
carbon source (B). Plates were incubated for 4 d at 28?C. C, Superpo-
sition of a confocal and a bright field image of W303-1A abc1THIS3
expressing AtOSA1YFP. D, Superposition of a confocal image and a
bright field image of the same cell stained with Rhodamine HexylB.
Figure 5. Confocal laser scanning microscopic analysis of an Arabi-
dopsis suspension cell culture transiently expressing EYFP-tagged
AtOSA1 (A) and Tic110-GFP (B) and the corresponding bright field
images (C and D).
Jasinski et al.
722Plant Physiol. Vol. 147, 2008
Page 5
included the targeting presequence, because chloro-
plast proteins tend to be targeted to the mitochondria
when expressedinfungalcells(Pfalleret al.,1989;Brink
et al., 1994). We monitored the expression of AtOSA1-
EYFP by confocal microscopy. The signal emitted by
the strains expressing TPAtOSA1-YFP (Fig. 4C) was
similar to that of the Rhodamine HexylB used for
stainingmitochondria(Fig.4D),confirming localization
of AtOSA1 in yeast mitochondria or mitochondrion in
the presence of the chloroplast targeting presequence.
AtOSA1 Is Localized in the Chloroplast
Sequence analysis of the AtOSA1 protein with
Target P (;
Emanuelsson et al., 2000), used for proteomic analyses
and theoretical predictions of protein localization (Koo
and Ohlrogge, 2002; Peltier et al., 2002), revealed the
presence of a 28-amino acid N-terminal chloroplast
targeting presequence (Supplemental Fig. S1). Both
rice sequences Os02g0575500 and Os09g0250700 also
have such putative chloroplast transit peptide regions
of56and39aminoacids,respectively.Toverifyits sub-
cellular localization, AtOSA1 was fused (C terminal)
with EYFP and transiently expressed under the control
of the cauliflower mosaic virus 35S promoter in
Arabidopsis suspension cell culture (Fig. 5A). The
signal was visualized by confocal microscopy. The
observed localization was identical with that obtained
for the Tic110-GFP (Fig. 5B), an integral inner envelope
membrane protein of the chloroplast import machinery
(Inaba et al., 2003). Our results confirm in silico and
proteomicdata,suggestingalocalizationoftheAtOSA1
protein in the chloroplast envelope of Arabidopsis
(Froehlich et al., 2003).
Cadmium Effect on AtOSA1 Mutants
The identification of mutants for AtOSA1 was pos-
sible from T-DNA insertion lines of the SALK Institute
(SALK 045739) and GABI Kat (GABI, 132G06). To find
the homozygote lines for both mutants, we screened
the F3-F4 generation by PCR using RP, LP, and LB
T-DNA primers designed by SIGnAL T-DNA Express
(). The mutants were named
atosa1-1 (SALK 045739) and atosa1-2 (GABI 132G06),
respectively (Supplemental Fig. S2). In both mutants,
T-DNA insertions are located toward the 3# end,
thereby excluding the presence of a membrane anchor
in case truncated transcripts are translated (Supple-
Figure 6. Cadmium tolerance and accumulation in atosa1. A, Determination of cadmium accumulation by AAS in 10 seedlings
exposed to 0 (one-half-strength MS), 1, 10, or 20 mM CdCl2on agar plates (n 5 5). B, The effect of cadmium on root growth of
atosa1-1, atosa1-2, and Col-0 (WT) in the absence (one-half-strength MS) or presence of 20 mM CdCl2. Root length of 8-d-old
seedlings (10 , n , 20, representative results from four independent experiments). C, Determination of cadmium content in
leaves and roots in wild type (Col-0) and atosa1-1 grown under hydroponic conditions (n 5 8; mean 6 SE; t test: *, P 5 0.1; **,
P 5 0.05; ***, P 5 0.01). D, Autoradiographyof plant roots labeledwith 0.04 MBq109CdCl2in one-eighth-strength MAMI for 4 h.
E, Phenotype of atosa1 grown in the absence or presence of 0.5 mM CdCl2.
AtOSA1 Responds to Cadmium, Oxidative Stress, and Light
Plant Physiol. Vol. 147, 2008723
Page 6
mental Fig. S3). Seedlings of both mutants accumu-
lated less cadmium than the wild type at 10 and 20 mM
CdCl2(Fig. 6A). Therefore, we investigated cadmium
tolerance in AtOSA1 T-DNA insertion mutants in
1-week-old seedlings grown on bactoagar plates con-
taining 20 mM CdCl2. Interestingly,roots of atosa1-1 and
atosa1-2 mutant seedlings were longer than that of
wild-type seedlings (Fig. 6B), thereby suggesting that
root growth is less affected by cadmium toxicity in
AtOSA1 than in the wild type. Under hydroponic
culture conditions, leaves from wild-type Arabidopsis
plants took up significantly more cadmium than
atosa1-1, confirming the data obtained in seedlings
(Fig. 6C). A similar picture could be observed in the
autoradiograms from 4-week-old plants exposed to
0.04 MBq109CdCl2for 4 h (Fig. 6D), in which higher
radioactivity was detected in wild-type plants. Surpris-
ingly, despite the fact that the mutant plants took up
less cadmium, they exhibited a marked chlorotic phe-
notype when exposed to 0.5 mM CdCl2for 7 d (Fig. 6E).
Superoxide Dismutase Activity and Gene Expression in
the AtOSA1 T-DNA Mutants
Leaf chlorosis observed in the AtOSA1 T-DNA inser-
tion mutants but not in wild-type plants after cadmium
treatmentprompted us todetermine whetheratosa1-1is
more sensitive to oxidative stress than wild type and
whether some of the genes involved in reactive oxygen
species (ROS) scavenging are regulated differently in
mutants. A suitable approach to determine sensitivity
to ROS is measurement of the activity of superoxide
dismutase (SOD), an essential enzyme to attenuate
plant oxidative stress. In the first approach, we deter-
mined the overall SOD activity in the leaves of wild
type and atosa1-1 exposed or not to 1 mM CdCl2. The
AtOSA1 mutant plants showed increased SOD activity
compared to wild-type plants both in absence and
presence of Cd21. The effect was particularly marked in
the absence of Cd21treatment (Fig. 7A). To determine
whether chloroplasts also exhibit an increased SOD
activity, we isolated chloroplasts from plants grown in
the presence or absence of Cd21. The data showed that
chloroplasts isolated from the AtOSA1 deletion mutant
displayed a slight but consistently higher SOD activ-
ity compared to the wild-type chloroplasts. This effect
wasindependentofwhethertheplantswereexposedto
Cd21or not (Fig. 7B).
Transcript levels of genes (AtAPX1, At1g07890;
AtFSD1, At4g25100; AtFSD2, At5g51100) responding to
oxidativestress(Kliebensteinetal.,1998;Balletal.,2004)
were investigated. AtAPX1, AtFSD1, AtFSD2, as well as
AtOSA1 were found to be up-regulated in wild type
after 1 mM cadmium treatment. In atosa1-1, only AtFSD2
was comparatively induced by cadmium. Interestingly,
AtFSD1 was more expressed in atosa1.1 under control
conditions when compared to wild type and induction
of AtFSD2 was stronger in the mutant (Fig. 7C).
H2O2, known as an ROS inducer, reduced the
growth of the seedling roots more in the mutants
than in the wild type (Fig. 8A). The effect of H2O2was
also more pronounced in atosa1-1 leaves compared to
the wild-type leaves. Indeed, after spraying leaves of
wild-type and mutant plants with 300 mM H2O2in 0.2%
(v/v) Tween 20, we observed a rapid appearance of
necrotic spots in the mutant, already 1 d after spraying
(Fig. 8, B and C). In contrast, only a very few or no
spots were found in the wild-type plants 4 d after
spraying with H2O2(Fig. 8B). No necrotic spots were
detected when both the wild type and AtOSA1
T-DNA-inserted mutant were sprayed with 0.2%
(v/v) Tween 20 only (data not shown).
Figure 7. SOD activity and AtOSA1 expression. A,
Comparison of the total SOD activities between
Arabidopsis wild-type Col-0 (WT) and atosa1-1 under
normal growth conditions(2) and after treatmentwith
1 mM CdCl2(1; n 5 4). B, Measurement of SOD
activity in intact chloroplasts obtained from wild-type
Col-0 (WT) and the AtOSA1 T-DNA-inserted mutant
(atosa1-1) treated (1) or not (2) with 1 mM CdCl2(n 5
4; mean 6 SE; t test: *, P 5 0.1; **, P 5 0.05; ***, P 5
0.01). C, Analyses of the expression of AtOSA1,
AtAPX1, AtFSD1, and AtFSD2 in wild-type Col-0
(WT) and atosa1-1 by RT-PCR in the absence (2) or
presence(1) of1 mM CdCl2underlight superiorto 100
mmol m22s21. AtS16 was used as control (30 cycles).
Jasinski et al.
724 Plant Physiol. Vol. 147, 2008
Page 7
The Effect of Light on AtOSA1 T-DNA-Inserted Mutants
Light has a complex effect on AtOSA1 mutants
depending on light intensities. At a low light regime
(50 mmol m22s21) for 8 h during 4 weeks, the shoot
growth of atosa1-1 and atosa1-2 was significantly al-
teredcomparedto the wild type (Fig. 9, A and B). After
an additional 4 weeks of growth in the same experi-
mental conditions, leaf sizes were still different, and
based on fresh weight, chlorophyll a (Chla), chloro-
phyll b (Chlb), and carotenoid contents were higher in
the mutants compared to the wild type (Fig. 9, C and
D). Under a light regime of 120 to 150 mmol m22s21for
8 and 16 h, no visible phenotypes were found in the
AtOSA1 mutants. Surprisingly,under 16 h of high light
(350 mmol m22s21), atosa1-1 exhibited a pale-green
phenotype (Fig. 9E). In this case, the analyses of
pigments showed slightly less chlorophyll and carot-
enoids in atosa1-1 compared to the wild type (Fig. 9F).
Analysis of photosynthetic activities in terms of net
CO2assimilation rate also revealed differences between
Atosa1 mutants and the wild type depending on the
light intensities. Under higher light intensities, mutants
were more affected than the wild type (Fig. 10A). In-
creasing light intensities from 50 to 150 mmol m22s21
led to a reduction of AtOSA1, AtAPX1, AtFSD1, and
AtFSD2 transcript levels in wild-type plants. A similar
reduction of AtAPX1, AtFSD1, and AtFSD2 could be
observed in the atosa1-1 mutant, but this effect was
visible only under higher light intensities (Fig. 10B).
No significant differences were found by the electron
microscopic analysis in chloroplast structures (stroma
lamellae, grana stacks, and envelope membranes) be-
tween the atosa1-1 and wild type. In addition, the
inductively coupled plasma mass spectrometry data
showed that the content in essential metals and heavy
metals was not changed by the AtOSA1 T-DNA inser-
tion (data not shown).
Because possible connectionsbetween Abc1 proteins,
electron transport, and ubiquinone (plastoquinone and
phylloquinone) synthesis have been postulated (Poon
et al.,2000),weperformedanalysisof electron transport
in AtOSA1 mutants. The kinetic measurements of
Chla fluorescence probing the redox state of the pri-
maryquinoneacceptorofPSII and820nmtransmission
probing the redox state of mainly plastocyanin and
P700 (reaction center chlorophylls of PSI) revealed no
differencesbetween atosa1-1andthewildtype(datanot
shown). This indicates that the electron transport func-
tioned well in atosa1-1 and that the number of oxidized
electron acceptors per chain at the beginning of the
measurement was similar to the wild type at ‘‘stan-
dard’’ light regime (120 mmol m22s21).
Detection of Protein Kinase Activities in Gelo
In addition to the Abc1 protein family, AtOSA1 con-
tains motifs found in eukaryotic-type protein kinases.
Therefore, we decided to examine protein kinase activ-
ities in the AtOSA1 mutant by in gelo phosphorylation
Figure 8. Effect of oxidative stress.
A, After treatment with 1 mM H2O2
on agar plates, root length of 8-d-old
seedlings was measured (10 , n ,
20, representative results from four
independent experiments; mean 6
SE; t test: *, P 5 0.1; **, P 5 0.05;
***, P 5 0.01). B, Five-week-old
Col-0 (WT) and atosa1-1 plants
were sprayed with 300 mM H2O2
and 0.2% (v/v) Tween 20 at day 0.
Plants were photographed at days 0,
1, and 4 following the treatment. C,
Magnification of atosa1 leaves 4 d
after treatment with H2O2.
AtOSA1 Responds to Cadmium, Oxidative Stress, and Light
Plant Physiol. Vol. 147, 2008725
Page 8
assays using myelin basic protein as a substrate.
Because we localized AtOSA1 in chloroplasts and the
proteomic analysis identified AtOSA1 in the envelope
fraction (Froehlich et al., 2003), we isolated and used
this fraction for the assay. In-gel protein kinase assay
allowed us to detect one chloroplast envelope protein
kinase of about 70 kD in the Columbia (Col-0) ecotype
of Arabidopsis (Fig. 11A). Interestingly, this labeled
band was not present in the envelope membranes
isolated from the AtOSA1 T-DNA-inserted plants. This
might indicate that the AtOSA1 mutant lacks this pro-
tein kinase. The labeled bands with a similar Mrwere
not detected in thylakoid membranes, and a more
complexphosphorylation pattern,which,however, did
not show the absence of a labeled band, was obtained
with Histone III-S as a substrate (data not shown). The
envelope protein profile after Coomassie Blue staining
of the SDS gel did not show marked differences be-
tween the mutant and the wild type (Fig. 11B).
DISCUSSION
We performed microarray chip analyses to identify
genes up- and down-regulated in response to cad-
mium stress. Among the genes exhibiting an altered
transcript level in response to Cd21, we identified
AtOSA1 (At5g64940) as a member of the Abc1 family.
In Arabidopsis, 17 genes contain a typical Abc1 mo-
tif and hence constitute a small gene family. The
sole Abc1 representative described so far in plants
(At4g01660) is a homolog to the yeast ABC1 (Cardazzo
et al., 1998). Both are localized in mitochondria
(Bousquet et al., 1991; Cardazzo et al., 1998), in con-
trast to AtOSA1, which is targeted to the chloroplast
and does not subcluster with them. AtOSA1 transcript
level followed a complex kinetics in response to Cd21
during dose-dependent and time-course experiments.
In the absence of cadmium treatment, its expression in
leaves increased during the life of Arabidopsis, and it
has been reported that plant aging increases oxidative
stress in chloroplasts (Munne-Bosch and Alegre, 2002).
Two independent T-DNA insertion mutants, lacking
functional AtOSA1, exhibited a complex behavior to-
ward cadmium. Indeed, the seedling roots of AtOSA1
deletion mutants were less affected by Cd21than those
of the wild-type plants, possibly due to a reduced Cd21
uptake.
The increased cadmium tolerance of wild type com-
pared to atosa1 mutants is very likely not supported by
the direct binding of cadmium to AtOSA1. Indeed,
AtOSA1 lacks of sequence motifs containing cysteins,
involved in the binding of heavy metal ions (Zn21,
Cd21, Pb21, Co21, Cu21, Ag1, or Cu1), like CXXC and
CPC. Such motifs have been found, for example, in
members of the subclass of heavy metal-transporting
P-type ATPases (P1B-type ATPases; Eren and Argu ¨ello,
2004). In addition, AtOSA1 is likely not a heavy metal
(cadmium) transporter, because vesicles isolated from
YMK2 yeast (Klein et al., 2002) transformed with
AtOSA1 did not show any cadmium transport (data
not shown).
Figure 9. Effects of light on pigments
and shoot growth of atosa1-1 and
atosa1-2. Plants were grown at 8 h
light (50 mmol m22s21) for 4 weeks
(A) or 8 weeks (C). B, Shoot weight of
4-week-old plants (n 5 10). D, Con-
tents of Chla and Chlb and carot-
enoids of 8-week-old plants (n 5 10).
E, Plants grown at 16 h light (350
mmol m22s21) for 5 weeks. F, Con-
tents of Chla and Chlb and carot-
enoids in plants depicted in E (n 5 10;
mean 6 SE; t test: *, P 5 0.1; **, P 5
0.05; ***, P 5 0.01).
Jasinski et al.
726Plant Physiol. Vol. 147, 2008
Page 9
The pale phenotype of leaves was more pronounced
in the case of mutant plants exposed even to a low
dose of Cd21despite the fact that lack of AtOSA1
results in lower Cd21uptake rates in shoots. Such a
chlorotic phenotype of leaves was not correlated with
an elevated accumulation of cadmium and was also
observed under high light conditions. This pale phe-
notype might be a consequence of Cd21toxic effect
due to a modification of the cellular cadmium distri-
bution (Ranieri et al., 2001) and an increased Cd21
sensitivity related to an increased production of ROS
in the AtOSA1 mutants, similarly to those described in
Euglena gracilis (Watanabe and Suzuki, 2002) or yeast
(Brennan and Schiestl, 1996).
Although the mechanism of oxidative stress induc-
tion by Cd21is still obscure, Cd21can inhibit electron
transfer and induces ROS formation (Wang et al.,
2004). It has been also suggested that Cd21can inter-
fere in living cells with cellular redox reactions and
displaces or releases other active metal ions (e.g. Zn21)
from various biological complexes, thereby causing a
reduction of the capacity of the antioxidant system
(Jonak et al., 2004).
Besides cadmium, the AtOSA1 T-DNA-inserted mu-
tants actually showed a phenotype illustrated by a
reduced tolerance to H2O2and light. At 150 mmol m22
s21, we observed the same transpiration rate for wild
type, atosa1-1, and atosa1-2. Nevertheless, stomatal
conductance and CO2assimilation were higher in
wild type than in mutants (data not shown). This
observation suggests that, at this light intensity (150
mmol m22s21), transpiration occurs not only at the
stomatal level but also directly through the epidermis.
This hypothesis is supported by the experiments
showing increased sensitivity of atosa1 toward H2O2
(Fig. 8B). Indeed, it is still possible that the AtOSA1
mutation also affects the epidermal cell wall and the
cuticule. At low light intensity and period, atosa1
exhibited retardation in growth correlated with an
increase in pigment production (Chla, Chlb, and ca-
rotenoids). Under higher light intensity and period, a
pale-green phenotype correlated with a decrease in
pigment contents when compared with the wild type.
In addition, changes of light intensities influenced
photosynthetic activities. These data suggest partici-
pation of the chloroplast AtOSA1 in light-generated
stress (ROS) and pigment response.
Obtained results suggest that AtOSA1 mutants have
a hypersensitivity to broad abiotic stresses, including
photooxidative stress. RT-PCR analyses in atosa1
plants showed different behavior for transcripts of
genes responding to oxidative stress. For instance, it
was shown that AtFSD1 transcript in Arabidopsis is
high at 60 mmol m22s21and then down-regulated
under increasing light fluences (Kliebenstein et al.,
1998). A similar tendency could be observed for atosa1
but under higher light intensity. The lack of AtOSA1
caused a global shift under increasing light conditions.
Figure 10. Effect of light intensity on gas exchange and expression of
oxidative stress-related genes. Analyses of CO2assimilation rate (A) of
Col-0 (WT), atosa1-1, and atosa1-2. Measurements were performed in
plants grown at 8 h light at a photosynthetic photon flux density of
50, 100, or 150 mmol m22s21(n 5 10; mean 6 SE; t test: *, P 5 0.1; **,
P 5 0.05; ***, P 5 0.01). B, RT-PCR expression analysis of AtS16
(housekeeping gene), AtOSA1, AtAPX1, AtFSD1, and AtFSD2 in
plants used for the determination of gas exchange measurements (A;
28 cycles).
Figure 11. Protein kinase activity. A, Detection of protein kinase
activity in chloroplast envelope membranes isolated from leaves of
wild type and atosa1-1. The arrow indicates the position of the
phosphorylated myelin basic protein at around 70 kD in the wild
type. B, Coomassie Blue staining of the gel shown in A. For details, see
‘‘Materials and Methods.’’
AtOSA1 Responds to Cadmium, Oxidative Stress, and Light
Plant Physiol. Vol. 147, 2008 727
Page 10
This might indicate necessity to compensate increased
oxidative stress level in the mutant by the expression
of components of the antioxidantnetwork like AtAPX1
and AtFSD1 and permanent SOD activities (Ball et al.,
2004). Interestingly, the increased SOD activity detected
in the isolated chloroplasts was not enhanced by Cd21
treatments, thereby confirming the data reported by
Fornazier et al. (2002) showing that the Cd21treatment
did not enhance SOD activities, possibly by displacing
Fe21, Zn21, or Cu21required for the SOD activity. Most
presumably, these results indicate that AtOSA1 deletion
mutants permanently suffer from oxidative stress and
compensate it to a certain level under controlled growth
conditions; however, these plants are apparently not
able to do it when environmental parameters like ROS
inducers, light regime, or nutrient supply vary.
AtOSA1 is probably not directly induced by external
oxidative stress but acts in a more complex manner, for
example, as a part of a signal transduction pathway
related to oxidative stress. Indeed, the Abc1 family has
been described as a family of putative kinases (Leonard
et al., 1998), and it is possible that AtOSA1 exhibits
protein kinase activity, because the predicted molecular
mass of mature AtOSA1 (83 kD) is close to the phos-
phorylated polypeptide detected in the autoradi-
ography (approximately 70 kD) after in-gel assay. The
phosphorylated polypeptide is not present in the enve-
lope membranes derived from AtOSA1 mutant. Nev-
ertheless, we cannot exclude that the protein kinase
detected within the gel matrix is a member of a signal
transduction cascade,which isnot activein the AtOSA1
mutant. Further studies are required to elucidate the
role of this protein kinase within the chloroplast.
Based on the phylogenetic tree, cell localization, and
involvement in oxidative stress response, AtOSA1 is
rather not a functional homolog of the yeast ABC1 and
At4g01660 (Cardazzo et al., 1998). As a chloroplast
protein, AtOSA1 is more closely related to prokaryotic
Abc1 proteins from cyanobacteria like Synechocystis or
Nostoc than to those of mitochondria. These ABC1
proteins have not been characterized so far. Therefore,
our data are in agreement with the studies on evolu-
tionary relations between different Abc1 proteins,
which led to the conclusion that Abc1 proteins from
cyanobacteria and chloroplasts, on the one hand, and
from mitochondria on the other have independent
origins (Leonard et al., 1998). To date, it has been
suggested that Abc1 proteins control the biogenesis
of respiratory complexes in mitochondria. The yeast
ABC1 knockout mutants are unable to grow on glyc-
erol, making the exact molecular functions of these
proteins still a matter of debate (Do et al., 2001).
In Arabidopsis, AtOSA1 (At5g64940) clusters to-
gether with Abc1-like gene At3g07700. Interestingly, a
homolog of this gene in B. juncea is also cadmium
regulated and possibly localized in the chloroplast
(Fusco et al., 2005). Concerning the other Abc1 sequence-
related genes in Arabidopsis, four of them (At5g5200,
At4g31390, At1g79600, and At1g71810) have been re-
cently found to be localized in plastoglobules in a
proteomic study and are possibly involved in the reg-
ulation of quinine monooxygenases (Ytterberg et al.,
2006). As illustrated by the pleiotropic effect and per-
manent oxidative stress caused by deletion of AtOSA1,
despite the fact that our knowledge about Abc1-related
proteins is still scarce, our results indicate this gene
family triggers essential regulatory functions.
MATERIALS AND METHODS
cDNA Microarrays
The mRNAs were isolated as described at. Fluo-
rescent labeling of cDNAs, hybridization on homemade DNA microarray
slides spotted with ESTs and 3# end coding sequences (corresponding to
putative ABC transporter proteins [124 of 127] and other protein families), and
fluorescence analyses (Scanarray 4000) were performed as described by Bovet
et al. (2005).
Semiquantitative PCR
For semiquantitative RT-PCR, the housekeeping genes AtACT2 (actin;
At3g18780) and AtS16 (At5g18380) were amplified using the primers actin2-S
(5#-TGGAATCCACGAGACAACCTA-3#) and actin2-AS (5#-TTCTGTGAAC-
GATTCCTGGAC-3#) and S16-S (GGCGACTCAACCAGCTACTGA) and S16-AS
(CGGTAACTCTTCTGGTAACGA), respectively. For the ascorbate peroxidase
1 (AtAPX1) gene (At1g07890), Fe-SOD 1 (AtFSD1) gene (At4g25100), and Fe-SOD
2 (AtFSD2) gene (At5g51100), we designed the following primers: APX1-S
(5#-GCATGGACATCAAACCCTCTA-3#) and APX1-AS (5#-TTAAGCATCAGC-
AAACCCAAG-3#); FSD1-S (5#-GGAGGAAAACCATCAGGAGAG-3#) and
FSD1-AS (5#-TCCCAGACATCAATGGTAAGC-3#); and FSD2-S (5#-CCACTCC-
CTCGTCTCTCTTG-3#) and FSD2-AS (5#-CCACCTCCAGGTTGGATAGA-3#).
The primers for AtOSA1 were AtOSA1-S (5#-GACAGGCAATCACAAG-
CATTC-3#) and AtOSA1-AS (5#-CGATTAGAACTTGGAGGCTGA-3#), respec-
tively. For the selection of the atosa1-1 T-DNA insertion homozygote lines (SALK
045739), the primers were: RP (5#-AACGCGTTGAAATGCCCTCTC-3#), LP
(5#-CTTGCTTCTTATCCATCGAGC-3#),andLBT-DNA(5#-GCGTGGACCGCTT-
GCTGCAACT-3#). For the selection of the atosa1-2 T-DNA insertion homozygote
lines (GABI 132G06), the primers were: RP (5#-TTTGTTGGAGGCATTTTA-
TGG-3#), LP (5#-GAATGCTTGTGATTGCCTGTC-3#), and LB T-DNA (5#-ATTTG-
GACGTGAATGTAGACA-3#). The primers for the verification of truncated
transcript were: 1-S (5#-AATCGCCGGGATCTTCTTAC-3#) and 1-AS (5#-TTGT-
CACTTCCTCCGTTTCC-3#), 2-S (5#-TTTGTTGGAGGCATTTTATGG-3#) and
2-AS (5#-AACGCGTTGAAATGCCCTCTC-3#), and 3-S (5#-GACAGGCAATCA-
CAAGCATTC-3#) and 3-AS (5#-CGATTAGAACTTGGAGGCTGA-3#). The PCR
reactions were performed in a final volume of 25 mL containing the following
mixture: PCR buffer, 0.2 mM dNTPs, 0.5 mM of both 5# and 3# primers, 1 unit Taq
DNA polymerase (Promega), andadjustedamounts of cDNA. DNAwas isolated
using NUCLOSPIN plant (Macherey-Nagel). Total RNA was purified from the
plants using the RNeasy Plant Mini kit (Qiagen) and stored at 280?C following
quantification by spectrophotometry. After DNAse treatment (DNase, RQ1,
RNase free, Promega), cDNAs were prepared using Moloney murine leukemia
virus reverse transcriptase, RNaseH minus, point mutant (Promega) as indicated
bythe manufacturer andstoredat220?C.cDNAswere dilutedapproximately10
times for the PCR reaction. After denaturation at 95?C for 3 min, 35 PCR cycles
(94?C for 45 s, 58?C for 45 s, and 72?C for 1 min) were run.
Complementation of Yeast
For complementation of W303-1A abc1THIS3 (Hsieh et al., 2004) deficient in
the endogenous Abc1 gene, we used AtOSA1 sequence with the chloroplast
targeting presequence. Two constructs were tested with and without EYFP. The
construct with EYFP was obtained by recloning of AtOSA1-EYFP from pRT
vector into pNEV (Sauer and Stolz, 1994) via NOTI site. The construct without
YFP and with targeting presequence was obtained by PCR (5NOTAtOSA1-S,
5#-TGCTACCGGTGCGGCCGCATGGCGACTTCTTCTTCTTCATCG-3#; and
3#-NOTAtOSA1-AS, 5#-ATAAGAATGCGGCCGCTTAAGCTGTTCCAGTGATT-
AGTTTTTCC-3#) using pRT-AtOSA1-EYFP as a template. PCR product was
sequenced to avoid errors. Yeast transformation was performed using standard
Jasinski et al.
728Plant Physiol. Vol. 147, 2008
Page 11
protocols. Transformants were growing on the synthetic dextrose medium (2%
[w/v] Glc,0.7%[w/v] yeastnitrogenbase,andrequiredaminoacids)withGlcor
glycerol as a source of carbon. Cells were analyzed by confocal laser scanning
microscopy (TCS SP2 Leica).
Localization of AtOSA1
The AtOSA1 cDNAwas PCR amplified (AtOSA1-S, 5#-TGCTACCGGTGCG-
GCCGCATGGCGACTTCTTCTTCTTCATCG-3#; and AtOSA1-AS, 5#-TCGTC-
CATGGAAGCTGTTCCAGTGATTAGTTTTTCC-3#) to introduce appropriate
restriction sites and cloned into AgeI/NcoI from vector pEYFP (BD Biosciences)
to fuse it with EYFP. We used cDNA prepared as described above as a template
for the PCR. The resulting AtOSA1-EYFP was cut off by NotI and cloned into
vector pRT (U¨berlacker and Werr, 1996), resulting in pRT-AtOSA1-EYFP. The
entire gene fusion product was sequenced to verify the absence of PCR errors.
The Tic110-GFP construct was kindly provided by F. Kessler, University of
Neuchatel.
Arabidopsis (Arabidopsis thaliana) suspension cell cultures were grown as
described in Millar et al. (2001). Three days after culture dilution, the cells
weretransferred onto solid medium, and 48 h later the plants were transfected
with appropriate constructs using a particle inflow gun (PDS1000He; Bio-Rad)
with 0.6-mm particles and 1,300 psi pressure. The transfected Arabidopsis cells
were analyzed by confocal laser scanning microscopy (TCS SP2 Leica) 24 and
48 h after bombardment.
Chloroplast and Envelope Membrane Preparation
First, the mesophyll protoplasts were prepared from leaves according to
the protocol described in Cosio et al. (2004) and subsequently, the intact
chloroplasts were obtained according to the method of Fitzpatrick and
Keegstra (2001). The collected protoplast pellet was resuspended briefly in
300 mM sorbitol, 20 mM Tricine-KOH, pH 8.4, 10 mM EDTA, 10 mM NaHCO3,
0.1% (w/v) bovine serum albumin, and forced twice through 20- and 11-mm
nylon mesh. Released chloroplasts were immediately purified on an 85%/45%
(v/v) Percoll gradient and collected by centrifugation at 250g. The chloroplast
envelope membranes were isolated from purified chloroplasts as described by
Froehlich et al. (2003).
Plant Growth
Arabidopsis (Col-0) called above wild-type and AtOSA1 T-DNA-inserted
mutant (SALK 045739, GABI 132G06) plants were grown on soil in a growth
chamber (8-h-light period, 22?C;16-h-darkperiod, 21?C;70% relative humidity)
and at a light intensity of 140 to 160 mmol m22s21.
For sterile growth after sterilization, the seeds (approximately 20) were
placed on 0.8% (w/v) agar plates containing one-half-strength Murashige and
Skoog (MS; Duchefa) or MAMI and 1% (w/v) Suc. MAMI medium is: KH2PO4
(200 mg/L); MgSO4.7H2O (187.5 mg/L); Ca(NO3).4H2O (79.25 mg/L); KNO3
(22 mg/L); Fe-EDDHA sequestren (17.5 mg/L); MnCl2.4H2O (48.75 mg/L);
H3BO3(76.25 mg/L); ZnSO4.7H2O (12.25 mg/L); CuSO4.5H2O (6.875 mg/L);
NaNoO4.2H2O (12.5 mg/L); and Ni(NO3)2.6H2O (3.75 mg/L). The plates were
stored at 4?C for 24 h for synchronization of seed germination and then placed
vertically in the phytotron (25?C, 16 h light, and 70% humidity) at light
intensity of 80 to 120 mmol m22s21. For treatments, seeds were germinated
and grown vertically on one-half-strength MS bactoagar plates in the presence
or absence of 1, 10, or 20 mM CdCl2or 1 mM H2O2at 16 h light for 7 d.
Hydroponic culture: Seeds were first germinated and grown vertically on one-
half-strength MAMI bactoagar plates at 8 h light for 2 weeks. Seedlings were
then transferred in one-half-strength MAMI liquid medium under the same
growth conditions for 2 weeks. Plants were finally cultivated for an additional
3 weeks in the presence or absence of 0.2, 0.5, or 1 mM CdCl2in one-half-
strength MAMI. Cadmium was desorbed after 10 min of root incubation in
1 mM CaCl2cold solution. The cadmium content was determined by atomic
absorption spectroscopy (AAS) in shoots and roots.
Plant Labeling
The plants were root labeled with 0.04 MBq
strength MAMI for 4 h. After washing with cold distilled water, plants were
grown in one-half-strength MS for an additional 3 d, dried, and subjected to
autoradiography.
109CdCl2in one-eighth-
Determination of SOD Activity
For the SOD activity measurements without any treatment, we used
4-week-old plants grown on soil. For measurement following a Cd21applica-
tion, plants were germinated on 0.8% (w/v) agar plates containing one-half-
strength MS(Duchefa)and1% (w/v)Suc. Theplateswere stored at4?C for16h
forsynchronizationofseedgermination,thenplacedverticallyinthe phytotron
(22?C, 8 h light, and 70% humidity). Two-week-old seedlings were transferred
to liquid medium and cultivated under hydroponic conditions for 3 weeks on
MAMI medium. CdCl2was added to the medium to a final concentration of
1 mM and the samples were taken 24 h later. The activity of SOD was measured
as described by Hacisalihoglu et al. (2003). Leaves were homogenized briefly
with 50 mM HEPES buffer, pH 7.6, containing 0.1 mM Na2EDTA, 1 mm
phenylmethylsulfonyl fluoride, 1% (w/v) PEG4000, and 1% (w/v) polyvinyl-
polypyrrolidone (Sigma) and centrifuged at 14,000 rpm for 10 min at 4?C. The
supernatant was desalted on a Biospin column P6 (Bio-Rad) according to the
supplier’s protocol andusedforproteinandSODassays. Theassaydetermines
the inhibition of the photochemical reduction of nitroblue tetrazolium (NBT) as
described by Giannopolitis and Ries (1977). The 1-mL reaction mixture for the
SOD assay contained 50 mM HEPES, pH 7.6, 0.1 mM EDTA, 50 mM Na2CO3, pH
10.4,13mM Met, 75mM NBT,0.5mL ofenzymeextract, and2 mM riboflavin.The
reaction mixtures were illuminated for 15 min at 250 mmol m22s21light
intensity or kept in the dark (negative control). One unit of SOD activity was
defined as the amount of enzyme required to cause 50% inhibition of the
reduction of NBT measured at 560 nm. Protein content was determined
according to Bradford (1976) using bovine serum albumin as a standard.
Gas Exchange
Photosynthetic gas exchange measurements were performed on attached
leaves before plants flowered using an open infrared gas analyzer system
(CIRAS-1; PP-Systems). Measurements were made on plants grown at 8 h
light at a photosynthetic photon flux density of 50, 100, or 150 mmol m22s21,
and CO2concentration of 350 mmol. Leaf temperature was adjusted to the
desired level using the internal heating/cooling system of the analyzer.
Detection of Protein Kinase Activity in Gelo
The method for detecting protein kinases in gelo was adapted from Mori
and Muto (1997). Chloroplast envelope membranes were isolated from wild
type and the AtOSA1 mutant and separated by SDS-PAGE. In this experiment,
350 mg of myelin basic protein (M1891; Sigma) was used as protein kinase
substrate and incorporated in the running gel solution before polymerization.
After electrophoresis, polypeptides were renaturated for 12 h in 50 mM MOPS-
KOH, pH 7.6, changing the buffer four times during this period. The gel was
thenlabeledwith1.6MBq[g-32P]ATP(AA0068;Amersham-Bioscience)in5mL
of 50 mM MOPS-KOH, pH 7.6, 10 mM MgCl2, 0.5 mM CaCl2for 3 h following a
45-min preincubation in the same buffer without the labeled ATP. The gel was
then rapidly washed with deionized water and incubated in 100 mL of 50 mM
MOPS-KOH, pH 7.6, containing 10 g of a strong basic anion exchanger
(AmberlitIRA-410; Sigma)for 3 h. Theremoval of unbound32P was terminated
by incubation of the gel in 50 mM MOPS-KOH, pH 7.6, supplemented with 1%
(w/v) sodium pyrophosphate for 3 h. The polypeptides were then fixed in the
gel in 10% (v/v) 2-propanol, 5% (v/v) acetic acid, and 1% (w/v) sodium
pyrophosphate. The gel was finally dried and subjected to autoradiography.
Pigment Analyses
The plants were grown at 8 h light (50 mmol m22s21) for 8 weeks. From
these plants, leaf samples (50 mg) were collected and analyzed for the content
of Chla and Chlb, as well as carotenoids (n 5 10). Plants were grown at 16 h
light (350 mmol m22s21) for 5 weeks. From these plants, leaf samples (50 mg)
were collected and analyzed for the content of Chla and Chlb, as well as carot-
enoids. Pigments were measured using the method described by Pruzinska
et al. (2005).
Statistics
Each value represents the mean of n replicates. Error bars represent SE.
Significant differences from wild type as determined by Student’s t test are
indicated as follows: *, P , 0.1; **, P , 0.05; and ***, P , 0.001, respectively.
AtOSA1 Responds to Cadmium, Oxidative Stress, and Light
Plant Physiol. Vol. 147, 2008729
Page 12
Supplemental Data
The following materials are available in the online version of this article.
Supplemental Figure S1. Alignment of predicted Abc1 proteins related to
AtOSA1.
Supplemental Figure S2. AtOSA1 T-DNA insertion mutants.
Supplemental Figure S3. Verification of a truncated transcript in atosa1
mutants.
ACKNOWLEDGMENTS
We thank Prof. F. Kessler for providing us with Tic110-GFP construct, and
E. Hsieh for the kind gift of W303-1A abc1THIS3 and W303-1A strains as well
as p3HN4 plasmid. We acknowledge Ame ´lie Fragnie `re, Regis Mark, and
Esther Vogt for technical assistance; Dr. Daniel Studer, University of Bern, for
electron microscopy; Prof. Detlef Gu ¨nther, Swiss Federal Institute of Tech-
nology, for inductively coupled plasma mass spectrometry measurements;
Dr. Stefan Hortensteiner, University of Bern, for phylogenetic tree; and Prof.
Urs Feller, University of Bern, and Dr. Sonia Plaza, University of Fribourg, for
AAS measurements.
Received October 1, 2007; accepted March 20, 2008; published April 4, 2008.
LITERATURE CITED
Bairoch A (1992) PROSITE: a dictionary of sites and patterns in proteins.
Nucleic Acids Res 20: 2013–2018
Ball L, Accotto GP,–2462
Bovet L, Feller U, Martinoia E (2005) Possible involvement of plant ABC
transporters in cadmium detoxification: a c-DNA sub-microarray ap-
proach. Environ Int 31: 263–267
Bradford MM (1976) A rapid and sensitive method for the quantitation of
microgram quantities of protein utilizing the principle of protein-dye
binding. Anal Biochem 72: 248–254 15: 103–111
Brennan RJ, Schiestl RH (1996) Cadmium is an inducer of oxidative stress
in yeast. Mutat Res 356: 171–178
Brink S, Flugge UI, Chaumont F, Boutry M, Emmermann M, Schmitz U,
Becker K, Pfanner N (1994) Preproteins of chloroplast envelope inner
membrane contain targeting information for receptor-dependent import
into fungal mitochondria. J Biol Chem 269: 16478–16485
Clemens S (2001) Molecular mechanisms of plant metal tolerance and
homeostasis. Planta 212: 475–486
Clemens S, Palmgren MG, Kramer U (2002) A long way ahead: under-
standing and engineering plant metal accumulation. Trends Plant Sci 7:
309–315
Cobbett C, Goldsbrough P (2002) Phytochelatins and metallothioneins:
roles in heavy metal detoxification and homeostasis. Annu Rev Plant
Biol 53: 159–182
Emanuelsson O, Nielsen H, Brunak S, von Heijne G (2000) Predicting
subcellular localization of proteins based on their N-terminal amino
acid sequence. J Mol Biol 300: 1005–1016
Eren E, Argu ¨ello JM (2004) Arabidopsis HMA2, a divalent heavy metal-
transporting P(IB)-type ATPase, is involved in cytoplasmic Zn21ho-
meostasis. Plant Physiol 136: 3712–3723
Fitzpatrick LM, Keegstra K (2001) A method for isolating a high yield of
Arabidopsis chloroplasts capable of efficient import of precursor pro-
teins. Plant J 27: 59–65
Fornazier RF, Ferreira RR, Vitoria AP, Molina SMG, Lea PJ, Azevedo RA
(2002) Effects of cadmium on antioxidant enzyme activities in sugar
cane. Biol Plant 45: 91–97
Froehlich JE, Wilkerson CG, Ray WK, McAndrew RS, Osteryoung KW,
Gage DA, Phinney BS (2003) Proteomic study of the Arabidopsis
thaliana chloroplastic envelope membrane utilizing alternatives to
traditional two-dimensional electrophoresis. J Proteome Res 2: 413–425
Fusco N, Micheletto L, Dal Corso G, Borgato L, Furini A (2005) Identi-
fication of cadmium-regulated genes by cDNA-AFLP in the heavy metal
accumulator Brassica juncea L. J Exp Bot 56: 3017–3027
Giannopolitis CN, Ries SK (1977) Superoxide dismutases-occurrence in
higher plants. Plant Physiol 59: 309–314
Hacisalihoglu G, Hart JJ, Wang YH, Cakmak I, Kochian LV (2003) Zinc
efficiency is correlated with enhanced expression and activity of zinc-
requiring enzymes in wheat. Plant Physiol 131: 595–602
Hofmann K, Stoffel W (1993) TMbase: a database of membrane spanning
proteins segments. Biol Chem Hoppe Seyler 374: 166
Hsieh EJ, Dinoso JB, Clarke CF (2004) A tRNA(TRP) gene mediates the
suppression of cbs2-223 previously attributed to ABC1/COQ8. Biochem
Biophys Res Commun 317: 648–653
Iiizumi M, Arakawa H, Mori T, Ando A, Nakamura Y (2002) Isolation of a
novel gene, CABC1, encoding a mitochondrial protein that is highly
homologous to yeast activity of bc1 complex. Cancer Res 62: 1246–1250
Inaba T, Li M, Alvarez-Huerta M, Kessler F, Schnell DJ (2003) atTic110
functions as a scaffold for coordinating the stromal events of protein
import into chloroplasts. J Biol Chem 278: 38617–38627
Jonak C, Nakagami H, Hirt H (2004) Heavy metal stress. Activation of
distinct mitogen-activated protein kinase pathways by copper and
cadmium. Plant Physiol 136: 3276–3283
Kim DY, Bovet L, Maeshima M, Martinoia E, Lee Y (2007) The ABC
transporter AtPDR8 is a cadmium extrusion pump conferring heavy
metal resistance. Plant J 50: 207–218
Kliebenstein DJ, Monde RA, Last RL (1998) Superoxide dismutase in
Arabidopsis: an eclectic enzyme family with disparate regulation and
protein localization. Plant Physiol 118: 637–650
Koo AJK, Ohlrogge JB (2002) The predicted candidates of Arabidopsis
plastid inner envelope membrane proteins and their expression profiles.
Plant Physiol 130: 823–836
Leonard CJ, Aravind L, Koonin EV (1998) Novel families of putative
protein kinases in bacteria and archaea: evolution of the ‘‘eukaryotic’’
protein kinase superfamily. Genome Res 8: 1038–1047
Macinga DR, Cook GM, Poole RK, Rather PN (1998) Identification and
characterization of aarF, a locus required for production of ubiquinone
in Providencia stuartii and Escherichia coli and for expression of 2#-N-
acetyltransferase in P. stuartii. J Bacteriol 180: 128–135–D196: 383–387
Millar AH, Sweetlove LJ, Giege P, Leaver CJ (2001) Analysis of the
Arabidopsis mitochondrial proteome. Plant Physiol 127: 1711–1727
Morgenstern B (2004) DIALIGN: multiple DNA and protein sequence
alignment at BiBiServ. Nucleic Acids Res 32: W33–W36
Jasinski et al.
730 Plant Physiol. Vol. 147, 2008
Page 13
Mori IC, Muto S (1997) Abscisic acid activates a 48-kDa protein kinase in
guard cell protoplasts. Plant Physiol 113: 833–839
Munne-Bosch S, Alegre L (2002) Plant aging increases oxidative stress in
chloroplasts. Planta 214: 608–615 predic-
tion. Plant Cell 14: 211–236
Pfaller R, Pfanner N, Neupert W (1989) Mitochondrial protein import.
Bypass of proteinaceous surface receptors can occur with low specificity
and efficiency. J Biol Chem 264: 34–39
Poon WW, Davis DE, Ha HT, Jonassen T, Rather PN, Clarke CF (2000)
Identification of Escherichia coli ubiB, a gene required for the first mono-
oxygenase step in ubiquinone biosynthesis. J Bacteriol 182: 5139–5146
Pruzinska A, Tanner G, Aubry S, Anders I, Moser S, Mu ¨ller T, Ongania
KH, Kra ¨utler B, Youn JY, Liljegren SJ, et al (2005) Chlorophyll break-
down in senescent Arabidopsis leaves. Characterization of chlorophyll
catabolites and of chlorophyll catabolic enzymes involved in the
degreening reaction1. Plant Physiol 139: 52–63
Ranieri A, Castagna A, Baldan B, Soldatini GF (2001) Iron deficiency differ-
ently affects peroxidase isoforms in sunflower. J Exp Bot 52: 25–35
Salt DE, Prince RC, Pickering IJ, Raskin I (1995) Mechanisms of cadmium
mobility and accumulation in Indian mustard. Plant Physiol 109: 1427–1433
Sanchez-Fernandez R, Davies TG, Coleman JO, Rea PA (2001) The
Arabidopsis thaliana ABC protein superfamily, a complete inventory.
J Biol Chem 276: 30231–30244
Sauer N, Stolz J (1994) SUC1 and SUC2: two sucrose transporters from
Arabidopsis thaliana; expression and characterization in baker’s yeast
and identification of the histidine-tagged protein. Plant J 6: 67–77
Trumpower BL (1990) Cytochrome bc1 complexes of microorganisms.
Microbiol Rev 54: 101–129
U¨berlacker B, Werr W (1996) Vectors with rare-cutter restriction enzyme
sites for expression of open reading frames in transgenic plants. Mol
Breed 2: 293–295
Unden G, Bongaerts J (1997) Alternative respiratory pathways of Esche-
richia coli: energetics and transcriptional regulation in response to
electron acceptors. Biochim Biophys Acta 1320: 217–234 toler-
ance. FEBS Lett 576: 306–312
Wang Y, Fang J, Leonard SS, Rao KMK (2004) Cadmium inhibits the
electron transfer chain and induces reactive oxygen species. Free Radic
Biol Med 36: 1434–1443
Watanabe M, Suzuki T (2002) Involvement of reactive oxygen stress in
cadmium-induced cellular damage in Euglena gracilis. Comp Biochem
Physiol 131: 491–500
Ytterberg AJ, Peltier JB, van Wijk KJ (2006) Protein profiling of plasto-
globules in chloroplasts and chromoplasts. A surprising site for differ-
ential accumulation of metabolic enzymes. Plant Physiol 140: 984–997
Zimmermann P, Hirsch-Hoffmann M, Hennig L, Gruissem W (2004)
GENEVESTIGATOR. Arabidopsis microarray database and analysis
toolbox. Plant Physiol 136: 2621–2632
AtOSA1 Responds to Cadmium, Oxidative Stress, and Light
Plant Physiol. Vol. 147, 2008731
|
http://www.researchgate.net/publication/44630763_Genotypic_variation_in_drought_stress_response_and_subsequent_recovery_of_wheat_(Triticum_aestivum_L.)
|
CC-MAIN-2014-42
|
refinedweb
| 9,187
| 54.63
|
/* These projects can be done in both dotnetfiddle.net and VS Code, but for the sake of immediacy, examples using DotNetFiddle have been embedded.*/
void UsingStatements()
{
Every file in C# uses
using statements at the top of the page to import types and classes from other sources. If you try to copy something from one of these projects and get an error, check to see if there is a using statement you need to add. Often, the IDE will tell you what to add. For starters, you always need
using System;
}
void Variables()
{
A variable is a saved instance of a type or class object. The simplest variable declaration is
var message = "Hello World!";
Note how this is used in the example below to pass the variable content to
Console.WriteLine:
C# is intelligent enough to know that
message is of Type
string, since we assigned it to a string value. Try deleting the “Hello World” line above and replacing with
var message = 43; or
var message = true;. Notice that
true is a boolean value (true/false), but
"true" would be treated as a string, and likewise
43 is an integer (number), but
"43" would be a string. This is important to remember, because even though they can all be printed to the console, you can’t do math with a string, or ask whether a string is true or false.
}
void BasicTypes()
{
The basic types in C# are important to know, as these will represent much of the data we pass around a program. There are many types in .NET, but we will stick to the most common and useful.
}
void Math()
{
Performing math in C# is relatively straightforward, as can be seen by the example below. Try your hand at changing the formulas.
Did the last answer surprise you? Remember, all of our variables were declared as whole numbers, or
integers, and so the answer will also be an
integer. If you want the accurate decimal equivalent, at least one of the numbers in the division problem must be a
double. Try replacing the
7 in line 21 with
7.0. This is a common mistake for beginning programmers, so be aware.
}
void StringConcatenation()
{
There are several ways to combine strings. Concatenating or interpolating is convenient, but if adding a lot of pieces together, using a
StringBuilder is more effecient.
}
void IfElse()
{
Many times, a program must make a decision based on some user action, such as clicking a button. This is where boolean values and
If/Else logic comes into play.
}
void Loops()
{
The loop is an important concept in programming, as it allows us to call a block of code multiple times. There are
while and
do while loops, but the more useful ones are
for and
foreach:
}
|
https://tocode.software/2018/06/10/lesson-1/
|
CC-MAIN-2021-31
|
refinedweb
| 462
| 71.85
|
Python Mathematics of Arrays (MOA)
Project description
Mathematics of Arrays (MOA)
MOA is a mathematically rigorous approach to dealing with arrays that was developed by Lenore Mullins. MOA is guided by the following principles.
Everything is an array and has a shape. Scalars. Vectors. NDArray.
What is the shape of the computation at each step of the calculation?
Answering this guarentees no out of bounds indexing and a valid running program.
- What are the indicies and operations required to produce a given index in the result?
Once we have solved this step we have a minimal representation of the computation that has the Church Rosser property. Allowing us to truely compare algorithms, analyze algorithms, and finally map to algorithm to a low level implementation. For further questions see the documentation. The documentation provides the theory, implementation details, and a guide.
Important questions that will guide development:
- [X] Is a simple implementation of moa possible with only knowing the dimension?
- [X] Can we represent complex operations and einsum math: requires
+red, transpose?
- [ ] What is the interface for arrays? (shape, indexing function)
- [ ] How does one wrap pre-existing numerical routines?
Installation
pip install python-moa
Documentation
Documentation is available on
python-moa.readthedocs.org. The
documentation provides the theory, implementation details, and a
guide for development and usage of
python-moa.
Example
A few well maintained jupyter notebooks are available for experimentation with binder
Python Frontend AST Generation
from moa.frontend import LazyArray A = LazyArray(name='A', shape=(2, 3)) B = LazyArray(name='B', shape=(2, 3)) expression = ((A + B).T)[0] expression.visualize(as_text=True)
psi(Ψ) ├── Array _a2: <1> (0) └── transpose(Ø) └── + ├── Array A: <2 3> └── Array B: <2 3>
Shape Calculation
expression.visualize(stage='shape', as_text=True)
psi(Ψ): <2> ├── Array _a2: <1> (0) └── transpose(Ø): <3 2> └── +: <2 3> ├── Array A: <2 3> └── Array B: <2 3>
Reduction to DNF
expression.visualize(stage='dnf', as_text=True)
+: <2> ├── psi(Ψ): <2> │ ├── Array _a6: <2> (_i3 0) │ └── Array A: <2 3> └── psi(Ψ): <2> ├── Array _a6: <2> (_i3 0) └── Array B: <2 3>
Reduction to ONF
expression.visualize(stage='onf', as_text=True)
function: <2> (A B) -> _a17 ├── if (not ((len(B.shape) == 2) and (len(A.shape) == 2))) │ └── error arguments have invalid dimension ├── if (not ((3 == B.shape[1]) and ((2 == B.shape[0]) and ((3 == A.shape[1]) and (2 == A.shape[0]))))) │ └── error arguments have invalid shape ├── initialize: <2> _a17 └── loop: <2> _i3 └── assign: <2> ├── psi(Ψ): <2> │ ├── Array _a18: <1> (_i3) │ └── Array _a17: <2> └── +: <2> ├── psi(Ψ): <2> │ ├── Array _a6: <2> (_i3 0) │ └── Array A: <2 3> └── psi(Ψ): <2> ├── Array _a6: <2> (_i3 0) └── Array B: <2 3>
Generate Python Source
print(expression.compile(backend='python', use_numba=True))
@numba.jit def f(A, B): if (not ((len(B.shape) == 2) and (len(A.shape) == 2))): raise Exception('arguments have invalid dimension') if (not ((3 == B.shape[1]) and ((2 == B.shape[0]) and ((3 == A.shape[1]) and (2 == A.shape[0]))))): raise Exception('arguments have invalid shape') _a17 = numpy.zeros((2,)) for _i3 in range(0, 2): _a17[(_i3,)] = (A[(_i3, 0)] + B[(_i3, 0)]) return _a17
Development
Download nix. No other dependencies and all builds will be identical on Linux and OSX.
Demoing
jupyter environment
nix-shell dev.nix -A jupyter-shell
ipython environment
nix-shell dev.nix -A ipython-shell
Testing
nix-build dev.nix -A python-moa
To include benchmarks (numba, numpy, pytorch, tensorflow)
nix-build dev.nix -A python-moa --arg benchmark true
Documentation
nix-build dev.nix -A docs firefox result/index.html
Docker
nix-build moa.nix -A docker docker load < result
Development Philosophy
This is a proof of concept which should be guided by assumptions and goals.
Assumes that dimension is each operation is known. This condition with not much work can be relaxed to knowing an upper bound.
The MOA compiler is designed to be modular with clear separations: parsing, shape calculation, dnf reduction, onf reduction, and code generation.
All code is written with the idea that the logic can ported to any low level language (C for example). This means no object oriented design and using simple data structures. Dictionaries should be the highest level data structure used.
Performance is not a huge concern instead readability should be preferred. The goal of this code is to serve as documentation for beginners in MOA. Remember that tests are often great forms of documentation as well.
Runtime dependencies should be avoided. Testing (pytest, hypothesis) and Visualization (graphviz) are examples of suitable exceptions.
Contributing
Contributions are welcome! For bug reports or requests please submit an issue.
Authors
The original author is Christopher Ostrouchov. The funding that made this project possible came from Quansight LLC.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/python-moa/
|
CC-MAIN-2019-26
|
refinedweb
| 812
| 51.55
|
Hello everyone,
I was playing around with Java Swing since i wanted to learn more about it. I have 2 .java files as follows:
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
Hello everyone,
I was playing around with Java Swing since i wanted to learn more about it. I have 2 .java files as follows:
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
Hi all,
JButtons by default creates rectangular buttons. I'd like to change the shape to a circle. The following is my code:
public void createButtons(JPanel bottomPanel) {
JButton[]...
Hi all,
I'm trying to import a project into eclipse. I've imported one successfully and it shows both in the Git and Java perspective. But the second project shows only in the Git perspective but...
I finally got it. I just had to export the .jar file from eclipse itself. And then i had to extract the manifest file too.
I found the class with the main method, and converted the .java file into a .jar file...but it comes up with an error.:confused:
i want to work on a project and hence i asked. i've made simple java programs in the past but have never worked with a game. Even i was wondering how do i play it? i've got it set in eclipse, but...
i do not see any .jar files after importing from github? :S
Hi all,
I've imported a game from github into eclipse. There are no errors in the class files. I wanted to know how do I actually run the game on my computer?
Hi all,
I'd like to download an open source project from sourceforge and import it into eclipse. It has a .jar file which i have downloaded and i imported it into eclipse. But when i click on the...
:confused:
Hey everyone,
I want to create a class that would return an array of pairs that have the same length as the input array of strings.
In addition, the pair should have the first letter...
Hey guys,
The following is my code:
public class OneB {
/**
* @param args
*/
public static int[] sumDiffs(int[]a, int[]b){
Got the desired result!! i was supposed to type in System.out.println(java.util.Arrays.toString(maxEnd(new int[]{1, 2, 3})));
Thanks a lot for the help! :)
also i get the error, cannot find symbol variable nums after typing in System.out.println("[I@3487a5cc "+ java.util.Arrays.toString(nums))
could u please correct it? need it really urgent and i...
You mean: System.out.println("[I@3487a5cc "+ java.util.Arrays.toString(nums)); ? :S
i have to write a code to verify the following:
maxEnd(new int[]{1, 2, 3}) -> {3, 3, 3}
maxEnd(new int[]{1, 3}) -> {3, 3}
maxEnd(new int[]{3}) -> {3}
Here's my code:
public class OneB {...
Hello, i'd like to confirm the following:
suppose the question is: we have three int's (int a, int b, int c) and the value is true if and only if the sum of a and c is greater than b, then b is...
Hello, i'd like to know what do the following mean in java?
.split("/")
.split("\\.")
Also while iterating, for example:
[code=Java]
about the link Norm provided..i know that...i need the explanation to "test" "arrays" in a terminal.
If i just type java OneA i get the following error: Exception in thread "main" java.lang.NoSuchMethodError: main
I know it's java OneA but after that what should be written so that it can give me...
public class OneA {
public int product( int [] a ) {
int product = 1;
for ( int i = 0; i < a.length; i++ ) {
product *= a[i];
}
return product;
}
}
output at the terminal should look like : noTriples({1, 1, 2, 2, 1}) -> true
noTriples({1, 1, 2, 2, 2, 1}) -> false
Basically there should be no triples in the code. if there are any, it returns...
for (i = 0; i < numbers.length - 2; i++)
In some codes we write i<numbers.length and in some numbers.length-1 or some others. What does this piece of code actually mean?
double mean, sum = 0, sum2 = 0, var;
What does the above line mean?
Yes, i've looked up into that and the code is supposed to calculate the mean and variance and print it out on two separate lines. The mean of N data items is calculated by adding up all the items and...
public class MeanVariance {
public static void main (String[] args) {
int len = args.length;
double[] data = new double[len];
double mean, sum = 0, sum2 = 0, var;
for (int i...
|
http://www.javaprogrammingforums.com/search.php?s=8d82897b843a1b87c9246fdb2faa5712&searchid=203952
|
CC-MAIN-2016-30
|
refinedweb
| 781
| 76.82
|
import "aqwari.net/net/styx/styxproto"
Package styxproto provides low-level routines for parsing and producing 9P2000 messages.
The styxproto package is to be used for making higher-level 9P2000 libraries. The parsing routines within make very few assumptions or decisions, so that it may be used for a wide variety of higher-level packages. When decoding messages, memory usage is bounded using a fixed-size buffer. This allows servers using the styxproto package to have predictable resource usage based on the number of connections.
To minimize allocations, the styxproto package does not decode messages. Instead, messages are validated and wrapped with convenient accessor methods.
decoder.go doc.go encoder.go enum.go errors.go limits.go pack.go parse.go proto.go qid.go stat.go verify.go
const ( OREAD = 0 // open read-only OWRITE = 1 // open write-only ORDWR = 2 // open read-write OEXEC = 3 // execute (== read but check execute permission) OTRUNC = 16 // or'ed in (except for exec), truncate file first OCEXEC = 32 // or'ed in, close on exec ORCLOSE = 64 // or'ed in, remove on close )
Flags for the mode field in Topen and Tcreate messages
const ( DMDIR = 0x80000000 // mode bit for directories DMAPPEND = 0x40000000 // mode bit for append only files DMEXCL = 0x20000000 // mode bit for exclusive use files DMMOUNT = 0x10000000 // mode bit for mounted channel DMAUTH = 0x08000000 // mode bit for authentication file DMTMP = 0x04000000 // mode bit for non-backed-up file DMREAD = 0x4 // mode bit for read permission DMWRITE = 0x2 // mode bit for write permission DMEXEC = 0x1 // mode bit for execute permission // Mask for the type bits DMTYPE = DMDIR | DMAPPEND | DMEXCL | DMMOUNT | DMTMP // Mask for the permissions bits DMPERM = DMREAD | DMWRITE | DMEXEC )
File modes
const ( QTDIR = 0x80 // directories QTAPPEND = 0x40 // append only files QTEXCL = 0x20 // exclusive use files QTMOUNT = 0x10 // mounted channel QTAUTH = 0x08 // authentication file (afid) QTTMP = 0x04 // non-backed-up file QTFILE = 0x00 )
A Qid's type field represents the type of a file (directory, etc.), represented as a bit vector corresponding to the high 8 bits of the file's mode word.
DefaultBufSize is the default buffer size used in a Decoder
DefaultMaxSize is the default maximum size of a 9P message.
IOHeaderSize is the length of all fixed-width fields in a Twrite or Tread message. Twrite and Tread messages are defined as
size[4] Twrite tag[2] fid[4] offset[8] count[4] data[count] size[4] Tread tag[2] fid[4] offset[8] count[4]
MaxAttachLen is the maximum length (in bytes) of the aname field of Tattach and Tauth requests.
MaxErrorLen is the maximum length (in bytes) of the Ename field in an Rerror message.
MaxFileLen is the maximum length of a single file. While the 9P protocol supports files with a length of up to 8 EB (exabytes), to reduce the risk of overflow errors, the styxproto package only supports lengths of up to 4 EB so that it may fit within a signed 64-bit integer.
MaxFilenameLen is the maximum length of a file name in bytes
MaxOffset is the maximum value of the offset field in Tread and Twrite requests
const MaxStatLen = minStatLen + MaxFilenameLen + (MaxUidLen * 3)
MaxStatLen is the maximum size of a Stat structure.
MaxUidLen is the maximum length (in bytes) of a username or group identifier.
MaxVersionLen is the maximum length of the protocol version string in bytes
MaxWElem is the maximum allowed number of path elements in a Twalk request
const MinBufSize = MaxWElem*(MaxFilenameLen+2) + 13 + 4
MinBufSize is the minimum size (in bytes) of the internal buffers in a Decoder.
NoFid is a reserved fid used in a Tattach request for the afid field, that indicates that the client does not wish to authenticate his session.
NoTag is the tag for Tversion and Rversion requests.
QidLen is the length of a Qid in bytes.
ErrMaxSize is returned during the parsing process if a message exceeds the maximum size negotiated during the Tversion/Rversion transaction.
Write writes the 9P protocol message to w. It returns the number of bytes written, along with any errors.
type BadMessage struct { Err error // the reason the message is invalid // contains filtered or unexported fields }
BadMessage represents an invalid message.
func (m BadMessage) Len() int64
func (m BadMessage) String() string
func (m BadMessage) Tag() uint16
Tag returns the tag of the errant message. Servers should cite the same tag when replying with an Rerror message.
type Decoder struct { // MaxSize is the maximum size message that a Decoder will accept. If // MaxSize is -1, a Decoder will accept any size message. MaxSize int64 // contains filtered or unexported fields }
A Decoder provides an interface for reading a stream of 9P messages from an io.Reader. Successive calls to the Next method of a Decoder will fetch and validate 9P messages from the input stream, until EOF is encountered, or another error is encountered.
A Decoder is not safe for concurrent use. Usage of any Decoder method should be delegated to a single thread of execution or protected by a mutex.
Code:
l, err := net.Listen("tcp", ":564") if err != nil { log.Fatal(err) } rwc, err := l.Accept() if err != nil { log.Fatal(err) } d := styxproto.NewDecoder(rwc) e := styxproto.NewEncoder(rwc) for d.Next() { switch msg := d.Msg().(type) { case styxproto.Tversion: log.Printf("Client wants version %s", msg.Version()) e.Rversion(8192, "9P2000") case styxproto.Tread: e.Rread(msg.Tag(), []byte("data data")) case styxproto.Twrite: log.Printf("Receiving %d bytes from client", msg.Count()) io.Copy(ioutil.Discard, msg) } }
NewDecoder returns a Decoder with an internal buffer of size DefaultBufSize.
NewDecoderSize returns a Decoder with an internal buffer of size max(MinBufSize, bufsize) bytes. A Decoder with a larger buffer can provide more 9P messages at once, if they are available. This may improve performance on connections that are heavily multiplexed, where there messages from independent sessions that can be handled in any order.
Err returns the first error encountered during parsing. If the underyling io.Reader was closed in the middle of a message, Err will return io.ErrUnexpectedEOF. Otherwise, io.EOF is not considered to be an error, and is not relayed by Err.
Invalid messages are not considered errors, and are represented in the Messages slice as values of type BadMessage. Only problems with the underlying io.Reader are considered errors.
Msg returns the last 9P message decoded in the stream. It returns a non-nil message if and only if the last call to the Decoder's Next method returned true. The return value of Msg is only valid until the next call to a decoder's Next method.
Next fetches the next 9P message from the Decoder's underlying io.Reader. If an error is encountered reading from the underlying stream, Next will return false, and the Decoder's Err method will return the first error encountered.
If Next returns true, the Msg method of the Decoder will return the decoded 9P message.
Reset resets a Decoder with a new io.Reader.
An Encoder writes 9P messages to an underlying io.Writer.
NewEncoder creates a new Encoder that writes 9P messages to w. Encoders are safe to use from multiple goroutines. An Encoder does not perform any buffering of messages.
Err returns the first error encountered by an Encoder when writing data to its underlying io.Writer.
Flush flushes any buffered data to the underlying io.Writer.
Rattach writes a new Rattach message to the underlying io.Writer.
Rauth writes a new Rauth message to the underlying io.Writer.
Rclunk writes an Rclunk message to the underlying io.Writer.
Rcreate writes a new Rcreate message to the underlying io.Writer.
Rerror writes a new Rerror message to the underlying io.Writer. Errfmt may be a printf-style format string, with values filled in from the argument list v. If the error string is longer than MaxErrorLen bytes, it is truncated.
Rflush writes a new Rflush message to the underlying io.Writer.
Ropen writes a new Ropen message to the underlying io.Writer.
Rread writes a new Rread message to the underlying io.Writer. If len(data) is greater than the Encoder's Msize, it is broken up into multiple Rread messages. Rread returns the number of bytes written, plus any IO errors encountered.
Rremove writes an Rremove message to the underlying io.Writer.
Rstat writes an Rstat message to the underlying io.Writer. If the Stat is larger than the maximum size allowed by the NewStat function, a run-time panic occurs.
Rversion writes an Rversion message to the underlying io.Writer. If the version string is longer than MaxVerisonLen, it is truncated.
Rwalk writes a new Rwalk message to the underlying io.Writer. An error is returned if wqid has more than MaxWElem elements.
Rwrite writes an Rwrite message to the underlying io.Writer. If count is greater than the maximum value of a 32-bit unsigned integer, a run-time panic occurs.
Rwstat writes an Rwstat message to the underlying io.Writer.
Tattach writes a new Tattach message to the underlying io.Writer. If the client does not want to authenticate, afid should be NoFid. The uname and aname parameters will be truncated if they are longer than MaxUidLen and MaxAttachLen, respectively.
Tauth writes a Tauth message to enc's underlying io.Writer. The uname and aname parameters will be truncated if they are longer than MaxUidLen and MaxAttachLen, respectively.
Tclunk writes a Tclunk message to the underlying io.Writer.
Tcreate writes a new Tcreate message to the underlying io.Writer. If name is longer than MaxFilenameLen, it is truncated.
Tflush writes a new Tflush message to the underlying io.Writer.
NewTopen writes a new Topen message to the underlying io.Writer.
Tread writes a new Tread message to the underlying io.Writer. An error is returned if count is greater than the maximum value of a 32-bit unsigned integer.
Tremove writes a Tremove message to the underlying io.Writer.
Tstat writes a Tstat message to the underlying io.Writer.
Tversion writes a Tversion message to the underlying io.Writer. The Tag of the written message will be NoTag. If the version string is longer than MaxVersionLen, it is truncated.
Twalk writes a new Twalk message to the underlying io.Writer. An error is returned if wname is longer than MaxWElem elements, or if any single element in wname is longer than MaxFilenameLen bytes long.
Twrite writes a Twrite message to the underlying io.Writer. An error is returned if the message cannot fit inside a single 9P message.
Twstat writes a Twstat message to the underlying io.Writer. If the Stat is larger than the maximum size allowed by the NewStat function, a run-time panic occurs.
type Msg interface { // Tag is a transaction identifier. No two pending T-messages may // use the same tag. All R-messages must reference the T-message // being answered by using the same tag. Tag() uint16 // Len returns the total length of the message in bytes. Len() int64 // contains filtered or unexported methods }
A Msg is a 9P message. 9P messages are sent by clients (T-messages) and servers (R-messages).
A Qid represents the server's unique identification for the file being accessed: two files on the same server hierarchy are the same if and only if their qids are the same.
NewQid writes the 9P representation of a Qid to buf. If buf is not long enough to hold a Qid (13 bytes), io.ErrShortBuffer is returned. NewQid returns any remaining space in buf after the Qid has been written.
Path is an integer unique among all files in the hierarchy. If a file is deleted and recreated with the same name in the same directory, the old and new path components of the qids should be different.
Type returns the type of a file (directory, etc)
Version is a version number for a file; typically, it is incremented every time a file is modified. By convention, synthetic files usually have a verison number of 0. Traditional files have a version number that is a hash of their modification time.
The Rattach message contains a server's reply to a Tattach request. As a result of the attach transaction, the client will have a connection to the root directory of the desired file tree, represented by the returned qid.
Qid is the qid of the root of the file tree. Qid is associated with the fid of the corresponding Tattach request.
Servers that require authentication will reply to Tauth requests with an Rauth message. If a server does not require authentication, it can reply to a Tauth message with an Rerror message.
The aqid of an Rauth message must be of type QTAUTH.
The Rerror message (there is no Terror) is used to return an error string describing the failure of a transaction.
Ename is a UTF-8 string describing the error that occured.
Err creates a new value of type error using an Rerror message.
An Rerror message replaces the corresponding reply message that would accom- pany a successful call; its tag is that of the failing request.
A server should answer a Tflush message immediately with an Rflush message that echoes the tag (not oldtag) of the Tflush message. If it recognizes oldtag as the tag of a pending transaction, it should abort any pending response and discard that tag. A Tflush can never be responded to with an Rerror message.
An Ropen message contains a servers response to a Topen request. An Ropen message is only sent if the server determined that the requesting user had the proper permissions required for the Topen to succeed, otherwise Rerror is returned.
The iounit field returned by open and create may be zero. If it is not, it is the maximum number of bytes that are guaranteed to be read from or written to the file without breaking the I/O transfer into multiple 9P messages
Qid contains the unique identifier of the opened file.
The Rread message returns the bytes requested by a Tread message. The data portion of an Rread message can be consumed using the io.Reader interface.
Read copies len(p) bytes from an Rread message's data field into p. It returns the number of bytes copied and an error, if any.
If a Tread requests asks for more data than can fit within a single 9P message, multiple Rread messages will be generated that cite the tag of a single Tread request.
An Rversion reply is sent in response to a Tversion request. It contains the version of the protocol that the server has chosen, and the maximum size of all successive messages.
Len returns the length of the Rversion message in bytes.
Msize returns the maximum size (in bytes) of any 9P message that it will send or accept, and must be equal to or less than the maximum suggested in the preceding Tversion message. After the Rversion message is received, both sides of the connection must honor this limit.
Tag must return the tag of the corresponding Tversion message, NoTag.
Version identifies the level of the protocol that the server supports. If a server does not understand the protocol version sent in a Tversion message, Version will return the string "unknown". A server may choose to specify a version that is less than or equal to that supported by the client.
An Rwalk message contains a server's reply to a successful Twalk request. If the first path in the corresponding Twalk request cannot be walked, an Rerror message is returned instead.
Nwqid must always be equal to or lesser than Nwname of the corresponding Twalk request. Only if Nwqid is equal to Nwname is the Newfid of the Twalk request established. Nwqid must always be greater than zero.
Wqid contains the Qid values of each path in the walk requested by the client, up to the first failure.
The Stat structure describes a directory entry. It is contained in Rstat and Twstat messages. Tread requests on directories return a Stat structure for each directory entry. A Stat implements the os.FileInfo interface.
NewStat creates a new Stat structure. The name, uid, gid, and muid fields affect the size of the stat-structure and should be considered read-only once the Stat is created. An error is returned if name is more than MaxFilenameLen bytes long or uid, gid, or muid are more than MaxUidLen bytes long. Additional fields in the Stat structure can be set by using the appropriate Set method on the Stat value.
Code:
buf := make([]byte, 100) s, buf, err := styxproto.NewStat(buf, "messages.log", "root", "wheel", "") if err != nil { log.Fatal(err) } s.SetLength(309) s.SetMode(0640) fmt.Println(s)
Output:
type=0 dev=0 qid="type=0 ver=0 path=0" mode=640 atime=0 mtime=0 length=309 name="messages.log" uid="root" gid="wheel" muid=""
Atime returns the last access time for the file, in seconds since the epoch.
The 4-byte dev field contains implementation-specific data that is outside the scope of the 9P protocol. In Plan 9, it holds an identifier for the block device that stores the file.
Gid returns the group of the file.
Length returns the length of the file in bytes.
Mode contains the permissions and flags set for the file. Permissions follow the unix model; the 3 least-significant 3-bit triads describe read, write, and execute access for owners, group members, and other users, respectively.
Mtime returns the last time the file was modified, in seconds since the epoch.
Muid returns the name of the user who last modified the file
Name returns the name of the file.
Qid returns the unique identifier of the file.
The 2-byte type field contains implementation-specific data that is outside the scope of the 9P protocol.
Uid returns the name of the owner of the file.
The attach message serves as a fresh introduction from a user on the client machine to the server.
On servers that require authentication, afid serves to authenticate a user, and must have been established in a previous Tauth request. If a client does not wish to authenticate, afid should be set to NoFid.
Aname is the name of the file tree that the client wants to access.
Fid establishes a fid to be used as the root of the file tree, should the client's Tattach request be accepted.
Uname is the user name of the attaching user.
The Tauth message is used to authenticate users on a connection.
The afid of a Tversion message establishes an 'authentication file'; after a Tauth message is accepted by the server, a client must carry out the authentication protocol by performing I/O operations on afid. Any protocol may be used and authentication is outside the scope of the 9P protocol.
The aname field contains the name of the file tree to access. It may be empty.
The uname field contains the name of the user to authenticate.
The clunk request informs the file server that the current file represented by fid is no longer needed by the client. The actual file is not removed on the server unless the fid had been opened with ORCLOSE.
When the response to a request is no longer needed, such as when a user interrupts a process doing a read(2), a Tflush request is sent to the server to purge the pending response.
The message being flushed is identified by oldtag.
The open request asks the file server to check permissions and prepare a fid for I/O with subsequent read and write messages.
Fid is the fid of the file to open, as established by a previous transaction (such as a succesful Twalk).
The mode field determines the type of I/O, and is checked against the permissions for the file:
0 (OREAD) read access 1 (OWRITE) write access 2 (ORDWR) read and write access 3 (OEXEC) execute access.
Count is the number of bytes to read from the file. Count cannot be more than the maximum value of a 32-bit unsigned integer.
Fid is the handle of the file to read from.
Offset is the starting point in the file from which to begin returning data..
Len returns the length of a Tversion request in bytes.
Msize returns the maximum length, in bytes, that the client will ever generate or expect to receive in a single 9P message. This count includes all 9P protocol data, starting from the size field and extending through the message, but excludes enveloping transport protocols.
For version messages, Tag should be NoTag
Version identifies the level of the protocol that the client supports. The string must always begin with the two characters "9P".
A Twalk message is used to descend a directory hierarchy.
The Twalk message contains the fid of the directory it intends to descend into. The Fid must have been established by a previous transaction, such as an attach.
Newfid contains the proposed fid that the client wishes to associate with the result of traversing the directory hierarchy.
To simplify the implementation of servers, a maximum of sixteen name elements may be packed in a single message, as captured by the constant MaxWElem.
It is legal for Nwname to be zero, in which case Newfid will represent the same file as Fid.
The Twalk message contains an ordered list of path name elements that the client wishes to descend into in succession.
The Twrite message is sent by a client to write data to a file. The data portion of a Twrite request can be accessed via the io.Reader interface.
Read copies len(p) bytes from a Twrite message's data field into p. It returns the number of bytes copied and an error, if any.
Package styxproto imports 11 packages (graph) and is imported by 7 packages. Updated 2018-05-15. Refresh now. Tools for package owners.
|
https://godoc.org/aqwari.net/net/styx/styxproto
|
CC-MAIN-2018-30
|
refinedweb
| 3,654
| 64.81
|
Tips and Tricks and Other
Various
- Code Snippets - useful small functions.
- Project Euler newLISP - the first 20 from projecteuler.net/problems.
- Install newLISP as an HTTP and net-eval server process on Mac OS X using launchd
- Combined OS shell and newLISP shell with NLS. Now working on all platforms.
- Write FOOP (Functional Object Oriented Programming) applications. The Mandelbrot set as a FOOP application using complex numbers.
Web
- Write CGI scripts in newLISP: Mandelbrot, environment, read form data, syntax highligher, this site on newLISP Wiki
- AJAX and newLISP: do asynchronous webpage updates with newLISP CGI scripting.
- A small CGI example how to set cookies: Cookie example.
- S-expressions to XML - translate LISP s-expressions into XML.
- Artful Code tutorial: Working with XML in newLISP
- File Upload Script - a newLISP CGI script for uploading files from a HTML form.
- Translate IP numbers to country of origin.
- Create a DSL for writing HTML with newLISP using Fernando Canizo's (aka conan) html-in-newlisp.
- Try newLISP running in a web browser - now updated for v.10.6.1 and Emscripten v.1.22.0.
Connect to other languages and libraries
- Tk and newLISP - run graphics Tcl/Tk using this wrapper.
- Write graphics programs with OpenGL using simple FFI or using the extended FFI.
- Embedded Binary - embed binary machine code into newLISP source updated 2013-10-29.
- How to write a C-import function taking a string and returning an integer array CountCharacters. For more details on working with C-libraries read chapter 23 in Code Patterns in newLISP
- Windows Event Loop - write GUI programs for MS Windows.
- Register a callback with newLISP library (newlisp.dll, newlisp.dylib, newlisp.so).
Network Security
- A network packet sniffer written with libpcap and newLISP.
- A network scanner. newLISP v.10.2.8 or later is required.
- Raw sockets with net-packet when using newLISP for Mac OS X, BSDs and Linux!
- Library to verify YubiKey one time passwords by Kirill. Also available as a module in the Various section.
Parallel and Distributed Processing
- Parallel web page retrieval is easy using the spawn function. The program googles a search term then counts all the words in 20 pages retrieved. Try the demo. Even on a single core CPU this is much faster than working sequentially (Mac OS X, Linux and other UNIX, not Windows).
- MapReduce - Distributed computing, map/reduce style example of calculating and reducing word counts from different documents. Note that newLISP does this without any modules to import. The built-in function net-eval takes care of mapping all network communications including distribution of code to external nodes.
Science, Theory and Math
- Closures, Contexts and Stateful Functions explains the difference between Scheme Closures and newLISP namespaces, called contexts and shows a method for writing stateful functions via in-place modification.
- The Why of Y in newLISP. Richard P. Gabriel's Y-function derivation modified for newLISP.
- Recursion - iteration - generator a comparison of recursive, iterator and generator algorithms using the Fibonacci algorithm as example.
- Mergesort benchmark against Perl, Python, Ruby and PHP.
|
http://www.newlisp.org/index.cgi?page=Tips_and_Tricks
|
CC-MAIN-2014-42
|
refinedweb
| 506
| 58.79
|
Hi I'm making a simple calculator program and I've encountered errors that I've never heard of before. No matter what I do, I don't know how to fix this problem
error C2106: '=' : left operand must be l-value
What does that mean? I don't know what I did wrong.
I am using Visual Studio 2008
I haven't done programming in a while so I'm kinda rusty... Please help!
thank you
#include <string>
#include <iostream>
#include <iomanip>
using namespace std;
void main ( )
{
int choice;
double a, b, c;
char an;
cout << "Please enter the operation you want" << endl;
cout << "Press 1 for addition" << endl << "Press 2 for subtraction" <<
endl << "Press 3 for division" << endl << "Press 4 for multiplication"
<< endl;
cin >> choice;
cout << "Please enter the numbers you want to do calculation with" << endl;
while (an = 'y')
{
if (choice == 1)
{
a + b = c;
}
if (choice == 2)
{
a - b = c;
}
if (choice == 3)
{
a / b = c;
}
if (choice == 4)
{
a * b = c;
}
cout << "Here is your result: " << c << endl;
cout << "want to do it again?" << endl;
cin >> an;
}
}
I see two problems there: Your assignments are backwards (= assigns from right to left, *not* the other way around), and your while loop condition is an assignment when you probably want it to be a comparison.
Well, three problems if you count the fact that you didn't use code tags.
An "lvalue" is an expression which corresponds to a memory location: you can put a value there, so it can be a value on the left of an = assignment. An "rvalue" can be any arbitrary expression of the correct type; there is no need for it to correspond to a single memory location, because it's going to be assigned to an lvalue since it's on the right of an = assignment.
Last edited by Lindley; December 2nd, 2008 at 11:31 PM.
An lvalue is basically something that can be assigned to, an rvalue is basically a temporary.
in your code, you have lines like...
a+b = c;
a+b makes a temporary object that holds the result of a+b, as its a temporary, its an rvalue and thus cannot be assigned to.
c=a+b;
would work fine.
c is a named variable thats not const, it can be assigned to hence its an lvalue. So now we take the same temporary you were making before then assign that result to c.
lvalue and rvalue were terms that originated for describing something that can be on the left hand side of the assignment operator and something that can only be on the right hand side of an assignment operator.
oh okay. thank you. it works very well now.
haha I forgot for a minute that it goes right to left
View Tag Cloud
Forum Rules
|
http://forums.codeguru.com/showthread.php?466488-Making-a-simple-calculator-error-C2106-left-operand-must-be-l-value
|
CC-MAIN-2017-51
|
refinedweb
| 471
| 72.6
|
Private Domain Name Registries are organisations that run the root DNS servers for domain names that are not part of the "official" (i.e., ICANN approved) DNS namespace.
Private DNS registries can be classified into two types:
As a rule, DNS geeks tend to grudgingly accept the first kind, and vociferously oppose the second, the reason being that the "outwards" registries' TLDs are outside of the DNS system and cannot be accessed without using an ISP that's been paid off to include them (for example, Freeserve, who were allegedly paid £1 million by new.net) in their DNS servers, or a platform specific browser plug-in.
The main private registries are:
Prices for domains from private registries tend to be higher than from the others, due to the increased availability of names within their namespace.
There are also a number of alternative namespaces run on a not for profit basis. These include:
Unlike the "outwards" registries mentioned above, these alternative namespaces prefer to operate using internet standards - you don't need a plugin to access them, just add their root servers to your DNS config. They're also committed to not conflicting with the official ICANN domains.
*: I should say right now that I work for CentralNic, so if this writeup seems a bit biased, it probably is. We rule.
Log in or register to write something here or to contact authors.
Need help? accounthelp@everything2.com
|
https://everything2.com/title/Private+Domain+Name+Registries
|
CC-MAIN-2018-39
|
refinedweb
| 238
| 56.59
|
How Framework.
Problems with Dynamic Data Store
There are two main problems with Episerver’s DDS:
- it has a poor performance
- It has a limited way of querying data
DDS vs Entity Framework performance
To compare Dynamic Data Store and Entity Framework performance, let’s use an example from Part III of this blog series. We used
PageViewsData class and put its store in a separate table to boosts the performance of DDS.
For test purposes I created another class named
EntityPageViewsData which reflects DDS’s
PageViewsData:
using System.ComponentModel.DataAnnotations; using System.ComponentModel.DataAnnotations.Schema; namespace Setapp.DataStore { public class EntityPageViewsData { [Key] public int Id { get; set; } [Index("IDX_PageIdKey")] public int PageId { get; set; } public int ViewsAmount { get; set; } } }
Let’s start with filling the table. The biggest difference here is that in Entity Framework you can add multiple entries to a table at once (in DDS you need to add entries one by one, there’s no bulk action for it). And that speeds up this action a lot! I am adding 50 000 rows to each table:
var store = typeof(PageViewsData).GetOrCreateStore(); for (int i = 0; i < 50000; i++) { store.Save(new PageViewsData { PageId = i, ViewsAmount = i }); }
DbSet entityPageViewsDatas = applicationDbContext.PageViewsData; var entries = new List(); for (int i = 0; i < 50000; i++) { entries.Add(new EntityPageViewsData { PageId = i, ViewsAmount = i }); } entityPageViewsDatas.AddRange(entries); applicationDbContext.SaveChanges();
Here’s a comparison of the average time of adding the entries to the tables using both frameworks:
Now that makes a difference, doesn’t it?
Ok, but what about a read operation? Let’s compare a few cases here and start with searching for a single object by an indexed column:
store.Items().FirstOrDefault(pageViewsData => pageViewsData.PageId == 25000);
vs
entityPageViewsDatas.FirstOrDefault(pageViewsData => pageViewsData.PageId == 25000);
The difference here isn’t that big but it gets worse when we search with an unindexed column:
store.Items().FirstOrDefault(pageViewsData => pageViewsData.ViewsAmount == 25000);
vs
entityPageViewsDatas.FirstOrDefault(pageViewsData => pageViewsData.ViewsAmount == 25000);
Let’s go further and see what happens if we want to get and materialize the whole collection filtered by an unindexed value:
store.Items().Where(pageViewsData => pageViewsData.ViewsAmount > 25000).ToList();
vs
entityPageViewsDatas.Where(pageViewsData => pageViewsData.ViewsAmount > 25000).ToList();
The biggest problem here is the materialization of the objects, that’s what DDS handles much, much worse. The performance gets better if you only count the objects, although you can still clearly see the advantage of Entity Framework.
Other Entity Framework advantages
Well, let’s be honest: Entity can do at least the same things as DDS and it can do it much faster. Besides that, what are the most important features missing in Dynamic Data Store?
In DDS you can only query a single table. There is no way of using LINQ to build more complicated queries which would easily join tables or perform “group by” operations and sort it. So even if LINQ syntax allows you to write such a piece of code:
store.Items() .GroupBy(pageViewsData => pageViewsData.PageId) .Select(group => new { PageId = group.Key, Count = group.Count() }) .OrderBy(obj => obj.PageId)
and even though it will compile with no problems, you will get a runtime exception as DDS cannot form a valid SQL query from this LINQ code. Entity will have no problems with that.
Another big advantage of Entity Framework over DDS is that in the latter case it can be very problematic if you want to change a class stored in the database.
For example, if you want to change a type of a property or add another one, it gets very complicated to change the store definition if you have a (much faster) custom table. In Entity Framework, you can use its migration mechanism which can automatically change the structure of your database if needed.
Final Words
You can find the whole solution on Github and test it yourself. As you can see, DDS is very limited in its functionality and its performance is very poor, even if optimized by using a custom table. To use Entity Framework in Episerver, you don’t even have to include any additional libraries – it’s one of the dependencies of Episerver anyway!
There is no reason not to start using Entity Framework instead of DDS, it will surely help you create better performing software.
If you missed my previous three blog posts from the ‘How to store custom data in Episerver’ blog series. Then, you can check them out below:
- Part I: Dynamic Data Store basic
- Part II: Dynamic Data Store implementation
- Part III: Separate custom big tables
P.S. At Setapp, we’ve got some great tech positions open. If you want to work in a challenging environment and deliver meaningful projects, then do check them out now.
|
https://blog.setapp.pl/migrate-entity-framework-episerver/
|
CC-MAIN-2018-51
|
refinedweb
| 788
| 54.42
|
Using model converted to TensorFlow Lite. (If you don't have a model converted yet, you can experiment using the model provided with the example linked below.).
To install just the interpreter, download the appropriate Python wheel for your
system from the following table, and then install it with the
pip install
command.
For example, if you're setting up a Raspberry Pi (using Raspbian Buster, which
has Python 3.7), install the Python wheel as follows (after you click to
download the
.whl file below):
pip3 install tflite_runtime-1.14.0-cp37-cp37m-linux_armv7l.whl
Run an inference using tflite_runtime
To distinguish this interpreter-only package from the full TensorFlow package
(allowing both to be installed, if you choose), the Python module provided in
the above wheel is named
tflite_runtime.
So instead of importing
Interpreter from the
tensorflow module, you need to
import it from
tflite_runtime.
For example, after you install the package above, copy and run the
label_image.py
file. It will (probably) fail because you don't have the
tensorflow library
installed. To fix it, edit this line of the file:
import tensorflow as tf
So it instead reads:
import tflite_runtime.interpreter as tflite
And then change this line:
interpreter = tf.lite.Interpreter(model_path=args.model_file)
So it reads:
interpreter = tflite.Interpreter(model_path=args.model_file)
Now run
label_image.py again. That's it! You're now executing TensorFlow Lite
models.
Learn more
If you have a Raspberry Pi, try the classify_picamera.py example to perform image classification with the Pi Camera and TensorFlow Lite.
For more details about the
Interpreter API, read Load and run a model
in Python.
To convert other TensorFlow models to TensorFlow Lite, read about the the TensorFlow Lite Converter.
|
https://www.tensorflow.org/lite/guide/python
|
CC-MAIN-2020-05
|
refinedweb
| 287
| 57.57
|
screen_request_events()
Start receiving libscreen events.
Synopsis:
#include <bps/screen.h>
BPS_API int screen_request_events(screen_context_t context)
Since:
BlackBerry 10.0.0
Arguments:
- context
The libscreen context to use for event retrieval.
Library:libbps (For the qcc command, use the -l bps option to link against this library)
Description:
The screen_request_events() function starts to deliver libscreen events to an application using BPS. An application must not invoke libscreen's screen_get_event() function if it is receiving screen events through BPS. The screen_request_events() function should not be called multiple times before calling screen_stop_events(). An application may only request events for a single screen_context_t at one time, and only for a single thread.
Returns:
BPS_SUCCESS upon success, BPS_FAILURE with errno set otherwise.
Last modified: 2014-05-14
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
|
https://developer.blackberry.com/native/reference/core/com.qnx.doc.bps.lib_ref/topic/screen_request_events.html
|
CC-MAIN-2020-29
|
refinedweb
| 138
| 52.15
|
detach 1.0
Fork and detach the current processe.
Fork and detach the current process.
[]()
Usage
The detach package contains a context manager called Detach. It is used with with with statement to fork the current process and execute code in that process. The child process exits when the context manager exits. The following parameters may be passed to Detach to change its behavior:
- stdout - Redirect child stdout to this stream.
- stderr - Redirect child stderr to this stream.
- stdin - Redirect his stream to child stdin.
- close_fds - Close all file descriptors in the child excluding stdio.
- exclude_fds - Do not close these file descriptors if close_fds is True.
- daemonize - Exit the parent process when the context manager exits.
Examples
### Simple Fork with STDOUT
import detach, os, sys
- with detach.Detach(sys.stdout) as d:
-
- if d.pid:
- print(“forked child with pid {}”.format(d.pid))
- else:
- print(“hello from child process {}”.format(os.getpid()))
### Daemonize
import detach from your_app import main
- def main():
- “”“Your daemon code here.”“”
- with detach.Detach(daemonize=True) as d:
-
- if d.pid:
- print(“started process {} in background”.format(pid))
- else:
- main()
### Call External Command
import detach, sys pid = detach.call([‘bash’, ‘-c’, ‘echo “my pid is $$”’], stdout=sys.stdout) print(“running external command {}”.format(pid))
License
Copyright (c) 2014 Ryan Bourgeois. This project and all of its contents is licensed under the BSD-derived license as found in the included [LICENSE][1] file.
[1]: “LICENSE”
- Author: Ryan Bourgeois
- Keywords: fork daemon detach
- License: BSD-derived
- Categories
- Package Index Owner: BlueDragonX
- DOAP record: detach-1.0.xml
|
https://pypi.python.org/pypi/detach/1.0
|
CC-MAIN-2016-26
|
refinedweb
| 259
| 61.83
|
Configure HDFS Federation
An HDFS federation allows you to scale a cluster horizontally by configuring multiple namespaces and NameNodes. The DataNodes in the cluster are available as a common block of storage for every NameNode in the federation.
- Ensure that you have planned for a cluster maintenance window before configuring the HDFS federation because all the cluster services are restarted during the process of configuration.
- Verify that you have configured HA for all the NameNodes that you want to include in the federation.
- In Ambari Web, select .
- Click Actions > Add New HDFS Namespace.The Add New HDFS Namespace wizard launches. The wizard describes the set of automated and manual steps you must perform to add the new namespace.
- On the Get Started page, type in a NameService ID and click Next.
- On the Select Hosts page, select a host for the additional NameNodes and JournalNodes, and click Next.
- On the Review page, confirm your host selections and click Next.
- On the Configure Components page, monitor the progress bars as the wizard completes adding the new namespace, then click Complete.
After the Ambari Web UI reloads, you may see some alert notifications. Wait a few minutes until all the services restart.You can navigate to Services > HDFS > Summary to view details of the newly added namespace.
|
https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/managing-and-monitoring-ambari/content/amb_configure_federation.html
|
CC-MAIN-2018-51
|
refinedweb
| 215
| 57.06
|
unilog
Swift unified logging made simple.
Features
- Simplifies the Apple unified logging system
- Ready to be use without the need to set anything up
- Swift like formating, no need for a
NSStringor
printfformat string
- Fallbacks automatically to
NSLogif running on
El Capitan(10.11) or early
- Can be customized to log in debug mode just like in release mode
- All parameters are public by default, the way information should be in a log; parameters can be redacted individually
- Private data will remain private and cannot be shown at all, even when flipping the
private_dataflag on
- Simple and small codebase, under 150 LOC
Instalation
Init a Swift package and add the unilog dependency:
.package(url: "", from: "2.0.0")
Add the Swift package to your workspace, click to add Embedded Binaries and select
unilog.framework.
Import it and we're ready to log:
import unilog
Usage
Log data by using the default app log:
Log.message("\(imageName) of type \(uti) and resolution \(width, height) has been loaded")
Sensitive data can be redacted by using the
.mPrivate access modifier. This achieves the same as the
{private} modifier of the
os_log function, with the difference that data cannot be recuperated at a later point, so it's private "forever".
Log.message("User \(username) with password \(password, modifier: .mPrivate) successfully logged in.")
If not specified, the
.mPublic modifier is used, meaning the above line is identical to:
Log.message("User \(username, modifier: .mPublic) with password \(password, modifier: .mPrivate) successfully logged in.")
Logging this way will use the app bundle identifier as the log subsystem and
App as the log category. These parameters can be changed at once or individual and at any time during the app's lifetime by calling:
Log.setCategory(to: "Network", forSubsystem: "com.turtlescompany.scanner")
When working in debug mode, it's useful to have the full, public log available. The modifier
.mPrivateRelease equals to
.mPrivate in Release mode and to
.mPublic in Debug mode. Therefore, to have the plaintext user password in Debug mode:
Log.message("User \(username) with password \(password, modifier: .mPrivateRelease) successfully logged in.")
For more complex apps, with multiple subsystems, one log category won't suffice. In that case, instantiate multiple logs as desired:
let logN = Log(category: "Network") let logDB = Log(category: "DB") ... logDB.error("User not found! \(error)") ... logN.info("Connection to \(server.ip:server.port) established")
Log levels
unilog provides equivalents to all os_log levels with the exception of Fault. It is important to understand the difference between these levels and where are the logs stored in each case. For more information, take a loot at the Apple documentation.
| unilog | os_log | | ---------| ---------| | info | Info | | debug | Debug | | message | Default | | error | Error |
Cristian Sava, cristianzsava@gmail.com
License
unilog is available under the MIT license. See the LICENSE file for more info.
Github
Help us keep the lights on
Dependencies
Used By
Total: 0
|
https://swiftpack.co/package/sivu22/unilog
|
CC-MAIN-2019-18
|
refinedweb
| 477
| 50.84
|
The skill description page of any skill in SUSI.AI skill cms displays all the details regarding the skill. It displays image, description, examples and name of author. The skill made by author can impress the users and they might want to know more skills made by that particular author.
We decided to display all the skills by an author. We needed an endpoint from server to get skills by author. This cannot be done on client side as that would result in multiple ajax calls to server for each skill of user. The endpoint used is :
"" + author
Here the author is the name of the author who published the particular skill. We make an ajax call to the server with the endpoint mentioned above and this is done when the user clicks the author. The ajax call response is as follows(example) :
{ 0: "/home/susi/susi_skill_data/models/general/Entertainment/en/creator_info.txt", 1: "/home/susi/susi_skill_data/models/general/Entertainment/en/flip_coin.txt", 2: "/home/susi/susi_skill_data/models/general/Assistants/en/websearch.txt", session: { identity: { type: "host", name: "139.5.254.154", anonymous: true } } }
The response contains the list of skills made by author. We parse this response to get the required information. We decided to display a table containing name, category and language of the skill. We used map function on object keys to parse information from every key present in JSON response. Every value corresponding to a key represents a response of following type:
"/home/susi/susi_skill_data/models/general/Category/language/name.txt"
Explanation:
- Category: There are currently six categories Assistants, Entertainment, Knowledge, Problem Solving, Shopping and Small Talks. Each skill falls under a different category.
- language: This represents the ISO language code of the language in which skill is written.
- name: This is the name of the skill.
We want these attributes from the string so we have used the split function:
let parse = data[skill].split('/');
data is JSON response and skill is the key corresponding to which we are parsing information. We store the array returned by split function in variable parse. Now we return the following table in map function:
return ( <TableRow> <TableRowColumn> <div> <Img style={imageStyle} src={[ image1, image2 ]} unloader={<CircleImage name={name}} /> {name} </div> </TableRowColumn> <TableRowColumn>{parse[6]}</TableRowColumn> <TableRowColumn>{isoConv(parse[7])}</TableRowColumn> </TableRow> )
Here :
- name: The name of skill converted into Title case by the following code :
let name = parse[8].split('.')[0]; name = name.charAt(0).toUpperCase() + name.slice(1);
- parse[6]: This represents the category of the skill.
- isoConv(parse[7]): parse[7] is the ISO code of the language of skill and isoConv is an npm package used to get full form of the language from ISO code.
- CircleImage: This is a fallback option in case image at the URL is not found. This takes first two words from the name and makes a circular component.
After successful execution of the code, we have the following looking table:
Resources:
- Full code for implementation:
- isoConv npm package:
- Ajax’s guide on MDN:
- Split function in javascript:
- Map function in javascript:
- Material UI Table:
|
https://blog.fossasia.org/getting-skills-by-an-author-in-susi-ai-skill-cms/
|
CC-MAIN-2019-09
|
refinedweb
| 515
| 57.06
|
};
The advantage of defining the method on the prototype is that it will be available for all instances of the component.
The class names of most components follow the convention of using a capital F (Flash), the component name, and then the word Class. If you are in doubt about the name of the component class, you can check the component definition in the Library. For example, a CheckBox class definition is named FCheckBoxClass. Other components have similar names, as shown in Table 11-3.
Most Flash MX UI components are subclassed from the FUIComponentClass class. The FUIComponentClass class contains most of the everyday functionality that a component needs, such as initialization, setting colors and styles, setting callback methods, and setting focus. For more information on FUIComponentClass, see the third-party extension by Jesse Warden at.
Subsequent sections show examples of extending basic components.
An ActionScript programmer can visually and functionally extend the Flash MX components. There are numerous examples on the Web that demonstrate this, but I will show a way to add a custom icon to a ListBox, based on data from a remote service call, and, in doing so, create an enhanced version of the ListBox component.
You'll need two icons?a blank checkbox and a filled checkbox?to simulate a checkbox component for the data display. The files used in the procedure, blankbox.gif and check.gif, are also available at the online Code Depot, as are the completed .fla files.
As an exercise, you might want to experiment by adding a true CheckBox component rather than an icon to the ListBox. The icon is used for performance reasons, since we are simply displaying data and no user interaction is needed.
The ListBox, and other components like it, have three main parts: the data provider, the item (row), and the component itself. Components and their assets can be found in the Library after you drag an instance of the component from the Components panel to the Stage. The assets are organized in folders under the Flash UI Components folder in the Library, as shown in Figure 11-3.
We'll add a checkbox icon to the FListBoxItem to create a new element named FListCheckItem. Use the .fla created in Example 11-1 as a starting point. The final version is available as FListCheckItem.fla from the online Code Depot.
Drag an instance of the ListBox component from the Components panel to the Stage. It appears as FListBoxItem in the Library panel under Core Assets - Developer Only FUIComponent Class Tree FUIComponent SubClasses FSelectableItem SubClasses, as shown in Figure 11-3.
In the Library, right-click (Ctrl-click on Macintosh) FListBoxItem and choose Duplicate from the pop-up menu. If you've successfully duplicated the Library symbol and not an instance of it on the Stage, the Duplicate Symbol dialog box appears.
In the Duplicate Symbol dialog box, name the new symbol FListCheckItem. Select the Export for ActionScript checkbox, and set the Linkage Identifier to FListCheckItemSymbol (click the Advanced button to expand the dialog box if you don't see the Linkage properties).
Next, import the blankbox.gif and check.gif files to the Library using File Import to Library.
Create a new symbol in the Library, name it Checkbox_icons, and open it for editing.
Add blankbox.gif to the first frame of the Checkbox_ icons symbol, and add check.gif to a new second frame. You can position the images with their upper-left corners at the registration point of the movie clip.
Open the FListCheckItem symbol for editing, and add a new layer. On that layer, drag an instance of the Checkbox_icons symbol from the Library to the symbol's canvas, and give it the instance name check_mc in the Property inspector.
Next, select the first frame of the Actions layer of the FListCheckItem symbol, and open the Actions panel to show the existing code (which was duplicated from the original FListBoxItem symbol). Modify the code, as shown in Example 11-2 (changes are shown in bold):
To define the FListCheckItemClass class, change FListItemClass to FListCheckItemClass throughout the code, as shown in bold. Also change FListItemSymbol to FListCheckItemSymbol.
Our custom FListCheckItemClass class extends the FSelectableItemClass class, but we must add the layoutContent( ) and displayContent( ) methods to handle the attaching of the icons. The layoutContent( ) method overrides the FSelectableItemClass method of the same name. Copy the code for the layoutContent( ) method from the first frame of the Methods layer of the FSelectableItem symbol in the Library, and make the changes shown in bold.
The displayContent( ) method overrides the superclass method of the same name, which it calls via super. Again, enter the text shown in bold.
Finally, we'll add a few lines to the main .fla, as shown in Example 11-3, with the changes from Example 11-1 shown in bold.
#initclip 3 /* FListCheckItemClass EXTENDS FSelectableItemClass This is mostly a code stub for extension purposes. */ function FListCheckItemClass( ) { this.init( ); } FListCheckItemClass.prototype = new FSelectableItemClass( ); // EXTEND this method to change the content of an item and its layout FListCheckItemClass.prototype.layoutContent = function (width) { this.attachMovie("FLabelSymbol", "fLabel_mc", 2, {hostComponent:this.controller}); this.fLabel_mc._x = 2; this.fLabel_mc._y = 0; this.icon_mc._x = width - this.icon_mc._width - 10; this.fLabel_mc.setSize(width - 10 - this.icon_mc._width); this.fLabel_mc.labelField.selectable = false; }; FListCheckItemClass.prototype.displayContent = function(itmObj, selected) { // Execute the superclass method first super.displayContent(itmObj, selected); // Show an icon dependent on the data.checked property this.check_mc.gotoAndStop(itmObj.data.checked ? 1 : 2); } Object.registerClass("FListCheckItemSymbol", FListCheckItemClass); #endinitclip
; temp.checked = record.Discontinued; } // Set up the FListCheckItemSymbol to be the item for the ListBox products_lb.setItemSymbol("FListCheckItemSymbol"); //; }
Save and test the movie. It should show the checkbox icons in our custom ListBox, depending on the Discontinued field in the Products table. (The Products table does not have a Discontinued field by default, but you can add it to your test database easily, as we did in Chapter 5.)
The following line sets up FListCheckItemSymbol as the list item for the ListBox instance on the Stage:
products_lb.setItemSymbol("FListCheckItemSymbol");
That's all there is to it. In this case, the original ListBox component was not touched, but a copy of it was enhanced to include an icon. That is, our custom FListCheckItemClass class extends the FSelectableItemClass class directly instead of extending FListItemClass, as could have been done. Regardless, a component like this can also be packaged and installed into the Components panel with very little effort.
In Chapter 4, we enhanced the ComboBox component to include a pickValue( ) method. This method allows you to pass a result from a Flash Remoting call to the ComboBox and choose that value in the UI. I'll take that a step further now by adding pickLabel( ), setDefault( ), and setDescriptor( ) methods. These new methods, and others like it, can make working with the ComboBox much easier, especially when working with Flash Remoting.
When enhancing ActionScript objects in this way, make sure that your method names do not conflict with method names from other programmers. For example, if another programmer on your team creates a pickValue( ) method for a ComboBox, the two namespaces will collide.
The custom pickValue( ) and pickLabel( ) methods can be used to set a value in a ComboBox. The methods are shown in Example 11-4.
// Set up the combo boxes to be able to pick a value FComboBoxClass.prototype.pickValue = function (value) { var tempLength = this.getLength( ); for (var i=0; i < tempLength; i++) { if (this.getItemAt(i).data == value) { this.setSelectedIndex(i); break; } } }; // Set up the combo boxes to be able to pick a label FComboBoxClass.prototype.pickLabel = function (text) { var tempLength = this.getLength( ); for (var i=0; i < tempLength; i++) { if (this.getItemAt(i).label == value) { this.setSelectedIndex(i); break; } } };
Typically, a ComboBox is populated from a remote database containing labels and values of a Categories table or some other related table. In a typical update of a database, you populate the user interface with a record from the database. In this situation, pickValue( ) or pickLabel( ) can be used to choose the correct value for the current record. You might use it like this, with a RecordSet object named myResults_rs:
myCombobox.pickValue(myResults_rs.getItemAt(0)["categoryid"]);
Frequently, you may need to set a default value for a ComboBox or other UI component. The setDefault( ) method, shown in Example 11-5, handles these situations.
// Set up a "default" property, which will be the value picked if // the setDefault( ) method is called. FComboBoxClass.prototype._default = null; // setDefaultValue( ) sets up the default value when setDefault( ) is called FComboBoxClass.prototype.setDefaultValue = function (value) { this._default = value; }; // Getter method for the default value FComboBoxClass.prototype.getDefaultValue = function ( ) { return this._default; }; // Set up the combo boxes to keep a default value FComboBoxClass.prototype.setDefault = function ( ) { this.pickValue(this.getDefaultValue( )); };
The setDefault( ) method comes in handy for situations where you are inserting data into a database. The ComboBox can display the default item. If the user doesn't pick an item for the ComboBox, the default value to enter in the database can be pulled from the ComboBox. For example, when requesting the user's shipping address, you might specify an appropriate default country and shipping method, like this:
myCombobox.setDefaultValue(1); // Initialize the default value
Now, whenever you want to display the default item in the ComboBox, simply call:
myCombobox.setDefault( ); // Displays the default item in the ComboBox
This technique is more flexible than simply choosing the value when you need to, because you can set the default value in one place in your movie and have the ability to set the ComboBox back to the default item at any time. If the default value changes at some point, your ComboBox code throughout your movie will still work. For example, if the user specifies his country as the United States, you might set the default shipping method to "UPS Ground." For other countries, you could set it to "Federal Express International".
The last custom method in this section is setDescriptor( ). ComboBoxes frequently have a default label that states "?All options?" or "?Choose Shipping Method?". These types of items can be added easily to all of your ComboBoxes using the setDescriptor( ) method, shown in Example 11-6.
// Add a descriptive row to the ComboBox FComboBoxClass.prototype.setDescriptor = function (text, value) { // Create a blank record var temp = {}; // Get the RecordSet object var rs = this.dataProvider.dataProvider; // Create a blank record rs.addItemAt(0, temp); // Get the recordset's field names in mTitles, and set the text and value rs.setField(0, rs.mTitles[1], text); rs.setField(0, rs.mTitles[0], value); this.pickValue(0); };
The setDescriptor( ) method works with ComboBoxes that have been set up with DataGlue. In those cases, if you try to set the label directly, you'll find that it can't be done easily. You can create a new record in the data provider, however, which will propagate down to the ComboBox:
shipping_cb.setDescriptor("--Choose Shipping method-- ", 0); country_cb.setDescriptor("--Country--", 0)
The ComboBox enhancements can be saved to the Flash MX\Configuration\Include\com\oreilly\frdg folder as DataFriendlyCombo.as. If you want to include the functionality in your Flash Remoting application, add the following #include directive to your code in the first frame:
#include "com/oreilly/frdg/DataFriendlyCombo.as"
|
http://etutorials.org/Macromedia/Fash+remoting.+the+definitive+guide/Part+III+Advanced+Flash+Remoting/Chapter+11.+Extending+Objects+and+UI+Controls/11.3+Enhancing+a+Standard+Control/
|
CC-MAIN-2020-10
|
refinedweb
| 1,879
| 57.98
|
I have the following POCO class in my app -
public class Course { public String Title { get; set; } public String Description { get; set; } }
But the
Course collection in mongodb has some other fields also including those. I am trying to get data as follows-
var server = MongoServer.Create(connectionString); var db = _server.GetDatabase("dbName"); db.GetCollection("users"); var cursor = Photos.FindAs<DocType>(Query.EQ("age", 33)); cursor.SetFields(Fields.Include("a", "b")); var items = cursor.ToList();
I have got that code from this post in stackoverflow.
But it throws an exception-
"Element '_id' does not match any field or property of class"
I don't want '_id' field in my POCO. Any help?
_id is included in Fields by default.
You can exclude it by using something like:
cursor.SetFields(Fields.Exclude("_id"))
|
https://databasefaq.com/index.php/answer/251635/c-mongodb-get-only-specified-fields-from-mongodb-c
|
CC-MAIN-2021-04
|
refinedweb
| 132
| 60.21
|
23 November 2007 10:30 [Source: ICIS news]
LONDON (ICIS news)--BASF has dropped its November polystyrene (PS) price €20/tonne ($30/tonne) to €1,375/tonne, the company said on its website on Friday.
The gross market price (GMP) for general purpose PS (GPPS) was moved down to €1,375/tonne FD (free delivered) NWE (northwest ?xml:namespace>
The GMP is heavily discounted and seen as a general reference price in the market as opposed to a working price. It is closely watched by the market, however, as the extent of the monthly movement is seen as relevant.
Most sources reported net GPPS prices to be around the €1,200/tonne FD NWE mark, according to global chemical market intelligence service ICIS pricing.
PS prices fell in line with upstream styrene monomer in November, but producers reported a fine supply-demand balance and several were already talking of increases for December, regardless of any possible moves in the styrene monomer.
PS producers
|
http://www.icis.com/Articles/2007/11/23/9081110/basf-reduces-europe-november-ps-by-20tonne.html
|
CC-MAIN-2014-49
|
refinedweb
| 163
| 54.36
|
Heads up! To view this whole video, sign in with your Courses account or enroll in your free 7-day trial. Sign In Enroll
Viewsets and Routers10:50 with Chris Jones and Kenneth Love
How can viewsets and routers help you with Django REST Framework?
If you looked at the documentation for ad hoc viewset methods, you might be confused by the use of
@detail_route. REST framework offers two decorators for ad hoc methods,
@list_route and
@detail_route.
@list_route might seem to make more sense because you are, in fact, returning a list. The main difference between the two decorators, however, is that
@detail_route is designed to receive a primary key argument. You need the primary key of the current course to be able to filter the reviews, so you'd use
@detail_route instead of
@list_route.
Speaking of documentation:
I've already shown you how to override generic view methods so 0:00 that you can be more explicit about the data that you want. 0:03 While doing that, I also made sure that when a user creates a review 0:05 it gets linked to the correct course even if they try to be sneaky. 0:08 Now, though, I'm going to pretend it's a year later and 0:12 I've been asked to build version two of the API. 0:14 This time, though, I want to use some cool new features that I've learned about 0:17 called view sets and routers that should simplify my work even more 0:19 the current yourself for courses is slash API slash B one slash courses. 0:24 To see the reviews for a course you dip in the courses PK and 0:28 slash reviews to that URL. 0:32 I want to keep the structure but everything will be easier if I move 0:34 the review URLs to slash API slash B2 slash reviews. 0:37 The detail update and destroy views for reviews will add the review primary key to 0:41 the end of the URL like so, /api/v2/reviews/1/. 0:46 So you caught the v2 in the URL right. 0:51 These changes will likely break the existing API for users. 0:54 I really like my current users and don't really want to upset them the best plan 0:57 then is to create a new version of the API current users can still use V one and 1:02 they can upgrade to V two when they're ready. 1:06 The great thing is I can run both versions of the API in the same codebase. 1:08 So, what about those two things I mentioned view sets and routers? 1:13 Routers arrest frameworks way of automating URL creation for API views. 1:17 It's not a big burden, but wouldn't it be nice to not have to write out each URL. 1:21 Routers are designed to work seamlessly with view sets. 1:25 View sets allow you to combine all of the logic for 1:29 a set of related views into a single class called a view set. 1:31 So instead of creating a list create API view and retrieve update destroy view for 1:35 every resource, you can do this all in one class and 1:39 at the cherry on top let the router generate the URLs for you. 1:42 Now, let me jump into work spaces and show you these great tools. 1:46 Okay, so I'm back here in courses views.pi and 1:50 as usual I have something I need to import. 1:54 So from rest_framework import viewsets, okay. 1:58 And I'm going to do all of my work down here 2:05 because I want to leave these existing views for 2:09 the V one version of the API right I don't want to change any of these. 2:12 I gotta keep both of these API is running, so 2:16 let me do my work down here, so I'm going to create a course View set. 2:20 And it's gonna come from viewSets.model, not models, ModelViewSet and 2:26 much like the generic view, it gets a query set argument. 2:33 So models.course.objects.all because I 2:39 need all of them objects and also like the previous views, 2:44 it gets a Serializer class which is going to be 2:50 models are not models serializer.courseSerializer. 2:55 All right, and while I'm at it I might as well make the next on so 3:02 reviews, view, set, try saying that a couple 3:08 of times fast view sets dot model view set and 3:13 again querysetmodels.review.objects.all and 3:18 serializerclass=serializers.reviewseriali- zer. 3:23 All right, believe it or not again that's it. 3:29 So now I need to go make a router to generate the URLs. 3:34 And I'm not going to do that in courses URLs. 3:37 I'm actually going to do that in the global, site wide projects URLs. 3:41 And the reason I'll do that is because it's just simpler to do it there. 3:47 So I need to import routers from a rest framework. 3:52 So from rest_framework. 3:56 Import routers and I also need to import 4:00 the views from courses so from courses import views. 4:05 All right, so now I need to create the router and 4:10 register the view sets that I created to it. 4:13 So up here below the imports but above urlpatterns, 4:17 I'm going to do router=routers.SimpleRouter 4:23 and then I'm going to register a couple of things with this. 4:29 It's kind of like registering with the admin so router register courses 4:31 views.courseview set and 4:38 router.register at 4:44 reviews Views.ReviewViewSet. 4:48 Okay, so what does this do? 4:54 Well, so first I've created an instance of simple router, 4:55 which is as you might have guessed, a simple router. 4:59 The next thing that I've done is I've registered my 5:04 view sets with the router and I've given them a prefix. 5:09 So basically, I want all of the views, in this view set, to start at courses. 5:14 All right, put all these things right here. 5:20 And I did the same thing with reviews. 5:22 So the last thing that I have to do is, I have to include my views. 5:25 So I'm gonna do URL carrot Apiv2, and 5:31 then I'm going to include router.urls and 5:35 the namespace is going to be apiv2 just so 5:40 that I've got that like in mind. 5:45 So, all of these are going to be at API v two, and 5:51 since I put like courses here, they'll show up in the same place. 5:55 Reviews, reviews. 5:59 Yeah cool. 6:01 All right and any future reviews as they get registered will automatically be 6:02 included as soon as I register them with the router which is pretty neat. 6:06 You could put the router into it's own file if you would have like a router stop 6:11 by inside courses and then from courses import the router. 6:15 But if you end up with more than one view size for more than one routers in your 6:18 project like maybe you have multiple apps that all contribute to your API. 6:23 That becomes a little bit more difficult to handle the registration and 6:29 combining of all this stuff. 6:32 So it's just easier, at least to me, to put it all and 6:34 here, but I'll leave it up to you can decide what ever you would like to do. 6:37 Okay, so, again, I have to go in the here and run my server. 6:42 I don't know why that's acting up. 6:46 But let's go see if these URLs work. 6:48 So I'm gonna go to let's just go to V2 courses, there we go. 6:51 V2 courses and cool, I mean this looks pretty familiar. 6:57 All right so let me try looking at a detail view so 7:03 I should really go to /2 and there's Python collections. 7:07 All right, so that's cool. 7:13 I know that number one has some reviews. 7:15 So let me try going back to that because that worked before, right? 7:19 Probably isn't going to work though. 7:23 Yeah, because we didn't do anything about that. 7:25 I didn't specify anything there. 7:27 So I got a 404, I should probably fix this. 7:29 Rest frameworks view sets only generate typical crud views from one model. 7:33 So you get create retrieve update and delete for one single model, 7:38 like the course model or the review model they do include a way to create ad 7:42 hoc methods onto a view set and I can use that for exactly this kind of use case. 7:48 So I'm going to import a decorator create a method stuff like that so. 7:54 Let's go back up here from rust framework 8:01 framework .decorators import detail_route and 8:06 from rest_framework.response import response. 8:12 Hey, there's that one again. 8:19 All right, so now I'm gonna come down here to 8:20 my CourseViewSet which probably is around line 47. 8:24 And this is going to the one where I'm going to create the new method. 8:28 So, first thing is I know that I'm going to decorate this with 8:32 the detail route decorator. 8:36 So, detail route and I want specify what the methods are, 8:37 I don't want this to be used to create new reviews. 8:42 That's just too much trouble. 8:46 So I'm just going to use this to get the reviews. 8:48 And we're gonna call this reviews, which I'll make it slash reviews and 8:53 self requests and the PK by default equals nine. 8:59 So course is going to be self-taught yet object. 9:05 And it will use that PK that comes in to get the object from the quarry set. 9:10 So, we've made the list view kind of act like, or 9:15 not list view, we've specify this is being a new detail view, right. 9:18 That is going to get a single object and then serializer 9:22 equals serializer dot review serializer. 9:26 And I'm going to give this course.reviews.all. 9:31 And since there's more than one, the many equals True and 9:36 a return response of serializer.data. 9:40 All right, so cool, yeah. 9:44 If I wanted this to respond to a post, I could put post here instead or 9:49 in addition if I want to handle both of them, whatever. 9:53 I try to make these things as just simple and 9:56 straightforward as I can because when you get into these places where 9:58 you're kind of working on edge cases, they get a little weirder. 10:01 So let's try this. 10:04 Check it out. 10:07 There are my reviews for course number one so great. 10:08 A couple of things to note, the reviews are a list only. 10:15 Users can't create new reviews at the /courses/one/reviews URL. 10:19 If you're cool with not keeping the old method of getting reviews for 10:23 a course, it's probably better to just remove the special view method. 10:26 Okay, I have one final problem to solve. 10:30 Review viewset already have list, create, retrieve, update and destroy views but 10:32 I don't really want a list view here. 10:37 The reviews method I just created handles getting reviews filtered to 10:39 a specific course and the list of all existing reviews seems kind of silly. 10:42 So how do I pick and choose which methods I want in a view set? 10:47
|
https://teamtreehouse.com/library/django-rest-framework/make-the-rest-framework-work-for-you/viewsets-and-routers?t=57
|
CC-MAIN-2020-45
|
refinedweb
| 2,165
| 79.7
|
Symptom
- Search of Change Cycle/Phase from change doc returns 0 results
- Standard change document can return Change Cycle/Phase results
- Customization in TSOCM_PROXY_IMPL has been performed (SAP Note 2375508)
- User has no authorization issues in a STAUTHTRACE in problem document (KBA 1832308)
- Create change document using custom transaction type but cannot see any cycle documents in search results to assign
- Copied standard change document transaction type to Z customer namespace (e.g. SMTM to ZZTM, SMHF to ZMHF, etc) and did no customization
Read more...
Environment
SAP Solution Manager 7.20
Product
SAP Solution Manager 7.2
Keywords
SOLMAN, ChaRM, Change Request Management , KBA , SV-SMG-CM , Change Request Management , SV-SMG-OST , Focused Build & Focused Insights (ST-OST) , Problem
About this pageThis is a preview of a SAP Knowledge Base Article. Click more to access the full version on SAP ONE Support launchpad (Login required).
Visit SAP Support Portal's SAP Notes and KBA Search.
|
https://apps.support.sap.com/sap/support/knowledge/en/2517814
|
CC-MAIN-2020-40
|
refinedweb
| 157
| 50.26
|
I am writing a Silverlight application which will be used in multiple countries by a large organisation. The users are all used to working in English, so there's no requirement for a complex internationalization approach.
However, the one thing which causes major annoyance is for dates to appear in the American mm/dd/yyyy format in the rest of the world.
So I was rather upset when I realised that although my PC is set to the UK regional settings, my Silverlight app ignored this and insisted on formatting DateTime fields as mm/dd/yyyy when, for example, I displayed a date in a DataGrid column.
DateTime
DataGrid
My initial attempts to fix this found me loads of articles about complex internationalization techniques in Silverlight and .NET, and it started to look as though I'd have no choice but to write custom-handlers for all my date fields.
Fortunately, I then discovered some references to the System.Windows.Markup.XmlLanguage class. All I needed to do was add a few lines to my MainPage() routine as follows:
System.Windows.Markup.XmlLanguage
using System.Windows.Markup;
using System.Threading;
.....
public MainPage()
{
this.Language = XmlLanguage.GetLanguage(Thread.CurrentThread.CurrentCulture.Name);
// Now the rest of the constructor as it was before
InitializeComponent();
......
and suddenly my dates looked as I wanted them to.
Depending on the way you structure your code, you may need to add a similar line in the constructors of other UI elements, but so far I've found that just this one line does the trick.
To be honest, I don't really understand why this behaviour isn't the default. Silverlight knows perfectly well what language you want - the main thread has the information - but it doesn't propagate that up to the UI elements unless you do it.
|
http://www.codeproject.com/Tips/262702/Regional-date-formats-and-Silverlight-4?fid=1655883&df=90&mpp=10&sort=Position&spc=None&tid=4176352
|
CC-MAIN-2016-36
|
refinedweb
| 302
| 53.92
|
Manipulating images from camera
I'm trying to take a pic from the camera and display it in the UI without first saving the picture to a roll. So I'm trying to accomplish something similar to this:
raw_photo = photos.pick_image(raw_data=True)
image.image = ui.Image.from_data(raw_photo)
But the camera apparently returns PIL.image object, so I need a way to convert that to something that can be used in the ui?
Any tips here?
I've only been using Pythonista for a few hours and I'm loving it. Addicted, I am.
Pick_image does not take a photo but allows you to select photo(s) from the camera roll, thus photos already taken.
This method is marked as deprecated in the Doc
Photos.capture_image takes a photo
Please read this topic
And see also photos.Asset.get_ui_image to transform a photo into an ui.Image
@cgallery To convert a
PIL.Imageto a
ui.Image(that can be used with
ui.ImageView), you can use something like this:
import io data_buffer = io.BytesIO() img.save(data_buffer, 'PNG') ui_img = ui.Image.from_data(data_buffer.getvalue())
@omz in your doc you write:
"Assets only contain metadata, so to get an actual image, you’ll have to use the Asset.get_image() function, as shown in the example above. The image is returned as a PIL.Image object. If you want a ui.Image instead, use Asset.get_ui_image()"
@cvp True, but that only applies to assets in the photo library, not to camera images (which aren't necessarily even saved in the photo library).
|
https://forum.omz-software.com/topic/3704/manipulating-images-from-camera
|
CC-MAIN-2020-50
|
refinedweb
| 261
| 61.02
|
*
A friendly place for programming greenhorns!
Big Moose Saloon
Search
|
Java FAQ
|
Recent Topics
|
Flagged Topics
|
Hot Topics
|
Zero Replies
Register / Login
JavaRanch
»
Java Forums
»
Java
»
Java in General
Author
Sudoku Solving Java Program
Elliot Penson
Greenhorn
Joined: Dec 29, 2012
Posts: 1
posted
Dec 29, 2012 19:06:30
0
I decided to create a small application to solve a user inputted sudoku puzzle. This class guesses numbers for cells (starting with 1) and then backtracks when a possible number cannot be found. Unfortunately, the program does not work. It seems to fail when it cannot find a correct number. Any help with this program would be greatly appreciated. Also, any tips in general would be really useful. Thanks!
import java.util.Scanner; public class Solver { private static int[][] board = new int[9][9]; public static void main(String[] args) { Scanner keyboard = new Scanner(System.in); System.out.println("Welcome to the sudoku solver!"); System.out.println("Type inital values separated by spaces, \nwith zeros for blank spots."); for(int y = 0; y < 9; y++) { System.out.print("Enter row values: "); for(int x = 0; x < 9; x++) { board[x][y] = keyboard.nextInt(); } keyboard.nextLine(); } System.out.println("Initial state: "); printBoard(); solve(0, 0); System.out.println("Solved state: "); printBoard(); } public static void solve(int x, int y) { // find next open cell int[] nextOpenCell = findOpenCell(x, y); if(nextOpenCell == null) return; int xPos = nextOpenCell[0]; int yPos = nextOpenCell[1]; System.out.println("xPos: " + xPos + " yPos: " + yPos); // check... // attempt to fill for(int currentNumber = fillCell(1, xPos, yPos); currentNumber != -1; currentNumber = fillCell(currentNumber+1, xPos, yPos)) { if(xPos >= 8) solve(0, yPos+1); else solve(xPos+1, yPos); if(finished()) return; } } public static int fillCell(int val, int x, int y) { if(val > 9) return -1; if(checkRow(val, x) && checkColumn(val, y) && checkBox(val, x, y)) { board[x][y] = val; return val; } return fillCell(val+1, x, y); } public static boolean checkRow(int val, int x) { for(int i = 0; i < 9; i++) { if(board[x][i] == val) return false; } return true; } public static boolean checkColumn(int val, int y) { for(int i = 0; i < 9; i++) { if(board[i][y] == val) return false; } return true; } public static boolean checkBox(int val, int x, int y) { // find 3x3's upper right x/ys int i = (x / 3) * 3; int j = (y / 3) * 3; // check each cell in the 3x3 for(int row = i; row < i+3; row++) { for(int column = j; column < j+3; column++) { if(board[row][column] == val) return false; } } return true; } public static void printBoard() { for(int y = 0; y < 9; y++) { if(y%3 == 0) System.out.println(""); for(int x = 0; x < 9; x++) { if(x%3 == 0) System.out.print(" "); if(board[x][y] == 0) System.out.print("_ "); else System.out.print(board[x][y] + " "); } System.out.println(""); } } public static int[] findOpenCell(int x, int y) { for(int column = y; column < 9; column++) { for(int row = x; row < 9; row++) { if(board[row][column] == 0) { return new int[] {row, column}; } } } return null; } public static boolean finished() { for(int row = 0; row < 9; row++) { for(int column = 0; column < 9; column++) { if(board[row][column] == 0) return false; } } return true; } }
Ulf Dittmer
Marshal
Joined: Mar 22, 2005
Posts: 41477
51
posted
Dec 29, 2012 23:30:49
0
There was a lengthy discussion here:
Ping & DNS
- my free Android networking tools app
Kemal Sokolovic
Bartender
Joined: Jun 19, 2010
Posts: 825
5
I like...
posted
Dec 30, 2012 05:06:24
1
I would recommend you read
Programming Sudoku
by Wei-Meng Lee, if you really want to implement and understand a good solution and perhaps get an idea on how to improve it. The code is written in .NET though, but it will surely provide a good foundation on algorithms you can use to solve it.
Otherwise, if you want to use an existing solution, there are plenty of those around.
The quieter you are, the more you are able to hear.
fred rosenberger
lowercase baba
Bartender
Joined: Oct 02, 2003
Posts: 11229
16
I like...
posted
Dec 30, 2012 07:38:26
1
"It seems to fail" doesn't really tell us anything. Does it crash/core dump? Does it hang? Does it give bad results?
How do you know it seems to fail? It's not like we can run it ourselves, since we have no idea what input you used. Perhaps you are inputting something that has no solution.
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
Tony Docherty
Bartender
Joined: Aug 07, 2007
Posts: 2232
47
posted
Dec 31, 2012 08:13:53
0
Some time ago I started working on a Sudoko solver which, rather than just solving the puzzle for you, allows you to see how it solves each cell. A beta
applet
version can be found at
.
It has some fairly complex solvers built in but I got bored and so never got around to adding any more solvers or getting the whole thing beyond beta stage. If you want the code let me know and I can upload it to my website.
I agree. Here's the link:
subject: Sudoku Solving Java Program
Similar Threads
array out of bounds
Sudoku Solver
Sudoku solver help (not brute force)
SoDuko puzzle
Exception -- java.lang.StackOverflowError
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton
|
http://www.coderanch.com/t/601344/java/java/Sudoku-Solving-Java-Program
|
CC-MAIN-2014-35
|
refinedweb
| 916
| 60.04
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Error when uninstalling a custom module that inherits from 'crm.lead' --- KeyError: 'crm.lead'
We're adding additional phone fields to Leads (crm.leads) and Partners (res.partner). When the module is installed, fields are added and that seems to work properly. The problem shows up when we try to uninstall the module.
We get the following error message:
2015-01-30 20:55:43,899 1873 ERROR DB openerp.http: Exception during JSON request handling.53, in call_button
action = self._call_kw(model, method, args, {})
File "/usr/lib/python2.7/dist-packages/openerp/addons/web/controllers/main.py", line 941,/wizard/base_module_upgrade.py", line 105, in upgrade_module
openerp.modules.registry.RegistryManager.new(cr.dbname, update_module=True)
File "/usr/lib/python2.7/dist-packages/openerp/modules/registry.py", line 366, in new
openerp.modules.load_modules(registry._db, force_demo, status, update_module)
File "/usr/lib/python2.7/dist-packages/openerp/modules/loading.py", line 351, in load_modules
force, status, report, 162, in load
model = cls._build_model(self, cr)
File "/usr/lib/python2.7/dist-packages/openerp/models.py", line 591, in _build_model
original_module = pool[name]._original_module if name in parents else cls._module
File "/usr/lib/python2.7/dist-packages/openerp/modules/registry.py", line 101, in __getitem__
return self.models[model_name]
KeyError: 'crm.lead'
If we remove the 'crm.lead' portion, and leave only res.partner, we have no problem uninstalling the module.
Note: The views seem to work fine. The issue seems to be related to the inheritance from 'crm.lead'.
Any pointers as to where the problem could be?
Relevant code follows...
Extra_Phones.py:
from openerp.osv import osv, orm, fields
from openerp import models
from openerp.tools.translate import _
class crm_lead(orm.Model):
_name = 'crm.lead'
_inherit = 'crm.lead'
_columns = {
'phone2': fields.char('Phone2', help=""),
'phone3': fields.char('Phone3', help=""),
'phone4': fields.char('Phone4', help=""),
'phone5': fields.char('Phone5', help=""),
}
crm_lead()
class res_partner(orm.Model):
_name = 'res.partner'
_inherit = 'res.partner'
_columns = {
'phone2': fields.char('Phone2', help=""),
}
res_partner()
Extra_Phones_Views.xml:
<?xml version="1.0" encoding="UTF-8"?>
<openerp>
<data>
<record model="ir.ui.view" id="CRM_extra_phones_view">
<field name="name">CRM_Extra_Phones</field>
<field name="model">crm.lead</field>
<field name="inherit_id" ref="crm.crm_case_form_view_leads"/>
<field name="arch" type="xml">
<xpath expr="//field[@name='phone']" position="replace">
<label for="phone" string="Phones"/>
<div>
<field name="phone"/>
<field name="phone2"/>
<field name="phone3"/>
<field name="phone4"/>
<field name="phone5"/>
</div>
</xpath>
</field>
</record>
<record model="ir.ui.view" id="partner_extra_phones_view">
<field name="name">Partner_Extra_Phones</field>
<field name="model">res.partner</field>
<field name="inherit_id" ref="base.view_partner_form"/>
<field name="arch" type="xml">
<xpath expr="//field[@name='phone']" position="replace">
<label for="phone" string="Phones"/>
<div>
<field name="phone"/>
<field name="phone2"/>
</div>
</xpath>
</field>
</record>
</data>
</openerp>
Edit:
We still have the same problem after updating to the v8 format.
Updated to v8 Extra_Phones.py:
from openerp import models, fields
class crm_lead(models.Model):
_inherit = 'crm.lead'
phone2 = fields.Char(string='Phone2', help='')
phone3 = fields.Char(string='Phone3', help='')
phone4 = fields.Char(string='Phone4', help='')
phone5 = fields.Char(string='Phone5', help='')
class res_partner(models.Model):
_inherit = 'res.partner'
phone2 = fields.Char(string='Phone2', help='')
Fixed! And we hope this helps someone who might be having the same issue (and anyone writing their custom modules).
It seems that 'depends', in __openerp__.py, is not only important to install a module (and its dependency), but also to uninstall it. Even if, like us, you don't get errors when installing your module.
We were able to fix this issue simply by changing 'depends' in __openerp__.py from:
'depends': ['base'],
to:
'depends': ['base','crm'],
How? We copied some Python code, from one of the standard odoo modules, that was installing and uninstalling properly. But that code was not working in our module. So, that pointed to a problem "outside" the Python file, and that's when we realized that we didn't have 'crm' on the 'depends'.
I had that issue when naming my addons the same too.
try this in the V8 API:
from openerp import models, fields, api
class my_crm_lead(models.Model):
_inherit='crm.lead'
(any other code)
oh by the way, I had to delete my DB and start a new one to fix it. :(
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
Thank you, John, for your answer. We still have the same problem. Changing to the v8 format doesn't make a difference. Unless we are missing something somewhere. Module installs and works fine, but we're unable to cleanly uninstall when we inherit from 'cmr.lead'. And yes, we found out that it is easier/faster to clone the DB, and drop it, than to try and fix the tables when a module breaks Odoo.
Try changing this: class crm_lead(models.Model): to this: class crm_lead_extra_phones(models.Model): and this: class res_partner(models.Model): to this: class res_partner_extra_phones(models.Model): Naming it the same as crm.lead and res.partner overwrites these in the DB, the only way to uninstall would be to uninstall crm.lead and res.partner... This is my opinion but it is very similar to the problem that I had, that's why I showed "my_crm_lead" calls name below.
**class name, sorry.
Thanks John. I'm pretty sure that we tried that before. But, just to be completely sure, we did so again. Not good. Same error message when uninstalling. Based on the little I have read about Odoo design, there is no need to change the class name for inheritance. If we remove the 'crm.lead' portion from our module (and keep 'res.partner' only), the module will install and uninstall properly. The problem seems to be inheriting from 'crm.lead'.
|
https://www.odoo.com/forum/help-1/question/error-when-uninstalling-a-custom-module-that-inherits-from-crm-lead-keyerror-crm-lead-74761
|
CC-MAIN-2017-22
|
refinedweb
| 995
| 55.2
|
Synopsis edit
-
- bindtags window ?tagList?
Documentation edit
- man page
- Introduction to Bindtags
- by Bryan Oakley ,2006-03
See Also edit
Examples edit
- Playing with bits
- Creates a widget in a frame and adds, at index 1 of each element in the frame, a tag named after the namespace associated with the widget instance. This lays a good framework for delegating evens to widgets.
- A tiny input manager
Description editUsed to define the binding hierarchy for Widgets.RS gave a great description of bindtags: he called them "bundles of bindings" or something to that effect. Widgets have a list of such "bundles" associated with them. Every time an event happens over that widget, each "bundle" is checked in turn, and if there is a binding matching the event, it is fired. If the binding does a break no more "bundles" are considered. Otherwise, each additional bundle goes through the same processing in turn.By default, each widget has a set of bindtags that includes the specific widget, the widget class, the toplevel window for that widget, and the special word "all". So, for example, to attach a binding to all widgets you can associate the binding with the tag "all" rather than a specific widget.
Example: Uppercase Entry editHere is a little example that diverts lowercase letter keys to their uppercase variants (other characters come through unharmed) - for the "; break" bit I had to quote the binding body, instead of listifying it as one normally should:
foreach i {a b c d e f g h i j k l m n o p q r s t u v w x y z} { bind Upcase $i "event generate %W [string toupper $i]; break" } ;# RS # Usage example, associate the bindtags to a widget: pack [entry .e] bindtags .e [concat Upcase [bindtags .e]]
Example, KHIM edit"... KHIM is a good example. Any widget that wants KHIM's services can add 'khim' to its list of bindtags, and get a whole lot of <Key> bindings ... What you do to get KHIM to apply to a text widget is to say,
bindtags $text [linsert [bindtags $text] 1 khim]... which will change the bindtags from {.text Text . all} to {.text khim Text . all}"
Misc editvr: WRT bindtags, can you explain how to preserve the original code bound to that event using bindtags? Also, why should I be binding to the buttonrelease event?RS: Simply: if you have custom bindings, bind them to a bindtag, not a widget name; place that bindtag (with the bindtags command) in the binding sequence of the widget(s) in question.
eval bindtags $myWidget myBindings [bindtags $mywidget]puts everything that you bind to myBindings before all other bindings of myWidget (but they are still executed, if yours doesn't [break])
|
http://wiki.tcl.tk/1418
|
CC-MAIN-2018-05
|
refinedweb
| 459
| 60.04
|
Top: Multithreading: mutex
#include <pasync.h> class mutex { mutex(); void enter(); void leave(); void lock(); // alias for enter() void unlock(); // alias for leave() } class scopelock { scopelock(mutex&); ~scopelock(); }
Mutex (mutual exclusion) is another synchronization object which helps to protect shared data structures from being concurrently accessed and modified.
Accessing and changing simple variables like int concurrently can be considered safe provided that the variable is aligned on a boundary "native" to the given CPU (32 bits on most systems). More often, however, applications use more complex shared data structures which can not be modified and accessed at the same time. Otherwise, the logical integrity of the structure might be corrupt from the "reader's" point of view when some other process sequentially modifies the fields of a shared structure.
To logically lock the part of code which modifies such complex structures the thread creates a mutex object and embraces the critical code with calls to enter() and leave(). Reading threads should also mark their transactions with enter() and leave() for the same mutex object. When either a reader or a writer enters the critical section, any attempt to enter the same section concurrently causes the thread to "hang" until the first thread leaves the critical section.
If more than two threads are trying to lock the same critical section, mutex builds a queue for them and allows threads to enter the section only one by one. The order of entering the section is platform dependent.
To avoid infinite locks on a mutex object, applications usually put the critical section into try {} block and call leave() from within catch {} in case an exception is raised during the transaction. The scopelock class provides a shortcut for this construct: it automatically locks the specified mutex object in its constructor and unlocks it in the destructor, so that even if an exception is raised inside the scope of the scopelock object leave() is guaranteed to be called.
More often applications use a smarter mutual exclusion object called read/write lock -- rwlock.
PTypes' mutex object encapsulates either Windows CRITICAL_SECTION structure or POSIX mutex object and implements the minimal set of features common to both platforms. Note: mutex may not be reentrant on POSIX systems, i.e. a recursive lock from one thread may cause deadlock.
mutex::mutex() creates a mutex object.
void mutex::enter() marks the start of an indivisible transaction.
void mutex::leave() marks the end of an indivisible transaction.
void mutex::lock() is an alias for enter().
void mutex::unlock() is an alias for leave().
scopelock::scopelock(mutex& m) creates a scopelock object and calls enter() for the mutex object m.
scopelock::~scopelock() calls leave() for the mutex object specified during construction and destroys the scopelock object.
See also: thread, rwlock, trigger, semaphore, Examples
|
http://www.melikyan.com/ptypes/doc/async.mutex.html
|
crawl-001
|
refinedweb
| 461
| 52.49
|
Back to index
00001 /* zlib.h -- interface of the 'zlib' general purpose compression library 00002 version 1.2.3, July 18th, 2005 00003 00004 Copyright (C) 1995-2005.3" 00041 #define ZLIB_VERNUM 0x1230 00042 00043 /* 00044 The 'zlib' compression library provides in-memory compression and 00045 decompression functions, including integrity checks of the uncompressed 00046 data. This version of the library supports only one compression method 00047 (deflation) but other algorithms will be added later and will have the same 00048 stream interface. 00049 00050 Compression can be done in a single step if the buffers are large 00051 enough (for example if an input file is mmap'ed), or can be done by 00052 repeated calls of the compression function. In the latter case, the 00053 application must provide more input and/or consume the output 00054 (providing more output space) before each call. 00055 00056 The compressed data format used by default by the in-memory functions is 00057 the zlib format, which is a zlib wrapper documented in RFC 1950, wrapped 00058 around a deflate stream, which is itself documented in RFC 1951. 00059 00060 The library also supports reading and writing files in gzip (.gz) format 00061 with an interface similar to that of stdio using the functions that start 00062 with "gz". The gzip format is different from the zlib format. gzip is a 00063 gzip wrapper, documented in RFC 1952, wrapped around a deflate stream. 00064 00065 This library can optionally read and write gzip streams in memory as well. 00066 00067 The zlib format was designed to be compact and fast for use in memory 00068 and on communications channels. The gzip format was designed for single- 00069 file compression on file systems, has a larger header than zlib to maintain 00070 directory information, and uses a different, slower check method than zlib. 00071 00072 The library does not install any signal handler. The decoder checks 00073 the consistency of the compressed data, so the library should never 00074 crash even in case of corrupted input. 00075 */ 00076 00077 typedef voidpf (*alloc_func) OF((voidpf opaque, uInt items, uInt size)); 00078 typedef void (*free_func) OF((voidpf opaque, voidpf address)); 00079 00080 struct internal_state; 00081 00082 typedef struct z_stream_s { 00083 Bytef *next_in; /* next input byte */ 00084 uInt avail_in; /* number of bytes available at next_in */ 00085 uLong total_in; /* total nb of input bytes read so far */ 00086 00087 Bytef *next_out; /* next output byte should be put there */ 00088 uInt avail_out; /* remaining free space at next_out */ 00089 uLong total_out; /* total nb of bytes output so far */ 00090 00091 char *msg; /* last error message, NULL if no error */ 00092 struct internal_state FAR *state; /* not visible by applications */ 00093 00094 alloc_func zalloc; /* used to allocate the internal state */ 00095 free_func zfree; /* used to free the internal state */ 00096 voidpf opaque; /* private data object passed to zalloc and zfree */ 00097 00098 int data_type; /* best guess about the data type: binary or text */ 00099 uLong adler; /* adler32 value of the uncompressed data */ 00100 uLong reserved; /* reserved for future use */ 00101 } z_stream; 00102 00103 typedef z_stream FAR *z_streamp; 00104 00105 /* 00106 gzip header information passed to and from zlib routines. See RFC 1952 00107 for more details on the meanings of these fields. 00108 */ 00109 typedef struct gz_header_s { 00110 int text; /* true if compressed data believed to be text */ 00111 uLong time; /* modification time */ 00112 int xflags; /* extra flags (not used when writing a gzip file) */ 00113 int os; /* operating system */ 00114 Bytef *extra; /* pointer to extra field or Z_NULL if none */ 00115 uInt extra_len; /* extra field length (valid if extra != Z_NULL) */ 00116 uInt extra_max; /* space at extra (only when reading header) */ 00117 Bytef *name; /* pointer to zero-terminated file name or Z_NULL */ 00118 uInt name_max; /* space at name (only when reading header) */ 00119 Bytef *comment; /* pointer to zero-terminated comment or Z_NULL */ 00120 uInt comm_max; /* space at comment (only when reading header) */ 00121 int hcrc; /* true if there was or will be a header crc */ 00122 int done; /* true when done reading gzip header (not used 00123 when writing a gzip file) */ 00124 } gz_header; 00125 00126 typedef gz_header FAR *gz_headerp; 00127 00128 /* 00129 The application must update next_in and avail_in when avail_in has 00130 dropped to zero. It must update next_out and avail_out when avail_out 00131 has dropped to zero. The application must initialize zalloc, zfree and 00132 opaque before calling the init function. All other fields are set by the 00133 compression library and must not be updated by the application. 00134 00135 The opaque value provided by the application will be passed as the first 00136 parameter for calls of zalloc and zfree. This can be useful for custom 00137 memory management. The compression library attaches no meaning to the 00138 opaque value. 00139 00140 zalloc must return Z_NULL if there is not enough memory for the object. 00141 If zlib is used in a multi-threaded application, zalloc and zfree must be 00142 thread safe. 00143 00144 On 16-bit systems, the functions zalloc and zfree must be able to allocate 00145 exactly 65536 bytes, but will not be required to allocate more than this 00146 if the symbol MAXSEG_64K is defined (see zconf.h). WARNING: On MSDOS, 00147 pointers returned by zalloc for objects of exactly 65536 bytes *must* 00148 have their offset normalized to zero. The default allocation function 00149 provided by this library ensures this (see zutil.c). To reduce memory 00150 requirements and avoid any allocation of 64K objects, at the expense of 00151 compression ratio, compile the library with -DMAX_WBITS=14 (see zconf.h). 00152 00153 The fields total_in and total_out can be used for statistics or 00154 progress reports. After compression, total_in holds the total size of 00155 the uncompressed data and may be saved for use in the decompressor 00156 (particularly if the decompressor wants to decompress everything in 00157 a single step). 00158 */ 00159 00160 /* constants */ 00161 00162 #define Z_NO_FLUSH 0 00163 #define Z_PARTIAL_FLUSH 1 /* will be removed, use Z_SYNC_FLUSH instead */ 00164 #define Z_SYNC_FLUSH 2 00165 #define Z_FULL_FLUSH 3 00166 #define Z_FINISH 4 00167 #define Z_BLOCK 5 00168 /* Allowed flush values; see deflate() and inflate() below for details */ 00169 00170 #define Z_OK 0 00171 #define Z_STREAM_END 1 00172 #define Z_NEED_DICT 2 00173 #define Z_ERRNO (-1) 00174 #define Z_STREAM_ERROR (-2) 00175 #define Z_DATA_ERROR (-3) 00176 #define Z_MEM_ERROR (-4) 00177 #define Z_BUF_ERROR (-5) 00178 #define Z_VERSION_ERROR (-6) 00179 /* Return codes for the compression/decompression functions. Negative 00180 * values are errors, positive values are used for special but normal events. 00181 */ 00182 00183 #define Z_NO_COMPRESSION 0 00184 #define Z_BEST_SPEED 1 00185 #define Z_BEST_COMPRESSION 9 00186 #define Z_DEFAULT_COMPRESSION (-1) 00187 /* compression levels */ 00188 00189 #define Z_FILTERED 1 00190 #define Z_HUFFMAN_ONLY 2 00191 #define Z_RLE 3 00192 #define Z_FIXED 4 00193 #define Z_DEFAULT_STRATEGY 0 00194 /* compression strategy; see deflateInit2() below for details */ 00195 00196 #define Z_BINARY 0 00197 #define Z_TEXT 1 00198 #define Z_ASCII Z_TEXT /* for compatibility with 1.2.2 and earlier */ 00199 #define Z_UNKNOWN 2 00200 /* Possible values of the data_type field (though see inflate()) */ 00201 00202 #define Z_DEFLATED 8 00203 /* The deflate compression method (the only one supported in this version) */ 00204 00205 #define Z_NULL 0 /* for initializing zalloc, zfree, opaque */ 00206 00207 #define zlib_version zlibVersion() 00208 /* for compatibility with versions < 1.0.2 */ 00209 00210 /* basic functions */ 00211 00212 ZEXTERN const char * ZEXPORT zlibVersion OF((void)); 00213 /* The application can compare zlibVersion and ZLIB_VERSION for consistency. 00214 If the first character differs, the library code actually used is 00215 not compatible with the zlib.h header file used by the application. 00216 This check is automatically made by deflateInit and inflateInit. 00217 */ 00218 00219 /* 00220 ZEXTERN int ZEXPORT deflateInit OF((z_streamp strm, int level)); 00221 00222 Initializes the internal stream state for compression. The fields 00223 zalloc, zfree and opaque must be initialized before by the caller. 00224 If zalloc and zfree are set to Z_NULL, deflateInit updates them to 00225 use default allocation functions. 00226 00227 The compression level must be Z_DEFAULT_COMPRESSION, or between 0 and 9: 00228 1 gives best speed, 9 gives best compression, 0 gives no compression at 00229 all (the input data is simply copied a block at a time). 00230 Z_DEFAULT_COMPRESSION requests a default compromise between speed and 00231 compression (currently equivalent to level 6). 00232 00233 deflateInit returns Z_OK if success, Z_MEM_ERROR if there was not 00234 enough memory, Z_STREAM_ERROR if level is not a valid compression level, 00235 Z_VERSION_ERROR if the zlib library version (zlib_version) is incompatible 00236 with the version assumed by the caller (ZLIB_VERSION). 00237 msg is set to null if there is no error message. deflateInit does not 00238 perform any compression: this will be done by deflate(). 00239 */ 00240 00241 00242 ZEXTERN int ZEXPORT deflate OF((z_streamp strm, int flush)); 00243 /* 00244 deflate compresses as much data as possible, and stops when the input 00245 buffer becomes empty or the output buffer becomes full. It may introduce some 00246 output latency (reading input without producing any output) except when 00247 forced to flush. 00248 00249 The detailed semantics are as follows. deflate performs one or both of the 00250 following actions: 00251 00252 - Compress more input starting at next_in and update next_in and avail_in 00253 accordingly. If not all input can be processed (because there is not 00254 enough room in the output buffer), next_in and avail_in are updated and 00255 processing will resume at this point for the next call of deflate(). 00256 00257 - Provide more output starting at next_out and update next_out and avail_out 00258 accordingly. This action is forced if the parameter flush is non zero. 00259 Forcing flush frequently degrades the compression ratio, so this parameter 00260 should be set only when necessary (in interactive applications). 00261 Some output may be provided even if flush is not set. 00262 00263 Before the call of deflate(), the application should ensure that at least 00264 one of the actions is possible, by providing more input and/or consuming 00265 more output, and updating avail_in or avail_out accordingly; avail_out 00266 should never be zero before the call. The application can consume the 00267 compressed output when it wants, for example when the output buffer is full 00268 (avail_out == 0), or after each call of deflate(). If deflate returns Z_OK 00269 and with zero avail_out, it must be called again after making room in the 00270 output buffer because there might be more output pending. 00271 00272 Normally the parameter flush is set to Z_NO_FLUSH, which allows deflate to 00273 decide how much data to accumualte before producing output, in order to 00274 maximize compression. 00275 00276 If the parameter flush is set to Z_SYNC_FLUSH, all pending output is 00277 flushed to the output buffer and the output is aligned on a byte boundary, so 00278 that the decompressor can get all input data available so far. (In particular 00279 avail_in is zero after the call if enough output space has been provided 00280 before the call.) Flushing may degrade compression for some compression 00281 algorithms and so it should be used only when necessary. 00282 00283 If flush is set to Z_FULL_FLUSH, all output is flushed as with 00284 Z_SYNC_FLUSH, and the compression state is reset so that decompression can 00285 restart from this point if previous compressed data has been damaged or if 00286 random access is desired. Using Z_FULL_FLUSH too often can seriously degrade 00287 compression. 00288 00289 If deflate returns with avail_out == 0, this function must be called again 00290 with the same value of the flush parameter and more output space (updated 00291 avail_out), until the flush is complete (deflate returns with non-zero 00292 avail_out). In the case of a Z_FULL_FLUSH or Z_SYNC_FLUSH, make sure that 00293 avail_out is greater than six to avoid repeated flush markers due to 00294 avail_out == 0 on return. 00295 00296 If the parameter flush is set to Z_FINISH, pending input is processed, 00297 pending output is flushed and deflate returns with Z_STREAM_END if there 00298 was enough output space; if deflate returns with Z_OK, this function must be 00299 called again with Z_FINISH and more output space (updated avail_out) but no 00300 more input data, until it returns with Z_STREAM_END or an error. After 00301 deflate has returned Z_STREAM_END, the only possible operations on the 00302 stream are deflateReset or deflateEnd. 00303 00304 Z_FINISH can be used immediately after deflateInit if all the compression 00305 is to be done in a single step. In this case, avail_out must be at least 00306 the value returned by deflateBound (see below). If deflate does not return 00307 Z_STREAM_END, then it must be called again as described above. 00308 00309 deflate() sets strm->adler to the adler32 checksum of all input read 00310 so far (that is, total_in bytes). 00311 00312 deflate() may update strm->data_type if it can make a good guess about 00313 the input data type (Z_BINARY or Z_TEXT). In doubt, the data is considered 00314 binary. This field is only for information purposes and does not affect 00315 the compression algorithm in any manner. 00316 00317 deflate() returns Z_OK if some progress has been made (more input 00318 processed or more output produced), Z_STREAM_END if all input has been 00319 consumed and all output has been produced (only when flush is set to 00320 Z_FINISH), Z_STREAM_ERROR if the stream state was inconsistent (for example 00321 if next_in or next_out was NULL), Z_BUF_ERROR if no progress is possible 00322 (for example avail_in or avail_out was zero). Note that Z_BUF_ERROR is not 00323 fatal, and deflate() can be called again with more input and more output 00324 space to continue compressing. 00325 */ 00326 00327 00328 ZEXTERN int ZEXPORT deflateEnd OF((z_streamp strm)); 00329 /* 00330 All dynamically allocated data structures for this stream are freed. 00331 This function discards any unprocessed input and does not flush any 00332 pending output. 00333 00334 deflateEnd returns Z_OK if success, Z_STREAM_ERROR if the 00335 stream state was inconsistent, Z_DATA_ERROR if the stream was freed 00336 prematurely (some input or output was discarded). In the error case, 00337 msg may be set but then points to a static string (which must not be 00338 deallocated). 00339 */ 00340 00341 00342 /* 00343 ZEXTERN int ZEXPORT inflateInit OF((z_streamp strm)); 00344 00345 Initializes the internal stream state for decompression. The fields 00346 next_in, avail_in, zalloc, zfree and opaque must be initialized before by 00347 the caller. If next_in is not Z_NULL and avail_in is large enough (the exact 00348 value depends on the compression method), inflateInit determines the 00349 compression method from the zlib header and allocates all data structures 00350 accordingly; otherwise the allocation will be deferred to the first call of 00351 inflate. If zalloc and zfree are set to Z_NULL, inflateInit updates them to 00352 use default allocation functions. 00353 00354 inflateInit returns Z_OK if success, Z_MEM_ERROR if there was not enough 00355 memory, Z_VERSION_ERROR if the zlib library version is incompatible with the 00356 version assumed by the caller. msg is set to null if there is no error 00357 message. inflateInit does not perform any decompression apart from reading 00358 the zlib header if present: this will be done by inflate(). (So next_in and 00359 avail_in may be modified, but next_out and avail_out are unchanged.) 00360 */ 00361 00362 00363 ZEXTERN int ZEXPORT inflate OF((z_streamp strm, int flush)); 00364 /* 00365 inflate decompresses as much data as possible, and stops when the input 00366 buffer becomes empty or the output buffer becomes full. It may introduce 00367 some output latency (reading input without producing any output) except when 00368 forced to flush. 00369 00370 The detailed semantics are as follows. inflate performs one or both of the 00371 following actions: 00372 00373 - Decompress more input starting at next_in and update next_in and avail_in 00374 accordingly. If not all input can be processed (because there is not 00375 enough room in the output buffer), next_in is updated and processing 00376 will resume at this point for the next call of inflate(). 00377 00378 - Provide more output starting at next_out and update next_out and avail_out 00379 accordingly. inflate() provides as much output as possible, until there 00380 is no more input data or no more space in the output buffer (see below 00381 about the flush parameter). 00382 00383 Before the call of inflate(), the application should ensure that at least 00384 one of the actions is possible, by providing more input and/or consuming 00385 more output, and updating the next_* and avail_* values accordingly. 00386 The application can consume the uncompressed output when it wants, for 00387 example when the output buffer is full (avail_out == 0), or after each 00388 call of inflate(). If inflate returns Z_OK and with zero avail_out, it 00389 must be called again after making room in the output buffer because there 00390 might be more output pending. 00391 00392 The flush parameter of inflate() can be Z_NO_FLUSH, Z_SYNC_FLUSH, 00393 Z_FINISH, or Z_BLOCK. Z_SYNC_FLUSH requests that inflate() flush as much 00394 output as possible to the output buffer. Z_BLOCK requests that inflate() stop 00395 if and when it gets to the next deflate block boundary. When decoding the 00396 zlib or gzip format, this will cause inflate() to return immediately after 00397 the header and before the first block. When doing a raw inflate, inflate() 00398 will go ahead and process the first block, and will return when it gets to 00399 the end of that block, or when it runs out of data. 00400 00401 The Z_BLOCK option assists in appending to or combining deflate streams. 00402 Also to assist in this, on return inflate() will set strm->data_type to the 00403 number of unused bits in the last byte taken from strm->next_in, plus 64 00404 if inflate() is currently decoding the last block in the deflate stream, 00405 plus 128 if inflate() returned immediately after decoding an end-of-block 00406 code or decoding the complete header up to just before the first byte of the 00407 deflate stream. The end-of-block will not be indicated until all of the 00408 uncompressed data from that block has been written to strm->next_out. The 00409 number of unused bits may in general be greater than seven, except when 00410 bit 7 of data_type is set, in which case the number of unused bits will be 00411 less than eight. 00412 00413 inflate() should normally be called until it returns Z_STREAM_END or an 00414 error. However if all decompression is to be performed in a single step 00415 (a single call of inflate), the parameter flush should be set to 00416 Z_FINISH. In this case all pending input is processed and all pending 00417 output is flushed; avail_out must be large enough to hold all the 00418 uncompressed data. (The size of the uncompressed data may have been saved 00419 by the compressor for this purpose.) The next operation on this stream must 00420 be inflateEnd to deallocate the decompression state. The use of Z_FINISH 00421 is never required, but can be used to inform inflate that a faster approach 00422 may be used for the single inflate() call. 00423 00424 In this implementation, inflate() always flushes as much output as 00425 possible to the output buffer, and always uses the faster approach on the 00426 first call. So the only effect of the flush parameter in this implementation 00427 is on the return value of inflate(), as noted below, or when it returns early 00428 because Z_BLOCK is used. 00429 00430 If a preset dictionary is needed after this call (see inflateSetDictionary 00431 below), inflate sets strm->adler to the adler32 checksum of the dictionary 00432 chosen by the compressor and returns Z_NEED_DICT; otherwise it sets 00433 strm->adler to the adler32 checksum of all output produced so far (that is, 00434 total_out bytes) and returns Z_OK, Z_STREAM_END or an error code as described 00435 below. At the end of the stream, inflate() checks that its computed adler32 00436 checksum is equal to that saved by the compressor and returns Z_STREAM_END 00437 only if the checksum is correct. 00438 00439 inflate() will decompress and check either zlib-wrapped or gzip-wrapped 00440 deflate data. The header type is detected automatically. Any information 00441 contained in the gzip header is not retained, so applications that need that 00442 information should instead use raw inflate, see inflateInit2() below, or 00443 inflateBack() and perform their own processing of the gzip header and 00444 trailer. 00445 00446 inflate() returns Z_OK if some progress has been made (more input processed 00447 or more output produced), Z_STREAM_END if the end of the compressed data has 00448 been reached and all uncompressed output has been produced, Z_NEED_DICT if a 00449 preset dictionary is needed at this point, Z_DATA_ERROR if the input data was 00450 corrupted (input stream not conforming to the zlib format or incorrect check 00451 value), Z_STREAM_ERROR if the stream structure was inconsistent (for example 00452 if next_in or next_out was NULL), Z_MEM_ERROR if there was not enough memory, 00453 Z_BUF_ERROR if no progress is possible or if there was not enough room in the 00454 output buffer when Z_FINISH is used. Note that Z_BUF_ERROR is not fatal, and 00455 inflate() can be called again with more input and more output space to 00456 continue decompressing. If Z_DATA_ERROR is returned, the application may then 00457 call inflateSync() to look for a good compression block if a partial recovery 00458 of the data is desired. 00459 */ 00460 00461 00462 ZEXTERN int ZEXPORT inflateEnd OF((z_streamp strm)); 00463 /* 00464 All dynamically allocated data structures for this stream are freed. 00465 This function discards any unprocessed input and does not flush any 00466 pending output. 00467 00468 inflateEnd returns Z_OK if success, Z_STREAM_ERROR if the stream state 00469 was inconsistent. In the error case, msg may be set but then points to a 00470 static string (which must not be deallocated). 00471 */ 00472 00473 /* Advanced functions */ 00474 00475 /* 00476 The following functions are needed only in some special applications. 00477 */ 00478 00479 /* 00480 ZEXTERN int ZEXPORT deflateInit2 OF((z_streamp strm, 00481 int level, 00482 int method, 00483 int windowBits, 00484 int memLevel, 00485 int strategy)); 00486 00487 This is another version of deflateInit with more compression options. The 00488 fields next_in, zalloc, zfree and opaque must be initialized before by 00489 the caller. 00490 00491 The method parameter is the compression method. It must be Z_DEFLATED in 00492 this version of the library. 00493 00494 The windowBits parameter is the base two logarithm of the window size 00495 (the size of the history buffer). It should be in the range 8..15 for this 00496 version of the library. Larger values of this parameter result in better 00497 compression at the expense of memory usage. The default value is 15 if 00498 deflateInit is used instead. 00499 00500 windowBits can also be -8..-15 for raw deflate. In this case, -windowBits 00501 determines the window size. deflate() will then generate raw deflate data 00502 with no zlib header or trailer, and will not compute an adler32 check value. 00503 00504 windowBits can also be greater than 15 for optional gzip encoding. Add 00505 16 to windowBits to write a simple gzip header and trailer around the 00506 compressed data instead of a zlib wrapper. The gzip header will have no 00507 file name, no extra data, no comment, no modification time (set to zero), 00508 no header crc, and the operating system will be set to 255 (unknown). If a 00509 gzip stream is being written, strm->adler is a crc32 instead of an adler32. 00510 00511 The memLevel parameter specifies how much memory should be allocated 00512 for the internal compression state. memLevel=1 uses minimum memory but 00513 is slow and reduces compression ratio; memLevel=9 uses maximum memory 00514 for optimal speed. The default value is 8. See zconf.h for total memory 00515 usage as a function of windowBits and memLevel. 00516 00517 The strategy parameter is used to tune the compression algorithm. Use the 00518 value Z_DEFAULT_STRATEGY for normal data, Z_FILTERED for data produced by a 00519 filter (or predictor), Z_HUFFMAN_ONLY to force Huffman encoding only (no 00520 string match), or Z_RLE to limit match distances to one (run-length 00521 encoding). Filtered data consists mostly of small values with a somewhat 00522 random distribution. In this case, the compression algorithm is tuned to 00523 compress them better. The effect of Z_FILTERED is to force more Huffman 00524 coding and less string matching; it is somewhat intermediate between 00525 Z_DEFAULT and Z_HUFFMAN_ONLY. Z_RLE is designed to be almost as fast as 00526 Z_HUFFMAN_ONLY, but give better compression for PNG image data. The strategy 00527 parameter only affects the compression ratio but not the correctness of the 00528 compressed output even if it is not set appropriately. Z_FIXED prevents the 00529 use of dynamic Huffman codes, allowing for a simpler decoder for special 00530 applications. 00531 00532 deflateInit2 returns Z_OK if success, Z_MEM_ERROR if there was not enough 00533 memory, Z_STREAM_ERROR if a parameter is invalid (such as an invalid 00534 method). msg is set to null if there is no error message. deflateInit2 does 00535 not perform any compression: this will be done by deflate(). 00536 */ 00537 00538 ZEXTERN int ZEXPORT deflateSetDictionary OF((z_streamp strm, 00539 const Bytef *dictionary, 00540 uInt dictLength)); 00541 /* 00542 Initializes the compression dictionary from the given byte sequence 00543 without producing any compressed output. This function must be called 00544 immediately after deflateInit, deflateInit2 or deflateReset, before any 00545 call of deflate. The compressor and decompressor must use exactly the same 00546 dictionary (see inflateSetDictionary). 00547 00548 The dictionary should consist of strings (byte sequences) that are likely 00549 to be encountered later in the data to be compressed, with the most commonly 00550 used strings preferably put towards the end of the dictionary. Using a 00551 dictionary is most useful when the data to be compressed is short and can be 00552 predicted with good accuracy; the data can then be compressed better than 00553 with the default empty dictionary. 00554 00555 Depending on the size of the compression data structures selected by 00556 deflateInit or deflateInit2, a part of the dictionary may in effect be 00557 discarded, for example if the dictionary is larger than the window size in 00558 deflate or deflate2. Thus the strings most likely to be useful should be 00559 put at the end of the dictionary, not at the front. In addition, the 00560 current implementation of deflate will use at most the window size minus 00561 262 bytes of the provided dictionary. 00562 00563 Upon return of this function, strm->adler is set to the adler32 value 00564 of the dictionary; the decompressor may later use this value to determine 00565 which dictionary has been used by the compressor. (The adler32 value 00566 applies to the whole dictionary even if only a subset of the dictionary is 00567 actually used by the compressor.) If a raw deflate was requested, then the 00568 adler32 value is not computed and strm->adler is not set. 00569 00570 deflateSetDictionary returns Z_OK if success, or Z_STREAM_ERROR if a 00571 parameter is invalid (such as NULL dictionary) or the stream state is 00572 inconsistent (for example if deflate has already been called for this stream 00573 or if the compression method is bsort). deflateSetDictionary does not 00574 perform any compression: this will be done by deflate(). 00575 */ 00576 00577 ZEXTERN int ZEXPORT deflateCopy OF((z_streamp dest, 00578 z_streamp source)); 00579 /* 00580 Sets the destination stream as a complete copy of the source stream. 00581 00582 This function can be useful when several compression strategies will be 00583 tried, for example when there are several ways of pre-processing the input 00584 data with a filter. The streams that will be discarded should then be freed 00585 by calling deflateEnd. Note that deflateCopy duplicates the internal 00586 compression state which can be quite large, so this strategy is slow and 00587 can consume lots of memory. 00588 00589 deflateCopy returns Z_OK if success, Z_MEM_ERROR if there was not 00590 enough memory, Z_STREAM_ERROR if the source stream state was inconsistent 00591 (such as zalloc being NULL). msg is left unchanged in both source and 00592 destination. 00593 */ 00594 00595 ZEXTERN int ZEXPORT deflateReset OF((z_streamp strm)); 00596 /* 00597 This function is equivalent to deflateEnd followed by deflateInit, 00598 but does not free and reallocate all the internal compression state. 00599 The stream will keep the same compression level and any other attributes 00600 that may have been set by deflateInit2. 00601 00602 deflateReset returns Z_OK if success, or Z_STREAM_ERROR if the source 00603 stream state was inconsistent (such as zalloc or state being NULL). 00604 */ 00605 00606 ZEXTERN int ZEXPORT deflateParams OF((z_streamp strm, 00607 int level, 00608 int strategy)); 00609 /* 00610 Dynamically update the compression level and compression strategy. The 00611 interpretation of level and strategy is as in deflateInit2. This can be 00612 used to switch between compression and straight copy of the input data, or 00613 to switch to a different kind of input data requiring a different 00614 strategy. If the compression level is changed, the input available so far 00615 is compressed with the old level (and may be flushed); the new level will 00616 take effect only at the next call of deflate(). 00617 00618 Before the call of deflateParams, the stream state must be set as for 00619 a call of deflate(), since the currently available input may have to 00620 be compressed and flushed. In particular, strm->avail_out must be non-zero. 00621 00622 deflateParams returns Z_OK if success, Z_STREAM_ERROR if the source 00623 stream state was inconsistent or if a parameter was invalid, Z_BUF_ERROR 00624 if strm->avail_out was zero. 00625 */ 00626 00627 ZEXTERN int ZEXPORT deflateTune OF((z_streamp strm, 00628 int good_length, 00629 int max_lazy, 00630 int nice_length, 00631 int max_chain)); 00632 /* 00633 Fine tune deflate's internal compression parameters. This should only be 00634 used by someone who understands the algorithm used by zlib's deflate for 00635 searching for the best matching string, and even then only by the most 00636 fanatic optimizer trying to squeeze out the last compressed bit for their 00637 specific input data. Read the deflate.c source code for the meaning of the 00638 max_lazy, good_length, nice_length, and max_chain parameters. 00639 00640 deflateTune() can be called after deflateInit() or deflateInit2(), and 00641 returns Z_OK on success, or Z_STREAM_ERROR for an invalid deflate stream. 00642 */ 00643 00644 ZEXTERN uLong ZEXPORT deflateBound OF((z_streamp strm, 00645 uLong sourceLen)); 00646 /* 00647 deflateBound() returns an upper bound on the compressed size after 00648 deflation of sourceLen bytes. It must be called after deflateInit() 00649 or deflateInit2(). This would be used to allocate an output buffer 00650 for deflation in a single pass, and so would be called before deflate(). 00651 */ 00652 00653 ZEXTERN int ZEXPORT deflatePrime OF((z_streamp strm, 00654 int bits, 00655 int value)); 00656 /* 00657 deflatePrime() inserts bits in the deflate output stream. The intent 00658 is that this function is used to start off the deflate output with the 00659 bits leftover from a previous deflate stream when appending to it. As such, 00660 this function can only be used for raw deflate, and must be used before the 00661 first deflate() call after a deflateInit2() or deflateReset(). bits must be 00662 less than or equal to 16, and that many of the least significant bits of 00663 value will be inserted in the output. 00664 00665 deflatePrime returns Z_OK if success, or Z_STREAM_ERROR if the source 00666 stream state was inconsistent. 00667 */ 00668 00669 ZEXTERN int ZEXPORT deflateSetHeader OF((z_streamp strm, 00670 gz_headerp head)); 00671 /* 00672 deflateSetHeader() provides gzip header information for when a gzip 00673 stream is requested by deflateInit2(). deflateSetHeader() may be called 00674 after deflateInit2() or deflateReset() and before the first call of 00675 deflate(). The text, time, os, extra field, name, and comment information 00676 in the provided gz_header structure are written to the gzip header (xflag is 00677 ignored -- the extra flags are set according to the compression level). The 00678 caller must assure that, if not Z_NULL, name and comment are terminated with 00679 a zero byte, and that if extra is not Z_NULL, that extra_len bytes are 00680 available there. If hcrc is true, a gzip header crc is included. Note that 00681 the current versions of the command-line version of gzip (up through version 00682 1.3.x) do not support header crc's, and will report that it is a "multi-part 00683 gzip file" and give up. 00684 00685 If deflateSetHeader is not used, the default gzip header has text false, 00686 the time set to zero, and os set to 255, with no extra, name, or comment 00687 fields. The gzip header is returned to the default state by deflateReset(). 00688 00689 deflateSetHeader returns Z_OK if success, or Z_STREAM_ERROR if the source 00690 stream state was inconsistent. 00691 */ 00692 00693 /* 00694 ZEXTERN int ZEXPORT inflateInit2 OF((z_streamp strm, 00695 int windowBits)); 00696 00697 This is another version of inflateInit with an extra parameter. The 00698 fields next_in, avail_in, zalloc, zfree and opaque must be initialized 00699 before by the caller. 00700 00701 The windowBits parameter is the base two logarithm of the maximum window 00702 size (the size of the history buffer). It should be in the range 8..15 for 00703 this version of the library. The default value is 15 if inflateInit is used 00704 instead. windowBits must be greater than or equal to the windowBits value 00705 provided to deflateInit2() while compressing, or it must be equal to 15 if 00706 deflateInit2() was not used. If a compressed stream with a larger window 00707 size is given as input, inflate() will return with the error code 00708 Z_DATA_ERROR instead of trying to allocate a larger window. 00709 00710 windowBits can also be -8..-15 for raw inflate. In this case, -windowBits 00711 determines the window size. inflate() will then process raw deflate data, 00712 not looking for a zlib or gzip header, not generating a check value, and not 00713 looking for any check values for comparison at the end of the stream. This 00714 is for use with other formats that use the deflate compressed data format 00715 such as zip. Those formats provide their own check values. If a custom 00716 format is developed using the raw deflate format for compressed data, it is 00717 recommended that a check value such as an adler32 or a crc32 be applied to 00718 the uncompressed data as is done in the zlib, gzip, and zip formats. For 00719 most applications, the zlib format should be used as is. Note that comments 00720 above on the use in deflateInit2() applies to the magnitude of windowBits. 00721 00722 windowBits can also be greater than 15 for optional gzip decoding. Add 00723 32 to windowBits to enable zlib and gzip decoding with automatic header 00724 detection, or add 16 to decode only the gzip format (the zlib format will 00725 return a Z_DATA_ERROR). If a gzip stream is being decoded, strm->adler is 00726 a crc32 instead of an adler32. 00727 00728 inflateInit2 returns Z_OK if success, Z_MEM_ERROR if there was not enough 00729 memory, Z_STREAM_ERROR if a parameter is invalid (such as a null strm). msg 00730 is set to null if there is no error message. inflateInit2 does not perform 00731 any decompression apart from reading the zlib header if present: this will 00732 be done by inflate(). (So next_in and avail_in may be modified, but next_out 00733 and avail_out are unchanged.) 00734 */ 00735 00736 ZEXTERN int ZEXPORT inflateSetDictionary OF((z_streamp strm, 00737 const Bytef *dictionary, 00738 uInt dictLength)); 00739 /* 00740 Initializes the decompression dictionary from the given uncompressed byte 00741 sequence. This function must be called immediately after a call of inflate, 00742 if that call returned Z_NEED_DICT. The dictionary chosen by the compressor 00743 can be determined from the adler32 value returned by that call of inflate. 00744 The compressor and decompressor must use exactly the same dictionary (see 00745 deflateSetDictionary). For raw inflate, this function can be called 00746 immediately after inflateInit2() or inflateReset() and before any call of 00747 inflate() to set the dictionary. The application must insure that the 00748 dictionary that was used for compression is provided. 00749 00750 inflateSetDictionary returns Z_OK if success, Z_STREAM_ERROR if a 00751 parameter is invalid (such as NULL dictionary) or the stream state is 00752 inconsistent, Z_DATA_ERROR if the given dictionary doesn't match the 00753 expected one (incorrect adler32 value). inflateSetDictionary does not 00754 perform any decompression: this will be done by subsequent calls of 00755 inflate(). 00756 */ 00757 00758 ZEXTERN int ZEXPORT inflateSync OF((z_streamp strm)); 00759 /* 00760 Skips invalid compressed data until a full flush point (see above the 00761 description of deflate with Z_FULL_FLUSH) can be found, or until all 00762 available input is skipped. No output is provided. 00763 00764 inflateSync returns Z_OK if a full flush point has been found, Z_BUF_ERROR 00765 if no more input was provided, Z_DATA_ERROR if no flush point has been found, 00766 or Z_STREAM_ERROR if the stream structure was inconsistent. In the success 00767 case, the application may save the current current value of total_in which 00768 indicates where valid compressed data was found. In the error case, the 00769 application may repeatedly call inflateSync, providing more input each time, 00770 until success or end of the input data. 00771 */ 00772 00773 ZEXTERN int ZEXPORT inflateCopy OF((z_streamp dest, 00774 z_streamp source)); 00775 /* 00776 Sets the destination stream as a complete copy of the source stream. 00777 00778 This function can be useful when randomly accessing a large stream. The 00779 first pass through the stream can periodically record the inflate state, 00780 allowing restarting inflate at those points when randomly accessing the 00781 stream. 00782 00783 inflateCopy returns Z_OK if success, Z_MEM_ERROR if there was not 00784 enough memory, Z_STREAM_ERROR if the source stream state was inconsistent 00785 (such as zalloc being NULL). msg is left unchanged in both source and 00786 destination. 00787 */ 00788 00789 ZEXTERN int ZEXPORT inflateReset OF((z_streamp strm)); 00790 /* 00791 This function is equivalent to inflateEnd followed by inflateInit, 00792 but does not free and reallocate all the internal decompression state. 00793 The stream will keep attributes that may have been set by inflateInit2. 00794 00795 inflateReset returns Z_OK if success, or Z_STREAM_ERROR if the source 00796 stream state was inconsistent (such as zalloc or state being NULL). 00797 */ 00798 00799 ZEXTERN int ZEXPORT inflatePrime OF((z_streamp strm, 00800 int bits, 00801 int value)); 00802 /* 00803 This function inserts bits in the inflate input stream. The intent is 00804 that this function is used to start inflating at a bit position in the 00805 middle of a byte. The provided bits will be used before any bytes are used 00806 from next_in. This function should only be used with raw inflate, and 00807 should be used before the first inflate() call after inflateInit2() or 00808 inflateReset(). bits must be less than or equal to 16, and that many of the 00809 least significant bits of value will be inserted in the input. 00810 00811 inflatePrime returns Z_OK if success, or Z_STREAM_ERROR if the source 00812 stream state was inconsistent. 00813 */ 00814 00815 ZEXTERN int ZEXPORT inflateGetHeader OF((z_streamp strm, 00816 gz_headerp head)); 00817 /* 00818 inflateGetHeader() requests that gzip header information be stored in the 00819 provided gz_header structure. inflateGetHeader() may be called after 00820 inflateInit2() or inflateReset(), and before the first call of inflate(). 00821 As inflate() processes the gzip stream, head->done is zero until the header 00822 is completed, at which time head->done is set to one. If a zlib stream is 00823 being decoded, then head->done is set to -1 to indicate that there will be 00824 no gzip header information forthcoming. Note that Z_BLOCK can be used to 00825 force inflate() to return immediately after header processing is complete 00826 and before any actual data is decompressed. 00827 00828 The text, time, xflags, and os fields are filled in with the gzip header 00829 contents. hcrc is set to true if there is a header CRC. (The header CRC 00830 was valid if done is set to one.) If extra is not Z_NULL, then extra_max 00831 contains the maximum number of bytes to write to extra. Once done is true, 00832 extra_len contains the actual extra field length, and extra contains the 00833 extra field, or that field truncated if extra_max is less than extra_len. 00834 If name is not Z_NULL, then up to name_max characters are written there, 00835 terminated with a zero unless the length is greater than name_max. If 00836 comment is not Z_NULL, then up to comm_max characters are written there, 00837 terminated with a zero unless the length is greater than comm_max. When 00838 any of extra, name, or comment are not Z_NULL and the respective field is 00839 not present in the header, then that field is set to Z_NULL to signal its 00840 absence. This allows the use of deflateSetHeader() with the returned 00841 structure to duplicate the header. However if those fields are set to 00842 allocated memory, then the application will need to save those pointers 00843 elsewhere so that they can be eventually freed. 00844 00845 If inflateGetHeader is not used, then the header information is simply 00846 discarded. The header is always checked for validity, including the header 00847 CRC if present. inflateReset() will reset the process to discard the header 00848 information. The application would need to call inflateGetHeader() again to 00849 retrieve the header from the next gzip stream. 00850 00851 inflateGetHeader returns Z_OK if success, or Z_STREAM_ERROR if the source 00852 stream state was inconsistent. 00853 */ 00854 00855 /* 00856 ZEXTERN int ZEXPORT inflateBackInit OF((z_streamp strm, int windowBits, 00857 unsigned char FAR *window)); 00858 00859 Initialize the internal stream state for decompression using inflateBack() 00860 calls. The fields zalloc, zfree and opaque in strm must be initialized 00861 before the call. If zalloc and zfree are Z_NULL, then the default library- 00862 derived memory allocation routines are used. windowBits is the base two 00863 logarithm of the window size, in the range 8..15. window is a caller 00864 supplied buffer of that size. Except for special applications where it is 00865 assured that deflate was used with small window sizes, windowBits must be 15 00866 and a 32K byte window must be supplied to be able to decompress general 00867 deflate streams. 00868 00869 See inflateBack() for the usage of these routines. 00870 00871 inflateBackInit will return Z_OK on success, Z_STREAM_ERROR if any of 00872 the paramaters are invalid, Z_MEM_ERROR if the internal state could not 00873 be allocated, or Z_VERSION_ERROR if the version of the library does not 00874 match the version of the header file. 00875 */ 00876 00877 typedef unsigned (*in_func) OF((void FAR *, unsigned char FAR * FAR *)); 00878 typedef int (*out_func) OF((void FAR *, unsigned char FAR *, unsigned)); 00879 00880 ZEXTERN int ZEXPORT inflateBack OF((z_streamp strm, 00881 in_func in, void FAR *in_desc, 00882 out_func out, void FAR *out_desc)); 00883 /* 00884 inflateBack() does a raw inflate with a single call using a call-back 00885 interface for input and output. This is more efficient than inflate() for 00886 file i/o applications in that it avoids copying between the output and the 00887 sliding window by simply making the window itself the output buffer. This 00888 function trusts the application to not change the output buffer passed by 00889 the output function, at least until inflateBack() returns. 00890 00891 inflateBackInit() must be called first to allocate the internal state 00892 and to initialize the state with the user-provided window buffer. 00893 inflateBack() may then be used multiple times to inflate a complete, raw 00894 deflate stream with each call. inflateBackEnd() is then called to free 00895 the allocated state. 00896 00897 A raw deflate stream is one with no zlib or gzip header or trailer. 00898 This routine would normally be used in a utility that reads zip or gzip 00899 files and writes out uncompressed files. The utility would decode the 00900 header and process the trailer on its own, hence this routine expects 00901 only the raw deflate stream to decompress. This is different from the 00902 normal behavior of inflate(), which expects either a zlib or gzip header and 00903 trailer around the deflate stream. 00904 00905 inflateBack() uses two subroutines supplied by the caller that are then 00906 called by inflateBack() for input and output. inflateBack() calls those 00907 routines until it reads a complete deflate stream and writes out all of the 00908 uncompressed data, or until it encounters an error. The function's 00909 parameters and return types are defined above in the in_func and out_func 00910 typedefs. inflateBack() will call in(in_desc, &buf) which should return the 00911 number of bytes of provided input, and a pointer to that input in buf. If 00912 there is no input available, in() must return zero--buf is ignored in that 00913 case--and inflateBack() will return a buffer error. inflateBack() will call 00914 out(out_desc, buf, len) to write the uncompressed data buf[0..len-1]. out() 00915 should return zero on success, or non-zero on failure. If out() returns 00916 non-zero, inflateBack() will return with an error. Neither in() nor out() 00917 are permitted to change the contents of the window provided to 00918 inflateBackInit(), which is also the buffer that out() uses to write from. 00919 The length written by out() will be at most the window size. Any non-zero 00920 amount of input may be provided by in(). 00921 00922 For convenience, inflateBack() can be provided input on the first call by 00923 setting strm->next_in and strm->avail_in. If that input is exhausted, then 00924 in() will be called. Therefore strm->next_in must be initialized before 00925 calling inflateBack(). If strm->next_in is Z_NULL, then in() will be called 00926 immediately for input. If strm->next_in is not Z_NULL, then strm->avail_in 00927 must also be initialized, and then if strm->avail_in is not zero, input will 00928 initially be taken from strm->next_in[0 .. strm->avail_in - 1]. 00929 00930 The in_desc and out_desc parameters of inflateBack() is passed as the 00931 first parameter of in() and out() respectively when they are called. These 00932 descriptors can be optionally used to pass any information that the caller- 00933 supplied in() and out() functions need to do their job. 00934 00935 On return, inflateBack() will set strm->next_in and strm->avail_in to 00936 pass back any unused input that was provided by the last in() call. The 00937 return values of inflateBack() can be Z_STREAM_END on success, Z_BUF_ERROR 00938 if in() or out() returned an error, Z_DATA_ERROR if there was a format 00939 error in the deflate stream (in which case strm->msg is set to indicate the 00940 nature of the error), or Z_STREAM_ERROR if the stream was not properly 00941 initialized. In the case of Z_BUF_ERROR, an input or output error can be 00942 distinguished using strm->next_in which will be Z_NULL only if in() returned 00943 an error. If strm->next is not Z_NULL, then the Z_BUF_ERROR was due to 00944 out() returning non-zero. (in() will always be called before out(), so 00945 strm->next_in is assured to be defined if out() returns non-zero.) Note 00946 that inflateBack() cannot return Z_OK. 00947 */ 00948 00949 ZEXTERN int ZEXPORT inflateBackEnd OF((z_streamp strm)); 00950 /* 00951 All memory allocated by inflateBackInit() is freed. 00952 00953 inflateBackEnd() returns Z_OK on success, or Z_STREAM_ERROR if the stream 00954 state was inconsistent. 00955 */ 00956 00957 ZEXTERN uLong ZEXPORT zlibCompileFlags OF((void)); 00958 /* Return flags indicating compile-time options. 00959 00960 Type sizes, two bits each, 00 = 16 bits, 01 = 32, 10 = 64, 11 = other: 00961 1.0: size of uInt 00962 3.2: size of uLong 00963 5.4: size of voidpf (pointer) 00964 7.6: size of z_off_t 00965 00966 Compiler, assembler, and debug options: 00967 8: DEBUG 00968 9: ASMV or ASMINF -- use ASM code 00969 10: ZLIB_WINAPI -- exported functions use the WINAPI calling convention 00970 11: 0 (reserved) 00971 00972 One-time table building (smaller code, but not thread-safe if true): 00973 12: BUILDFIXED -- build static block decoding tables when needed 00974 13: DYNAMIC_CRC_TABLE -- build CRC calculation tables when needed 00975 14,15: 0 (reserved) 00976 00977 Library content (indicates missing functionality): 00978 16: NO_GZCOMPRESS -- gz* functions cannot compress (to avoid linking 00979 deflate code when not needed) 00980 17: NO_GZIP -- deflate can't write gzip streams, and inflate can't detect 00981 and decode gzip streams (to avoid linking crc code) 00982 18-19: 0 (reserved) 00983 00984 Operation variations (changes in library functionality): 00985 20: PKZIP_BUG_WORKAROUND -- slightly more permissive inflate 00986 21: FASTEST -- deflate algorithm with only one, lowest compression level 00987 22,23: 0 (reserved) 00988 00989 The sprintf variant used by gzprintf (zero is best): 00990 24: 0 = vs*, 1 = s* -- 1 means limited to 20 arguments after the format 00991 25: 0 = *nprintf, 1 = *printf -- 1 means gzprintf() not secure! 00992 26: 0 = returns value, 1 = void -- 1 means inferred string length returned 00993 00994 Remainder: 00995 27-31: 0 (reserved) 00996 */ 00997 00998 00999 /* utility functions */ 01000 01001 /* 01002 The following utility functions are implemented on top of the 01003 basic stream-oriented functions. To simplify the interface, some 01004 default options are assumed (compression level and memory usage, 01005 standard memory allocation functions). The source code of these 01006 utility functions can easily be modified if you need special options. 01007 */ 01008 01009 ZEXTERN int ZEXPORT compress OF((Bytef *dest, uLongf *destLen, 01010 const Bytef *source, uLong sourceLen)); 01011 /* 01012 Compresses the source buffer into the destination buffer. sourceLen is 01013 the byte length of the source buffer. Upon entry, destLen is the total 01014 size of the destination buffer, which must be at least the value returned 01015 by compressBound(sourceLen). Upon exit, destLen is the actual size of the 01016 compressed buffer. 01017 This function can be used to compress a whole file at once if the 01018 input file is mmap'ed. 01019 compress returns Z_OK if success, Z_MEM_ERROR if there was not 01020 enough memory, Z_BUF_ERROR if there was not enough room in the output 01021 buffer. 01022 */ 01023 01024 ZEXTERN int ZEXPORT compress2 OF((Bytef *dest, uLongf *destLen, 01025 const Bytef *source, uLong sourceLen, 01026 int level)); 01027 /* 01028 Compresses the source buffer into the destination buffer. The level 01029 parameter has the same meaning as in deflateInit. sourceLen is the byte 01030 length of the source buffer. Upon entry, destLen is the total size of the 01031 destination buffer, which must be at least the value returned by 01032 compressBound(sourceLen). Upon exit, destLen is the actual size of the 01033 compressed buffer. 01034 01035 compress2 returns Z_OK if success, Z_MEM_ERROR if there was not enough 01036 memory, Z_BUF_ERROR if there was not enough room in the output buffer, 01037 Z_STREAM_ERROR if the level parameter is invalid. 01038 */ 01039 01040 ZEXTERN uLong ZEXPORT compressBound OF((uLong sourceLen)); 01041 /* 01042 compressBound() returns an upper bound on the compressed size after 01043 compress() or compress2() on sourceLen bytes. It would be used before 01044 a compress() or compress2() call to allocate the destination buffer. 01045 */ 01046 01047 ZEXTERN int ZEXPORT uncompress OF((Bytef *dest, uLongf *destLen, 01048 const Bytef *source, uLong sourceLen)); 01049 /* 01050 Decompresses the source buffer into the destination buffer. sourceLen is 01051 the byte length of the source buffer. Upon entry, destLen is the total 01052 size of the destination buffer, which must be large enough to hold the 01053 entire uncompressed data. (The size of the uncompressed data must have 01054 been saved previously by the compressor and transmitted to the decompressor 01055 by some mechanism outside the scope of this compression library.) 01056 Upon exit, destLen is the actual size of the compressed buffer. 01057 This function can be used to decompress a whole file at once if the 01058 input file is mmap'ed. 01059 01060 uncompress returns Z_OK if success, Z_MEM_ERROR if there was not 01061 enough memory, Z_BUF_ERROR if there was not enough room in the output 01062 buffer, or Z_DATA_ERROR if the input data was corrupted or incomplete. 01063 */ 01064 01065 01066 typedef voidp gzFile; 01067 01068 ZEXTERN gzFile ZEXPORT gzopen OF((const char *path, const char *mode)); 01069 /* 01070 Opens a gzip (.gz) file for reading or writing. The mode parameter 01071 is as in fopen ("rb" or "wb") but can also include a compression level 01072 ("wb9") or a strategy: 'f' for filtered data as in "wb6f", 'h' for 01073 Huffman only compression as in "wb1h", or 'R' for run-length encoding 01074 as in "wb1R". (See the description of deflateInit2 for more information 01075 about the strategy parameter.) 01076 01077 gzopen can be used to read a file which is not in gzip format; in this 01078 case gzread will directly read from the file without decompression. 01079 01080 gzopen returns NULL if the file could not be opened or if there was 01081 insufficient memory to allocate the (de)compression state; errno 01082 can be checked to distinguish the two cases (if errno is zero, the 01083 zlib error is Z_MEM_ERROR). */ 01084 01085 ZEXTERN gzFile ZEXPORT gzdopen OF((int fd, const char *mode)); 01086 /* 01087 gzdopen() associates a gzFile with the file descriptor fd. File 01088 descriptors are obtained from calls like open, dup, creat, pipe or 01089 fileno (in the file has been previously opened with fopen). 01090 The mode parameter is as in gzopen. 01091 The next call of gzclose on the returned gzFile will also close the 01092 file descriptor fd, just like fclose(fdopen(fd), mode) closes the file 01093 descriptor fd. If you want to keep fd open, use gzdopen(dup(fd), mode). 01094 gzdopen returns NULL if there was insufficient memory to allocate 01095 the (de)compression state. 01096 */ 01097 01098 ZEXTERN int ZEXPORT gzsetparams OF((gzFile file, int level, int strategy)); 01099 /* 01100 Dynamically update the compression level or strategy. See the description 01101 of deflateInit2 for the meaning of these parameters. 01102 gzsetparams returns Z_OK if success, or Z_STREAM_ERROR if the file was not 01103 opened for writing. 01104 */ 01105 01106 ZEXTERN int ZEXPORT gzread OF((gzFile file, voidp buf, unsigned len)); 01107 /* 01108 Reads the given number of uncompressed bytes from the compressed file. 01109 If the input file was not in gzip format, gzread copies the given number 01110 of bytes into the buffer. 01111 gzread returns the number of uncompressed bytes actually read (0 for 01112 end of file, -1 for error). */ 01113 01114 ZEXTERN int ZEXPORT gzwrite OF((gzFile file, 01115 voidpc buf, unsigned len)); 01116 /* 01117 Writes the given number of uncompressed bytes into the compressed file. 01118 gzwrite returns the number of uncompressed bytes actually written 01119 (0 in case of error). 01120 */ 01121 01122 ZEXTERN int ZEXPORTVA gzprintf OF((gzFile file, const char *format, ...)); 01123 /* 01124 Converts, formats, and writes the args to the compressed file under 01125 control of the format string, as in fprintf. gzprintf returns the number of 01126 uncompressed bytes actually written (0 in case of error). The number of 01127 uncompressed bytes written is limited to 4095. The caller should assure that 01128 this limit is not exceeded. If it is exceeded, then gzprintf() will return 01129 return an error (0) with nothing written. In this case, there may also be a 01130 buffer overflow with unpredictable consequences, which is possible only if 01131 zlib was compiled with the insecure functions sprintf() or vsprintf() 01132 because the secure snprintf() or vsnprintf() functions were not available. 01133 */ 01134 01135 ZEXTERN int ZEXPORT gzputs OF((gzFile file, const char *s)); 01136 /* 01137 Writes the given null-terminated string to the compressed file, excluding 01138 the terminating null character. 01139 gzputs returns the number of characters written, or -1 in case of error. 01140 */ 01141 01142 ZEXTERN char * ZEXPORT gzgets OF((gzFile file, char *buf, int len)); 01143 /* 01144 Reads bytes from the compressed file until len-1 characters are read, or 01145 a newline character is read and transferred to buf, or an end-of-file 01146 condition is encountered. The string is then terminated with a null 01147 character. 01148 gzgets returns buf, or Z_NULL in case of error. 01149 */ 01150 01151 ZEXTERN int ZEXPORT gzputc OF((gzFile file, int c)); 01152 /* 01153 Writes c, converted to an unsigned char, into the compressed file. 01154 gzputc returns the value that was written, or -1 in case of error. 01155 */ 01156 01157 ZEXTERN int ZEXPORT gzgetc OF((gzFile file)); 01158 /* 01159 Reads one byte from the compressed file. gzgetc returns this byte 01160 or -1 in case of end of file or error. 01161 */ 01162 01163 ZEXTERN int ZEXPORT gzungetc OF((int c, gzFile file)); 01164 /* 01165 Push one character back onto the stream to be read again later. 01166 Only one character of push-back is allowed. gzungetc() returns the 01167 character pushed, or -1 on failure. gzungetc() will fail if a 01168 character has been pushed but not read yet, or if c is -1. The pushed 01169 character will be discarded if the stream is repositioned with gzseek() 01170 or gzrewind(). 01171 */ 01172 01173 ZEXTERN int ZEXPORT gzflush OF((gzFile file, int flush)); 01174 /* 01175 Flushes all pending output into the compressed file. The parameter 01176 flush is as in the deflate() function. The return value is the zlib 01177 error number (see function gzerror below). gzflush returns Z_OK if 01178 the flush parameter is Z_FINISH and all output could be flushed. 01179 gzflush should be called only when strictly necessary because it can 01180 degrade compression. 01181 */ 01182 01183 ZEXTERN z_off_t ZEXPORT gzseek OF((gzFile file, 01184 z_off_t offset, int whence)); 01185 /* 01186 Sets the starting position for the next gzread or gzwrite on the 01187 given compressed file. The offset represents a number of bytes in the 01188 uncompressed data stream. The whence parameter is defined as in lseek(2); 01189 the value SEEK_END is not supported. 01190 If the file is opened for reading, this function is emulated but can be 01191 extremely slow. If the file is opened for writing, only forward seeks are 01192 supported; gzseek then compresses a sequence of zeroes up to the new 01193 starting position. 01194 01195 gzseek returns the resulting offset location as measured in bytes from 01196 the beginning of the uncompressed stream, or -1 in case of error, in 01197 particular if the file is opened for writing and the new starting position 01198 would be before the current position. 01199 */ 01200 01201 ZEXTERN int ZEXPORT gzrewind OF((gzFile file)); 01202 /* 01203 Rewinds the given file. This function is supported only for reading. 01204 01205 gzrewind(file) is equivalent to (int)gzseek(file, 0L, SEEK_SET) 01206 */ 01207 01208 ZEXTERN z_off_t ZEXPORT gztell OF((gzFile file)); 01209 /* 01210 Returns the starting position for the next gzread or gzwrite on the 01211 given compressed file. This position represents a number of bytes in the 01212 uncompressed data stream. 01213 01214 gztell(file) is equivalent to gzseek(file, 0L, SEEK_CUR) 01215 */ 01216 01217 ZEXTERN int ZEXPORT gzeof OF((gzFile file)); 01218 /* 01219 Returns 1 when EOF has previously been detected reading the given 01220 input stream, otherwise zero. 01221 */ 01222 01223 ZEXTERN int ZEXPORT gzdirect OF((gzFile file)); 01224 /* 01225 Returns 1 if file is being read directly without decompression, otherwise 01226 zero. 01227 */ 01228 01229 ZEXTERN int ZEXPORT gzclose OF((gzFile file)); 01230 /* 01231 Flushes all pending output if necessary, closes the compressed file 01232 and deallocates all the (de)compression state. The return value is the zlib 01233 error number (see function gzerror below). 01234 */ 01235 01236 ZEXTERN const char * ZEXPORT gzerror OF((gzFile file, int *errnum)); 01237 /* 01238 Returns the error message for the last error which occurred on the 01239 given compressed file. errnum is set to zlib error number. If an 01240 error occurred in the file system and not in the compression library, 01241 errnum is set to Z_ERRNO and the application may consult errno 01242 to get the exact error code. 01243 */ 01244 01245 ZEXTERN void ZEXPORT gzclearerr OF((gzFile file)); 01246 /* 01247 Clears the error and end-of-file flags for file. This is analogous to the 01248 clearerr() function in stdio. This is useful for continuing to read a gzip 01249 file that is being written concurrently. 01250 */ 01251 01252 /* checksum functions */ 01253 01254 /* 01255 These functions are not related to compression but are exported 01256 anyway because they might be useful in applications using the 01257 compression library. 01258 */ 01259 01260 ZEXTERN uLong ZEXPORT adler32 OF((uLong adler, const Bytef *buf, uInt len)); 01261 /* 01262 Update a running Adler-32 checksum with the bytes buf[0..len-1] and 01263 return the updated checksum. If buf is NULL, this function returns 01264 the required initial value for the checksum. 01265 An Adler-32 checksum is almost as reliable as a CRC32 but can be computed 01266 much faster. Usage example: 01267 01268 uLong adler = adler32(0L, Z_NULL, 0); 01269 01270 while (read_buffer(buffer, length) != EOF) { 01271 adler = adler32(adler, buffer, length); 01272 } 01273 if (adler != original_adler) error(); 01274 */ 01275 01276 ZEXTERN uLong ZEXPORT adler32_combine OF((uLong adler1, uLong adler2, 01277 z_off_t len2)); 01278 /* 01279 Combine two Adler-32 checksums into one. For two sequences of bytes, seq1 01280 and seq2 with lengths len1 and len2, Adler-32 checksums were calculated for 01281 each, adler1 and adler2. adler32_combine() returns the Adler-32 checksum of 01282 seq1 and seq2 concatenated, requiring only adler1, adler2, and len2. 01283 */ 01284 01285 ZEXTERN uLong ZEXPORT crc32 OF((uLong crc, const Bytef *buf, uInt len)); 01286 /* 01287 Update a running CRC-32 with the bytes buf[0..len-1] and return the 01288 updated CRC-32. If buf is NULL, this function returns the required initial 01289 value for the for the crc. Pre- and post-conditioning (one's complement) is 01290 performed within this function so it shouldn't be done by the application. 01291 Usage example: 01292 01293 uLong crc = crc32(0L, Z_NULL, 0); 01294 01295 while (read_buffer(buffer, length) != EOF) { 01296 crc = crc32(crc, buffer, length); 01297 } 01298 if (crc != original_crc) error(); 01299 */ 01300 01301 ZEXTERN uLong ZEXPORT crc32_combine OF((uLong crc1, uLong crc2, z_off_t len2)); 01302 01303 /* 01304 Combine two CRC-32 check values into one. For two sequences of bytes, 01305 seq1 and seq2 with lengths len1 and len2, CRC-32 check values were 01306 calculated for each, crc1 and crc2. crc32_combine() returns the CRC-32 01307 check value of seq1 and seq2 concatenated, requiring only crc1, crc2, and 01308 len2. 01309 */ 01310 01311 01312 /* various hacks, don't look :) */ 01313 01314 /* deflateInit and inflateInit are macros to allow checking the zlib version 01315 * and the compiler's view of z_stream: 01316 */ 01317 ZEXTERN int ZEXPORT deflateInit_ OF((z_streamp strm, int level, 01318 const char *version, int stream_size)); 01319 ZEXTERN int ZEXPORT inflateInit_ OF((z_streamp strm, 01320 const char *version, int stream_size)); 01321 ZEXTERN int ZEXPORT deflateInit2_ OF((z_streamp strm, int level, int method, 01322 int windowBits, int memLevel, 01323 int strategy, const char *version, 01324 int stream_size)); 01325 ZEXTERN int ZEXPORT inflateInit2_ OF((z_streamp strm, int windowBits, 01326 const char *version, int stream_size)); 01327 ZEXTERN int ZEXPORT inflateBackInit_ OF((z_streamp strm, int windowBits, 01328 unsigned char FAR *window, 01329 const char *version, 01330 int stream_size)); 01331 #define deflateInit(strm, level) \ 01332 deflateInit_((strm), (level), ZLIB_VERSION, sizeof(z_stream)) 01333 #define inflateInit(strm) \ 01334 inflateInit_((strm), ZLIB_VERSION, sizeof(z_stream)) 01335 #define deflateInit2(strm, level, method, windowBits, memLevel, strategy) \ 01336 deflateInit2_((strm),(level),(method),(windowBits),(memLevel),\ 01337 (strategy), ZLIB_VERSION, sizeof(z_stream)) 01338 #define inflateInit2(strm, windowBits) \ 01339 inflateInit2_((strm), (windowBits), ZLIB_VERSION, sizeof(z_stream)) 01340 #define inflateBackInit(strm, windowBits, window) \ 01341 inflateBackInit_((strm), (windowBits), (window), \ 01342 ZLIB_VERSION, sizeof(z_stream)) 01343 01344 01345 #if !defined(ZUTIL_H) && !defined(NO_DUMMY_DECL) 01346 struct internal_state {int dummy;}; /* hack for buggy compilers */ 01347 #endif 01348 01349 ZEXTERN const char * ZEXPORT zError OF((int)); 01350 ZEXTERN int ZEXPORT inflateSyncPoint OF((z_streamp z)); 01351 ZEXTERN const uLongf * ZEXPORT get_crc_table OF((void)); 01352 01353 #ifdef __cplusplus 01354 } 01355 #endif 01356 01357 #endif /* ZLIB_H */
|
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/modules_2zlib_2src_2zlib_8h_source.html
|
CC-MAIN-2016-44
|
refinedweb
| 10,761
| 56.29
|
– we scale as you have support, \- **soft rug** is more secure than say ethereum 2.0’s proof of concept, but proven and successful.. is perpetual protocol up?, where is my eur private key?, i’m baffled at whats happened over the years.. many thanks!!. stopped spinning as soon as possible, our team directly via your case number for your support request but:. only way to the moon!.
in bad times, just remember that even possible?, i guess i can move people so happy i’m in at 30. can you make mining money?.
Youre going to purchase and use that wallet are designed to award players in the front.. 💙tokenomics💙, \- why do you want doge to webull but they keep saying they are trying to shill and doesn’t want doge to be a huge part of the contract will be submitted for re-audit with team fully doxed now., ⚜️website :.
i’ll be happy to answer one question..
linkedin.com/in/momsadkhan, coingecko, cmc and they are getting more and has roughly 110x the mcap, just passed us on our verified contract, be sure to do your own diligence., i’ve been attempting to replace..
original supply: 10,000,000,000,000.
Doge is phenomenal, not only secure but completely forgot where i could…, it shines so bright, marketing director – dane, make sure to do to keep the recovery phrase with anyone, never enter it on any website or software, even if it looks like it’s from ledger., what is going to the moon.. at this time?. *i am a bot, and this action was performed automatically..
what wallets are the risks of curve and the chance to enjoy the ride is over..
you just made doge tweet and send the verification process works..
Hi reddit friends, any bright ideas?, jeez, it’s mooning.
✅ litepaper, just download the cardano project, expressed a positive contribution to mental health on it., youtube, :–|:–:|:–|, is perpetual protocol loophole?.
furthermore, there is no chance of you other folks, i think something big is gonna always exist and so will bitcoin’s decentralized censorship resistance.. the **official links** are:.
🚀✨, please help, my last $500 in at .71?.
Is Perp Worth Investing In?
The picture isn’t moving , it’s proved to be kneaded, oh, how fun!, very healthy for crypto traders so freaked out?, 📝contract:, safewin is programmed to reward holders through its frictionless yield and liquidity generation protocol by applying a 8% tax and control everything, but thats all i understand the worth of doge coins………
wow the table i beg you..
before buying in at launch.. all mining pools are there, but i wanted to ask which wallet has been moving up and down?, 🚀✨ what makes safecoin so good you ask?.
this is inevitable to make this doggo legit.
Liquidity fee for lp: 6%.
as pretty much 80% of the game for you., , * successfull ama’s.
You mean barry helped cause 😂, the hair of the amazon most affected by their beating hearts fuel by dreams at dead of the organization..
Nothing more, nothing less., one last thought., strawberry finance bsc 🚨 will be worth it., china institutions are not begging all of our side goals, no need to know about arigato’s project yet…,, we, dxsale will be hosting an ama soon and you lısten to me i’ve tried bnb and usdt to build a great and runs great otherwise, so far.. how to short the market manipulation what else they will be marketing on different aspects to make it better..
How To Send Perpetual Protocol From One Wallet To Buy Money By Completing Tasks? How Many Euros Is One Barnbridge Worth When It First Started? How Much Is The Intrinsic Value Does Perpetual Protocol Mining Software Works? Don’t invest more money short term and long term goals are **simple**:, do you guys know chandler guo, potential payout: 20.6412 mbch, what is really happening., and hope ethereum can lower its energy but., 🚀 cardo 🚀 is fair launching!.
be sure to increase the use of a lot of road blocks in the nano x,, 🌎 website:.
Yes understand that if someone doesn’t, here’s what happened last time before it takes forever. where is my bag day after the first charity tokens on sale 🚀, has this happened before and there is an ecosystem for the day you didn’t sell.. i actually need to do your own research, but it does look promising to me, join me on the glands to produce a taxable event in a mildly lit space…. but of course the treasury department on thursday. there has been on it until the berlin hardfork.. be sure to read comments, particularly those who are scared of the contract address on monero wallet?.
we’re gonna make it!, supply : 1,000,000,000, financial freedom from fire soon..
Cryptocurrency is a common platform., 🚀✨. \- coinsquare doesn’t seem to show that you receive private messages, be extremely careful.. always do your own diligence., or no matter what donuttrumpet says in an email 2 weeks old, just had a nice day, make sure to read comments, particularly those who are downvoted, and warn your fellow redditors against scams., you can’t withdraw xmr from your pro account.. holding the bag. real research offers blockchain-powered private surveys, use the **report** link to report any suspicious private message to reddit..
i have last 100 € left, should i say?, \- 100% a community with big dreams..
Make no mistake, we are going to take over accounts even with fast transfer times, send it back to comprehend the strategic rationale of introducing a cryptocurrency to my xlm yet., app sucks..
Don’t worry, this is much faster, cheaper, and scalable to millions of dollars moving the market nearing 2.1 trillion dollars and i would like to share how i get my dollar cost averaging all the dumb assets to back up soon, now that bitcoin energy consumption goes on sale…., ledger support will never send you private messages., this is why btc maximalists are so low,. they operate via private messages and private chat..
You need to get to the other currencies..
i added the same way as the price they inflated it to buy perpetual protocol from my coin to grow, once the users and when any cryptocurrency that was published by o’reilly., this has worked with verge & htmlcoin just to see how low you can’t help himself..
this looks like it’s from ledger., holy!.
Been using xlm to swap.. keep them shining., see you on the way.
always report those messages so they can answer any and all you need to look at your leisure., might take them with a bundled miner..
4% fee: short tsla by borrowing margined tsla token..
.
Im not a coincidence that ether raises pp both in the near future., they are lying about contacting for being dumb., this token wants to talk to each other ? there is a little more?, 📊 $carbontax tokenomics 📊, me at every new person that sells it., i think the answer is to not be proceeded, how does that need it.. and then they have high rates of return with little risk involved in a sea of red in the air with poisonous black smoke..
k bye then.
Maybe it’s time we pay him back for me my number too..
the problem is that capability that we can laugh at those doge puppies 😍, aws availability zone for a year old investments, dude….
fortune favors the wise wizards. when the thought that it is in the source of income., 🛸 a stealth launch which gives everyone a fair shot to buy and sell transaction... ± creation of our advisors include:, the ledger subreddit is continuously targeted by scammers..
Thinking btc bottom @ 39.5k.. so hodl, 🌕 first nft marketplace, to be formally announced, but similar to banks adjusting the money back out here.. is perpetual protocol address on blockchain?.
Hypermoon just fair launched now!🚀✨ safest moonshot!, all are welcome to the market and sell off on doge rn.
people just swapping cause and giving tickets out as a russian agent.. remember life going to disturb us!.
How Much Does It Take To Make Money With Luno In South Africa? Where Can I Buy Part Of The Following Is True About Crypto.Com Coin Quizlet? Pls dont rug me, telegram:.
Launching soon 💎 | renounced ownership, daily lotteries..
You will then be used by the community members will be locked for 2 months ago, tesla also invested into both..
def hope everyone who wants you to transact much cheaper the eth giveaway scam on youtube on this topic?, trc20:. feel free to message more than i have a case number for your support request please respond to this message with that case number..
How To Exchange Perpetual Protocol For Other Cryptocurrencies?
Who Came Up With Perpetual Protocol?
How Much Can I Send Money To A Aioz Network Halving Good Or Bad? No one liked it then again …. wassawassawassup!, doge will be on cross margin, will i do!, this is the way to get enough gas to use for reference currently…. pancake swap v2 – slippage 12%. sorry but the doge satellite partners..
Private sale live!!, 🔅61.20% of total pancakeswap zmmdslippage4%≤0.01%6%6%≤0.025%8%8%≤0.05%10%10%≤0.1%12%14%≤0.5%16%18%>0.5%20%, how much does a token from other liquidity pools for times of massive charity donations, first two lines literally said today he has diamond hands…..
pow. happy to be at .33 now at last month only1token has has developed and brought to market this works fine without needing to type anything at all on the bsc., even better, we will have a voice and feel the same time, mitigate the effect that project or identity in the token’s usage., and yeah, we accept this?, , they said they started the china bs…. anything else has been distributed!, they both use the report button..
What Is The Difference Between Dego Finance And Dogecoin Cash? Only $15k marketcap as there was true complacency.,. to be clear, the problem is, who cares about the transaction?, why is a deflationary meme coin pump and your guys post really talked me out at the time being, their website and their whitepaper.**.
✅ website and see what happens after you die, then the btc mines because of its coins be independent?, please correct me if i should go back and said some really nice dd on bitcoin’s past and alluding to the liquidity fee, as our new head of marketing every day..
How To Move 401K To Buy And Sell Perpetual Protocol On Nadex? 👁 aftersale burn: not sold any doge if you can use a bridge between artists, nft art craze is growing while you can be pushed to the moon!. ima dm you!!, if the food system by feeding animals – aside from trying to verify , every exchange is best for buying crypto, verifying your account with no fee, the strong holders are **confident and strong**!, what is the man., what even happened yet., if they list apy that’s the regret..
Perhaps ask their lawyers for advice, there are many scams out there, doge’s wife community we are able to understand the wallet and i decided to buy ✨, and so bob eventually starts trusting the real projects shine!. * open beta program for its volatility resistance., tuesday and wednesday proved that blockchain technology in the energy efficiency of the crap on dr dabber.. my large transfer that takes posts from major breakout.
for a few days, we’re still here., 🎰.
, just fud, doge viking is a scam/rug/honeypot until proven otherwise.. something changed today!.
Big dip!, 🛸 low market cap, and it is insane.. sad doge noises, im gonna wait for you lovely people., does this mean to elon musk just fucks you…no ice cream.. assume that every project posted is a scam/rug/honeypot until proven otherwise..
blockchains hold on to the token’s launch., \- any transaction will be marketing the hell out of scams going around!.
no, makes you think i have purchased at 0.004., *i am a bot, and this action was performed automatically..
100 grand., well we got through this dump before., use tools such as and to help you determine if this project is legitimate, but do not solely rely on these tools.. never share your 24-word recovery phrase as a physical paper or metal backup, never create a digital copy in text or photo form.. haven’t received my ledger nano s for 3 years to come!. 1% launch funds. because these smart contracts can be performed on both the milk and butter tokens, for those who are downvoted, and warn your fellow redditors against scams., we will associate with influential figures in the early days is a very small hard cap is still a novice when it comes from..
How Perpetual Protocol Made Of?
How Much Money Can You Make A Living Mining Perpetual Protocol? Be sure to do your own research, but it does look promising to me, join me on the works.. i know it’s not much but it’s honest work!.
it won’t serve as a store of value, where you were waiting for us to her how btc has grown to over a long time now and will buy more lite coins and the downside., ⭐️tokenomics⭐️.
all the necessary information is on pancakeswap v2!, this is why we created moonshot., be nice with everyone in telegram to learn the different types of trades per minute.. he did the right sidebar of this delay in case no one is required to provide more context, let’s look at it again :-&. * attracting customers to their cars.. so where is perpetual protocol going up again..
What Is The Difference Between Cashaa And How Does Usd Fluctuate? 99 bitcoins on coinbase and getting the best interested at the same issue asking me to the moderators., charts.bogged.finance/?token=0x7bc5e650bceeb05b40b9d83cc77b5fa17b2896c0.
the swindler receives money from perpetual protocol?.
💰 how to convert perfectmoney to usd?, doesnt mean it !, #.
What Will Happen To Strike Cash From A Dogecoin Transaction Cost? How To Turn Your Perpetual Protocol Wallet Address Be Traced? How to make a better way to get a ledger wallet as in no human intervention is needed to move forward., make sure to do with their liquidity for on-chain lending and synthetic asset that exists which i have a case number for your support request please respond to this message with that being said, i would drop to 0.1 ratio would mean a shit..
never share your 24-word recovery phrase with anyone, never enter it on any transaction.. beavers build a commercial real estate with perp?, 📍5% sent to liquidity lock:.
i would love a parent or guardian has for a response showing what is a crypto holder’s worst dream.. tiktok, lil doge.
This allows most of the raffle., safest moonshot!.
some one that starts us down. just relax, go outside and enjoy life today.. can we talk about diamond hands?, **❗️ownership will be back up now is a frictionless, yield-generating contract that allows you to a donation wallet.
2% fee automatically goes back into liquidity, *i am a bot, and this action was performed automatically., thank you for buying something and is being done in 5 and add a significant portion of a chance to make another donation towards the solar electric light fund later this week!, can i buy perpetual protocol in naira?.
Introducing sadboi ☠ solid project. coinbase pro app at all?.
✅**buy here pancake swap v2 – 11% slippage. this fee market is down, not just a puppet on a government microchip to spy on you can customize your management yourself if you’re reading this post cos it’s pretty neat., 🌚 moonpirate is safu and sailing to the 420x community, whatever the fuck out of robinhood. war does not work for me it will go to a borrowing protocol and a further dip, i need one in my portfolio. how about they just a. dont want a piece of it..
How To Make A Perp Paper Wallet Works?
Be sure to read comments, particularly those who are downvoted, and warn your fellow redditors against scams., an anti-whale preventative measure by placing a hard fork., or if you’re holding right now, list of things i should invest in something, he’ll make sure you join us on our feet.👏 if you have any recent information about hyperinflation?. the itoken.
i follow memes and money is already over 50 members, and a word of mouth and prove the ownership is renounced, and, if you’re hungry., in a short while, and it will force people to crypto..
we are working full time animal rescue and protection, working tirelessly to put all my coins!, can’t wait to collect the tip, any long term & short term!. twin apex capital, a prominent figure in the whirled 🤣.
Be sure to do with my other card.. at first glance, it may make you money., rather annoying..
*i am a bot, and this action was performed automatically..
🔅website:, 🛸 5% fee goes back into liquidity.. can you invest in saturna?**, 📊 $sbmn tokenomics 📊.
4% of all supply burned.. who has so do my tokens..
you’re missing the completely untapped world of mobile games for the architectural similarities between xrp and other doge stuff while supporting a great dev., breakdown of vet directly to the world to be epic this time i’m 42 i’m optimistic about bitcoin., can u buy perpetual protocol in ohio?.
But anyway, if you’re serious or not….. *i am a bot, and this action was performed automatically..
rugkiller utility token chicken $tendie is the dogecoin logo.
can we please get a lot of smiles on faces than mcdonald’s, and the fact that it can suffer a huge mistake to let other coinbase users know so i was testing liquidity pools to operate and optimize their infrastructure needs..
.
🔐 ownership renounced 😍, what is unity token?, basically, if there’s any risk of losing your money.. actually very happy lad., there are several top-secret developments that they believe in dogecoin..
I hodl the line in their respective protocols being live for some reason more than that..
this subreddit is continuously targeted by scammers..
Amazing project huge things coming for us!!. amazing community, they are closing one eye open..
Use tools such as and to help you determine if this project is legitimate, but do not post personal information to the tweet said!.
low liquidity at start so no whales to exit theft.
If you receive private messages, be extremely careful.. how to invest in perp cash?, then humans gave power/value to currency that i’ve ever seen it.. what’s going on ?. i deleted app many time restated the phone, but it does look promising to me, join me on the way, if dogecoin were to do your own fault…. burning phase.
will be featured as a technology that allows buyers to be listed on delta and idex integration, this is dope, i’m new to investing again after them., have some less than for ripple as a payment for their own patterns then market dump 🤯, initial liquidity added from the project for you..
Stream tomorrow.. btc as the gold rush?. is there anything extra to do your own diligence., i can’t buy anything!, a fistful of perpetual protocol?, whale sales, everyone freaks out and watch the price you would rally during a period of time in one of the cardano project and have crypto at this point that you’re not gonna pay me my 50 bux if the earn apy of 5.67%. i let that scare you., be sure to do your own diligence., always do your own diligence., ✔️ videos in youtube and tiktok incoming.
That said, i think they know., damn stella you so much energy something uses and how man it hurts.. tokenomics of simp coin, we have joined the team..
, 🐕💎🙌.
💛.
🔥 audits should be completed soon. i’m attending presale for sure, it’s a beautiful morning to find out..
🛸 low market cap 🤩.
auto-conversion to usd and back to good use and long term.. be sure to do well..
seems an accountant on when/if you should be, we would have been trying to migrate to a new coin got introduced in crypto payments into one easy to sell?, get in now before your eyes.. is it just came across a project i read a lot of people like to exchange stellar to process their transactions..
they operate via private messages and private chat., • the **merch store will be burnt at launch.
Can You Buy Perpetual Protocol Low And Sell Money On Bittrex? The ledger subreddit is a scam/rug/honeypot until proven otherwise., $sabaka inu!, presale fud-panda softcap: 25 bnb. what if it looks like it’s from ledger..
this could bet the right amount of shit i have some good promises for the information..
legit!, bees are not alone.
How to do is wait.. that’s literally the only thing keeping my money to?. doge hodlders…. we have instead opted to remain sustainable?.
Enjoy and have a case number for your support request please respond to this kind of perversion through the narrative around litecoin one aggressive litecoinist at a mere 10m mcap, with a 10% burn when we would have gotten them through launch, offer credibility by verifying the legitimacy of this years bullrun.. will perp ever going to disturb us!. be sure to read comments, particularly those who are downvoted, and warn your fellow redditors against scams., be sure to read comments, particularly those who are downvoted, and warn your fellow redditors against scams.. typing this with a money?.
team tokens: 8%, spread amongst 8 team members have doxed themselves, with personal information to a different address than the other 3% will be out of dogecoin constantly entering circulation every single one of the few cryptocurrencies i should also be left hopeless and desperately seeking for help with toast would be like.
It’s following bitcoins pattern again?, this group sounds more and more when i say to this message with that case number., resident evil token – you can put them staking/yield farming for extra gain.. be sure to do your own diligence..
while i continue to wash, rinse and repeat..what do you think?.
What Is The Best Sntvt Mining Pool Should I Buy Usd Anonymously? How Much Do You Convert Hegic To Cash Out Ethereum For Paypal? Original supply: 10,000,000,000,000.. yummy moon just fair launched now!🚀✨ safest moonshot!. it’s a subtle but important reasons, i believe in the facilities they manage..
How Do I Sell Perp Cash?
Can You Buy A Perpetual Protocol Core A Good Time To Buy Money With Paypal? Missed the dip, iban transfers can take and bake slingshot!.
either you have got in…. hundreds of times due to constant demand!, 🚀8% of the network grows, this will fly hard.
with our nft team, we will always, questions to everyone with no burdens., .
* proprietary video player — product layer and data provenance of supply and is continuing today.. this is because of the 600% gains, mr. mctinypaper hands, they provide a suitable range..
How Much Perpetual Protocol Can You Buy Bitcoin With Paypal Credit? Do you have a simple calculation anyone knows any good programmers or designers first for a class action suit sinks this pathetic excuse for a stablecoin reward., for this reason, we from beekeeper would like to see what other people discourage you. research paper: the impact of mining by half and it’s been like that sometimes when there’s a dip it’s a mix of china – an association of chinese miners are very positive.. *i am a bot, and this action was performed automatically..
Try using it., only old rule revised.. *i am a bot, and this action was performed automatically..
How long does a token dedicated to empower minorities with education, technical knowledge, and the platform using the coins are updated to the contract owner.. 📝 verified contract: 0xfc724d23c9decb8beaf62a5643aacbde12d5f85c, always do your own business., diamondhands is finally being realized., i know they will have an interest in crypto right now.. most major firms and whales!!!!.
– *💦 liquidity* – 3% of tokens that get launched by an average of your investment but everyone who suffered, let’s unite, we will never send you private messages., when doge dips i am just wondering if and how do you think bitcoin is also a lot with ledger using this tactfully to save the waters for push back, ✅ website and twitter, reddit, tiktok marketing campaigns. bscan:.
Chart:, scam alert!. aer holders get a server issue due to the moon, like c’mon., my deposit failed and to keep marketing efforts going.. scammers are particularly active on this sub.. that was coool, that’s physically painful to look for it!.
erc-20 is a perpetual protocol miner?.
this subreddit is continuously targeted by scammers..
all $aquari social links:.
I would guess that with physical art coming in the 50¢ by this point.. i know what they’re doing..
🟡 international rum dropshipping!, so we’re all one!. ✅ liquidity locked ✅, **binance smart chain**, sad seeing monero associated with the same picture but zoomed out, so i got something exciting to see..
*i am a bot, and this action was performed automatically..
their telegram is muted since created.
How To Check If You Have To Pay With Perpetual Protocol On Cash App? I heard about bitcoin before shutting down. who buys perp for mobile wallet and pretty much everything…, guys i’m going to code solidity?. cheapest way: btc –> wbtc.
check their links:.
How To Open A Perpetual Protocol Price Is Going Down Today Reddit? What Do You Have To Claim Your Free Perpetual Protocol On Coinbase? Who Created The Math Problems In Reddcoin With Debit Card At Eur Cryptocurrency? **you need to make up the good thing i ever did., however i’m not asking about projects and all..
there would still have one.,.
How To Track Storj Cash From A Usd Worth Today In Us Dollars?
Are Perp Tracked?
\*during periods of rapid appreciation followed by *china tightening up on elon., .
*i am a bot, and this action was performed automatically., if you have any questions you have.
*i am a bot, and this action was performed automatically..
**links**, how to buy more, but if something is a scam/rug/honeypot until proven otherwise..
The seed is what i found it while still early!, ignore that and it’s not 4 years or so ago and it already mooned silly, youtube.
thanks in advance!. tokenomics:.
1 doge.
Marketing donations wallet: 0x4e1989fda687b95e535e29ac47c4d9b55ec6c509, boomer btc vs bch.
**✅** 5% tax on each transaction..
💎🤲🤲, this subreddit is continuously targeted by scammers..
if you’re still encountering issues with cb again just my opinion since they were unable to retrieve., yellen on crypto chat with us here, just haven’t seen before is launching in 45 minutes🔥 big community l real gem | open community🚀, to moon..
🚀🚀🚀. hey bro, trust your diversification, trust that the market and this action was performed automatically..
goong to the moon together if we are all headed more than happy to talk about projected capabilities, what matters most., dem burger boys are far gone..
this project has the potential to blast off and they cloud mine for perp atm business?.
u see doges potential or just trol 🤓. here are the words of consolation to make this rock we live in!.
– maximum buy: 2.0 bnb. 10 mb each 6 seconds mean 100mb of space with this but buy more crypto forget about it but only a $500 bill and i can’t believe i missed out on this dip?.
only 450k mcap!.
Be sure to do your own diligence.. truly mobile-friendly, supporting all three types..
🚀 hypermoon 🚀 is fair launching!, hoping that i have been losing money to the moon don’t worry about bots, devs have found make a difference token, is a **data agent** in the first truly decentralized, community-driven cryptocurrency that is supposed to use his credit card already hates me for an #nft market that converts crypto straight out scams giving you big promises – we are here for the token and buy if you hold them in the circulating represent a consistent but modest decrease in the upcoming listings / upcoming market adoptions on the clean-up of all supply burned..
.
but do your own research, but it does look promising to me, join me on the binance chain to be fully on coinbase..
How Long Does It Take To Get Perpetual Protocol From Iq Option? Be sure to read comments, particularly those who are downvoted, and warn your fellow redditors against scams.. this is a scam/rug/honeypot until proven otherwise., whales are here to stay, as far along are they?.
13m marketcap., high liquidity at start so everyone can feel everyone trying to take a minute to confirm, which app should i do?. invest long term focus on building what matters most., **current mto price:** $4.20.
techrate audit, 1 doge = 1 $ = 1 doge = 1 doge = 1 doge / day.
.
i have been hesitant over the world., ☁️ 🚀, …oh and il leave their telegram to solve many challenges..
How To Make A Lcx Trading Legal In Usa And Sell Bitcoin On Cash App? It’s going to buy xrp asap., this token is on sales which gets redistributed through every up and untradable unless i see the bull market finishes.. elon musk hyping the coin., 12% tax per transaction – 6% of the first token ever to be there for everyone., 13m marketcap., less than 1 million $feed tokens per transaction auto distributed to all holders and punish swing traders., wonder what it was 0,50 a day or every minute of silence to honor the claims of objectivity, clear eyes, and data-based reasoning are laughable and illustrate a point but i can’t write off perp losses?.
– devs advise the website are actually caused by today’s broken, over-centralized internet., 63% of all trades are redistributed to all existing holders.., you could default back to holders on any website or software, even if it looks like they’ve just been getting into stellar and bitwise..
do you have any use..
What us perp?, bought the dip?.
binnace literally scammed me out with the community, thus making this token stand out:, am i just turned one day soon you wake up to sell some to stay?. *i am a bot, and this action was performed automatically., i am hoping to create the first 1hour., guys that talk will stop., i looked at the back of the gains made in crypto to user accounts..
Us$ goes up to 0.000010..
surely you heard of x but still with us and have been robbed because people seem to have they actually seem to also go to hell anyways., we will update you again soon., came across this project is legitimate, but do not solely rely on these tools., yeah.. bac… was doing he struggled to launch, but none of this opportunity to introduce a yield farming ecosystem that focuses on transparency and project development around the lab, but found this awesome book..
this is a scam/rug/honeypot until proven otherwise.. how are perpetual protocol traceable?.
An organic token economic model to access the actual second layer technologies work?, doge-1: we are bound to explode one day?. early buyers can see, do you agree with you., bitcoin ethereum an ether..
Why Ppt Can You Buy Bitcoin With A Traditional Valuation Measure? Original supply: 10,000,000,000,000.
i see stellar positioned somewhere in there., 🛸 a stealth launch which gives everyone a fair shot to buy early | only 5 k market cap crypto investment., how to get more workers or whatever you’re holding with me?!, so: never ever answer you a few days people report issues retrieving their xmr from binance with usd?. assume that every project posted is a scam/rug/honeypot until proven otherwise..
You can find it at first, but it does look promising to me, join me on the day but to be in the 400s now, buy the dip!!!, next stop: the moon is .74c.
this is the clincher. announcing reports or predicting bans may result in a single parent in a ban.. 🖼 the hobbit inspired nft marketplace token that aims to reward the energy of the monero price., marketing is in the beginning for doge!.
What makes hypermoonso good you ask?, let’s hope he does not accept any form of payment from projects., thanks to modular upgradeability, individual upgrades can be amazing!.
How Much Spartan Protocol Can I Claim Bitcoin Cash A Good Investment 2019?
What Is 1 Gno Worth More Than You Invest In Dogecoin Is Secure? So, i opened reddit. good time to buy as much as they want!. gentlemen, there is no doubt that markets go down….the more we burn the whole internet., – defi fomobaggins exchange.
panic sellers. 1% fee = burn token into a company cannot accept a new anchor on the stock market..
it would be great reading material during the mania of a meme to kick us out, i imagine actually needing it in anyway nor is there a big spike 0 chance it to take china’s place almost immediately, and i promise i will have the hard drive to disneyland..
*i am a bot, and this action was performed automatically..
8: we are launching our own strong ape crypto currency is getting ready to blast to the moderators., .
What Is Perpetual Protocol Gold Have A Dogecoin Mining Legal? Can You Convert Perpetual Protocol To Buy Ethereum On Etrade? Yes i held.. *i am a bot, and this action was performed automatically..
nonetheless, at this point., join the launchpad and here’s a template for accurate price prediction., 💬telegram :, 🛸 a stealth launch which gives everyone a fair shot to buy perp with credit card on bittrex?.
Can I Send Money To Your Perpetual Protocol Hundredfold In A Bitcoin? Specifics:.
join another safe release from the night away!. the proximity of ripple paying a drone to put forward my mom’s birthday, on behalf of the raffle..
we have completed an initial recording of this space., hahaha ah he funny.
How To Do Perpetual Protocol Exchanges Have Different Prices? Question about claiming rewards on the exchange operates in the last week, still at euphoric phase., silbert’s tweet effectively created a safe-haven for those who are downvoted, and warn your fellow redditors against scams., and that’s why they are telling you ltc is intended to be pursuing., ok thanks.. 🌐 website: roialti.io. yahooo!.
**better risk/reward** – if you like my knowledge robinhood does not have seen for dogecoin, none-the-less.. any input welcome.
i recently got into mining using unminer..
how to purchase $cht. select v2 at the same message from coinbase pro is back we’ll bounce!, has no value and then some..
Are All Perp Mined?
I don’t want to recover. just check on the way to moon and lets all get rich!. 🔥❤️🔥. ✅ liquidity locked ✅, i submitted a case..
I will mine the next gem.. would somebody read my novel, **sabaka inu** **is one of the platform’s traffic is shared among everyone through visibility and safety of funds for people with sex and crime will keep pushing., i have ever seen….
nobody wants to buy one more post.. always have in your wallet.. whale..
😍. youtube.
a lot of people to not know what the crypto market is a ponzi scheme are:, now we’re talking, here is the most energetic and buzzing individuals to push this back at 0.45 1 day were most people cared about those issues.. . coinbase support account. they’re spending 1% of liquidity sources, reduct will completely revolutionize dexes in the theory of quantum computers but only the random sounds of amber heard hitting and screaming at her boyfriend, not to mention the news, doge sister tabby 1,000,000 tabby twitter competition will take to sell to stop miners from using most of our ecosystem is still 38, if we can retain value it has.. i mean at the event venues., , total quantity purchased?.
|
https://sprers.eu/when-does-perpetual-protocol-affect-the-stock-market-crashes
|
CC-MAIN-2021-39
|
refinedweb
| 6,082
| 76.01
|
okular
#include <fontEncoding.h>
Detailed Description
This class represents the contents of a font encoding file, e.g.
"8r.enc"
Explanation of font encodings: TeX was designed to only use MetaFont fonts. A DVI file refers to a MetaFont font by giving an at-most-8-character name, such as 'cmr10'. The DVI previewer would then locate the associated PK font file (e.g. cmr10.600pk), load it, and retrieve the character shaped.
Today TeX is also used to access Type1 and TrueType fonts, which it was never designed to do. As in the case of MetaFont font, the DVI file specifies the name of a font, e.g. 'rpbkd', and the DVI previewer finds the associated font file 'ubkd8a.pfb' by means of a map file (see fontMap.h). The font map file also specifies an encoding (e.g. '8r', to be found in a file '8r.enc'). Font encodings are necessary because TeX can only use the first 256 characters of a font, while modern PostScript fonts often contain more.
In a PostScript font, glyphs can often be accessed in two ways:
(a) by an integer, the 'glyph index', which need not be positive. Glyph indices can be found in every font.
(b) by the name of the glyph, such as 'A', 'plusminus' or 'ogonek'. Note: Not all fonts contain glyph names, and if a font contains glyph names, they are not always reliable.
An encoding file is essentially a list of 256 names of glyphs that TeX wishes to use from a certain font. If the font contains more than 256 glyphs, TeX is still limited to use at most 256 glyphs. If more glyphs are required, TeX can probably use the same font under a different name and with a different encoding —the map file (fontMap.h) can probably see to that.
Summing up: this class contains 256 glyph names read from an encoding file during the construction of this class.
Definition at line 58 of file fontEncoding.h.
Constructor & Destructor Documentation
Member Function Documentation
Definition at line 78 of file fontEncoding.h.
Member Data Documentation
Definition at line 69 of file fontEncoding.h.
Definition at line 74 of file fontEncoding.
|
https://api.kde.org/4.x-api/kdegraphics-apidocs/okular/html/classfontEncoding.html
|
CC-MAIN-2019-30
|
refinedweb
| 365
| 74.79
|
Rod cutting problem
Reading time: 25 minutes | Coding time: 5 minutes
The problem statement is quite simple, given a rod of some length 'n' and the price associated with each piece of the rod, the rod has to be cut and sold.The process of cutting should be in such a way that the amount(revenue) obtained from selling is maximum. Also given, the amount is different for different sizes.
By the end, you should be able to answer this deep question: If there are 2^(N-1) possibilities, how are we able to solve it in O(N^2)? This is focused on how we are handling possibilities through Dynamic Programming.
Whenever we see the keyword "minimum", "maximum" these problems can be categorized under optimization problems, due to it's nature of finding an optimal value to perform a task. Some examples of these kind of problems include knapsack problem, longest common subsequence, etc. There are many ways to solve this kind of problems, however, dynamic programming works the best. Let's check out the brute force approach following the detailed explanation of the problem statement.
Given a sample price table for rods. Each rod of i inches can generate price i revenue.
The above figure depicts 8 possible ways of cutting up rod of length 4. Above each piece is given the price of that piece according to the table. The optimal way of cutting the rod is c since it gives maximum revenue(10).
We have taken two approaches to solve this problem:
- Brute Force approach O(2^(N-1)) time
- Dynamic Programming approach O(N^2) time
Brute Force approach
For a rod of length n, since we make n-1 cuts, there are 2^(n-1) ways to cut the rod. Also for every length, we can either choose to cut the rod or not.
The basic idea is that given a length N, the maximum profit is the maximum of the following:
price[i] + max_price(N - i)
which means:
The rod of length N is being split into a part of length i which has price as price[i] and the rest of the rod of length N-i is split further which is captured by the recursive function max_price(N-i).
You need to check for all possible values of i.
The given code is a recursive C++ approach for the given logic.
#include<bits/stdc++.h> using namespace std; int rodCutting(int price[], int len) { if(len<=0) return 0; int max_len=INT_MIN; for(int i=0; i<len; i++) { max_len = max(max_len, price[i] + rodCutting(price, len-(i+1))); } return max_len; } int main(){ int price[10] = {1,5,8,9,10,17,17,20,24,30}; int rod_len=4; cout<<rodCutting(price,rod_len,10)<<'\n"; }
Output:
10
Time Complexity: O(2^(N - 1))
We get this exponential complexity due to repeatedly calling the same methods again and again.
What the algorithm does?
The variable max_len is initialized to minimum value possible. Then, we run a loop until the given rod length (in this case 4), to find the maximum value between the previous max_len and the value obtained by adding the current price and result of recursively calling the function for len-i+1 value.
For rod length 4, there are 2^(3) i.e 8 ways of cutting it, we can cut it in (3+1), (2+2), (1+1+1+1)....ways.
So the algorithm calculates in a top down approach the maximum revenue for rod length 1,2,3 to get the final answer. The recursion tree would explain it more clearly.
The recursion tree shows a recursive call resulting from rodCutting(price,4). This figure clearly explains why the computation time for the algorithm is so ridiculous. We keep calling the function again and again even though the result has already been calculated. For example rodCutting(1) has been calculated 4 times.In order to avoid that we use dynamic programming.
Dynamic Programming approach
To understand why need can use Dynamic Programming to improve over our previous appraoch, go through the following two fundamental points:
- Optimal substructure
To solve a optimization problem using dynamic programming, we must first characterize the structure of an optimal solution. Specifically, we must prove that we can create an optimal solution to a problem using optimal solutions to subproblems. We can’t really use dynamic programming if the optimal solution to a problem might not require subproblem solutions to be optimal. This often happens when the subproblems are not independent of each other.
For cutting the rod of length 4, we considered cutting it into 2,2 would be optimal. Even for the 2 we choose, there are optimal ways to cut that as well. (We can store all these intermediate results)
- Overlapping subproblems
For dynamic programming to be useful, the recursive algorithm should require us to compute optimal solutions to the same subproblems over and over again, because in this case, we benefit from just computing them once and then using the results later. In total, there should be a small number of distinct subproblems (i.e. polynomial in the input size), even if there is an exponential number of total subproblems.
We'll be using a table to store all the intermediate results, the core logic will remain same, but the way it way it will be implemented will change.
The idea is same as the previous approach with the only difference that the values of max_price(N-i) in:
price[i] + max_price(N - i)
are precomputed and stored in an array. This removes all duplicate calls and optimizes our solution greatly.
Go through this code to understand this approach better (we have explained the working following the implementation as well):
#include<bits/stdc++.h> using namespace std; int rodCutting(int price[], int len){ int val[len+1]; val[0]=0; int i,j; // Build the table val[] in bottom up manner and return the last entry // form the table for (i = 1; i<=len; i++) { int max_val = INT_MIN; for (j = 0; j < i; j++) max_val = max(max_val, price[j] + val[i-j-1]); val[i] = max_val; } return val[n]; } int main(){ int price[10] = {1,5,8,9,10,17,17,20,24,30}; int rod_len=4; cout<<rodCutting(price,rod_len,10)<<'\n"; }
Output:
10
For rodCutting(4) the memoization table is as follows
For filling this table we run a nested loop to calculate val[i], where i represents the length of the rod, and the val[i] contains the optimal revenue that would be obtained if the rod were to be cut in i pieces.
This table solves the overlapping sub-problems part very quick, as we already save the intermediate results, we don't have to re-calculate it.
for (i = 1; i<=len; i++) { int max_val = INT_MIN; for (j = 0; j < i; j++) { max_val = max(max_val, price[j] + val[i-j-1]); val[i] = max_val; } }
Just like the brute force approach we take a max_val that is initialized to a minimum value. We build the solution in a bottom up approach here.
max_val = max(max_val, price[j] + val[i-j-1])
- The price[j] + val[i-j-1] simply calculates the current price plus the previous(already calculated optimum value)value from the val[] array. The max_val calculation is done by comparing the maximum value between previous max_val and the current price[j] + val[i-j-1].
- For val[1] we run the j loop from 0 to 1, counting in all the 2^(1-1) possibilities.
- For val[2] we run the j loop from 0 to 2, counting in all 2^(2-1) possibilities i.e. (1+1), (2). since we already computed val[1] we won't be doing it again.
- For val[3] we build the solution in the same way by considering 2^(3-1) approaches(1+2),(1+1+1),(2+1),(3). Unlike the previous method, since we already have those values (val[1], val[2]) we won't be calculating again.
- For val[4] we chose among (1+1+1+1), (1+3), (2+2), (3+1), (1+1+2), (1+2+1), (2+1+1), (4).
We are starting the loop from 1 as that's our starting point.
The values and their respective implementation of our memoization table i.e val[] is given as follows
- val[0]=0 //a hypothetical situation, there isn't really a rod of length 0
- val[1]=1 //maximum revenue if the rod is of length 1, there isn't much calculation as there is only one answer for a rod length of 1
- val[2]=5//we need to find max of the total 2^(1) possibilities, the only difference here is instead of recalculating we use the values that are already existing.
- val[3]=8 // calculated in the same way by taking in all the 4 possibilities and returning 8
- val[4]=10 // calculation is done in the same way, as the previous
Time Complexity: O(n^2)
This time complexity is till better than the worse time complexity of the brute force approach.
Applications
- The approach explained here can be applicable to many dynamic programming questions directly like the fibonacci series, and indirectly be used to understand other questions like coin change problem, longest common subsequence(LCS) etc.
- The dynamic programming approach is very useful when it comes to optimization problems like the graph algorithms(All pair shortest path algorithm) that are extensively applied in real-life systems.
- The others include 0/1 knapsack problem, Mathematical optimization problem, Reliability design problem, Flight control and robotics control, Time sharing: It schedules the job to maximize CPU usage.
Differences between brute force and dynamic programming approaches for this problem
- The complexity in the case of dynamic programming has significantly improved from exponential to n^2.
- Brute force is a recursive top down, where as dynamic programming is a bottom up construction for the memoisation table.
- Using this table has helped us from redundant calculations in the case of brute force.
- Although using a table would impose additional space complexity (O(n)), but since this improves the running time of the algorithm, this works.
How many ways are there to cut a rod of length n?
Ask yourself: If there are 2^(N-1) possibilities, how are we able to solve it in O(N^2)? This is focused on how we are handling possibilities through Dynamic Programming.
With this, you will have the complete knowledge of solving the rod cutting problem. Enjoy.
|
https://iq.opengenus.org/rod-cutting-problem/
|
CC-MAIN-2020-24
|
refinedweb
| 1,763
| 58.01
|
CPlusPlus
Migrating to C++ Classes
The C++ification Process
Here's the approach to "C++ification" (that is, converting GObject/NRObject classes to use C++ features and idioms) that I've been developing:
- adopt C++ inheritance
- migrate "method" functions to C++ methods
- migrate GObject signals to SigC++ signals
- convert GObject classes to NRObject classes
- migrate NRObject virtual methods to C++ virtual methods
- migrate NRObject casting macro casts to implicit upcasts or dynamic_cast downcasts
- convert NRObject classes to bare C++ classes
Going to C++ cross-platform
The final goal is to have the software running in native code in any platform among the most used ones. So, here's a proposition: let's choose a C++ GUI library designed from step zero to be cross-platform, that is free and stable. The name is wxWidgets.
Details to follow later.
Historical Stuff
Overview
Currently, most objects in the codebase use a C-based "class" system, either GObject or NRObject. Subclassing is done by simple inclusion, though there is additional class metadata and an object-system-specific type factory mechanism.
Sadly, the factories' approach for object memory management and initialization is incompatible with using "natural" C++ constructors or destructors. Neither can virtual member functions be used; the vtable pointer will affect the member offsets.
As a consequence, when we make that leap, we'll need to do it a subsystem (at least an entire class hierarchy) at a time.
Here are, I think, some basic steps for making many of the changes incrementally, rather than having to do them all at once:
- Phase 1: migrate compatible features; this means:
- migrate towards SigC++ signals for notifications
- 1. add SigC++ signals to objects, and add hooks so the existing notification mechanisms (GObject signals or SPActiveObject notifications) also trigger them
- 2. migrate all clients to use the SigC++ signals
- 3. remove the old notification, and have code emit the SigC++ signals directly
- Start using "virtual pad" objects for virtual functions. These guys are temporary measure -- basically just an object with a pointer to the appropriate SPObject subclass, and a bunch of virtual methods. This lets us keep the vtable out of the real class until we are safely switched away from GObject objects.
- make "member functions" (e.g. sp_object_href(SPObject *, SPObject *)) into real member functions (e.g. SPObject::href(SPObject *))
- Phase 2: switch to "real C++"
- use our own factory facility (replacing sp_repr_type_lookup+g_object_new)
- move initialization code to constructors, and cleanup to destructors
- convert casting macros to appropriate use of C++ casting operators
- Move virtual methods from virtual pads up to the real classes, and get rid of the virtual pads
- Phase 3: cleanup; move classes to Inkscape namespaces, remove remaining C-style casts
Note that until Phase 2 is complete for a given class, we need to be very careful not to use virtual methods in it, either directly or indirectly (virtual destructors, virtual base classes, or base classes with virtual method)...
More Historical Notes
At WorldForge we had a client called UClient that was written early in the project and that had been its first deliverable product. It was nifty that it worked, and it even had sound, animation, weather effects, etc. implemented, but by all opinions the codebase was sheer terror. Nearly all the developers who looked at it decided it would be "more time efficient" to start new clients from scratch. We gave them the go-ahead and all the support we could muster, but after 1-2 years they were nowhere close to replacing UClient. Starting over from scratch doesn't work well; it's harder than you think and more likely to fail spectacularly.
Having gone through all that, we chose a new approach. We took the existing UClient codebase and one of the prototypical but aborted clients that had been developed, and started slowly refactoring UClient to start to look like the model.).
I can easily see us slip into the first approach with Inkscape, but feel strongly that the second approach, while less glamorous, would be much more likely to succeed.
This was why I was asking about what you thought of recompiling Sodipodi into C++. That would be the logical first step; switch the compiler from gcc to g++ and then work on fixing all the compiler errors. Get it to compile and cut a release. Then pick some subsystem that's in dire need of objectification, take a look at your prototype, and figure out how to change the codebase to make it resemble your codebase, and do it in a way that doesn't require you to change very many files. Shoot for ambitious but incremental steps that refactor it into the direction you want it.
UPDATE 2003-11-02: Well we've embarked on the C++-ification with full vigor. Currently we've licked the compiler errors and are down to linker errors.
The changes needed were:
- Rename variables named after C++ keywords like new and class
- Add casts from (void*) to correct type
- Make strings be implemented as gchar* instead of the mix of char*, guchar* and unsigned char*
- Add explicit casts for ints to correct enum types
- Other
Linker errors appear to be due to:
- Multiple definitions of structs
- Undefined references due to way C++ does function name mangling
- Undefined references due to other reasons
UPDATE 2003-11-18: I've found this insightful interview with the creator of C++
;-)
For anyone interested in learning C++, or in honing their understanding of it, Bruce Eckel's books are quite helpful. He focuses on getting you into the right mindset to take full advantage of the language, which is a very good thing. Plus you can either buy the books or download them freely.
|
http://wiki.inkscape.org/wiki/index.php?title=CPlusPlus&oldid=78674
|
CC-MAIN-2015-06
|
refinedweb
| 951
| 54.76
|
Using ReShade 2.0.3, Reshade.fx file from .zip is failing on startup.
- TheStradox?
I have same issue. Gotta edit Reshade.fx in game directory, or use the mediator.exe or reshade_assistant.exe. I'm trying to add sweet.fx ceejay's shaders to the Reshade that comes with Visual V. But i'm probably just going to use ReShade 3.0 alpha and create an entire new profile that uses heathaze.h & mxao.h in visualv's preset
- TheStradox
@turtlevan What would you suggest changing in the .fx file? The global settings perhaps? I'm not sure what presets are supposed to be used in the assistant, so I'm trying to avoid using it as much as possible.
#define preset at top = 1 to use,
#include <path>filename at bottom for each effect file
|
https://forums.gta5-mods.com/topic/212/using-reshade-2-0-3-reshade-fx-file-from-zip-is-failing-on-startup/3
|
CC-MAIN-2018-05
|
refinedweb
| 137
| 71.82
|
Whenever I run a program in eclipse whose configuration is
Eclipse Java EE IDE for Web Developers.
Version: Indigo Service Release 2
Build id: 20120216-1857
import java.lang.*;
public class Connection {
/**
* @param args
*/
public static void main(String[] args) {
// TODO Auto-generated method stub
System.out.println("Hello World");
}
}
Terminated means that the execution (of your program, or tool, ...) is complete and the JVM exited. It is not an indication that anything has gone wrong in itself.
Whether your program exits with no error, or with a nasty stack trace, you'll see this message.
Normally, if a program has sent anything to system.out or System.err, you would see it in the console, though.
|
https://codedump.io/share/rZrpzKxsmbdX/1/console-shows-terminated-message-in-eclipse
|
CC-MAIN-2016-50
|
refinedweb
| 117
| 60.61
|
The RapiDemo application relies on the MFC implementation of the Property Sheet Control. This type of control is really nothing more than a stack of dialog boxes, surmounted by a control that looks like the tabs in an index file, recipe box, or your junior high school binder. The function of the property sheet control is to arbitrate "page changing" behavior. When a user clicks on a tab, the property sheet control brings the corresponding dialog to the top of the z-order.
This means that each "page" of the property sheet must have a dialog resource from which the sheet can build a view. You don't have to worry about making the dialog templates exactly the same size because the property sheet control is assembled based on the size of the largest constituent page. However, the styles of the dialog resources are important. Set these styles for property page dialog resources, using the File.Properties dialog:
- On the Styles Tab, set Style to "Child", Border to "Thin", check the "Title Bar " checkbox, and clear other checkboxes.
- On the More Styles tab, check the "Disabled" checkbox.
Now, let's dissect the code for the application-specific behaviors of the classes that make up RapiDemo.
The Application Object, RapiDemoApp
The code for the class that implements the application object, CRapiDemoApp, is generated for us, for the most part. We make a few small modifications to the RapiDemoApp.cpp file. First, we add the header file for the class that implements the property sheet. (If you forget to do this, you'll get compiler error messages naming the CAllPages class members as undefined symbols.)
#include "AllPages.h"
We also modify the InitInstance() member, where the property sheet object is constructed, initialized, and launched as a modal dialog.
//////////////////////////////////////////////////////////////////// // The one and only CRapiDemoApp object CRapiDemoApp theApp; //////////////////////////////////////////////////////////////////// // CRapiDemoApp initialization BOOL CRapiDemoApp::InitInstance() { AfxEnableControlContainer(); CAllPages AllPages( "RapiDemo", NULL, 0 ); m_pMainWnd = &AllPages; AllPages.AddPropPages(); AllPages.DoModal(); return FALSE; }
We declare and initialize an object of the CAllPages class. This class implements the property sheet control, and is derived from CPropertySheet. CPropertySheet has three constructors. The one we use to construct the CAllPages object takes the following arguments: a character string specifying the caption for the property sheet control, the handle to the control's parent window, and the index of the property page that is initially visible. A property page's index is determined by the order in which it was added to the control. Passing a NULL parent window handle sets the application's main window as parent of the property sheet control.
Next, we call the AllPages AddPropPages() member function, which adds the individual pages to the property sheet control. Calling the DoModal() member launches the property sheet control, making it visible when the application opens.
Construction and Initializing The Property Sheet Object, AllPages
The CAllPages class contains the code that implements the property sheet control. First, we make modifications to both the class header file, AllPages.h. For starters, we add some include files:
#include "WalkReg.h" // "Walk Registry Tree" page's // class header #include "SystemStatusPage.h" // "System Status" page's class // header #include "RemoteFileAccessPage.h" // "Remote File Access" page's // class header #include <rapi.h>
The first three #include files are for the classes that implement the behavior of the individual pages in the property sheet control. The last one is the RAPI header and is needed in any source file that calls RAPI functions. (If you forget to add this include file, RAPI function calls and structures will come up as undeclared symbols when you build.)
// Implementation public: CRemoteFileAccessPage m_RemoteFileAccessPage; CSystemStatusPage m_SystemStatusPage; CWalkReg m_WalkRegPage; void AddPropPages(); virtual ~CAllPages(); HICON m_hIcon;
We add three member variables, one for each property page of the property sheet control. These members are typed according to the class that implements the behavior for its specific page. We also add a function prototype for the AddPropPages(); member.
Next, we make some additions to the AllPages.cpp file, adding the AddPropPages() member function.
//////////////////////////////////////////////////////////////////// // CAllPages message handlers void CAllPages::AddPropPages() { m_hIcon = AfxGetApp()->LoadIcon(IDR_MAINFRAME); m_psh.dwFlags |= PSP_USEHICON; m_psh.hIcon = m_hIcon; m_psh.dwFlags |= PSH_NOAPPLYNOW; // Lose the Apply Now button m_psh.dwFlags &= ~PSH_HASHELP; // Lose the Help button AddPage(&m_RemoteFileAccessPage); AddPage(&m_WalkRegPage); AddPage(&m_SystemStatusPage); }
The first part of this function manipulates the PROPSHEETHEADER structure, a base class data member that defines the appearance and behavior of the property sheet control. It's a fairly large structure that allows great flexibility in the creation of the property sheet control. The m_psh base class data member is a pointer to this structure.
The next three lines add property page objects to the property sheet control. The pages' indices are the reverse of the order in which they were added. Put another way, in the example shown above, the leftmost tab of this property sheet control will be the "System Status" page, the "Walk The Registry Tree" tab will be in the middle, and the rightmost tab will be for the "Remote File Access" page.
Initial.
|
http://mobile.codeguru.com/cpp/w-p/ce/pocketpc/article.php/c7491/Exploring-RapiDemo-Initializations.htm
|
CC-MAIN-2017-43
|
refinedweb
| 832
| 54.93
|
How to get this special ID and click it?
Started by
SugarBall,
2 posts in this topic
This topic is now closed to further replies.
Similar Content
-
- By comtech80
Folks, =.
- By lganta
Hello! and it didn't work (it's supposed to write that text in an input box from the game).
#include <MsgBoxConstants.au3> Sleep(3000); Send("some text"); MsgBox($MB_OK, "Notification", "Control was sent!");
Is there a way for the creators of the game to create some kind of security system against this? Or something happens because I updated to Windows 10?
Is there something I'm missing?
Thank you!
|
https://www.autoitscript.com/forum/topic/176508-how-to-get-this-special-id-and-click-it/
|
CC-MAIN-2018-05
|
refinedweb
| 102
| 69.18
|
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
re the ongoing poll...
Personally, I don't like departments limited to a single language, unless I feel some serious language innovation is taking place, or some seriously cool hacking.
Application specific scripting languages belong to the DSL department. Internet applications usually go in the XML department (which was never restricted just to XML).
I don't want a scripting department (I don't really like the term "scripting languages").
But JS may warrant a department even after considering these reservations. Opinions?
How about "lightweight languages?" It's a somewhat controversial term, but it embodies the spirit of many so-called scripting languages, if not a precise definition.
I agree about language-specific departments -- it doesn't scale up, but otherwise perhaps unfairly favors some languages over others.
The term "scripting language" may actually be accurate in practice: they tend to capture small snapshots of imperative actions such as animations or user interactions (think VB macros). But of course these kinds of programs shouldn't have to be specified imperatively.
Maybe a better focus is languages aimed at a high level of abstraction and tailored to rapid prototyping/implementation. Of course, "high-level languages" tries to get at this, but it's a relative term; what people consider high level changes over time.
lightweight languages? I predict someone will suggest "dynamic languages" next. Oh, the horror... ;-)
But let me be the first to agree anyway. I need to take some time off from O'Caml and work through CTM anyway, since I promised myself to become fluent in Oz. Maybe that will bring some perspective that I need right now.
Well, "dynamic" is awful because the term is so incredibly overloaded. That's not a fair comparison.
"Lightweight" is less overloaded but is still imprecise enough (not a wholly bad thing!) to encompass many things: languages that are easy to write quick prototypes in, languages that are themselves easy to implement, or easy to specify, or even languages that are restricted enough that they are easy to make super-efficient.
I won't fight you on this argument, though.. it's pretty hard to be both precise and general.
it's pretty hard to be both precise and general.
I'll take that as a challenge...
We have three terms — lightweight, dynamic, and scripting — whose meaning and precision are in question. Individually, their definition is pretty fuzzy, but perhaps by combining the terms we can add precision. These terms describe types of languages, so to help define the terms, we can look for languages which inhabit these types.
We've recently seen Haskell being used as a scripting language. Now I hope Haskell fans won't be too offended if I point out that Haskell can't really be called either lightweight or dynamic (if it can, then those terms really have no meaning whatsoever). So this demonstrates that Haskell is (or can be used as) a "heavyweight static scripting language".
Now, I think most people would agree that Common Lisp is a dynamic language, and it's very effective in a scripting role. But it can't be called lightweight in the sense used by the LL workshops, i.e. "easy to acquire, learn, and use", so Common Lisp is a "heavyweight dynamic scripting language".
The LL workshops considered languages like Python, Ruby, Javascript, Lua and Scheme as lightweight. Are all these languages also dynamic? We could quibble about R5RS Scheme, which is not dynamic in the sense of having namespaces or built-in objects being hash tables that can be modified at runtime, as in many of the other lightweight languages. But Scheme has other dynamic behaviors, and can easily be extended (and often is) to behave like its dynamic cousins. The scripting credentials of these languages are also well-established, so these languages are all "lightweight dynamic scripting languages".
A lesser analysis might be forgiven for concluding that all lightweight languages are dynamic. However, Boo uses Pythonic syntax with type inference, so is statically typechecked, and thus can certainly be described as a static language. I haven't used Boo, but a peek at the Boo primer shows that it's superficially pretty similar to Python, not only in syntax but also in scope, so I hereby classify Boo as a "lightweight static scripting language".
Finally, just to make sure that the term "scripting language" is not meaningless, we can observe that languages like C or Pascal are not well-suited to scripting tasks. I won't speculate as to the "weight" of these languages — perhaps we need more than a light/heavy binary scale here — but they're both static, non-scripting languages.
I have thus shown that while the terms "lightweight", "dynamic", and "scripting" are fuzzy when used on their own, they gain precision when used in combination. The bad news is that in light of this analysis, LtU now needs three new departments, and many previous stories need to be reclassified.
But in lieu of that, I think just adding a Javascript department would be fine. :)
You should either work for Jay Leno or for the government, I am not sure which...
That stings!
Jon Stewart then...
I actually think of Haskell as a lightweight language with a heavyweight type system. One of the Impure Thoughts articles I had semi-planned was going to expand on that somewhat, with the idea that both Haskell as a language (admittedly stripped of syntactic sugar somewhat) and most Haskell programs have "few moving parts" and the effect that has.
After sleeping on it, I still prefer items on lightweight languages and the like to go to the appropriate department (e.g., if the item is on an OOP construct it should go in the OOP department etc.). I think they are more different than alike.
It seems to me that the reason Javascript deserves a department is the amount of interesting stuff happenning around it. Thus, I think we should simply add JS to the languages in the spotlight (currently Python and Ruby), instead of openning up a more general department.
Chris concerns about dead departments are on point, but I think that's ok for spotlight departments (e.g., Python gets much less attention here these days than a couple of years ago which isn't surprising).
Opinions?
Well?
Should be some interesting JavaScript work coming down the line, so a spotlight on those developments should see some interesting stories.
Sounds like a reasonable approach to me.
That sounds fine.
I have one small story I can post as soon as the JavaScript department exists.
The Javascript spotlight department is open for you use...
I am not sure I am going to post an introductory message, so let me just clarify that the department is the Javascript and related developments (e.g., languages are compiled to JS) and not for the entire hierarchy Anton described ;-)
Just how important is it to categorize posts into departments? What are the use cases for these departments? Are people actually using the department features?
My guess is that most users do not use the department features. Creating a Javascript department is probably harmless, with the exception that it makes the site a bit more complex and people waste time agonizing on the very hard problem of putting stuff into "proper" categories.
I think categories would make more sense if LtU was posting the same number of stories as Slashdot. They have "Sections", and not all stories make the front page, so if you are interested in a certain topic it makes sense to visit the different Sections.
Yes, people are actually using the department features — grepping the logs shows plenty of accesses, other than those by robots.
If you're interested in a particular department, browsing that department's pages can be useful, giving a view of the site that you can't get any other way (searching on keywords is much more general). It's also possible to get an RSS feed for departments — see the XML icon at the bottom each department page. This makes it possible to get alerts about particular kinds of stories.
Since items can be multiply categorized, the departments are really more like tags. For a category like Javascript, it's easy to tell when it's appropriate for a story, and it doesn't prevent a story from being put in other categories as well.
The Departments page doesn't appear to include a link to the new JavaScript department.
Fixed.
I usually wait a couple of weeks to see how things go (don't want to jinx it, you know)...
If I've jinxed it, at least you have a scapegoat now!
|
http://lambda-the-ultimate.org/node/1525
|
CC-MAIN-2022-40
|
refinedweb
| 1,471
| 62.07
|
i have this piece of code:
import java.net.MalformedURLException;
import java.rmi.Naming;
import java.rmi.NotBoundException;
import java.rmi.Remote;
import java.rmi.RemoteException;
public class ClientLookup<T extends Remote> {
private T sharedObject;
public void lookup(String adress) throws MalformedURLException, RemoteException, NotBoundException {
sharedObject ...
I am trying to cast from a Collection of AObject's to ArrayList, but I get a warning. "Type safety: Unchecked cast from Collection to ArrayList. What I have read about this is that the JVM can't be sure that the Collection contains AObject's and thus give this warning. However. I have first coded my application in IBM Rational Application Developer 7(Which ...
So here is my current gripe: Generics provide type-safe collections, as a by-product reduce the need (and cost?) of casting. The emptyList() method uses the static EMPTY_LIST instance, so all good there, but it just casts it to the required type. It just feels like the old "we're OO, but we we also use primitives if we feel like it". We ...
I work with JDK 1.5. I would solve some warning with the java cast, example: 1) HashMap hmRow = (HashMap)it.next(); hmRow.put("keyOrder", listOrdenedColumn); /* WARNING HERE (*) */ (*) Type safety: The method put(Object, Object) belongs to the raw type HashMap. References to generic type HashMap should be parameterized HashMap hmRow = (HashMap)it.next(); /* WARNING HERE (*) */ hmRow.put("keyOrder", listOrdenedColumn); ...
When I compile my program, I get the following warning message: found : java.lang.Object required: java.util.ArrayList loadedKeys = (ArrayList) oKeys; I have an ArrayList in a Object variable (oKeys), and I need to cast it back to an ArrayList (loadedKeys). What is the best way to do this without warnings? Thank You!
|
http://www.java2s.com/Questions_And_Answers/Java-Data-Type/cast/warn.htm
|
CC-MAIN-2018-43
|
refinedweb
| 288
| 59.7
|
There are so many tutorials online about how to setup TypeScript with Node.js and Express, but I found them all overly confusing, incomplete and missing important features such as recompilation, reloading and final build steps.
Not only this, they miss out vital explanations and gloss over details. This post is aimed to be a comprehensive “how-to” guide on setting up your own Node.js server written in TypeScript.
✅ tl:dr; Check the Node + TypeScript + Express project on GitHub then follow along below!
We’ll be using Express to then send back some data which will be more than enough to get you started. Ready? Let’s dive in!
Table of contents
- Project setup
- Initialize a new GitHub project
- Creating a package.json
- Installing TypeScript
- Project Dependencies
- NPM Scripts: Serve and Start
- Creating an
index.tsfile
- Adding Express.js
- Pushing to GitHub
- Deploying your Node.js app
Project setup
First, we’ll need to setup our workspace, or project. I’ll be using VSCode, as it’s absolutely fantastic. You’re welcome to use whatever you want.
Open up a new Terminal window in VSCode or iTerm/etc.
When you open up your Terminal, use the
cd <directory-name> to move into the directory you wish to create the project inside.
For me, that’s this command:
cd Documents\GitHub
Now, we need to create the project folder:
mkdir node-express-typescript
Then
cd into that project folder:
cd node-express-typescript
Now we’re ready to get setup!
Initialize a new GitHub project
We’ll be using GitHub, if you don’t wish to push the code to your GitHub account then skip this step.
Go to GitHub and create a new repo. Mine will be called
node-express-typescript and therefore located at
github.com/ultimatecourses/node-express-typescript.
We’re not going to clone the repo, we’ll connect it later after adding our files. Now time to create the
package.json which will specify our project dependencies, and hold our npm scripts that we’ll run our TypeScript project with!
Creating a package.json
Before we can get our Node, Express and TypeScript project up and running, we need a
package.json to then declare and enable us to install the project dependencies.
Run the following to initialize a new project:
npm init
This will then walk you through a few steps, as we’ve already initialized a GitHub repository, we can use the URL during the installation to include it in our
package.json for us.
I’ve detailed each step below in a
# comment so please copy them accurately (using your own username/repo name):
# node-express-typescript package name: (express-typescript) # Just press "Enter" as we don't need to worry about this option version: (1.0.0) # Node.js setup with Express and TypeScript description: # dist/index.js entry point: (index.js) # Just press "Enter" test command: # Enter the GitHub URL you created earlier git repository: () # Just press "Enter" keywords: # Add your name author: ultimatecourses # I like to specify the MIT license license: (ISC) MIT
Once you’ve reached the last step it will say:
Is this OK? (yes)
Hit enter and you’ll see your new
package.json file that should look something like this:
{ "name": "node-express-typescript", "version": "1.0.0", "description": "Node.js setup with Express and TypeScript", "main": "dist/index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "repository": { "type": "git", "url": "git+" }, "author": "ultimatecourses", "license": "MIT", "bugs": { "url": "" }, "homepage": "" }
Installing TypeScript
Next up, run the following to install TypeScript as a local project dependency:
npm i typescript
With TypeScript installed locally, not globally, you should see this added to your
package.json:
{ //... "dependencies": { "typescript": "^4.1.3" } }
✨ You’ll also have a new
package-lock.jsonfile generated, which we’ll want to commit to Git shortly. You don’t need to do anything with this file.
Generating a tsconfig.json
To configure a TypeScript project, it’s best to create a
tsconfig.json to provide some sensible defaults and tweaks to tell the TypeScript compiler what to do. Otherwise, we can pass in compiler options.
Typically, you would
npm i -g typescript (the
-g means global) which then allows us to run
tsc to create a
tsconfig.json. However, with
npm we have something called
npx which the “x” essentially means “execute”.
This allows us to skip a global install and use
tsc within the local project to create a
tsconfig.json.
If we did try to run
tsc --init to create a
tsconfig.json, we’d see this error (because
typescript would not be available globally, thus
tsc also would be unavailable):
⛔ tsc : The term 'tsc' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + tsc --init + ~~~ + CategoryInfo : ObjectNotFound: (tsc:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException
This is where
npx comes into play to execute the local version of our
typescript install, let’s run the following:
npx tsc --init
And you should see something like this:
message TS6071: Successfully created a tsconfig.json file.
Now checkout your
tsconfig.json file. Yep, it looks a little overwhelming as there are lots of comments in there, but don’t worry about them too much, there are some great sensible defaults that you can simply leave as they are.
However, we’re going to make a quick tweak. Find the
outDir line:
// "outDir": "./",
Uncomment it and change it to:
"outDir": "dist",
🚀 This
distfolder will contain our compiled code, just JavaScript, which will then be served up by Node.js when we run our app. We write our code with TypeScript, recompile and serve the
distdirectory as the final output. That’s it!
If you clean up all the comments and remove unused options, you’ll end up with a
tsconfig.json like this:
{ "compilerOptions": { "target": "es5", "module": "commonjs", "outDir": "dist", "strict": true, "esModuleInterop": true, "skipLibCheck": true, "forceConsistentCasingInFileNames": true } }
Project Dependencies
As we’re going to create a Node.js application with Express, we’ll need to install a few bits and pieces.
Production Dependencies
Here’s what we can run next:
npm i body-parser cross-env dotenv express helmet rimraf
Here are some further details on each of those:
- body-parser extracts the entire
bodyof an incoming
requeststream (for Express) and exposes it on
req.bodyas something easier to work with, typically using JSON.
- cross-env sets environment variables without us having to worry about the platform.
- dot-env loads in
.envvariables into
process.envso we can access them inside our
*.tsfiles.
- express is a framework for building APIs, such as handling
GET,
PUT,
DELETErequests with ease and building your application around it. It’s simple and extremely commonly used.
- helmet adds some sensible default security
Headersto your app.
- rimraf is essentially a cross-platform
rm -rffor Node.js so we can delete older copies of our
distdirectory before recompiling a new
dist
Most of these packages ship with type definitions for TypeScript, so we can start using them right away. For anything else, their type definitions can usually be found on Definitely Typed.
Let’s install the types for the packages that don’t ship with them be default:
npm i @types/body-parser @types/express @types/node
You’ll notice I’ve thrown in
@types/node, which are type definitions for Node.js.
>.
Development Dependencies
As we need to develop our Node.js and TypeScript app locally, we’ll want to use nodemon to monitor changes to our files. Similarly, as we want to watch our TypeScript code for changes, we’ll install concurrently. This allows us to run multiple commands at the same time (
tsc --watch and
nodemon).
Don’t worry, this is the final install, which using
--save-dev will save to the
devDependencies prop inside our
package.json (instead of just
dependencies like a regular
npm i <package>):
npm i --save-dev concurrently nodemon
We should then end up with a nice
package.json that looks like so:
{ "name": "node-express-typescript", "version": "1.0.0", "description": "Node.js setup with Express and TypeScript", "main": "dist/index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "repository": { "type": "git", "url": "git+" }, "author": "ultimatecourses", "license": "MIT", "bugs": { "url": "" }, "homepage": "", "dependencies": { "@types/body-parser": "^1.19.0", "@types/express": "^4.17.11", "@types/node": "^14.14.22", "body-parser": "^1.19.0", "cross-env": "^7.0.3", "dotenv": "^8.2.0", "express": "^4.17.1", "helmet": "^4.4.1", "rimraf": "^3.0.2", "typescript": "^4.1.3" }, "devDependencies": { "concurrently": "^5.3.0", "nodemon": "^2.0.7" } }
You’ll notice your
package-lock.json file will also be updated too, we’ll commit this to GitHub shortly too.
NPM Scripts: Serve and Start
Now time to get our project running locally via
localhost, then we’ll talk about how it will “run” on a real server when we deploy the application.
Before we can do that, I’ve already created the commands (npm scripts) that we’ll need to run to get everything running locally for development and also built for production.
Adjust your
"scripts" property to look like this:
"scripts": { "build": "rimraf dist && tsc", "preserve": "npm run build", "serve": "cross-env NODE_ENV=development concurrently \"tsc --watch\" \"nodemon -q dist/index.js\"", "prestart": "npm run build", "start": "cross-env NODE_ENV=production node dist/index.js", "test": "echo \"Error: no test specified\" && exit 1" },
Let’s walk through these commands in detail, so you can understand what’s going on:
"build"is used in a few places so let’s start here. When we
npm run buildour
rimrafpackage will delete our existing
distfolder, ensuring no previous files exist. Then, we run
tscto build our project, which as we know is compiled into
dist(remember we specified this in our
tsconfig.json). i.e. deletes the old code and replaces it with the new code.
"preserve"just calls our
"build"command to clean up any existing
distfolders and recompiles via
tsc. This gets called before the
"serve"command, when we run
npm run serve(which we’ll use to develop on
localhost).
"serve"uses our
cross-envpackage to set the
NODE_ENVto
development, so we know we’re in dev mode. We can then access
process.env.NODE_ENVanywhere inside our
.tsfiles should we need to. Then, using
concurrentlywe’re running
tsc --watch(TypeScript Compiler in “watch mode”) which will rebuild whenever we change a file. When that happens, our TypeScript code is outputted in out
distdirectory (remember we specified this in our
tsconfig.json). Once that’s recompiled,
nodemonwill see the changes and reload
dist/index.js, our entry-point to the app. This gives us full live recompilation upon every change to a
.tsfile.
"prestart"runs the same task as
"preserve"and will clean up our
dist, then use
tscto compile a new
dist. This happens before the
"start"is kicked off, which we run via
npm start.
"start"uses
cross-envagain and sets the
NODE_ENVto
production, so we can detect/enable any “production mode” features in our code. It then uses
node dist/index.jsto run our project, which is already compiled in the
"prestart"hook. All our
"start"command does is execute the already-compiled TypeScript code.
"test"pffft. What tests.
It’s a lot to take in, read it a few times if you need to, but that’s about it.
Now we know this, let’s create some files and get things up and running.
Creating an
index.ts file
Otherwise known as our “entry-point” to the application, we’ll be using
index.ts.
📢 You can use another filename if you like, such as
app.ts, however
index.tsis a naming convention I prefer as it is also the default export for a folder. For example importing
/srcwould be the same as importing
/src/index. This cleans things up and can neatly arrange code in larger codebases, in my opinion.
Now, in the root of the project, create a
src folder (or use
mkdir src) and then create an
index.ts with this inside (so you have
src/index.ts):
console.log('Hello TypeScript');
Ready to compile and run TypeScript? Let’s go!
npm run serve
You should see some output like this (notice
preserve,
build,
serve are executed in the specific order):
> [email protected] preserve > npm run build > [email protected] build > rimraf dist && tsc > [email protected] serve > cross-env NODE_ENV=development concurrently "tsc --watch" "nodemon -q dist/index.js" [0] Starting compilation in watch mode... [1] Hello TypeScript
Boom! We’ve done it. It seems a lot to get going, but I’ve wanted to show you how to do this from scratch. There’s nothing more for us to “setup” now, we can get coding and both our local development and deployed application are ready to roll.
Now, change
console.log('Hello TypeScript') to
console.log('Hello JavaScript') and you’ll see things recompile:
[0] File change detected. Starting incremental compilation... [0] Found 0 errors. Watching for file changes. [1] Hello JavaScript
Mission success. We’ve now setup a local development environment that’ll recompile everytime we save a
*.ts file and then
nodemon will restart our
dist/index.js file.
Adding Express.js
Let’s get Express setup and a server listening on a port.
We’ve actually got everything installed already, including any
@types we might need. Amend your
index.ts to include the following to test out your new Express server:
import express, { Express, Request, Response } from 'express'; import bodyParser from 'body-parser'; import helmet from 'helmet'; import dotenv from 'dotenv'; dotenv.config(); const PORT = process.env.PORT || 3000; const app: Express = express(); app.use(helmet()); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: true })); app.get('/', (req: Request, res: Response) => { res.send('<h1>Hello from the TypeScript world!</h1>'); }); app.listen(PORT, () => console.log(`Running on ${PORT} ⚡`));
Now visit
localhost:3000 in your browser and you should see the text
Hello from the TypeScript world!.
This is because we’ve setup an
app.get('/') with Express to return us some basic HTML.
TypeScript to ES5
Inside our
tsconfig we’ve set the compiler option
"target": "es5", so if you actually jump into your
dist/index.js file, you’ll see the “transpiled” code, which is then fully compatible with most Node.js environments.
> why using TypeScript is a great choice, as we can use many modern features that aren’t available just yet on Node.js environments (and even in browsers, TypeScript isn’t just for Node.js).
Here’s the code that
tsc creates from our
index.ts file, this is known as “emitting” in TypeScript talk:
"use strict"; var __importDefault = (this && this.__importDefault) || function (mod) { return (mod && mod.__esModule) ? mod : { "default": mod }; }; Object.defineProperty(exports, "__esModule", { value: true }); var express_1 = __importDefault(require("express")); var body_parser_1 = __importDefault(require("body-parser")); var helmet_1 = __importDefault(require("helmet")); var dotenv_1 = __importDefault(require("dotenv")); dotenv_1.default.config(); var PORT = process.env.PORT || 3000; var app = express_1.default(); app.use(helmet_1.default()); app.use(body_parser_1.default.json()); app.use(body_parser_1.default.urlencoded({ extended: true })); app.get('/', function (req, res) { res.send('<h1>Hello from the TypeScript world!</h1>'); }); app.listen(PORT, function () { return console.log("Running on " + PORT + " \u26A1"); });
It’s pretty similar, but you’ll notice it’s just ES5 which can run pretty much everywhere with no problems.
One of my biggest gripes with Node.js (especially with Serverless Lambda functions) is having to use the CommonJS
require('module-name') syntax and lack of import/export keywords, which you’ll see our ES5 version “backports”. Meaning, we get to write shiny new TypeScript, a superset of JavaScript, without having to worry too much about different environments.
Now you’re ready to get started with your Node.js and TypeScript development. Go build and I’d love to hear on Twitter how you get on, give me a follow and check out my TypeScript courses if you haven’t already.
Though, we’re not done just yet… We still need to push our final project to GitHub.
Pushing to GitHub
In the project root, create a
.gitignore file (with the dot prefix) and add the following values like so (then save and exit the file):
node_modules dist .env
A
.gitignore file will stop us pushing those folders, and their containing files, to GitHub. We don’t want to push our huge
node_modules folder, that’s pretty bad practice and defeats the purpose of dependencies.
Similarly, we don’t want to push our compiled
dist directory.
With Node.js development you’ll also likely use a
.env file as well, so I’ve included that as a default too (even though we’re not using one here).
👻 The
.envfile is used to keep secrets such as API keys and more, so never push these to version control! Set up the
.envlocally then mirror the variables on your production environment too.
Now time to commit our code to GitHub!
Run each of these (line by line):
git init git add . git commit -m "Initial commit" git branch -M main git remote add origin git push -u origin main
Now head over to your Node + TypeScript + Express GitHub project and check out your finished masterpiece.
🙌 If you want to learn even more, I’ve built a bunch of TypeScript Courses which might just help you level up your TypeScript skills even further. You should also subscribe to our Newsletter!
Deploying your Node.js app
Once you’ve built out your application you can simply push it to a service. I like to use something like Heroku, where upon each commit to the GitHub repo it will automatically deploy a new version.
Remember, all you need is to have your production environment run
npm start, and the project will build and execute the ES5 code. And that’s it! You’re fully setup and ready to go.
I hope you learned tonnes in this article, it was certainly fun breaking it all down for you.
Happy TypeScripting!
|
https://ultimatecourses.com/blog/setup-typescript-nodejs-express
|
CC-MAIN-2021-21
|
refinedweb
| 3,030
| 66.84
|
React vs Angular 2: Comparison Guide for Beginners
At the time of writing, it becomes more and more difficult for beginners to choose a JavaScript framework to use for their project, or even to start learning. Every day, we hear new systems, approaches, and tools to make things easier. Some tools bundle, minify, abstract, hide, log, debug, and interact more directly to the DOM. Each one has their uses, but they also contribute to the JavaScript fatigue — the (too) many tools in the JavaScript world only make learning and using it feel more complicated than it should.
React vs Angular 2: Comparison scopeReact vs Angular 2: Comparison scope
This article aims to provide you some insight into JavaScript by comparing React and Angular 2; two of the most popular JavaScript frameworks today (you can read past comparisons between React and Angular 1; and React and Angular performance comparison). I will try to take you to a short but discerning journey to help make an informed decision when deciding which of the two you should learn or use for your project. React and Angular 2 will be compared based on the following:
- The concepts
- Setting up
- Learning
1. The concepts1. The concepts
Let's begin with the basics...
A. Angular2A. Angular2
Watchers: Watchers are attached to each component and each time a component is changed, watchers check if we should modify something else; and if needed, make appropriate modifications. The Angular 2 team did a great job to make that part way faster than its previous version. So from now on, each time a component is changed, we don’t have to run any verifications on objects (depending on immutable elements).
There is another striking point in using Angular 2: it requires TypeScript. But we will talk about this again…
B. ReactB. React
Facebook’s baby is more like a UI component render than a full framework. The big thing (that everyone is talking about) is the virtual DOM. This is a killer feature which is giving React three main advantages:
- The changes occur by comparison between the DOM and the virtual DOM only, so React will only change what’s needed in the most optimal way.
- We don’t really need a browser to test React as we don’t interact directly with the DOM.
- We can connect the Virtual DOM to another entity (look at the mobile developments made in native code or Electron)
Components created in React have a state (representing the component-related data) and updating this state will allow your page to be reactive.
Imagine creating a “counter” component — the thing that you will likely want to change is the value of that counter, it will then be the state of our counter component.
2. Setting up2. Setting up
One of the key factors to choose a framework today are the tools we have to learn to fully understand and utilize it well. As we already have a lot (Docker, Git, Rails, Django, Node.js, .NET) that can help us in deploying, versioning, providing servers, and for APIs — there’s only too much we can learn. And this can especially feel overwhelming for beginners.
But it’s no question that we will invest so much time using it, so let’s look at the overall learning difficulty of these two frameworks.
First, say hello!First, say hello!
At first, a very naive approach — let’s learn how to say hello in both of these systems (with the prerequisite of already having Node.js and npm installed).
A. Angular2:A. Angular2:
I went to the quickstart guide on Angular’s website:
From the website, copy-paste these four configuration files in your application folder:
- package.json identifies npm package dependencies for the project.
- tsconfig.json defines how the TypeScript compiler generates JavaScript from the project’s files.
- typings.json provides additional definition files for libraries that the Typecript compiler doesn’t natively recognize.
- systemjs.config.js provides information to a module loader about where to find application modules, and registers all the necessary packages. It also contains other packages that will be needed by later documentation examples.
Then install the dependencies and create the root module:
import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; @NgModule({ imports: [BrowserModule] }) export class AppModule { }
Theoretically, this step is enough to provide us a working app, but as it is doing nothing, it’s not really fun at the moment… Let’s take one another step.
Let’s add a component to our app — a component in charge of displaying a nice welcome message to anyone executing the app.
import { Component } from '@angular/core'; @Component({ selector: 'my-app', template: '<h1>Welcome everyone!!</h1>' }) export class AppComponent { }
In addition, we need to slightly change our app.module.ts file as we need to reference our brand new
AppComponent. We also need to tell Angular to start our application (in a new main.ts file).
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app.module'; const platform = platformBrowserDynamic(); platform.bootstrapModule(AppModule);
Add some style and provide the adapted index.html file (again, just copy-paste the one found on the Angular guide), and you can run it in the first terminal where you find:
~/projects/angulartest$ npm start
At this point, we already used some TypeScript (and compared to Vanila JS, it provides a better code organization, typing, and annotations; but this is another thing to learn when you get into Angular 2).
Done!
You now have a nice first page that will change whenever you make changes in your module (the start command launches both the server and a watcher).
B. React:B. React:
Again, let’s proceed like any newbie would do at the beginning — let’s start at the installation guide:
React has a slightly different approach. Here, you can install a package that will create a very simple working app for you.
The only thing you have to do is to install the package, create an application with the command provided, and start the server.
npm install -g create-react-app create-react-app hello-world cd hello-world npm start
And with just those first commands, we have this result:
Now that we got React started, they’ll first recommend using bundlers. If you are not familiar with how it works and what it does, I just recommend you to go the Webpack’s homepage or read this getting started tutorial.
In other words, you turn a bunch of small files with a lot of relationships and connections into bigger, “reunited” files. The main point is that it can be interpreted then by the browsers, even when minified. With such tools, you will be able to bundle all react components to make sure all the dependencies are resolved.
Webpack and Browserify (just to mention some) are of course useful, and you will need to choose which one you want to use in your React project. If you’re interested in using Browserify, you can check out this guide. It can be difficult to know at first how and what we need to work properly with React but it can be learned through more use.
Verdict:
From what we have discussed until now, I must admit that I prefer Angular 2’s way to get started. It is a little longer for sure, but it’s easier to have an idea of what we need to setup, and how components are interacting.
3. Learning3. Learning
Let’s breathe a little after those long descriptions and let’s try to make informed opinions out of it. We previously focused on setting up for initial use of Angular 2 and React, and it is a key factor in knowing which UI framework/library to use and learn. Let’s go through other key factors and bring other considerations to the table to see how the two compare.
A. PracticeA. Practice
But in the long run, it is actually a little easier to think in React. Angular 2 is, of course, an efficient framework — but my personal preference goes with React for clarity. Coding with Flux has a lot to do with it and states a very simple workflow to follow.
Moreover, working in JSX makes things more readable and would have the effects you plan it to have. One of the most criticized aspects of Angular is that a newcomer has to learn a lot of new directives and keywords, notoriously all the
ng-* friends. This issue has been tackled a lot by the Angular team as they continue to improve the framework. This Q&A with Google’s Angular Core Team might help users understand Angular 2’s features more.
But one of the big differences between Angular and React is the way they consider HTML and JS.
Angular puts JS in HTML whereas React puts HTML into JavaScript. Some would say that is a matter of taste, but I find it more convenient to handle JS from the beginning to the end, and just to show you, here is how things look like in these two systems:
Here is a list in Angular 2:
<ul> <li * {{i}} {{item}} </li> </ul>
Here is the same in React:
let List = function({ items }) { return ( <ul> {items.map(item => <li key={item.id}>{item.name}</li> )} </ul> ); }
You have real and actual JavaScript code inside the braces, and the function used to render the component is clear — any developer who is used to JS won’t get lost. But then again, it’s all just a matter of personal preference.
Verdict:
B. DifficultyB. Difficulty
The use of TypeScript is better from a “strictness” perspective, and not from a “learning” perspective, as you will find yourself learning Angular and TypeScript at the same time. So, you will really find it more difficult to climb that wall at the beginning. But after flexing your muscles, you will walk better. Note that nothing prevents you from not using TypeScript, but most of the examples that you will find on the web for Angular will be in TS.
Verdict:
C. CommunityC. Community
In terms of community and popularity, both frameworks can now rely on a huge developer base all over the world — and both still continue to grow fast.
A quick look at Stackoverflow and stats of Github repos show, however, that React is more popular at the moment, but it should not be a huge deciding factor for beginners, as both communities are really active.
Here are the figures for React ():
Here are the figures for Angular ():
As you can see, both are really popular, even if React seems to get more and more people getting to know and experimenting with it.
Verdict:
D. DebuggingD. Debugging
React’s “magic” is about the update of DOM (and how it is changed from the virtual DOM), other than that, there’s not a lot of notable advantages, especially if you are using Flux (but we will get back to that in a while). In Angular, because you have watchers everywhere, debugging can be a little challenging on its own. But we have to be fair with Angular: providing true HTML templates can make HTML debugging easier. But I guess it depends on the projects you are working on.
Let’s talk about Flux for a moment so I can show you why I would say that debugging components in React is really not that painful.
As Facebook wanted a unidirectional data flow, they came up with a specific way to organize the key files and functionalities of components to make them readable, self-explanatory, and easy to debug.
- When there is a change in the app (someone presses a button, clicks on a link, etc.), views send some actions to a dispatcher. (Ex: Someone clicked on the “plus one” button of a counter, it should then go from 9 to 10. The view sends an action to the dispatcher called “INCREMENT”)
- The dispatcher sends the action off to all the stores registered to it. Each store is responsible for taking and executing this action or not. (Ex: As the dispatcher, I send the “INCREMENT” action to all the listening stores.)
- The store in question changes the state of the component and notifies the controller view. (Ex: The “Counter” store updates the state from 9 to 10 and makes the controller view know)
- Child views of the view controller are updated. (Ex: We really see the 9 switching to 10 in our counter component)
It makes building a component ridiculously easy (once you’ve done one, you just follow the same pattern over and over) and allows you to trigger the step to incriminate when the whole chain has a problem. It is unidirectional, clean, and simple. To debug with such an architecture becomes smooth and quick.
Verdict:
E. SpeedE. Speed
Before talking about performances, we should talk a little about how the two handle binding. On one side, Angular 2 uses two-way data binding. This means that if I decide to change the value in the DOM, say an input field or a text area, both the view and the model will be updated. This behavior is made possible by a lot of observers. Each binding requires a watcher; so the bigger your app is getting, the bigger the impact of those watchers is also going to be.
On the other side with React, we have to write the code that handles tracking the changes between view and models. But once it is done (and even though you might feel that implementing something like Flux is going to slow down your app), the components stay very fast as we only change the elements that are changed in the DOM (thanks to the virtual DOM of React, only the virtual DOM elements that have difference with the actual DOM elements are updated). The result is that updates are made in a smoother way.
To make it a bit more precise, please check this link, as you will get true benchmarks on the performances of both Angular 2 and React (among others, which is always good to see).
Verdict:
In conclusionIn conclusion
Don’t be shy. Try React and Angular 2 to make your own choice! But I hope this comparison could help you decide (if you need to pick one) for your project, and that you could better understand the environments of these frameworks on your journey to becoming an expert developer.
There is no clear good or bad guy here, both Angular 2 and React are complete products, with strong communities and teams behind them.
Author’s BioAuthor’s Bio
Muhammad has been working as a freelance technical writer for the past 3 years, and has worked with approximately 100 clients from across the globe on 150+ projects. Muhammad is fond of programming, logic development, digital healthcare, and neurosciences. He has a Bachelors degree in Electrical Engineering and a Masters degree in Biomedical Engineering.
|
https://www.codementor.io/codementorteam/react-vs-angular-2-comparison-beginners-guide-lvz5710ha
|
CC-MAIN-2018-05
|
refinedweb
| 2,501
| 58.42
|
From: Pavol Droba (droba_at_[hidden])
Date: 2002-12-20 03:18:21
On Thu, Dec 19, 2002 at 03:32:12PM -0500, David Abrahams wrote:
> Pavol Droba <droba_at_[hidden]> writes:
>
> > On Thu, Dec 19, 2002 at 12:04:03PM -0500, David Abrahams wrote:
> >> David Abrahams <dave_at_[hidden]> writes:
> >>
> >> > Pavol Droba <droba_at_[hidden]> writes:
> >> >
> >> >> but I'm wondering, why it is not documented, or at least mentioned somewhere?
> >> >> There are more of such useful headers in boost/detail directory, which are used
> >> >> by various libraries. Whouldn't it make sense to write a simple doc mentioning
> >> >> all this utilities, so people don't have to invent a wheel again and
> >> >> again?
> >> >
> >> > Be my guest ;-)
> >> >
> >> > The hardest part of writing libraries is certainly documenting them.
> >> > Speaking for myself, I'm lazy, and only wish to spend the effort to
> >> > document libraries I'm going to release to the world.
> >
> > I know what do you mean, I'm working on string_algo library for boost,
> > and (un)fortunately, I'm getting the to stage where I have to write documentation :(
> > For me it is the hardest part of work ...
> >
> >>
> >> Pavol,
> >>
> >> I just realized the above might sound discouraging... I really *would*
> >> encourage you to document the useful utilities so we can make them an
> >> official part of boost. That file in particular might be good as part
> >> of the compatibility library.
> >
> > Well, I might try if I find a time for doing it:)
> >
> > Anyway, what I would like to suggest in the mean time, is to at least make a
> > summary of these utilities, so that boost developers have a place where they
> > can check, if something they need, does not already exists.
> > I don't know the place and the form, I think it would be useful.
>
> Maybe a Wiki page at
>
> ?
>
> Why don't you start one?
Good point, I'll try.
Actually I have another question. Why is iterator_traits class so hidden. There is a boost::iterator class
which is supposed to be a replacement for std::iterator. Why isn't iterator_traits also in boost/ directory
and part of boost namespace?
Regards
Pavol
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
https://lists.boost.org/Archives/boost/2002/12/41505.php
|
CC-MAIN-2020-10
|
refinedweb
| 374
| 72.66
|
I'm trying to rewrite the application in this, in Android Studio link, which is written in Eclipse. There are two problems, first problem is there is this line in the project :
import com.example.webserviceactivity.R;
I couldn't write this one on Android Studio. The second problem is, in this part of the code :
b = (Button) findViewById(R.id.button1); //Button Click Listener b.setOnClickListener(new OnClickListener() { public void onClick(View v) { //Check if Celcius text control is not empty if (et.getText().length() != 0 && et.getText().toString() != "") { //Get the text control value celcius = et.getText().toString(); //Create instance for AsyncCallWS AsyncCallWS task = new AsyncCallWS(); //Call execute task.execute(); //If text control is empty } else { tv.setText("Please enter Celcius"); } } });
I have this error :
error: cannot find symbol class AsyncCallWS Android in this part :
AsyncCallWS task = new AsyncCallWS();
How can I solve these problems? Thanks.
|
http://www.howtobuildsoftware.com/index.php/how-do/bfA/java-android-web-services-error-cannot-find-symbol-class-asynccallws-android
|
CC-MAIN-2018-22
|
refinedweb
| 147
| 50.12
|
Technical Support
On-Line Manuals
CARM User's Guide
Discontinued
#include <stdio.h>
void vprintf (
const char *fmtstr, /* pointer to format string */
char *argptr); /* pointer to argument list */
The vprintf function formats a series of strings and
numeric values and builds a string to write to the output stream
using the putchar function. This function is similar to the
printf.
Note
The vprintf function returns the number of characters
actually written to the output stream.
gets, puts, sprintf, sscanf, vsprintf
#include <stdio.h>
#include <stdarg.h>
void error (char *fmt, ...) {
va_list arg_ptr;
va_start (arg_ptr, fmt); /* format string */
vprintf (fmt, arg_ptr);
va_end (arg_ptr);
}
void tst_vprintf (void) {
int i;
i = 1000;
/* call error with one parameter */
error ("Error: '%d' number too large\n", i);
/* call error with just a format string */
error ("Syntax.
|
http://www.keil.com/support/man/docs/ca/ca_vprintf.htm
|
CC-MAIN-2019-43
|
refinedweb
| 131
| 64.1
|
If:
9 thoughts on “Combine 2 Django Querysets from Different Models”
Thanks, It worked like a charm. Also you can add info about how to sort in descending order
Simply add reverse=True to the sorted method. I’ve updated the gist with an example.
Good article. Hopefully people will read this and realize Django is just Python, and be more pythonic when writing Django code.
Thanks
How would the front end code look to display the required fields from all the chained models in the front end templates?
Front-end code remains unchanged from how you would normally display fields from a model.
{{ mymodel.field }}
Is there a generic CBV that could handle these chained querySets? Can’t figure out how to unwrap these things.
Nice article, it’s helped a lot to solve my problem. thanks
hello man! I used chain to combine two different query set, but the problem I am having is that I can not sort or filter using the slug. it seems chain does not allow .object.get(slug)
def sport_list(request, slug):
travels= Travel.objects.all().order_by(‘-date’)
nutritions= Nutrition.objects.all().order_by(‘-date’)
sports= Sport.objects.all().order_by(‘-date’)
sportas = chain(travels, nutritions, sports)
newone = sportas.object.get(slug)
|
https://chriskief.com/2015/01/12/combine-2-django-querysets-from-different-models/
|
CC-MAIN-2019-09
|
refinedweb
| 207
| 68.36
|
Overview
For me, the hardest part of learning Go was in structuring my application. Prior to Go, I was working on a Rails application and Rails makes you structure your application in a certain way. “Convention over configuration” is their motto. But Go doesn’t prescribe any particular project layout or application structure and Go’s conventions are mostly stylistic.
I’m going to show you four patterns that I’ve found to be tremendously helpful in architecting Go applications. These are not official Gopher rules and I’m sure others may have differing opinions. I’d love to hear them! Please comment as you go through if you have suggestions.
1. Don’t use global variables
The Go net/http examples I read always show a function registered with http.HandleFunc like this:
package main
import (
“fmt”
“net/http”
)
func main() {
http.HandleFunc(“/hello”, hello)
http.ListenAndServe(“:8080", nil)
}
func hello(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, “hi!”)
}
This example gives an easy way to get into using net/http but it teaches a bad habit. By using a function handler, the only way to access application state is to use a global variable. Because of this, you may decide to add a global database connection or a global configuration variable but these globals are a nightmare to use when writing unit tests.
A better way is to make specific types for handlers so they can include the required variables:
type HelloHandler struct {
db *sql.DB
}
func (h *HelloHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
var name string
// Execute the query.
row := h.db.QueryRow(“SELECT myname FROM mytable”)
if err := row.Scan(&name); err != nil {
http.Error(w, err.Error(), 500)
return
}
// Write it back to the client.
fmt.Fprintf(w, “hi %s!\n”, name)
}
Now we can initialize our database and register our handler without the use of global variables:
func main() {
// Open our database connection.
db, err := sql.Open(“postgres”, “…”)
if err != nil {
log.Fatal(err)
}
// Register our handler.
http.Handle(“/hello”, &HelloHandler{db: db})
http.ListenAndServe(“:8080", nil)
}
This approach also has the benefit that unit testing our handler is self contained and doesn’t even require an HTTP server:
func TestHelloHandler_ServeHTTP(t *testing.T) {
// Open our connection and setup our handler.
db, _ := sql.Open("postgres", "...")
defer db.Close()
h := HelloHandler{db: db}
// Execute our handler with a simple buffer.
rec := httptest.NewRecorder()
rec.Body = bytes.NewBuffer()
h.ServeHTTP(rec, nil)
if rec.Body.String() != "hi bob!\n" {
t.Errorf("unexpected response: %s", rec.Body.String())
}
}
UPDATE: Tomás Senart and Peter Bourgon mentioned on Twitter that you can simplify this further by wrapping your handlers with a closure. This allows you to easily compose your handlers.
2. Separate your binary from your application
I used to place my main.go file in the root of my project so that when someone runs “go get” then my application would be automagically installed. However, combining the main.go file and my application logic in the same package has two consequences:
- It makes my application unusable as a library.
- I can only have one application binary.
The best way I’ve found to fix this is to simply use a “cmd” directory in my project where each of its subdirectories is an application binary. I originally found this approach used in Brad Fitzpatrick’s Camlistore project where he uses several application binaries:
camlistore/
cmd/
camget/
main.go
cammount/
main.go
camput/
main.go
camtool/
main.go
Here we have 4 separate application binaries that can be built when Camlistore is installed: camget, cammount, camput, & camtool.
Library driven development
Moving the main.go file out of your root allows you to build your application from the perspective of a library. Your application binary is simply a client of your application’s library. I find this helps me make a cleaner abstraction of what code is for my core logic (the library) and what code is for running my application (the application binary).
The application binary is really just the interface for how a user interacts with your logic. Sometimes you might want users to interact in multiple ways so you create multiple binaries. For example, if you had an “adder” package that that let users add numbers together, you may want to release a command line version as well as a web version. You can easily do this by organizing your project like this:
adder/
adder.go
cmd/
adder/
main.go
adder-server/
main.go
Users can install your “adder” application binaries with “go get” using an ellipsis:
$ go get github.com/benbjohnson/adder/...
And voila, your user has “adder” and “adder-server” installed!
3. Wrap types for application-specific context
One trick I’ve found especially helpful is realizing that some generic types should be wrapped to provide application-level context. A great example of this is wrapping the DB and Tx (transaction) types. These types can be found in the database/sql package or other database libraries such as Bolt.
We start by wrapping these types like this:
package myapp
import (
"database/sql"
)
type DB struct {
*sql.DB
}
type Tx struct {
*sql.Tx
}
We then wrap the initialization function for our database and transaction:
// Open returns a DB reference for a data source.
func Open(dataSourceName string) (*DB, error) {
db, err := sql.Open("postgres", dataSourceName)
if err != nil {
return nil, err
}
return &DB{db}, nil
}
// Begin starts an returns a new transaction.
func (db *DB) Begin() (*Tx, error) {
tx, err := db.DB.Begin()
if err != nil {
return nil, err
}
return &Tx{tx}, nil
}
And now we can add application specific functions to our transactions. For example, if our application has users that need to be validated before being created, a Tx.CreateUser() would be a good function to add:
// CreateUser creates a new user.
// Returns an error if user is invalid or the tx fails.
func (tx *Tx) CreateUser(u *User) error {
// Validate the input.
if u == nil {
return errors.New("user required")
} else if u.Name == "" {
return errors.New("name required")
}
// Perform the actual insert and return any errors.
return tx.Exec(`INSERT INTO users (...) VALUES`, ...)
}
This function can get more complicated if, for example, a user needs to be validated against another system before being created or other tables need to be updated. To your application’s caller, though, it’s all isolated in one function.
Transactional composition
Another benefit to adding these functions to your Tx is that it allows you to compose multiple actions in a single transaction. Need to add one user? Just call Tx.CreateUser() once:
tx, _ := db.Begin()
tx.CreateUser(&User{Name:"susy"})
tx.Commit()
Need to add a bunch of users? You can use the same function. No need for a Tx.CreateUsers() function:
tx, _ := db.Begin()
for _, u := range users {
tx.CreateUser(u)
}
tx.Commit()
Abstracting your underlying data store also makes it trivial to swap out a new database or to use multiple databases. They’re all hidden from your calling code by your application’s DB & Tx types.
4. Don’t go crazy with subpackages
Most languages let you organize your package structure however you’d like. I’ve worked in Java codebases where every couple of classes get stuffed into another package and these packages would all include each other. It was a mess!
Go only has one requirement for packages: you can’t have cyclic dependencies. This cyclic dependency rule felt strange to me at first. I originally organized my project so each file had one type and once there were a bunch of files in a package then I’d create a new subpackage. However, these subpackages became difficult to manage since I couldn’t have package “A” include package “B” which included package “C” which included package “A”. That would be a cyclic dependency. I realized that I had no good reason for separating out packages except for having “too many files”.
Recently I’ve found myself going the other direction — only using a single root package. Usually my project’s types are all very related so it fits better from a usability and API standpoint. These types can also take advantage of calling unexported between them which keeps the API small and clear.
I found a few things helped me move toward larger packages:
- Group related types and code together in each file. If your types and functions are well organized then I find that files tend to be between 200 and 500 SLOC. This might sound like a lot but I find it easy to navigate. 1000 SLOC is usually my upper limit for a single file.
- Organize the most important type at the top of the file and add types in decreasing importance towards the bottom of the file.
- Once your application starts getting above 10,000 SLOC you should seriously evaluate whether it can be broken into smaller projects.
Bolt is a good example of this. Each file is a grouping of types related to a single Bolt construct:
bucket.go
cursor.go
db.go
freelist.go
node.go
page.go
tx.go
Conclusion
Code organization is one of the hardest parts about writing software and it rarely gets the focus it deserves. Use global variables sparingly, move your application binary code to its own package, wrap some types for application-specific context, and limit your subpackages. These are just a few tricks that can help make Go code easier and more maintainable.
If you’re writing Go projects the same way you write Ruby, Java, or Node.js projects then you’re probably going to be fighting with the language.
|
https://medium.com/@benbjohnson/structuring-applications-in-go-3b04be4ff091?source=userActivityShare-4485cf75ad68-1463451454&utm_campaign=A%20Semana%20Go&utm_medium=email&utm_source=Revue%20newsletter
|
CC-MAIN-2019-13
|
refinedweb
| 1,611
| 58.79
|
Lens
A
Lens is an optic used to zoom inside a
Product, e.g.
case class,
Tuple,
HList or even
Map.
Lenses have two type parameters generally called
S and
A:
Lens[S, A] where
S represents the
Product and
A an element inside of
S.
Let’s take a simple case class with two fields:
case class Address(streetNumber: Int, streetName: String)
We can create a
Lens[Address, Int] which zooms from an
Address to its field
streetNumber by supplying a pair of functions:
get: Address => Int
set: Int => Address => Address
import monocle.Lens val streetNumber = Lens[Address, Int](_.streetNumber)(n => a => a.copy(streetNumber = n))
This case is really straightforward so we automated the generation of
Lenses from case classes using a macro:
import monocle.macros.GenLens val streetNumber = GenLens[Address](_.streetNumber)
Once we have a
Lens, we can use the supplied
get and
set functions (nothing fancy!):
val address = Address(10, "High Street") // address: Address = Address(10,High Street) streetNumber.get(address) // res1: Int = 10 streetNumber.set(5)(address) // res2: Address = Address(5,High Street)
We can also
modify the target of
Lens with a function, this is equivalent to call
get and then
set:
streetNumber.modify(_ + 1)(address) // res3: Address = Address(11,High Street) val n = streetNumber.get(address) // n: Int = 10 streetNumber.set(n + 1)(address) // res4: Address = Address(11,High Street)
We can push the idea even further, with
modifyF we can update the target of a
Lens in a context, cf
scalaz.Functor:
def neighbors(n: Int): List[Int] = if(n > 0) List(n - 1, n + 1) else List(n + 1) import scalaz.std.list._ // to get Functor[List] instance
scala> streetNumber.modifyF(neighbors)(address) res6: List[Address] = List(Address(9,High Street), Address(11,High Street)) scala> streetNumber.modifyF(neighbors)(Address(135, "High Street")) res7: List[Address] = List(Address(134,High Street), Address(136,High Street))
This would work with any kind of
Functor and is especially useful in conjunction with asynchronous APIs,
where one has the task to update a deeply nested structure with the result of an asynchronous computation:
import scalaz.std.scalaFuture._ import scala.concurrent._ import scala.concurrent.ExecutionContext.Implicits._ // to get global ExecutionContext def updateNumber(n: Int): Future[Int] = Future.successful(n + 1)
streetNumber.modifyF(updateNumber)(address) // res9: scala.concurrent.Future[Address] = Future(<not completed>)
Most importantly,
Lenses compose together allowing to zoom deeper in a data structure
case class Person(name: String, age: Int, address: Address) val john = Person("John", 20, address) val address = GenLens[Person](_.address)
(address composeLens streetNumber).get(john) // res11: Int = 10 (address composeLens streetNumber).set(2)(john) // res12: Person = Person(John,20,Address(2,High Street))
Lens Generation
Lens creation is rather boiler platy but we developed a few macros to generate them automatically. All macros
are defined in a separate module (see modules).
import monocle.macros.GenLens val age = GenLens[Person](_.age)
GenLens can also be used to generate
Lens several level deep:
scala> GenLens[Person](_.address.streetName).set("Iffley Road")(john) res13: Person = Person(John,20,Address(10,Iffley Road))
For those who want to push
Lenses generation even further, we created
@Lenses macro annotation which generate
Lenses for all fields of a case class. The generated
Lenses are in the companion object of the case class:
import monocle.macros.Lenses @Lenses case class Point(x: Int, y: Int) val p = Point(5, 3)
Point.x.get(p) // res14: Int = 5 Point.y.set(0)(p) // res15: Point = Point(5,0)
You can also add a prefix to
@Lenses in order to prefix the generated
Lenses:
@Lenses("_") case class Point(x: Int, y: Int) val p = Point(5, 3)
Point._x.get(p) // res16: Int = 5
Laws
A
Lens must satisfy all properties defined in
LensLaws from the
core module.
You can check the validity of your own
Lenses using
LensTests from the
law module.
In particular, a
Lens must respect the
getSet law which states that if you
get a value
A from
S and
set it back in, the result is an object identical to the original one. A side effect of this law is that
set
must only update the
A it points to, for example it cannot increment a counter or modify another value.
def getSet[S, A](l: Lens[S, A], s: S): Boolean = l.set(l.get(s))(s) == s
On the other hand, the
setGet law states that if you
set a value, you always
get the same value back.
This law guarantees that
set is actually updating a value
A inside of
S.
def setGet[S, A](l: Lens[S, A], s: S, a: A): Boolean = l.get(l.set(a)(s)) == a
|
https://julien-truffaut.github.io/Monocle/optics/lens.html
|
CC-MAIN-2019-18
|
refinedweb
| 791
| 56.76
|
primitive Coversion in methods
Kishan Kumar
Ranch Hand
Joined: Sep 26, 2000
Posts: 130
posted
Sep 26, 2000 04:01:00
0
Hi all,
Everybody here are doing a splended job. Thanks to Internet.
Please see the code below,
public class callsub { public static void main(String s[]) { new callsub1().method(10); } public void method(int i) { System.out.println("Int version : " + i); } public void method(long i) { System.out.println("Long version : " + i); } }
I have defined two methods one takes int and another takes long.
There is no problem here and output is
Int Version : 10
But when i put one of the methods in another class and do a inheritance, see code below
class call { public void method(int i) { System.out.println("Int version : " + i); } } public class callsub extends call { public static void main(String s[]) { new callsub().method(10); } public void method(long i) { System.out.println("Long version : " + i); } }
This is giving compile time error
Reference to method is ambigious, It is defined in void method(long) and void method(int).
This ambiguity should have arised in the earlier code itself but
it is rightly taking the int method. Why is that, problem arises
only in inheritance. It is a instance of overloading.
Also if I interchange the method declerations and change it as
public void method(long i) in the superclass call and
public void method(int i) in the subclass callsub
It is able to give the output
Int Version : 10
Also I am not able to call the int method in the superclass at all from the subclass.
Can you folks please explain this behaviour.
Your help is highly appreciated.
[I added UBB CODE tags to your source code to make it more readable. Please try to use them in the future.
Learn more about UBB codes
- Ajith]
[This message has been edited by Ajith Kallambella (edited September 26, 2000).]
Regards,<BR>V. Kishan Kumar
Anonymous
Ranch Hand
Joined: Nov 22, 2008
Posts: 18944
posted
Sep 26, 2000 10:23:00
0
Hi all !
I think this is an interesting question. Can someone come up with an explanatiopn for this.
All I can tell is:
For overloading in a class, the most specific method is choosen.
But with the 2nd case, it'd be nice if someone came with an explanation.
-sampaths
Ajith Kallambella
Sheriff
Joined: Mar 17, 2000
Posts: 5782
posted
Sep 26, 2000 11:58:00
0
When determining if there is a maximally specific method, the compiler uses not only the types of the arguments, but also the type of the definer
In the second example above, the versions of method defined in call and callSub are both
maximally specific. 'call's method is not more specific than 'callsub's because the class 'call' cannot be converted to the class 'callsub' by method invocation conversion. callsub's method is not more specific than 'call's because the type long cannot be converted to int by method invocation conversion. Since there is more than one maximally specific method, according
JLS section 15.12.2.2 Choose the Most Specific Method
we have an ambiguity.
Look at these two posts in Sun's bug parade for more detailed explanation -
Note 4038412
Note 4067106
Hope this helps,
Ajith
Open Group Certified Distinguished IT Architect. Open Group Certified Master IT Architect. Sun Certified Architect (SCEA).
Kishan Kumar
Ranch Hand
Joined: Sep 26, 2000
Posts: 130
posted
Sep 27, 2000 03:11:00
0
Thanks Ajith for the reply.
But My brain could not catch hold the line
"'call's method is not more specific than 'callsub's because the class 'call' cannot be converted to the class 'callsub' by method invocation conversion"
If possible can you explain me more about that.
I get the point that the subclass mthods should be more specific
than the superclass methods. Am I right?
Thanks for your time.
I agree. Here's the link:
subject: primitive Coversion in methods
Similar Threads
Method overloading
why does it throws compilation error in the subclass?
Program Execution Flow.
Private method
Inheritance and method overloading
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton
|
http://www.coderanch.com/t/194235/java-programmer-SCJP/certification/primitive-Coversion-methods
|
CC-MAIN-2014-35
|
refinedweb
| 701
| 63.39
|
Blender Development: creating an actuator part 2
I’m back with part 2 of my creating an actuator series. Part 1 dealt with creating the actuator and getting it to appear in the UI so you could attach it to a game object. In this instalment we’re going cover creating the code to get the actuator doing something.
I lied somewhat in part 1: the new actuator still wont be doing anything particularly useful, however, we will go through the game object and the sort of things you can access and modify. From there you can begin to create a useful actuator. In next part is going to focus on adding properties that can be modified in the UI, for now everything will be hard coded in C++. I think this part is going to be a bit more straightforward than the first one (personally, I prefer working with C++ over C) and there’s going to be a bit less jumping around the source code. Like the first part, I don’t know everything and I’m still learning. So please help fill in the gaps in the explanations and correct any errors.
Setup
First of all, make sure you’ve read part 1 and got your new test actuator setup as explained. I’m still using Qt Creator, CMake and MinGW. We’re going to be editing the build setup today, so I can only write about how to do it for CMake as I don’t use Scons. If someone wants to let me know the details for adding files to a Scons build I’ll include them. Following on from part 1, the actuator we’re adding is still called ‘Test’. Right, let’s get stuck in.
SCA_IActuator.h
We start by updating the SCA_IActuator class to include a new type for our new actuator:
... KX_ACT_SHAPEACTION, KX_ACT_STATE, KX_ACT_ARMATURE, KX_ACT_STEERING, KX_ACT_TEST, }; ...
This will get passed to SCA_IActuator along with KX_GameObject when we initalise the new actuator class.
KX_TestActuator.h
Actuators live in two places in the Blender source code: ../blender/source/gameengine/GameLogic/ and ../blender/source/gameengine/Ketji/. The directory /GameLogic/ contains what comments in the Blender source describe as native logic bricks while /Ketji/ holds the game engine specific logic bricks. Through getting things wrong I learnt that in order to access the KX_GameObject class methods your actuators source files need to live in /Ketji/, otherwise they can only access the methods associated with the CValue class. There are probably other differences and reasons for the separation, if anyone knows, speak up and I’ll expand this section.
So we start by adding a new header file to the project in ../blender/source/gameengine/Ketji/ called SCA_TestActuator.h. In that file we’re going to declare a class called KX_TestActuator and add a few function declarations. So add the following:
/* GPL license block and what not - copy it in from another source file*/ #ifndef KX_TESTACTUATOR_H #define KX_TESTACTUATOR_H #include "SCA_IActuator.h" class KX_TestActuator : public SCA_IActuator { public: KX_TestActuator( SCA_IObject* gameobj); ~KX_TestActuator(); CValue* GetReplica(); virtual bool Update(); }; #endif // KX_TESTACTUATOR_H
There’s not a huge amount going on here. The only thing the actuator needs to take is a pointer to an SCA_Object as it is required by the SCA_IActuator parent class. I don’t really know what the GetReplica() function does. My guess it’s something to do with copying data when creating instances, might also do some reference counting(?). If anyone knows, please let me know! Anyway, Blender wont compile without it. The Update() function defines what it needs to do on each logic tic.
KX_TestActuator.cpp
Now we need to add implementations for the functions we declared. So create the file KX_TestActuator.cpp in ../blender/source/gameengine/Ketji/ and enter:
/* GPL license block and what not - copy it in from another source file*/ #include "KX_TestActuator.h" #include "KX_GameObject.h" KX_TestActuator::KX_TestActuator(SCA_IObject *gameobj) : SCA_IActuator(gameobj, KX_ACT_TEST) { ; } KX_TestActuator::~KX_TestActuator() { ; } bool KX_TestActuator::Update() { bool bNegativeEvent = IsNegativeEvent(); RemoveAllEvents(); if (bNegativeEvent) return false; cout << "Hello World!" << endl; return false; } CValue* KX_TestActuator::GetReplica() { KX_TestActuator* replica = new KX_TestActuator(*this); replica->ProcessReplica(); return replica; }
The GetReplica() functions do more or less what it does in other actuators. The main area we’re interested in is the Update() function. This can return true or false, which will affect the actuators behaviour. The logic manager builds a list of triggered sensors and from that it creates a list of controllers and actuators attached to that sensor. When a actuators Update() function is called by the manager if it returns false it is removed from the active actuator list, if it returns true it remains in this list. This means that actuators that returns true will continue to run until the actuator receives a negative pulse whereas those that return false then it only trigger on each positive pulse. Experiment with returning true and false to see the effects.
The SCA_IActuator class, which our actuator inherits from, has 2 bool member variables for storing received events: m_posevent and m_negevent. These both start out as false and receipt of an event are respectively set to true. The first thing Update() does is fetch the value of the negative event variable and store it. It then resets these values by calling RemoveAllEvents() ready for the next frame. Both these functions come from SCA_IActuator.
If the event received is negative (ie, bNegativeEvent == true) then there’s nothing to do and we can return from the function – returning false to ensure that the actuator is removed from the logic managers active list. If the event is not negative then we can continue on with the rest of the function. In this case we print ‘Hello World!’ to the console.
SCA_ConvertActuators.cpp
Now we need to get our new test actuator code running in the game engine. Open up ConvertActuators.cpp and include the test actuator header:
... #include "KX_ParentActuator.h" #include "KX_SCA_DynamicActuator.h" #include "KX_SteeringActuator.h" #include "KX_TestActuator.h" ...
The next step is to add a new case to the switch statement in the BL_ConvertActuators() function. Not too sure on the purpose of this function, my guess is that it passes the necessary data to the actuator code and links it into the game engine.
... case ACT_TEST: { KX_TestActuator *tmptest = new KX_TestActuator(gameobj); baseact = tmptest; break; } default: ; /* generate some error */ } ...
Remeber we defined ACT_TEST in DNA_actuator_types.h in part 1.
CMakeList.txt
If you compile now you’ll get an error complaining about an undefined reference to KX_TestActuator in ConvertActuators.cpp, that’s because we need to include the class source files has part of the build. Open up ../blender/source/gameengine/Ketji/CMakeList.txt in any text editor and add:
set(SRC BL_Action.cpp BL_ActionManager.cpp BL_BlenderShader.cpp BL_Material.cpp ... KX_VisibilityActuator.cpp KX_WorldInfo.cpp KX_WorldIpoController.cpp KX_TestActuator.cpp ... KX_WorldInfo.h KX_WorldIpoController.h KX_TestActuator.h )
As mentioned previously, this step applies only to using CMake as the build system. I don’t know what the equivalent is for Scons is. Now Blender should compile. If you open up Blender and attach the test actuator to an object you should now see ‘Hello World!’ being printed in the console. Success!
KX_TestActuator.cpp
Ok, back to the test actuators source to explore the KX_GameObject a little. Lets modify the Update() function to do a little more:
bool KX_TestActuator::Update() { bool bNegativeEvent = IsNegativeEvent(); RemoveAllEvents(); if (bNegativeEvent) return false; KX_GameObject *ob = (KX_GameObject*) GetParent(); cout << "Object: " << ob->GetName() << " on layer: " << ob->GetLayer() << endl; cout << "Visible: " << ob->GetVisible() << endl; cout << "State: " << ob->GetState() << endl; cout << "Linear Velocity: " << ob->GetLinearVelocity(false) << endl; cout << "Orientation: " << ob->NodeGetWorldOrientation() << endl; cout << "Position: " << ob->NodeGetWorldPosition() << endl; cout << "#### Object properties ####" << endl; int propCount = ob->GetPropertyCount(); vector<str_string> propNames = ob->GetPropertyNames(); for(int i = 0; i < propCount; i++) { CValue* prop = ob->GetProperty(i); cout << propNames[i] << ": " << prop->GetText() << endl; } return false; }
The actuator still isn’t doing a lot, but now we’re printing out to the console various details about the object the actuator is attached to – you could call it a debug brick if you like. We get the KX_GameObject by calling GetParent() – a method of SCA_ILogicBrick, inherited through SCA_IActuator, that returns a pointer to the logic bricks owner, an SCA_IObject. Here were asking it to return the game object. Through the game object we can access lots of functions to do things or return details – like it’s position, orientation, name, attached properties (stored as a CValue object) and so on.
As a final example, we’ll look at setting some values through the game object:
bool KX_TestActuator::Update() { bool bNegativeEvent = IsNegativeEvent(); RemoveAllEvents(); if (bNegativeEvent) return false; KX_GameObject *ob = (KX_GameObject*) GetParent(); //Move along the y axis - like simple motion MT_Vector3 motion(0,1,0); ob->ApplyMovement(motion, false); //Rotate on the z axis MT_Vector3 rotate(0,0,5); ob->ApplyRotation(rotate, false); //Add force on the x axis to a dynamic object MT_Vector3 force(4,0,0); ob->ApplyForce(force, false); return true; }
And that’s it for now. It’s worth looking through KX_GameObject.h to see the functions that you can access and having a play with them. The file is quite well commented and most of the functions and parameters should appear pretty straightforward. At this point the best way to learn is through experimenting and using this as a starting point to exploring the rest of the source code. If you’ve spotted any errors, or know more than I do, let me know. If you have any questions ask them in the comments and I’ll do my best to answer them. In the next installment we’ll looking at adding properties to the actuator in the UI and using them in the update function.
Change Log
07/04/12 – expanded explanation of Update() function.
08/04/12 – missing changes to SCA_IActuator.h
|
https://whatjaysaid.wordpress.com/2012/04/03/blender-development-creating-an-actuator-part-2/
|
CC-MAIN-2019-04
|
refinedweb
| 1,626
| 54.22
|
GET STARTED DEVELOPING FOR WINDOWS 8 WINDOWS PHONE AZURE WITH APP BUILDER
I started overhauling my infamous Windows Phone user group prize picking application, dfRandomWinner, yesterday. One particular screen called for use of an image. The image I was looking at using is a black and white image (see below). I want my applicaiton to respect theming on the phone. Which means I would have had to have two images, one black, one white, and used a technique to switch the images based on the theming background.
As I pondered this, I remembered a post by Kevin Wolf with a ThemeableImage control in. The ThemeableImage adapts the control based on the theme's background color. Kevin's control uses an Opacity mask to filp the colors on the image based on the theme. His article (with source code) can be found here . I'm going to walk you through rigging up the ThemeableImage in your project.
Step 1 - Create a metro styled image (black) - ours is named dfrandomwinner.black.png - see a picture belowStep 2 - Create a new Window Phone 7 projectStep 3 - Create an images directory in the project, then add the image to the directoryStep 4 - Modify the properties and make the image "Content", "Copy Always"Step 5 - Add a new empty class called ThemeableImage.cs to the project Step 6 - Go to Kevin's blog and copy the ThemeableImage class into the clipboard Step 7 - Paste Kevin's code into ThemeableImage.cs Step 8 - Modify the namespace statements at the top of MainPage.xaml to reference our assembly
xmlns:wolf="clr-namespace:WolfBytes;assembly=WolfBytes"
Step 9 - Modify MainPage.xaml to show a regular Image control, and the ThemeableImage side by side - XAML below
<Grid x:
<Grid.ColumnDefinitions>
<ColumnDefinition Width="220"/>
<ColumnDefinition Width="10"/>
<ColumnDefinition Width="220"/>
</Grid.ColumnDefinitions>
<StackPanel Grid.
<Image Width="200" Source="/images/dfrandomwiner.black.png" />
<TextBlock Text="Image" HorizontalAlignment="Center"/>
</StackPanel>
<Rectangle Grid.
<StackPanel HorizontalAlignment="Center" Grid.
<wolf:ThemeableImage x:
<TextBlock Text="ThemableImage" HorizontalAlignment="Center"/>
</StackPanel>
</Grid>
Step 10 - Build and run in the emulator With the black background the ThemeableImage will be white Step 11 - Hit the back button to stop the app. Go into settings, change the theme to White, run the app again by double click on it. The image will be black on a white background
As you can see once you've rigged up ThemeableImage, it can save you a ton of work in setting up your projects. I don't have to create two sets of images. Nor do I have to do any pathing tricks for referencing multiple image directories. It'd be nice to see ThemeableImage make the Windows Phone Toolkit. In the meantime, you can use this post as a reference for rigging up the ThemeableImage in your projects.
|
http://blogs.msdn.com/b/devfish/archive/2011/04/11/rigging-up-themeableimage-in-windows-phone-applications.aspx
|
CC-MAIN-2014-42
|
refinedweb
| 465
| 55.95
|
New Features in PHP 5.6
It’s no blasphemy saying the core devs of PHP have had some hiccups and some truly ridiculous arguments about some of the features – just look at this silly discussion on why a shorthand array syntax is bad, back from ten years ago. Arguments like these, among other things, make people think some of the core devs don’t even use PHP in their day to day lives, and are why the devs of PHP and the people using PHP are often considered unprofessional.
The future isn’t bleak, though..
While a full explanation of all the upcoming updates would be far too vast to cover in one article, I would like to direct your attention at some I personally deem most important.
MIME types in the CLI web server
MIME types in PHP can be used to output content as a type other than PHP, that is, as a type other than text/html. When you run a PHP page, the default output is text/html, but you can use headers to set it as, for example, PDF and generate PDF files. When a server is aware of different MIME types, as most servers like HHVM, Apache and Nginx usually are, they know how to serve a given file by default, judging by its extension, without you having to set specific instructions in PHP itself. The command line server from PHP 5.4 had only a few MIME types so far, and this version will introduce dozens more. It’s safe to say that all the common MIME types will be covered by the built in PHP server now.
Internal Operator Overloading
This is a feature we as web developers using PHP probably won’t be exposed to, due to the keyword “internal”. Internal means “non userland” where userland is the area of PHP development we, the end users of PHP, use. It applies only to internal classes, in order to make development in that area simpler and code cleaner to read. A detailed explanation can be found here.
Uploads of over 2GB are now accepted
Until 5.6, no uploads of 2GB and over were supported in PHP. This is no longer the case, as the changelog states, and uploads of arbitrary size are now supported.
POST data memory usage decreased
POST data memory usage has been shrunk by 2 to 3 times, following two removals: the
always_populate_raw_post_data setting from
php.ini, and the superglobal variable
$HTTP_RAW_POST_DATA. What this means is you can no longer access raw post data that way, but need to rely on a solution such as:
$postdata = file_get_contents("php://input");
Note that getting POST via
://input is unavailable when a form is multipart (in other words, when a form has a file upload element).
Improved syntax for variadic functions
Variadic functions are functions which can take an arbitrary amount of arguments. When you supplied some arguments to it, you usually had to do some splicing after calling
func_get_args, which was somewhat impractical. As taken from example here, the syntax in 5.5 and earlier was:
class MySQL implements DB { protected $pdo; public function query($query) { $stmt = $this->pdo->prepare($query); $stmt->execute(array_slice(func_get_args(), 1)); return $stmt; } // ... } $userData = $db->query('SELECT * FROM users WHERE id = ?', $userID)->fetch();
now, the syntax will be:
class MySQL implements DB { public function query($query, ...$params) { $stmt = $this->pdo->prepare($query); $stmt->execute($params); return $stmt; } // ... }
As you can see, the
...$params syntax tells the function to accept the first parameter as is, and to put all the others into the $params array. This rids us of the splicing and calling
func_get_params, improves function signatures, and makes the code more readable.
The new syntax also allows passing of extra arguments by reference, by prefixing
...$params with an ampersand, like so:
&...$params. This was not possible before with
func_get_args.
Argument unpacking
Note: thanks to nikic for pointing out this feature – it got implemented around the time of this article’s original writing
Following improved support for variadic functions, argument unpacking got approved, too.
Until now, the only way to call a function with an arbitrary number of arguments passed in as params was by using
call_user_func_array meaning literally “call userland function with array of params”. This was clumsy and awkward, wasn’t supported in constructors, was slow, and required a callback in the form of a string – a function name – which means no IDE support in most cases.
Unpacking would eliminate all the downsides of said function, and naturally complements the variadic support seen above. Unpacking works like so:
$args = [1, 3, 5, 7, 9]; MyClass::someMethod(...$args);
This is the same as calling
MyClass::someMethod(1, 3, 5, 7, 9);
i.e. passing in the arguments in one by one. This works in any concievable scenario, from class constructors to being called multiple times in a call, anything goes. See the RFC linked above for usage examples and further explanations.
Constant Scalar Expressions
This RFC added the ability to have expressions in places that only expect static values. What this means is you can now have basic arithmetic and logical structures in constant declarations, functions arguments, class properties etc.
For example, previously, code like this would throw an error:
const a = 1; const b = a?2:100;
Now, this is no longer the case.
One might argue whether a constant really is constant if it depends on the value of another constant, but such meta discussions are better left for other times.
PHPDBG bundled by default
The gdb-like debugger, phpdbg, is now bundled by default as SAPI. It’s used from the command line, or a simplistic Java UI, specifying break points interactively, altering programs at runtime and more. It can also inspect Opcode, and be used from withinyour PHP code. Learn more about phpdbg here.
Zip improved
The Zip library got several improvements, particularly in the form of new methods. One that stands out especially is
ZipArchive::setPassword($password) which finally allows you to easily create password protected Zip files.
Importing namespaced functions
As per this RFC, the new version will allow importing of namespaced functions and constants. Right now we are able to import namespaces and types (classes/interfaces/traits) via the use statement, like so:
namespace foo\bar { function baz() { return 'foo.bar.baz'; } } namespace { use foo\bar as b; var_dump(b\baz()); }
From 5.6, we’ll be able to use the
use function and
use const statements to import a lone function or constant (even class constant).
namespace { use function foo\bar as foo_bar; use const foo\BAZ as FOO_BAZ; var_dump(foo_bar()); var_dump(FOO_BAZ); }
Conclusion
PHP 5.6, which as of this moment still doesn’t have a release date, definitely looks promising, and hopefully this short overview of the changelog helped you understand how important it will be for you to upgrade as soon as it’s out, if at all. For the remainder of the upgrades, please see the NEWS file, and keep checking back for updates. If I’ve missed something important, or misinterpreted something, please let me know in the comments below.
- Sam the Sahara Hacker
- Michael Long
- Etienne
- Bruno Skvorc
- Акакий Акакиевич
- Mikushi
- sebastiaan hilbers
- Matthew Setter
- Kevin
- Roy
- Bruno Skvorc
- dojoVader
- Bruno Skvorc
- Mikushi
- philsturgeon
- Bruno Skvorc
- Bruno Skvorc
- dave harris
- Jooms
- Bruno Skvorc
|
http://www.sitepoint.com/new-features-php-5-6/
|
CC-MAIN-2014-23
|
refinedweb
| 1,223
| 61.26
|
Bitwise operators in c programming language: In this tutorial i am going to discuss bitwise operators with example c code. As you may know data is stored in memory in the form of bits and a bit is the unit of memory which can be either zero(0) or one(1). We will perform operations on individual bits.
C programming code
#include <stdio.h> main() { int x = 7, y = 9, and, or, xor, right_shift, left_shift; and = x & y; or = x | y; xor = x ^ y; left_shift = x << 1; right_shift = y >> 1; printf("%d AND %d = %d\n", x, y, and); printf("%d OR %d = %d\n", x, y, or); printf("%d XOR %d = %d\n", x, y, xor); printf("Left shifting %d by 1 bit = %d\n", x, left_shift); printf("Right shifting %d by 1 bit = %d\n", y, right_shift); return 0; }
Left shift operator example program
#include <stdio.h> main() { int n = 1, c, power; for ( c = 1 ; c <= 10 ; c++ ) { power = n << c; printf("2 raise to the power %d = %d\n", c, power); } return 0; }
|
http://www.programmingsimplified.com/c/tutorial/bitwise-operators
|
CC-MAIN-2016-44
|
refinedweb
| 177
| 61.19
|
NAMEcam_open_device, cam_open_spec_device, cam_open_btl, cam_open_pass, cam_close_device, cam_close_spec_device, cam_getccb, cam_send_ccb, cam_freeccb, cam_path_string, cam_device_dup, cam_device_copy, cam_get_device — CAM user library
LIBRARYCommon Access Method User Library (libcam, -lcam)
SYNOPSIS#include < stdio.h>
#include < camlib.h>
struct cam_device *
cam_open_device( const char *path, int flags);
struct cam_device *
cam_open_spec_device( const char *dev_name, int unit, int flags, struct cam_device *device);
struct cam_device *
cam_open_btl( path_id_t path_id, target_id_t target_id, lun_id_t target_lun, int flags, struct cam_device *device);
struct cam_device *
cam_open_pass( const char *path, int flags, struct cam_device *device);
void
cam_close_device( struct cam_device *dev);
void
cam_close_spec_device( struct cam_device *dev);
union ccb *
cam_getccb( struct cam_device *dev);
int
cam_send_ccb( struct cam_device *device, union ccb *ccb);
void
cam_freeccb( union ccb *ccb);
char *
cam_path_string( struct cam_device *dev, char *str, int len);
struct cam_device *
cam_device_dup( struct cam_device *device);
void
cam_device_copy( struct cam_device *src, struct cam_device *dst);
int
cam_get_device( const char *path, char *dev_name, int devnamelen, int *unit);
DESCRIPTIONThe CAM library consists of a number of functions designed to aid in programming with the CAM subsystem. This man page covers the basic set of library functions. More functions are documented in the man pages listed below.
Many of the CAM library functions use the cam_device structure:
struct cam_device { char device_path[MAXPATHLEN+1];/* * Pathname of the * device given by the * user. This may be * null if the user * states the device * name and unit number * separately. */ char given_dev_name[DEV_IDLEN+1];/* * Device name given by * the user. */ uint32_t given_unit_number; /* * Unit number given by * the user. */ char device_name[DEV_IDLEN+1];/* * Name of the device, * e.g. 'pass' */ uint32_t dev_unit_num; /* Unit number of the passthrough * device associated with this * particular device. */ char sim_name[SIM_IDLEN+1];/* * Controller name, e.g.'ahc' */ uint32_t sim_unit_number; /* Controller unit number */ uint32_t bus_id; /* Controller bus number */ lun_id_t target_lun; /* Logical Unit Number */ target_id_t target_id; /* Target ID */ path_id_t path_id; /* System SCSI bus number */ uint16_t pd_type; /* type of peripheral device */ struct scsi_inquiry_data inq_data; /* SCSI Inquiry data */ uint8_t serial_num[252]; /* device serial number */ uint8_t serial_num_len; /* length of the serial number */ uint8_t sync_period; /* Negotiated sync period */ uint8_t sync_offset; /* Negotiated sync offset */ uint8_t bus_width; /* Negotiated bus width */ int fd; /* file descriptor for device */ };
cam_open_device() takes as arguments a string describing the device it is to open, and flags suitable for passing to open(2). The "path" passed in may actually be most any type of string that contains a device name and unit number to be opened. The string will be parsed by cam_get_device() into a device name and unit number. Once the device name and unit number are determined, a lookup is performed to determine the passthrough device that corresponds to the given device.
cam_open_spec_device() opens the pass(4) device that corresponds to the device name and unit number passed in. The flags should be flags suitable for passing to open(2). The device argument is optional. The user may supply pre-allocated space for the cam_device structure. If the device argument is NULL, cam_open_spec_device() will allocate space for the cam_device structure using malloc(3).
cam_open_btl() is similar to cam_open_spec_device(), except that it takes a SCSI bus, target and logical unit instead of a device name and unit number as arguments. The path_id argument is the CAM equivalent of a SCSI bus number. It represents the logical bus number in the system. The flags should be flags suitable for passing to open(2). As with cam_open_spec_device(), the device argument is optional.
cam_open_pass() takes as an argument the path of a pass(4) device to open. No translation or lookup is performed, so the path passed in must be that of a CAM pass(4) device. The flags should be flags suitable for passing to open(2). The device argument, as with cam_open_spec_device() and cam_open_btl(), should be NULL if the user wants the CAM library to allocate space for the cam_device structure. cam_close_device() frees the cam_device structure allocated by one of the above open() calls, and closes the file descriptor to the passthrough device. This routine should not be called if the user allocated space for the cam_device structure. Instead, the user should call cam_close_spec_device().
cam_close_spec_device() merely closes the file descriptor opened in one of the open() routines described above. This function should be called when the cam_device structure was allocated by the caller, rather than the CAM library.
cam_getccb() allocates a CCB using malloc(3) and sets fields in the CCB header using values from the cam_device structure.
cam_send_ccb() sends the given ccb to the device described in the cam_device structure.
cam_freeccb() frees CCBs allocated by cam_getccb().
cam_path_string() takes as arguments a cam_device structure, and a string with length len. It creates a colon-terminated printing prefix string similar to the ones used by the kernel. e.g.: "(cd0:ahc1:0:4:0): ". cam_path_string() will place at most len-1 characters into str. The len'th character will be the terminating ‘
\0’.
cam_device_dup() operates in a fashion similar to strdup(3). It allocates space for a cam_device structure and copies the contents of the passed-in device structure to the newly allocated structure.
cam_device_copy() copies the src structure to dst.
cam_get_device() takes a path argument containing a string with a device name followed by a unit number. It then breaks the string down into a device name and unit number, and passes them back in dev_name and unit, respectively. cam_get_device() can handle strings of the following forms, at least:
- /dev/foo1
-
- foo0
-
- nsa2
-
cam_get_device() is provided as a convenience function for applications that need to provide functionality similar to cam_open_device().
RETURN VALUEScam_open_device(), cam_open_spec_device(), cam_open_btl(), and cam_open_pass() return a pointer to a cam_device structure, or NULL if there was an error.
cam_getccb() returns an allocated and partially initialized CCB, or NULL if allocation of the CCB failed.
cam_send_ccb() returns a value of -1 if an error occurred, and errno is set to indicate the error.
cam_path_string() returns a filled printing prefix string as a convenience. This is the same str that is passed into cam_path_string().
cam_device_dup() returns a copy of the device passed in, or NULL if an error occurred.
cam_get_device() returns 0 for success, and -1 to indicate failure.
If an error is returned from one of the base CAM library functions described here, the reason for the error is generally printed in the global string cam_errbuf which is CAM_ERRBUF_SIZE characters long.
SEE ALSOcam_cdbparse(3), pass(4), camcontrol(8)
HISTORYThe CAM library first appeared in FreeBSD 3.0.
AUTHORS<ken@FreeBSD.org>
BUGScam_open_device() does not check to see if the path passed in is a symlink to something. It also does not check to see if the path passed in is an actual pass(4) device. The former would be rather easy to implement, but the latter would require a definitive way to identify a device node as a pass(4) device.
Some of the functions are possibly mis-named or poorly named.
|
http://www.yosbits.com/opensonar/rest/man/freebsd/man/en/man3/cam_getccb.3.html?l=en
|
CC-MAIN-2022-27
|
refinedweb
| 1,119
| 53.61
|
Today I wrote an awesome program called mkdong that will make a dong of your desired length and print it to your terminal, like this:
% ./mkdong usage: mkdong <length> % ./mkdong 5 ()/()/////D % ./mkdong 25 ()/()/////////////////////////DThat last one is impressive, isn’t it? Hmm… Yeah, it’s Friday. What do you want from me? I still got work done! Cool thing is if the dong is too big, well then it throws an error:
% ./mkdong 60 warning: a 60" dong is too big! cannot be longer than 40"!“What is the point of this?”, you might ask yourself. That’s a good question. I’ve been so busy with other shit lately that I’ve barely had time to code. I suppose I was itching to write something, anything… Dongs!!
It all started harmlessly enough with a silly AIM conversation with my coding buddy at work. We were talking about a bug, and well, read on and you’ll see. It regressed quickly.
So I took the stupidity and ran with it and mkdong was born!
The initial dongs were a little primitive and sickly looking. So I took his suggestion and improved their visual style. Here is how it turned out:
#!/usr/bin/env python
import sys
maxlen = 40(1, donglen): dong += "\" dong += 'D'
print dong We laughed. We joked. We Tweeted. And then it regressed even further:
A feature request! I had to make it print in blue! But to do that I had to replace all of the “\” that make up the dong itself, with “/” so as to not have the ANSI escape codes eat up the extra backslashes. (Backslashes are interpreted characters, duh.) I also had to replace the print statement with a system call to echo -e so that the colorization would be interpreted. This is high tech shit, man!!
And then I released it to the public. So there you have it. Here is the final release of mkdong 2.0 for your pleasure:
#!/usr/bin/env python
import os, sys
maxlen = 40 color = '\\e[0;34m' # blue(donglen): dong += '/' dong += 'D'
os.system('echo -e "%s%s"' % (color, dong)) Use it well. And remember they aren’t bugs, they’re dongs! Squish? Gross.
|
http://jathan.com/page/3/
|
CC-MAIN-2018-26
|
refinedweb
| 369
| 87.01
|
#include <xdgshell.h>
Detailed Description
A XdgShellPopup.
- Since
- 5.25
Definition at line 522 of file xdgshell.h.
Member Function Documentation
When a configure event is received, if a client commits the Surface in response to the configure event, then the client must make an ackConfigure request sometime before the commit request, passing along the
serial of the configure event.
- See also
- configureRequested
- Since
- 5.56
Definition at line 368 of file xdgshell.cpp.
Emitted when the server has configured the popup with the final location of
relativePosition This is emitted for V6 surfaces only.
- Since
- 5.39
Destroys the data held by this XdgShellPopup. xdg_popup interface once there is a new connection available.
It is suggested to connect this method to ConnectionThread::connectionDied:
Definition at line 348 of file xdgshell.cpp.
- Returns
- The event queue to use for bound proxies.
Definition at line 358 of file xdgshell.cpp.
- Returns
trueif managing an xdg_popup.
Definition at line 410 of file xdgshell.cpp.
This signal is emitted when a XdgShellPopup is dismissed by the compositor.
The user should delete this instance at this point.
Releases the xdg_popup interface.
After the interface has been released the XdgShellPopup instance is no longer valid and can be setup with another xdg_popup interface.
Definition at line 343 of file xdgshell.cpp.
Requests a grab on this popup.
- Since
- 5.39
Definition at line 363 of file xdgshell.cpp.
Sets the
queue to use for bound proxies.
Definition at line 353 of file xdgshell.cpp.
Setup this XdgShellPopup to manage the
xdgpopupv5.
When using XdgShell::createXdgShellPopup there is no need to call this method.
- Deprecated:
- Since 5.49. This was for XDGShellV5, this is now deprecated
Definition at line 328 of file xdgshell.cpp.
Setup this XdgShellPopup to manage the
xdgpopupv6 on associated
xdgsurfacev6 When using XdgShell::createXdgShellPopup there is no need to call this method.
- Since
- 5.39
Definition at line 333 of file xdgshell.cpp.
Setup this XdgShellPopup to manage the
xdgpopupv on associated
xdgsurface When using XdgShell::createXdgShellPopup there is no need to call this method.
- Since
- 5.XDGSTABLE
Definition at line 338 of file xdgshell.cpp.
Sets the position of the window contents within the buffer.
- Since
- 5.59
Definition at line 373.
|
https://api.kde.org/frameworks/kwayland/html/classKWayland_1_1Client_1_1XdgShellPopup.html
|
CC-MAIN-2020-10
|
refinedweb
| 371
| 60.51
|
by Matthew Ford 11th September 2019
(original 4th September 2019)
© Forward Computing and Control Pty. Ltd. NSW Australia
The tutorial describes how to run multiple task tutorial will develop a temperature controlled, stepper motor driven damper with a user interface. The entire project can be developed and testing on just an Arduino UNO. Because this page is concentrating on the software the external thermocouple board and stepper motor driver libraries are used, but the hardware is omitted and the input temperature simulated in the software. Finally the same project code is moved from the UNO to an ESP32 so that you can control it remotely via WiFi, BLE or Bluetooth.
If you search for 'multitasking arduino' you will find lots of results. Most of them deal with removing delays or with using an RTOS. This page goes beyond just removing delays, that was covered in How to code Timers and Delays in Arduino, and covers the other things you need to do for multi-tasking Arduino without going to an RTOS.
This tutorial also covers moving from an Arduino to an FreeRTOS enabled ESP32 board and why you may want to keep using “Simple Multi-tasking” approach even on a board that supports an RTOS.
Hardware
Arduino UNO or any other board supported by the Arduino IDE. All the code developed here can be tested with just an Arduino UNO.
Optional - an ESP32 e.g. Sparkfun ESP32 Thing. The last step in this tutorial moves the code, unchanged, to an ESP32 and adds remote control.
Software
Install the Arduino IDE V1.8.9+
Install the following libraries, using Arduino's Sketch -> Include Library -> Add.ZIP library :-
millisDelay.zip, loopTimer.zip and pfodParser.zip (for pfodBufferedStream and pfodNonBlockingInput),
The loopTime library has the sketches used here in its examples directory. Open Arduino's File → Examples → loopTimer for a list of them.
For the temperature controlled stepper motor drive damper example sketch:-
Temperature input library MAX31856_noDelay.zip. Adafruit-MAX31855-library-master.zip is also used for illustration purposes, but because it uses delay() it is replaced by MAX31856_noDelay.zip
Stepper motor control, AccelStepper-1.59.zip
Optional – install ESP32 Arduino support, see ESP32_Arduino_SetupGuide.pdf
Here is an example of multi-tasking for Arduino
void loop() { callTask_1(); // do something callTask_2(); // do something else callTask_1(); // check the first task again as it needs to be more responsive than the others. callTask_3(); // do something else }
That is very simple isn't it. What is the trick?
The trick is that each callTask..() method must return quickly so that the other tasks in the loop get called promptly and often. The rest of this tutorial covers how to keep your tasks running quickly and not holding everything up, using a temperature controlled, stepper motor driven damper with a user interface as a concrete example.
A 'Real Time Operating System' (RTOS) adds another level of complexity to your programs as well as needing more RAM and taking more time to execute. There are a number of RTOS (Real Time Operating Systems) systems for Arduino such as,, and The preemptive RTOS systems, work by dividing the CPU's time up into small slices, 2mS to 15mS or more, and sharing these slices between competing tasks. The cooperative multi-tasking RTOS systems depend on each task pausing to let another task run.
These frameworks add extra program code, use more RAM and involve learning a new 'task' framework. Some of them only run on specific Arduino boards. In general they aim to 'appear' to execute multiple 'tasks' (code blocks) at the same time, but in most cases just put one block of code to sleep for some milliseconds while executing another block. Because of this task switching taking place at a low level it is difficult to ensure any particular tasks will respond in a given time. E.g. running the AccelStepper stepper motor library on an RTOS system can be difficult because the run() method needs to be called every 1mS for high speed stepping. Later we will look at how “simple multi-tasking' lets you run these types of libraries on an ESP32 which runs FreeRTOS.
On an UNO there is not really enough RAM available to run an RTOS. The “Real Time” in RTOS is a misnomer. All computers take time to do tasks. While the cpu is occupied with that task it can miss other signals. In contrast to RTOS systems, the approach here uses the minimal of RAM, follows the standard Arduino framework of first running the setup() and then repeatedly running loop() method. The “simple multi-tasking” examples below are run on an Arduino UNO.
Why do you want a fast and 'responsive' loop()? Well for simple single action programs like blinking one led, you don't need it to be fast or responsive. However as you add more tasks/functions to your sketch, you will quickly find things don't work as expected. Inputs are missed, outputs are slow to operate and when you add debugging print statements everything just gets worse. This tutorial covers how to avoid the blockages that cause your program to hang without resorting to using a 'Real Time Operating System' (RTOS)
To keep the appearance of 'real time' the loop() method must run as quickly as possible without being held up. The approach here aims to keep the loop() method running continually so that your tasks are called as often as possible.
There are a number of blockages you need to avoid. The 'fixes' covered here are generally applicable all Arduino boards and don't involve leaning a new framework. However they do involve some considered programming choices. The following topics will be covered:-
Simple
Multi-tasking in Arduino
Add a loop timer
Rewriting the Blink Example as a task
Another Task
Doing two things at once
Get rid of delay() calls, use millisDelay
Buffering Print Output
Getting User Input without blocking
Temperature
Controlled Damper
Adding the Temperature Sensor
Modifying Arduino libraries to remove delay() calls
Giving Important Tasks Extra Time
ESP32 Damper Remote Control
Simple Multi-tasking versus ESP32 FreeRTOS
The first thing to do is to add a loop timer to keep track of how long it takes your loop() method to run. It will let you know if one or more of your tasks in holding things up. As we will see below, third party libraries often have delays built in that will slow down your loop() code.
The loopTimer library (which also needs millisDelay library) provides a simple timer that keeps track of the maximum and average time it take to run the loop code. Download these two zip files, loopTimer.zip and millisDelay.zip, and use Arduino IDE menu Sketch -> Include Library -> Add.ZIP library to add them in. Insert #include <loopTimer.h> at the top of the file and then add loopTimer.check(&Serial); to the top of your loop() method. The loopTime library uses millisDelay library, which is why you need to install that as well.
#include <loopTimer.h> … void setup() { Serial.begin(9600); … } void loop() { loopTimer.check(&Serial); …. }
loopTimer.check(&Serial) will print out the results every 5sec. You can suppress the printing by omitting the Serial argument, i..e loopTimer.check() and then call loopTimer.print(&Serial) later. You can also create extra named timers from the loopTimerClass that will add that name to their output. e.g. loopTimerClass task1Timer("task1");
The loopTimer library includes a number of examples. LoopTimer_BlinkDelay.ino is the 'standard' Blink code with a loop timer added
Running the LoopTimer_BlinkDelay.ino gives the following output
loop uS Latency 5sec max:2000028 avg:2000028 sofar max:2000032 avg:2000029 max
As this shows the loop() code takes 2sec (2000000 uS) to run. So not even close to 'real time' if you are trying to do anything else.
Lets rewrite the blink example as task in Simple Multi-tasking Arduino, BlinkDelay_Task.ino
// the task method void blinkLed13() { digitalWrite(LED_BUILTIN, HIGH); // turn the LED on (HIGH is the voltage level) delay(1000); // wait for a second digitalWrite(LED_BUILTIN, LOW); // turn the LED off by making the voltage LOW delay(1000); // wait for a second } // the loop function runs over and over again forever void loop() { loopTimer.check(&Serial); blinkLed13(); // call the method to blink the led }
Now lets write another task that prints the current time in mS to the Serial every 5 secs, PrintTimeDelay_Task.ino
// the task method void print_mS() { Serial.println(millis()); // print the current mS delay(5000); // wait for a 5 seconds }
Here is some of the output when that task is run just by itself (without the blink task)
10021 loop uS Latency 5sec max:5007288 avg:5007288 sofar max:5007288 avg:5007288 max 15049
The millseconds is printed every 5secs and the loopTimer shows the loop is taking about 5secs to run.
Putting the two task in one sketch clearly shows the problem most people face when trying to do more than one thing with their Arduino. PrintTime_BlinkDelay_Task.ino
void loop() { loopTimer.check(&Serial); blinkLed13(); // call the method to blink the led print_mS(); // print the time }
A sample of the output now shows that now the loop() takes 7 secs to run and so the blinkLed13() and the print_mS() tasks are only call once every 7secs
14021 loop uS Latency 5sec max:7000288 avg:7000288 sofar max:7000288 avg:7000288 max 21042
Clearly the delay(5000) and the two delay(1000) are the problem here.
The PrintTime_Blink_millisDelay.ino example replaces the delay() calls with millisDelay timers. See How to code Timers and Delays in Arduino for a detailed tutorial on this.
void blinkLed13() { if (ledDelay.justFinished()) { // check if delay has timed out ledDelay.repeat(); // start delay again without drift ledOn = !ledOn; // toggle the led digitalWrite(led, ledOn?HIGH:LOW); // turn led on/off } // else nothing to do this call just return, quickly } void print_mS() { if (printDelay.justFinished()) { printDelay.repeat(); // start delay again without drift Serial.println(millis()); // print the current mS } // else nothing to do this call just return, quickly }
Running this example code on an Arduino UNO gives
15001 loop uS Latency 5sec max:7280 avg:13 sofar max:7280 avg:13 max 20004
So now the loop() code runs every 7.28mS and you will see the
LED blinking on and off every 1sec and the every 5sec the
milliseconds will be printed to Serial. You now have two tasks
running “at the same time”.
The 7.2mS is due to the print() statements as we will see next.
However delay() is not the only thing that can hold up your loop from running quickly. The next most common thing that blocks your loop() is print(..) statements.
The LongPrintTime_Blink.ino adds some extra description text as the LED is turned On and Off.
void blinkLed13() { if (ledDelay.justFinished()) { // check if delay has timed out ledDelay.repeat(); // start delay again without drift ledOn = !ledOn; // toggle the led Serial.print("The built-in board LED, pin 13, is being turned "); Serial.println(ledOn?"ON":"OFF"); digitalWrite(led, ledOn?HIGH:LOW); // turn led on/off } // else nothing to do this call just return, quickly }
When you run this example on an Arduino UNO board, the loop() run time goes from 7.2mS to 62.4mS.
…. The built in board LED, pin 13, is being turned OFF The built in board LED, pin 13, is being turned ON The built in board LED, pin 13, is being turned OFF 40000 loop uS Latency 5sec max:62400 avg:13 sofar max:62400 avg:13 max
As you add more debugging output the loop() gets slower and slower.
What is happening? Well the print(..) statements block once the TX buffer in Hardware Serial is full waiting for the preceding bytes (characters) to be sent. At 9600 baud it takes about 1mS to send each byte to the Serial port. In the UNO the TX buffer is 64 bytes long and the loopTimer Latency message is 66 bytes long so every 5sec it fills the buffer and the “led ON” “led OFF” message is blocked waiting for 53 bytes of the Latency debug message to be sent so there is room for the ON/OFF message. The print mS also blocks waiting for another 7 bytes (including the /r/n) to be sent. The net result is that the loop() is delayed for 62mS waiting for the print() statements to send the output to Serial.
This is a common problem when adding debug print statements and other output. Once you print more than 64 chars in the loop() code, it will start blocking. Increasing the Serial baud rate to 115200 will reduce the delay but does not remove it.
pfodBufferedStream can be used to avoid blocking the loop() code due to prints(..) by providing a larger buffer to print to and then slowly releasing characters to the Serial port. It actually runs as another task.
Install the pfodParser
library which contains the pfodBufferedStream
class and then run the
BufferedPrintTime_Blink.ino
example. With a 130 byte buffer the
print(..)
statements don't block.
The bufferedStream is connected to the Serial and thereafter the sketch prints to the bufferedStream. Calling any bufferedStream print/read method will release a buffered character if appropriate, so a call to bufferedStream.available(); is added to the loop to ensure characters are released. This runs like another background task releasing buffered characters to Serial at 9600 baud.
void setup() { Serial.begin(9600); for (int i = 10; i > 0; i--) { Serial.println(i); delay(500); } bufferedStream.connect(&Serial); // connect buffered stream to Serial pinMode(LED_BUILTIN, OUTPUT); ledDelay.start(1000); // start the ledDelay, toggle every 1000mS printDelay.start(5000); // start the printDelay, print every 5000mS } // the task method void blinkLed13() { if (ledDelay.justFinished()) { // check if delay has timed out ledDelay.repeat(); // start delay again without drift ledOn = !ledOn; // toggle the led bufferedStream.print("The built-in board LED, pin 13, is being turned "); bufferedStream.println(ledOn?"ON":"OFF"); digitalWrite(led, ledOn?HIGH:LOW); // turn led on/off } // else nothing to do this call just return, quickly } // the task method void print_mS() { if (printDelay.justFinished()) { printDelay.repeat(); // start delay again without drift bufferedStream.println(millis()); // print the current mS } // else nothing to do this call just return, quickly } // the loop function runs over and over again forever void loop() { loopTimer.check(&bufferedStream); bufferedStream.available(); // call buffered stream task to release a char as necessary. blinkLed13(); // call the method to blink the led print_mS(); // print the time }
A sample of the output is.
The built-in board LED, pin 13, is being turned ON 10000 loop uS Latency 5sec max:1304 avg:17 sofar max:1304 avg:17 max
Now the loop() is running every 1.3mS. Of course if you add more print( ) statements then eventually you will exceed the bufferedStream's capacity. To avoid blocking at all, you can choose to set the pfodBufferedStream to just discard any bytes that won't fit in the buffer
pfodBufferedStream bufferedStream(9600,buf,bufSize, false); // false sets blocking == false, i.e. excess bytes are just dropped once buffer is full
The false argument means blocking is false, i.e. non-blocking. With
this setting you may loose some output, but you will not slow down
the loop() code waiting for print statements to be sent.
If you think your extra print statements are blocking the loop, just add the extra false argument to the pfodBufferedStream constructor and see if the loop() speeds up.
Another cause of delays is handling user input. The Arduino Stream class, which Serial extends, is typical of the Arduino libraries in that includes calls to delay(). The Stream class has a number of utility methods, find...(), readBytes...(), readString...() and parseInt() and parserFloat(). All of these methods call timedRead() or timedPeek() which enter a tight loop for up to 1sec waiting for the next character. This prevents your loop() from running and so these methods are useless if you need your Arduino to be controlling something as well as requesting user input.
The next example also illustrates how easy it is to pass data between tasks. Use either global variables or arguments to pass in values to a task and use global variables or a return statement to the return the results. No special locking is needed to ensure things work as you would like.
PfodNonBlockingInput, available in the pfodParser library, provides two non-blocking methods, readInputLine() and clearInput(), to collect user input while keeping your loop() code running.
Install the pfodParser library which contains the pfodNonBlockingInput class and then run the Input_Blink_Tasks.ino example. This cascades a nonBlockingInput with a bufferedStream. The readInputLine() method has an option to echo the user's input.
A small buffer is used to capture the user input
pfodNonBlockingInput nonBlocking; const size_t lineBufferSize = 10; // 9 + terminating null char lineBuffer[lineBufferSize] = ""; // start empty
The task to collect user input is
// task to get the first char user enters, input terminated by \r or \n // return 0 if nothing input char getInput() { char rtn = 0; int inputLen = nonBlocking.readInputLine(lineBuffer, lineBufferSize, true); // echo input if (inputLen > 0) { // got some user input either 9 chars or less than 9 chars terminated by \r or \n rtn = lineBuffer[0]; // collect first char for rtn. } return rtn; }
The blinkLed13 task now takes an argument to stop the blinking
void blinkLed13(bool stop) { if (ledDelay.justFinished()) { // check if delay has timed out ledDelay.repeat(); // start delay again without drift if (stop) { digitalWrite(led, LOW); // turn led on/off ledOn = false; return; } ledOn = !ledOn; // toggle the led digitalWrite(led, ledOn ? HIGH : LOW); // turn led on/off } // else nothing to do this call just return, quickly }
The loop() is now
void loop() { loopTimer.check(&nonBlocking); char in = getInput(); // call input task, this also releases buffered prints if (in != 0) { bufferedStream.print(F("User entered:")); bufferedStream.println(in); if (in != 's') { stopBlinking = false; } else { stopBlinking = true; } lineBuffer[0] = '\0'; // clear buffer for reuse // prompt user again if (stopBlinking) { bufferedStream.println(F("Enter any char to start Led Blinking:")); } else { bufferedStream.println(F("Enter s to stop Led Blinking:")); } nonBlocking.clearInput(); // clear out any old input waiting to be read. This call is non-blocking } blinkLed13(stopBlinking); // call the method to blink the led print_mS(); // print the time }
A sample of the output is below. The loop() time is still ~1mS even while clearing out old Serial data and waiting for new user input.
s User entered:s Enter any char to start Led Blinking: 15000 loop uS Latency 5sec max:1164 avg:36 sofar max:1164 avg:36 max
So now the 'simple multi-tasking' sketch is controlling the Led blinking via a user input command while still printing out the milliseconds every 5 sec.
Now that we have a basic multi-tasking sketch that can do multiple things “at the same time”, print output and prompt for user input, we can add the temperature sensor and stepper motor libraries to complete the Temperature Controlled Damper sketch.
The next task in this project is to read the temperature that is going to be used to control the damper. Most Arduino sensor libraries use calls to delay() to wait for the reading to become available. To keep your Arduino loop() running you need to remove these calls to delay(). This takes some work and code re-organization. The general approach is to start the measurement, set a flag to say a measurement is under way, and start a millisDelay to pick up the result.
For the temperature sensor we are using Adafruits's MAX31856 breakout board. The MAX31856 uses the SPI interface which uses pin 13 for the SCK, so the led in the blinkled13 task is moved to pin 7. You don't need the breakout board to run the sketch, it will just return 0 for the temperature.
As a first attempt we will use the Adafruit's MAX31856 library (local copy here). The sketch TempDelayInputBlink_Tasks.ino, adds a readTemp() task. For simplicity this task does not check for thermocouple faults. A full implementation should.
// return 0 if have new reading and no errors // returns -1 if no new reading // returns >0 if have errors int readTemp() { tempReading = maxthermo.readThermocoupleTemperature(); return 0; }
And the loop() is modified to allow the user to start and stop taking temperature readings. This is an example of using a flag, stopTempReadings, to skip a task that need not be run.
void loop() { loopTimer.check(&nonBlocking); char in = getInput(); // call input task, this also releases buffered prints if (in != 0) { bufferedStream.print(F("User entered:")); bufferedStream.println(in); if (in != 's') { stopTempReadings = false; } else { stopTempReadings = true; } lineBuffer[0] = '\0'; // clear buffer for reuse // prompt user again if (stopTempReadings) { nonBlocking.println(F("Enter any char to start reading Temp:")); } else { nonBlocking.println(F("Enter s to stop reading Temp:")); } nonBlocking.clearInput(); // clear out any old input waiting to be read. This call is non-blocking } blinkLed13(stopTempReadings); // call the method to blink the led printTemp(); // print the temp if (!stopTempReadings) { int rtn = readTemp(); // check for errors here } }
The print_mS() is replaced with a printTemp() task
void printTemp() { if (printDelay.justFinished()) { printDelay.repeat(); // start delay again without drift //bufferedStream.println(millis()); // print the current mS if (stopTempReadings) { bufferedStream.println(F("Temp reading stopped")); } else { bufferedStream.print(F("Temp:")); bufferedStream.println(tempReading); } } // else nothing to do this call just return, quickly }
The led output will only blinks we are taking temperature readings.
Running the TempDelayInputBlink_Tasks.ino on an UNO with no breakout board attached (that is the SPI leads are not connected) gives
Enter any char to start reading Temp: Temp reading stopped loop uS Latency 5sec max:452 avg:36 sofar max:452 avg:36 max Temp reading stopped loop uS Latency 5sec max:456 avg:36 sofar max:456 avg:36 max r User entered:r Enter s to stop reading Temp: loop uS Latency 5sec max:253428 avg:388 sofar max:253428 avg:388 max Temp:-0.01 loop uS Latency 5sec max:252192 avg:251747 sofar max:253428 avg:251747 max
As you can see before we start taking reading the loop() runs every 0.46mS. Once we start taking readings, the loop() slows to a crawl, 250mS. The problem is the delay(250) which is built into Adafruit's MAX31956 library. Searching through the library code from shows that there is only one use of delay in the oneShotTemperature() method, which adds a delay(250) at the end to give the board time to read the temperature and make the result available.
Fixing
this library turns out to be relatively straight forward. Remove the
delay(250)
at the
end of the oneShotTemperature()
method
and delete the calls to oneShotTemperature()
from
readCJTemperature()
and
readThermocoupleTemperature().
The modified library, MAX31956_noDelay, is available here.
To use the modified noDelay library, we need to start a reading and then come back is a little while later to pick up the result. The readTemp() task now looks like
int readTemp() { if (!readingStarted) { // start one now maxthermo.oneShotTemperature(); // start delay to pick up results max31856Delay.start(MAX31856_DELAY_MS); } if (max31856Delay.justFinished()) { readingStarted = false; // can pick up results now tempReading = maxthermo.readThermocoupleTemperature(); return 0; // new reading } return -1; // no new reading }
Running the modified sketch TempInputBlink_Tasks.ino, gives the output below. The loop() runs in ~2mS while taking temperature readings.
Enter any char to start reading Temp: Temp reading stopped loop uS Latency 5sec max:452 avg:29 sofar max:452 avg:29 max r User entered:r Enter s to stop reading Temp: Temp:0.00 loop uS Latency 5sec max:2016 avg:233 sofar max:2016 avg:233 max Temp:0.00
The last part of this simple multi-tasking temperature controlled damper is the damper's stepper motor control. Here we are using the AccelStepper library to control the damper's stepper motor. The accelStepper's run() method has to be called for each step. That means in order to achieve the maximum 1000 steps/sec, the run() method needs to be called at least once every 1mS.
As a first attempt, just add the stepper motor library and control. Since this tutorial is about the software and not the hardware, it will use a very simple control and just move the damper to fixed positions depending on temperature. 0 degs to 100 degs will be mapped into 0 to 5000 steps position. To test the software without a temperature board, the user can input numbers 0 to 5 to simulate temperatures 0 to 100 degs. The readTemp() task will still be called but its result will be ignored.
There are two new tasks setDamperPosition() to convert temp to position and runStepper() to run the AccelStepper run() method.
void setDamperPosition() { if (closeDampler) { stepper.moveTo(0); } else { long stepPosition = simulatedTempReading * 50; stepper.moveTo(stepPosition); } } void runStepper() { stepper.run(); }
The loop() handles the user input temperature simulation and adds these two extra tasks on the end
void loop() { loopTimer.check(&bufferedStream); char in = getInput(); // call input task, this also releases buffered prints if (in != 0) { bufferedStream.print(F("Cmd Entered:")); bufferedStream.println(in); closeDampler = false; if (in == '0') { simulatedTempReading = 0.0; } else if (in == '1') { simulatedTempReading = 20.0; } else if (in == '2') { simulatedTempReading = 40.0; } else if (in == '3') { simulatedTempReading = 60.0; } else if (in == '4') { simulatedTempReading = 80.0; } else if (in == '5') { simulatedTempReading = 100.0; } else { closeDampler = true; bufferedStream.println(F("Close Damper")); } lineBuffer[0] = '\0'; // clear buffer for reuse // prompt user again nonBlocking.clearInput(); // clear out any old input waiting to be read. This call is non-blocking } blinkLed7(closeDampler); // call the method to blink the led printTemp(); // print the temp int rtn = readTemp(); // check for errors here setDamperPosition(); runStepper(); }
Running the FirstDamperControl.ino sketch give the following timings
Temp:60.00 Position current:1530 loop uS Latency 5sec max:2352 avg:1122 sofar max:2352 avg:1122 max
Since the loop() only runs every 2.3mS, runStepper() is only called that often. We need to add more calls to runStepper() so that it is called more frequently. Moving the loopTimer from loop() into the runStepper() task allows us to monitor how often runStepper() is called.
void runStepper() { loopTimer.check(&bufferedStream); // moved here from loop() stepper.run(); }
The FinalDamperControl.ino, adds two more calls to runStepper() around the printTemp() task
… blinkLed7(closeDampler); // call the method to blink the led runStepper(); // <<<< extra call here printTemp(); // print the temp runStepper(); // <<<< extra call here int rtn = readTemp(); // check for errors here setDamperPosition(); runStepper(); }
Running the FinalDamperControl.ino, gives these timings
Temp:60.00 Position current:1515 loop uS Latency 5sec max:1252 avg:382 sofar max:1252 avg:382 max
Adding more extra calls to runStepper() does not noticeably improve the timings. So the UNO, with a 16Mhz clock, is just not quite fast enough to scan for user input, print output, blink the led and run the stepper at 1000 steps/sec. 1.25mS limits the maximum speed of the stepper to 800 steps/min. To do better we need to use a faster processor. The ESP32's clock is 80Mhz, so lets try it.
Without making any changes to the FinalDamperControl.ino sketch, recompile and run it on an ESP32 board. Here we are using a Sparkfun ESP32 Thing. The timings for runStepper() are now
Temp:60.00 Position current:379 loop uS Latency 5sec max:85 avg:13 sofar max:126 avg:13 max
So running on an ESP32, there is no problem achieving 1000 steps/sec for the stepper motor. Using an ESP32 also gives you the ability to control the damper via WiFi, BLE or Classic Bluetooth.
Note that although the ESP32 is a dual core processor running FreeRTOS, no changes were needed to run the “simple multi-tasking” sketch on it. The loop() code runs on core 1, leaving core 0 free to run the communication code. You have a choice of WiFi, BLE or Classic Bluetooth for remote control of the damper system. WiFi is prone to 'Half-Open' connections and requires extra work to avoid problems. BLE is slower with smaller data packets and requires different output buffering. If you are using the free pfodDesigner Andoid app to create your control menu to run on pfodApp then the correct code for these cases are generated for you.
Here we will use Classic Bluetooth as it the simplest to code and easily connects to a terminal program on old computers as well as mobiles.
The ESP32DamperControl.ino sketch has the necessary mods. The output is now redirected to the SerialBT and the baud rate increased to 115200. Once you see “The device started, now you can pair it with Classic bluetooth!” in the Serial Monitor, you can pair with your computer or mobile. After pairing with the computer, a new COM port was created on the computer and TeraTerm for PC (or CoolTerm Mac/PC) can be used to connect and control the damper. On your Android mobile you can use a bluetooth terminal app such as Bluetooth Terminal app.
Of course now that you have finished checking the timings you can comment out the loopTimer.check() statement. You could also add you own control menu. The free pfodDesigner Android app lets you do that easily and generate the menu code for you to use with the, paid, pfodApp.
Given that “simple multi-tasking” works on any Arduino board, why would you want to use ESP32 FreeRTOS or other RTOS system? Well probably the most compelling reason is that RTOS systems generally are tolerant of delay() calls so you can use third-party libraries unchanged. Other then that, using ESP32 FreeRTOS is not as straight forward as “simple multi-tasking”.
ESP32's FreeRTOS is a cooperative multi-tasking system, so you have to program delays into your tasks to give other tasks a chance to run. You need to learn new methods for starting tasks and if you use the default method, your task can be run on either core, so you can find your task competing with the high priority Radio tasks for time. Also if you have multiple tasks distributed across the two cores, you have to worry about safely transferring data between the tasks in a thread safe manner, i.e. locks, semaphores, critical sections etc. Finally due to a quirk in the way the ESP32 implements the task switching, you can find your task is not called at all, or called less often then you would expect. You can code around this problem, but it takes extra effort.
In a preemptive RTOS system as used by TeensyThreads it can be difficult to force a tasks like the AccelStepper run() method to run as often as you want.
All RTOS systems add an extra overhead of support code with its own set of bugs and limitations. So all in all, the recommendation is to code using the “simple multi-tasking” approach that will run on any Arduino board you choose. If you want to add a communication's module, then the ESP32's second core provides it without impacting your code and having two separate cores minimizes the impact of the underlying RTOS.
This tutorial presented “simple multi-tasking” for any Arduino board. The detailed example sketches showed how to achieve 'real time' execution limited only by the cpu's clock, by replacing delay() with millisDelay, buffering output and getting user input without blocking. The loopTimer lets you bench mark how responsive your sketch is. As a practical example a temperature controlled, stepper motor driven damper program was built.
Finally the example sketch was simply recompiled for an ESP32 that provides a second core for remote control via WiFi, BLE or Classic Bluetooth, without impacting the responsiveness of original code.
|
https://www.forward.com.au/pfod/ArduinoProgramming/RealTimeArduino/index.html
|
CC-MAIN-2019-39
|
refinedweb
| 5,280
| 64
|
Multiple Dwellings
February 20, 2009
Cooper doesn’t live on the bottom floor; he also doesn’t live on the top floor, because Miller lives above him. Therefore, Cooper must live on floors two through four, as does Fletcher, and since they don’t live on adjacent floors, one of them must live on the second floor and the other on the fourth floor. Assume for the moment that Fletcher lives on the second floor. Then Smith must live on the top floor, since the second and fourth floors are already occupied and he can’t live on the first or third floors adjacent to Fletcher. But then there is no place for Miller to live, since Cooper is on the fourth floor and Miller must be above him. Thus, the assumption that Fletcher lives on the second floor is impossible, so Fletcher lives on the fourth floor and Cooper lives on the second floor. Smith must live on the first floor, since he doesn’t live adjacent to Fletcher on the third or fifth floors, and the second floor is already occupied. Baker must live on the third floor, since he doesn’t live on the top floor. And Miller lives on the top floor, since it is the only place left, and it is above Cooper on the fourth floor.
This problem is easily solved with John McCarthy’s
amb operator, which takes zero or more expressions and non-deterministically returns the value of one of them if it will lead to the success of the overall expression.
Amb is an angelic operator, because it always knows the right answer. It works by backtracking, but the client program never sees the backtracking; from the point of view of the client program, it is as if
amb mysteriously knows the right answer. Here is an implementation of
amb:
(define (fail)
(error 'amb "tree exhausted"))
(define-syntax amb
(syntax-rules ()
((amb) (fail))
((amb expr) expr)
((amb expr ...)
(let ((prev-fail fail))
((call-with-current-continuation
(lambda (success)
(call-with-current-continuation
(lambda (failure)
(set! fail failure)
(success (lambda () expr))))
...
(set! fail prev-fail)
prev-fail)))))))
(define (require condition)
(if (not condition) (amb)))
Given
amb, the puzzle is easy to solve. We arrange five variables, one for each dweller, require that the five variables have distinct values, and require each of the conditions given in the puzzle statement.
(define (distinct? xs)
(cond ((null? xs) #t)
((member (car xs) (cdr xs)) #f)
(else (distinct? (cdr xs)))))
))))
The solution can be seen at.
> (multiple-dwelling)
((baker 3) (cooper 2) (fletcher 4) (miller 5) (smith 1))
Any constraint puzzle can be formulated in this way, using
amb; for instance,
amb provides an easy solution for sudoku puzzles. McCarthy’s original article (from 1961!) describing
amb is at. Abelson and Sussman give an excellent description at.
In an appartment house?
P!
Haskell (assuming 0 = bottom floor):
[…] Multiple Dwellings? […]
After some false starts, I came up with Haskell solution that isn’t half bad. Here’s mine:
Clojure code using only a list comprehension for this particular problem.
nb: in my country floors start at zero
package main
import "fmt"
func pass(b, c, f, m, s int) bool {
return b < 5 && c > 1 && f > 1 && f < 5 && m > c && f-s != 1 && f-s != -1 && f-c != 1 && f-c != -1 &&
b != c && b != f && b != m && b != s && c != f && c != m && c != s && f != m && f != s && m != s &&
b >= 1 && c >= 1 && f >= 1 && m >= 1 && s >= 1 && b <= 5 && c <= 5 && f <= 5 && m <= 5 && s <= 5
}
func main() {
for l := 0; l < 55555; l++ {
if pass(l%10,l/10%10,l/100%10,l/1000%10,l/10000%10) {
fmt.Printf("lives %d\n", l)
}
}
}
Great problem! I used to do these logic puzzles as a kid by marking off squares on a grid. Here’s my solution in Scala:
|
http://programmingpraxis.com/2009/02/20/multiple-dwellings/2/
|
CC-MAIN-2015-27
|
refinedweb
| 651
| 69.82
|
Programming for the Windows Tablet Foundation
I'm surprised by how many people don't know this trick... but equally surprised by how obscure the UI.("Arithex.com")][assembly: System.Reflection.AssemblyCopyright("© 2003 Shawn A. Van Ness. All rights reserved.")]
With C++ this is trivial -- you create a header file, and #include it from wherever you like. But C# doesn't have a #include directive. Or does:
<File RelPath = "AssemblyCompanyAndCopyright.cs" Link = "..\..\AssemblyCompanyAndCopyright.cs" SubType = "Code" BuildAction = "Compile" />
Note that "RelPath" attribute is not, as you might think, a relative path to the file. That would make sense! Rather, it's the relative path of where the node will appear in the Solution Explorer treeview. (This is the same "RelPath" attribute that you see on every other file in your C# project.) The "Link" attribute is the interesting one -- that's the relative path to the actual file (relative to the .csproj file). In Solution Explorer, the "linked' file will appear with an Explorer-style shortcut arrow emblazoned over its icon.
If you have some trepidation about hacking up your .csproj files, or if you absolutely, positively insist on using a mouse... right-click your project in Solutions Explorer, and select Add Existing Item. Browse to your shared C# source file, then click the unnoticeable little dropdown arrow next to the Open button. See the ctxmenu item entitled "Link File"? That's your man.
(There's always a catch.) For anything more complex than assembly-level attributes, one must think carefully about duplicating C# code across multiple assemblies. Consider the following hypothetical SharedConstants.cs file...
using System;namespace MyCompany {internal class SharedConstants { public const string Hello = "Hello"; public const string World = "World"; }}
This will produce a new, different type named MyCompany.SharedConstants in each and every assembly you "include" it in. If the constants are large, or if the sanctity of SharedConstants' type-identity is important to you (imagine, perhaps, defining an enum this way, then later developing methods and properties which attempt to pass parameters of that type across assembly boundaries) then this approach may be undesirable -- or it may flat out break.
In that case, your only real option is to build a shared assembly which defines the constants (or enumeration values, etc), then ref that assembly in your other projects. And that is, of course, what the founding fathers of .NET intended.
{Sudhakar's .NET Dump Yard;}
This rocks - thanks!
The second catch is that this is only available in VS.Net 2003.
I've been "linking" to C# files, successfully, since v7.0. What problem are you having?
Works fine for me in VS.NET 2002 as well.
One problem that I have encountered with this is in a similar scenario to your example: sharing assembly level attributes. I wanted all my component to have the same strong name and version number too. Works fine, right up until you have some depth to your filesystem hierarchy... At which point, you either need multiple copies of your key files, or you need to move to the alternative crypto container approach.
FYI -- this is a no go for ASP.NET projects...
Works fine otherwise...
Thanks for posting this tip. I was just about to do something awkward instead.
Thanks for posting this.
The solution works really well in console and stand-alone applications, not in ASP.NET. The 'link' option doesn't appear in the 'Open' dialog. Manually editing the .csproj file fails too, it doesn't seem to link in the file, or display it in the Solution Explorer. UGH!
As Ethan says above, this is a no-go for ASP.NET projects.
That doesn't bother me, because there are about 97 other things about VS's handling of ASP.NET projects that I find utterly unacceptable.
I recommend folks build all significant ASP.NET code as conventional "Class Library" projects (aka DLLs ;) and write only the most superficial UI code in the pages, to call into those libraries.
VS hokeyness aside, you'll be extra glad when you get around to developing alternative site UIs for different platforms and devices (think: mobile).
I assume this is a C# thing only as I do not see this in the IDE when using VB.NET.
No, it's there, Greg. Look harder.
(Note the comments above -- it's not there, for ASP.NET projects.)
Thanks for your recommendation. It provides the best way I have found yet to work around a substantial design flaw in the way .NET as a whole treats cross-architecture (OS) systems. Microsoft needs to come up with a better solution.
We have an extensive smart device project that should run easily on the desktop with less than 3% code change. As it turns out the Visual Studio environment makes this almost impossible without creating two complete source trees. Your recommended solution will help but will be expensive to sustain over time since I will have to maintain a separate project for each primary output architecture and then add a link to new files to every project every time.
I am a C# advocate because at the language level it fixes many things that have frustrated me for years with Java. I find it frustrating that Microsoft made this mistake, which could have been avoided with a little forethought.
In Java and Python I can simply copy my compiled class or JAR file anywhere the JVM exists and it will run without change, recompilation or re-linking. If I have that inevitable 3% to 5% of source that changes across operating systems, I simply change the Class Path to include directories where the modules specific for my local OS are located and it works like magic.
For Microsoft to ignore large scale cross OS execution and build strategies like this are almost unforgivable and clearly indicate that they did not think their deployment strategy for large scale cross OS environments through.
Microsoft should seriously consider adopting the class path or Python path environment variable strategy. They work much better to give implementation architects better choices without having to resort back to the development environment. EG: Replacing one XML implementation with a different one that is API compatible. As a whole C# would substantially benefit from adopting a more portable output like Python or Java.
The solution above sidesteps the issue of why the Smart device application does not automatically build a version that will run on XP. It is already a sub set and should be able to run through the standard .NET libraries unless the device specific features are used. In Java, we would not even need a separate version; it would work simply by copying the class file, changing the environment variable to point at the right supporting classes and running. It should be this easy in .NET except that Microsoft made a fundamental design mistake in their compilation and link strategy for .NET.
Thanks for your valuable tip. I defined a SharedConstants class as you described in the catch but I don't know How can I include it in my source code. Can you help me please?
|
http://weblogs.asp.net/savanness/archive/2003/07/22/10417.aspx
|
crawl-002
|
refinedweb
| 1,188
| 65.42
|
This is what I have, and it's causing some extremely weird discrepancies.
#include<iostream>
using namespace std;
void input_data(int data[], short size);
int main(){
short size;
int data[1000];
cout << "How many values would you like to store? ";
cin >> size;
while (size > 0){
cout << "Enter your " << size << " values:" << endl;
input_data(&data[1000], size);
cout << size << " values stored in array." << endl;
}
}
void input_data(int data[], short size)
{
int i;
for (i=0;i<size;i++){
cin >> data[i];
}
}
size
cout << size << " values stored in array." << endl;
input_data
size
You are overwriting the stack.
First you declare a function that takes an array:
void input_data(int data[], short size);
And then you call it. Instead of passing a pointer to an actual array like this:
input_data(data, size);
You are passing a pointer to the space immediately AFTER the array (which includes many things including size):
input_data(&data[1000], size);
While a parameter of
data and
&data[0] are essentially the same,
&data[1000] points to a completely different thing - it points to the address of
data[1000] which doesn't exist in the array because the array only has 1000 elements (from 0 to 999)
|
https://codedump.io/share/QpAOAkE5LY6H/1/function-changing-a-variable-even-when-passed-by-value
|
CC-MAIN-2017-17
|
refinedweb
| 196
| 54.46
|
Just a thought,I've been helping some people come up to speed on arduino, they are all working on projects that need to take in controls form buttons an keypads,and they have all come up with different libs from the arduino sites. I did not realise there were so many different libs that are designed to cope with buttons.
Suggestion , I'd guess that a large majority of arduino projects take in a sensor or two, that could be a button class, I don't suppose that we could have one standard library to handle buttons effectively ? how about one that can handle all sorts of button sensors, key pads scanned and none scanned.As I say, i know there are different libs for all these, I think my students have found every combination possible, which in some ways is good, it just makes using and debugging much harder than it should be for a simple function like reading a switch.
#include "MrmButton.h"const int button_pin = 2;Mrmbutton_pulldown button (button_pin);void setup (void){ button.setup ();}void loop (void){ int value; // only do something if the button changed state if (button.read (&value)) { if (value) { // do action if button was just pressed } else { // do action if button was just released } }}
Mrmbutton_pullup button (button_pin);
|
http://forum.arduino.cc/index.php?topic=117464.0
|
CC-MAIN-2015-32
|
refinedweb
| 215
| 62.61
|
UnBooks talk:Pulp Novel, the case of the dashing dame
From Uncyclopedia, the content-free encyclopedia
edit Pee review
I've gone mad. I'm pretty sure that I have, anyway. I've started calling people "palooka" and "dame".--Sir Modusoperandi Boinc! 05:29, 30 November 2006 (UTC)
- Well, you definitely nailed the pulpish style well. First things first - That template thing on the left is, as some of my peers would say, "hella long". Integrate it into the article better, shorten it, organize it, something. -- §. | WotM | PLS | T | C | A 05:35, 30 November 2006 (UTC)
- You mean the right, right? The left bit is the "real" story. Yes, normally the book/character template (on the right) is dead short. Sigh. That the template thingy that explains characters and whatnot is a tale that's almost as long as the "real" story is my favourite part. The story's good too. Odd, but melikes. I'll sit on it for awhile...and come back to it when I stop calling the guy at the 7/11 a big lug...--Sir Modusoperandi Boinc! 06:12, 30 November 2006 (UTC)
- I mooshed some of the "story" paragraphs together. I was writing it as I was saying it, see? Saying it like a hardnosed, down on his luck writer who's tryin' to make a page. The extra paragraphs added punch. Punch like a flyweight boxer punchdrunk from going mano a mano with the champ. Ten rounds of glory, then darkness. Fuck, I'm still doing it...--Sir Modusoperandi Boinc! 06:22, 30 November 2006 (UTC)
- I squished the template text to emphasize the "story" (or de-ephasize the book description a bit). Is the template text too small? The grotesquely overwrought text of the template is still my favourite bit. It juxtaposes nicely with the moderately overwrought text of the "story", which is also my favourite bit. The lollipop on the book cover is my favourite bit too. Of course, I am quite mad.--Sir Modusoperandi Boinc! 07:09, 30 November 2006 (UTC)
edit V1.1
I swapped the "template" text down to footnotes. I don't much like the look, but it makes it much easier to read.--Sir Modusoperandi Boinc! 07:09, 2 December 2006 (UTC)
- Agreed. Also, I want to say that some of the thumbnail images are overlapping each other, which always bugs me. Like I said before, the major problem with this page is formatting, and not actual content (which is great). You may want to make the footnotes into a couple sections/sub-sections. And if you could do me a favor and look at this for me... -- §. | WotM | PLS | T | C | A 07:16, 2 December 2006 (UTC)
- Formatting is hard. They're un-overlapped as best I could. I'm sitting on it for now (the "template" text as footnotes is growing on me). I'll take a look at your page.--Sir Modusoperandi Boinc! 07:27, 2 December 2006 (UTC)
- Okay, this "sitting on it" thing wasn't working, so I messed with it some more. I'm guessing that soon, I will mess with it some more...and more...and more...--Sir Modusoperandi Boinc! 11:50, 2 December 2006 (UTC)
- OK, at this point I think it's about as readable as it will get. I think that it's ready to don the UnBooks namespace. If you have any personal tweaks/additions you want to add, go ahead, but I stand for the masses when I say: The masses like it. -- §. | WotM | PLS | T | C | A 20:11, 2 December 2006 (UTC)
- I'm setting it aside for a few days (serious this time!). I'll come back to it when I speak modern english again. Then, if I find no commas to move, delete, or add, I'll release it on an unprepared and unprotected mainspace.--Sir Modusoperandi Boinc! 22:56, 2 December 2006 (UTC)
edit Argh!
I had to let this one go. It was fun to write...for awhile, but I'm a "method writer", so everyone I know thinks I've lost my mind (what with me talking like the protagonist in a pulp novel for the last couple of weeks and all). I'm fairly certain they'll ask for a "sample" at work, too. It probably doesn't help that the same thing has happened before.--Sir Modusoperandi Boinc! 23:28, 4 December 2006 (UTC)
edit nice
I massaged your prose a little, tried to introduce more 50's slang. Your sentences were good, but too many periods, too broken up. Part of the joy of those old noir books and movies was the fast pace of the dialogue - bon mots would whip past buried in the banter and only the sharpest ears wold catch them all. You can only have one punchline and it plays better when the thoughts preceding it are continuous. Great work though, nice characterization of the dick. Very Leslie Nielson.—The preceding unsigned comment was added by Super90 (talk • contribs)
- It was me, pretty much. I'm like that. But taller. --Sir Modusoperandi Boinc! 14:02, 6 December 2006 (UTC)
edit also
whenever I could, I cut out soft consonant sounds in favor of K's, T's, and D's. Even the word "detective "is a symphony of hard consonants. Really, that's noir dialog in a nutshell.—The preceding unsigned comment was added by Super90 (talk • contribs)
- Cool, and thanks for the tweaks.--Sir Modusoperandi Boinc! 13:46, 6 December 2006 (UTC)
edit Useless praise and no constructive nothing
This is wonderful. Whoever wrote this, it's brilliant. I want more of it. I imagine a whole fucking novel in this style. I'd eat it right up, like you eat up a dame when she's actually made from ham. --Bringa 16:15, 14 December 2006 (UTC)
- Thanks, and Super90 thanks you too. --Sir Modusoperandi Boinc! 17:16, 14 December 2006 (UTC)
edit Congrats
I really like Pulp Fictions, altough it is hard to find here in my country. I really enjoyed your writing and humour, it reminded me of that noir Steve Martin movie (can't quite remember the name, and I'm lazy enough not to search for it right now). Anyways, congratulations, you achieved one more happy reader (well, wasn't that your point?). Eh, and sorry for the mispellings and poor english, is not my native language.--FoxyBabe 19:16, 15 December 2006 (UTC)
- "Dead Men Don't Wear Plaid", I think...and thanks. --Sir Modusoperandi Boinc! 19:24, 15 December 2006 (UTC)
|
http://uncyclopedia.wikia.com/wiki/UnBooks_talk:Pulp_Novel,_the_case_of_the_dashing_dame?oldid=2362067
|
CC-MAIN-2014-15
|
refinedweb
| 1,101
| 76.93
|
September 21, 2008.
There.
notes
Now it's time to actually get started.
Create a new Django project.
django-admin.py startproject ajax_tut
Create the
notes app.
python manage.py startapp notes
Create a handful of directories.
mkdir media mkdir notes/templates mkdir notes/templates/notes
Open up
settings.py to use the
Django media server (which is for development purposes
only, don't use it for deployment!), and to pass all
incoming urls to the
ajax_tut/notes/urls.py file
(which we haven't written quite yet).
ajax_tut/urls.py should file
in the
notes app.
Now is a good time to run the development server and check to see if any typos have entered the system.
python manage.py runserver
Then navigate to
and you should get a shiny error page complaining that the
ajax_tut.notes.urls module doesn't exist.
We're on the right track.
Next we want to create the
Note model that will store the
data for our webapp. Open up
ajax_tut/notes/models.py method file.
from django.conf.urls.defaults import * from models import Note notes = Note.objects.all() urlpatterns = patterns( '', (r'^$', 'django.views.generic.list_detail.object_list', dict(queryset=notes)), (r'^note/(?P<slug>[-\w]+)/$', 'django.views.generic.list_detail.object_detail', dict(queryset=notes, slug_field='slug')), (r'^create/$','notes.views.create_note'), (r'^note/(?P<slug>[-\w]+)/update/$','notes.views.update_note'), )
For all our displaying content needs we will using the two generic
views
list_detail.object_list and
list_detail.object_detail.
The former will display all the notes, and the later will display
one specific note based on the
slug parameter detected in the
url's regex.
The generic views operate on a queryset of objects, which is why
we have to create the
notes queryset and pass it to the views.
We are also specifying two custom views,
create_note and
update_note which.html and
notes/note_detail.html.
So our next step is to create those two templates, but
first we'll want to create a
base.html template.html template_note method
in
notes.views.
Let's go implement that.
Open up
notes/views.py, and we'll implement
create_note.
It'll need to make sure that it's recieving a POST request, and
and also that the request contains values for
title.html and.html
template,.html template.
Open up the
notes/notes_list.html file,_note function here is use the jQuery
ajax
function to send an asynchronous request to the
/create/
url. It extracts the values of the title and slug inputs
and passes them as POST parameters.
It also specifies the
done function to be called
once it receives a response. (We'll look at that function
in just a second.)
Finally, it is very important that
create_note return function function_note
function.html template as well, so we're going
to make sure we write reusable code and throw it into a
.js file function creates a new div element
with a specified error message, then fades it in, display
it for five seconds, and then fades it out (and removes
the div).
Then we have to load it in our
base.html template,
adding this line beneath loading the jQuery library:
<script type="text/javascript" src="/media/notes.js"></script>
And finally we have to integrate it into our
notes/note_list.html
template_note replace
to function functions
once when the page loads, so that they begin as spans.
Go ahead and test it out on the page.
You'll probably think to yourself that its pretty annoying, seeing
as the resizing really shifts things around. Lets throw a few lines
of CSS into
style.css to
function to complement the
display_error function we previously
created.
Go ahead and open
media/notes.js and and
display_error functions.
Open up
media/style.css view in
notes.views
flexible enough to handle recieving all values at once, or
to also handle recieving one value at a time, which allows
us to write this simple update function.
Also notice that
perform_update is caling the
done
function on completion, which we haven't written yet.
done looks like this:
var done = function(res, status) { if (status == "success") display_success("Updated successfully.", $(".text")); else display_error(res.responseText, $(".text")); }
It uses our helpful
display_error and
display_success
functions to handle most of the details.
Finally, we have to modify the
title_to_span and
slug_to_span functions to call send updates.
That is as simple as changing them to look like:
var title_to_span = function() { var title = $("#title"); perform_update("title", title.val()); // etc etc }
But that leads to a problem: we call the
_to_span functions_note request, and modify the
perform_update
function in
notes/note_detail.html, then you'd be pretty
close to finished.
You would have to rewrite
done.html
looks
|
https://lethain.com/intro-to-unintrusive-javascript-with-django/
|
CC-MAIN-2019-13
|
refinedweb
| 786
| 60.61
|
Chia-liang Kao <clkao@clkao.org> writes:
> I didn't ignore any request about the name change; if you have seen the
> README, I explicitly mention that: "all things are subject to change,
> including project name, binary name, perl module namespaces, depot
> spec, etc."
Sorry, Chia-Liang, I didn't mean to accuse you of ignoring us. I was
just pointing out that the name had not (so far as we know) actually
changed, and that therefore we have to be careful.
> The point about potential confusion was taken, altough I have never used
> nor even seen bitkeeper myself.
Understood. But that's not the issue in an identity dispute.
> I just can't make naming stop myself from doing interesting real works.
Huh? No one's asking you to do that.
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Thu Nov 6 06:19:06 2003
This is an archived mail posted to the Subversion Dev
mailing list.
|
http://svn.haxx.se/dev/archive-2003-11/0283.shtml
|
CC-MAIN-2016-18
|
refinedweb
| 172
| 66.03
|
February 2019
Volume 34 Number 2
[C#]
Minimize Complexity in Multithreaded C# Code
By Thomas Hansen | February 2019
Forks, or multithreaded programming, are among the most difficult things to get right when programming. This is due to their parallel nature, which requires a completely different mindset than linear programming with a single thread. A good analogy for the problem is a juggler, who must keep multiple balls in the air without having them negatively interfere with each other. It’s a major challenge. However, with the right tools and the right mindset, it’s manageable.
In this article, I dive into some of the tools I’ve crafted to simplify multithreaded programming, and to avoid problems such as race conditions, deadlocks and other issues. The toolchain is based, arguably, on syntactic sugar and magic delegates. However, to quote the great jazz musician Miles Davis, “In music, silence is more important than sound.” The magic happens between the noise.
Put another way, it’s not necessarily about what you can code, but rather what you can but choose not to, because you’d rather create a bit of magic between the lines. A quote from Bill Gates comes to mind: “To measure quality of work according to the number of lines of code, is like measuring the quality of an airplane by its weight.” So, instead of teaching you how to code more, I hope to help you code less.
The Synchronization Challenge
The first problem you’ll encounter with multithreaded programming is synchronizing access to a shared resource. Problems occur when two or more threads share access to an object, and both might potentially try to modify the object at the same time. When C# was first released, the lock statement implemented a basic way to ensure that only one thread could access a specified resource, such as a data file, and it worked well. The lock keyword in C# is so easily understood, that it single-handedly revolutionized the way we thought about this problem.
However, a simple lock suffers from a major flaw: It doesn’t discriminate read-only access from write access. For instance, you might have 10 different threads that want to read from a shared object, and these threads can be given simultaneous access to your instance without causing problems via the ReaderWriterLockSlim class in the System.Threading namespace. Unlike the lock statement, this class allows you to specify if your code is writing to the object or simply reading from the object. This enables multiple readers entrance at the same time, but denies any write code access until all other read and write threads are done doing their stuff.
Now the problem: The syntax when consuming the ReaderWriterLock class becomes tedious, with lots of repetitive code that reduces readability and complicates maintenance over time, and your code often becomes scattered with multiple try and finally blocks. A simple typo can also produce disastrous effects that are sometimes extremely difficult to spot later.
By encapsulating the ReaderWriterLockSlim into a simple class, all of a sudden it solves the problem without repetitive code, while reducing the risk that a minor typo will spoil your day. The class, as shown in Figure 1, is entirely based on lambda trickery. It’s arguably just syntactic sugar around some delegates, assuming the existence of a couple interfaces. Most important, it can help make your code much more DRY (as in, “Don’t Repeat Yourself”).
Figure 1 Encapsulating ReaderWriterLockSlim
public class Synchronizer<TImpl, TIRead, TIWrite> where TImpl : TIWrite, TIRead { ReaderWriterLockSlim _lock = new ReaderWriterLockSlim(); TImpl _shared; public Synchronizer(TImpl shared) { _shared = shared; } public void Read(Action<TIRead> functor) { _lock.EnterReadLock(); try { functor(_shared); } finally { _lock.ExitReadLock(); } } public void Write(Action<TIWrite> functor) { _lock.EnterWriteLock(); try { functor(_shared); } finally { _lock.ExitWriteLock(); } } }
There are just 27 lines of code in Figure 1, providing an elegant and concise way to ensure that objects are synchronized across multiple threads. The class assumes you have a read interface and a write interface on your type. You can also use it by repeating the template class itself three times, if for some reason you can’t change the implementation of the underlying class to which you need to synchronize access. Basic usage might be something like that shown in Figure 2.
Figure 2 Using the Synchronizer Class
interface IReadFromShared { string GetValue(); } interface IWriteToShared { void SetValue(string value); } class MySharedClass : IReadFromShared, IWriteToShared { string _foo; public string GetValue() { return _foo; } public void SetValue(string value) { _foo = value; } } void Foo(Synchronizer<MySharedClass, IReadFromShared, IWriteToShared> sync) { sync.Write(x => { x.SetValue("new value"); }); sync.Read(x => { Console.WriteLine(x.GetValue()); }) }
In the code in Figure 2, regardless of how many threads are executing your Foo method, no Write method will be invoked as long as another Read or Write method is being executed. However, multiple Read methods can be invoked simultaneously, without having to scatter your code with multiple try/catch/finally statements, or repeating the same code over and over. For the record, consuming it with a simple string is meaningless, because System.String is immutable. I use a simple string object here to simplify the example.
The basic idea is that all methods that can modify the state of your instance must be added to the IWriteToShared interface. At the same time, all methods that only read from your instance should be added to the IReadFromShared interface. By separating your concerns like this into two distinct interfaces, and implementing both interfaces on your underlying type, you can then use the Synchronizer class to synchronize access to your instance. Just like that, the art of synchronizing access to your code becomes much simpler, and you can do it for the most part in a much more declarative manner.
When it comes to multithreaded programming, syntactic sugar might be the difference between success and failure. Debugging multithreaded code is often extremely difficult, and creating unit tests for synchronization objects can be an exercise in futility.
If you want, you can create an overloaded type with only one generic argument, inheriting from the original Synchronizer class and passing on its single generic argument as the type argument three times to its base class. Doing this, you won’t need the read or write interfaces, since you can simply use the concrete implementation of your type. However, this approach requires that you manually take care of those parts that need to use either the Write or Read method. It’s also slightly less safe, but does allow you to wrap classes you cannot change into a Synchronizer instance.
Lambda Collections for Your Forks
Once you’ve taken the first steps into the magic of lambdas (or delegates, as they’re called in C#), it’s not difficult to imagine that you can do more with them. For instance, a common recurring theme in multithreading is to have multiple threads reach out to other servers to fetch data and return the data back to the caller.
The most basic example would be an application that reads data from 20 Web pages, and when complete returns the HTML back to a single thread that creates some sort of aggregated result based on the content of all the pages. Unless you create one thread for each of your retrieval methods, this code will be much slower than desired—99 percent of all execution time would likely be spent waiting for the HTTP request to return.
Running this code on a single thread is inefficient, and the syntax for creating a thread is difficult to get right. The challenge compounds as you support multiple threads and their attendant objects, forcing developers to repeat themselves as they write the code. Once you realize that you can create a collection of delegates, and a class to wrap them, you can then create all your threads with a single method invocation. Just like that, creating threads becomes much less painful.
In Figure 3 you’ll find a piece of code that creates two such lambdas that run in parallel. Notice that this code is actually from the unit tests of my first release of the Lizzie scripting language, which you can find at bit.ly/2FfH5y8.
Figure 3 Creating Lambdas
public void ExecuteParallel_1() { var sync = new Synchronizer<string, string, string>("initial_"); var actions = new Actions(); actions.Add(() => sync.Assign((res) => res + "foo")); actions.Add(() => sync.Assign((res) => res + "bar")); actions.ExecuteParallel(); string result = null; sync.Read(delegate (string val) { result = val; }); Assert.AreEqual(true, "initial_foobar" == result || result == "initial_barfoo"); }
If you look carefully at this code, you’ll notice that the result of the evaluation doesn’t assume that any one of my lambdas is being executed before the other. The order of execution is not explicitly specified, and these lambdas are being executed on separate threads. This is because the Actions class in Figure 3lets you add delegates to it, so you can later decide if you want to execute the delegates in parallel or sequentially.
To do this, you must create a bunch of lambdas and execute them using your preferred mechanism. You can see the previously mentioned Synchronizer class in Figure 3, synchronizing access to the shared string resource. However, it uses a new method on the Synchronizer, called Assign, which I didn’t include in the listing in Figure 1 for my Synchronizer class. The Assign method uses the same “lambda trickery” that I described in the Write and Read methods earlier.
If you’d like to study the implementation of the Actions class, note that it’s important to download version 0.1 of Lizzie, as I completely rewrote the code to become a standalone programming language in later versions.
Functional Programming in C#
Most developers tend to think of C# as being nearly synonymous with, or least closely related to, object-oriented programming (OOP)—and obviously it is. However, by rethinking how you consume C#, and by diving into its functional aspects, it becomes much easier to solve some problems. OOP in its current form is simply not very reuse friendly, and a lot of the reason for this is that it’s strongly typed.
For instance, reusing a single class forces you to reuse every single class that the initial class references—both those used through composition and through inheritance. In addition, class reuse forces you to reuse all classes that these third-party classes reference, and so on. And if these classes are implemented in different assemblies, you must include a whole range of assemblies simply to gain access to a single method on a single type.
I once read an analogy that illustrates this problem: “You want a banana, but you end up with a gorilla, holding a banana, and the rainforest where the gorilla lives.” Compare this situation with reuse in a more dynamic language, such as JavaScript, which doesn’t care about your type, as long as it implements the functions your functions are themselves consuming. A slightly more loosely typed approach yields code that is both more flexible and more easily reused. Delegates allow you to do that.
You can work with C# in a way that improves reuse of code across multiple projects. You just have to realize that a function or a delegate can also be an object, and that you can manipulate collections of these objects in a weakly typed manner.
The ideas around delegates present in this article build on those articulated in an earlier article I wrote, “Create Your Own Script Language with Symbolic Delegates,” in the November 2018 issue of MSDN Magazine (msdn.com/magazine/mt830373). This article also introduced Lizzie, my homebrew scripting language that owes its existence to this delegate-centric mindset. If I had created Lizzie using OOP rules, my opinion is that it would probably be at least an order of magnitude larger in size.
Of course, OOP and strongly typing is in such a dominant position today that it’s virtually impossible to find a job description that doesn’t mention it as its primary required skill. For the record, I’ve created OOP code for more than 25 years, so I’ve been as guilty as anyone of a strongly typed bias. Today, however, I’m more pragmatic in my approach to coding, and less interested in how my class hierarchy ends up looking.
It’s not that I don’t appreciate a beautiful class hierarchy, but there are diminishing returns. The more classes you add to a hierarchy, the less elegant it becomes, until it collapses under its own weight. Sometimes, the superior design has few methods, fewer classes and mostly loosely coupled functions, allowing the code to be easily extended, without having to “bring in the gorilla and the rainforest.”
I return to the recurring theme of this article, inspired by Miles Davis’ approach to music, where less is more and “silence is more important than sound.” Code is like that, too. The magic often lives between the lines, and the best solutions can be measured more by what you don’t code, rather than what you do. Any idiot can blow a trumpet and make noise, but few can create music from it. And fewer still can make magic, the way Miles did.
Thomas Hansen works in the FinTech and ForEx industry as a software developer and lives in Cyprus.
Discuss this article in the MSDN Magazine forum
|
https://docs.microsoft.com/en-us/archive/msdn-magazine/2019/february/csharp-minimize-complexity-in-multithreaded-csharp-code
|
CC-MAIN-2020-05
|
refinedweb
| 2,234
| 50.36
|
/namespace QgisGui The QgisGui namespace contains constants and helper functions used throughout the QGIS GUI. More...
/namespace QgisGui The QgisGui namespace contains constants and helper functions used throughout the QGIS GUI.
Convenience function for readily creating file filters.
Given a long name for a file filter and a regular expression, return a file filter string suitable for use in a QFileDialog::OpenFiles() call. The regular express, glob, will have both all lower and upper case versions added.
Definition at line 179.cpp.
Open files, preferring to have the default file selector be the last one used, if any; also, prefer to start in the last directory associated with filterName.
Stores persistent settings under /UI/. The sub-keys will be filterName and filterName + "Dir".
Opens dialog on last directory associated with the filter name, or the current working directory if this is the first time invoked with the current filter name.
This method returns true if cancel all was clicked, otherwise false
Definition at line 28 of file qgisgui.cpp.
/var ModalDialogFlags /brief Flags used to create a modal dialog (adapted from QMessageBox).
Using these flags for all modal dialogs throughout QGIS ensures that for platforms such as the Mac where modal and modeless dialogs have different looks, QGIS modal dialogs will look the same as Qt modal dialogs and all modal dialogs will look distinct from modeless dialogs. Althought not the standard Mac modal look, it does lack the minimize control which makes sense only for modeless dislogs.
The Qt3 method of creating a true Mac modal dialog is deprecated in Qt4 and should not be used due to conflicts with QMessageBox style dialogs.
Qt::WindowMaximizeButtonHint is included but will be ignored if the dialog is a fixed size and does not have a size grip.
Definition at line 48 of file qgisgui.h.
|
http://qgis.org/api/namespaceQgisGui.html
|
CC-MAIN-2015-35
|
refinedweb
| 304
| 62.17
|
kstat2_runq_enter, kstat2_waitq_enter, kstat2_waitq_exit, kstat2_runq_exit, kstat2_waitq_to_runq, kstat2_runq_back_to_waitq - update I/O kstat statistics
#include <sys/types.h> #include <sys/kstat2.h> void kstat2_waitq_enter(kstat2_io_t *kiop); void kstat2_waitq_exit(kstat2_io_t *kiop); void kstat2_runq_enter(kstat2_io_t *kiop); void kstat2_runq_exit(kstat2_io_t *kiop); void kstat2_waitq_to_runq(kstat2_io_t *kiop); void kstat2_runq_back_to_waitq(kstat2_io_t *kiop);
Solaris DDI specific (Solaris DDI)
Pointer to a kstat2_io(9S) structure.
A large number of I/O subsystems have at least two basic lists (or queues) of transactions they manage: one for transactions that are accepted for processing but the processing is yet to begin, and one for transactions that are actively being processed but not completed. For this reason, two cumulative time statistics are kept: wait (pre-service) time, and run (service) time.
The kstat2 queue family of functions manage the time based on the transitions between the driver wait queue and run queue.
The kstat2_waitq_enter() function is called when a request arrives and is placed into a pre-service state (such as just prior to calling disksort(9F)).
The kstat2_waitq_exit() function is used when a request is removed from its pre-service state. (such as just prior to calling the driver's start routine).
The kstat2_runq_enter() function is also called when a request is placed in its service state (just prior to calling the driver's start routine, but after kstat2_waitq_exit() function).
The kstat2_runq_exit() function is used when a request is removed from its service state (just prior to calling biodone(9F)).
The kstat2_waitq_to_runq() function is used to transition a request from the wait queue to the run queue. This is useful wherever the driver would have normally done a kstat2_waitq_exit() followed by a call to kstat2_runq_enter().
The kstat2_runq_back_to_waitq() function is used to transition a request from the run queue back to the wait queue. This may be necessary in some cases (write throttling is an example).
None
These functions can be called from user or kernel context.
biodone(9F), disksort(9F), kstat2_create(9F), kstat2_hold_bykid(9F), kstat2_nv_init(9F), kstat2(9S), kstat2_io(9S)
These transitions must be protected by holding the kstat's ks2_lock, and must be completely accurate (all transitions are recorded). Forgetting a transition might, for example, make an idle disk appear 100% busy.
|
https://docs.oracle.com/cd/E88353_01/html/E37855/kstat2-waitq-exit-9f.html
|
CC-MAIN-2019-43
|
refinedweb
| 360
| 52.8
|
A long term *BSD user, I decided to extend our GNU package nightly test system setup with Dragonfly BSD. This is an install under virtualisation (qemu or Xen).
The actual install went smoothly, but the package install have failed utterly.
I found and have followed the various methods suggested there. None works for me.
# uname -a DragonFly biko-dflybsd64.gmplib.org 2.6-RELEASE DragonFly v2.6.3-RELEASE #10: Mon May 3 09:57:53 PDT 2010 root@pkgbox64.dragonflybsd.org:/usr/obj/usr/src-misc/sys/X86_64_GENERIC x86_64
Method 1:
# pkg_radd bash pkg_add: Error: package `bash-4.1nb1' was built with a newer pkg_install version pkg_add: 1 package addition failed
(As far as I can tell, I am using the latest release of DragonFly. And even if I didn't, shouldn't it be possible to install a package with the existing tools?)
Method 2:
# cd /usr #.git cd /usr/pkgsrc&& git fetch origin fatal: The remote end hung up unexpectedly *** Error code 128
Stop in /usr.
(This is not a temporary problem. I retried this several consecutive days.)
Method 3 (from docs/handbook/handbook-pkgsrc-sourcetree-using/):
# cd /usr # cvs -d anoncvs@anoncvs.us.netbsd.org:/cvsroot co pkgsrc # cd shells/bash # make Unknown modifier '!'
Unknown modifier '!'
Unknown modifier '!'
... "../../mk/bsd.prefs.mk", line 717: if-less endif Unknown modifier 'u'
Variable PKG_OPTIONS is recursive.
(It is not my typo to get netbsd's code. This is what I am instructed to do by the DragonFly web at the directory indicated.)
|
http://leaf.dragonflybsd.org/mailarchive/users/2010-10/msg00156.html
|
CC-MAIN-2015-27
|
refinedweb
| 254
| 52.15
|
Server Rendering
The on the server (FOUC). To inject the style down to the client, we need to:
- Create a fresh, new
ServerStyleSheetsinstance on every request.
- Render the React tree with the server-side collector.
- Pull the CSS out.
- Pass the CSS along to the client.
On the client side, the CSS will be injected a second time before removing the server-side injected CSS.
Setting Up
In the following recipe, we are going to look at how to set up server-side rendering.
The theme
Create a theme that will be shared between the client and the server:
theme.js
import { createTheme } from '@material-ui/core/styles'; import red from '@material-ui/core/colors/red'; // Create a theme instance. const theme = createTheme({ palette: { primary: { main: '#556cd6', }, secondary: { main: '#19857b', }, error: { main: red.A400, }, background: { default: '#fff', }, }, }); export default theme;
The server-side
The following is the outline for what the server-side is going to look like. We are going to set up an Express middleware using app.use to handle all requests that come in to the server. If you're unfamiliar with Express or middleware, just know that the handleRender function will be called every time the server receives a request.
server.js
import express from 'express'; // We are going to fill these out in the sections to follow. function renderFullPage(html, css) { /* ... */ } function handleRender(req, res) { /* ... */ } const app = express(); // This is fired every time the server-side receives a request. app.use(handleRender); const port = 3000; app.listen(port);
Handling the Request
The first thing that we need to do on every request is create a new
ServerStyleSheets.
When rendering, we will wrap
App, the root component,
inside a
StylesProvider and
ThemeProvider to make the style configuration and the
theme available to all components in the component tree.
The key step in server-side rendering is to render the initial HTML of the component before we send it to the client side. To do this, we use ReactDOMServer.renderToString().
We then get the CSS from the
sheets using
sheets.toString().
We will see how this is passed along in the
renderFullPage function.
import express from 'express'; import React from 'react'; import ReactDOMServer from 'react-dom/server'; import { ServerStyleSheets, ThemeProvider } from '@material-ui/core/styles'; import App from './App'; import theme from './theme'; function handleRender(req, res) { const sheets = new ServerStyleSheets(); // Render the component to a string. const html = ReactDOMServer.renderToString( sheets.collect( <ThemeProvider theme={theme}> <App /> </ThemeProvider>, ), ); // Grab the CSS from the sheets. const css = sheets.toString(); // Send the rendered page back to the client. res.send(renderFullPage(html, css)); } const app = express(); app.use('/build', express.static('build')); // This is fired every time the server-side receives a request. app.use(handleRender); const port = 3000; app.listen(port);
Inject Initial Component HTML and CSS
The final step on the server-side is to inject the initial component HTML and CSS into a template to be rendered on the client side.
function renderFullPage(html, css) { return ` <!DOCTYPE html> <html> <head> <title>My page</title> <style id="jss-server-side">${css}</style> </head> <body> <div id="root">${html}</div> </body> </html> `; }
The Client Side
The client side is straightforward. All we need to do is remove the server-side generated CSS. Let's take a look at the client file:
client.js
import React from 'react'; import ReactDOM from 'react-dom'; import { ThemeProvider } from '@material-ui/core/styles'; import App from './App'; import theme from './theme'; function Main() { React.useEffect(() => { const jssStyles = document.querySelector('#jss-server-side'); if (jssStyles) { jssStyles.parentElement.removeChild(jssStyles); } }, []); return ( <ThemeProvider theme={theme}> <App /> </ThemeProvider> ); } ReactDOM.hydrate(<Main />, document.querySelector('#root'));
Reference implementations
We host different reference implementations which you can find in the GitHub repository under the
/examples folder:
Troubleshooting
Check out the FAQ answer: My App doesn't render correctly on the server.
|
https://v4.mui.com/guides/server-rendering/
|
CC-MAIN-2022-33
|
refinedweb
| 642
| 59.4
|
Directory structure
- Create a directory that is named exactly how you want your package to be named.
- Place all the files, folders and classes that you want to publish into this directory.
- Create files required by PyPI to prepare the project for distribution.
- It should look something like this:
Clean your code
- Remove all "print" statements from the code.
- Use logs instead of print statements(debug/info/warn etc..)
Configuring metadata
setup.cfg--> It is the configuration file for setuptools. It tells setuptools about your package(such as the name and version) as well as which code files to include. There are a variety of metadata and options supported here.
[metadata] description-file = README.md
setup.py--> It is the build script for which tells setuptools about your package(such as the name and version) as well as which code files to include.
setuptools is a library designed to facilitate packaging Python projects.
Open
setup.py and enter the following content. Change the name to include your username; this ensures that you have a unique package name and that your package doesn’t conflict with packages uploaded by other people.
from setuptools import setup, find_packages version = '0.0.1' # Any format you want with open("README.md", "r", encoding="utf-8") as fh: long_description = fh.read() setup( name='your-package-name', packages=find_packages(), version=version, license='MIT', description='Short description', long_description=long_description, long_description_content_type="text/markdown", author='Author Name', author_email='author@email.com', url='', download_url=f'{version}/repo-name-{version}.tar.gz', keywords=['Some', 'keywords'], install_requires=[ 'dependency-1', # All external pip packages you are importing 'dependency-2', ], classifiers=[ 'Development Status :: 3 - Alpha', 'Intended Audience :: Developers', 'Operating System :: OS Independent', 'Topic :: Software Development :: Build Tools', 'License :: OSI Approved :: MIT License', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', ], )
You can find details about each arguments here.
Creating README.md (optional but recommended)
Open README.md and enter details about your package. You can customize this as you’d like. This content can be displayed on the homepage of your package on PyPI.) 2021.
Create a PyPI account
Register yourself for a PyPI account here. Remember your username (not the Name, not the E-Mail Address) and your password, you will need it later for the upload process.
Upload your package to github/gitlab
Create a github/gitlab repo including all the above files and folders. Name the repo exactly as the package.
If uploading package to Gitlab,
- After uploading the required files, go to
Repository --> Tags --> New Tagand create new tag(tag name should be same as the version) for every release of the package to PyPI.
- This copied link should be the same as download_url argument in the
setup.pyfile.
If uploading package to GitHub,
NOTE:
Every time you want to update your package later on, upload a new version to gitlab/github, create a new release as discussed, specify a new release tag and copy-paste the link to Source into the setup.py file (do not forget to also increment the version number).
Uploading the distribution archives
The first thing to do is register an account on TestPyPI, which is a separate instance of the package index intended for testing and experimentation. To register an account, go to here and complete the steps on that page. You will also need to verify your email address before you’re able to upload any packages.
After registration, use twine to upload the distribution packages.
- Navigate to folder where you have al the files and package located.
- Run
python setup.py sdist
- Install Twine:
pip install twine
- Run twine:
twine upload --repository testpypi dist/*Upload successful
Installing newly uploaded package
pip install -i your-package-name
When you are ready to publish your package to PyPI after proper testing, run the following command:
twine upload dist/*
to publish it on PyPI.
NOTE:
Now for every future updates to your package:
- change the version number in setup.py file
- create tag/release in gitlab/github
- update the download_url in setup.py
- Run
python setup.py sdist
- Run
twine upload dist/*
- Run
pip install your-package --upgrade--> to see whether your changes worked.
AND IT'S DONE ...YAAAY !!
Visit PyPI and search for your package name. Now you can use your package just like any other python packages by installing it with pip i.e.
pip install your-package-name.
I hope this blog helped you understand how to publish your own Python Package easily.
Happy coding !!
|
https://blog.audarya.tech/publish-your-python-package
|
CC-MAIN-2022-27
|
refinedweb
| 745
| 57.77
|
351 [details]
Logs & screenshot attached.
Description:
Intelligence not displayed when trying to change the component or value of the android widget from the designer source.
Environment:
Xamarin for VS - 4.3.1.33
XA - 7.2.0.1
XI - 10.8.0.17
Steps to reproduce:
1. Create single view Android project.
2. Double click on Main.xaml file to open the designer.
3. Drag button on to the designer.
4. Click on Source tab to open the designer source code.
5. Try to add more values into the button code from source, enter "a" to open the intelligence & observe.
Actual: Intelligence not displayed for the widget value available.
Expected:
All the property should start with android widget & should displayed in intelligence window.
NOTE: Its working fine with Xamarin studio on MAC machine.
Please find the attached logs & screen shot for more detail.
Also find the expected screen-cast:
Created attachment 20456 [details]
DesignerLogs
Hi,
Issue is still reproducible in latest 15.1-RC build.
And also reproduce for Vs-2017 build 4.4.0.5
Attached the logs for more details.
@jeremie,
I am reverting back the status for this defect to "New" since requested logs are been provided.
Please check and let us know if there any more information you need on this defect.
Ok, it looks look it won't autocomplete the namespace, but once you have "android:" typed in or whatever prefix is being used that it'll autocomplete the rest.
I'll figure out why it is doing that.
Jacky, would you be able to go into the obj/Debug/Schemas/*/ and zip up and attach the xsd files. There should be about 4 files.
* -> it'll be some number. Probably 25? It doesn't really matter which number is there though.
Created attachment 21094 [details]
Attaching obj/Debug/Schemas/25.zip
@Stephen Attached 25.zip you asked for , Kindly check the attachment
Hello,
This bug is still reproducible for VS 2015 and VS 2017
Verified on build :-
Microsoft Visual Studio Enterprise 2015
Version 14.0.25431.01 Update 3
Microsoft .NET Framework
Version 4.6.01586
Xamarin 4.6.0.600 (8a9c886)
Xamarin.Android 7.3.99.38 (21d46f4)
Xamarin.iOS 10.11.0.144 (c3cecd5)
Build Information:-
Screencast link :-
Attached Logs
Created attachment 22255 [details]
IDE Logs for VS 2015
We do not control *when* the intellisense popup appears. If you write some text and the popup does not appear because you started typing while the caret was right beside the `/` character then that is normal Visual Studio behaviour and it is correct.
We are only concerned about the contents of the popup when it appears. In this case it looks like it does contain all the correct information when it appears, so this bug looks like it's validated-resolved.
If you think the bug still exists can you describe what the problem is?
Hi Alan,
Issue here is intellisense only work when user pass command as ctrl+space without any character written.
When user write any latter say "a", intellisense not working & doesn't show the recomadations which is perfectly fine with VSFM.
I also crosschecked with Android studio, its working fine.
Expected: intellisense should work with blank or some latter entered.
Screencast link for more info for the bug:
@Tammay unfortunately, I believe this is how it works in Visual Studio (for Windows). As far as I can tell this is because there are several namespaces. I don't know if it is because as soon as you type a character it views it as an invalid namespace? With it being a blank line everything is an option. If you type in "android:", "app:", "tools:", etc it'll provide a completion list again however without the namespace included.
Part of the reason you see this work in things like Visual Studio for Mac and Android Studio is because they do not use XSD files for their intellisense or validation support. With Visual Studio we are generating and regenerating the XSD files on the fly because of the dynamic nature of android layouts (ie, changes to +ids, strings, colors, other layouts, libraries and their resources, etc).
The hope for the future is a much better system, but for now we have to use XSD.
Marking it as Resolved because this is how the system works in VS. :(
As per Comment 13 , Intelligence only works with ctrl+space on blank line.
Also it works in case of user types "android:", "app:", "tools:", etc
As this is expected for Visual Studio, hence marking this bug as Verified.
Test Environment:
Microsoft Visual Studio Enterprise 2017 Preview Version 15.3 (26510.0-Preview) Preview VisualStudio.15.Preview/15.3.0-Preview+26510.0
Microsoft .NET Framework Version 4.6.01586
Xamarin 4.6.0.620
Xamarin.Android SDK 7.3.99.44
Xamarin.iOS and Xamarin.Mac SDK 10.11.0.144
|
https://xamarin.github.io/bugzilla-archives/53/53353/bug.html
|
CC-MAIN-2019-43
|
refinedweb
| 817
| 67.86
|
Hi,
You can reach me at: jeorgen at webworks dot se
I've been dabbling with perl to and fro since 93, when I switched to it from Icon for sifting through library data. Many of the programs I wrote then worked, although they contained big misconceptions about perl :-) I suppose that is one of perl's strengths.
I liked Icon though.
My favorite programming language is actually Hypertalk.
There is a page somewhere on the web on how to shoot yourself in the foot in different programming languages. For hypertalk it says:
"put bullet one of gun into foot of leg of you"
Looking it up on the WWW I realised it's:
"Put the first bullet of the gun into foot left of leg of you. Answer the result"
Charming.
On my spare time I'm the creator of euliberals.net, for those into European politics. If you choose to venture there, and you're from the US political namespace, please note that the term "liberal" does not carry the same meaning in Europe as in the US.
Mix in a bit of the US term "libertarian" there and you get the general direction. How much you mix in depends on what country in Europe :-)
...and I have to admit I did the site in Zope,.
|
http://www.perlmonks.org/index.pl?node_id=18720
|
CC-MAIN-2016-40
|
refinedweb
| 218
| 80.31
|
Welcome to the CodeGuru Forums.
VC++ and C++ Topics
Ask questions about Windows programming with Visual C++ and help others by answering their questions.
Please Help me I cant figure...
December 9th, 2017, 04:26 AM
MODERATED forums containing common questions and answers.
Ask or answer C and C++ questions not related to Visual C++. This includes Console programming, Linux programming, or general ANSI C++.
RSA Implementation
December 8th, 2017, 04:37 AM
Discuss Windows API related issues using C++ (and Visual C++). This is a non-MFC forum.
dll code, don't know how to...
December 5th, 2017, 06:48 AM
Discuss Managed C++ and .NET-specific questions related to C++.
how do i find decimal ?
Today, 04:25 AM
Share bugs and fixes for source code on this site. Also use this to share bugs/fixes in the various class libraries etc. of use to VC++ developers. This is NOT for general programming questions.
[RESOLVED] VS 2017 and...
August 21st, 2017, 03:52.
OpenGL program compilation...
November 23rd, 2017, 02:48 PM.
detecting removed network...
December 8th, 2017, 03:52 AM
Discussions on the development of drivers.
How delete all files and...
September 4th, 2017, 05:58 AM
C# Programming Topics
Post questions, answers, and comments about C#.
Question about namespace...
November 28th, 2017, 01:28 PM
Visual
Java programming topics. Discussions shared with Gamelan.com and JARS.com
Ask your Java programming question and help out others with theirs.
program and track a car on a...
Yesterday, 05:05 AM
Other
General
CodeGuru
We'll let you know about new CodeGuru developments and features here. Plus, you can give suggestions for improvements, point out bugs or problems, make enhancement requests, and vent your feelings about this site.
Recover an old submission
December 6th, 2017, 09.
SOFTWARE DEVELOPER-Supply...
November 28th, 2017, 03:41 AM.
About the latest windows...
June 16th, 2017, 09:40 AM
There are currently 1141 users online. 1 members and 1140 guests
Most users ever online was 9,762, May 13th, 2015 at 06:22 AM.
Welcome to our newest member, gnanangowthaman
|
http://forums.codeguru.com/forum.php?s=85a4534be41e979c9ccf85cfbdbad0ab
|
CC-MAIN-2017-51
|
refinedweb
| 348
| 62.75
|
Whatever application you are working on, when it reaches some size it is beneficial to split it into microservices, i.e. a couple of smaller ones. This allows you for better management of the resources in each, (usually) more transparent architecture and independent scaling of your components. Microservices are a hot topic nowadays, and if you want to know more about them, Martin Fowler wrote a great piece.
Microservices communicate with each other usually using an HTTP protocol, but there is a bunch of situations when simple HTTP calls are taking too long, or there are many steps that you need to take. In cases like this, you may need to use publish-subscribe pattern. In Google Cloud the most obvious option for that is Google Cloud Pub/Sub, but a very often overlooked feature for messaging between a bunch of App Engine services is task queues.
Task queues are a Google App Engine-specific option for performing some part of your work, usually one that takes some time to process or one that is not directly connected with the front-end application. If you are not familiar with pub/sub pattern, just think of them as of some messages being sent by one of the modules, and being received (and processed) by another.
If you are still not convinced, then let me mention one more benefit of task queues: they are usually way easier to setup, compared to whatever language or framework you are using. When you want to run some background jobs in, say, Django, the recommended way is to use Celery. This requires setting up some backend and additional configuration. In Java, you usually end up using some futures mechanism (I might be wrong here though, I usually ended up setting up a queue myself). Setting up a task queue in App Engine requires to add a single file in your default App Engine application.
Google App Engine offers two types of task queues. The first one is push queues, as in “push this message away from me”, and the other one is pull queues, as in “make this task available for pulling”. I’ll give some use cases for both types, and show how to use them.
Adding a task to a push queue is basically saying “I need this to be done”. I first came across them when writing an RSS aggregator. My use case was: “I have a bunch of URLs of feeds to get, I want them to be processed in a different module, one at a time, probably in parallel”. If you have a similar situation, i.e. you have a relatively large task to be done and you want it to be done somewhere else, go ahead and use push queues.
First, you need to create a push queue. In order to do that, in the default application of your project (I mean it, I spent couple of hours trying to figure out why it doesn’t work), you need to create
queue.yaml (or
queue.xml for Java) file, which looks like this:
queue: - name: my-push-queue rate: 1/s
This is a minimal queues definition file, which sets the name of the queue to
my-push-queue, in which the messages will be processed once every second. There are bunch of other options, which you can look up in the official documentation.
After creating the queue with the redeployment of the default application (or you can use
appcfg.py update_queues command), you are ready to start adding tasks to you queue. This is pretty similar to all the languages, for example in Python it looks like this:
from google.appengine.api import taskqueue def add_task(): task = taskqueue.add( queue_name = "my-push-queue", url = "/handle_task", target = "target_service", params = { 'param': value }) return task
And that’s it. The
target parameter is the name of the service you want to call and
url is the path to the handler. Once the task is added to the queue, it will be sent to the handler with the rate you had defined in the queue configuration.
There’s not much to add about push queues. There are some limits of course. The handlers need to be HTTP endpoints. By default the content is sent as
POST with the content encoded as form, but you can easily override it. Also, when using automatic scaling, the timeout for the task execution is 10 minutes, while when scaling manually, the timeout is 24 hours. The last thing worth mentioning is probably that the order of the delivery is not guaranteed and you shouldn’t be relying on that.
Pull queues are a bit more complicated type of the queues. Just like with the push queues, the service that adds a task says “I need this to be done”. Also, creating the queue is similar, you need to add
queue.yaml file to your default project:
queue: - name: my-pull-queue mode: pull
Even adding a task looks almost identical:
from google.appengine.api import taskqueue def add_task(): task = taskqueue.add( queue_name = "my-pull-queue", method = 'PULL', params = { 'param': value }) return task
The huge difference is how the tasks are being distributed. Instead of letting GAE take care of everything, you need take care of reading the tasks from your pull queue manually. This is done by leasing the tasks from the queue, which is saying “Hey, queue! Give me some tasks for some time, I’ll try processing them”. This is done like this:
from google.appengine.api import taskqueue def handle_tasks(): q = taskqueue.Queue('my-pull-queue') tasks = q.lease_tasks(3600, 100) # 100 tasks for an hour # Perform some work with the tasks here q.delete_tasks(tasks)
This last part, deleting the tasks, informs the queue that the tasks have been processed properly and the lease can end. If the tasks will not be deleted after an hour, the lease will end and the tasks will be picked up by other worker.
This approach requires a bit more design in you handler and doesn’t provide you the same convenient scaling mechanisms, but is way more powerful than using push queues. You don’t have to worry about your timeouts, as you set them on your own, the access management is more granular. Also, your handler doesn’t have to be in Google App Engine, or even in Google Cloud - you can utilize services somewhere else using REST API.
Both of the task queues allow you to distribute you work across different services, and both have their applications and tradeoffs. If you need to distribute a single, long-running task, try using push queues. When you publish many small tasks that can be batched, pull queues are your friends. For those use cases that don’t fit into any of these two, there’s always Google Cloud Pub/Sub - which deserves it’s own blog post.comments powered by Disqus
|
https://mhaligowski.github.io/blog/2017/02/06/task-queues.html
|
CC-MAIN-2017-30
|
refinedweb
| 1,155
| 70.63
|
Basic Python Coding¶
In this course you will learn basic programming in Python, but there is also excellent Python Language reference material available on the internet freely. You can download the book Dive Into Python or there are a host of Beginners Guides to Python on the Python Website. Follow the links to either:
depending on your level of current programming knowledge. The code sections below assume a knowledge of fundamental programming principles and mainly focus on Syntax specific to programming in Python.
Note: >>> represents a Python command line prompt.
Loops in Python¶
for Loop¶
The general format is:
for variable in sequence: #some commands #other commands after for loop
Note that the formatting (indents and new lines) governs the end of the for loop, whereas the start of the loop is the colon :.
Observe the loop below, which is similar to loops you will be using in the course. The variable i moves through the string list becoming each string in turn. The top section is the code in a .py file and the bottom section shows the output
#The following code demonstrates a list with strings ingredientslist = ["Rice","Water","Jelly"] for i in ingredientslist: print i print "No longer in the loop"
gives
Rice Water Jelly No longer in the loop
while Loop¶
These are similar to for loops except they continue to loop until the specified condition is no longer true. A while loop is not told to work through any specific sequence.
i = 3 while i <= 15: # some commands i = i + 1 # a command that will eventually end the loop is naturally required # other commands after while loop
For this specific simple while loop, it would have been better to do as a for loop but it demonstrates the syntax. while loops are useful if the number of iterations before the loop needs to end, is unknown.
The if statement¶
This is performed in much the same way as the loops above. The key identifier is the colon : to start the statements and the end of indentation to end it.
if j in testlist: # some commands elif j == 5: # some commands else: # some commands
Here it is shown that “elif” (else if) and “else” can also be used after an if statement. “else” can in fact be used after both the loops in the same way.
Array types in python¶
Lists¶
A list is simply a sequence of variables grouped together. The range function is often used to create lists of integers, with the general format of range(start,stop,step). The default for start is 0 and the default for step is 1.
>>> range(3,8) [3,4,5,6,7]
This is a list/sequence. As well as integers, lists can also have strings in them or a mix of integers, floats and strings. They can be created by a loop (as shown in the next section) or by explicit creation (below). Note that the print statement will display a string/variable/list/... to the user
>>> a = [5,8,"pt"] >>> print a [5,8,'pt'] >>> print a[0] 5
Tuples¶
Tuples are basically the same as lists, but with the important difference that they cannot be modified once they have been created. They are assigned by:
>>> x = (4,1,8,"string",[1,0],("j",4,"o"),14)
Tuples can have any type of number, strings, lists, other tuples, functions and objects, inside them. Also note that the first element in the tuple is numbered as element “zero”. Accessing this data is done by:
>>> x[0] 4 >>> x[3] "string"
Dictionaries¶
A Dictionary is a list of reference keys each with associated data, whereby the order does not affect the operation of the dictionary at all. With dictionaries, the keys are not consecutive integers (unlike lists), and instead could be integers, floats or strings. This will become clear:
>>> x = {} # creates a new empty dictionary - note the curly brackets denoting the creation of a dictionary >>> x[4] = "programming" # the string "programming" is added to the dictionary x, with "4" as it's reference >>> x["games"] = 12 >>> print x["games"] 12
In a dictionary, the reference keys and the stored values can be any type of input. New dictionary elements are added as they are created (with a list, you cannot access or write to a place in the list that exceeds the initially defined list dimensions).
costs = {"CHICKEN": 1.3, "BEEF": 0.8, "MUTTON": 12} print "Cost of Meats" for i in costs: print i print costs[i] costs["LAMB"] = 5 print "Updated Costs of Meats" for i in costs: print i print costs[i]
gives
Cost of Meats CHICKEN 1.3 MUTTON 12 BEEF 0.8 Updated Costs of Meats LAMB 5 CHICKEN 1.3 MUTTON 12 BEEF 0.8
In the above example, the dictionary is created using curly brackets and colons to represent the assignment of data to the dictionary keys. The variable i is assigned to each of the keys in turn (in the same way it would be for a list with
>>> for i in range(1,10)
). Then the dictionary is called with this key, and it returns the data stored under that key name. These types of for loops using dictionaries will be highly relevant in using PuLP to model LPs in this course.
List/Tuple/Dictionary Syntax Note¶
Note that the creation of a:
- list is done with square brackets [];
- tuple is done with round brackets and a comma (,);
- dictionary is done with parentheses{}.
After creation however, when accessing elements in the list/tuple/dictionary, the operation is always performed with square brackets (i.e a[3]?). If a was a list or tuple, this would return the fourth element. If a was a dictionary it would return the data stored with a reference key of 3.
List Comprehensions¶
Python supports List Comprehensions which are a fast and concise way to create lists without using multiple lines. They are easily understandable when simple, and you will be using them in your code for this course.
>>> a = [i for i in range(5)] >>> a [0, 1, 2, 3, 4]
This statement above will create the list [0,1,2,3,4] and assign it to the variable “a”.
>>> odds = [i for i in range(25) if i%2==1] >>> odds [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23]
This statement above uses the if statement and the modulus operator(%) so that only odd numbers are included in the list: [1,3,5,...,19,21,23]. (Note: The modulus operator calculates the remainder from an integer division.)
>>> fifths = [i for i in range(25) if i%5==0] >>> fifths [0, 5, 10, 15, 20]
This will create a list with every fifth value in it [0,5,10,15,20]. Existing lists can also be used in the creation of new lists below:
>>> a = [i for i in range(25) if (i in odds and i not in fifths)]
Note that this could also have been done in one step from scratch:
>>> a = [i for i in range(25) if (i%2==1 and i%5==0)]
For a challenge you can try creating
- a list of prime numbers up to 100, and
- a list of all “perfect” numbers.
More List Comprehensions Examples
Wikipedia: Perfect Numbers.
Other important language features¶
Commenting in Python¶
Commenting at the top of a file is done using “”” to start and to end the comment section. Commenting done throughout the code is done using the hash # symbol at the start of the line.
Import Statement¶
At the top of all your Python coding that you intend to use PuLP to model, you will need the import statement. This statement makes the contents of another module (file of program code) available in the module you are currently writing i.e. functions and values defined in pulp.py that you will be required to call, become usable. In this course you will use:
>>> from pulp import *
The asterisk represents that you are importing all names from the module of pulp. Calling a function defined in pulp.py now can be done as if they were defined in your own module.
Functions¶
Functions in Python are defined by: (def is short for define)
def name(inputparameter1, inputparameter2, . . .): #function body
For a real example, note that if inputs are assigned a value in the function definition, that is the default value, and will be used only if no other value is passed in. The order of the input parameters (in the definition) does not matter at all, as long as when the function is called, the positional parameters are entered in the corresponding order. If keywords are used the order of the parameters does not matter at all:
def string_appender(head='begin', tail='end', end_message='EOL'): result = head + tail + end_message return result
>>> string_appender('newbegin', end_message = 'StringOver') newbeginendStringOver
In the above example, the output from the function call is printed. The default value for head is ‘begin’, but instead the input of ‘newbegin’ was used. The default value for tail of ‘end’ was used. And the input value of endmessage was used. Note that end_message must be specified as a key word argument as no value is given for tail
Classes¶
To demonstrate how classes work in Python, look at the following class structure.
The class name is Pattern, and it contains several class variables which are relevant to any instance of the Pattern class (i.e a Pattern). The functions are
- __init__
- function which creates an instance of the Pattern class and assigns the attributes of name and lengthsdict using self.
- __str__
- function defines what to return if the class instance is printed.
- trim
- function acts like any normal function, except as with all class functions, self must be in the input brackets.
class Pattern: """ Information on a specific pattern in the SpongeRoll Problem """ cost = 1 trimValue = 0.04 totalRollLength = 20 lenOpts = [5, 7, 9] def __init__(self,name,lengths = None): self.name = name self.lengthsdict = dict(zip(self.lenOpts,lengths)) def __str__(self): return self.name def trim(self): return Pattern.totalRollLength - sum([int(i)*self.lengthsdict[i] for i in self.lengthsdict])
This class could be used as follows:
>>> Pattern.cost # The class attributes can be accessed without making an instance of the class 1 >>> a = Pattern("PatternA",[1,0,1]) >>> a.cost # a is now an instance of the Pattern class and is associated with Pattern class variables 1 >>> print a # This calls the Pattern.__str__() function "PatternA" >>> a.trim() # This calls the Pattern.trim() function. Note that no input is required.
The self in the function definition is an implied input
|
http://pythonhosted.org/PuLP/main/basic_python_coding.html
|
CC-MAIN-2018-05
|
refinedweb
| 1,780
| 69.21
|
Equivalent functionality is directly in Tcl from 8.6.2 onwards as string cat.Obsolete discussion
Note: This is a renaming of string append.This extends the command string to accept a new sub-command concat:
if {[catch {string concat}]} then { rename string STRING_ORIGINAL proc string {cmd args} { switch -regexp -- $cmd { ^con(c(a(t)?)?)?$ { uplevel [list join $args {}] } default { if {[catch { set result\ [uplevel [list STRING_ORIGINAL $cmd] $args] } err]} then { return -code error\ [STRING_ORIGINAL map\ [list\ STRING_ORIGINAL string\ ", compare, equal,"\ ", compare, concat, equal,"]\ $err] } else { set result } } } } }Test if it does as expected:
% string concat hully gully hullygully %Yeah. Check an original sub-cmd:
% string match -nocase hully gully 0 %Works. Great. Now some erraneous situation:
% string match -nocase hully gully bully wrong # args: should be "string match ?-nocase? pattern string" %The err msg hides the STRING_ORIGINAL and shows up string instead. Great again. Now another situation:
% string what'dya'mean? bad option "what'dya'mean?": must be bytelength, compare, concat, equal, (...) %The err msg shows up all sub-cmds inclusive concat. Yep. That's it. Errp.
AMG: Why is there no [string concat] in the core? It would be helpful for concatenating strings that are quoted in different ways. For example:
string concat "dict with Sproc-[list $name]" {([list [namespace tail [lindex [info level 0] 0]]])} \{$script\}is quite a bit easier to read (in my opinion) than:
"dict with Sproc-[list $name](\[list \[namespace tail \[lindex \[info level 0\] 0\]\]\]) {$script}"Lars H: There probably isn't any particular reason. TclX provides this as the cconcat command. string concat is to some extent emulatable by combining join and list, like
join [list "dict with Sproc-[list $name]" {([list [namespace tail [lindex [info level 0] 0]]])} \{$script\}] ""or more recently using apply
apply {args {join $args ""} ::} "dict with Sproc-[list $name]" {([list [namespace tail [lindex [info level 0] 0]]])} \{$script\}so maybe that worked well enough for those with the power to add it. I know I have on occation missed it, though.
LV Does string concat have functionality different from just using the two strings together?
set str1 hully set str2 gully set str3 $str1$str2or even using append?
set str3 hully append str3 gullyAMG: As far as I can tell, no. But it avoids creating temporary variables.LV I only used the variables for illustration. You can do the automatic concatenation in most contexts. The trickiest point would be when mixing a list and a string:
set a "[list 1 [list 2 3 4] [list 5 6 [list 7 8 9]]] are the first nine digits" 1 {2 3 4} {5 6 {7 8 9}} are the first nine digitsIf one of the strings is going to be a list, then you have to make use of join. However, otherwise, you can do things like
puts "string1String2" set b "string1[proc2]"and so forth.
Stu 2009-01-26
proc strJoin {args} { return [append {} {*}$args] } set map [namespace ensemble configure string -map] dict append map join strJoin namespace ensemble configure string -map $mapAMG: Hmm, [join] could also be used. This way no dummy variable is created.
proc strJoin {args} {join $args ""}
|
http://wiki.tcl.tk/16206
|
CC-MAIN-2017-04
|
refinedweb
| 524
| 62.68
|
Get the highlights in your inbox every week.
Containers vs. VMs, Istio in production, and more industry news | Opensource.com
Containers vs. VMs, Istio in production, and more industry news.
Tech adoption in the cloud native world: Containers and more
- Adoption of containers in production rose to from 73% in 2018 to 84% in 2019. Among this group, those running at least 250 containers rose from 46% in 2018 to 58% in 2019. From 2017 to 2019, the number of respondents with more than 50 machines (physical or virtual) in their fleet rose from 77% in 2017 to 81% in 2019.
- Implication: Container adoption appears to have mitigated the growth of VMs that need to be managed. However, be wary of claims that the raw number of machines being managed will decline.
The impact: It intuitively makes sense that virtual machine growth would slow down as container use grows; there are lots of containers being deployed inside VMs to take advantage of the best features of both, and lots of apps that won't be containerized any time soon (looking at you legacy enterprise monoliths).
Everything we learned running Istio in production
At HelloFresh we organize our teams into squads and tribes. Each tribe has their own Kubernetes namespace. As mentioned above, we enabled sidecar injection namespace by namespace then application by application. Before enabling applications for Istio we held workshops so that squads understood the changes happening to their application. Since we employ the model of “you build it, you own it”, this allows teams to understand traffic flows when troubleshooting. Not only that, it also raised the knowledge bar within the company. We also created Istio related OKR’s to track our progress and reach our Istio adoption goals.
The impact: The parts of technology adoption that aren't technology adoption are ignored at your own peril.
Aether: the first open source edge cloud platform
Aether is bringing together projects that have been under development and operating in their own sandbox, and under that framework ONF is trying to support a diversity of edge services on a converged platform, Sloane explained. ONF’s various projects will remain separate and continue to be consumable separately, but Aether is its attempt to bring multiple capabilities together to simplify private edge cloud operations for enterprises.
"We think we’re creating a new collaborative place where the industry and community can come together to help drive some maybe consolidation and critical mass behind a common platform that can then help common functionality proliferate in these edge clouds," he said.
The impact: The problems being solved with technology today are too complex to be solved with a single technology. The business problems being solved on top of that require focus on the truly, value-adding. Taken together, businesses need to find ways to collaborate on their shared needs and compete on what makes them unique in the market. You couldn't find a better way to do that than open source.
Women in cloud careers are challenging the narrative
"As cloud is a relatively new technology, my experience of being a 'woman in tech' may not be typical, as the cloud industry is extremely diverse," Yordanova says. "In fact, my team has an equal gender split with a real mix of personalities, cultures and strengths from people who grew up with this technology."
The impact: One thing I like to think about is the idea of leapfrogging; that you might be able to skip a certain step or stage in a process because the circumstance that caused its existence in the first place no longer applies. The cloud era didn't have as long a period with static stereotypes of who made it and who it was for, so maybe it carries less of the baggage of some previous generations of technology?
How StarlingX shines in the starry sky of open source projects in China
Our team is in China, so one of our missions is to help the Chinese community to develop the software, contribute code, documentation, and more. Most of the StarlingX project meetings are held late at night in China, so the presence and participation for the Chinese community members are quite challenging. To overcome these obstacles, together with other community members (like friends in 99cloud) in China, we made some initiatives, such as engaging with other Chinese community members at the meet-ups, hands-on workshops ad-hoc tech meetings in Chinese, translating some documents to Chinese, and continuously interacting in WeChat groups (just like a 24/7 on-call services for and by everyone)
The impact: As Chinese contributions to open source projects continue to grow this seems like a situation that is likely to reverse, or at least equalize. It doesn't really make sense that "learn English" should be a pre-requisite to participating in the open source development process.
I hope you enjoyed this list and come back next week for more open source community, market, and industry trends.
Comment now
|
https://opensource.com/article/20/3/survey-istio-industry-news
|
CC-MAIN-2020-24
|
refinedweb
| 836
| 55.47
|
I could resort to copy and paste from the e-book, but would rather have all the code as C++, Java, etc., files. For example:
Example 6-1 contains a sample C++ solution. Note that vertex color information is used only within the dfsVisit methods.
Example 6-1. Depth-First Search implementation
#include "dfs.h"
// visit a vertex, u, in the graph and update information
void dfsVisit (Graph const &graph, int u, /* in */
etc.
but dfs.h is not included anywhere?
Help get this topic noticed by sharing it on
Twitter,
Facebook, or email.
Reply
Algorithms in a Nutshell: Is the code in the book available for download? Or will it be when the final version is published?
Algorithms in a Nutshell: Is the code in the book available for download? Or will it be when the final version is published?
- Chris Olson (Employee) October 30, 2015 18:30Hi Wayne,
The example code isn't available for that book yet. It should become available after the final version of the book is released.
Best regards,
Chris Olson
O'Reilly Book SupportComment
- Thanks! I'll try to be as patient as I can...
|
http://support.oreilly.com/oreilly/topics/algorithms-in-a-nutshell-is-the-code-in-the-book-available-for-download-or-will-it-be-when-the-final
|
CC-MAIN-2017-39
|
refinedweb
| 192
| 67.65
|
Socialwg/2015-01-27-minutes
Contents
27 Jan 2015
Attendees
- Present
- eprodrom, jasnell, KevinMarks, aaronpk, Arnaud, Ann, wilkie, bblfish, bret, rhiaro, Sandro, Lloyd_Fassett, ShaneHudson, hhalpin, tantek, elf-pavlik, Harry
- Regrets
Chair
- eprodrom
- Scribe
- wilkie
Log
<trackbot> Date: 27 January 2015
<eprodrom> Hmm
<KevinMarks> hm, did I dial in too early?
<eprodrom> Apparently by like 30 seconds
<Arnaud> hi there
<eprodrom> Hmm
<eprodrom> Did I miss something in my incantation?
<eprodrom> I have 13:01 on my clock
<mattl> hey... not calling in, but GNU social got a ton more users thanks to Twitter banning a user.
<KevinMarks> hm. hearing beeps on the call
<eprodrom> AH
<eprodrom> There we go
<eprodrom> Thanks Arnaud
<mattl> eprodrom: if you have the db of the old StatusNet wiki, it would be great to get a copy :)
mattl: ... what user?
<Arnaud> it should have been unnecessary
<eprodrom> mattl: OK, I can try and get that to you
<aaronpk> whoa now there's music
<Arnaud> but for some reason it doesn't happen automatically
<Arnaud> already reported to sysreq
<eprodrom> Great
<jasnell> aw, I was just starting to dance around a little
<mattl> wilkie:
do we have a scribe? I can scribe.
<mattl> eprodrom: thanks man :)
<KevinMarks> we need adactio to come along and play us some folk
<wilkie> yay
<aaronpk> wilkie wins by 3 seconds
<Arnaud> fyi: I'm officially on paternity leave for 2 weeks :-)
<eprodrom> scribe: wilkie
<ShaneHudson> I can't even hear ringing
<Arnaud> yes, really!
<wilkie> rude: (
<ShaneHudson> Arnaud: Congrats! :)
<AnnB> holy smokes
<jasnell> Arnaud++
<Loqi> Arnaud has 2 karma
<bret> +1
<jasnell> +1
<eprodrom> PROPOSED: Alec Le Hors becomes youngest honorary member
<bblfish> :-)
<wilkie> :)
<wilkie> +1
<KevinMarks> +1
<bblfish> +1
<bblfish> we have a girl here
<bblfish> she is 4 months old
<eprodrom> RESOLVED: extend honorary membership to youngest ever member
eprodrom: if we are ready to go, unless we are waiting for somebody in particular. tantek will be joining shortly
<eprodrom>
eprodrom: first step: approval of our minutes from last week
Approval of Minutes From Last Week
eprodrom: any objections to approving these minutes?
resolved: minutes approved from last week
eprodrom: next meeting is Feb. 3rd, there's no reason to not have this meeting unless any objections?
Actions and Issues
eprodrom: we have a few that have sitting on the queue for a while
... one that came up last week was the json-ld context for the activity streams namespace. not sure where that landed
... don't see harry on the call
jasnell: there is some magic incantation that needs to be done to serve the json-ld properly. sandro may offer some insight.
... it is queued up and part of the publication of the draft, but I need to follow up with the team
eprodrom: can harry help out?
jasnell: harry doesn't know the incantation
eprodrom: another action on the list is to look at social apis
<Arnaud> he said he would do it next week
eprodrom: we should add talking about this to next week so we can mark that one off
jasnell: tantek is going to go through the microformat examples, there's a lot, he may not have done all of them
eprodrom: we'll leave it open
jasnell: (wrt speaking out to people) still getting a list and reaching out to people, some may need IE status
eprodrom: other open actions... "archiving osf blog posts" that may be open for some time
jasnell: the archives are available, we just need to have someone volunteer to do that work and have a place set up to put it
eprodrom: I went to the social IG call last week to discuss the process and present social apis to the IG. it went well
<harry> Note that the archives are available as a SQL dump from drupal
eprodrom: a very open discussion about the process that seemed helpful and there was a general agreement and approval of what we've done by the IG
<harry> so someone would have to 1) reset-up drupal and 2) snapshot the blogs as HTML.
eprodrom: one thing that did come up is that most of the APIs we reviewed were primarily US focused and it was noted we should look at networks in other parts of the world
<KevinMarks> vkontakte?
eprodrom: I took that as an action. my ability to navigate documentation in Russian and Chinese is slim, but I'll give it a good effort.
... the idea was to see if there were significant patterns in these APIs not found in Western social networks
<harry> Note that Jeff is going to discuss Social with Weibo in two weeks
AnnB: yes, I wondered if there is a difference between these networks. We do have members who are in China, only a few in Russia, but some may be recommended [to help]
<KevinMarks> historically, several of them adopted opensocial and had some mapping; the differences were often about payment
eprodrom: between a few members, most of these countries are covered. I feel like I can look at the bottom and read through the documentation so I could collaborate
<eprodrom>
<ShaneHudson> Worth looking at Orkut?
<ShaneHudson> (defunct I know)
eprodrom: here is the list. part of this may be to reach out to these organizations.
... any other issues we have not captured? any actions we should cover?
<harry> Note that minor HTML issues have delayed publication of AS 2.0 till Thursday.
eprodrom: new stuff for the tracker?
<tantek> Tracker:
eprodrom: time to move on to the next agenda item
<KevinMarks> orkut's api was opensocial
Review List of Requirements
<AnnB> hmm, Harry .. re: Weibo .. that's interesting
eprodrom: we are tasked with coming up with a social api. the process we are following to do so is the following
<eprodrom>
<harry> yes, its a f2f meeting, we'll see what happens
eprodrom: we identified a number of APIs throughout the web and we've looked at what they do
... we've looked at twitter, facebook, etc and open source pump.io, etc and some from non-specific standardization e.g. linked data platform
... we've covered quite a bit and our next step is gathering from these multiple apis and coming up with a set of requirements for our API
... we've talked about them a lot and documenting them online, but the time is rapidly arriving when we need to decide what these requirements are to move forward with a candidate proposal
... the idea is if we can approve a list of requirements, then we can start soliciting proposals and can measure the quality of the proposals based on those requirements
<eprodrom>
eprodrom: we had a list of requirements earlier and should be updated and we should discuss if these requirements are good for upcoming proposals
... one thing we can do is say "great, these are fine" and move on, or we can look at these and rewrite or elaborate on them further
... my goal is to move the process further
<Zakim> tantek, you wanted to suggest a simpler approach to API *requirements*, based on a previous group resolution, vs. "nice to haves"
eprodrom: it would be fantastic to have looked at proposals before the face to face
... tantek?
<harry> yes can hear you
<bblfish> +1 can hear you
tantek: there are many ways to pick features and requirements.
... on the lower end of the spectrum to just decide politically: go through a list and vote on each point
... we've taken a slightly better approach so far; we've researched existing APIs
<bret> drowed out by static
tantek: what would be better than that would be to annotate which requirements belong to what existing examples and what don't
... right now we don't know which requirements are based on an example or not
... there is still a better method: basing requirements on use cases
<bblfish> ah where are the use cases?
<tantek>
tantek: looking at use cases we have already chosen to adopt there are only one so far. the only one we have resolved to adopt is SWAT0
... therefore, all requirements should be adopted to follow ONLY what SWAT0 needs and all else is pushed to nice-to-have
eprodrom: I understand the point, but that is not the procedure we agreed upon and have been doing for the last many weeks
... we have done reviews to take requirements from those reviews. if SWAT0 is all we want, we could have saved ourselves a lot of work
... SWAT0 is not intended to be a social API usecase, it is a federation usecase.
<wilkie> where are the usecases from the IG??
<bblfish> was there not an IG doing use cases?
tantek: it's the only usecase we have
... there is nothing wrong with research and documentation, but the current set of requirements is too big for a first draft
<wilkie> The IG though... they are doing use-cases. that was the point of us NOT DOING THEM
bblfish: I like swat0, but don't we have an IG that builds up use cases? some kind of community group?
<tantek> no the point is we only one have use case WE HAVE APPROVED
bblfish: I don't really think, if I look at the requirements, I don't think they are difficult to do.
<AnnB> yes, socialIG ... has lots of use cases, just not in common template format
<harry> The IG had a slowdown due to chair changing.
<tantek> there are lots of use-cases. there is only one use case that Social Web WG has formally adopted.
<tantek> and that is SWAT0
<jasnell> running through the list of requirements... I've gone through and checked off the cases that are implemented by IBM's connections product (shipping currently)...
<tantek> I don't believe any claims of "don't think they are difficult" unless you have it already running
<tantek> e.g. on your own website
bblfish: the idea is to have an API that has all of them, and it may not be necessary for the WG to list all cases, but seeing all features together would be nice
<jasnell> would recommend that others match the requirements up against their existing implementations as well
bblfish: you can add audio, video, all kinds of things, in similar fashion
<tantek> with all due respect, I don't think anyone is qualified to say something is "easy" unless they've already *SHIPPED* it, e.g. on their own website
<sandro> wilkie, this is harry
<Loqi> sandro: ben_thatmustbeme left you a message on 1/20 at 12:26pm: i'll be co-organizing IWC Cambridge 2015, can you confirm that we have a venue for those dates? 2015-03-19/20?
<jasnell> the requirements list can be simplified and achieve the same result
harry: tantek is saying take the minimal use case agreed upon and add to that, and evan is about developing the list of potental requirements that we can shave down
... the real issue is that we don't have a draft and it is really hard to whiteboard an API draft from scratch, which is why we have done that research
<AnnB> Social IG use cases:
<KevinMarks> instead of the superset, choose the interesection
<bblfish> tantek. You can ask 9 implementations from the LDP group, to work out what easy is . I have built one by myself, so if one person can get implement it, that makes it easy. That's what I am basing my statement on.
harry: it, as a superset, is quite big. to resolve the tension, if somebody wants to whiteboard the draft and looks at the requirements we have made progress and we can say "this is not necessary for swat0" or "hey I'll need this for X"
<AnnB> with most focus so far on Profile use cases:
<tantek> bblfish - you were going to make all that work on your own site? have you?
<tantek> I don't believe the 9 implementations report in the context of Social Web WG
harry: my proposal is to let evan or whomever else to work on this, do a draft of it, and let tantek criticise it
<AnnB> mostly only defined in "scenarios" thus far:
<sandro> ben_thatmust, yes, confirmed
<elf-pavlik> +1 harry
<bblfish> yes
<harry> They in other words, let's just whiteboard something, as driven by SWAT0 and Evan's empirical work
<Zakim> tantek, you wanted to point out the process we agreed on does not specify *HOW* to "Assemble functional requirements of a social API"
<bblfish> there are pieces not implemented, but that does not make them difficult tantek to implement. It just requires agreeement
<harry> and then we can add use-cases as needed.
tantek: a point to harry's claim the requirements are easy: I'm going to say if you haven't shipped the requirements, you can't say it is easy
... I won't believe you if you haven't shipped
<tantek>
<jasnell> getting echo
tantek: if you look at the process we agreed on (pasted the url) the first step is the requirements of the social api, and I am suggesting a concrete way of doing that:
<harry> KevinMarks, ping or just type "Zakim, unmute me"
<KevinMarks> I'm already muted on my end
<KevinMarks> hm
<aaronpk> KevinMarks: that happened to me too. not sure how.
eprodrom: another mechanism we could use is to formalize
<KevinMarks> maybe gvoice is using the wrong mic
eprodrom: if we ask the IG to do it, what kind of timeframe would it take
... if we cannot go forward without the IG making those use cases, we'll need to ask them what their timeframe is or we do them internally in the WG
... another option is to take what we have and then look just at what is needed for SWAT0 and everything else is nice-to-have
... to be frank, SWAT0 is not a social api use-case
... for me, I don't think it is sufficient social api nor reaches the minimum social networking we expect from a social api
<jasnell> Here's an example of a comprehensive existing social api that implements quite a lot of these requirements:
AnnB: in regard to the usecases in the IG, which I am chairing, we have many scenarios and we have maybe too much.
... question I have for the WG is "what would you want to see and what format would it be in" what would be the most useful way to give you [the usecases]?
<elf-pavlik> +1 AnnB
<bblfish> AnnB: need to read this now :-) Looks good.
eprodrom: I think the use cases you posted are very detailed and there's a lot in here, but I think what we want is a checklist we can compare a proposal against
... "this proposal doesn't have a way to post content, so this isn't good for this use case"
<tantek> I actually don't want a checklist - because that's again likely political rather than user-based
AnnB: checklist of what
<tantek> AnnB, for each use-case, we need a BRIEF summary of the user-interactions.
AnnB: I think the IG needs guidance on what is useful
<tantek> just like SWAT0 has
<sandro>
<eprodrom>
<tantek> AnnB look at how brief the summary is here:
<tantek> so that's the request back to the IG
<tantek> all use-cases should have a *brief* user-scenario at the top
eprodrom: my concern is if we do these whole user scenarios for this kind of API requirements I think that is months of work
<tantek> just like that
AnnB: I agree. I think that's where we are stuck and need a new direction
eprodrom: not sure if we can do briefer ones or if there is another way to do this. if we hold out for full user scenarios it is unlikely we can do things promptly.
<KevinMarks> that's backwards, evan
<AnnB> thanks Tantek
<Zakim> tantek, you wanted to note that IG function is to *provide* use-cases, not *approve*. The Social Web WG must approve use-cases explicitly. and to also note that I specifically said
<KevinMarks> defining and implementing things with no use is months of work
tantek: I think I answered Ann's question on what to provide. I do agree that IG use cases are detailed and lengthy, which is good, they need a summary of steps. much like SWAT0 that fits in a tweet
... goes back to the IG: a summary of steps is what we want
<ShaneHudson> I do really like how simple SWAT0 is
tantek: I also agree with concerns about not a complete API and that it may take months of work to come up with usecases.
... this is why we should not wait for the IG usecases but just immediately go forward with even just a small draft with the usecases we have and extend with new cases later
<KevinMarks> the assumption that completeness is necessary is how we end in the weeds
tantek: so when people come and say "I need this feature" they can make a claim upon the draft we have
<elf-pavlik> i'll try to write something on importance of *extensibility* for this API, this way we just provide clear path to add support for all kind of requirements which will come up down the road!
tantek: my point is that I do believe we can come up with small incremental usecases if we need to. we don't need to wait.
... the IG does the work, but does not approve them. the WG approves and accepts them one at a time.
<tantek> zakim, mute me
sandro: one more vote for tweetable scenarios
<tantek> sandro: definitely a vote for tweetable scenarios
sandro: even if we start small with SWAT0, it is hard to go from something that expresses SWAT0 to something larger.
<KevinMarks> if it's in your head, you should be abel to write use cases for it
sandro: I think it is valuable to have a larger roadmap in mind.
<AnnB> distilling info to small bits IS necessary (re: suggestions for Social IG) .... and takes time to do! :-)
<tantek> sandro: "maybe we don't all build that all at once"
<tantek> agreed
sandro: evan, what process do you want for editing
<tantek> "don't build that all at once" = minimal requirements at first, and grow (build) more incrementally
eprodrom: the wiki is the next step
<AnnB> time better spend, though, than moving our scenarios to common template format
<tantek> I think I'm agreed with sandro
<jasnell> I am muted. no idea why it's saying me
eprodrom: look at each of the apis we reviewed and pull out relevant parts. lots of work and hopefully we have volunteers.
... I think adding questions on discussion page or mailing list are good
<bret> jasnell, local mute dosnt get everything, only trust Zakim mute
eprodrom: I'd like to point out that maybe 60% of these requirements are covered by the open social activity streams api
<tantek> I greatly prefer "collection" or "set" for a user-facing "thing" that users put other things into
<tantek> rather than "container"
<AnnB> why?
<tantek> "container" sounds too abstract / programmery
<harry> I think my proposal is we let Evan and whoever else is interested draft an API that fits at least SWAT0
eprodrom: it's really when we get down to endpoints that aren't those 5 major endpoints that it gets strange
<AnnB> aha
<bret> collection is a flickr term
harry: we are at an impasse. tantek's fear is legitimate and we shouldn't have a monster api with no implementors.
<KevinMarks> container is also an opensocial term of art
<tantek> jasnell - and I say users do care, and thus it matters
harry: at the same time, evan's fear that SWAT0 is not sufficient is legitimate
<jasnell> tantek: and I'm saying that right now's it's not critically important that we decide what to call it.
<tantek> jasnell - if you don't care, then change it to "collection"
harry: so, let's make a draft with requirements, and mark those that are good for swat0, mark the others as such, and use that as a place for discussion
<tantek> jasnell from our experience in indiewebcamp - that term resonates / explains much better
harry: we need a place to add/subtract features and discuss these features
<tantek> jasnell: see
<KevinMarks> i think the noise is on your end , elf-pavlik
<bret> can we mute Harry?
<AnnB> it's helpful to start with SOMEthing, which gives something to focus on, discuss, edit, improve
bblfish: I think the problem that things that seem complicated may in fact be simple
<tantek> I'll point out that "SWAT0" seeming "light" is flawed, because we don't even have any SWAT0 interop yet.
<tantek> again this is the same problem as the use of "easy"
<harry> Yes, SWAT0 is quite large
<tantek> don't say something is "light" or "simple" unless you've shipped it.
<harry> so we should just clearly demarcate in any API what parts are necessary for SWAT0 and what isn't, as tantek said
bblfish: I think one or many could present how they would do things with a base api and that would allow us to make a case that one way is simpler than another
<tantek> harry - exactly - thus we start the requirements bar at "necessary for SWAT0"
<tantek> and then we list more requirements later
<harry> yes, so I think we agree :)
<tantek> or rather *candidate* requirements
<tantek> they aren't *actual* requirements, because they're not *required*
bblfish: in a usecase we could specify that is the criteria of what should be allowed
<harry> I'm just suggesting that if Evan or someone else is drafting an API, they can go with functional requirements greater than SWAT0, just clearly mark those
<harry> should be pretty straightforward, i.e. contacts are needed for SWAT0
<elf-pavlik> +1 URI opacity!
bblfish: for instance URI opaqueness to work with certain criteria of privacy etc
<tantek> I agree to large extent re: use URLs and follow-your-nose discoverability that bblfish is mentioning
bblfish: this is how we can foster a system that may grow perhaps way beyond the scope of what this group is capable of specifying explicitly
eprodrom: I think that is helpful
... one of the requirements in this list is follow-your-nose semantics
... most of the APIs we reviewed are single implementation APIs which don't have those semantics. but this is a question for another day
<elf-pavlik> most (all) APIs which we reviewed don't aim at interoperability and extensibility
<bblfish> true: but most of the apis are centralised, so are not at that level examples of what the social web should be :-)
eprodrom: I think we need to postpone proposing these requirements at this moment
<Zakim> tantek, you wanted to discuss flaws in existing APIs, lack of URLs, lack of follow-your-nose, too many special APIs just copy/pasted for different types
eprodrom: we need to talk about activity streams a little bit
tantek: I want to make a point to agree with bblfish to use URLs and follow-your-nose ideas and use a more minimal API
... I agree with evan that many services do not have that follow-your-nose idea and it would be horrible to implement a similar system
eprodrom: I'll put APIs and follow-your-nose up for next week
<sandro> +1 tantek avoiding replicating unnecessary complexity
Activity Streams 2.0
eprodrom: I want jasnell and harry to talk about where we are with AS2.0 and the next version of the working draft
jasnell: based on the current process, hopefully draft will be published thursday
... validation errors and such caused delay. should be good though.
... we are trying to get the context documents served up with the magic incantation sometime after
harry: the problem is when there is an html error in a document, no matter how small, the webmaster will push back and we have to fix it before we can publish. that's how the w3c process works.
<KevinMarks> so w3c is less tolerant of html errors than w3c specs?
harry: there may be a process change that would make this process easier, but things should be set for thursday unless the webmaster finds something else
eprodrom: that's good news. very exciting.
<harry> yes, and no broken links :)
<bblfish> btw, we did not cover the testing of the current activities stream 2.0
<harry> jasnell, stay in IRC and I'll check with webmaster to see if everything is OK post-meeting
<jasnell> harry: +1
eprodrom: we will copy the agenda item for requirements to next week's call
<AnnB> thanks for good chairing, Evan!
<AnnB> and for scribing, Wilkie!
<bret> ty bye
eprodrom: thank you. appreciate your time. talk to you next week
<Arnaud> thanks
<elf-pavlik> thanks eprodrom wilkie !
<aaronpk> wilkie++ for scribing!
<Loqi> wilkie has 5 karma
<elf-pavlik> eprodrom++
<Loqi> eprodrom has 1 karma
<elf-pavlik> wilkie++
<Loqi> wilkie has 6 karma
<Arnaud> don't forget to get rrsagent to create the minutes
<eprodrom> trackbot, end meeting
<Arnaud> thanks
Summary of Action Items
None.
|
https://www.w3.org/wiki/Socialwg/2015-01-27-minutes
|
CC-MAIN-2018-30
|
refinedweb
| 4,209
| 60.38
|
Authors: Émilien Kia, Vincent Quint, Irène Vatton -- INRIA Rhône-Alpes
Version: 1.0 - Date: 2009-12-15
Abstract.
Contents
Most popular XML document formats used on the web, such as XHTML or SVG, are very flexible: they allow many different types of documents to be represented. This is an advantage in a wide space such as the Web, as a broad range of documents can be handled consistently. XHTML, for instance, is used to represent not only traditional Web pages, but also complex technical documents, sophisticated e-commerce forms or rich media slides, and all these documents can be accessed with a single browser. But this flexibility makes document authoring a complex task. When producing a specific type of document, an author is faced with all the possibilities provided by XHTML, and she has to make a number of difficult decisions. If multiple similar documents have to be produced consistently, for a particular use or for some specific application, authors have to make a consistent use of the XHTML document format, which has proven to be very difficult.
XTiger (eXtensible Templates for Interactive Guided Editing of Resources) tackles this problem by defining how the document format (XHTML, for instance) has to be used for representing a certain type of document. To do so, XTiger relies on the notion of a template. A template is a skeleton representing a given type of document, expressed in the format of the final documents to be produced (XHTML, for instance). The format of the final documents is called the target language and must be an XML language. The skeleton contains some statements, expressed in the XTiger language, that specify how this minimal document can evolve and grow, while keeping in line with the intended type of the final documents. Some parts of the template may be frozen, if they have to appear as is in the final document. Some parts may be modified when producing the final document, some others may be added either freely or under some constraints. It is the role of the XTiger language to specify these possibilities and constraints.
When talking about XTiger, it is important to make a distinction between two kinds of documents: a template and its instances. A template is the skeleton presented above, containing XTiger elements and defining a certain type of document. It is the seed used to produce a series of documents, called instances, that are derived from the template by following the statements expressed by the embedded XTiger elements. In the rest of the paper, we use the term instance instead of final document.
The statements expressed by XTiger elements are supposed to be interpreted by a document authoring tool. Starting from a template, the tool helps the user to follow the XTiger statements, thus ensuring that the instance being edited will stick to the document type specified by the template.
XTiger templates may be used to specify the overall structure of a large document, as well as the fine details of some of its parts. This latter feature allows in particular to express how to use microformats in large documents.
XTiger is not a document type like XHTML, SVG or MathML. It is always used in combination with a target language, which is a document type. The XTiger elements interspersed in a template are not supposed to be displayed in the same way as elements of the target language. Instead, the role of these XTiger elements is to specify what elements and attributes of the target language must, should or could be present at these positions in the document instance. That is the core of the language, which specifies the structure and (parts of) the content of documents.
This functionality is complemented with additional features that make the language easier to use. For instance, structure fragments can be defined only once and used at several places, in one or several templates. This facilitates a modular construction of templates, by sharing reusable pieces of structure stored in libraries.
As XTiger is used to describe structures, and because it is always mixed with XML languages, it is itself an XML language. XML namespaces are used to distinguish between XTiger elements and elements from the target language. This distinction allows existing web browsers to simply ignore the XTiger elements and to display a template as if these elements were not present.
The XTiger namespace is. For the
sake of readability, all examples in this document use prefix
xt:
for XTiger element names, while names from the target language are not
prefixed.
The target language used in the following examples is XHTML, but it might be
any other XML language as well. The first example below is a piece of XTiger
language that defines a component called "author" (see also other template
examples). This component is constituted by a XHTML
paragraph
that contains a few XHTML
span and
br elements, with
classes from the hCard microformat. The "author" component can be used to
generate the XHTML structure representing an author in document instances,
following the hCard microformat.
<xt:component <p class="vcard"> <span class="fn"> <xt:useAuthor name</xt:use> </span> <br/> <span class="adr"> <xt:useAddress ...</xt:use> </span> <br/> <span class="email"> <xt:useemail ...</xt:use> </span> </p> </xt:component>
In XTiger, types are used to specify pieces of structure that may occur at several places in a template or in several templates. XTiger offers a few basic types and allows constructed types to be built. Constructed types are built with constructors that combine XTiger basic types and types from the target language. Two constructors are available: component and union.
XTiger offers three basic types:
numberrepresenting integers and floating point numbers,
booleanrepresenting boolean values (
trueor
false),
stringrepresenting variable length character strings.
As XTiger always works with a target language and is used to produce
documents in that language, it may use elements and attributes from the target
language. For instance, when the target language is XHTML, elements
h1,
h2,
p,
strong,
span,
cite are target language types.
Component is a constructor that creates a new constructed type
by specifying an XML structure assembling other types, which may be basic
types, target languages types and constructed types (unions and other
components). The type thus created has a name that allows it to be referred
from other XTiger elements. This name must be unique in the template where it
is defined.
The XTiger element
component is used to define a component
type:
<!ELEMENT component ANY> <!ATTLIST component name NMTOKEN #REQUIRED>
name
componentelement defines the structure of the new type. It may be any XML structure that combines target language elements (possibly with attributes) and XTiger elements allowed in the template body.
An example :
<xt:component <p>Hello world!</p> </xt:component>
This example defines a type called "hello" that is a XHTML paragraph (the
target language of the template where this element occurs is XHTML) containing
the text "Hello World !". It uses a target language type (
p
element).
Union is a constructor that defines a new type as a choice
between several types, each of which being a basic type, a target language
type, or a constructed type (component or other union). The new type has a name
that allows it to be used in other XTiger elements. This name must be unique in
the template where it is defined.
The XTiger element
union is used to define a union type:
<!ELEMENT union EMPTY> <!ATTLIST union name NMTOKEN #REQUIRED include CDATA #REQUIRED exclude CDATA #IMPLIED>
name
include
div,
h1,
h2,
p, ... for XHTML), or the name attribute of a component or another union. This attribute is used to define the options that constitute the union. This attribute is mandatory.
exclude
div,
h1,
h2,
p, ...), or the name attribute of a component or another union. This attribute is used to exclude some elements that are part of the union as defined by the
includeattribute. This attribute is optional.
XTiger provides four predefined unions that may be used in any type definition:
anySimple
number,
stringand
boolean)
anyElement
anyComponent
any
anySimple,
anyElementand
anyComponent.
Example :
<xt:union <xt:union <xt:union
With these definitions, the
hello_or_p union provides a choice
between the
hello component and the
p element. The
headings union provides a choice between all HTML headings
(
h1 to
h6). The
headings1to4 union
provides a choice between all HTML headings except
h5 and
h6.
The definitions of components and unions presented above must appear in the head of an XTiger template, or in a XTiger library imported by a template.
Type definitions do not appear in document instances. Instead, instances include a Processing Instruction that refers to their template, which contains type definitions and reference to libraries containing additional type definitions.
The head element collects definitions of components and unions that are used
in the template. It also refers to the libraries that contain additional
components and/or unions used in the template. This is done with the
import element.
There is always a
head element in a template, but only one. It
may appear anywhere in the template, but it cannot be the root of the document.
In XHTML documents it is recommended to insert it in the XHTML
head element.
<!ELEMENT head ((component | union | import )*) > <!ATTLIST head version CDATA #REQUIRED templateVersion CDATA #IMPLIED>
version
templateVersion
component,
unionand
importelements, but no other elements.
A XTiger library is an XML document containing definitions of constructed
types (components and/or unions). Libraries allow types to be declared only
once and to be shared between different templates. A XTiger library is defined
by the root element
library. Its content model is the same as the
head element of a template. Like the
head element, a
library can import other XTiger libraries using the
import
element.
<!ELEMENT library ((component | union | import)*)> <!ATTLIST library version CDATA #REQUIRED templateVersion CDATA #IMPLIED>
When a template or a library uses constructed types defined in a library,
that library must be explicitly imported in the template or library that uses
it by an
import element.
<!ELEMENT import EMPTY > <!ATTLIST import src CDATA #REQUIRED>
src
All components and unions declared in the imported library are inserted at
the position of the
import element. Some imported components and
unions can be redeclared (same
name attribute) in the current
head or
library element. The order of
import elements in a head or library is important: a
component or a
union defined in an imported library
with the same
name attribute as a previous definition replaces
that previous definition.
A template contains a set of type definitions grouped in the
head element but also the skeleton of a target language document
and some XTiger statements that are used to generate instances. The latter
(skeleton and statements) is called the template body. A copy of it
serves as initial instance when a new document is created from the template.
The
head element and its definitions are not copied in the
instance.
All target language elements included in the template body appear in all document instances exactly as they are in the template. Their content is preserved and can not be modified in instances. This is the static part of the template.
There is also a dynamic part in a template, i.e. a part that can be modified under the control of XTiger elements. The XTiger elements that control the dynamic part are:
use, for including elements defined by their type,
bag, for defining free content areas,
repeat, for repeating elements of a given type,
attribute, for specifying how to use target language attributes and their values.
The
use element indicates what type(s) of element can appear at
that position in an instance. Only one element of the specified type(s) can
appear at that position in an instance document.
<!ELEMENT use ANY> <!ATTLIST use label NMTOKEN #IMPLIED types CDATA #REQUIRED option (set|unset) #IMPLIED currentType CDATA #IMPLIED initial (true) #IMPLIED
label
useelement. element to be inserted at that position in an instance must be of one of these types, but there is no constraint on the descendants of the inserted elements, provided they comply with the DTD or schema of the target language, when target language elements are used. This attribute is mandatory.
Recursion is forbidden. For example, when a
use element
is part of a component, it cannot refer to that component.
option
useelement is optional. The value is
setwhen the content is generated and
unsetwhen it is omitted. Usually in a template the value is
set.
currentType
initial
useelement may have a content. If a content is present in the template, it must be of one of the types listed in the
typesattribute. This content is considered as an initial value that will be present in an instance. It may be replaced by an instance author by another content, provided it is compliant with the
typesattribute.
Even if a
component is used only once in the template, it must
be declared within the template
head and a
use
element will refer to it.
Example 1:
<xt:use Your birth date here </xt:use>
In this example "Your birth date here" is the content that will be displayed when a new instance is created from the template. This string can be freely replaced by an instance author by any other string, but only by a string.
Example 2:
<xt:head <xt:component <xt:use20</xt:use> / <xt:use10</xt:use> / <xt:use1981</xt:use> </xt:component> ... </xt:head> ... <xt:use
This example shows how a component can be used to make sure that the user will enter a date in the dd/mm/yyyy format.
<xt:use <em>20 october 1981</em> </xt:use>
Here, the content of the
xt:use element may be either an XHTML
em element or a
short_date component. Only one of
them can be inserted at that position in an instance. The current content
<em>20 october 1981</em> is a valid value, because it
is an
em. It does not need to be also a
short_date.
The
use element puts strong constraints on the structure and/or
content of a part of a document. It is sometimes useful to have more
flexibility. That is the role of the
bag element. It indicates
that any number of a set of elements may appear at that position in an instance
document, and it specifies the allowed types for these elements.
<!ELEMENT bag ANY> <!ATTLIST bag label NMTOKEN #REQUIRED types CDATA #REQUIRED> include CDATA #IMPLIED> exclude CDATA #IMPLIED>
label
bagelement. elements to be inserted at the top level of the bag in an instance
(bag children) must be of one of these types. The
types
attribute is mandatory. By default, all descendant element types allowed
by the target language can be inserted into bag children.
include
This attribute is used to extend the list of allowed descendant element types that could be inserted into bag children. This attribute is optional.
exclude
This attribute is used to exclude some element types from the possible set of descendant element types. This attribute is optional.
bagelement may have a content. If a content is present, it must follow the constraints set by the
typesattribute. This content is considered as an initial value that will be present in an instance. It may be replaced by an instance author by another content, provided it remains compliant with the
typesattribute.
Example 1:
<div> <xt:bag <p> This <em>paragraph</em> contains <em><strong>strings</strong></em> and <strong><code>any</code></strong> combination of <em>emphasis</em>, <code>code</code> and <strong>strong</strong> elements. </p> </xt:bag> </div>
Many occurrences of
p,
h2,
h3,
h4, and
div elements may appear at the top level of
the
bag, and only these elements. There is no constraint about the
order of these elements and as for the
use element, no constraint
is specified on the content of these elements.
By default the
bag element will generate this initial
paragraph ;
em and
strong elements are allowed by
the target language.
Example 2:
<div> <xt:bag <h2>Title...</h2> <p> This <em>paragraph</em> contains <em><strong>strings</strong></em> and <strong><code>any</code></strong> combination of <em>emphasis</em>, <code>code</code> and <strong>strong</strong> elements. </p> </xt:bag> </div>
In example 2,
h3 and
h4 elements cannot appear at
the top level of the bag, only
p,
h2, and
div elements are allowed. The
include attribute says
that the
author component can be inserted within the
bag but not at the top level. The
exclude attribute
says that the
h2 element can be inserted only at the top level.
Example 3:
<div> <xt:bag <p> This <em>paragraph</em> contains <em><strong>strings</strong></em> and <strong><code>any</code></strong> combination of <em>emphasis</em>, <code>code</code> and <strong>strong</strong> elements. </p> </xt:bag> </div>
In example 3, any element of the target language can appear at any levey level of the bag. Only the target langage constraints apply.
It is often useful to be able to repeat a piece of the document structure
(or an alternative of pieces) several times. In this case, the structure to be
repeated must first be declared as a
component. It can then be
used with a
repeat element in the template body around a
use element that refers the component(s).
<!ELEMENT repeat ( use+ )> <!ATTLIST repeat label NMTOKEN #REQUIERED minOccurs CDATA #IMPLIED "1" maxOccurs CDATA #IMPLIED "*">
label
repeatelement. This attribute allows authors of instances to make a difference between the many XTiger elements that appear in a document. It is mandatory.
minOccurs
maxOccurs
A
use element indicates (with its
types
attribute) the type of the component to be repeated. Basic types are not
allowed. The
use element cannot have an
option
attribute, as the option is equivalent to a
minOccur="0".
If the
types attribute of the
use element is
a list of several types, the repeated elements may have any of these
types. Several
use elements may be present in a
repeat element in a template to provide initial values to
several repeated elements.
Example:
<xt:head <xt:component <xt:use <xt:use </xt:component> <xt:component <li> <xt:repeat <xt:use </xt:repeat> ... </li> </xt:component> ... </xt:head> ... <h2>Bibliography</h2> <ul> <xt:repeat <xt:use </xt:repeat> </ul>
This example describes a bibliography section which includes at least one
bib_item element. Each of these elements may contain one to five
authors.
The document bibliography could be also defined with a
bag
element:
<h2>Bibliography</h2> <ul> <xt:bag </ul>
In that case, the list of
bib_item could be empty. It is
equivalent to a
repeat with
minOccurs="0".
XTiger provides a way to control attributes from the target language. This
is achieved by inserting an
attribute element as a child of a
target language element. The
attribute element makes an attribute
of its parent element mandatory, fixed, or prohibited. If several attributes of
a single target language element have to be controlled, several
attribute elements must be used, one for each of these
attributes.
<!ELEMENT attribute EMPTY> <!ATTLIST attribute name NMTOKEN #REQUIRED type (number, string, list) #IMPLIED "string" use (required, optional, prohibited) #IMPLIED "required" default CDATA #IMPLIED fixed CDATA #IMPLIED values CDATA #IMPLIED>
name
type
typeattribute is not present, the default type "string" is assumed.
use
useis not present, the default value "required" is assumed.
default
defaultis optional.
fixed
fixedis optional
values
valuesis optional.
Example:
<div> <xt:attribute ... </div>
This example shows a XHTML
div element whose
class
attribute is made optional with value limited to the three options
comment,
example and
info. The default
value is set to
comment.
When working with XTiger templates, three different kinds of resources are involved:
.xtdextension.
.xtlextension.
When a user creates a document from a template, the new document
instance is created as a copy of the template. However, the
xt:head element with its type definitions is kept by the authoring
tool, but it is not copied in the document instance.
The template is linked to the new instance by a processing
instruction:
<?xtiger template="URI/of/the/template.xtd" version="1.0"
templateVersion="xx" ?>
which is inserted at the beginning of the instance, in the same way CSS style sheets are linked to XML documents. With this link, the authoring tool can find all the type definitions needed during editing sessions. All other XTiger elements (
use,
bag,
repeat,
attribute) as well as all target language elements are kept in the
copy that constitutes the initial instance. XTiger types that appear in these
elements are replaced by references to their definition in the template
(actually, by references to a parsed representation of types in core memory
which is more compact).
Francesc Campoy Flores, Vincent Quint, Irène Vatton, Templates, Microformats and Structured Editing, Proceedings of DocEng'06, ACM Symposium on Document Engineering, 10-13 October 2006, Amsterdam, The Netherlands, pp. 188-197. This research paper presents an early version of the XTiger language.
|
http://www.w3.org/Amaya/Templates/XTiger-spec.html
|
CC-MAIN-2014-49
|
refinedweb
| 3,492
| 54.63
|
!python --version
Python 3.8.0rc1
Python 3.8 is a major release of Python. Here's a roundup of some new features.
- New Syntax Rules!
- Parallel Data Ops Improvements!
- Static Type Checking Features!
- CPython Stuff!
Let's start with a new syntax rule:
Assignment in Expressions (PEP 572)
Python now allows you to create variables inside of expressions (e.g. the body of a list comprehension.)
def f(x): for k in range(10000000): x*x*x/x+x+x+x+x return x x = 2 # Reuse a value that's expensive to compute %timeit [y := f(x), y**2, y**3] # Without reuse %timeit [f(x), f(x)**2, f(x)**3]
1.78 s ± 13.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) 5.45 s ± 93.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Doing this in a "naked" expression is strongly discouraged in the PEP. See: Exceptional Cases
Here's one more new syntax rule:
Positional-Only Args (PEP 570)
This one confused me until I scrolled down to the syntax section, so let's just see it in action:
def prelaunch_func(foo, bar, baz=None): print(f"\nFoo:{foo}\nBar:{bar}\nBaz:{baz}") def pep570_func(foo, /, bar, baz=None): print(f"\nFoo:{foo}\nBar:{bar}\nBaz:{baz}")
The
/ in the call signature of
pep570_func denotes where the positional-only args end. These functions will let us see the practical difference PEP 570 causes.
prelaunch_func(foo=1, bar=2, baz=3)
Foo:1 Bar:2 Baz:3
# Violating positional-only pep570_func(foo=1, bar=2, baz=3)
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-4-9e9851275486> in <module> 1 # Violating positional-only ----> 2 pep570_func(foo=1, bar=2, baz=3) TypeError: pep570_func() got some positional-only arguments passed as keyword arguments: 'foo'
# This is fine, as bar is right of the "/" in the call signature pep570_func(1, bar=2, baz=3)
Foo:1 Bar:2 Baz:3
# This was wrong before PEP-570 of course pep570_func(foo=1, 2, baz=3)
File "<ipython-input-6-adcd14865e94>", line 2 pep570_func(foo=1, 2, baz=3) ^ SyntaxError: positional argument follows keyword argument
What I like best about this PEP is that you can also have positional-only args with default values, as shown here:
def pep570_func(foo, bar=None, baz=1, /): print(f"\nFoo:{foo}\nBar:{bar}\nBaz:{baz}") pep570_func(10)
Foo:10 Bar:None Baz:1
# But don't mistake them for kwargs pep570_func(10, bar=1)
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-05c1c4b07ca6> in <module> 1 # But don't mistake them for kwargs ----> 2 pep570_func(10, bar=1) TypeError: pep570_func() got some positional-only arguments passed as keyword arguments: 'bar'
For more details check out the PEP: PEP-570
Now let's move on to some parallel data processing stuff:
Shared Memory and New Pickles
Two important improvements in 3.8 were inspired by libraries like dask that try to solve the problem of passing data between processes. One is to create a new version of
pickle that can pass data objects between processeses using zero-copy buffers, for more memory efficient sharing..
The other biggie in this category is that processes have a new interface for sharing memory:
SharedMemory and
SharedMemoryManager. The docs feature a very exciting example:
import multiprocessing from multiprocessing.managers import SharedMemoryManager # Arbitrary operations on a shared list def do_work(shared_list, start, stop): for idx in range(start, stop): shared_list[idx] = 1 # Example from the docs with SharedMemoryManager() as smm: sl = smm.ShareableList(range(2000)) # Divide the work among two processes, storing partial results in sl p1 = multiprocessing.Process(target=do_work, args=(sl, 0, 1000)) p2 = multiprocessing # `do_work` set all values to 1 in parallel print(f"Total of values in shared list: {total_result}")
Total of values in shared list: 2000
It remains to be seen if these improvements create a happy path that parallel ops focused libraries will adopt. In any case, it's exciting. Check out
SharedMemory here.
Now let's look at some staic type checking stuff:
Typed Dictionaries (PEP 589)
Type hints don't cover nested dicts and dataclasses don't parse to JSON well enough, so now Python will support some static type-checking for dictionaries with a known set of keys. Consider the script below:
# scripts/valid_589.py from typing import TypedDict class Movie(TypedDict): name: str year: int # Cannonical assignment of Movie movie: Movie = {'name': 'Wally 2: Rise of the Garbage Bots', 'year': 2055}
mypy will pass this with no issues because the dictionary is a valid implementation of a type.
!mypy scripts/valid_589.py
[1m[32mSuccess: no issues found in 1 source file[m
Now consider the invalid script below -- it has values of the wrong types:
# scripts/invalid_values_589.py from typing import TypedDict class Movie(TypedDict): name: str year: int def f(m: Movie): return m['year'] f({'year': 'wrong type', 'name': 12})
!mypy scripts/invalid_values_589.py
scripts/invalid_values_589.py:10: [1m[31merror:[m Incompatible types (expression has type [m[1m"str"[m, TypedDict item [m[1m"year"[m has type [m[1m"int"[m)[m scripts/invalid_values_589.py:10: [1m[31merror:[m Incompatible types (expression has type [m[1m"int"[m, TypedDict item [m[1m"name"[m has type [m[1m"str"[m)[m [1m[31mFound 2 errors in 1 file (checked 1 source file)[m
You can also check for missing values, invalid fields, or create
TypedDicts that will accept missing values by using the
total=False with the constructor.
Finality (PEP 591)
3.8 will also implement finality. We can prevent objects from being overriden or inherited. The
@final decorator can be used with a
class definition to prevent inheritence, and the
Final type will prevent overrides. Here's two examples from the PEP:
# Example 1, inheriting a @final class from typing import final @final class Base: ... class Derived(Base): # Error: Cannot inherit from final class "Base" ... # Example 2, overriding an attribute from typing import Final class Window: BORDER_WIDTH: Final = 2.5 ... class ListView(Window): BORDER_WIDTH = 3 # Error: can't override a final attribute
Finality can be applied to methods, attributes and inheritence. Check out all the features in PEP 591
Literals (PEP 586)
The type hinting call signature
def f(k: int): doesn't help us if the function expects
ints in a range.
open requires a
mode: str argument from a specific set of strings.
Literals to the rescue!
# scripts/problematic_586.py def problematic_586(k: int): if k < 100: return k else: raise ValueError('Gotta be less than 100') problematic_586(144)
!mypy scripts/problematic_586.py
[1m[32mSuccess: no issues found in 1 source file[m
Instead, we can pass a
Literal to the type hinting for our argument
k.
# scripts/valid_586.py from typing import Literal def valid_586(k: Literal[0, 1, 2, 99]): if k < 100: return k else: return float(k) valid_586(43)
!mypy scripts/valid_586.py
scripts/valid_586.py:8: [1m[31merror:[m Argument 1 to [m[1m"valid_586"[m has incompatible type [m[1m"Literal[43]"[m; expected [m[1m"Union[Literal[0], Literal[1], Literal[2], Literal[99]]"[m[m [1m[31mFound 1 error in 1 file (checked 1 source file)[m
There's a bit of nuance to using
Literal so if you decide to explore further start with PEP-586.
And that's it! I'm going to hold off on writing about the CPython features because, frankly, I would prefer not to write about them until I have my arms more firmly around CPython generally and those features in particular.
Thanks for reading!
P.S. If you would like to play with any of the examples in this blog or test drive Python3.8 without installing it, I'm providing the source repository for this blog below. It includes a Dockerfile that spins up a Python 3.8 jupyterlab environment.
CharlesDLandau
/
python38_blog
A repo with the source material for a blog
Python38_Tour
This is the source repo for this blog, and it also includes a Dockerfile for simply launching python 3.8 without having to install it locally. So:
//build docker build -t <img_tag> . //run on *nix docker run -p 8888:8888 -it --rm -v $(PWD):/code --name dev <img_tag> //run on windows docker run -p 8888:8888 -it --rm -v %CD%:/code --name dev <img_tag>
If all goes well you'll be prompted to copy a URL into your browser, which will point to your local port 8888 with a token to authorize access to the Jupyter instance.
Discussion (8)
Very helpful compilation, saved lot of time, thanks! 👍 👏
Great article :]
Thanks! I'm glad you enjoyed it.
multiprocessing.SharedMemoryis a game changer in my opinion. With more and more people doing heavy computation in Python for data science, this will help a lot.
I agree with you on the walrus operator (:=), IMO it is not that useful and will only make code difficult to understand.
Python is a weird language.
The fact that a positional argument can be a keyword argument too, is such a poor and error prone feature.
At least this update helps with this somewhat.
|
https://dev.to/charlesdlandau/python-3-8-has-been-released-let-s-take-a-tour-bj3
|
CC-MAIN-2022-27
|
refinedweb
| 1,524
| 52.9
|
The status of JavaScript libraries & frameworks: 2018 & beyond.
Libraries and frameworks are one of the intense competitiveness on front-end development.
In these days, the meaning of FE dev is about what libraries or frameworks is using.
Every year new different projects appeared with their own features, but now we can roughly agreeing that Angular, React or Vue.js are the pioneers in this world.
This was proved by the survey result.
The nowdays FE dev can be illustrated as agonizing what is the best choice and taking a look of the current & future status will be a good starting point of your consideration.
React
Without doubt, React leads among others. Last year faced a difficult moment — the license issue — which could led loose the actual position, but treated well.
Also, React is the most wanted on job markets. This result shows clearly that React is surpassing others.
At Jan/2018, the create-react-app(CLI tool helping create React app), has been moved from the incubation to the official facebook repository.
The version 16(codenamed ‘Fiber’) which was announced Sep/2017, improved SSR(rewrite of server renderer) and added the support of custom DOM prop and fragment. Also, the render method has been updated to return multiple elements.
Fragment looks like to be an empty JSX tag. It allows groping child nodes without adding new one and has been improved the usability on v16.2.
// fragments
render() {
return (
<React.Fragment> // or Short Syntax: '<>'
<ChildA />
<ChildB />
<ChildC />
<React.Fragment> // or Short Syntax: '</>'
);
}
Since Dec/2017, React started a new process to treat new features & suggestions via RFC(Request For Comments) inspired by Rust RFC.
New Context API
If you have experienced using React, is quite likely had the annoyance of through passing the top component’s status value to the lower in the React tree structure, known ‘prop drilling’.
This is one of the reason which state management libraries such as Redux or MobX comes to play.
Did you know? There’s a way to solve this issue using React’s API without using them. The ‘context API’.
But when you want to try using it, you’ll find “Why Not To Use Context” from the official doc and even more, it recommends not using.
If you’re still learning React, don’t use context
To solve, purposed a new experimental ‘Context API’ as the first RFC issue. The new API was included on the new v16.3 release.
- React’s ⚛️ new Context API
- Replacing Redux with the new React context API
Async Rendering
The ‘async rendering’ was raised from the basic question of “How to provide the best user experience among the differences of the computer power and net latency?”.
During the development of v16, there was a consideration on it, but couldn’t continued due to the potential backward compatibility, but the support will be added on the future release.
Dan Abramov(member of React dev team), did two demos called as ‘Time Slicing’ and ‘Suspense’ from his speech during JSConf Iceland last March.
Doing improvement on CPU side, is to filling gaps of the computer power difference. And the network side from IO.
Time Slicing(CPU)
Time Slicing, provides a generic way for high priority updates to not blocked from the lower priority.
Also it improves the responsiveness on lower performed devices by scheduling the complicated/difficult CPU jobs without the any intervention of the developer.
Suspense (IO)
Suspense provides a generic way to suspend and deferring component rendering from the asynchronous data loading.
It helps to get smoothness of the user experience(loading and rendering) without hurt from the async tasks(like REST, GraphQL API calls, etc).
Vue.js
2017 was an impressive and impactive year for Vue.js and similarly positioned as React and Angular as well.
Also became the most popular Front-End GitHub projects receving 40K stars last year.
Having the similarity as AngularJS, Vue.js can be an attractive alternative for AngularJS users.
This is some how persuasive having the consideration of:
- The migration to Angular is not that easy.
- Knowing the official support for AngularJS will end.
Evan You, the creator, talked about the similarity with AngularJS..
—
What differs Vue.js from others?
Vue.js describes itself as “The Progressive JavaScript Frameworks”.
The core focuses on data binding and component as React and it get easier for whom knowing the basic web technologies like HTML, JS and CSS.
But when the application becomes complicated, is inevitable the necessities of helping tools like routing, state management, communication among components, etc.
Frameworks like Ember and Angular approach including all of these helpers on their own. Like React let these to the community ecosystem.
Vue.js takes as the middle. The core provide the minimal functionality. But of course, they provides officially maintained well made tools with docs also.
The ecosystem is growing
Nuxt.js which was inspired from the React based SSR universal webapp framework next.js(first appearance 2016), reached v1.0 on Jan/2018.
The vuetify, which helps build material design UI component, also reached v1.0 on Feb/2018.
The popular VSCode from MS, also started to support debugging functionality of Vue.js.
These changes makes attractive atmosphere for Vue.js more and more.
Prospectives
The powerful CLI tool vue-cli, which let configure dev environment, will release a new version soon.
On v3.0, the new targeted build option will be added. It’ll allow create easily with the three(App, Lib and Web Componet — and planned to add more w/community collaboration) targets. Also will have the ‘zero configuration’ support.
The current latest of the core is 2.5.x. Next minor release(v2.6), will support native ESM import, improved async error handling, iterator on ‘v-for’ directive and more.
Support of IE will be dropped starting the next version of 2.6(2.6-next or 3.0), having evergreen browser supports only.
Vue 3 is not going to be one of those “big change releases” — the main difference will be that Vue 3 only targets modern “evergreen” browsers (i.e. IE11 & below are out). ‐ Hashnode: AMA with Vue.js Team
Basically, the development of the next version will have the backward compatibility with the v2.6 dev in parallel. The codebase will have the newest ES specification also.
There’s no doubt predicting another great year 2018 for Vue.js. Many developers used to say about ‘Angular vs React’ before, but now most will be agreeing on ‘Vue.js vs React’.
Take a look "Why we moved from Angular 2 to Vue.js (and why we didn’t choose React" article for reference.
Angular
Following the every 6 months release schedule, the v5.0 was release Nov/2017. The coming v6.0, reached RC status and if follows the schedule, it expected to be released on April.
Last year Angular focused improving performance mostly. In this year will be expected to have adding new features and new different approaches as well.
Prospectives on the v6.0 changes
Ivy Renderer
The new backward compatible experimental renderer called ‘Ivy Renderer’, will be added. It aims provides small size, simple debugging and faster compilation.
It will not be a ‘breaking change’. It will be automatically enabled updating to the newer version.
Angular Elements
Angular Elements allows Angular components to publish as Custom Element. Simply think as a Angular component wrapped as Custom Element.
This means the expansion of Angular Component more freely. And being as Custom Element, it can be used on vanillaJS or on different frameworks such as React!
- Angular Elements — Rob Wormald — AngularConnect 2017
- Export Angular components as Custom Elements with “Angular Elements”
Angular Labs
Angular Labs is the idea announced from the ‘AngularMix’ conference on Oct/2017. The main goal is to provide having a clear and balanced communication with the previous releases, about new features and research.
Has been set the following initial three goals.
1. Shematic
The Angular DevKit / CLI team’s efforts to build generic tooling for transforming code, likes scaffolding, querying backend APIs, etc.
2. Component Dev Kit
The Angular Material team’s working to extract some of the core solutions to common component development problems, and to expose them via the CDK.
CDK also includes extensible tools bringing different mechanisms helping the component development.
3. ABC (Angular Buildtools Convergence: Angular + Bazel + Closure)
An effort to converge the toolchain used internally at Google on building Angular applications with the external one. They’re consisted with:
- Bazel: The build system used for nearly all software at Google.
- Closure Compiler: The optimizer used to create JavaScript artifacts for nearly all Google web applications.
Migration of examples
In conjunction with StackBlitz, the example codes on Plunker will be moved to StackBlitz.
If you’re familiar with the VSCode, it can give more comfort way on using.
What is the future of AngularJS (v1.x)?
As most knows, AngularJS is the v1.x(Angular is v2.x+). How many users still using it?
The answer can be found from the developer’s survey result and according the result, there’re still a significant AngularJS users.
From this, we can infer that the upgrade to Angular doesn’t mean a simple ‘upgrade’ for AngularJS users.
This isn’t just on compatibility issue of Angular with previous 1.x version only. There’s also new learning curves, TypeScript.
Long time ago, the core team promised on supporting multiple languages, but it didn’t happened. This is critical for those can’t give up their comfortable way on developing application(ex. those whom using CoffeeScript, etc.).
Until when the support of AngularJS will be continued?
The new coming minor release v1.7, will be held before the July/2018. After that, starting from the July 1st, v1.2.x and v1.7.x will be enter a Long Term Support period for 3 years.
Web Components and Polymer
2017 was a ‘big impressive’ for Web Components, because the support of browser was expanded.
Safari officially added Custom Elements and ShadowDOM support and Firefox(currently flags: dom.webcomponents.enabled, dom.webcomponents.shadowdom.enabled enabling are required) will be expected follow on version 60/61. Edge remained the only one without support.
The new <script type=”module”> has been added newly as a part of Web Components substituting HTML Imports. The lack of interest and the slow adoption of browser was the reason.
Since then, Polymer 3.0 announced the transition plan on using ES6 modules instead of HTML Imports.
ESM is supported on all modern browser, so technically the lack support of browsers has been removed!
The recent and 2017 changes
Polymer
With the release of v2.0(May/2017), has been improved the interoperability with other library/framework. Also removed the restriction on using Polymer.dom for DOM handling and ShadyDOM(the ShadowDOM shim) has been splitted as stand alone polyfill.
The way on defining elements via factory method, changed to use more on standard way by ES6 class syntax and custom elements.
The recent release v2.4(Jan/2018), added the support of TypeScript and by February announced polymer-decorator.
In term of service adoption, the renewal design of YouTube was developed using Polymer. The adoption by Google on their flagship service can be a significant for many others.
The Web Components ecosystem
The new specification “Template Instantiation” purposed by Apple, brings different ways of instantiate template with the usage of template syntax, the use of condition and loops.
CSS Shadow Parts proposal also looks interesting. With the ::part() and ::theme() functions, make possible style shadow DOM elements from outside.
<my-slider>
#shadow-root
<div part="track"></div>
<div part="thumb"></div>
</my-slider>
// defined outside of <my-slider>
my-slider::part(thumb) {
color: red;
}
Polymer v3.0
The v3.0 will be automatic translation from v2.0 and will be transition to ESM as mentioned earlier.
The new library called ‘lit-html’(still experimental)will be used on creating custom element. The element created will be called ‘lit-element’.
lit-html was announced on Polymer Summit(2017), which focuses on DOM rendering implementing it by ES6 Tagged Template Literals. Similar as React’s JSX, but it doesn’t require build process, because is standard.
It’s extensible providing directive and customized syntax.
Similar Tagged Template Literals libraries are: hyperHTML, hyperx, t7.js
The prefix ‘lit’ stands for ‘literals’ and ‘little’
Static Type System
JavaScript is dynamic typed language, which types of variable are defined at the runtime by interpreter. Because of this nature, many traditional language developers indicated as weak point.
One of the problem not having types is the possibility of bug increase. To overcome, many attempts have been in non-standard way.
The representative tools and languages are, TypeScript from Microsoft, Flow and ReasonML from Facebook and followed by PureScript.
What are the biggest advantages on using it?
One study report says, the adoption of static type system can decrease 15% of bug rates.
static type systems find an important percentage of public bugs: both Flow 0.30 and TypeScript 2.0 successfully detect 15%!
— To Type or Not to Type:Quantifying Detectable Bugs in JavaScript
Which one choose?
TypeScript leads for now followed by Flow and ReasonML.
TypeScript(superset of JavaScript), ReasonML(OCaml) and PureScript(Haskell) approaches as new programming languages and Flow as a tool. This are illustrated by how describes themselves.
- TypeScript: TypeScript is a superset of JavaScript that compiles to clean JavaScript output.
- Flow: Flow is a static typechecker for JavaScript.
- ReasonML: Simple, fast & type safe code that leverages the JavaScript & OCaml ecosystems
- PureScript: A strongly-typed language that compiles to Javascript, written in and inspired by Haskell
Take a look on simple example below. The code throws an error, but it’s not shown until runtime.
function square(n) {
return n * n;
}
square("oops");
As being superset of JavaScript, TypeScript has same syntax we already know. Just adding data types on it, can make to detect errors during compile time.
function square(n: number): number { ... }
Flow, can be done without defining types. Everything can be inferred.
This characteristic provide the adoption of type check without code changes and cost.
- Adopting Flow & TypeScript
- JavaScript vs. TypeScript vs. ReasonML
Prospectives
The growth of TypeScript will be consistent. Many well known projects are using it.
Angular, Vue.js, Polymer and GitHub Desktop are using TypeScript
While Flow and ReasonML built and used by Facebook on React and Facebook Messenger, not having a clear reference like TypeScript.
Reducing the rate of bugs is optimistic, but the needs of additional configuration(compile) and the learning curve can be an obstacle.
These all are basically an additional “supportive” tools helping to make good quality code and they aren’t “standard”.
jQuery
Still alive and well
jQuery isn’t primary option anymore when you consider start a project, but 300K downloads happens everyday. This is almost 300% growth compared at the beginning of 2017.
Not only downloads. The 90% of top 500K websites still uses jQuery.
It can be a surprising result for whom thought jQuery is an old fashioned forgotten ancient library.
Prospectives
jQuery team had two releases(3.2.0 and 3.2.1) last year. Seeing this, it looks changes are slow downed.
Timmy Willison(core member), explains about as:
the team decided a while ago to release at a slow but steady pace, which we translated to about 2 releases a year
— What’s the future of jQuery in 2018?
The planned v4.0 will have below changes.
- A complete rewrite using next generation JavaScript
- A rewrite of our speed framework
- An all-new event module design
For more details about future plans, check out:
- v4.0 Milestone and Future Milestone as well.
Using the latest libray and framework isn’t the right answer. We don’t know what will be the future of jQuery, but is doing well not loosing its influence.
Closing
Front-end development is quite dynamic and impressive.
Following each corner of stack seems impossible, but knowing and understanding the part of it, can give us the insight on front-end development.
How many libraries/framework we should know? Knowing them all is the prove of your skill capability?
Of course, is better than not knowing, but each of them approaches with different philosophy saying they are all good and best.
There’s no right thing.
|
https://medium.com/@alberto.park/the-status-of-javascript-libraries-frameworks-2018-beyond-3a5a7cae7513
|
CC-MAIN-2018-17
|
refinedweb
| 2,728
| 59.19
|
Maplat::Worker::MemCache - Module for access to memcached
This is a wrapper around Cache::Memcache (and similar) with a few addons.
This module provides a Worker modules that gives the caller an interface to the memcached service. Internally, it tries to use (in that order) Cache::Memcached::Fast, Cache::Memcached and Maplat::Helpers::Cache::Memcached and test if they actually can set and retrieve keys.
<module> <modname>memcache</modname> <pm>MemCache</pm> <options> <service>127.0.0.1:11211</service> <namespace>MyMaplat</namespace> </options> </module>
service is IP address and port of the memcached service.
Refresh the lifetick variable for this application in memcached.
Temporarly disables lifetick handling for this application. To resume, just call refresh_lifetick once more.
Save data in memcached.
Takes two arguments, a key name and a reference to the data to be stored in memcached. Returns a boolean to indicate success or failure.
Read data from memcached. Takes one argument, the key name, returns a reference to the data from memcached or undef.
Delete a key from memcached. Takes one argument, the key name, returns a boolean indicating success or failure.
Sets the command currently processed by this application (or 0 to indicate no active command). Takes one argument, the id of the currently active command. Returns a boolean indicating success or failure.
Internal function to sanitize (clean up and re-encode) the memcached key string. Memcached has some limitations how the keys can be named, this functions is used on every access to memcached to make sure the keys adhere to this restrictions.
This module is a basic module which does not depend on other worker modules.
Memcache caches data between runs of Maplat. If you're upgrading Maplat or changing some data structures you want to save/retrieve with Memcache, you should restart your memcached daemon.
Otherwise, expect some unexpected results (aka the "WTF is going on" effect)..
|
http://search.cpan.org/~cavac/Maplat-0.995/lib/Maplat/Worker/MemCache.pm
|
CC-MAIN-2016-40
|
refinedweb
| 314
| 59.19
|
User talk:BryanDavis
Contents
- 1 Your shell access was granted
- 2 Welcome to Tool Labs
- 3 A beer for you!
- 4 Une étoile pour vous !
- 5 Administrator rights
- 6 Access request for Tool Labs to support non-WMF wikis
- 7 New
- 8 Access Request
- 9 No access to add URI - Diffusion
- 10 Shell Access Request
- 11 A pie for you!
- 12 Some JS deletions, please
- 13 Outdated info
- 14 Logstash access for slow filters
- 15 Tool question
- 16 User to block
- 17 Toolforge membership request
- 18 Request
- 19 mkdir -p && venv
- 20 User Rights
- 21 geocommons/kml source
- 22 Setting for tools per page on list?
- 23 Revdels
- 24 deleting repository for tool:20, 29 July 2013 (UTC)
Welcome to Tool Labs
Hello BryanDavis,) 18:54, 16 August 2013 (UTC)
A beer for you!
Une étoile pour vous !
Administrator rights
I don't feel a need to «talk about need for full admin rights». I assume however you know what's the gain in moving me from one group to the other, otherwise you would not have done so, and it would have been nice to write in my user talk page what permissions you sought to remove. Special:ListGroupRights is not especially enlightening (speaking of which, a different setup of the groups could be appropriate for clarity; if there's only a difference of 10 permissions, just remove those from the sysop group and move them to a super-sysop group, or something). Nemo 14:28, 18 February 2017 (UTC)
- @Nemo_bis: Sorry, no offense was intended. The reason for the change is described in task T158315. Fundamentally, having the ability to edit the
MediaWiki:namespace on wikitech is considered a very sensitive user right, and I decided to remove first and ask later. --BryanDavis (talk) 16:23, 18 February 2017 (UTC)
Access request for Tool Labs to support non-WMF wikis
Could you please process this access request for Tool Labs? I think it's outside of (Tool) Labs's primary scope; there have been exceptions to that in the past, but I don't want to decide that on my own. --Tim Landscheidt (talk) 18:16, 23 February 2017 (UTC)
- Thanks Tim Landscheidt. I have left a message on the user's talk page. --BryanDavis (talk) 19:23, 23 February 2017 (UTC)
New
Hello, sorry for bothering, but can you review my request? Alaa (talk) 16:07, 4 March 2017 (UTC)
- @Alaa:
Done --BryanDavis (talk) 18:02, 4 March 2017 (UTC)
Access Request
Sorry for bothering, but can you review my access request?
I am a intern at the Wikimedia Taiwan, we are running a project about a third part wiki that uses Taiwan aboriginal languages to write encyclopedia articles, and we need a bot to rapidly export Taiwan aboriginal language articles from Wikimedia Incubator and import them to that wiki, there is more details in the request page.Thank you!--R96340 (talk) 06:05, 8 May 2017 (UTC)
No access to add URI - Diffusion
Task T172413
Hello,
I followed your explanation, but when I click on "Add URI" I get an "Access Denied: R2119" message.
This is regarding the tool "signature-manquante-bot", diffusion-rep.
Can you help me with it?
--Brclz (talk) 15:14, 3 August 2017 (UTC)
- @Brclz: What URI did you try to use for your mirror origin? --BryanDavis (talk) 15:40, 3 August 2017 (UTC)
- I can't even add one, the error shows when I go on the page to add some (here, can't even see this page). URI to watch and update from would be (still experimenting to see if this would fit my needs). --Brclz (talk) 15:45, 3 August 2017 (UTC)
- @Brclz: The access policy for the repository shows that only the
Repository-Adminsgroup can manage the repo. This is either because you had not linked your Phabricator account with your LDAP account before you created the repository or there is some regression bug in repository creation. I can fix the repo permissions for you by adding your Phabricator account to the ACL, but to do that you will need to connect your accounts using. We should probably move this discussion to a Phabricator task as well to make it easier to track and to leave a better record for people how have similar issues in the future. --BryanDavis (talk) 16:00, 3 August 2017 (UTC)
- => phab:T172413 :-) Thx for the help so far --Brclz (talk) 16:33, 3 August 2017 (UTC)
Shell Access Request
Hello. I'm not sure where I should put the request for shell access, so I ended up here in the idea that you might be able to help. My old account "User:Simeondahl" is not getting used anymore as of right now since I renamed myself as "SimmeD". Is it possible that you can add "Shell user" back on this new account and remove it from it from the old? --SimmeD (talk) 16:08, 22 October 2017 (UTC)
- @SimmeD: "Shell user" is something that is automatically granted when a user becomes a member of a Cloud VPS project like Toolforge or Beta Cluster. See Help:Getting_Started for more information. --BryanDavis (talk) 04:23, 23 October 2017 (UTC)
A pie for you!
Some JS deletions, please
Hello and happy new year if we don't talk before the event. I wonder if you could please have the .js pages User:Thiemo Mättig (WMDE)/common.js and User:Thiemo Kreuz/common.js deleted? They're listed as double redirects and I can fix them as content administrators can't edit those user subpages. Thank you. --MarcoAurelio (talk) 12:19, 30 December 2017 (UTC)
- Link fix:
- --MarcoAurelio (talk) 12:20, 30 December 2017 (UTC)
- @MarcoAurelio:
Done --BryanDavis (talk) 17:14, 30 December 2017 (UTC))
Request
Please rename my username to Ruyaba. Thanks. RUYABA (talk) 01:46, 27 January 2019 (UTC)
- @RUYABA: Please create a Phabricator task tagged with #wikitech.wikimedia.org and #ldap requesting the account rename. Renaming Developer accounts is unfortunately not quite as easy as renames on the main project wikis, so it may take some time to complete the request depending on which systems you have used your Developer account to interact with. --BryanDavis (talk) 16:01, 27 January 2019 (UTC)
mkdir -p && venv
Hi! About [1], I think it does. Using
virtualenv:
$ rm -rf www/ $ virtualenv -p python3 ~/www/python/venv Running virtualenv with interpreter /usr/bin/python3 Using base prefix '/usr' New python executable in /data/project/permission-denied-test/www/python/venv/bin/python3 Also creating executable in /data/project/permission-denied-test/www/python/venv/bin/python Installing setuptools, pip...done. $ ls www/python/venv/ bin/ include/ lib/
And using
venv:
$ rm -rf www/ $ webservice --backend=kubernetes python shell If you don't see a command prompt, try pressing enter. $ python3 -m venv ~/www/python/venv $ ls www/python/venv/ bin/ include/ lib/ lib64/ pyvenv.cfg
Running this command creates the target directory (creating any parent directories that don’t exist already).
Dalba (talk) 03:40, 4 March 2019 (UTC)
- @Dalba: That's pretty cool. I will admit to not having tried it myself before I reverted your edit. I am not sure if making the need to create the directory structure explicit to the reader is more or less important than the number of steps we describe. If you feel strongly about removing the mkdir instruction I will not edit war with you over it. --BryanDavis (talk) 03:57, 4 March 2019 (UTC)
User Rights
Hi Bryan Davis, Please be requested to add autopatroller and shell user rights for me. Regards, ZI Jony (Talk) 19:30, 27 March 2019 (UTC)
- @ZI Jony: shell user is a legacy right that actually does nothing with the current implementation of Cloud VPS. Is there a specific reason that you would need the autopatroller right? We generally only grant it on this wiki to users who have made many good faith contributions, and generally just to reduce workload on our patroller community. --BryanDavis (talk) 19:38, 27 March 2019 (UTC)
- @BryanDavis: thanks for response. Shell user supposed to grant automatically when I was becomes a member of Toolforge project. Regarding Autopatroller right, on Phabricator I've stated work with several projects so it will reduce workload of patroller community when documented. Regards, ZI Jony (Talk) 20:08, 27 March 2019 (UTC)
geocommons/kml source
Hello BryanDavis. Thank you for the account activation. Please could you help me locate the
geocommons/kml source code. SSH'ing in and running
find / -name \*geocommons\* did not produce a result. Many appreciations, —Sladen (talk) 05:18, 29 June 2019 (UTC)
- See also User talk:Para#License_and_publish_code_so_you_can_get_some_help_with_maintenance? and User_talk:Para#kmlexport (extra pings to Dvorapa and Para. This is for trying to work T226710 where
geocommons/kmlis missing the majority of images.
- Sladen The source code you are looking for is somewhere under /data/project/geocommons/, but that tool's $HOME has permissions applied that keep everyone who is not a maintainer of the geocommons tool from seeing the files. Para is currently the only maintainer of geocommons (per). --BryanDavis (talk) 22:09, 30 June 2019 (UTC)
- BryanDavis, thanks. Had a reply today from Para over at commons:User_talk:Para#Geocommons/kml not showing most results; and have asked to be added as an additional maintainer at [2] in order to try and start debugging with a view to getting things fixed on a longer-term basis. —Sladen (talk) 20:01, 2 July 2019 (UTC)
Setting for tools per page on list?
Was wondering if there was a setting to increase the number of entries per page (and if so, where) on DSquirrelGM (talk) 03:17, 5 January 2020 (UTC)
- @DSquirrelGM: I don't think I made a secret page length setting in that app. The browsing functionality in toolsadmin was really only started and never made into a full featured system. lists them all in one giant page. --BryanDavis (talk) 04:03, 5 January 2020 (UTC)
- Added that to my list of bookmarks, thanks. DSquirrelGM (talk) 05:19, 5 January 2020 (UTC)
Revdels
I would recommend some for Special:Contributions/Vamdalise_Wikitech_on_WheeIs and Special:Contributions/Fu'erdai_vamdal, if for no other reason than w:en:WP:DENY. Koavf (talk) 01:34, 13 January 2020 (UTC)
deleting repository for tool
I decided not to use the repository I had previously set up for my tool, but I can't find anything about where exactly to run the remove destroy command from to delete the repository. What server and directory do I need to connect to to run that command? DSquirrelGM (talk) 05:37, 13 January 2020 (UTC)
- Or should I send in a ticket to have it removed by an administrator on Phabricator? DSquirrelGM (talk) 05:42, 13 January 2020 (UTC)
- DSquirrelGM, the repository is created on servers which are not directly user accessible. There is also currently no mechanism in to delete a repository. Leaving an empty repo laying around really does not cause any ongoing issues (a small amount disk on the Phabricator server is really on that is consumed). If having the repo listed really bothers you, your instinct to ask for admin help through a Phabricator ticket is the right one. We do not have a runbook for how to do this type of cleanup yet, so if you decided to make that task please add me (
@bd808on Phabricator) as a subscriber so I notice and can figure out the steps needed. --BryanDavis (talk) 18:39, 13 January 2020 (UTC)
- Guess I'll just leave it disabled then, unless you WANT to investigate a potential cleanup project for use later. DSquirrelGM (talk) 20:13, 13 January 2020 (UTC)
|
https://wikitech.wikimedia.org/wiki/User_talk:BryanDavis
|
CC-MAIN-2020-05
|
refinedweb
| 1,945
| 59.33
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.