The cumsum() function calculates the cumulative sum of values in a column of a pandas DataFrame. You can use the following syntax to calculate a reversed cumulative sum of values in a column:

df['cumsum_reverse'] = df.loc[::-1, 'my_column'].cumsum()[::-1]

This syntax adds a new column called cumsum_reverse to a pandas DataFrame that shows the reversed cumulative sum of values in the column titled my_column.

The following example shows how to use this syntax in practice.

Example: Calculate a Reversed Cumulative Sum in Pandas

Suppose we have the following pandas DataFrame that shows the total sales made by some store during 10 consecutive days:

import pandas as pd

#create DataFrame
df = pd.DataFrame({'day': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                   'sales': [3, 6, 0, 2, 4, 1, 0, 1, 4, 7]})

#view DataFrame
df

   day  sales
0    1      3
1    2      6
2    3      0
3    4      2
4    5      4
5    6      1
6    7      0
7    8      1
8    9      4
9   10      7

We can use the following syntax to calculate a reversed cumulative sum of the sales column:

#add new column that shows reversed cumulative sum of sales
df['cumsum_reverse_sales'] = df.loc[::-1, 'sales'].cumsum()[::-1]

#view updated DataFrame
df

   day  sales  cumsum_reverse_sales
0    1      3                    28
1    2      6                    25
2    3      0                    19
3    4      2                    19
4    5      4                    17
5    6      1                    13
6    7      0                    12
7    8      1                    12
8    9      4                    11
9   10      7                     7

The new column titled cumsum_reverse_sales shows the cumulative sum of sales starting from the last row. Here's how to interpret the values in the cumsum_reverse_sales column:

- The cumulative sum of sales for day 10 is 7.
- The cumulative sum of sales for day 10 and day 9 is 11.
- The cumulative sum of sales for day 10, day 9, and day 8 is 12.
- The cumulative sum of sales for day 10, day 9, day 8, and day 7 is 12.

And so on.

Additional Resources

The following tutorials explain how to perform other common tasks in pandas:

How to Sum Specific Columns in Pandas
How to Perform a GroupBy Sum in Pandas
How to Sum Columns Based on a Condition in Pandas
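The trick behind the syntax above can be sanity-checked without pandas: reverse the values, take a running total, then reverse the result. A minimal pure-Python sketch using the same sales figures as the example:

```python
from itertools import accumulate

sales = [3, 6, 0, 2, 4, 1, 0, 1, 4, 7]

# Reverse the list, take the running total, then reverse back.
cumsum_reverse = list(accumulate(sales[::-1]))[::-1]

print(cumsum_reverse)
# [28, 25, 19, 19, 17, 13, 12, 12, 11, 7]
```

This matches the cumsum_reverse_sales column in the example, which is exactly what `df.loc[::-1, 'sales'].cumsum()[::-1]` computes.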
https://www.statology.org/pandas-cumsum-reverse/
#include <unistd.h>

RETURN VALUE
On success, execveat() does not return. On error, -1 is returned, and errno is set appropriately.

ERRORS
The same errors that occur for execve(2) can also occur for execveat(). The following additional errors can occur for execveat():

EBADF
    dirfd is not a valid file descriptor.

EINVAL
    Invalid flag specified in flags.

ELOOP
    flags includes AT_SYMLINK_NOFOLLOW and the file identified by dirfd and a non-NULL pathname is a symbolic link.

ENOTDIR
    pathname is relative and dirfd is a file descriptor referring to a file other than a directory.

NOTES
A natural idiom when using execveat() is to set the close-on-exec flag on dirfd. (But see BUGS.)

SEE ALSO
execve(2), openat(2), fexecve(3)
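As a loose illustration of the EBADF case above (not part of the man page itself): Python's os.execve() accepts an already-open file descriptor in place of a path, which CPython services via fexecve(3) — on Linux, built on execveat(2) with AT_EMPTY_PATH. Descriptor 999 below is assumed not to be open:

```python
import errno
import os

# Attempting to exec through a descriptor that is not open fails with
# EBADF, matching the dirfd error described in the man page above.
try:
    os.execve(999, ["prog"], {})  # 999: assumed-unopened descriptor
    err = None  # never reached: a successful exec replaces the process
except OSError as e:
    err = e.errno

print(err == errno.EBADF)
```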
http://manpages.courier-mta.org/htmlman2/execveat.2.html
A few weeks ago, the R community went through some handwringing about plotting packages. For outsiders (like me) the details aren't that important, but some brief background might be useful so we can transfer the takeaways to Python. The competing systems are "base R", the plotting system built into the language, and ggplot2, Hadley Wickham's implementation of the grammar of graphics. For those interested in more details, check out the discussion itself. Of the takeaways, Item 2 is not universally agreed upon, and it certainly isn't true for every type of chart, but I'm going to use it as fact for now.

I'm not foolish enough to attempt a formal analogy here, like "matplotlib is Python's base R". But there's at least a rough comparison: like ggplot2, the combination of pandas and seaborn allows for fast iteration and exploration. You can quickly explore a dataset and transformations of that dataset. When you need to, you can "drop down" into matplotlib for further refinement.

Here's a brief sketch of the plotting landscape as of April 2016. For some reason, plotting tools feel a bit more personal than other parts of this series so far, so I feel the need to blanket this whole discussion in a caveat: this is my personal take, shaped by my personal background and tastes, on how to handle plotting in Python.

Matplotlib is an amazing project, and is the foundation of pandas' built-in plotting and of seaborn. Matplotlib handles everything from the actual drawing to the screen to APIs at several levels of abstraction. I've found knowing the pyplot API useful. You're less likely to need things like Transforms or Artists, but when you do, the documentation is there. I'll typically start with a pandas or seaborn plot, and then make adjustments with the pyplot API.

DataFrame and Series have a .plot namespace, with various chart types available (line, hist, scatter, etc.).
Pandas objects have additional metadata that can be used to enhance plots (the Index for a better automatic x-axis than range(n), or Index names as axis labels, for example). And since pandas has fewer backwards-compatibility constraints, it had somewhat better default aesthetics, though matplotlib is addressing this in matplotlib 2.0. At this point, I see pandas' DataFrame.plot as a useful exploratory tool for quick throwaway plots.

Seaborn, created by Michael Waskom, "provides a high-level interface for drawing attractive statistical graphics." Seaborn gives a great API for quickly exploring different visual representations of your data. We'll be focusing on that today.

Bokeh is a (still under heavy development) visualization library that targets the browser. Like matplotlib, Bokeh has a few APIs at various levels of abstraction. It has a glyph API, which I suppose is most similar to matplotlib's Artists API, for drawing single glyphs or arrays of glyphs (circles, rectangles, polygons, etc.). More recently, Bokeh introduced a Charts API for producing canned charts from data structures like dicts or DataFrames.
This is a (probably incomplete) list of other visualization libraries that I don't know enough about to comment on.

%load_ext rpy2.ipython

%%R
suppressPackageStartupMessages(library(ggplot2))
library(feather)
write_feather(diamonds, 'diamonds.fthr')

import feather
df = feather.read_dataframe('diamonds.fthr')
df.head()
df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 53940 entries, 0 to 53939
Data columns (total 10 columns):
carat      53940 non-null float64
cut        53940 non-null category
color      53940 non-null category
clarity    53940 non-null category
depth      53940 non-null float64
table      53940 non-null float64
price      53940 non-null int32
x          53940 non-null float64
y          53940 non-null float64
z          53940 non-null float64
dtypes: category(3), float64(6), int32(1)
memory usage: 2.8 MB

import bokeh.charts as bc
import bokeh.plotting as bk
from bokeh.plotting import figure
from bokeh.embed import components

Bokeh provides two APIs: a low-level glyph API and a higher-level Charts API.

fig = (df.assign(xy=df.x / df.y)
         .sample(n=500)
         .pipe(bc.Scatter, "xy", "price"))
bk.show(fig)

script, div = components(fig)
with open('../content/images/script.js', 'w') as f:
    f.write(script)
with open('../content/images/div.js', 'w') as f:
    f.write(div)

It's not clear to me where the scientific community will come down on Bokeh for exploratory analysis. The ability to share interactive graphics is compelling. Personally, I have so much inertia in matplotlib that I haven't switched to Bokeh for day-to-day exploratory analysis. I have greatly enjoyed Bokeh for building dashboards and web apps with Bokeh server. It's still young, and I've hit some rough edges. The Bokeh team is trying to bridge a tough space.

sns.set(context='talk', style='ticks')
%matplotlib inline

Since it's relatively new, I should point out that matplotlib 1.5 added support for plotting labeled data.
fig, ax = plt.subplots()
ax.scatter(x='carat', y='depth', data=df, c='k', alpha=.15)
plt.savefig('../content/images/mpl-scatter.png', transparent=True)

This isn't limited to just DataFrames. It supports anything that implements __getitem__ (square brackets) with string keys. The metadata in DataFrames gives somewhat better defaults on plots.

df.plot.scatter(x='carat', y='depth', c='k', alpha=.15)
plt.tight_layout()
plt.savefig('../content/images/pd-scatter.png', transparent=True)

We get axis labels from the column names. Nothing major, just nice. Pandas can be more convenient for plotting a bunch of columns with a shared x-axis (the index).

from pandas_datareader import fred

gdp = fred.FredReader(['GCEC96', 'GPDIC96'], start='2000-01-01').read()
gdp.rename(columns={"GCEC96": "Government Expenditure",
                    "GPDIC96": "Private Investment"}).plot(figsize=(12, 6))
plt.tight_layout()
plt.savefig('../content/images/vis-gdp.svg', transparent=True)

The rest of this post will focus on seaborn, and why I think it's especially great for exploratory analysis. I would encourage you to read seaborn's introductory notes, which lay out its design philosophy and goals. Some highlights:

Seaborn aims to make visualization a central part of exploring and understanding data. It does this through a consistent, understandable API. The plotting functions try to do something useful when called with a minimal set of arguments, and they expose a number of customizable options through additional parameters.

Which works great for exploratory analysis, with the option to turn that into something more complex if it looks promising. Some of the functions plot directly into a matplotlib Axes object, while others operate on an entire figure and produce plots with several panels. The fact that seaborn is built on matplotlib means that if you are familiar with the pyplot API, your knowledge will still be useful.
Most seaborn plotting functions (one per chart type) take x, y, hue, and data arguments (not all are required or used, depending on the plot type). If you're working with DataFrames, you'll pass in strings referring to column names for x and y, and the DataFrame itself for data.

sns.countplot(x='cut', data=df)
sns.despine()
plt.tight_layout()
plt.savefig('../content/images/vis-countplot.svg', transparent=True)

sns.barplot(x='cut', y='price', data=df)
sns.despine()
plt.tight_layout()
plt.savefig('../content/images/vis-barplot.svg', transparent=True)

Bivariate relationships can easily be explored, either one at a time:

sns.jointplot(x='carat', y='price', data=df, size=8, alpha=.25,
              color='k', marker='.')
plt.tight_layout()
plt.savefig('../content/images/vis-joinplot.png', transparent=True)

Or many at once:

g = sns.pairplot(df, hue='cut')
plt.savefig('../content/images/vis-pairplot.png', transparent=True)

pairplot is a convenience wrapper around PairGrid, and offers our first look at an important seaborn abstraction, the Grid. Seaborn Grids provide a link between matplotlib Figures with multiple Axes and features in your dataset. There are two main ways of interacting with Grids. First, seaborn provides convenience-wrapper functions like pairplot that have good defaults for common tasks. If you need more flexibility, you can work with the Grid directly by mapping plotting functions over each Axes.
x = df.select_dtypes(include=[np.number])
x[(x > x.quantile(.05)).all(1) & (x < x.quantile(.95)).all(1)]

34312 rows × 7 columns

def core(df, α=.05):
    # Keep only the rows where every numeric value falls inside the
    # central (1 - 2α) quantile band.
    mask = ((df > df.quantile(α)).all(1) &
            (df < df.quantile(1 - α)).all(1))
    return df[mask]

cmap = sns.cubehelix_palette(as_cmap=True, dark=0, light=1, reverse=True)

(df.select_dtypes(include=[np.number])
   .pipe(core)
   .pipe(sns.PairGrid)
   .map_upper(plt.scatter, marker='.', alpha=.25)
   .map_diag(sns.kdeplot)
   .map_lower(plt.hexbin, cmap=cmap, gridsize=20)
)
plt.savefig('../content/images/vis-pairgrid.png', transparent=True)
http://nbviewer.jupyter.org/gist/TomAugspurger/cd5197cef13d6103aba01abb4569d266
The Queued Tab:

So, as you can see, not much has changed on the Queued tab of the build explorer since VSTS 2008, but if you look closely, there are a few changes.

1. There is a new filter you can apply - "Only show builds requested by me". This filter limits the results to only those builds you directly caused by checking in or by manually queuing a build. This is really helpful when there are lots of builds for your team project and you only care about yours.

2. There is a new column on the left. If you hover over it, you see it's the "Reason" column. This shows you what caused the build to start. In the picture above, you can see two builds. The one with the icon was triggered by a check-in, and the other was manually queued.

3. All columns except the image columns are resizable. They start off sized to their data, but you can change that.

The Completed Tab:

Likewise, on the Completed tab not much has changed. In fact, it has the same changes as the Queued tab - a new filter, a new column, and resizable columns. They work the same here as on the Queued tab.

The picture above is of the Log View of the new Build Details View in Visual Studio Team System 2010. See my previous post on the Summary View for more information on the items above the words "Activity Log". The log view can be seen by opening an in-progress build like the one shown above, or by clicking the "View Log" link on a completed build. Here are some things to notice about this view:

1. There are links ("Next Error" and "Next Warning") to quickly jump to the first or next error or warning. The Log View can be quite long, so this should help you find the errors more quickly. Of course, the error messages will also show up on the Summary View, so you may not need to come here at all.

2. The "Show Property Values" link will expand the log view even further and show you all the property values that were logged for the build activities.
Because these values can greatly increase the size of the log, they are turned off by default.

3. On the right is a duration column. This shows you the duration of each build activity. If you are trying to speed up your builds, this information should help you determine which build activities are taking the longest. Note that the values roll up, so parent duration values are approximately the sum of their children's durations.

4. The data is presented in a hierarchy. This hierarchy maps perfectly to the build process template (more on that in a later post). This allows you to follow the path that the build took through the template and possibly correct problems with your custom templates. Note: showing the property values is important to understanding the flow.

5. The "play" icon in front of some of the lines indicates that those build activities are currently in progress. This is another way to follow along as a build moves through the process template logic. Note that a parent activity is considered in progress if any of its children are in progress.

6. Lastly, like the Summary View, there is a slider in the bottom right corner that allows you to zoom in or out on the log view.

I hope this gives you some more insight into the 2010 release!

As I mentioned back in January, I created a collapsible section for use in a flow document. In my case, I was removing a TreeView and replacing it with indented paragraphs in a flow document (see the previous post as to why). Some of the data that was now shown to the user all the time was rather useless in most cases. So, I had some suggestions to put back the tree. Well, I just didn't want to lose my beautiful new FlowDocument, so I started experimenting. I noticed that a Section can contain paragraphs or any other type of Block. Naturally, I wondered how hard it would be to create a section that only showed its contained children when some property was changed.
So, I wrote the code and this is how it turned out...

internal class CollapsibleSection : Section
{
    public CollapsibleSection()
    {
        CollapsibleBlocks = new List<Block>();
        Header = new Paragraph();
        m_expandCollapseToggleButton = new ToggleButton();
        m_expandCollapseToggleButton.Click += new RoutedEventHandler(ExpandCollapseToggleButton_Click);
        m_inlineUIContainer = new InlineUIContainer(m_expandCollapseToggleButton);
        m_inlineUIContainer.BaselineAlignment = BaselineAlignment.Center;
        m_inlineUIContainer.Cursor = Cursors.Arrow;
        Header.Inlines.Add(m_inlineUIContainer);
    }

    private void ExpandCollapseToggleButton_Click(object sender, RoutedEventArgs e)
    {
        Invalidate();
    }

    public Paragraph Header { get; private set; }

    public List<Block> CollapsibleBlocks { get; private set; }

    public bool IsCollapsed
    {
        get { return !(m_expandCollapseToggleButton.IsChecked ?? false); }
        set { m_expandCollapseToggleButton.IsChecked = !value; }
    }

    public void Invalidate()
    {
        Blocks.Clear();
        if (CollapsibleBlocks.Count == 0)
        {
            m_expandCollapseToggleButton.IsChecked = null;
        }
        Blocks.Add(Header);
        if (!IsCollapsed)
        {
            Blocks.AddRange(CollapsibleBlocks);
        }
    }

    private ToggleButton m_expandCollapseToggleButton;
    private InlineUIContainer m_inlineUIContainer;
}

As you can see, it was fairly easy to create this class. Not very much code at all. The class has a Header property that is the one paragraph that you see all the time and contains the toggle button. It also has a CollapsibleBlocks property that you can add any type of Block to. When the user clicks the button, the IsCollapsed property is toggled, and the Header and CollapsibleBlocks are used to modify the built-in Blocks property of the section. I left out all the work I did after the fact to make the toggle button look like a triangle and rotate when clicked, but you should be able to figure that stuff out the same way I did: by looking at the way a TreeView works.
And here's how I used the class:

CollapsibleSection section = new CollapsibleSection();
section.Margin = new Thickness(parentIndent, StandardPadding, 0, 0);
section.IsCollapsed = false;
. . .
section.Header.Inlines.Add(header);
. . .
section.CollapsibleBlocks.Add(new Paragraph(new Run("test paragraph")));
section.Invalidate();

I don't like having to invalidate the section after I have added everything, but I didn't want to spend a lot of time trying to figure out a clever way around it. I hope this gives you an example of just how extensible FlowDocuments are and why I like them so much!

As I promised, this is a post about what I learned while creating a flow document that had indented paragraphs. The hierarchy was dynamically built based on the data. The first thing I noticed is that indenting a paragraph is very easy: just set the left margin of the paragraph, and the entire paragraph will be indented. If you want just the first line of the paragraph indented, simply set the TextIndent property. In either case, the value is a double that represents WPF device-independent units. If you want the first line to be outdented (that is, you want a hanging indent), you have to set the left margin to the hanging indent value and then set the TextIndent to the negative of that value.

Another way to indent a paragraph is to add it to a section that has its left margin set. Sections don't have a TextIndent property, but a section can contain another section (unlike paragraphs). So, if you nest 3 sections that each have an indent of 20.0, the result is that the innermost section is indented 60.0 units from the left edge of the window. I didn't use this approach because I didn't have to. It was simple enough to indent each paragraph directly. So, that's a lot of talking without any code or pictures.
Here is some XAML to demonstrate indenting:

<FlowDocumentScrollViewer>
  <FlowDocument>
    <Section>
      <Paragraph>
        This paragraph is not indented in any way and should wrap normally.
        This paragraph is not indented in any way and should wrap normally.
        This paragraph is not indented in any way and should wrap normally.
      </Paragraph>
    </Section>
    <Section>
      <Paragraph TextIndent="50">
        The first line of this paragraph is indented by 50 units, but the rest of the lines will not indent at all.
        The first line of this paragraph is indented by 50 units, but the rest of the lines will not indent at all.
      </Paragraph>
    </Section>
    <Section>
      <Paragraph TextIndent="-50" Margin="50,20,0,0">
        The first line of this paragraph is not indented, but the rest of the lines will be indented by 50 units.
        The first line of this paragraph is not indented, but the rest of the lines will be indented by 50 units.
      </Paragraph>
    </Section>
    <Section>
      <Paragraph Margin="50,20,0,0">
        This paragraph is indented by 50 units, but the section is not indented at all.
        This paragraph is indented by 50 units, but the section is not indented at all.
        This paragraph is indented by 50 units, but the section is not indented at all.
      </Paragraph>
      <Paragraph Margin="100,0,0,0">
        This paragraph is indented by 100 units, but the section is not indented at all.
        This paragraph is indented by 100 units, but the section is not indented at all.
        This paragraph is indented by 100 units, but the section is not indented at all.
      </Paragraph>
    </Section>
    <Section Margin="50,20,0,0">
      <Paragraph Margin="0,0,0,0">
        This paragraph is not indented, but the section is indented by 50 units.
        This paragraph is not indented, but the section is indented by 50 units.
        This paragraph is not indented, but the section is indented by 50 units.
      </Paragraph>
      <Paragraph Margin="50,0,0,0">
        This paragraph is indented by 50 units, but the section is also indented by 50 units.
        This paragraph is indented by 50 units, but the section is also indented by 50 units.
        This paragraph is indented by 50 units, but the section is also indented by 50 units.
      </Paragraph>
    </Section>
    <Section Margin="50,20,0,0">
      <Section Margin="50,0,0,0">
        <Paragraph Margin="0,0,0,0">
          This paragraph is not indented, but it is in a nested section that is indented by 50 units and the outer section is also indented by 50 units - that makes 100 units total for this paragraph.
        </Paragraph>
      </Section>
    </Section>
  </FlowDocument>
</FlowDocumentScrollViewer>

What is a stretching TreeView? Recently, I found the need to have a TreeView in my WPF application that was only a few levels deep. I didn't want a horizontal scrollbar to appear, and I wanted the long text nodes to wrap. So, I invented the StretchingTreeView control.

Why couldn't I do this with a normal WPF TreeView? Well, you can, but you have to replace the ControlTemplate for all the TreeViewItems. That's either a lot of XAML or a lot of code.

So, what exactly is the problem? If you look at the default ControlTemplate for a TreeViewItem, you will see that it contains a grid with 3 columns. In the first column is the expander, in the second column is the Header of the TreeViewItem, and in the last column is nothing. The second column is set to a Width of Auto, which means that it will size to its contents. The third column is set to a Width of Star, which means it will grow and shrink with the width of the TreeView. No matter what you put in the Header of the TreeViewItem, it will never stretch to the right edge of the TreeView. And since the Header column is set to Auto, it will never cause a TextBlock or anything else to wrap. Instead, if you turn off the horizontal scrollbar, your long text items simply get clipped at the right edge of the TreeView.

So, what's the code to fix it? Well, it was actually very simple, once I found out that you can control what kind of controls a TreeView creates for its items.
By subclassing TreeView and overriding the methods GetContainerForItemOverride and IsItemItsOwnContainerOverride, you can control what types are created for your tree. This is especially important if you are databinding your tree to some hierarchy of objects and can't just create TreeViewItems directly. In any case, here is the code...

class StretchingTreeView : TreeView
{
    protected override DependencyObject GetContainerForItemOverride()
    {
        return new StretchingTreeViewItem();
    }

    protected override bool IsItemItsOwnContainerOverride(object item)
    {
        return item is StretchingTreeViewItem;
    }
}

class StretchingTreeViewItem : TreeViewItem
{
    public StretchingTreeViewItem()
    {
        this.Loaded += new RoutedEventHandler(StretchingTreeViewItem_Loaded);
    }

    private void StretchingTreeViewItem_Loaded(object sender, RoutedEventArgs e)
    {
        // The purpose of this code is to stretch the Header Content all the way across the TreeView.
        if (this.VisualChildrenCount > 0)
        {
            Grid grid = this.GetVisualChild(0) as Grid;
            if (grid != null && grid.ColumnDefinitions.Count == 3)
            {
                // Remove the middle column, which is set to Auto, and let it get replaced
                // with the last column, which is set to Star.
                grid.ColumnDefinitions.RemoveAt(1);
            }
        }
    }

    protected override DependencyObject GetContainerForItemOverride()
    {
        return new StretchingTreeViewItem();
    }

    protected override bool IsItemItsOwnContainerOverride(object item)
    {
        return item is StretchingTreeViewItem;
    }
}

Basically, all I do is create my own TreeViewItems that wait until they are loaded and then fix the Grid by removing the middle column. I don't like the fix, because it seems like a hack, but it's hardly any code. And that I like. I hope this helps somebody out there!

In a previous post (long, long ago), I described some scenarios around why you would want to modify the work item tracking subscription to the build completion event. The purpose there was to help users "correct" the subscription that came out of the box.
In this post, I would like to answer a question I got from a user about how to subscribe to the other build event - BuildStatusChangeEvent. The question was basically this: how do I get notified when the build quality changes from 'X' to 'Y'?

First, let me say that the name of this event is all wrong. It does not fire when the status of a build changes, but rather when the quality field of the build changes. In the future, it may do more, but not for now. The name aside, it is important to look at the structure of the XML that is sent when the event fires. It is actually very short. Here is an example:

<>

The important bit is under the StatusChange node, which also makes it a little more interesting than the last post that I did. To filter this event in any useful way requires you to know a little more about XPath - or at least how the "path" to the OldValue and NewValue fields is formed. It's actually pretty simple: ignore the root node BuildStatusChangeEvent, and then use a slash between each node that you need to reference in the path. For example, if you want to filter on the NewValue field, you might use the filter expression 'StatusChange/NewValue'='Ready for Deployment'.

Finally, here are some scenarios that you might find yourself in: And here are the filters that correspond to those scenarios:

The first and last are exactly what you might expect, but the second filter was quite a challenge. It turns out that when OldValue or NewValue is null, it is not included in the XML at all. So, to see if NewValue was set to null, we have to get the StatusChange nodes that have a count of NewValue nodes equal to zero. If any StatusChange nodes are returned, then that half of the filter is true. The other half is just the opposite: we want to make sure that the StatusChange node has at least 1 OldValue field. The syntax of the XPath query is a little hard to get right, but it does exactly what we want.
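The count()-based filter logic described above can be sanity-checked outside TFS. This is only an illustration: the payload below is a hand-written stand-in following the element structure described in this post, and Python's standard-library ElementTree is used with len() standing in for XPath's count():

```python
import xml.etree.ElementTree as ET

# Hypothetical payload: the quality was cleared, so there is an
# OldValue element but no NewValue element at all.
event_xml = """
<BuildStatusChangeEvent>
  <StatusChange>
    <OldValue>Ready for Deployment</OldValue>
  </StatusChange>
</BuildStatusChangeEvent>
"""
root = ET.fromstring(event_xml)

# Mirrors the filter described above: count(StatusChange/NewValue) = 0
# and the StatusChange node has at least one OldValue element.
quality_cleared = (
    len(root.findall("StatusChange/NewValue")) == 0
    and len(root.findall("StatusChange/OldValue")) >= 1
)
print(quality_cleared)
```

Because a null OldValue or NewValue is simply absent from the XML, testing element counts like this is the only reliable way to distinguish "set to null" from "set to some value".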
Hopefully, this will help you with this and other TFS events! I have attached the code that I used to add these subscriptions to my server. It is very similar to the code in my previous post.

(or Designing Your Workflow Activity Class Hierarchy)

One of the questions that we've had to consider when designing our Workflow Activity class hierarchy is whether it is valuable to have our own Activity base class (derived from Activity and inherited by all of our activity classes). Our answer was "Yes". We decided that there were several good reasons to create our own Activity base class. Here are the reasons and some explanation.

Our base activity class gives us all this, but here's the rub... There are actually two Workflow Activity base classes. If you want to create a composite activity (one that contains other activities), you have to derive that class from CompositeActivity instead of Activity. This is extremely unfortunate for us, because there were several activities that we wanted to create that were composite activities, and this means that they couldn't derive from our other base class. So, we had to re-implement some of the base class's functionality in our composite activity class. This is why, when I am designing a tree structure, I try not to make a separate base class for the tree nodes and the leaf nodes: it screws up inheritance, polymorphism, and a few other fancy OO words I can't recall. Keep this gotcha in mind when you start planning your Activity class hierarchy!

In my previous blog post (Creating an Asynchronous Workflow Activity), I explained why your custom activities should either be really fast or run asynchronously. But I didn't give you a real-world example of how to do this. In this post, I provide an example of the pattern that my team uses when creating a custom activity.
Pattern:
1) Create a service that actually does the work.
2) Create a custom activity that uses the service and has a queue-listener event handler to get a message back from the service when the work is done.

Instead of walking you through this one, I just want to present you with the code, commented as much as I could :)

public class DoSomethingAsyncService : WorkflowRuntimeService
{
    private Random m_random;

    public DoSomethingAsyncService()
    {
        m_random = new Random();
    }

    public void DoSomethingAsync(Guid instanceId, Guid queueId)
    {
        ThreadPool.QueueUserWorkItem(delegate(Object state)
        {
            // Fake a call to a WebService (let's say it takes 10 seconds)
            Console.WriteLine("Making webservice call.");
            Thread.Sleep(10000);

            // Create the QueueItem
            QueueItem item = new QueueItem();
            item.ReturnCode = m_random.Next(0, 10);
            item.Message = String.Format("The return code is {0}.", item.ReturnCode);

            try
            {
                // Now send this item to the appropriate queue
                WorkflowInstance instance = Runtime.GetWorkflow(instanceId);
                instance.EnqueueItem(queueId, item, null, null);
            }
            catch (InvalidOperationException)
            {
                // Catch any InvalidOperationExceptions that occur due to the
                // workflow having already completed, etc.
            }
        });
    }
}

public class QueueItem
{
    public int ReturnCode;
    public String Message;
}

public class DoSomethingAsyncActivity : Activity
{
    private Guid m_queueId;

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        // Create my queue
        m_queueId = Guid.NewGuid();
        WorkflowQueuingService queuingService =
            (WorkflowQueuingService)executionContext.GetService(typeof(WorkflowQueuingService));
        WorkflowQueue queue = queuingService.CreateWorkflowQueue(m_queueId, true);

        // Hook the item available event. This event will be triggered when our
        // service enqueues an item.
        queue.QueueItemAvailable += new EventHandler<QueueEventArgs>(QueueItemAvailable);

        // Call the service to perform the async work.
        DoSomethingAsyncService service =
            (DoSomethingAsyncService)executionContext.GetService(typeof(DoSomethingAsyncService));
        service.DoSomethingAsync(WorkflowInstanceId, m_queueId);

        // Return Executing to let the workflow know that we are still working.
        return ActivityExecutionStatus.Executing;
    }

    private void QueueItemAvailable(object sender, QueueEventArgs e)
    {
        // The activity execution context is always passed in as the sender.
        ActivityExecutionContext context = sender as ActivityExecutionContext;

        // Checking for a null context just in case something really strange is happening.
        if (context != null)
        {
            // Get the queue service so we can get the Queue object.
            // Note: you cannot salt away the actual queue object; the Workflow runtime does not allow it.
            WorkflowQueuingService queuingService = context.GetService<WorkflowQueuingService>();

            // Make sure the queue exists before going any further.
            if (queuingService.Exists(m_queueId))
            {
                WorkflowQueue queue = queuingService.GetWorkflowQueue(m_queueId);

                // Dequeue the item that was queued by the service.
                QueueItem item = queue.Dequeue() as QueueItem;
                if (item != null)
                {
                    // We only expect one thing to ever be put into our Queue;
                    // so, now we remove the queue.
                    queue.QueueItemAvailable -= QueueItemAvailable;
                    queuingService.DeleteWorkflowQueue(m_queueId);

                    // Finally inspect the results and do something with them.
                    Console.WriteLine(item.Message);

                    // Once we have finished our mission, we close this activity.
                    context.CloseActivity();
                    return;
                }
            }
        }

        // We should not get here if everything works as we expect.
        // So, throw an exception so the workflow will be terminated.
        throw new Exception("Something unexpected was placed in the Queue.");
    }
}

Try creating a workflow that contains a Parallel activity and add this activity to both branches. If you really want to run activities in parallel, this is how it needs to be done. Don't forget to add the service to the runtime! Happy flowing!

What a silly topic for a blog post!
Workflow already runs my whole activity tree on another thread; I don't have to worry about being async, do I? Yes, you do. While it is true that each workflow gets its own thread (in the default setup), that thread can easily be blocked by a long-running or hung activity. Let's look at an example...

Create a Sequential Workflow Console application in Visual Studio with the following activity tree:

    Sequential Workflow
      WhileActivity (set the condition to true so it runs forever)
        CodeActivity (the code should just do a Console.WriteLine("Hello World"))

Now go to the Program.cs file, or wherever the code is that starts the workflow instance. Add a Thread.Sleep(1000) after the instance.Start() call and then an instance.Terminate("kill workflow") to stop the workflow. When you run the workflow, you should notice that you get a few "Hello World" lines and one "kill workflow" line. Here, Terminate works just as you expect it to.

But now let's change our workflow so that the while is done in the code activity...

    CodeActivity (the code should contain while(true) Console.WriteLine("Hello World"))

When you run the workflow this time, it will never end :( If you put a breakpoint on the instance.Terminate line, it does get called. So, why doesn't the workflow terminate? Well, like all other actions that happen in the workflow instance, the terminate action goes through a queue and waits for the background thread to dequeue it and act on it. Unfortunately, the queue is only processed in between activities executing. So, in our example the termination request remains in the workflow queue forever.

If you are running complex workflows that may need to actually be stopped in the middle (like we do), then you need to avoid code activities that have loops, and start writing asynchronous custom activities.

So, how do we write a custom activity so that it is stoppable? The first thing you have to do is create a workflow runtime service that can do the actual work for the activity. This means that for every custom activity you need a new custom service, or at least a new method on your custom service. Here's a sample service:

    public class HelloWorldService : WorkflowRuntimeService
    {
        public void WriteHelloWorldForeverAsync()
        {
            ThreadPool.QueueUserWorkItem(delegate(Object state)
            {
                while (true)
                    Console.WriteLine("Hello World");
            });
        }
    }

This service will now do the work for us. But to use it, we need a custom activity. Here's the simplest custom activity you could make:

    public class WriteHelloWorldActivity : Activity
    {
        protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
        {
            HelloWorldService service =
                (HelloWorldService)executionContext.GetService(typeof(HelloWorldService));
            service.WriteHelloWorldForeverAsync();
            return ActivityExecutionStatus.Executing;
        }
    }

The key thing to note here is that the activity calls a method on the service that returns immediately after starting its background thread. The activity then returns Executing. This allows the workflow to dequeue items from the queue, but not move on to the next activity. Until this activity returns Closed, the next activity will not start.

Oops! Before you try to run this, you need to create your service, add it to the runtime, and add your custom activity to the workflow. In Program.cs, add the following line right after you create the workflow runtime:

    workflowRuntime.AddService(new HelloWorldService());

Then change the workflow to look like this:

    WriteHelloWorldActivity

Now you can run it and see that Terminate works again! Of course, in a real application you need more code than this, because you have to shut down the background thread when you get terminated, and you need to call back to your activity when the background work is finished! Tune in for my next post, where I will show you a real example of an activity and service that do it all :)
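The underlying issue generalizes beyond Windows Workflow Foundation: a control message (like Terminate) can only be honored if the busy code periodically yields to whatever processes that message. As a purely illustrative sketch (Python rather than C#, and no WF APIs here), this is the stoppable-background-work shape the posts above describe:

```python
import threading
import time

# plays the role of the Terminate request sitting in the workflow queue
stop_requested = threading.Event()

def spin_until_stopped():
    # A busy loop that stays stoppable by checking the event on every pass --
    # the moral equivalent of letting the workflow queue be processed
    # between activity executions.
    while not stop_requested.is_set():
        pass

worker = threading.Thread(target=spin_until_stopped)
worker.start()
time.sleep(0.05)        # let it run briefly, like the Thread.Sleep(1000) above
stop_requested.set()    # "instance.Terminate(...)"
worker.join(timeout=5)
print("terminated:", not worker.is_alive())  # prints: terminated: True
```

If the loop never checked `stop_requested`, the `join` would time out and the worker would spin forever, which is exactly the hung `while(true)` CodeActivity scenario.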
http://blogs.msdn.com/jpricket/
I’m hoping I’m just rigging incorrectly and someone will be able to recognize the problem and point me in the right direction. I’m trying to bring XGen Interactive Grooming Splines into my pipeline, and I’m having some trouble getting it to animate consistently.

NOTE: I’m only having this problem with my scalp grooms. Smaller grooms like brows are animating correctly by applying a Geo Cache to the XGen description’s host mesh, which has been obtained separately by caching animation on a duplicate mesh that is wrapped to the animated character mesh. I think the problem has to do with the fact that the scalp grooms are created using Guides from existing curves. The working grooms don’t use a Guides modifier.

The problem arises once animation has been applied to the mesh hosting the XGen description. Visually, the erroneous ‘popping’ always looks the same and is reproducible. The scalp mesh has clean history, no transformations, etc. I have attempted several rigging configurations on the scalp mesh with no joy. They are as follows:

- apply Geo Cache to host mesh, as described above
- imported animated character rig (meshes + skeleton) into groom scene with namespaces and used a Wrap deformer directly in the scene
- imported Alembic cache version of animated character mesh, used a Wrap deformer
- imported animated character skeleton only, skinned XGen host mesh to character skeleton head joint
- imported animated character skeleton only, applied parent constraint to scalp mesh driven by head joint

None of the above methods were successful at totally eliminating the wonky animation. Attached are images showing what’s going on. The first image is the groom; the second is what the Guides look like at frame 60 using any of the above rigging methods. Thanks in advance for any help!!
http://forums.cgsociety.org/t/maya-xgen-interactive-groom-splines-animation-issue/1921376
I’m trying to scrape through Python; I already tried bs4, selenium, requests, and helium (including classes, tags, XPath, etc.). Though I never had this kind of issue before, maybe I’m doing something wrong. I just can’t take this text value: G120-5, shown in the image below:

Link:

If someone can do it, please let me know. Thanks in advance 🙂

Answer

The data you see is loaded from an external URL. You can use the requests module to simulate it:

    import json
    import requests

    # url = ""
    car_id = "100550"  # <-- this is the id from URL
    api_url = (
        f"{car_id}/dynamic"
    )

    headers = {
        "kavak-country-id": "76",
        "kavak-region-id": "4",
        "kavak-subsidiary-id": "3",
    }

    data = requests.get(api_url, headers=headers).json()

    # uncomment to print all data:
    # print(json.dumps(data, indent=4))

    print(data["data"]["coordinate"])

Prints:

    G120-5
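Beyond this specific site, a small helper for walking nested JSON defensively can save time while you hunt for the right field. The payload below is made up for illustration (only the "coordinate" value mirrors the answer above):

```python
# A hypothetical payload standing in for an API response; only "coordinate"
# echoes the real example -- the rest is invented for demonstration.
payload = {"data": {"coordinate": "G120-5", "extra": {"color": "red"}}}

def dig(obj, *keys, default=None):
    """Walk nested dicts, returning default if any key along the path is missing."""
    for key in keys:
        if not isinstance(obj, dict) or key not in obj:
            return default
        obj = obj[key]
    return obj

print(dig(payload, "data", "coordinate"))    # G120-5
print(dig(payload, "data", "missing", "x"))  # None (no KeyError raised)
```

This makes it safe to probe `data["data"]["..."]`-style paths while exploring an unfamiliar response with `json.dumps(data, indent=4)`.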
https://www.tutorialguruji.com/python/web-scraping-cant-take-a-text-value/
This example shows how to create an Excel file in C# and VB.NET using Spire.XLS, and how to add some string values to the first worksheet.

[C#]

    using System.Drawing;
    using Spire.Xls;

    namespace Create_Excel_File
    {
        class Program
        {
            static void Main(string[] args)
            {
                //Initialize a new Workbook object
                Workbook workbook = new Workbook();

                //Get the first worksheet
                Worksheet sheet = workbook.Worksheets[0];

                //Add some string values to the worksheet
                sheet.Range["A1"].Text = "Hello";
                sheet.Range["B1"].Text = "World";

                //Save the Excel file
                workbook.SaveToFile("Sample.xls");
            }
        }
    }

[VB.NET]

    Imports System.Drawing
    Imports Spire.Xls

    Namespace Create_Excel_File
        Class Program
            Private Shared Sub Main(args As String())
                'Initialize a new Workbook object
                Dim workbook As New Workbook()

                'Get the first worksheet
                Dim sheet As Worksheet = workbook.Worksheets(0)

                'Add some string values to the worksheet
                sheet.Range("A1").Text = "Hello"
                sheet.Range("B1").Text = "World"

                'Save the Excel file
                workbook.SaveToFile("Sample.xls")
            End Sub
        End Class
    End Namespace

The Created Excel File

After executing the code above, the Excel file is created and some string values are added in the first worksheet, "Sheet1". Please see the screenshot below:
http://www.e-iceblue.com/Tutorials/Spire.XLS/Spire.XLS-Program-Guide/Create-Excel-File-in-C-VB.NET.html
In our introduction to functions, we casually introduced something that is quite odd. Take a look at the following.

    def sample_function():
        words = 'function body'

    words

    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-2-993ed7d20d5f> in <module>()
    ----> 1 words

    NameError: name 'words' is not defined

Somehow, our words variable is suddenly inaccessible. Python has various rules about when and how to access variables and data throughout a file. As we'll see in this lesson, accessibility of a variable depends on whether that variable is a global variable or a local variable. We will also see how a function's return statement allows us to send data inside of a function into global scope.

Before our introduction to functions in the recent lesson, we always had access to any variable we had declared in Python.

    number = 1
    number

We weren't thinking about it, but we were operating in the global scope. Whenever we declare a variable that is not declared inside of a function, we are operating in global scope. This means that the variable is available anywhere in the current file. Accordingly, we can access the variable from outside of a function...

    number

...or inside of a function.

    def access_to_globals():
        # we can access global variables from inside of our function
        return number

    access_to_globals()

Global variables are a privileged bunch. Once declared, they can be referenced either inside or outside of a function. Local variables are resigned to a different fate than global variables. Below, the variable trapped is a local variable.

    def locals_stay_local():
        trapped = 'not escaping'

    locals_stay_local()

Unlike our global variable number, trapped is first declared from inside a function, making it a local variable.

    trapped

    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-4-956155b55e40> in <module>()
    ----> 1 trapped

    NameError: name 'trapped' is not defined

Because the variable trapped is declared inside a function, it can only be referenced from that function. Believe it or not, this is a helpful feature. By declaring a local variable, we know that we only have to pay attention to that variable from inside the body of the function. We do not have to search our file to see what that variable may have been assigned or reassigned to. And from inside of the function, we can use that variable to make our code more expressive.

    def no_return_full_name():
        first_name = 'bob'
        last_name = 'smith'
        full_name = first_name + ' ' + last_name

Of course we want our function to have some impact outside of itself. To do that, we use a return statement. Let's execute our function no_return_full_name.

    no_return_full_name()
    full_name

If you press shift + enter on the two code blocks above, you have executed our function and then tried to reference the variable full_name. However, because the variable full_name is local, it is only available from inside of the function. That's not very useful. So let's write another function called return_full_name that has a return statement.

    def return_full_name():
        first_name = 'bob'
        last_name = 'smith'
        full_name = first_name + ' ' + last_name
        return full_name

    return_full_name()

    'bob smith'

Now the string 'bob smith' is returned from the function. Notice that full_name is still not available globally.

    full_name

    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-5-1ef4381f016d> in <module>()
    ----> 1 full_name

    NameError: name 'full_name' is not defined

However, we did throw the variable's value over the wall, and if we wish to use it with more code, we can. For example, we can combine the return value with another expression:

    'Hello ' + return_full_name()

Or we can assign the return value to a global variable, and access it from that variable.

    a_fine_name = return_full_name()
    a_fine_name

    'bob smith'

So variables declared inside of a function are still only available locally. However, by using a return statement we can throw the data over the wall of the function and into global scope. Another thing to note about a return statement is that once a function reaches the return statement, no other lines of the function are executed.

    def return_statements():
        print("this is executed")
        return "this is the function's output"
        print("this is not executed")

    return_statements()

    this is executed
    "this is the function's output"

So, return statements are an important feature of functions, as they terminate the execution of a function and throw the result of the function into global scope.

In this section we learned about scope. We saw how when we declare a variable outside of a function, we are declaring that variable in global scope. This means that the variable is available throughout the file it is declared in - inside and outside of functions. Variables declared inside of functions are local variables and are available to be referenced from inside of the function in which they are declared. However, we can access the local variable's value outside of the function by using the return keyword at the end of our function. The combination of local variables and returning specific data allows us to encapsulate our code inside of a function and explicitly state what should be returned, and thus accessible from outside of the function.
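To tie the pieces together, here is one more runnable sketch (not from the original lesson, just a recap of the same ideas): a global variable read inside a function, a local variable created there, and a return statement that hands the result back out.

```python
greeting = 'Hello'  # global variable: visible everywhere in this file

def build_message(name):
    punctuation = '!'  # local variable: only visible inside this function
    # the return statement throws the result "over the wall" into global scope
    return greeting + ' ' + name + punctuation

message = build_message('bob')
print(message)  # Hello bob!

# the local variable did not escape the function:
try:
    punctuation
except NameError:
    print('punctuation is local')  # this line runs
```

The global `greeting` is readable from inside `build_message`, the local `punctuation` raises a NameError outside it, and only the returned value reaches the `message` variable.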
https://learn.co/lessons/python-scope-readme
Opened 9 years ago
Closed 8 years ago

#2922 closed defect (worksforme)

Parent module 'tracrpc' not loaded error

Description

I'm new to Python and Trac, so the answer to this may be obvious. My env. is a standalone installation on WinXP SP2, Trac 0.11b2; tracd is started with BASIC_AUTH. My goal is to link with Mylyn/Eclipse. To install the plugin I called:

    c:\python25\Scripts\easy_install D:\temp\tracxmlrpc\xmlrpcplugin\0.10

The error is triggered by settings in trac.ini. I always get the error with tracrpc.* enabled. I've tried e.g. selecting RPC combinations in the webadmin interface. On the tracd console (stderr/stdout) I get:

    20:09:54 Trac[env] INFO: Reloading environment due to configuration change
    20:09:54 Trac[loader] DEBUG: Loading TracXMLRPC from c:\python25\lib\site-packages\tracxmlrpc-0.1-py2.5.egg
    20:09:54 Trac[loader] ERROR: Skipping "TracXMLRPC = tracrpc": (can't import "No module named Search")

As soon as I select XMLRPCWeb (this equates to "tracrpc.web_ui.xmlrpcweb = enabled" in trac.ini), Trac disappears and instead in the browser I get:

    Traceback (most recent call last):
      File "c:\Python25\lib\site-packages\trac\web\api.py", line 339, in send_error
        'text/html')
      File "c:\Python25\lib\site-packages\trac\web\chrome.py", line 672, in render_template
        template = self.load_template(filename, method=method)
      File "c:\Python25\lib\site-packages\trac\web\chrome.py", line 648, in load_template
        self.templates = TemplateLoader(self.get_all_templates_dirs(),
      File "c:\Python25\lib\site-packages\trac\web\chrome.py", line 402, in get_all_templates_dirs
        dirs += provider.get_templates_dirs()
      File "build\bdist.win32\egg\tracrpc\web_ui.py", line 76, in get_templates_dirs
        from pkg_resources import resource_filename
    SystemError: Parent module 'tracrpc' not loaded

My hunch is that I need xmlrpcplugin\0.11? All help appreciated.
Mike

Attachments (0)

Change History (11)

comment:1 follow-up: 4 Changed 9 years ago

comment:2 Changed 9 years ago

Tried the xmlrpcplugin\trunk by calling (for Python novices like me)

    c:\python25\Scripts\easy_install D:\temp\tracxmlrpc\xmlrpcplugin\trunk

and it worked! I found the tips at to be helpful (section 4, "The Fix").

comment:3 Changed 9 years ago

Ooops, I forgot to say THANKS!

comment:4 Changed 9 years ago

> You should try xmlrpcplugin\trunk instead of xmlrpcplugin\0.10

I tried that alternative, but still the same error on Trac rc0.11.

comment:5 follow-up: 6 Changed 9 years ago

comment:6 Changed 9 years ago

comment:7 Changed 9 years ago

I found a way around:

- build TracXMLRPC-0.1-py2.5.egg (xmlrpcplugin\1.0)
- build TracXMLRPC-1.0.0-py2.5.egg (xmlrpcplugin\trunk)
- copy both into plugins
- restart service
- works

Mike

comment:8 Changed 9 years ago

Yes, I can confirm this works! If you build both .eggs (one in 0.10 and one in trunk) they will be copied into the plugin folder anyway. So all that is needed is to restart Trac, and voilà!

comment:9 Changed 8 years ago

Using the trunk works with 0.11.

comment:10 Changed 8 years ago

Trunk works out of the box with Trac 0.11, Python 2.5, Apache 2.2.11/mod_python, Eclipse Ganymede (3.4.1) and Mylyn 3.0.

comment:11 Changed 8 years ago

Yup. Using Trac 0.11 with the XML-RPC plugin from trunk should not be a problem. Closing.

Hi, I had the same problem. The problem came from this:

    c:\python25\Scripts\easy_install D:\temp\tracxmlrpc\xmlrpcplugin\0.10

You should try xmlrpcplugin\trunk instead of xmlrpcplugin\0.10

Adrien
https://trac-hacks.org/ticket/2922
Copy a fixed-length string and return a pointer to the end of the result

    #include <string.h>

    char * stpncpy( char * restrict dst,
                    const char * restrict src,
                    size_t num );

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

The stpncpy() function copies no more than num characters from the string pointed to by src into the array pointed to by dst. If the string pointed to by src is shorter than num characters, null characters are appended to the copy in the array pointed to by dst, until num characters in all have been written. If the string pointed to by src is longer than num characters, then the result isn't terminated by a null character.

If a NUL character is written to the destination, stpncpy() returns the address of the first such NUL character. Otherwise, it returns &dst[num].
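The return-value rule is the part that distinguishes stpncpy() from strncpy(). As an illustration only (Python rather than C, with the returned pointer modeled as an index into the destination buffer), the semantics can be sketched like this:

```python
def stpncpy_model(src: str, num: int):
    """Model stpncpy's behavior on a fresh destination buffer of length num.

    Returns (buffer, end_index), where end_index stands in for the returned
    pointer: the position of the first NUL written, or num if src was too
    long for a NUL to be written at all.
    """
    copied = src[:num]                      # copy at most num characters
    buffer = copied + '\0' * (num - len(copied))  # pad with NULs if src is short
    # If any NUL was written, the "pointer" lands on the first one;
    # otherwise it is &dst[num], i.e. index num.
    end_index = len(copied) if len(copied) < num else num
    return buffer, end_index

print(stpncpy_model("abc", 6))     # ('abc\x00\x00\x00', 3) -- padded, points at first NUL
print(stpncpy_model("abcdef", 3))  # ('abc', 3) -- not NUL-terminated, points past the end
```

The second case is the one that often surprises people: just as with strncpy(), a too-long source leaves the destination without a terminating NUL.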
http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/s/stpncpy.html
Cross Platform C#

SignalR is well-known by .NET developers as a way to handle communications. Find out what it can do in your iOS and Android apps, too.

SignalR has been taking the .NET world by storm, so there's a good chance you've seen it in action by now, or at least heard something about it. For those who haven't, SignalR is a library that makes it extremely easy to add real-time communication to your applications. This means your servers and their clients can push data back and forth to each other in real time. This opens up a whole new world of possibilities for applications that simply weren't feasible in the past, and allows you to interact with your users on a whole new level.

There are many different mechanisms under the hood for achieving this type of communication (depending on what the client supports), but SignalR abstracts all that away for you. Instead of dealing with that, you're provided with a simple API to develop against, letting you get started quickly and concentrate on the fun parts when building your applications. The transport mechanisms currently supported by SignalR, in order of preference, are:

- WebSockets
- Server-Sent Events
- Forever Frame
- Long Polling

WebSockets is the most efficient mechanism, because it supports full two-way communication and is supported by most modern Web browsers; SignalR, however, will degrade automatically to the best option supported by the client. If you're running SignalR on top of IIS, it's worth noting that to take advantage of WebSockets, you need to be running at least IIS 8.

It's also important to remember that clients of SignalR applications aren't restricted to just Web browsers. SignalR provides client libraries for .NET, JavaScript, Silverlight, Windows Phone, Windows RT and even iOS and Android through Xamarin. To make the deal even sweeter, all client libraries based on C# -- all but the JavaScript client -- share a lot of code and provide a common API.
This means that you can write SignalR client code that's portable across many different platforms, which is very powerful. To demonstrate, I'm going to build a simple application that shares its SignalR client code across iOS and Android. I know it's a little bit of a cliché to demonstrate a real-time application by showing a chat application, but bear with me on this one. Seeing how little code is needed to achieve something so powerful is what makes SignalR compelling. It's also worth noting that when working with the iOS and Android clients, the best transport mechanism is currently Server-Sent Events.

The Server Side

For the server-side component, I'll use a simple ASP.NET Web site to host SignalR. In Visual Studio, create a new empty ASP.NET project. To add SignalR to the project, simply add it through NuGet; everything else is taken care of. Next, all I need to do is create a simple chat endpoint for clients to connect to. To do that, add a new class named Chat to the project:

    using Microsoft.AspNet.SignalR;

    namespace ChatDemo
    {
        public class Chat : Hub
        {
            public void Send(string platform, string message)
            {
                Clients.All.messageReceived(platform, message);
            }
        }
    }

As you can see, there really isn't much code in here at all. In fact, most of those lines are just curly braces and ceremony, and there's only one line that actually requires a semicolon. In SignalR, a hub is a high-level API in the pipeline that provides an easy way to create a communication endpoint. It's possible to get more granular and closer to the metal if you need a higher level of customization in your applications, but for most uses, hubs take care of a lot of the details and keep things nice and easy. In this case, I'll create a hub named Chat that exposes a method named Send to all of its clients. When that method's invoked, it triggers a messageReceived method to be invoked on any client that's listening.
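Stripped of the networking, the hub's job is simply "broadcast to every registered client." A toy, in-process sketch of that pattern (Python here, purely illustrative -- real SignalR does this over the wire with the transports listed earlier):

```python
# A minimal in-process stand-in for the Chat hub: clients register a
# callback, and send() invokes messageReceived on all of them.
class ChatHub:
    def __init__(self):
        self.clients = []

    def register(self, message_received):
        self.clients.append(message_received)

    def send(self, platform, message):
        # Clients.All.messageReceived(platform, message), in miniature
        for message_received in self.clients:
            message_received(platform, message)

received = []
hub = ChatHub()
hub.register(lambda p, m: received.append(f"{p}: {m}"))
hub.register(lambda p, m: received.append(f"second client saw {p}: {m}"))
hub.send("iOS", "hello")
print(received)  # one entry per registered client
```

The key property mirrored here is fan-out: one Send() call reaches every connected client, which is exactly what makes the chat demo below work.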
The hubs API takes advantage of the dynamic features in C#, which makes it very easy to express the methods you want to invoke on the client. Remember when I said SignalR makes it easy to create real-time applications? That's all the code needed for the server-side part of this app. Now let's move over to the clients.

Shared Client Code

As mentioned earlier, the actual SignalR client code is pretty portable across platforms, so I'll leverage that in this example. To demonstrate, I'll create a new class named Client that will be used by both iOS and Android, as shown in Listing 1.

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR.Client;

    namespace ChatClient.Shared
    {
        public class Client
        {
            private readonly string _platform;
            private readonly HubConnection _connection;
            private readonly IHubProxy _proxy;

            public event EventHandler<string> OnMessageReceived;

            public Client(string platform)
            {
                _platform = platform;
                _connection = new HubConnection("");
                _proxy = _connection.CreateHubProxy("Chat");
            }

            public async Task Connect()
            {
                await _connection.Start();

                _proxy.On("messageReceived", (string platform, string message) =>
                {
                    if (OnMessageReceived != null)
                        OnMessageReceived(this, string.Format("{0}: {1}", platform, message));
                });

                Send("Connected");
            }

            public Task Send(string message)
            {
                return _proxy.Invoke("Send", _platform, message);
            }
        }
    }

Again, note how little code is required to interact with the SignalR API. In the constructor, I create a new connection to the SignalR server, and a proxy for the Chat hub created earlier. The constructor also takes in a string for the platform using it, just so I can easily distinguish between messages that come from iOS and Android later on. There is also an asynchronous Connect() method that connects to the hub, and registers a listener for the messageReceived message that raises an event when invoked. Once connected, it broadcasts a message through the hub that it has connected, via the Send method.
You might also have noticed the use of "async" and "await" in this class. Since those keywords are fully supported in Xamarin, they're accessible on both platforms. It's likely that the IP address and port will be different than what's used here, so you'll want to update them accordingly.

The iOS Client

Now that the client libraries and SignalR client code are sorted out, let's see what's involved in creating an iOS chat client. Create a new Xamarin.iOS application and add a reference to the Client.cs file created earlier. It's also required to set an application name and identifier, which can be done through the project properties dialog under the iOS Application tab (see Figure 1).

The easiest way to bring SignalR into the app is through NuGet, and the Microsoft ASP.NET SignalR .NET Client package (see Figure 2). Installing this package will add Portable Class Libraries for SignalR and its dependencies. PCL support in Xamarin is brand new, so some small changes are required after installing the package before things will work properly. I suspect this part will get smoother in the future, and that this won't be an ongoing requirement.

First, in the project's references I'll need to remove System.Runtime and System.Threading.Tasks, as they collide with those provided by Xamarin. Second, the version of Json.NET picked by default does not work properly with Xamarin. Fortunately, the NuGet package also includes another version that works correctly, so replace that reference with the one found in Json.NET's "portable-net40+sl4+wp7+win8" folder (see Figure 3).

Open up MyViewController.cs, which contains the UI of the app, and update it as shown in Listing 2.
    using System;
    using ChatClient.Shared;
    using MonoTouch.Dialog;
    using MonoTouch.UIKit;

    namespace ChatClient.iOS
    {
        public class MyViewController : DialogViewController
        {
            private readonly EntryElement _input;
            private readonly Section _messages;
            private readonly Client _client;

            public MyViewController()
                : base(UITableViewStyle.Grouped, null)
            {
                _input = new EntryElement(null, "Enter message", null)
                {
                    ReturnKeyType = UIReturnKeyType.Send
                };

                _messages = new Section();

                Root = new RootElement("Chat Client")
                {
                    new Section { _input },
                    _messages
                };

                _client = new Client("iOS");
            }

            public override async void ViewDidLoad()
            {
                base.ViewDidLoad();

                await _client.Connect();

                _input.ShouldReturn += () =>
                {
                    _input.ResignFirstResponder(true);

                    if (string.IsNullOrEmpty(_input.Value))
                        return true;

                    _client.Send(_input.Value);
                    _input.Value = "";

                    return true;
                };

                _client.OnMessageReceived += (sender, message) =>
                    InvokeOnMainThread(() => _messages.Add(new StringElement(message)));
            }
        }
    }

That may look like a good amount of code, but it's mostly boilerplate, so let's break it down. In order to create the UI for this screen, I'm using a library called MonoTouch.Dialog that ships in the box with Xamarin.iOS. This library makes it very easy to define table views programmatically in Xamarin.iOS applications. Table views are one of the most common UIs in iOS applications, and can really just be thought of as list views common on other platforms, despite using the word table. Table views typically require a lot of boilerplate code to wire up, so MonoTouch.Dialog abstracts a lot of that away, allowing you to express your UI more easily and concisely.

For this UI, there will be two sections to the table, created in the constructor. The first will just contain a text input that can be used to enter a chat message. The second section will contain a list of messages received through the SignalR hub, which will be empty to start with.
References to both of these are kept so that I can use them later in the view's lifecycle. Next, in the ViewDidLoad callback, I start wiring things up. After connecting to the SignalR hub, a listener is added to the ShouldReturn event in the text input, which will be called when the return action happens on the keyboard. When called, the keyboard is hidden and the input sent through the SignalR connection. Finally, when a new message is received through the SignalR client, it's added to the list of messages.

That's all the code needed for the iOS app; let's move on to Android so that iOS has someone to talk to.

The Android Client

Create a new Xamarin.Android project, and just as with the iOS app, add a reference to the shared Client.cs file. The default activity file generated by the project template can be removed as well. Follow the same steps used in the iOS app for adding the SignalR NuGet package and fixing up the project references. In the Resources/Layout folder, edit the Main.xml file, which contains the layout definition for our UI:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent">
        <EditText
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:id="@+id/Input" />
        <ListView
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:id="@+id/Messages" />
    </LinearLayout>

This UI is as simple as it gets. There's a LinearLayout, which is a layout container that simply places its contents in a row, either horizontally or vertically. If you're familiar with XAML, this is basically a StackPanel. Inside that container is a text box for inputting a chat message, and a list to contain the messages received from the hub. Next, I'll create an Android activity to add behavior to this layout.
Add a new class to the project named MainActivity, as shown in Listing 3: using System.Collections.Generic; using Android.App; using Android.OS; using Android.Views.InputMethods; using Android.Widget; using ChatClient.Shared; namespace ChatClient.Droid { [Activity(Label = "Chat Client", MainLauncher = true, Icon = "@drawable/icon")] public class MainActivity : Activity { protected override async void OnCreate(Bundle bundle) { base.OnCreate(bundle); SetContentView(Resource.Layout.Main); var client = new Client("Android"); var input = FindViewById<EditText>(Resource.Id.Input); var messages = FindViewById<ListView>(Resource.Id.Messages); var inputManager = (InputMethodManager)GetSystemService(InputMethodService); var adapter = new ArrayAdapter<string>(this, Android.Resource.Layout.SimpleListItem1, new List<string>()); messages.Adapter = adapter; await client.Connect(); input.EditorAction += delegate { inputManager.HideSoftInputFromWindow(input.WindowToken, HideSoftInputFlags.None); if (string.IsNullOrEmpty(input.Text)) return; client.Send(input.Text); input.Text = ""; }; client.OnMessageReceived += (sender, message) => RunOnUiThread(() => adapter.Add(message)); } } } As with the iOS app, this code is mostly boilerplate. After setting the layout, I pull out references to the text box and message list so I can add behavior to them. I also set up a list adapter for the message list using a simple template built right into Android. This template will essentially take in strings and display them directly in the list. After the client connects to the SignalR hub, I hook up a handler for the text box's editor action, which will be called when the return key is pressed. When this happens, the message will be broadcast through the SignalR client, and the keyboard is hidden. Finally, when a message is received from the client, it's added to the list's adapter so that it will be displayed. That's it! Now that both client apps are ready to go, fire them up. 
Once both are connected, you should be able to send messages back and forth, and see them instantly in both clients (see Figure 4). Easy, right?

Real-Time, Scalable Communication

In the past, creating a real-time chat application would have required much more code, and would have relied on continuously polling the server for updates. This would provide a degraded user experience, given the delay between an event happening and it being shown to the user. This approach isn't good for the server, either; it would constantly be serving requests even if there was no new information to send, which makes for a scaling nightmare. SignalR lets you easily weave real-time communication into your apps in a scalable way, with very little code. Basic chat between clients is just the tip of the iceberg for what you can do in your apps with these tools in your belt. With a rich, open connection between you and your users, the sky's the limit for the experiences you can provide for your users.
https://visualstudiomagazine.com/articles/2013/11/01/how-to-use-signalr-in-ios-and-android-apps.aspx
Hi everyone: Any suggestions on the best way to read:

student name student number 90 100 23 65 (<-- four marks there)

from a text file and store them in a struct? The 4 marks are separated by spaces. Thanks.

I did something like this as an exercise once, although it was only 3 marks, not 4.

Code:
#include <string>
#include <fstream>
#include <iostream>

using namespace std;

#define FILE_NAME "File.txt"

struct STUDENT_INFO
{
    string Name;
    string Number;
    int Mark1;
    int Mark2;
    int Mark3;
};

int main(void)
{
    STUDENT_INFO stu;
    string LastName;
    ifstream in;

    in.open(FILE_NAME);

    in >> stu.Name;
    in >> LastName;
    stu.Name = stu.Name + ' ' + LastName;
    in >> stu.Number;
    in >> stu.Mark1;
    in >> stu.Mark2;
    in >> stu.Mark3;

    cout << stu.Name << endl;
    cout << stu.Number << endl;
    cout << stu.Mark1 << endl;
    cout << stu.Mark2 << endl;
    cout << stu.Mark3 << endl;

    return 0;
}

Does the >> operator read in things until it encounters a space? So if it's like:

student name student number mark1 mark2 mark3 mark4

in the file, it'll read student name with the first >>, student number with the second >>, mark1 with the third, mark2 with the fourth, and so on? Thanks.

Sorry, that was actually me; I forgot to sign in first. OK, I guess it'll help if I put everything in one post. Here's what's in the file:

sarah 23456 12 100 98 46

and here's the code:

Code:
#include <string.h>
#include <fstream.h>
#include <iostream.h>

struct studentinfo
{
    char name[20];
    char number[20];
    int marks[4];
};

int main()
{
    studentinfo stu;
    ifstream sourcefile;
    int c;

    sourcefile.open("filename.txt");

    sourcefile >> stu.name;
    sourcefile >> stu.number;

    for (c = 0; c < 4; c++)
        sourcefile >> stu.marks[c];

    return 0;
}

Will that code store the file contents into stu properly? Thanks a lot for your help.
If student name and number contain no whitespace (space, tab, newline, etc.) it should work fine. With a little added work to create an array/container of studentinfo, you can wrap the code in a while loop looking for the end-of-file marker to read multiple instances of the struct. As an alternative, if the information on students is written to file using the write() method, such that it is in binary form, then you can use the read() method to read the information into a given struct in one fell swoop.

Okay, thank you! You can take out the "#include <string.h>" tho.
https://cboard.cprogramming.com/cplusplus-programming/12399-file-contents-into-struct.html
I need to replace the current DC and migrate to the new one that has Server 2012 R2.

Feb 6, 2015 at 11:16 UTC
If it's 2012 and you want to go to 2012 R2, just do the update and you're done. I have upgraded 3 DCs lately from 2008 R2 to 2012 and then to 2012 R2 with no issues.

Feb 6, 2015 at 11:25 UTC
I just finished a project to put a new 2012 R2 DC into production and to take over all the roles from our old 2008 (non-R2) PDC. A few hiccups along the way, but nothing I couldn't handle. Just be sure you're aware of the roles that the old one has and get an idea of the important things like Global Catalog, FSMO roles, etc. Now if you're talking functional levels...

Feb 7, 2015 at 1:15 UTC
Are you talking DFS namespace or DFS replication, or both? For DFS replication, just add the new server to your replication group, let everything replicate, and then remove the old member from the replication group. For DFS namespace, just add the new host as a namespace host.

Feb 7, 2015 at 1:55 UTC
Hi Kevin, I am trying to do both DFS namespace and DFS replication.
https://community.spiceworks.com/topic/779479-how-to-migrate-dfs-from-server-2012-to-server-2012r2
SYNOPSIS

#include <wchar.h>

size_t wcsrtombs(char *dst, const wchar_t **src, size_t len, mbstate_t *ps);

size_t wcsnrtombs(char *dst, const wchar_t **src, size_t nwc, size_t len, mbstate_t *ps);

#include <locale.h>

size_t wcsrtombs_l(char *dst, const wchar_t **src, size_t len, mbstate_t *ps, locale_t locale);

size_t wcsnrtombs_l(char *dst, const wchar_t **src, size_t nwc, size_t len, mbstate_t *ps, locale_t locale);

DESCRIPTION

The wcsrtombs() function converts a sequence of wide characters from the array indirectly pointed to by src into a sequence of corresponding multibyte characters, storing them in the array pointed to by dst (if dst is not a null pointer). Conversion continues up to and including the terminating null wide character, but stops early in either of two cases:

- When a code is encountered that does not correspond to a valid character.
- When the next character would exceed the limit of len total bytes stored into the array pointed to by dst (and dst is not a null pointer).

The wcsnrtombs() function behaves identically except that it converts at most nwc wide characters from src.

Each conversion takes place as if by a call to the wcrtomb() function. If the specified state pointer is null, the function uses its own internal mbstate_t object.

The behavior of this function is affected by the LC_CTYPE category of the current locale. The wcsrtombs_l() and wcsnrtombs_l() variants behave identically but use the specified locale rather than the current one.

PARAMETERS

- dst
Is the destination buffer used to receive converted characters. This can be a null pointer if no conversion is desired.

- src
Points to the string to be converted. If dst is not null, the referenced pointer is updated as described in the DESCRIPTION section.

- nwc
Stop after processing nwc characters from src.

- len
Is the maximum number of bytes to write to dst, if dst is not null.

- ps
Is the conversion state. If this is null, an internal mbstate_t object is used.

- locale
Is a locale_t, perhaps returned by newlocale(), or LC_GLOBAL_LOCALE, or 0 for the current thread locale set with uselocale().

RETURN VALUES

If the input conversion encounters a code that does not correspond to a valid character, an encoding error occurs. In this case, the function stores the value of the macro EILSEQ in errno and returns (size_t)-1; the conversion state is then undefined. Otherwise, the function returns the number of bytes in the resulting converted character sequence, not including the terminating null byte (if any).

CONFORMANCE

MULTITHREAD SAFETY LEVEL

MT-Safe, with exceptions.

SEE ALSO

mbsinit(), mbsinit_l(), mbsrtowcs(), mbsrtowcs_l(), newlocale(), setlocale(), wcrtomb(), wcrtomb_l()

PTC MKS Toolkit 10.3 Documentation Build 39.
https://www.mkssoftware.com/docs/man3/wcsrtombs.3.asp
SCJP continued

Another SCJP study chapter down. I no longer feel bad about missing some questions in the last chapter, because THIS chapter is the one that talks about abstract classes being able to implement interfaces etc. Nice to ask questions about things before actually covering them.

Those silly default no-arg constructors in Java. I missed one review question about those. This is an example of the opposite of the "principle of least surprise" so touted in Perl and Ruby. Default constructors are magically automatically created in Java, but only if you don't supply any constructors of your own. Calls to superclass constructors are magically inserted in all constructors, including those you define yourself, but they're always calls to super() with no arguments, and those super() calls aren't inserted into any constructor that calls another constructor as its first line. And so on and so forth.

Interesting that interfaces can extend multiple interfaces, while classes cannot extend multiple classes. One of the reasons behind forbidding multiple inheritance is supposedly so that you don't have to deal with method name collisions when multiple parent classes each let a child inherit methods with the same name. I tested what happens when there are name collisions in interfaces:

interface Blarg {
    public void test();
}

interface Blarg2 {
    public int test();
}

public class Test2 implements Blarg, Blarg2 {
    public void test() {}
    public static void main(String[] args) {}
}

It appears in this case there is no way to actually implement both the Blarg and Blarg2 interfaces in one class. (If there is, I can't see it.) You can't have two methods with the same name, same argument signature, and different return value. Trying to compile this fails with a "Hey you forgot to implement Blarg2's test() method" error. Similarly if I try

interface Blarg3 extends Blarg, Blarg2 {}

I get an error about Blarg and Blarg2 being "incompatible" because both define a method of the same name.
That's handy. (Why not use the same error and allow multiple inheritance, and forbid it only whenever there are method name collisions? Who knows. Not I. I'm sure there are reasons. I hope there are reasons.) I have a very good grasp on overriding vs. overloading. I remember going over it in my very first C++ class at college. I've somehow carried it all the way with me. Same goes with typecasting. Perl/Ruby don't have an equivalent, but I remember enough of it from C++. I also don't seem to have any trouble remembering which things happen at runtime and which things happen at compile-time. That should serve me well on the exam.
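The default-constructor and super() rules described earlier can be sketched in a few lines (the class names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class ConstructorDemo {
    static List<String> calls = new ArrayList<>();

    static class Parent {
        Parent() { calls.add("Parent()"); }
    }

    // No constructor declared, so the compiler generates a default
    // no-arg constructor equivalent to: Child() { super(); }
    static class Child extends Parent { }

    static class Chained extends Parent {
        Chained() {
            this(42);        // first line calls another constructor,
                             // so NO implicit super() is inserted here
        }
        Chained(int x) {     // implicit super() IS inserted here
            calls.add("Chained(int)");
        }
    }

    public static void main(String[] args) {
        new Child();         // Parent() runs via the generated default constructor
        new Chained();       // Parent() runs exactly once, via Chained(int)
        System.out.println(calls);  // [Parent(), Parent(), Chained(int)]
    }
}
```

The point is that Parent() fires exactly once per object, even through the this(42) chain, because the compiler only inserts super() into constructors that don't delegate first.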
http://briancarper.net/blog/162/scjp-continued
Red Hat Bugzilla – Bug 225860
python binding behaves poorly when specifying dictpath
Last modified: 2007-11-30 17:07:28 EST

+++ This bug was initially created as a clone of Bug #224347 +++

FascistCheck() exhibits two problems when specifying a dictpath:

1.) It checks for the existence of the path, not the path plus ".pwd" as with the default. This means you can't specify an alternative path at all because it'll throw an exception in any case. You can't work around it either because

2.) if you specify the path plus ".pwd" yourself, the python module now finds the file, passes path+".pwd" to cracklib which barfs on the non-existence of path+".pwd.pwd" and apparently calls exit() or something else which ends the python interpreter (without traceback or other such superfluous things ;-).

Version-Release number of selected component (if applicable): cracklib-2.8.9-3.1

Judging from the code, this problem exists in Rawhide (cracklib-2.8.9-6) as well.

How reproducible: Easy

Steps to Reproduce:
1. Run this test program:

--- 8< ---
#!/usr/bin/python
from cracklib import *

for path in ('/usr/share/cracklib/pw_dict',
             '/usr/share/cracklib/pw_dict.pwd'):
    try:
        print FascistCheck('foo', path)
    except Exception, e:
        print e
print "foo"
--- 8< ---

Actual results:

nils@wombat:~> ./fc.py
[Errno 2] No such file or directory: '/usr/share/cracklib/pw_dict'
/usr/share/cracklib/pw_dict.pwd.pwd: No such file or directory
PWOpen: Illegal seek

Expected results:

No traceback with the first path in the list, a (caught) traceback with the second path, and the line "foo".

-- Additional comment from nalin@redhat.com on 2007-01-25 12:33 EST --
Quite right. Fixing in Raw Hide shortly. Let me know if you need this updated in FC6 as well.

-- Additional comment from nphilipp@redhat.com on 2007-01-29 11:48 EST --
As I have discovered now, I don't need to specify the dictpath to use the standard dictionary.
Would it be much work to change the module so that "pydoc cracklib" on the cmdline or "help(FascistCheck)" from within python would mention that, i.e. not only have "FascistCheck(...)" without the.
https://bugzilla.redhat.com/show_bug.cgi?id=225860
Created on 2016-05-08 05:13 by serhiy.storchaka, last changed 2016-07-23 15:55 by skrah. This issue is now closed.

The following example causes a segmentation fault.

from decimal import Decimal

class BadFloat(float):
    def as_integer_ratio(self):
        return 1
    def __abs__(self):
        return self

Decimal.from_float(BadFloat(1.2))

Here are two alternative patches that fix the crash. The first patch adds checks for the as_integer_ratio() result. The only disadvantage is that the error messages are quite arbitrary and differ from the error messages in the Python version. The second patch converts an instance of a float subclass to an exact float. Disadvantages are that this changes behavior, ignoring overriding of as_integer_ratio() and __abs__(), and slightly slows down the Python implementation. Advantages are that this also fixes issue26975 and slightly speeds up the C implementation.

[Serhiy, on the second patch]
> Disadvantages are that this changes behavior ignoring overriding as_integer_ratio() and __abs__() and slightly slows down Python implementation.

None of those seem like big issues to me. Certainly we shouldn't care too much about slowing down the Python implementation a bit more, and I think it's reasonable to say that code that's overriding as_integer_ratio (or `__abs__`) and then expecting that override to be picked up and used in `Decimal.from_float` is on its own.

> Advantages are that this fixes also issue26975 and slightly speeds up C implementation.

Sounds pretty good to me! I'll let Stefan review the patch properly, but this seems like a good solution to me.

Actually, the part of the second patch for the Python implementation does not always work correctly due to a bug (issue26983). We should either first fix issue26983 or use more complicated code. Stefan?

I'll look at it soon -- I just don't have much time right now.

Ping.

My preference is to leave the Python implementation of from_float() as-is. Pure Python code is not obligated to defend itself against bizarre code.
The C code however is obliged to not segfault. The approaches look good, but for clarity I want to replace all method calls that should never be overridden by the plain C functions of their corresponding static types. I have no opinion about the Python version. The diff also "fixes" #26975 for the C version, so perhaps the Python version should be in sync. It is an academic question, since no one will write a class that triggers it.

Preemptively, I'm aware that long_bit_length() is defined with a single argument, then cast to a CFunction, then called with two arguments. ceval.c does the same. -- We had a discussion about that with MvL a while ago; he preferred to define all CFunctions with two arguments. I'd also prefer that, but that is another issue.

New changeset f8cb955efd6a by Stefan Krah in branch '3.5': Issue #26974: Fix segfault in the presence of absurd subclassing. Proactively

Couldn't keeping references in static variables cause problems in subinterpreters?

These are builtin static types. Even with non-builtin static types, the address of the type should always be the same. C-extensions aren't reloaded. Also, IMO the whole capsule mechanism would be broken if function pointers in dynamic libs could just be invalidated due to reloading.

I'm leaving this open in case anyone wants to do something about the Python version. I tend to agree with Raymond: It is impractical to "fix" all such constructs in the Python version, unless one consistently uses a style like:

float.as_integer_ratio(float.__abs__(-1.5))
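The second patch's idea — coerce to an exact float before conversion, so subclass overrides are never consulted — can be sketched in pure Python (safe_from_float is a name invented here for illustration; the real fix lives in the C accelerator):

```python
from decimal import Decimal

class BadFloat(float):
    """The pathological subclass from the report."""
    def as_integer_ratio(self):
        return 1
    def __abs__(self):
        return self

def safe_from_float(f):
    # float(f) returns an exact float even for subclasses, so the
    # overridden as_integer_ratio() and __abs__() are never called.
    return Decimal.from_float(float(f))

d = safe_from_float(BadFloat(1.2))
print(float(d))  # round-trips to 1.2
```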
https://bugs.python.org/issue26974
Keywords: SAX, XML, test, conformance

Biography

Elliotte Rusty Harold is an adjunct professor of computer science at Polytechnic University in Brooklyn. He's the author of numerous books on XML including the XML 1.1 Bible, XML in a Nutshell, Effective XML, and Processing XML with Java. Most recently he has been working on XOM, the only tree-based API for XML that absolutely guarantees well-formedness.

While SAX [SAX], the simple API for XML, is a broadly, almost universally implemented standard among Java parsers, many SAX parsers have serious bugs. The lack of a complete SAX conformance test suite has been a severe hindrance to interoperability. For example, about half of SAX parsers call endDocument() even after reporting a fatal error, while the other half don't. Existing XML test suites mostly focus on whether the parser correctly answers boolean questions of well-formedness or validity, while ignoring the much more complex questions of whether the parser correctly reports document content in the correct order. Indeed the XML specification is mostly silent on exactly which parts of the document a parser is required to report. Not surprisingly this has led to a number of inconsistencies between parsers as well as outright bugs in more than a few implementations. This paper demonstrates a conformance suite written in Java that tests parsers which claim to implement the SAX API. The framework asks the parser to read a collection of input documents and then logs the methods the parser invokes and their arguments. This log takes the form of an XML document that can be compared against the expected results. The documents in the test set are derived from the W3C XML conformance test suite. The software includes a framework for testing parsers against this document collection and measuring their conformance to both the core and optional parts of both XML and SAX. Conformance results for major parsers including Xerces, Crimson, and Piccolo are reported.
A number of areas in which deficiencies in the SAX specification have led to varying parser behavior are identified.

Existing test suites
Comparing Output
Bootstrapping
Results for different parsers
Common Errors
    Is endDocument invoked after a fatal error?
    What kinds of exceptions can parsers throw?
    How much data is passed after a fatal error?
    What is the type of enumerated attributes?
    XML 1.1 support
Problems with Specific Parsers
    Xerces-J 2.6.2
    Oracle 9.2.0.6.0
    Crimson
    Piccolo
    Saxon's Ælfred
    dom4j's Ælfred
    GNU JAXP
    XP
SAX Issues
Future Directions for more research
Test Suite Availability
Footnotes
Bibliography

Existing test suites

There are several existing test suites for XML and SAX. However, they are of limited coverage, and failed to expose many bugs I noticed during the development of XOM. [XOM]

There is an embryonic, semi-official test suite for SAX [Arnold 2001]. However, this includes only a few dozen JUnit based tests for the most basic features of SAX2. There are also a couple of thousand SAX 1 tests based on a draft version of the OASIS/NIST XML test suite. However, these perform limited output testing, and leave many holes in coverage. The latest version is 0.2 from November 12, 2001. Work appears to have been abandoned.

The most comprehensive XML test suite is the W3C's XML test suite [W3C XML Group], which bundles tests gathered from a variety of sources including James Clark, the OASIS/NIST XML test suite, Sun, IBM, Henry S. Thompson, and others. This offers the broadest coverage of a range of XML documents. However, it focuses on testing binary decisions. Is the document well-formed or not? Is the document valid or not? It provides a limited number of output tests, based on the Second XML Canonical Form [Sun]. To properly test a SAX parser, it is necessary to verify that it reports the right events with the right content in the right order. This exceeds the scope of the XML Test Suite, which is API independent.
However, because the W3C test suite is so broad, it became the primary source of input data for this new test suite. In order to pass the tests, a parser must be able to correctly process all the documents in the W3C XML test suite. The difference is not in the documents themselves. It is in the scope of the output. Passing the W3C test suite primarily requires correctly identifying well-formed and malformed, valid and invalid documents. My test suite requires not only this, but also the reporting of the right content in the right order at the right time using the right methods.

The W3C test suite is divided into numerous test cases stored in several directories, mostly organized by the submitter. The test cases are further subdivided into well-formed and malformed, valid and invalid, namespace aware and non-namespace aware, and external entity using and self-contained test cases. The master file lists a typical case like this:

This says that the test case document can be found at the relative URL "invalid/attr06.xml", that the document at that URL is invalid (but well-formed); that it tests section 3.3.1 of the XML specification, and more specifically it tests the name token validity constraint for the NMTOKENS attribute type. Here I'm not so interested in testing whether the document is valid or invalid as I am in testing that all the content from that document is properly reported through SAX.

In order to compare the output of different parsers, it's necessary to place the output in a standard format that can be easily diffed. It seemed natural to use XML for this purpose. A single class that implemented ContentHandler, ErrorHandler, EntityResolver, and DTDHandler--the four required SAX interfaces--was written that logged all its calls to an XML document. (More specifically, it created a XOM Document object which was later serialized.) However, the problem is thornier than it may appear at first.
It is necessary to produce well-formed output even for malformed input. We must not assume that the SAX parser will detect such bugs because that would require assuming that the parser is non-buggy, precisely what we're endeavoring to determine. For instance, we cannot assume element names will not contain white space or PCDATA will not contain nulls. For example, suppose we begin with this test document (Test case ibm-valid-P10-ibm10v02.xml) [ibm10v02]

When parsed it produces this output:

The general format could have the following DTD:

This output format is designed to avoid some common problems: Table 1

The actual generation also introduces some issues:

- Parsers may split a run of text across multiple calls to characters. Initially, I combined these into a single element in the output using the well-known algorithm [Harold 2002]. However, that proved both difficult to compare by eyeball, and caused problems when different parsers identified ignorable white space in different places. Consequently I decided to report each char separately.

- Attribute order is indeterminate in SAX. An arbitrary but reproducible ordering was imposed using a java.util.SortedMap [SortedMap] where the keys were a concatenation of the local name, qualified name, and namespace URI. The null (\u0000) was used as a concatenation character because this would never show up in any legal data.

- startPrefixMapping() and endPrefixMapping() order is also indeterminate in SAX. Once again an arbitrary but reproducible sorting was chosen using a SortedMap. However, this time the map had to be maintained across several method calls and only flushed when a startElement() was seen or at the next call after an endElement(). Again, I sorted by lexical order of the names.

- Locator, LexicalHandler and DeclHandler are ignored.

- Two optional features were set to true. Theoretically, a parser does not have to support these two features. In practice, all eight parsers tested did support them. [1]

Bootstrapping

The canonical output a parser was supposed to report was generated via a bootstrap process.
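As a concrete illustration of the logging approach described above, a handler that records SAX events as diffable XML might look roughly like this (EventLogger, its element names, and the escaping are approximations invented for this sketch, not the paper's actual harness):

```java
import java.nio.charset.StandardCharsets;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class EventLogger extends DefaultHandler {
    private final StringBuilder log = new StringBuilder("<ConformanceResults>\n");

    // Minimal escaping so the log stays well-formed even for odd input.
    private static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;");
    }

    @Override
    public void startDocument() { log.append("  <startDocument/>\n"); }

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        log.append("  <startElement name=\"").append(escape(qName)).append("\"/>\n");
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        // One element per char, as the paper does, so parsers that split
        // text runs differently still produce identical logs.
        for (int i = start; i < start + length; i++) {
            log.append("  <char>").append(escape(String.valueOf(ch[i]))).append("</char>\n");
        }
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        log.append("  <endElement name=\"").append(escape(qName)).append("\"/>\n");
    }

    @Override
    public void endDocument() { log.append("  <endDocument/>\n"); }

    // Call once, after parsing finishes.
    public String getLog() { return log.toString() + "</ConformanceResults>\n"; }

    public static void main(String[] args) throws Exception {
        EventLogger h = new EventLogger();
        javax.xml.parsers.SAXParserFactory.newInstance().newSAXParser()
            .parse(new java.io.ByteArrayInputStream(
                "<doc>hi</doc>".getBytes(StandardCharsets.UTF_8)), h);
        System.out.print(h.getLog());
    }
}
```

Two such logs, one per parser, can then be compared with an ordinary text diff.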
I began by running the parser that experience had led me to expect was most often correct, Xerces2-J 2.6.1, [Xerces J] through the test harness. Then I compared its output to the output of seven other parsers. Where a difference was found, I manually inspected the reason for the difference to determine, based on the XML 1.0 and SAX specifications, which parser was correct. If Xerces proved incorrect, then the expected result was modified to use the correct result. More than once, both parsers were arguably correct. In these cases the comparison code needed to be adjusted to allow for the differences.

After repeating this process several times (and reducing the bugs in the test framework) it became apparent that while Xerces made many mistakes, it had one very nice property: almost all the mistakes were predictable and reproducible. Thus they could be fixed automatically. Specifically, the following changes needed to be made in Xerces' output:

- Adding missing startDocument elements
- Adding missing endDocument elements, especially after fatal errors.
- Not reusing the XMLReader when generating the expected results. Xerces has several bugs that are only exposed when the parser is reused. These have now been fixed in CVS as a result of this author's report. [Harold 2004]
- Replacing CharacterConversionExceptions and UTFDataFormatExceptions with fatalErrors

In some cases these fixes duplicate each other. For instance, not reusing the XMLReader avoids most problems with extraneous characters. However, that's not a problem since the fixes are all careful to check that the problem exists in a particular case before fixing it. These could all be fixed automatically. However, two cases remained which needed to be fixed by hand (both involving fatalError and endDocument).

Once the bootstrapping process was complete, it became possible to compare the results for different parsers. Eight currently available XML parsers were tested:

IBM also produces XML4J. However, this is just a rebranded Xerces. Three of these (Saxon, GNU JAXP, and DOM4J) are not independent.
They are all descendants of David Megginson's Ælfred parser from Microstar [Megginson 1998]. Currently only Xerces-J and the Oracle XML parser appear to be actively developed. The other six seem to have been abandoned. The lack of a competitive market for SAX parsers in Java came as something of a surprise and is a cause for concern. To a large extent, most users seem satisfied with Xerces; and there does not appear to be a large demand for alternate parsers. The C world has a much broader choice with at least four major parsers that implement the SAX API (libxml2 [Veillard], expat [expat], Oracle XML Parser for C++ [Oracle C], and Xerces-C++ [Xerces C]).

There were several particularly common errors that were exhibited many times by multiple parsers. These are definitely places anyone writing or contemplating writing a parser should watch carefully.

The most common error, though perhaps an arguable one, was failing to invoke endDocument() for a malformed document. The API documentation for the endDocument() method states, "The SAX parser will invoke this method only once, and it will be the last method invoked during the parse. The parser shall not invoke this method until it has either abandoned parsing (because of an unrecoverable error) or reached the end of input." [ContentHandler] One could wish the language were slightly clearer (I would prefer it to say "exactly once" rather than "only once") but it still implies that endDocument() should be called even in the event of a fatal error. On the other hand, the API documentation for ErrorHandler.fatalError() states, "The application must assume that the document is unusable after the parser has invoked this method, and should continue (if at all) only for the sake of collecting additional error messages: in fact, SAX parsers are free to stop reporting any other events once this method has been invoked." [ErrorHandler] This implies that it is acceptable not to call endDocument(). Given the apparent inconsistency in the spec, what do the authors have to say? David Brownell, the second maintainer of the SAX specification and the probable author of this statement, was explicit that endDocument must always be called. According to Brownell, "If it's not, that's a SAX conformance bug.
Sadly: last I looked, it wasn't an uncommon bug to omit calling it in the 'abandoned parsing' case. That makes it tough to use endDocument() to do things like clean up application state." [Brownell 2002] That seems clear enough. However, David Megginson, the original maintainer of the SAX specification, disagrees:

My original intention at the start of SAX development was that endDocument would not necessarily be called once the parser was in an error state, but the documentation might not have been clear and David Brownell might have clarified things the other way after he took over. [Megginson 2004]

Furthermore, Megginson has recently announced plans to revise this as part of the final release of SAX 2.0.1 [Megginson 2004 2], and as I write these words, it's being hashed out one more time on the sax-devel mailing list. The situation is at best unclear. My opinion is that endDocument must be called, and parsers certainly can and should do this. However, reasonable people may disagree with support from both the spec and the maintainers.

The SAX documentation is clear that on encountering a well-formedness error, the parse() method must throw a SAXException. "If the application needs to pass through other types of exceptions, it must wrap those exceptions in a SAXException or an exception derived from a SAXException." [SAXException] It is also explicitly allowed to throw IOExceptions for an I/O error. [ErrorHandler] However, it is clearly wrong for the parse() method to throw a RuntimeException. Nonetheless many parsers threw NullPointerExceptions, ArrayIndexOutOfBoundsExceptions, NegativeArraySizeExceptions, and more when encountering malformed documents. Piccolo was the worst offender here, but even the relatively well-behaved Xerces had a few problems in this area.

Another common source of errors is reporting too much data after a fatal error. The XML spec says, "Once a fatal error is detected, however, the processor MUST NOT continue normal processing (i.e., it MUST NOT continue to pass character data and information about the document's logical structure to the application in the normal way)."
[Bray 2004] However, parsers often mismark the bounds of a fatal error. For example, consider James Clark's test not-wf-sa-027.xml [Clark27]:

Under some circumstances, Xerces 2.6.1 passes "abc </doc>" to the characters() method before reporting a fatal error. [Harold 2004] This bug has been fixed in CVS. Or consider James Clark's test not-wf/sa/045.xml [Clark45]:

Crimson calls startElement() for a before it realizes the closing greater than sign is missing.

Another common, though minor, error was reporting the wrong type for attributes declared with enumerations. The SAX spec requires these to be reported with the type NMTOKEN. However, several parsers use the non-standard ENUMERATION type instead.

About 5% of the test cases cover XML 1.1. Except for Xerces none of the parsers explicitly support this. (Some of the parsers almost accidentally pass some of the 1.1 tests though.)

What follows are bugs uncovered in individual parsers. Most of these were exposed in multiple tests. Conformance ranged from a low of about 18% to a high just over 90%. Most products could radically improve their scores just by fixing one or two key problems that account for most of their failures. There's quite a bit of low hanging fruit here waiting to be picked off.

Xerces is the most popular and broadly used parser I tested. It will become the default parser in Java 1.5. Xerces 2.6.2 achieves a conformance score of only 91%, surprisingly low, especially given that it served as the base for testing other parsers. (The score's even worse, an abysmal 26%, if you require that it call endDocument() following a well-formedness error.) However, almost all of the Xerces problems related to a few easily worked around bugs. In order of frequency, these were:

- Failing to call endDocument after a fatal error.
- Passing too much data to characters (xmltest/not-wf/sa/027.xml) [Harold 2004]
- Incorrect characters calls (ibm-valid-P09-ibm09v05.xml).
Although Xerces posts very low scores, most of these are for problems that can be worked around if you know you're using Xerces. Several of the bugs in Xerces have now been fixed in CVS, and should no longer be problems as of version 2.7.0. It is probably the parser of choice for most applications.

Besides Xerces, the Oracle XML Parser for Java is the only SAX parser written in Java currently maintained. Thus it's disappointing that it doesn't do a better job. It passed only 42% (19% when requiring endDocument()) of the tests. Furthermore, unlike Xerces, many of the errors were XML conformance errors rather than less serious SAX conformance errors. Notable problems included:

- Failure to call endDocument(), even in well-formed documents, with an empty root element tag (sun/invalid/dtd01.xml, sun/invalid/el05.xml)
- Not calling fatalError() until it's passed most of the content in, including 1.0-illegal characters (see all the IBM 1.1 test cases)
- Reporting xmlns attributes even when it isn't supposed to
- Mishandling xmlns attributes, in contrast to the SAX spec.

In fairness, I must note that shortly before the deadline for the submission of conference papers, Oracle released version 10 of their parser, which shows signs of being significantly improved. I hope to have updated results covering this new version of the Oracle parser at the conference. However, version 9 is clearly too nonconformant to both SAX and XML to rely on.

Sun has abandoned their home-grown Crimson parser in favor of the IBM developed Xerces for the next 1.5 release of the Java Development Kit (JDK). However, Crimson is the parser bundled with the JDK through version 1.4.2_03 and is still used by many Java programmers by default. It passes 88% of the tests (but only 29% if you require endDocument on well-formedness errors). Failures are mostly similar to Xerces'. In particular, it also does not call endDocument following a fatal error.
However, it has a few unique problems as well:

Sun is moving to Xerces; and, given these problems, I see little reason why other programmers shouldn't make the switch as well.

When Yuval Oren first released Piccolo two years ago, I had very high hopes for it. It was a very small, very fast parser that filled an important niche of non-validating but entity resolving parser. It was notable for being built using a formal grammar and the parser generator tools JFlex and BYACC/J rather than a handrolled parser, as most implementers working in Java had done up to that point. However, the initial releases had numerous bugs, and no progress has been made on fixing these since July, 2002. My tests uncovered many more problems I had not previously noticed. These include:

- Element content such as <mixed1></mixed1> calls characters with no text to report, while it does not do so for empty-element tags such as <mixed1/>. It's not 100% obvious that this is illegal, but it's certainly strange.

The overall conformance rate was only 57%. I cannot at this time recommend Piccolo for serious work, though it might make an interesting "fixer-upper" project if someone wished to begin plugging its holes.

Michael Kay's Ælfred derivative posted the highest overall conformance scores in the tests (over 90%), until I turned on checking for entity resolution, at which point the scores dropped to 0.05%! [3] Such an unbelievably low score makes one question the validity of the tests. However, on investigation the test proved correct. Saxon's Ælfred calls resolveEntity() for the document entity as well as for all external entities. However, the SAX specification specifically prohibits this, "The parser will call this method before opening any external entity except the top-level document entity." (emphasis added) [EntityResolver]

Besides this, it had two other significant failure modes:

Kay has halted further work on this parser now that an XML parser is bundled with the JDK.
[Kay 2002] If anyone is interested in picking this product up again, it would be straightforward to fix the bugs in character class detection and prevent it from calling resolveEntity() for the document entity. The problems with well-formedness of entity replacement text may run deeper in the code base, though.

dom4j's Ælfred derivative has the same bug in entity resolution that Saxon's Ælfred exhibited, and consequently scores identically at 0%. However, even when this bug is ignored, this parser performs noticeably worse than Saxon's Ælfred, with only 60% conformance. It shared all of Saxon's problems, including failure to detect malformed entities used in a well-formed way and allowing unpaired surrogates. However, it also had several new problems:

- endElement(). [4]
- fatalError() for a malformed document (xmltest/not-wf/sa/050.xml)

I can't see any particular reason to choose this parser over Saxon's Ælfred derivative.

At only 46% conformance, GNU JAXP scored significantly worse than the other two Ælfred derivatives. Its problems included most of those of the other two Ælfred derivatives, with one very important exception: it does not call resolveEntity() for the document entity. These tended to be masked by other bugs in GNU JAXP. In addition, it had these unique problems:

- ArrayIndexOutOfBoundsException, rather than SAXException (not-wf-sa-017 through 019, 024-33)
- startDocument() (not-wf-sa-099, not-wf-sa-152, o-p24fail2, o-p39fail5, ibm-not-wf-P02-ibm02n30.xml, ibm-not-wf-P24-ibm24n03.xml)

GNU JAXP has some features the other Ælfred derivatives don't, such as validation and DOM support. However, its low conformance level makes it a very poor choice for basic SAX work. Its rejection of many well-formed documents is particularly horrendous. I really can't recommend this to anyone.

James Clark's XP is the oldest parser tested. The parser code itself dates to 1998, prior to the advent of SAX2.
However, Hussein Shafie made a few minor fixes and improvements to Clark's original code, and wrote a SAX2 interface for the parser. It performs very well for such an old parser, scoring 87.25% (though requiring endDocument() events would reduce its score to 25%), though it had its share of problems. There seems little reason to use this product in production today.

In many ways this experiment was a test of the SAX specification itself. A good specification leaves little room for interpretation and carefully spells out those areas where different implementations may behave differently. How well does SAX meet this criterion? In other words, is it really a testable spec? With some reasonable assumptions, I think the answer is yes. There is much guaranteed behavior from any conformant SAX parser. However, there are a few tricky areas. Specifically:

Are qualified names passed to startElement() by default, or can they be null? Here, pretty much everyone in the SAX community except the maintainer of the SAX specification agrees. In fact, all parsers I tested always passed the full name for the qName argument, including Brownell's own Ælfred. However, after a significant discussion on the sax-devel mailing list in 2002 [sax-devel] Brownell unilaterally rewrote the SAX spec to reflect his view. (Previously it had been unclear.) Since I'm writing a test suite, I can unilaterally write it to reflect my view (and incidentally the behavior of all parsers).

Is endDocument() always called? Even if it's always called for a fatal error, what about these cases:

- startElement() or characters() throws an unexpected SAXException? [Wilson]
- RuntimeException?
- Error or other non-exception Throwable?
- fatalError?

I suggest no, because in this case it's the client code that's throwing the exception. However, again both sides of the argument can point to different sentences in the spec to buttress their position.

Is startDocument() always called? Can a parser throw a fatal error before calling startDocument(), and then call endDocument()?
GNU JAXP does this when encountering a malformed encoding declaration; for instance, in the sun not-wf encoding tests.

Should IOExceptions be reported to fatalError()? Especially when it's really a character encoding error rather than an I/O error like a broken socket? Or only for lower-level byte I/O errors such as a broken stream?

I also have one feature request for SAX. It would be very helpful to define standard read-only SAX properties, analogous to Java's java.version and java.vendor system properties, that provide the vendor and version of the parser being used. Ultimately I hope the SAX community will come to consensus on these issues, and issue a revised version of SAX (2.0.2?) which nails down these inconsistencies.

The test suite is far from complete. It pretty thoroughly tests three of the four required SAX interfaces: ContentHandler, EntityResolver, and DTDHandler. ErrorHandler is tested to the extent possible given SAX's almost complete lack of requirements for what this interface actually does. However, much work remains to be done: LexicalHandler and DeclHandler especially, for those parsers that implement them.

If you'd like to run the test suite for yourself, you can download it from. An Ant build file is included that will produce both the expected data and the individual parser results from the W3C XML Test Suite. Because the license agreement for the test suite is unclear, you'll also need to download that from the W3C at and install it in the same directory. The results cited here were produced using the December 10, 2003 drop of the XML test suite. Of course changes to both the test suite and the parsers are likely to change the exact numbers.

I think this indicates that the designers of XML made the wrong choice about where to cut the difference between validating and non-validating parsers.
Since all parsers must support the internal DTD subset without exception, it's really not very hard to also add support for the external DTD subset. Many parser writers have expressed a desire to be able to ignore the internal and external DTD subsets. However, a parser that did this would not be conformant to XML 1.0, much less to SAX. Xerces is not incorrect here, and indeed passes this test. However, it does not report the maximum possible amount of content before the well-formedness error which some other parsers do. To make the comparison work, this content needs to be added by hand. To add insult to injury, on further analysis the single test Saxon passed proved to be a false positive due to a bug in the comparison code. It really failed all tests. Local names must always be passed. Even according to David Brownell's rules, only qualified names are sometimes optional.
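The core of such a conformance check is easy to sketch with the SAX API bundled in the JDK: register one object as both ContentHandler and ErrorHandler, record every callback, and then inspect the recorded event stream, for example to ask whether endDocument() arrived after fatalError(). This is a minimal illustration of the idea, not the harness the test suite actually uses:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;

// Records the SAX callbacks it receives so a test can inspect the event
// stream afterwards, e.g. "was endDocument() reported after fatalError()?"
class SaxEventRecorder extends DefaultHandler {
    final List<String> events = new ArrayList<>();

    @Override public void startDocument() { events.add("startDocument"); }
    @Override public void endDocument() { events.add("endDocument"); }

    @Override public void fatalError(SAXParseException e) {
        // Return normally instead of rethrowing, so we can observe what
        // the parser does after reporting a well-formedness error.
        events.add("fatalError");
    }

    static SaxEventRecorder parse(String xml) {
        SaxEventRecorder recorder = new SaxEventRecorder();
        try {
            XMLReader reader =
                SAXParserFactory.newInstance().newSAXParser().getXMLReader();
            reader.setContentHandler(recorder);
            reader.setErrorHandler(recorder);
            reader.parse(new InputSource(new StringReader(xml)));
        } catch (SAXException aborted) {
            // A parser may abort by throwing after reporting fatalError.
        } catch (Exception configurationProblem) {
            throw new RuntimeException(configurationProblem);
        }
        return recorder;
    }
}
```

Running this against different parsers (by setting the javax.xml.parsers.SAXParserFactory system property) is one way to compare their behavior on the contested endDocument() question.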
http://www.cafeconleche.org/SAXTest/paper.html
How to Migrate a Toolbar to the NetBeans Platform (Part 2)

By Geertjan-Oracle on Aug 03, 2009

Thank you for your explanations. Following the steps of your tutorial, I now have a customized toolbar on my NetBeans Platform application. Another question arose when doing that: how can I display that toolbar only when a specific window is active? The toolbar is useless in other windows.

The answer:

@Override
protected void componentActivated() {
    Toolbar tb = ToolbarPool.getDefault().findToolbar("name-of-my-toolbar");
    if (!tb.isVisible()) {
        tb.setVisible(true);
    }
}

@Override
public void componentDeactivated() {
    Toolbar tb = ToolbarPool.getDefault().findToolbar("name-of-my-toolbar");
    if (tb.isVisible()) {
        tb.setVisible(false);
    }
}

"name-of-my-toolbar" is the name of a folder in the layer, within the Toolbars folder. The result can be seen below; take a look at where the mouse is in the screenshots below to see that the above code works. In other words, the code you see above is in the TopComponent that defines the "Hello Window". Now my toolbar is only shown when the window to which it relates is active.

Thanks a lot for the tip. I'd just like to know how to set the visibility to false by default. The way you described it here, it works fine, but on startup, the registered toolbar is set to visible by default. How do I change this behaviour?

Posted by Carsten Schmalhorst on August 04, 2009 at 12:58 AM PDT #

Geertjan, thank you very much for this post. I've been looking for this code tip for a very long time! This is more useful than using the XML configuration file, which could hide toolbars that you didn't want to hide.

Posted by Rocco Casaburo on August 07, 2009 at 07:35 PM PDT #

Geertjan pls help, how to migrate visual library PopupMenu actions to the NetBeans Platform toolbar, eg:

...
public class Scene extends GraphScene<MyNode, String> {
    ...
    int count = 0;
    public Scene() {
        ...
        final JPopupMenu menu = new JPopupMenu();
        JMenuItem print = new JMenuItem("Print number of nodes");
        print.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                for (MyNode node : this.getNodes()) {
                    count++;
                }
                System.out.println(count); // how to migrate this to the NB toolbar
            }
        });
        menu.add(print);
        getActions().addAction(ActionFactory.createPopupMenuAction(new PopupMenuProvider() {
            public JPopupMenu getPopupMenu(Widget widget, Point localLocation) {
                return menu;
            }
        }));
    }
}

Posted by Martin Has on August 12, 2009 at 06:51 AM PDT #

Martin, I answered your question here:

Posted by Geertjan on August 12, 2009 at 09:40 PM PDT #

thx for this amazing tip :)

Posted by Martin Has on August 14, 2009 at 06:45 AM PDT #

Hi Geertjan, you don't happen to have an answer to Carsten Schmalhorst's comment? I've been searching with no success. There is something like adding _hidden:

<folder name="toolbarname_hidden">

but then it seems no instance is initialized and the ToolbarPool returns a null pointer. Thanks

Posted by Philippe on August 25, 2009 at 11:32 PM PDT #

Rather than ask questions here, it is far smarter to do so at dev AT openide DOT netbeans DOT org.

Posted by Geertjan on August 26, 2009 at 01:20 AM PDT #

Hi Philippe! In NetBeans RCP 6.9.1, in layer.xml you can code:

<folder name="Toolbars">
    <file name="File_hidden"/>
    <file name="UndoRedo_hidden"/>
    <file name="Run_hidden"/>
    <file name="Clipboard_hidden"/>
    <file name="Memory_hidden"/>
</folder>

But I can't hide the "Run" button in the toolbar this way ^^. Geertjan, can you help me?

Posted by Cường on August 11, 2011 at 08:46 PM PDT #
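Stripped of the NetBeans APIs, the activate/deactivate pattern in the post above is just a guarded visibility toggle tied to window state. A plain-Swing sketch of the same logic (the helper class and method names are mine, not part of the platform):

```java
import javax.swing.JToolBar;

// Mirrors the componentActivated()/componentDeactivated() logic from the
// post with a plain Swing JToolBar: the toolbar is shown only while the
// window it belongs to is active.
class ToolbarVisibility {
    static void onWindowStateChange(JToolBar toolbar, boolean windowActive) {
        // Only touch the component when the state actually changes,
        // as the original componentActivated()/componentDeactivated() do.
        if (toolbar.isVisible() != windowActive) {
            toolbar.setVisible(windowActive);
        }
    }
}
```

In the TopComponent code above, componentActivated() and componentDeactivated() play the role of the windowActive flag here, and ToolbarPool.findToolbar() supplies the toolbar instance.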
https://blogs.oracle.com/geertjan/entry/how_to_m
Converting python-central based package to python-support

Step by step:

- replace python-central with python-support in debian/control
- sed -i -e 's/DEB_PYTHON_SYSTEM=pycentral/DEB_PYTHON_SYSTEM=pysupport/' debian/rules
- sed -i -e 's/dh_pycentral/dh_pysupport/' debian/rules
- remove files that python-central forgot about in one of the previous package versions (see #cleaningup)

Cleaning up

Lenny's python-central doesn't remove old files at upgrades (bug 552595), so if your package is already in Lenny, you'll need to remove them in the preinst maintainer script. Example:

# TODO: remove this file after releasing Squeeze
set -e
if [ "$1" = upgrade ] && dpkg --compare-versions "$2" lt 1.2-3
then
    pycentral pkgremove python-foo
fi

#DEBHELPER#

Where 1.2-3 is the first version that uses python-support, and python-foo is the name of your package.

Note: sometimes pycentral pkgremove package_name is not enough. If the list of files provided by your package changed in previous versions, you'll most probably have to add these two lines to your preinst script:

rm -rf /usr/lib/python2.4/site-packages/foo
rm -rf /usr/lib/python2.5/site-packages/foo

Please make sure no other package is using the foo namespace before you do that, though. You can check that with:

apt-file search -x '(packages|pyshared)/foo' -l

Warning: all packages that share the same namespace have to use the same helper tool, so for example if your package provides foo/bar/baz.{py,so} files (i.e. the foo.bar.baz module), all other packages that use the "foo" namespace have to use the same helper tool.

Questions? Ask on the #debian-python IRC channel or the debian-python@l.d.o mailing list.
https://wiki.debian.org/Python/central2support
This document describes the namespace. This namespace is used by the following specifications to identify errors: [XPath and XQuery Functions and Operators 3.1], [XPath 3.1], [XQuery 3.1], [XSLT 3.0], [XSLT and XQuery Serialization 3.1].

This document describes the names that are defined in this namespace at the time of publication. The W3C reserves the right to define additional names in this namespace in the future. For updated information, please refer to the latest version of the relevant specification. Earlier versions of these specifications also defined names in this namespace. Some of these names may have fallen out of use in the latest version of the specification, in which case they are not listed in this namespace document.

All of the error codes are enumerated below with a hyperlink to the definition of the error in the relevant specification. XQuery 3.1 and XSLT 3.0 both have try/catch constructs allowing errors to be caught, and in both cases system-defined variables are made available within the catch construct to enable the application to determine properties of the error that was caught. The variables available in both XQuery 3.1 and XSLT 3.0 are:

- err:code - the error code
- err:description - a free-form error description
- err:value - an arbitrary value associated with the error
- err:module - the URI of the query or stylesheet module in which the error occurred
- err:line-number - the line number of the instruction where the error occurred
- err:column-number - the column number of the instruction where the error occurred

An additional variable is available only in XQuery 3.1:

- err:additional - further implementation-defined information about the error

These documents describe the names that are defined in this namespace at the time of publication. The W3C reserves the right to define additional names in this namespace in the future.
XSL Transformations (XSLT) Version 3.0 (7 February 2017 version)
XQuery 3.1: An XML Query Language (21 March 2017 version)
XML Path Language (XPath) 3.1 (21 March 2017 version)
XQuery and XPath Functions and Operators 3.1 (21 March 2017 version)
XSLT and XQuery Serialization 3.1
http://www.w3.org/2005/xqt-errors/
Let's say that I have a series of objects that form an aggregate.

public class C {
    public string Details { get; set; }
}

public class B {
    public string Details { get; set; }
    public List<C> Items { get; set; }
}

public class A {
    public long ID { get; set; }
    public string Details { get; set; }
    public List<B> Items { get; set; }
}

Using Dapper, what is the best way to populate these from tables in a database (in my case it's Postgres, but that shouldn't matter)? The tables in the example are pretty much one-for-one with the object model. The Items property on each class represents a foreign-key relationship to the subordinate objects, i.e. 3 tables: A has a one-to-many relationship with B, and B has a one-to-many relationship with C. So for a given ID of A I want my objects to have all their child data as well. My best guess is that I should use QueryMultiple somehow, but I am not sure how best to do it.

I think the helper I propose here: Multi-Mapper to create object hierarchy may be of help.

var mapped = cnn.QueryMultiple(sql)
    .Map<A, B, long>(
        a => a.ID,
        b => b.AID,
        (a, bees) => { a.Items = bees.ToList(); }
    );

Assuming you extend your GridReader with a mapper:

public static IEnumerable<TFirst> Map<TFirst, TSecond, TKey>
(
    this GridReader reader,
    Func<TFirst, TKey> firstKey,
    Func<TSecond, TKey> secondKey,
    Action<TFirst, IEnumerable<TSecond>> addChildren
)
{
    var first = reader.Read<TFirst>().ToList();
    var childMap = reader
        .Read<TSecond>()
        .GroupBy(s => secondKey(s))
        .ToDictionary(g => g.Key, g => g.AsEnumerable());

    foreach (var item in first)
    {
        IEnumerable<TSecond> children;
        if (childMap.TryGetValue(firstKey(item), out children))
        {
            addChildren(item, children);
        }
    }

    return first;
}

You could extend this pattern to work with a 3-level hierarchy.
https://dapper-tutorial.net/knowledge-base/6146918/how-do-i-select-an-aggregate-object-efficiently-using-dapper-
Do you have to submit your personal tax return to HMRC by the January 31st deadline? Does the thought of doing a tax return seem daunting, and do you at times struggle to understand what information they are asking for? This course will teach you the elements of a tax return in simple-to-understand language.

This Course Really Does Make Submitting Your Personal Tax Return Easy!

What does the course include? I am a chartered accountant working in practice and have discovered that many people who work for themselves find submitting their tax return a daunting experience - especially if it is the first year! The tax return is actually very simple to submit in the majority of cases, and I will show you what you need to include in each section of the return as we go along.

How is the course taught? This course is taught via a series of videos, and at the end of the course we actually go through a full tax return so you can understand where you input all the information you have learned throughout the course. Upon completing this course you will be able to submit your own tax return (without the cost of an accountant) and feel confident at calculating your own self-employed income and expenses.

Emily Hyland is the Director of Bright Horizon Accountancy, which is a cutting-edge, cloud-based accountancy firm based in Bournemouth. She is a qualified accountant with experience in both accountancy practices and also working for companies in industry. She has a vast breadth of experience working in a variety of industries but specialises in digital creatives and start-up companies.
https://www.udemy.com/demystifying-your-personal-tax-return/
Red Hat Bugzilla – Bug 26899

daytime (and other services) will kill inetd if connecting socket does not close

Last modified: 2008-05-01 11:37:59 EDT

From Bugzilla Helper: User-Agent: Mozilla/4.7 [en] (WinNT; I)

Using a simple program that loops and does a connect() call to a port under inetd's control without a close() will make inetd stop responding, saying "Too many open files".

Reproducible: Always

Steps to Reproduce: Compile the following program after changing SOMEHOSTNAME:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/param.h>
#include <string.h>
#include <stdlib.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int i, s;
    struct sockaddr_in sin;
    struct hostent *hent;

    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(13);
    hent = gethostbyname("SOMEHOSTNAME");
    memcpy(&sin.sin_addr, hent->h_addr, hent->h_length);
    for (i = 0; i < 1000; i++) {
        printf("%d\n", i);
        s = socket(AF_INET, SOCK_STREAM, 0);
        connect(s, (struct sockaddr *) &sin, sizeof sin);
    }
    return 0;
}

Change the entry in inetd.conf so that nowait is set to nowait.1000.

Actual Results: In the log messages will appear: inetd[XXX]: accept (for daytime): Too many files open

Expected Results: inetd should not fill up its table and stop network services running. This may be a limitation in TCP/IP, but I am not sure.

This enables a denial-of-service attack on Red Hat boxes running inetd and xinetd.

This was resolved with an errata some time ago.

*** This bug has been marked as a duplicate of 16729 ***
https://bugzilla.redhat.com/show_bug.cgi?id=26899
Up to [NetBSD Developer Wiki] / wikisrc / tutorials

- Markdown: no indenting normal text, even though my editor syntax-highlights it.
- newfs also needs rwd0x and not wd0x, from timothy in comment. Expand that x is the partition name, in case it's not obvious.
- dhclient -> dhcpcd. Mention fstab should have a ptyfs entry.
- heads by timothy in comment. No need to fetch pdisk, so omit this part.
- heads by timothy in comment. pdisk /dev/rwd0c instead of /dev/wd0c (doesn't work)
- import all of the how-to articles from the pkgsrc.se wiki
https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/tutorials/how_to_install_netbsd_on_a_power_macintosh_g4___40__grey__41__.mdwn?r1=1.1
ASP.NET has hijacked .NET and Visual Studio, and we are doing our best to take it back from them, one extension at a time.

Is it possible to use this tool in an ASP.NET 5 project that targets the CoreCLR?

@Steve Kdieh, yes, the generated client proxy code is .NET Core/CoreCLR compatible.

Does this actually generate clean proxies and data contract types like a human actually would, or does it create the mess that service reference spits out?

@Chris Marisic, this will generate similar proxy code as svcutil.exe or the Add Service Reference dialog in conventional projects do, but with necessary changes made, for example turning configuration into code and using only types/APIs that exist in .NET Core, so that the generated code can compile and run with CoreCLR/.NET Core (although the generated code can compile and run with the full .NET Framework too). If you can share more details (or an example) about what you like or dislike about the generated code, we can consider it for future improvements. Thank you for the feedback!

Hi, is it possible to develop an ASP.NET 5 application (console or web) as a host of WCF services, especially for .NET Core? Is the System.Model namespace supported?

Hi @Ramzivic, did you mean the System.ServiceModel namespace? Yes, it is supported on .NET Core, but only WCF client functionalities are available, so you will not be able to host a WCF service with .NET Core libraries. The scenario that WCF for .NET Core is aiming for is to enable front-end ASP.NET 5 web servers to communicate with back-end WCF services that you built with the full .NET Framework and might have invested in for years. This is also known as the mid-tier scenario. BTW, for runtime questions of WCF for .NET Core, you can also contact us by opening new issues on GitHub (github.com/…/new). We are doing development in the open and actively monitoring all incoming issues.
For example, it will be great if you can share with us why hosting WCF service with .NET Core is important to you. Hi, Thank you for your response. Speaking about .NET Core/Core CLR lets us think about Nano Server. If we want to execute lightweight Web Services on Nano Servers or Nano Containers, .NET Core has to support the System.ServiceModel. You are right @Ramzivic. Only a subset of server features are available on Nano server. The WCF service is only available on other Windows SKUs where full .NET Framework is available. Thank you for the feedback. I get the following error message when trying to create the proxy in a dnx class library project: Scaffolding Code Error:Failed to generate proxy, the error code: '9' Error:Warning: Endpoint 'XXXXX' at address '' is not compatible with DNX apps. Skipping… Error:Error: No endpoints compatible with DNX apps were found. I replaced the real url and endpoint name. It creates the proxy successfully if its a regular class library project (not dnx), so I guess that the wsdl is valid. Any idea on what to do? @peco, this issue normally happens when the service endpoint contains binding elements that require WCF functionality not existing in the DNX, usually security-related functionality, I'm afraid you will have to implement your client solution using the full framework. As mentioned by Zhenlan in a previous comment, we are currently in active development and if this is blocking you can create a github issue (github.com/…/new) for us to consider it, thank you! Why does the generated reference not add serializable attribute? Thanks for the extension! … also is there anyway to changed the type from an array to a generic list? Thanks @John, regarding your 2nd question. Yes, you can change the Collection type from System.Array to System.Collections.Generic.List in "Data Type Options" tab in "Configure WCF Service Reference" wizard. @john, as per the first question, SerializableAttribute does not exist in DNX. 
The binary/soap serialization API does not exist in DNX. For a 4.5.2 template, I'm able to add a service reference the old fashioned way. Using this and a 5.0 template, I get all sorts of errors, specifically around unable to obtain metadata and some around authN scheme (removed URI values) Scaffolding Code Error:Failed to generate proxy, the error code: '3', the parameters: '/Verbosity:Silent /Nologo /out:Reference /d:"C:UsersMEAppDataLocalTempWCFConnectedService2016_Jan_08_13_00_06" <URL replace for privacy> /n:*,ServiceReference2 /ct:System.Array /r:"C:UsersME.dnxpackagesSystem.Runtime4.0.21-beta-23516refdotnet5.4System.Runtime.dll" /ct:System.Collections.Generic.Dictionary`2 /r:"C:UsersME.dnxpackagesSystem.Collections4.0.11-beta-23516refdotnet5.4System.Collections.dll" ' Error:Error: Cannot obtain Metadata from <URL replace for privacy> Error:If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at go.microsoft.com/fwlink. Error:WS-Metadata Exchange Error Error: URI: <URL replace for privacy> Error: Metadata contains a reference that cannot be resolved: <URL replace for privacy> '. Error: The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'NTLM,Negotiate'. Error: The remote server returned an error: (401) Unauthorized. @Diane Rapp, I’m glad to let you know that we have fixed the issue you reported and an extension update is available for download, you should be able to get it from the Visual Studio notifications window or directly from the gallery: I’m receiving this error when referencing a wsdl in visual studio 2017 with latest version of Visual Studio WCF Connected Service: 0.4.21213.0 from I should mention, the same wsdl works in 2015 when entering wsdl uri and clicking go. 
In 2017, as soon as I enter the URI, there’s a pause, and this error shows in the status box. @Diane Rapp, thank you for reporting this issue; it seems to be a problem with the WCF connected service extension, we are investigating and will provide an update when we have more data … It's a nice blog to provide a good information. Hope more people reaching your blog because you are sharing a good information. @mac – thank you! please keep the feedback coming … As far as I can tell this does not handle SOAP headers. Can you confirm that they are not yet supported under .NET Core? If they are not, are there any potential workarounds that do not involve changing the server? Thanks! After doing a bit of digging, it looks like the part that is missing (or at least the first part) is the implementation of System.ServiceModel.Dispatcher.OperationFormatter.XmlElementMessageHeader.OnWriteHeaderAttributes. Presumably because it used XmlNodeReader, which has not been implemented either. I can't find much of anything on why that isn't implemented, but my first guess would be that it's because not much of anything seems to use it. @Jeff I opened an issue at GitHub (github.com/…/702) for tracking the issue you mentioned above. Would you mind attaching a repro app to the GitHub issue? Thank you, Shin. I've attached a simple project to reproduce the issue. @Diane Rapp I got the same error make sure the service you are trying to add is exposing a mex endpoint Hello. I am connecting service tracking.russianpost.ru/rtm34 And I have the error: Scaffolding Code Error:Failed to generate proxy, the error code: '9', the parameters: '/Verbosity:Silent /Nologo /out:Reference /d:"C:UsersmeAppDataLocalTempWCFConnectedService2016_Jan_20_00_55_49" tracking.russianpost.ru/rtm34 /n:*,ServiceReference1 /ct:System.Array /ct:System.Collections.Generic.Dictionary`2 ' Error:Error: No endpoints compatible with DNX apps were found. 
@Diane Rapp, we are pleased to let you know that we have fixed this issue and the extension updated is now available, you should be able to install it from the Visual Studio notifications window or directly from the gallery. @Kroniak, thank you for reporting this issue. I investigated and found that the service requires text message encoding Soap12 (addressing None) which is not supported in DNX, MessageVersion.Soap12 does not exist in this framework. Now, I recognize the error message should be more informative, I will log this issue in our bug tracking system for investigating and considering a fix. Hi I am getting below error when I try to call any method on the proxy. Any ideas An unhandled exception occurred while processing the request. WinHttpException: The proxy auto-configuration script could not be downloaded System.Net.Http.WinInetProxyHelper.GetProxyForUrl(SafeWinHttpHandle sessionHandle, Uri uri, WINHTTP_PROXY_INFO& proxyInfo) HttpRequestException: An error occurred while sending the request. System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) CommunicationException: An error occurred while sending the request. System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result) Thanks, Ravi @Miguell – MSFT If I copy classes which was generating by 4.6 NET version (non DNX) to DNX project, it is working fine. The problem is linking only to WCF generator on DNX. I am trying to generate a proxy for VMware's soap web service for dnxcore50. This would be based on 2 wsdl's and some xsd's. Previously using the wsdl.exe or wsewsdl3.exe the format was the following respectively: wsdl.exe /n:VimNamespace /l:CS /o:VimProxy.cs D:<path>vim.wsdl D:<path>vimService.wsdl wsewsdl3.exe /n:VimNamespace /type:webClient /o:VimProxy.cs /l:CS D:<path>vim.wsdl D:<path>vimService.wsdl Are there options to perform this type of generation? I can send you a zip with the wsdl's and xsd's if needed. 
Is it possible to run this from a command line with more detailed parameters to cover this case? Currently it fails since only one wsdl can be entered, and both wsdls are dependent upon the xsd's. Thanks, Ken Hi Ken I’m having the same issue. I’m trying to use the extension. I point it to a local server e.g. – it starts to scaffold but fails with the172.17.1.30’. An error occurred while sending the request. A security error occurred An error occurred in the tool. Did you found a workarround? Hi @Ken, currently the underlying engine for the WCF Connected Service is the svcutil.exe tool for DNX which can be used for this purpose; however, it has not yet been fully tested for stand-alone usage, and I don't want to create expectations about it since we are still to close on whether we'll continue to use this tool and make it publicly available or we follow a different model. In the meantime you can experiment with it, it can be found somewhere under <visual studio 14.0 install dir>Common7IDEExtensions (dir /s), the '/help' command should give you enough information to be able to perform what you need, the param to use in this case is metadataDocumentPath. @ravi, are you behind a proxy server? it seems to me more a problem with the internet settings than a problem with the WCF connected service. If this is not the case we are going to need more details about your scenario. The best way to provide information in a secure manner is by participating in the 'Visual Studio Experience Improvement Program (Help -> Send Feedback -> Settings)', this way we can get failure information into our Azure Application Insight portal for further analysis. thanks, @Kroniak, if you create a proxy the Visual Studio Add Remove Service feature (or svcutil.exe) you will notice that it creates a CustomBinding configuration in the application configuration file and that it adds a text message encoding with version 'Soap12' which as explained earlier is not available in DNX. 
You might be able to get the generated code to compile in DNX after removing some attributes (Serializable, DesignerCategory) that are not in DNX, but you won't be able to set up a binding that works for this service. If this is not what you are seeing please provide more details about your scenario. thanks, @Miguell, thank you for the quick reply! I was able to get it generated without issue using svcutil; it worked perfectly. I am just trying to get it running now. I have a couple of questions I was hoping you could answer, or point me to the right place to ask them. 1. With AllowCookies set to true on the BasicHttpBinding I get the error: "When using CookieUsePolicy.UseSpecifiedCookieContainer, the CookieContainer property must not be null." I am not sure how this is supposed to be set. I tried the following with no luck (_service being the generated client that inherits ClientBase<>): var cookieContainer = new CookieContainer(); var cookieManager = _service.InnerChannel.GetProperty<IHttpCookieContainerManager>(); cookieManager.CookieContainer = cookieContainer; 2. With AllowCookies set to false I make it to the error: "Could not establish trust relationship for the SSL/TLS secure channel with authority" I believe this may be due to not handling the certificate. So I am wondering how we should handle certificate validation now to emulate the old event: ServicePointManager.ServerCertificateValidationCallback 3. Is there any chatter on a tool that creates an XML serialization assembly for types in a specified assembly, like sgen.exe did? Thanks, appreciate your help! Ken Regarding #3, this work is currently tracked by. It would be great if you share your scenarios and why it is important to you on the GitHub issue. Thank you! @Ken – hmm, it is hard to tell from this info.
My suggestion is to try with a full-framework (.NET 4.5) project and compare the client's configuration; I suspect some security settings are missing (and might not be supported in DNX). – It would be useful to be able to talk to SOAP web services, with or without .asmx endpoints, from ASP.NET Core. Is there an extension planned to generate service references for SOAP as well? Hi @Paul Taylor – the WCF Connected Service extension is WCF-based, which is all about SOAP. Can you elaborate on the problem you are having? thank you @Miguel I was copying the full code that was generated by the old tool with the 4.5 framework into a DNX project with a full 4.5 target. I didn't delete some attributes like [Serializable()], and it works with my DNX project. Like: /// [System.CodeDom.Compiler.GeneratedCode(“System.Xml”, “4.6.1038.0”)] [Serializable()] [System.Diagnostics.DebuggerStepThrough()] [System.ComponentModel.DesignerCategory(“code”)] [System.Xml.Serialization.XmlType(Namespace = “”)] public partial class CustomDutyEventsForMailFault : object, System.ComponentModel.INotifyPropertyChanged {…} I have added “frameworkAssemblies”: { “System.Runtime.Serialization”: “4.0.0.0”, “System.ServiceModel”: “4.0.0.0”, “System.XML”: “4.0.0.0”, “System.Xml.Serialization”: “4.0.0.0” } I see that just copying works. @Kroniak – by looking at the version of your project references and the fact that your project compiles even when using the Serializable attribute (which does not exist in DNX), it must be that it is targeted to a different framework, maybe dnx451. You can tell by looking at your project.json file; you might have something like this: “frameworks”: { “dnx451”: { … } } – can you check? thanks, @Kroniak This likely worked because you target your DNX project only at the full 4.5 framework.
If you want to target your DNX project at .NET Core too (by adding a section named “dnxcore50” in the project.json file of your project), you will get compilation errors, as APIs such as SerializableAttribute do not exist in .NET Core. Yes, my target is dnx451 (is it not the full 4.5.1 framework??), not “dotnet core”. I have a DNX project with ASP 5 (aka ASP.NET Core 1) and only one target, dnx451, because most libraries that I use do not support “dotnet core”. I understand that core DNX does not support it, but I don't use core DNX. And I would like to use this WCF tool to generate classes for my case. I think I am not alone in this. Yes, dnx451 is the full framework. What you are doing is legit and fully supported. Just keep in mind, if you decide to switch to .NET Core one day, your project has to be updated due to the API differences in .NET Core, as you are aware. Mmmmm, maybe, but I don't plan to use CoreCLR now. I am using a Windows host and a DNX project on IIS\kestrel without CoreCLR. If the DNX WCF generator works with this, it will be cool. But as I see it, you do not plan to support this case (DNX without CoreCLR). I am working with SOAP, which I can't change ) I'm looking at moving services from ASP.NET 4.5, some of which use certificates, some basic authentication. Can you advise on how to configure the services with the service model tags that previously resided in app.config or web.config? Specifically: – for certificates: – for basic authentication: In ASP.NET 5 I set up the certificates, but when executing the service it still thinks it's an anonymous request with basic authentication. Is there a way to set the headers? var service = new SomeService.ProjectsClient( ProjectsClient.EndpointConfiguration.HTTPS_Port); service.ClientCredentials.ClientCertificate.Certificate = new X509Certificate2(@”path to pfx”, “password”); SomeClass result = await service.SomeMethod(); The HTTP request is unauthorized with client authentication scheme ‘Anonymous’.
The authentication header received from the server was ‘Basic realm=”XISOAPApps”‘. Any advice? Sorry, the XML got stripped out in my previous post. Looking for how to programmatically (or in a config) set, from the config: binding->security->transport->message->clientCredentialType=”UserName or certificate” My issues: 1. Can't generate a service client for an older ASMX-based WSDL with *Get and *Post bindings+ports. svcutil.exe gives similar warnings but still produces working client code, while the Connected Service generator produces errors like this and no code: Failed to generate proxy, the error code: ‘9’ (…) Error: Cannot import wsdl:binding Detail: The required WSDL extension element ‘binding’ from namespace ‘’ was not handled. (…) 2. Can't load a WSDL from a file system path with a space in it, e.g. C:\Code\Service References\HelloWorld.wsdl Hi @Anders! The problem with your web service is that the message encoding version it uses, Soap12AddressingNone, is not supported in DNX. Only Soap11AddressingNone or Soap12Addressing10 are supported. We are working on a version of the Connected Service that will give more information about why a service is not supported, and we hope to get it out soon. As per #2, did you try surrounding the full path with quotes? thank you for your feedback. Hi Miguell, For #2, I might have tried quotes as well and got another error message, but alas I cannot recall exactly at the moment. I did however resolve it by placing the WSDL in another directory without spaces. For #1, I would appreciate it if the WCF Connected Service extension behaved similarly to svcutil, by simply ignoring the bindings it doesn't support. This particular legacy service WSDL described three bindings (Get, Post, Soap), and only one was supported (Soap).
I fixed this by downloading the WSDL and editing out the Get/Post stuff manually, and thus I ran into #2 🙂 Also a feature request: Because I had to do this a couple of times over, it would have been nice if the WCF Connected Service extension had let me go back and retry with the same settings if the generator failed for some reason. All is good now, thanks for your work and feedback. I need to add a reference to a Java web service; is it possible using this package? Hi miguel, Please clarify the below scenario. I tried to generate a proxy class by referring to the WSDL file locally, but I got the below exception: Failed to generate proxy, the error code: ‘3’, the parameters: ‘/Verbosity:Silent /Nologo /out:Reference /d:”C:\Users\user\AppData\Local\Temp\WCFConnectedService\2016_Mar_09_19_28_08″ E:\xtratech documents\travelportAPi\wsdl\uAPI_WSDLschema_Release-V15.5.0.91\air_v35_0\Air.wsdl /n:*,ServiceReference1 /ct:System.Array /r:”C:\Users\user\.dnx\packages\System.Runtime\4.0.21-beta-23516\ref\dotnet5.4\System.Runtime.dll” /ct:System.Collections.Generic.Dictionary`2 /r:”C:\Users\user\.dnx\packages\System.Collections\4.0.11-beta-23516\ref\dotnet5.4\System.Collections.dll” ‘ Error:Error: The input path ‘E:\xtratech’ doesn’t appear to refer to any existing files Hi shahulhameed! The error occurs because the path to the WSDL contains spaces in it; it needs to be specified with quotes. Hi! I was able to generate the proxy but I've got a problem now. I cannot access the operations. I've tried to access the operations with a regular WCF client proxy and everything worked out fine. For example (regular WCF client): ClientProxy oProxy = new ClientProxy(); string test = oProxy.ping(); // Everything ok! With the Connected Service I do not have access to any of the operations. Can you help me out here? Thank you very much, Ricardo. Hi Ricardo!
If I understand the problem correctly, the operation you are looking for should now have ‘Async’ appended (oProxy.pingAsync()), because the generated operations follow the async pattern now. Please let me know if this is not the case, thanks, Hi. The problem is that I can't find any operation on my proxy object. It only contains the basic operations such as oProxy.ChannelFactory, oProxy.Open, (…) but no service operations at all. Thank you! Ricardo, would it be ok for you to share the service address so we can investigate? thanks, Hi, Unfortunately I only have the service address locally. I will try and explain exactly what is going on. So this is a PHP SOAP API created with NuSOAP. As expected it contains a lot of operations that communicate with a MySQL DB. If I create a normal service reference (with .NET 4.6 for example) I can access and call all the operations of the SOAP service through the client object. If I create a connected service (with .NET Core 5.0) I cannot access any of the operations. Also, when I create the client object I get an exception on the channel – oClient.Channel threw an exception of type System.ServiceModel.CommunicationObjectFaultedException. Let me know if you want me to post some pictures that might illustrate what is going on. Thank you. Ricardo, would it be possible for you to download the service WSDL and somehow make it available to us so we can investigate?
thanks, I am trying to hook up to the Microsoft System Center Orchestrator web service and I get the following error: Scaffolding Code Error:Failed to generate proxy, the error code: ‘9’, the parameters: ‘/Verbosity:Silent /Nologo /out:Reference /d:”C:\Users\tjordan\AppData\Local\Temp\WCFConnectedService\2016_Mar_17_09_11_20″ /n:*,SCOService /ct:System.Array /r:”C:\Users\tjordan\.dnx\packages\System.Runtime\4.0.21-beta-23516\ref\dotnet5.4\System.Runtime.dll” /ct:System.Collections.Generic.Dictionary`2 /r:”C:\Users\tjordan\.dnx\packages\System.Collections\4.0.11-beta-23516\ref\dotnet5.4\System.Collections.dll” ‘ Error:Error: No endpoints compatible with DNX apps were found. Hi Tim_D_Jordan! Did you try this with the update we posted a couple of weeks ago? It should give you details about why no compatible endpoints were found. We need to update the client proxy whenever the WSDL changes. Since we work with something like 100 web services, a methodology to update all of them would be handy. Reconfiguration of the service references is also required. Hi! We currently plan to release a stand-alone version of svcutil that works for DNX (now CLI) in the near future. This tool can be used, for instance within a cmd script, to automate the reference update process. This tool is used behind the scenes by the WCF Connected Service extension; although it is not fully tested as a stand-alone tool yet, you can use it at your own risk in the meantime. You can find it under the %VSINSTALLDIR%\Common7\IDE\Extensions folder. Hello. I can see why I am having the issue from the below error I receive when adding a service to a .NET Core 1.0 web app, but I am not sure what I can do, or if I can do anything. Any help would be appreciated. Content Type application/soap+xml; charset=utf-8 was not supported by service. The client and service bindings may be mismatched.
The remote server returned an error: (415) Cannot process the message because the content type ‘application/soap+xml; charset=utf-8’ was not the expected type ‘text/xml; charset=utf-8’. Failed to generate service reference. Hi Sean! It looks like metadata exchange is indeed not enabled for the service. The Connected Service tool tries to download metadata using WS-MetadataExchange first and then using HTTP/GET; the first fails because no MEX endpoint is found, and the second fails because the service does not seem to have HttpGet enabled. If this is the case, you have several options you can look up on MSDN; here is one that enables both protocols, MEX and HttpGet. Add this to your service’s web.config file: I hope it helps, How can I ignore the verification of the certificates when adding a reference to a service? When adding the reference I get this error: Scaffolding Code … Attempting to download metadata from ‘login.uh.cu’ using WS-Metadata Exchange and HttpGet. The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The remote certificate is invalid according to the validation procedure. Failed to generate service reference. Thanks for the help in advance. Hi a.flechilla! It seems to me the problem is that the service requires authentication to be able to provide metadata; you can check by trying the service address in the browser … Miguel Could you please update this extension to work with DotNetCore 1.0 RC2? I'm getting the following error after clicking Configure on Add Connected Service -> WCF Service – Preview: “The Connected Services component ‘WCF Service – Preview’ failed: (HRESULT:0x80070002) Could not load file or assembly ‘Microsoft.VisualStudio.ProjectSystem.DNX.14.0, Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a’ or one of its dependencies. The system cannot find the file specified.” Hi Tom! Yes, we are working to get it out soon, please stay tuned!
thanks, I got the same error; can you please notify us when the error is fixed? Hi all! This extension has been updated and a new blog post about it is available here: I have installed the tool but I cannot add any service reference, as the Finish button is forever greyed out. Maybe because the data type options are blank for 'Reuse types in referenced assemblies'; any clues as to why this is?? Hi Aaron! This issue has been posted in the blog post for the RC2 version of the tool; can you please check the answers and see if that resolves your problem? Also, please continue providing your feedback on that blog post so we can keep everyone informed in a single place: thank you! I just tried using this extension in an aspnetcore class library and I keep getting: Scaffolding Code … Attempting to download metadata from ‘’ using WS-Metadata Exchange and HttpGet. Generating files… C:\Users\USER\AppData\Local\Temp\WCFConnectedService\2016_Jun_06_16_11_15\Reference.cs Updating project.json file … Error:Error: Unable to check out the current file. The file may be read-only or locked, or you may need to check the file out manually. If I manually edit the file so it's checked out, this works fine… I was not able to add the service references while I had unrelated compilation errors in my code. This came about when I removed an older service reference, which removed the namespace I was referencing. I was trying to import the new WSDL. I don't know if this is intentional behavior or not. I was able to get around the issue by temporarily commenting out code. Hi Mark! The behavior you are seeing is expected; the tool requires the code to compile to be able to infer types that can be reused for proxy generation. Please post further feedback in the blog post for the RC2 version of the tool: thanks, Hi all! Please post feedback on the blog page for the updated version of the extension (RC2): And if you are still on the RC1 version, please upgrade to RC2 as explained in the blog link above.
thank you and keep posting your feedback! When I try to add a reference to an ASMX service through WCF Connected Services in a .NET Core (.NET Standard 1.6.1) library, there are errors in the project.json file.
https://blogs.msdn.microsoft.com/webdev/2015/12/15/wcf-connected-service-visual-studio-extension-preview-for-asp-net-5-projects/
Mentioned on OS News. The features mentioned in the C# 3.0 preview, if nothing else, will serve to distance the product from Java still further. Lambda and query expressions...hmmm...

Extension Methods a la Smalltalk

Instead of making a string class of your own, you can extend it without inheritance or composition. The method "Chomp" does not exist on the .NET string type.

----------Extensions.cs-------------
namespace MyCompany.StringUtilities
{
    public static class Extensions
    {
        public static string Chomp(this string s)
        {
            return s.EndsWith("\n") ? s.TrimEnd("\n".ToCharArray()) : s;
        }
    }
}

-------MyClass.cs------------
using MyCompany.StringUtilities;

class MyClass
{
    public static void Main()
    {
        string s1 = "test\n";
        string s2 = s1.Chomp();
    }
}

Lambda Expressions

As taken from the language specification:

x => x + 1                     // Implicitly typed, expression body
x => { return x + 1; }         // Implicitly typed, statement body
(int x) => x + 1               // Explicitly typed, expression body
(int x) => { return x + 1; }   // Explicitly typed, statement body
(x, y) => x * y                // Multiple parameters
() => Console.WriteLine()      // No parameters

myObject.MyMethodTakingDelegateParam( x => x + 1);

Type Inference

int i;              // you can still use type annotations
var i = 0;          // type inferred as int
var name = "dude";  // type inferred as string
var question = new[] { "dude", "where's", "my", "car" }; // type inferred as string[]

Anonymous Types

var person = new { Name = "dude", Weight = 150 };

var employees = new[] {
    new { Name = "Chris Smith", PhoneNumbers = new[] { "206-555-0101", "425-882-8080" } },
    new { Name = "Bob Harris", PhoneNumbers = new[] { "650-555-0199" } }
};

Looking at your first example about "Extension Methods", I wonder if the method is really added to the string class prototype (and in that case how does it type in other parts
of the program, and what about collisions when two programs add the same method?) Or is it just syntactic sugar, so that s1.Chomp() translates at compile time to Chomp(s1)? That would make more sense, but in that case how can you share your extensions among several parts of the code? I re-expanded the example. To answer your question, I quote the spec: ...In effect, imported extension methods appear as additional methods on the types that are given by their first parameter and have lower precedence than regular instance methods... Regarding the line: string Chomp(this string s) Is "this" really needed? Wouldn't it be easier just to allow the following: string Chomp(string s) // usual method signature Namespaces will take care of method name collisions? In Dylan, a function call x.f is syntactic sugar for f(x). (I think it also works for multi-argument functions, but I don't recall.) The actual mention on OSNews (titled "Nemerle 0.9.0 released") is getting harder to track down. Falcon, if you don't include the "this" keyword, how can C# tell that you want MyCompany.StringUtilities.Extensions.Chomp() to be an extension method instead of a normal method in the Extensions class? The "this" keyword is what signals to C# 3.0 that this is an extension method. This is logical since the string "s" becomes the implicit "this" parameter when calling Chomp().
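The spec's resolution rule quoted above (imported extension methods "have lower precedence than regular instance methods") can be modelled outside C#. Here is a small Python sketch of that dispatch order; every name in it is my own invention for illustration, not part of any real API, and nothing is actually added to the string type itself — which is also the answer to the "class prototype" question earlier in the thread:

```python
# Toy model of C# 3.0 extension-method dispatch: x.f(...) means "use a real
# instance method f if the type has one, otherwise fall back to an imported
# extension function whose first parameter plays the role of 'this'".
_extensions = {}

def register_extension(typ, name, func):
    # The analogue of 'using MyCompany.StringUtilities;' importing extensions.
    _extensions[(typ, name)] = func

def call(obj, name, *args):
    # Regular instance methods take precedence over extension methods.
    attr = getattr(type(obj), name, None)
    if attr is not None:
        return attr(obj, *args)
    return _extensions[(type(obj), name)](obj, *args)

def chomp(s):
    # The extension body: strip one trailing newline, like the Chomp example.
    return s[:-1] if s.endswith("\n") else s

register_extension(str, "chomp", chomp)

print(call("test\n", "chomp"))  # no str.chomp exists, so the extension runs
print(call("test", "upper"))    # str.upper exists and wins
```

The registration step is what the `this` marker plus the `using` import accomplish in C#: the compiler rewrites `s1.Chomp()` into `Extensions.Chomp(s1)` at compile time.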
http://lambda-the-ultimate.org/node/973
Data Crawl Stone Soup: analyzing species in DCSS

This post was auto-generated from an IPython notebook. By default only the output cells are shown.

I recently got back into Dungeon Crawl Stone Soup, a charming old-school roguelike. Not only is DCSS free and open source, it's a gift to an armchair data scientist like me, since it has a lot of publicly available gameplay data. Each completed game yields a 'morgue' file, which gives lots of information about the game, like what level the player reached, which god they worshipped, what was in their inventory, and so on. Here's an example from one of my games.

I downloaded all the morgue files from one popular DCSS server, and parsed out about a million games. My code is available on GitHub here. In this notebook, I'll be doing some analysis involving the 26* different species that players choose from when starting a game.

*(This data spans 10 major versions, from 0.10 to 0.20, some of which added or removed species. I'm including all extant species in the most recent release, 0.19. This excludes species removed between 0.10 and 0.19, like sludge elves, and experimental species like lava orcs and barachians. It includes high elves, even though they're set to be removed in 0.20.)

%matplotlib inline
from __future__ import division
from vis_common import load_frame, STORE
from crawl_data import CANON_SPECIES
from plotting_helpers import xlabel_pct, ylabel_pct, plot_percent
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import IPython.core.display as di

f = load_frame(include=['saw_temple', 'saw_lair', 'saw_vaults', 'saw_zot'])
print "Loaded data frame with {} records and {} columns".format(len(f), len(f.columns))

# Some reusable indices. These are just boolean arrays I can use to index into my
# data frame to get games that were lost, or won, or quit.

def get_original_species(sp):
    return 'draconian' if sp in drac_species else sp

# The draconian species presents a little problem.
# When draconians hit level 7, they get a random colour, and their species label
# changes accordingly. We'd rather lump black/red/green/baby draconians all into one
# species, or we'll get some funny results. (For example, the win rate for coloured
# draconians is very high, because of the survivor bias. On the other hand, baby
# draconians win 0% of the time, because it's basically impossible to win the game
# before level 7.)
f['orig_species'] = f['species'].map(get_original_species)

Loaded data frame with 1295053 records and 16 columns

pr = (f.groupby('orig_species').size() / len(f)).sort_values()
ax = plot_percent(pr, True, title='Pick rate by species (% of games)', figsize=(10,8));
ax.set_xlabel('% of all games picked');

The most surprising thing here to me was just how much demonspawn exceed every other species in popularity. They're almost twice as popular as the second-place species, with an absolute difference of around 40,000 games. The popularity of minotaurs and deep elves isn't too surprising, since they're the archetypal melee fighter and caster species, respectively. There's maybe some tendency for species with complex mechanics or heavy restrictions to be less popular, but there are lots of exceptions. (Halflings are very vanilla, and unpopular; mummies, spriggans, and octopodes are weird but popular.)

wr = f.groupby('orig_species')['won'].mean().sort_values()
ax = plot_percent(wr, True, title='Win rate by species', figsize=(10,8))
ax.grid(axis='x');

There's quite a lot of variation here. A random deep dwarf game is 10x as likely to be won as a random octopode game. What makes good species good and bad species bad? I'm going to focus on the two extreme cases to start.

Why do deep dwarves win so much?

Hypothesis 1: Pakellas abuse

When I showed these results to a friend who plays DCSS, he suggested that maybe this was an effect of the god Pakellas being overpowered.
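Whenever we compare win rates between subgroups like this, it's worth asking whether an apparent difference could just be noise. A quick sanity check is a two-proportion z-test; this is a sketch with made-up win/game counts (not the real data), just to show the mechanics:

```python
import math

def two_proportion_z(wins_a, n_a, wins_b, n_b):
    """z statistic for comparing two win rates, using a pooled standard error."""
    p_a, p_b = wins_a / float(n_a), wins_b / float(n_b)
    p_pool = (wins_a + wins_b) / float(n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1.0 / n_a + 1.0 / n_b))
    return (p_a - p_b) / se

# Made-up counts: 60 wins in 4000 games vs. 130 wins in 6000 games.
z = two_proportion_z(60, 4000, 130, 6000)
print(round(z, 2))  # about -2.39: the first rate (1.5%) is significantly lower
```

As a rule of thumb, |z| above roughly 2 corresponds to a difference that is unlikely to be pure noise at the usual 5% level.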
Deep dwarves would benefit disproportionately from worshipping Pakellas because of their high Evocations aptitude, and the ability to recharge Heal Wounds wands to overcome their lack of HP regeneration. Pakellas was added in 0.18 and removed in 0.19, so if this is the reason, we would expect their win rate to be higher in version 0.18, and for most of their wins to come from that version.

fig, (ax1, ax2) = plt.subplots(2, sharex=True, figsize=(8,6))
idd = f['species'] == 'deep dwarf'  # reusable index
dd_winr = f[idd].groupby('version')['won'].mean()
dd_plays = f[idd].groupby('version').size()
plot_percent(dd_winr, ax=ax1, title="Win rate")
dd_plays.plot.bar(ax=ax2, title="Games played");

Nope! Their win rate is actually kinda low in 0.18. Well, what if most of these DDs are secretly bots?

Hypothesis 2: Deep Dwarf bots

It turns out that, as crushingly difficult as DCSS is, there exists at least one bot that can beat it. Elliptic's qw bot can apparently win 15% of games with its best combination, Deep Dwarf(!) fighter of Makhleb. qw leaves some distinctive patterns in the notes section. By looking for them, I was able to positively ID on the order of 10,000 bot games. Those games are excluded from the data I'm analyzing here, but I can't be totally certain there don't exist other bots, or variants of qw that don't leave these patterns. One hallmark of bots is that they play really fast. Let's plot turns taken against wallclock time, and see if there's a noticeable difference between DDs and non-DDs.
s = 25
alpha = 0.3
n = 250
lw = .5

def plot_turn_time(figsize=(8,6)):
    ax = f.loc[iwon & ~idd].head(n)\
        .plot.scatter(x='time', y='turns', color='red', marker="s",
                      figsize=figsize, loglog=1, label='non-DD winners',
                      alpha=alpha, s=s, lw=lw);
    f.loc[iwon & idd].head(n)\
        .plot.scatter(x='time', y='turns', color='blue', label='deep dwarf winners',
                      marker="o", alpha=alpha, s=s, lw=lw, ax=ax)
    l, r = 5*10**3, 5 * 10**5
    b, t = 10**4, 10**6
    ax.set_xlim(l, r)
    ax.set_ylim(b, t)
    ax.legend(loc=4);
    ax.set_title("Time taken vs. turns")
    return ax

plot_turn_time();

(Note the log-log scale, and the truncated axes.)

Looks pretty well mixed. If anything, DDs seem to be a little slower, with respect to turns taken per second. Just to make it really clear, let's load up some known bot games and add them to the mix.

bots = STORE['bots']
ax = plot_turn_time((10,7))
bots.loc[iwon].head(n)\
    .plot.scatter(x='time', y='turns', color='green', label='bot winners',
                  marker="^", alpha=alpha, s=s, lw=lw, ax=ax)
l, r = 10**3, 5 * 10**5
b, t = 10**4, 10**6
ax.set_xlim(l, r)
ax.set_ylim(b, t)
for turns_per_second in [1, 2, 4, 8, 16]:
    ax.plot(
        [l, t/turns_per_second],
        [l*turns_per_second, t],
        color='black',
        lw=lw/2,
    )
ax.legend(loc=4);
Number of games played is going to be a poor approximation of level of experience, because lots of users will have played DCSS locally (or on another server, or under another username), before playing their first CAO game. But this hypothesis is at least partially supported by the earlier chart of species pick rate, which showed DDs as the 4th least picked species. Does a general relationship hold between a species' popularity and its win rate? Let's make a scatterplot! from adjustText import adjust_text sp = pr.index fig, ax = plt.subplots(figsize=(10,10)) ax.scatter(pr.values, wr[sp].values) texts = [plt.text(pr.loc[species], wr.loc[species], species) for species in sp] adjust_text(texts, arrowprops=dict(arrowstyle="-", color='k', lw=.5), force_points=.7, force_text=.3) ax.set_title("Pick rate vs. Win rate") ax.set_xlabel('Pick rate') xlabel_pct(ax) ax.set_ylabel('Win rate'); ax.set_ylim(bottom=0, top=0.025) ylabel_pct(ax) There does seem to be a weak correlation here. Demonspawn, the most played class, has a poor win rate. Octopodes, the species with the lowest win rate, is in the top 5 most picked. Centaurs, ghouls, and deep dwarves are very unpopular and win a lot. The two big exceptions are minotaurs and gargoyles, which are very popular and have a high win rate. Why do octopodes suck?¶ At around 1 in 400, octopodes have the lowest win rate of any species. We could make the claim that, whereas DDs are a newb repellent, Octopodes are a newb magnet. "8 rings? Cool!". And they are quite popular. But let's try to dig a bit deeper. How and when do octopodes meet their demise? # Kinda slow. Probably a more efficient way to do this, without looping. # I wish the pandas group by documentation was better. 
dr_given_level = []
ioct = f['species'] == 'octopode'
for lvl in range(1, 28):
    dr_given_level.append(
        (~ioct & ilost & (f['level'] == lvl)).sum()
        / (0.0 + (~ioct & (f['level'] >= lvl)).sum())
    )

# TODO: download more RAM
import gc; gc.collect()

oc_dr_given_level = []
for lvl in range(1, 28):
    oc_dr_given_level.append(
        (ilost & (f['level'] == lvl) & ioct).sum()
        / (0.0 + ((f['level'] >= lvl) & ioct).sum())
    )
gc.collect();

fig, ax = plt.subplots(figsize=(11,6))
common_kwargs = dict(marker='.', markersize=7, linestyle=':', lw=.6)
ax.plot(np.arange(1,28), dr_given_level, color='brown', label='non-octopodes', **common_kwargs)
ax.plot(np.arange(1,28), oc_dr_given_level, color='b', label='octopodes', **common_kwargs)
ax.legend(loc=9)
ax.set_xlim(1, 27.25);
ax.set_xticks(range(1,28))
ylabel_pct(ax)
ax.set_title("Given that you just reached player level X, what's your chance of dying there?");
ax.grid(axis='both')
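As the comments above admit, the per-level loops are slow. The same conditional death rate — deaths at level L divided by games that reached level L — can be computed without looping, with a reversed cumulative sum over per-level counts. A sketch on a tiny made-up frame (not the real data):

```python
import pandas as pd

# Each row is one game: the final level it reached, and whether it was lost.
games = pd.DataFrame({
    'level': [1, 2, 2, 3, 3, 3],
    'lost':  [True, True, False, True, True, False],
})

# Deaths at each level.
deaths_at = games.loc[games['lost'], 'level'].value_counts().sort_index()
# Games that reached at least each level: a reversed cumulative sum of the
# per-level counts, so reached[L] = #(final level >= L).
reached = games['level'].value_counts().sort_index()[::-1].cumsum()[::-1]
death_rate = (deaths_at / reached).fillna(0)
print(death_rate.tolist())  # level 1: 1/6, level 2: 1/5, level 3: 2/3
```

Pandas aligns the two Series by their level index, so no explicit per-level iteration is needed.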
Ambitious octopodes¶ One factor that may affect a species' win rate is the rate at which they tend to seek more than 3 runes. Let's see how often each species goes into the extended game. In particular, let's measure: given that a player of species X earned 3 runes, how likely were they to continue on to earn more than 5 runes? extended = f.loc[f['nrunes'] > 5].groupby('orig_species').size() ax = (extended / f.loc[f['nrunes'] >= 3].groupby('orig_species').size()).sort_values()\ .plot.barh(title='% of games electing to go into extended', figsize=(8,6)); xlabel_pct(ax) ax.grid(axis='x'); Well, I'm starting to understand octopodes' abysmal win rate. They die like goldfish for most of the early-to-mid game, and when they do manage to get their tentacles on 3 runes, half the time they continue on to the most dangerous branches in the game. Presumably there are lots of octopodes who could have easily won with 3 runes, but ended up as an ink-blot on the floor of pandemonium. octo_death_places = f.loc[ilost & (f['species'] == 'octopode') & (f['level'] > 24)].groupby('wheredied').size() octo_death_places.name = 'octopodes' human_death_places = f.loc[ilost & (f['species'] == 'kobold') & (f['level'] > 24)].groupby('wheredied').size() human_death_places.name = 'kobolds' dp = pd.concat([octo_death_places, human_death_places], axis=1) print "Where do octopodes/kobolds die after level 24?" print dp.select(lambda i: dp.loc[i].sum() > 10).sort_values(by='octopodes', ascending=0) Where do octopodes/kobolds die after level 24? octopodes kobolds wheredied ziggurat 32 8 pits of slime 28 17 realm of zot 23 55 pandemonium 22 15 tomb 16 9 vaults 15 28 depths 11 20 dungeon 7 5 abyss 6 12 crypt 4 7 A higher % of kobolds who make it to level 25 will win than octopodes, but the kobolds who fail to win from that position mostly die in Zot trying to get the orb. The octopodes who fail to win from that point mostly die in exotic extended game locales like ziggurats. 
Controlling for number of runes sought¶ It seems like the variation in ambition across species makes our table of win rates harder to interpret. Let's try to control for this. What's the win rate for each species, in a world where players always go for just 3 runes? We can calculate this counterfactual as: P(earning >= 3 runes | species) * P(success | species, trying to win with 3 runes) The first term we can easily calculate directly from the data. The second term is more tricky: the chance of a given species winning given that they've already earned 3 runes, and given that they don't intend to get any more. It's tricky because it involves a latent variable: the player's intentions. But we can approximate it as: #(3 rune wins | species) / (#(3 rune wins | species) + #(deaths in Zot, the depths, and the dungeon | species, 3 runes)) The denominator should be a pretty good approximation of the number of 3-rune win attempts by a given species. We can't just count the number of deaths by characters holding 3 runes, because that will include lots of deaths that occurred while trying to get a 4th rune in the slime pits, the abyss, etc. We only want to count deaths that occurred while diving down to Zot to get the orb, or while ascending back up to D:1 with the orb. (This may slightly undercount, since some 3 rune win attempts may end in the Vaults/Shoals/Swamp/etc. whilst trying to get back up to the dungeon with the third rune, especially if it was "ninja'd". They may also occasionally end in the abyss as a result of being banished on the way to the orb.) 
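As a toy sanity check of that approximation, here is the formula exercised with invented numbers (not taken from the dataset):

```python
# Invented example numbers, just to exercise the formula above.
games = 1000               # total games for some species
reached_three = 100        # games that earned >= 3 runes
p_three_runes = reached_three / float(games)

three_rune_wins = 40       # wins with exactly 3 runes
zot_path_deaths = 20       # 3-rune deaths in Zot, the depths, or the dungeon
p_zot_success = three_rune_wins / float(three_rune_wins + zot_path_deaths)

adjusted_win_rate = p_three_runes * p_zot_success
print(round(adjusted_win_rate, 4))  # 0.0667
```

So a species that reaches 3 runes 10% of the time and then closes out two thirds of its 3-rune attempts would get an adjusted win rate of about 6.7%.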
Here's a graph just showing the value of that second term for each species: geq_three_runes = f[f['nrunes']>=3].groupby('orig_species').size() three_rune_wins = f.loc[iwon & (f['nrunes']==3)].groupby('orig_species').size() three_rune_deathspots = ['dungeon', 'realm of zot', 'depths'] three_rune_win_attempts = (three_rune_wins + f[(f['nrunes']==3) & f['wheredied'].isin(three_rune_deathspots)].groupby('orig_species').size()) zot_success = three_rune_wins / three_rune_win_attempts ax = plot_percent(zot_success.sort_values(ascending=False), figsize=(11,4), title="What % of 3-rune win attempts succeed, once 3 runes are earned?") ax.grid(axis='y', lw=.3); A pretty small spread, relatively speaking. Let's see the full adjusted win rates. adjusted_wr = (geq_three_runes / f.groupby('orig_species').size()) * zot_success wr2 = wr fig, ax = plt.subplots(figsize=(11,11)) x = range(len(wr2.index)) ax.barh(x, wr2.values, label="Raw win rate", color="b", zorder=1) ax.barh(x, adjusted_wr.loc[wr2.index].values, label="Adjusted win rate", color="crimson", zorder=0) ax.legend() ax.set_ylim(-.8, len(x)-.2) ax.set_yticks(x) ax.set_yticklabels(wr2.index) ax.grid(axis='x') xlabel_pct(ax); The new red bars represent each species' hypothetical win rate if players always tried to win as soon as they had 3 runes. In some sense, this should be a fairer measure by which to compare species' power. Popular 15-rune species like gargoyles and especially formicids go up a few ranks. For species like felids and vampires that were already doing mostly 3-rune runs, the adjusted win rate is not much higher than their raw win rate, so they drop a few ranks. Relative to their original win rate, octopodes get a pretty big bump (their 3-rune win rate is the 7th highest), but they're still a solid dead last. It looks like it's their early game fragility that accounts for their low win rate, more so than their tendency to go for 15 runes. 
Win rates at different stages of the game¶ w = .8 fig, ax = plt.subplots(figsize=(11,8)) temple_wr = f[f['saw_temple'] == True].groupby('orig_species')['won'].mean().dropna()[wr.index] for label, df, color in [('made it to temple', temple_wr, 'pink'), ('baseline', wr, 'brown')]: ax.barh(np.arange(len(df.index)), df.values, w, label=label, color=color) ax.set_ylim(-w, len(wr)) ax.set_yticks(np.arange(len(wr))) ax.set_yticklabels(wr.index) xlabel_pct(ax) ax.set_title("Base win rate vs. win rate at temple") ax.grid(axis='x') ax.legend(loc=4); The brown bars here are our familiar win-rates from before, but the pink bars are new: they show what % of games each species wins given that it reaches the ecumenical temple. A disproportionately small pink bar (like the one on trolls) suggests that this species has a strong early game. Getting to the temple doesn't much increase their chance of winning, because they were always likely to make it to the temple. A large pink bar (e.g. ghoul, naga) suggests that going from D:1 to the temple is a significant filter for this species. Let's throw in a few more milestones: the lair, the vaults, and the realm of zot. 
width = 1 fig, ax = plt.subplots(figsize=(11,11)) colours = ['brown', 'pink', 'green', 'grey', 'violet'] branches = ['D:1', 'temple', 'lair', 'vaults', 'zot'] # Order species by win rate given vaults sp = f[f['saw_vaults']==True].groupby('orig_species')['won'].mean().sort_values(ascending=0).index ranked = [] for branch, colour in reversed(zip(branches, colours)): if branch == 'D:1': df = f else: df = f[f['saw_'+branch] == True] branch_wr = df.groupby('orig_species')['won'].mean()[sp] ax.barh(range(len(branch_wr.index)), branch_wr.values, width, label=branch, color=colour, linewidth=.5, edgecolor='black', ) ranked.append(branch_wr.sort_values(ascending=0).index) ranked.reverse() ax.set_title("Win rate given branch reached") ax.set_yticks(np.arange(len(branch_wr.index))) ax.set_ylim(-.5, len(branch_wr.index)-.5) ax.set_yticklabels(branch_wr.index) xlabel_pct(ax) ax.legend(); This time species are sorted by their win rate given that the vaults are reached (the grey bars). The table below might make it easier to see how species go up and down in win rate at different stages of the game - it shows only species' ranking relative to one another. ranked = np.asarray(ranked).T cm = plt.get_cmap('jet') color_indices = np.linspace(0, 1, ranked.shape[0]) canon_sort = list(ranked.T[0]) alpha = .3 def color_species(sp): # I think maybe this can be accomplished with style.background_gradient, but I wrote this before seeing it. 
i = canon_sort.index(sp) ctup = cm(color_indices[i])[:-1] csstup = tuple(map(lambda x: int(x*255), ctup)) + (alpha,) return ';'.join(['background-color: rgba{}'.format(csstup), 'text-align: center', 'font-size: 16px', 'padding: .3em .6em', ]) df = pd.DataFrame(ranked, columns=branches) s = df.style\ .applymap(color_species)\ .set_caption('Species ranked by win rate given milestone reached') # Hack to avoid obscure rendering issues with the HTML generated by # pandas' style.render() (not XHTML compliant) and kramdown from BeautifulSoup import BeautifulSoup as BS def sanitize_style(s): soup = BS(s.render()) return soup.prettify() di.display(di.HTML(sanitize_style(s))) Species that move up in rank (like tengu, octopodes, and ghouls) tend to be weak in the early game, and more powerful in the late game. Species that move down in rank (like spriggans, formicids, minotaurs, and trolls) generally have a more powerful early game. This also comes back to the number of runes each species tends to seek. Species like gargoyles and formicids may have a relatively low win rate given that they've seen the entrance to zot, not because they have a weak late game and are in danger of dying in zot, but because they have a high risk of dying in pan or hell, compared to a felid/human/vampire who's probably going for 3 runes. (Gargoyles and formicids are the two species that got the largest increase when we calculated adjusted win rates, assuming only 3-rune games.) Which draconian colour is best?¶ When draconians reach level 7, they're assigned one of 9 colours, each of which has different skill aptitudes and a unique breath ability. There has been some debate on the forums about which colour is best. Let's see if we can use data to answer this question. 
fig, ax = plt.subplots(figsize=(11,7)) colours = [(.2,.2,.2), 'pink', (1,.9,0), 'aliceblue', 'ivory', 'purple', 'green', 'crimson', 'grey'] color_winrates = f[cdrac_index].groupby('species')['won'].mean().dropna().sort_values() xrange = np.arange(len(colours)) labels = [name.split()[0] for name in color_winrates.index] bars = ax.bar(xrange, color_winrates.values, color=colours, tick_label=labels, edgecolor='black', lw=1, ) bars[1].set_hatch('.') ylabel_pct(ax) ax.set_title('Win % per draconian colour'); #justiceforblackdraconians Wait, but are these differences statistically significant? If you have a burning curiosity about this question or just want to enjoy watching a mediocre mind struggle to understand the intricacies of statistical hypothesis testing, check out my companion post on the subject. Tagged: DCSS, Data Visualization
Peak rate of window spill and fill traps By Darryl Gove-Oracle on Mar 05, 2009 I've been looking at the performance of a code recently. Written in C++ using many threads. One of the things with C++ is that as a language it encourages developers to have lots of small routines. Small routines lead to many calls and returns; and in particular they lead to register window spills and fills. Read more about register windows. Anyway, I wondered what the peak rate of issuing register window spills and fills was. I'm going to use some old code that I used to examine the cost of library calls a while back. The code uses recursion to reach a deep call depth. First off, I define a couple of library routines: extern int jump2(int count); int jump1(int count) { count--; if (count==0) { return 1; } else { return 1+jump2(count); } } and int jump1(int count); int jump2(int count) { count--; if (count==0) { return 1; } else { return 1+jump1(count); } } I can then turn these into libraries: $ cc -O -G -o libjump1.so jump1.c $ cc -O -G -o libjump2.so jump2.c Done in this way, both libraries have hanging dependencies: $ ldd -d libjump1.so symbol not found: jump2 (./libjump1.so) So the main executable will have to resolve these. The main executable looks like: #include <stdio.h> #define RPT 100 #define SIZE 600 extern int jump1(int count); int main() { int index,count,links,tmp; tmp=jump1(100); #pragma omp parallel for default(none) private(count,index,tmp,links) for(links=1; links<10000; links++) { for (count=0; count<RPT; count++) { for (index=0;index<SIZE;index++) { tmp=jump1(links); if (tmp!=links) {printf("mismatch\n");} } } } } This needs to be compiled in such a way as to resolve the unresolved dependencies: $ cc -xopenmp=noopt -o par par.c -L. -R. -ljump1 -ljump2 Note that I'm breaking rules again by making the runtime linker look for the dependent libraries in the current directory rather than use $ORIGIN to locate them relative to the executable. Oops. 
I'm also using OpenMP in the code. The directive tells the compiler to make the outer loop run in parallel over multiple processors. I picked 10,000 for the trip count so that the code would run for a bit of time, so I could look at the activity on the system. Also note that the outer loop defines the depth of the call stack, so this code will probably cause stack overflows at some point, if not before. Err... I'm compiling with -xopenmp=noopt since I want the OpenMP directive to be recognised, but I don't want the compiler to use optimisation, since if the compiler saw the code it would probably eliminate most of it, and that would leave me with nothing much to test. The first thing to test is whether this generates spill fill traps at all. So we run the application and use trapstat to look at trap activity: vct name | cpu13 ---------------------------------------- ... 84 spill-user-32 | 3005899 ... c4 fill-user-32 | 3005779 ... So on this 1.2GHz UltraSPARC T1 system, we're getting 3,000,000 traps/second. The generated code is pretty plain except for the save and restore instructions: jump1() 21c: 9d e3 bf a0 save %sp, -96, %sp 220: 90 86 3f ff addcc %i0, -1, %o0 224: 12 40 00 04 bne,pn %icc,jump1+0x18 ! 0x234 228: 01 00 00 00 nop 22c: 81 c7 e0 08 ret 230: 91 e8 20 01 restore %g0, 1, %o0 234: 40 00 00 00 call jump2 238: 01 00 00 00 nop 23c: 81 c7 e0 08 ret 240: 91 ea 20 01 restore %o0, 1, %o0 So you can come up with an estimate of 300 ns/trap. The reason for using OpenMP is to enable us to scale the number of active threads. Rerunning with 32 threads, by setting the environment variable OMP_NUM_THREADS to be 32, we get the following output from trapstat: vct name | cpu21 cpu22 cpu23 cpu24 cpu25 cpu26 ------------------------+------------------------------------------------------- ... 84 spill-user-32 | 1024589 1028081 1027596 1174373 1029954 1028695 ... 
c4 fill-user-32 | 996739 989598 955669 1169058 1020349 1021877 So we're getting 1M traps per thread, with 32 threads running. Let's take a look at system activity using vmstat. vmstat 1 kthr memory page disk faults cpu r b w swap free re mf pi po fr de sr s1 s2 s3 s4 in sy cs us sy id ... 0 0 0 64800040 504168 0 0 0 0 0 0 0 0 0 0 0 3022 427 812 100 0 0 0 0 0 64800040 504168 0 0 0 0 0 0 0 0 0 0 0 3020 428 797 100 0 0 0 0 0 64800040 504168 0 0 0 0 0 0 0 0 0 0 0 2945 457 760 100 0 0 0 0 0 64800040 504168 0 0 0 0 0 0 0 0 0 0 0 3147 429 1025 99 1 0 0 0 0 64800040 504168 0 15 0 0 0 0 0 0 0 0 0 3049 666 820 99 1 0 0 0 0 64800040 504168 0 1 0 0 0 0 0 0 0 0 0 3044 543 866 100 0 0 0 0 0 64800040 504168 0 0 0 0 0 0 0 0 0 0 0 3021 422 798 100 0 0 So there's no system time being recorded - the register spill and fill traps are fast traps, so that's not a surprise. One final thing to look at is the instruction issue rate. We can use cpustat to do this: 2.009 6 tick 63117611 2.009 12 tick 69622769 2.009 7 tick 62118451 2.009 5 tick 64784126 2.009 0 tick 67341237 2.019 17 tick 62836527 As might be expected from the cost of each trap, and the sparse number of instructions between traps, the issue rate of the instructions is quite low. Each of the four threads on a core is issuing about 65M instructions per second. So the core is issuing about 260M instructions per second - that's about 20% of the peak issue rate for the core. If this were a real application, what could be done? Well, obviously the trick would be to reduce the number of calls and returns. At a compiler level, that would mean using flags that enable inlining - so an optimisation level of at least -xO4; adding -xipo to get cross-file inlining; using -g0 in C++ rather than -g (which disables front-end inlining). At a more structural level, perhaps the way the application is broken into libraries might be changed so that routines that are frequently called could be inlined. 
The other thing to bear in mind is that this code was designed to max out the register window spill/fill traps. Most codes will get nowhere near this level of window spills and fills. Most codes will probably max out at about a tenth of this level, so the impact from register window spill fill traps at that point will be substantially reduced.
ASP.NET MVC - Passing Data From Controller To View

First, define a simple model class:

public class Record
{
    public int Id { get; set; }
    public string RecordName { get; set; }
    public string RecordDetail { get; set; }
}

The first way of passing data from the Controller to the View is ViewBag, a dynamic property on the controller:

public ActionResult Index()
{
    Record rec = new Record
    {
        Id = 101,
        RecordName = "Bouchers",
        RecordDetail = "The basic stocks"
    };
    ViewBag.Message = rec;
    return View();
}

Add a View for the Index action by right-clicking on it. Give it a name and select the Add button. First of all, import the model class. Then assign the ViewBag into a variable inside a Razor block, and all the properties will be available through that variable.

@using PassDatainMVC.Models

@{
    ViewBag.Title = "Index";
}

<h3>Passing Data From Controller To View using ViewBag</h3>
@{
    var data = ViewBag.Message;
}
<h3>Id: @data.Id</h3>
<h3>RecordName: @data.RecordName</h3>
<h3>RecordDetail: @data.RecordDetail</h3>

Build and run your application; you will see the ViewBag data.

The other way of passing data from the Controller to the View is ViewData, a dictionary-type object similar to ViewBag. There are no huge changes in the Controller; ViewData holds key-value pairs.

public ActionResult Index()
{
    Record rec = new Record
    {
        Id = 101,
        RecordName = "Bouchers",
        RecordDetail = "The basic stocks"
    };
    ViewData["Message"] = rec;
    return View();
}

Cast back to your model class when you are using ViewData, as shown below.

@using PassDatainMVC.Models
@{
    ViewBag.Title = "Index";
}
<h3>Passing Data From Controller To View using ViewData</h3>
@{
    var data = (Record)ViewData["Message"];
}
<h3>Id: @data.Id</h3>
<h3>RecordName: @data.RecordName</h3>
<h3>RecordDetail: @data.RecordDetail</h3>

A third option is passing the model object directly to a strongly typed View:

public ActionResult Index()
{
    Record rec = new Record
    {
        Id = 101,
        RecordName = "Bouchers",
        RecordDetail = "The basic stocks"
    };
    return View(rec);
}

Import the binding object of the model class at the top of the Index View and access the properties through @Model.

@using PassDatainMVC.Models
@model PassDatainMVC.Models.Record
@{
    ViewBag.Title = "Index";
}
<h3>Passing Data From Controller To View using Model Class Object</h3>

<h3>Id: @Model.Id</h3>
<h3>RecordName: @Model.RecordName</h3>
<h3>RecordDetail: @Model.RecordDetail</h3>

Finally, TempData can pass data between consecutive requests:

public ActionResult CheckTempData()
{
    TempData["data"] = "I'm temporary data to be used in a subsequent request";
    return RedirectToAction("Index");
}

Access TempData in the Index.cshtml view like this:

<h3>Hey! @TempData["data"]</h3>

Run the application and call the respective action method. TempData uses an internal session to store the data. I hope you liked this article. Stay tuned with me for more on ASP.NET MVC, Web API and Microsoft Azure.
Do you know how to use an Android AIDL Service? This kind of service can be consumed by other processes using RPC (Remote Procedure Call). In a previous post we talked about Local Services, in other words services that only the application hosting them can consume. Android AIDL Services are useful when we want to create new functionality and distribute it so that other apps can use it. As an example, we will use the same example described last time, where we get stock quotes. Define a Remote Android AIDL Service In order to create an Android AIDL Service we have to: - define and create the service interface using AIDL - implement our service and override the onBind method, returning our interface - define the objects that the client and the server exchange, and deconstruct them at a low OS level so that they can be marshaled and un-marshaled. In other words, our classes have to implement the Parcelable interface. - configure our service in the Manifest.xml file. The service interface, written in AIDL, looks like this: package com.survivingwithandroid.aidlservicetutorial.service; import com.survivingwithandroid.aidlservicetutorial.service.Stock; interface IStockService{ void getQuote(Stock stock); } Notice that we simply import the Stock definition and then declare our single method. On the other hand, we have to define our Stock pojo in AIDL too: package com.survivingwithandroid.aidlservicetutorial.service; parcelable Stock; In this way, we have defined our service interface. If you use Eclipse (or Android Studio) you can put these two files under the source folder, in the same package. Eclipse will create everything you need to use the service. Implement the AIDL Remote Service in Android Now we have our interface, so we can implement the “real” Android service: public class StockService extends Service { ... 
@Override public IBinder onBind(Intent intent) { Log.d("Srv", "OnBind"); final ResultReceiver rec = (ResultReceiver) intent.getParcelableExtra("rec"); return new IStockService.Stub() { @Override public void getQuote(Stock stock) throws RemoteException { (new Thread(new Worker(stock, rec))).start(); } }; } } AIDL Client implementation: Android Service Bind The last step is implementing the client that calls and consumes the service. To develop the client we need: - the service interface (described in AIDL) - the pojo class exchanged by the client and server With these two elements we can create our client. We will see later how to structure the project. When we use a remote service we have to “bind” our client to it. We can do it, for example, in the onCreate method of our Activity using the bindService method: Intent i = new Intent(IStockService.class.getName()); ... bindService(i, serviceConnection, Context.BIND_AUTO_CREATE); where serviceConnection is a listener with some callback methods that can be used to monitor the service connection status. So we have to create a ServiceConnection instance to handle: - the service connection event - the service disconnection event ServiceConnection serviceConnection = new ServiceConnection() { @Override public void onServiceConnected(ComponentName name, IBinder binder) { Log.d("Srv", "Service connected!"); service = IStockService.Stub.asInterface(binder); Log.d("Srv", "Service interface ["+service+"]"); } @Override public void onServiceDisconnected(ComponentName name) { Log.d("Srv", "Service disconnected!"); service = null; } }; In onServiceConnected, we finally get the service interface that can be used to call the remote methods on the service side. Once we have the service interface, it is possible to call its methods as if they were class methods. Android Project structure One important aspect we have to consider when developing a remote service is what the client needs in order to call our service. 
A very simple approach is mixing the pojo classes, the AIDL files and the service implementation in a single lib and distributing it to the developers who want to use our service. Even if this approach works, it has some drawbacks: - the jar could be quite big, and the client app developers have to include it in their app distribution - we would be distributing our service implementation as a jar, and it is wiser to keep it somewhere else - even if we modify only the service implementation, and not the interface or the pojos, the client and server jars end up out of sync It is wiser, in my opinion, to structure the project in the right way so that we can distribute to the client developers only the classes they really need. If we use Eclipse we can create two different projects: one for the server (our service implementation) and one for the client-side lib. One important thing to remember is to mark this last project as a library. In this way, we decoupled the client lib classes from the service implementation, and we can distribute the jar related to the AIDLServiceLib. The last thing to remember is to set AIDLServiceLib as a lib of the AIDLServiceClient.
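To make the Stub/asInterface pattern used by the client code above more concrete, here is a plain-Java sketch of the shape of the code AIDL generates: a service interface plus a nested Stub whose asInterface() hands the caller something it can invoke. No Android classes are involved, so this mirrors only the structure of the generated code, not the real Binder IPC or RemoteException handling.

```java
// Plain-Java sketch of the pattern AIDL generates (no Android dependencies).
interface IStockService {
    int getQuote(String symbol);

    abstract class Stub implements IStockService {
        // In real AIDL-generated code this wraps an IBinder in a Proxy when
        // the caller lives in another process; here the "binder" is simply
        // the local implementation.
        static IStockService asInterface(Object binder) {
            return (IStockService) binder;
        }
    }
}

public class AidlShapeDemo {
    public static void main(String[] args) {
        // Server side: the service's onBind() would return this Stub instance.
        IStockService.Stub impl = new IStockService.Stub() {
            @Override
            public int getQuote(String symbol) {
                return symbol.length(); // stand-in for a real quote lookup
            }
        };

        // Client side: onServiceConnected() turns the binder into the interface.
        IStockService service = IStockService.Stub.asInterface(impl);
        System.out.println(service.getQuote("GOOG")); // prints 4
    }
}
```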
Bcryptor 1.2.2 Python wrapper for bcrypt Bcrypt is an implementation of a modern password hashing algorithm, based on the Blowfish block cipher, by Niels Provos and David Mazieres. It has been OpenBSD's default password scheme since version 2. Bcryptor uses the random number generator random.SystemRandom() to create the salts. Installation To build the module from source code, read the documentation in doc/source.txt. Logging Yamlog manages the error catching code and error reporting. Read its documentation if you want to set it up. Use Typical usage: import bcryptor hasher = bcryptor.Bcrypt() hash = hasher.create('crack my pass') And to validate: >>> hasher.valid('crack my pass', hash) True >>> hasher.valid('Crack my pass', hash) False Change history v1.2.2, 2010-02-26 - Fixed an import error when loading the package to get its docstring. It doesn't work when a module generated by Cython is imported before it has been built. v1.2.1, 2010-02-25 - Added a null handler to logging, since Yamlog might not have been set up. v1.2, 2010-02-24 - The license has been changed to ISC. - For indentation, 4 spaces are used, as indicated in PEP-8. - The management of exceptions and imports has been improved. - Better docstrings. - The cost value can be changed when instantiating Bcrypt(). - Changed from Pyrex to Cython. - The logging is managed through Yamlog. v1.1, 2009-05-20 - Initial release. - Author: Jonas Melian - Keywords: bcrypt,crypto,cryptography,hash,openbsd,password,security - Categories - Development Status :: 5 - Production/Stable - Environment :: Other Environment - Environment :: Web Environment - Intended Audience :: Developers - License :: OSI Approved :: ISC License (ISCL) - Operating System :: POSIX - Operating System :: POSIX :: Linux - Programming Language :: C - Programming Language :: Python :: 2.4 - Topic :: Security :: Cryptography - Package Index Owner: kless - DOAP record: Bcryptor-1.2.2.xml
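To illustrate why per-password salts from random.SystemRandom() matter, here is a toy sketch built only from the standard library. This is not Bcryptor's internal scheme — a single SHA-256 round is not a real password hash — it only shows the mechanics: a fresh salt makes equal passwords hash differently, and validation re-hashes with the stored salt.

```python
import hashlib
import random

_sysrand = random.SystemRandom()  # the same generator Bcryptor uses for salts

def toy_hash(password, salt=None):
    # Toy illustration only: not how Bcryptor/bcrypt derive hashes.
    if salt is None:
        salt = '%016x' % _sysrand.getrandbits(64)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest

s1, h1 = toy_hash('crack my pass')
s2, h2 = toy_hash('crack my pass')
print(h1 != h2)  # True: same password, fresh salt, different hash

# Validation re-hashes the attempt with the stored salt:
print(toy_hash('crack my pass', s1)[1] == h1)  # True
print(toy_hash('Crack my pass', s1)[1] == h1)  # False
```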
Scala-Pact A Consumer Driven Contract testing library for Scala and ScalaTest that follows the Pact standard. Scala-Pact is intended for Scala developers who are looking for a better way to manage the HTTP contracts between their services. Latest version is 2.3.9 Scala-Pact now has two branches based on SBT requirements. SBT 1.x compatible (Latest 2.3.9) All development going forward begins at 2.3.x and resides on the master branch. For the sake of the maintainer's sanity, version 2.3.x and beyond will only support Scala 2.12 and SBT 1.x or greater. SBT 0.13.x compatible (Latest 2.2.5) The reluctantly maintained EOL maintenance version of Scala-Pact lives on a branch called v2.2.x. These versions support Scala 2.10, 2.11, and 2.12 but are limited by only supporting SBT 0.13.x. More information Please visit our official documentation site for more details and examples. There is also an example project setup for reference. Getting setup Scala-Pact goes to great lengths to help you avoid / work around dependency conflicts. This is achieved by splitting the core functionality out of the library requirements which are provided separately. This allows you to align or avoid conflicting dependencies e.g. If your project uses a specific version of Circe, tell Scala-Pact to use Argonaut! One big change between 2.2.x and 2.3.x is that dependencies are now provided by TypeClass rather than just static linking. Please refer to the example setup. 
You're using SBT 1.x: Add the following lines to your build.sbt file to setup the test framework: import com.itv.scalapact.plugin._ enablePlugins(ScalaPactPlugin) libraryDependencies ++= Seq( "com.itv" %% "scalapact-circe-0-9" % "2.3.9" % "test", "com.itv" %% "scalapact-http4s-0-18" % "2.3.9" % "test", "com.itv" %% "scalapact-scalatest" % "2.3.9" % "test", "org.scalatest" %% "scalatest" % "3.0.5" % "test" ) Add this line to your project/plugins.sbt file to install the plugin: addSbtPlugin("com.itv" % "sbt-scalapact" % "2.3.9") This version of the plugin comes pre-packaged with the latest JSON and Http libraries. Thanks to the way SBT works, that one plugin line will work in most cases, but if you're still having conflicts, you can also do this to use your preferred libraries: libraryDependencies ++= Seq( "com.itv" %% "scalapact-argonaut-6-2" % "2.3.9", "com.itv" %% "scalapact-http4s-0-16a" % "2.3.9" ) addSbtPlugin("com.itv" % "sbt-scalapact-nodeps" % "2.3.9") In your test suite, you will need the following imports: The DSL/builder import for Consumer tests: import com.itv.scalapact.ScalaPactForger._ Or this one for Verification tests: import com.itv.scalapact.ScalaPactVerify._ You'll also need to reference the json and http libraries specified in the build.sbt file: import com.itv.scalapact.circe09._ import com.itv.scalapact.http4s18._ Alternatively, in case your project has both scalapact-http4s and scalapact-circe as dependencies, you could also use the following: import com.itv.scalapact.json._ import com.itv.scalapact.http._ You're using SBT 0.13.x: Add the following lines to your build.sbt file to setup the test framework: import com.itv.scalapact.plugin._ enablePlugins(ScalaPactPlugin) libraryDependencies ++= Seq( "com.itv" %% "scalapact-circe-0-9" % "2.2.5" % "test", "com.itv" %% "scalapact-http4s-0-18-0" % "2.2.5" % "test", "com.itv" %% "scalapact-scalatest" % "2.2.5" % "test", "org.scalatest" %% "scalatest" % 
"3.0.5" % "test" ) Add these lines to your project/plugins.sbt file to install the plugin: libraryDependencies ++= Seq( "com.itv" %% "scalapact-argonaut-6-2" % "2.2.5", "com.itv" %% "scalapact-http4s-0-16-2" % "2.2.5" ) addSbtPlugin("com.itv" % "sbt-scalapact" % "2.2.5") In your test suite, you will need the following import for Consumer tests: import com.itv.scalapact.ScalaPactForger._ Or this one for Verification tests: import com.itv.scalapact.ScalaPactVerify._ Note that you can use different versions of Scala-Pact with the plugin and the testing framework, which can make Scala 2.10 compat issues easier to work around while we get the SBT 1.0 release sorted out.
Subject: Re: [OMPI users] Memchecker report on v1.3b2 (includes potential bug reports) From: François PELLEGRINI (francois.pellegrini_at_[hidden]) Date: 2008-11-19 11:18:25 Bonjour Shiqing, Shiqing Fan wrote: > Dear François, > > Thanks a lot for your report, it's really a great help for us. :-) No problem. Your software helps me too, so as soon as you have fixes and new builds please tell me, so that I can try again. > For the issues: > 1) When you got "Conditional jump" errors, normally that means some > uninitialized(or undefined) values were used. The parameters that passed > into PMPI_Init_thread might contain uninitialized values, which could > cause errors (even seg-fault) later. I need some time to run your > application to check where these values exactly are. I'll post another OK. For this specific problem, though, you do not need Scotch. The involved lines are just below : #define SCOTCH_PTHREAD int main ( int argc, char * argv[]) { ... #ifdef SCOTCH_PTHREAD int thrdlvlreqval; int thrdlvlproval; #endif /* SCOTCH_PTHREAD */ #ifdef SCOTCH_PTHREAD thrdlvlreqval = MPI_THREAD_MULTIPLE; if (MPI_Init_thread (&argc, &argv, thrdlvlreqval, &thrdlvlproval) != MPI_SUCCESS) errorPrint ("main: Cannot initialize (1)"); if (thrdlvlreqval > thrdlvlproval) errorPrint ("main: MPI implementation is not thread-safe: recompile without SCOTCH_PTHREAD"); #else /* SCOTCH_PTHREAD */ if (MPI_Init (&argc, &argv) != MPI_SUCCESS) errorPrint ("main: Cannot initialize (2)"); #endif /* SCOTCH_PTHREAD */ ...and the line I used to run it was in my previous post. I don't see the problem in my coding. What did I do wrong ? If you want a Scotch tarball to play with, tell me : the release on the forge is currenlty a 5.1.2, and I am close to releasing a 5.1.3 (I will fix my double-Isend bug before, see below), so I can provide you with a 5.1.3beta along with my test data. [...] >. Agreed. I will rewrite my piece of code accordingly. I told you there might be bugs in my code. 
;-) Best regards, f.p.
In this guide we're going to cover how to print in Python, both to a file and to standard output. Printing is useful for many things, but when you're learning Python it's especially useful for debugging and logging. There are two steps to writing text to a file: open and write. To open a file for writing, we should always use the with keyword. Using with is advantageous because it properly handles closing the file even if an exception is raised. with open('filename.txt', 'w') as fh: fh.write('This is line 1') In the above code, fh is the file handle created by the open function. fh.write writes a single line to the open file. Notice the second argument to the open function, w, ensures that the file is opened for writing. To test this out, you can create a file called print_1.py and paste the above code. Then run it like this (from the Terminal): python3 print_1.py In your current directory, you should see a file called filename.txt. Check the contents: cat filename.txt Feel free to edit the file and add extra fh.write lines to see what happens. After you run the file, you'll notice that the contents have been overwritten. That is the nature of the w flag. It opens the file for writing and truncates it. If you're running a Python program and want to print text to the console, you can simply use the print function: print('Here I am!') On Unix systems (macOS, Linux, BSD, etc), everything is a file. When you use the print function you're actually writing to a file called standard output, which is abbreviated stdout. And the terminal reads from this file and displays it on the screen. To prove this, try executing the following code snippets. They are effectively the same thing: with open('/dev/fd/1', 'w') as fh: fh.write('Writing to stdout!') /dev/fd/1 is the stdout file (file descriptor 1 is standard output). import sys sys.stdout.write('Writing to stdout!') print('Writing to stdout!') When you print, you'll often want to format the text. There are many formatting options, but we're going to cover the most commonly used. 
Often you'll want to use variables inside a string of text. To do this, you'll use formatted string literals, or f-strings:

>>> name = 'Ben Franklin'
>>> f'My name is not {name}.'
'My name is not Ben Franklin.'

You can also do number formatting like this:

>>> score = 0.9
>>> f'{score:.1%}'
'90.0%'

More details can be found in the formatting specification. Importantly, an f-string can also be used as an argument to the print function:

>>> score = 0.9
>>> print(f'You scored {score:.1%} on your test.')
You scored 90.0% on your test.
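For reference, here are a few more of the commonly used format specifications (all from the standard format-spec mini-language; the values are arbitrary):

```python
pi = 3.14159
n = 1234567

print(f'{pi:.2f}')    # fixed decimals -> 3.14
print(f'{n:,}')       # thousands separator -> 1,234,567
print(f'{42:05d}')    # zero padding -> 00042
print(f'{"hi":>5}')   # right-align in width 5 -> '   hi'
print(f'{0.25:.0%}')  # percentage with no decimals -> 25%
```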
https://howchoo.com/g/mzm5mwqzyju/printing-in-python
CC-MAIN-2020-05
refinedweb
460
78.04
Compiler: gcc and/or g++
OS: Mac OS X 10.6.7
curlpp version: 0.7.3

Hi, I am having trouble compiling my cpp program. I am just trying to do an example from the curlpp website, and I am using that code exactly. I've also tried to search online to see what other people did for this problem. This directed me to another link which is further down that page, which told me to add some other libs within my curlpp libs. But still nothing worked :(. That link is this. Here are the errors I am getting. It seems simple enough, yes: it does not recognize the location of the .hpp files I am trying to use. I've tried to use the actual path to the curlpp libs with my command to run it, but still nothing... I also get the same errors compiling with g++.

sagareu:cpp Zach$ gcc curl_00.cpp
curl_00.cpp:1:29: error: curlpp/cURLpp.hpp: No such file or directory
curl_00.cpp:2:27: error: curlpp/Easy.hpp: No such file or directory
curl_00.cpp:3:30: error: curlpp/Options.hpp: No such file or directory
curl_00.cpp:5: error: ‘curlpp’ has not been declared
curl_00.cpp:5: error: ‘options’ is not a namespace-name
curl_00.cpp:5: error: expected namespace-name before ‘;’ token
curl_00.cpp: In function ‘int main(int, char**)’:
curl_00.cpp:11: error: ‘curlpp’ has not been declared
curl_00.cpp:11: error: expected `;' before ‘myCleanup’
curl_00.cpp:14: error: ‘curlpp’ has not been declared
curl_00.cpp:14: error: expected `;' before ‘myRequest’
curl_00.cpp:18: error: ‘myRequest’ was not declared in this scope
curl_00.cpp:18: error: ‘URL’ was not declared in this scope
curl_00.cpp:27: error: expected type-specifier before ‘curlpp’
curl_00.cpp:27: error: expected `)' before ‘:’ token
curl_00.cpp:27: error: expected `{' before ‘:’ token
curl_00.cpp:27: error: expected primary-expression before ‘:’ token
curl_00.cpp:27: error: expected `;' before ‘:’ token
curl_00.cpp:32: error: expected primary-expression before ‘catch’
curl_00.cpp:32: error: expected `;' before ‘catch’
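The question never got an answer in this thread, but the first three errors say the compiler cannot find the curlpp headers, and even with the headers found, the program would still need to be linked against the curlpp and curl libraries. A sketch of the usual fix, assuming curlpp was installed under /usr/local (adjust the paths to wherever your copy actually landed):

```shell
# Point the compiler at the headers (-I) and libraries (-L), and link
# against curlpp and curl (-l). Use g++ rather than gcc, since this is
# C++ code and g++ links the C++ standard library automatically.
g++ curl_00.cpp -I/usr/local/include -L/usr/local/lib -lcurlpp -lcurl -o curl_00

# If your curlpp install shipped a pkg-config file, this avoids
# hard-coding the paths:
g++ curl_00.cpp $(pkg-config --cflags --libs curlpp) -o curl_00
```

These commands assume a working curlpp install; if the headers live elsewhere (e.g. under /opt/local with MacPorts), substitute that prefix.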
https://www.daniweb.com/programming/software-development/threads/369339/compiling-with-curlpp-help
CC-MAIN-2016-44
refinedweb
340
63.86
How to save 2000 files at once

Hi, so here is my problem: I have Notepad++ with about 2000 tabs open, all unsaved, all information I'd like to keep. Can anyone advise how I can save these files all at once? I've tried a plugin called autosave2 but it just crashes. Obviously I can press File > Save As 2000 times but I don't want to do that…

- gurikbal singh last edited by

use save as admin plugin

A tiny detail you've overlooked to specify: How are these files to be named?

That's just a ridiculous suggestion.

Hello @a-s243, @alan-kilborn, @gurikbal-singh and All, Oh, my God! Personally, I cannot imagine my N++ window containing two thousand "new #" tabs opened, simultaneously, in a single N++ session !! ?? Probably, it should be 200, shouldn't it? Best regards, guy038

Probably, it should be 200, shouldn't it?

Ha. I'm sure it is 2000. Why? Because there are a lot of bizarre requests in this forum! Just like people never give enough background to receive good regex help, it would be nice to know the HOW and WHY of 2000 unsaved files.

ok yeah so, a) yes I know this is dumb, this is the result of about 3 years of opening a new tab every day and writing notes in it. we can accept I'm dumb and move on from here b) how should they be named? not bothered, just want to be able to grep it for info

- Eko palypse last edited by

you know that unsaved files are saved files on disk already, don't you? Check your backup directory, there are your unsaved files.

yes i know this is dumb, this is the result of about 3 years of opening a new tab everyday and writing notes in it.

This is downright bizarre!

we can accept i'm dumb and move on from here

Yes we can!
You could save the files to independent files on disk with the Pythonscript plugin and the following:

import os
for (filename, bufferID, index, view) in notepad.getFiles():
    if filename.startswith('new '):
        notepad.activateBufferID(bufferID)
        if editor.getModify():
            filename = os.environ['TEMP'] + os.sep + filename.replace(' ', '') + '.txt'
            notepad.saveAs(filename)
        else:
            notepad.close()

It would put your files in your %TEMP% folder. Change that to whatever you want.

Ctrl-Shift-S saves all open files. Is it not working? What is your issue? Or do you mean you have not even saved the files a first time, so there is no name or location of these files to autosave? Thanks.

@V-S-Rawat said: or you mean you have not even saved the files even first time, so there is no name or location of these files

Well that is at least the supposition.

Well that is at least the supposition. :-) not so obvious. unsaved could mean: opened from hard disk, but not saved after making changes. Anyway, would you like to share how you created those 2000 tabs? How did you put different data in those 2000 tabs? On that basis, maybe I or someone can suggest some method by which such data can be put directly into txt files on disk, instead of bringing npp in between, or maybe a pythonscript macro can run while you put those data in 2000 tabs, so that each tab/file is saved to disk as it is created. Thanks.

this is the result of about 3 years of opening a new tab everyday and writing notes in it.

oh my God. I surrender.

One method could still be: install the "Autobackup" plugin for npp; there are 2-3 of that type, so you need to study which one will serve your purpose. As soon as you install this plugin, it will make a backup of all opened tabs in the backup folder (see Configuration > Backup), so you have all 2000 tabs saved on the hard disk. You move all 2000 files elsewhere, and you get what you asked for.
https://community.notepad-plus-plus.org/topic/16922/how-to-save-2000-files-at-once/4
CC-MAIN-2019-51
refinedweb
663
80.82
I am using Xcode version 10, Visual Studio 2017, macOS Mojave. While building the application I am getting the error:

VisualStudio/7.0/MSBuild/9435_1/Microsoft.CSharp.CurrentVersion.targets(5,5): Error MSB4019: The imported project "VisualStudio/7.0/MSBuild/9435_1//Microsoft.CSharp.Core.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk. (MSB4019)

Any suggestions please?

Answers

This is a known issue in VS; you can refer to this thread. I found two tips about solving this issue.

I have added an older version of Mono, 4.8. Now the error is: Could not load file or assembly 'netstandard, Version=2.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' or one of its dependencies. Confirm that the declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. (MSB4062)

Did you try to update your VS for Mac to a version >8.0?

Yes, I updated VS for Mac to version 8.0.

You can try installing Mono 5.18.1.3, if that works.

I have updated to Mono 6.0. Now again I am getting this error in Xamarin.iOS: Error MSB4019: The imported project "/Library/Frameworks/Mono.framework/Versions/6.0.0/lib/mono/msbuild/Current/bin/Xamarin/iOS/Xamarin.iOS.CSharp.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk.

This I also tried, but I am getting the same exception.
https://forums.xamarin.com/discussion/comment/389053
CC-MAIN-2019-43
refinedweb
259
53.98
Dashboard for the e-NABLE Social Media Ecology

Thread dedicated to the creation of a dashboard for the e-NABLE Social Media Ecology that was proposed on this post: We count on community participation to bring this initiative to life.

@jonschull @andrewbrow do you know Microsoft Power BI? This tool is free (up to a certain number of records) and can be published.
* This video explains how to use it:
* Based on Andrew's evaluation we can decide whether this is an option to help with data sanitation in parallel with the development, or even an option to meet specific dashboard needs. Here in Brazil many companies are using the free version for small databases.

@evertonlins @jonschull Microsoft Power BI seems much more user-friendly when it comes to building the dashboard. This would be a great place to start building in the features that we think are the most pertinent. From there, we can explore using Dash. Dash has seemingly endless options that are only limited by one's Python experience. I know a little and can surely learn more as I go along. After all, I did get a degree in "Critical Thinking" for a reason. I say we test our ideas out with the free version of Microsoft Power BI and either upgrade to Pro ($10/month for a single user) or move to Dash if we want to expand functionality. Moving forward we need to determine what kind of data we want to portray and, more importantly, how we will collect that data.

Thanks, @andrewbrow! I believe that at this point the best path to follow is to create the business case with the assumptions, constraints and goals that will guide the project, and use Power BI (free version) for prototyping the final solution. In this way, no coding/development is needed in this first phase. And of course, if the prototype becomes good enough we can use this free version of Power BI as the final solution, if everyone agrees that it is fit for use and also fit for purpose.
I'll start a proposal for this and after that, we can start planning the next steps.

Everton Lins started a proposal Thu 6 Jul 2017
Do you accept to use Power BI (free version) to prototype the final Dashboard solution? Closed Sun 9 Jul 2017
By using Power BI (free version) no coding/development is needed in this first phase and the team can focus on data acquisition, modelling and definition. I think it is the easiest way to start. "Dream big. Start small. But most of all, start."

Jon Schull Fri 7 Jul 2017
I'm game! I'm not familiar with this but it's an impressive video. Perhaps we should start with the media data corpus I created...?

Everton Lins Fri 7 Jul 2017
@jonschull I already have the Power BI desktop version. If you could share a sample of the data I can do some work over the weekend. The tool can import txt, csv, xls and xml.

Everton Lins Fri 7 Jul 2017
About licensing, I strongly believe that e-NABLE is eligible for a Power BI subscription for free. Nonprofits Case studies * Eligibility *

Everton Lins Sat 8 Jul 2017
First test with Power BI. The final result will be a link like this embedded on a published web page. Non-static and fully functional dashboard using the free version:

Jon Schull Sun 9 Jul 2017
Fantastic! I've added these lines to the MediaMogul script I sent previously:

from pandas import DataFrame
DataFrame(articles).to_csv('articles.csv', header=True, index=False, encoding='utf-8')

and it generated articles.csv, which I am attaching here in zipped form. Good luck!

Everton Lins Mon 10 Jul 2017
After working through some bugs in Power BI, here it is -> The first data import. We have 966 valid entries in the table. In this first dashboard we can see that there is a lot of blank data in the fields COUNTRIES, CITIES and PLACES. The map takes a while to load the data but is accurate.
This is just a sample; now we can work on defining what KPIs we can get from this data, work on data structuration to adjust the blank and concatenated data in the location fields, and see if we can work on the dates to start analysing trends.

Jon Schull Thu 27 Jul 2017
Everton, somehow I missed this. So sorry. And you didn't grab me by the collar and shake me! It looks really interesting!

Everton Lins Fri 28 Jul 2017
Ha ha! No problem! Now that we know that free Power BI works, it is just a matter of setting the goals for what information and KPIs you guys want to show, and going for it. :thumbsup:

Everton Lins · Thu 6 Jul 2017
Hi @andrewbrow. I see that you can help with troubleshooting and review, and that will be great! In this first step we need to better understand how the proposal here can help. As you have some experience with Python, can you help us evaluate the effort / technical knowledge involved in developing something using this? Thank you!
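Since the cleanup work above centers on blank COUNTRIES/CITIES/PLACES fields, a quick way to size that problem before importing into Power BI is a per-column missing-value count in pandas. A minimal sketch, with made-up sample rows standing in for the articles.csv export (the column names here are hypothetical):

```python
import pandas as pd

# Hypothetical sample standing in for the articles.csv export above.
df = pd.DataFrame({
    'title':   ['Hand for Alex', 'Printing prosthetics', 'Maker story'],
    'country': ['Brazil', None, None],
    'city':    [None, None, 'Recife'],
})

# Count blank entries per column -- the data-quality check needed
# before building location-based KPIs.
missing = df.isna().sum()
print(missing)  # country and city each have 2 blanks; title has none
```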
https://www.loomio.org/d/fpMmV52j/dashboard-for-the-e-nable-social-media-ecology
CC-MAIN-2021-31
refinedweb
864
70.94
User:Aya/Wikibooks/A critique of Wikibooks
From Wikibooks, the open-content textbooks collection

A critique of Wikibooks

This is a document I am building up during the time I spend on Wikibooks. It mostly consists of a collection of current problems with the Wikibooks system, some solutions to these problems, and some provisional policies to help implement these solutions. Many of the problems and solutions listed here are ones which I have read from other users on the hundreds of talk pages on Wikibooks, and some are ones I have personally discovered. I have not distinguished between the two, since they are still problems for everyone, regardless of who identified them. This will almost certainly remain a work-in-progress for the foreseeable future, and consequently may be internally inconsistent. I don't mind if other users make minor edits to correct obvious typos, but more substantial changes and other comments are more welcome on the talk page.

Abstract

Wiki is an ephemeral community in the sense that many of its users don't remain active for very long. There is a hardcore of long-term residents, but they are comparatively few in number. Basically, at any one time, there is an active community, but its members change quickly over time. Any procedures must bear this in mind, especially in the sense that it can take a long time to get any meaningful feedback from another user, so some users are being held back by the occasional desolation of pages such as Wikibooks:Staff lounge. Consequently, I think standards need to be more pro-active, rather than discussional (i.e. the procedure would favour the writer's own passion for the subject).
Many of the pages here remind me of the hilarious scene from Life of Brian, where the inactive 'activists' known as the People's Front of Judea suddenly learn that one of their number is to be crucified, and are informed to come quickly to help, at which their leader passionately declares, "This calls for immediate discussion!". We need to define certain common issues which occur (e.g. VfD), and come up with better criteria, to minimize the amount of discussion required. All the time wasted on discussion is perhaps time that could have been spent writing good educational content. I think it's mostly due to users being too scared to be bold.

The goal of Wikibooks

First of all, I should define the goal of Wikibooks. The first definition I can find is the first meaningful revision of the Main page made on 10th July 2003. It states:

- The Wikimedia Free Textbook Project intends to cooperatively write textbooks of various fields and topics.

The same page currently (almost exactly two years later) states:

- Welcome to Wikibooks, a collection of open-content textbooks that anyone can edit.

Pretty much the same meaning, so that much seems clear. Moving on to Wikibooks:About, we are told:

- Wikibooks is a collection of free textbooks with supporting book-based texts, that is being written collaboratively on this website.

Note that the word 'textbooks' is linked to the Wikipedia entry for that word, so we can only assume that this entry is an adequate definition for the types of content permitted on this site. The Wikipedia page appears to begin with a brief dictionary-length definition:

- understanding of every subject that can be taught. It is a big business that requires mass volume sales to make the publications profitable. Although most textbooks are only published in printed format with hard covers, some can now be viewed online.
To be fair, the NPOV policy means it's fair to look at other dictionary definitions:

- A book used in schools or colleges for the formal study of a subject.
- A volume, as of some classical author, on which a teacher lectures or comments; hence, any manual of instruction; a schoolbook.

There seems to be a common connection with the academic establishments, so it would seem that anything commonly used there ought to be appropriate, e.g.:

- Generalized books about a traditional academic discipline, such as:
  - Languages (English, French, etc.)
  - Mathematics
  - Sciences (Physics, Chemistry, Biology, etc.)
  - etc.
- More specific books, but still commonly used by students:
  - Dictionaries/encyclopedias specific to a single discipline (e.g. a dictionary of medical terms).
  - In-depth guides, often crossing into multiple disciplines (e.g. electronics is also generally covered in physics and computer science courses)
  - Biographies of famous people in a particular discipline (e.g. Shakespeare), perhaps with annotated texts. These are commonly used in disciplines such as history and literature, but generalized to include any famous person in a particular discipline.

More generally, the following types of content seem to be allowed:

- Guides to computer games (lots of these, not traditionally used in any academic subjects)
- Recipe books (arguably a 'textbook' in a 'home economics' or 'cooking' course)

I consider these to be perfectly valid. Perhaps 'students' should be extended with 'enthusiasts'. Who else is going to be reading these books? My own opinion is that any book you'd find in the 'non-fiction' section of a library (i.e. those which have a Dewey decimal classification) ought to be allowed. These are commonly denied:

- 'Fiction' - Basically, any book you'd expect to find in the 'fiction' section of your local library, as opposed to the 'non-fiction' section. There is currently no official Wikimedia wiki for this sort of content.
Thinking about it, since Wikibooks is currently host to the Wikiversity and Wikijunior projects, anything in their scope ought to be okay, providing it is clearly marked as such. It's possible Wikijunior may allow educational fiction? Are these books allowed?

- An_Unexpected_Visitor - Not listed on Wikibooks:Votes for deletion
- Ardvark_the_Aardvark - Listed on Wikibooks:Votes for deletion

These are always denied, because they belong on a sister project:

- Ephemeral (news) articles. These belong on Wikinews.
- A book of quotations. These belong on Wikiquote.
- A book about the taxonomic classification of species. This belongs on Wikispecies.
- A generalized encyclopedia. This belongs on Wikipedia.
- A generalized dictionary. This belongs on Wiktionary.
- Previously published works by another author, with no intention of annotating the text. These belong on Wikisource.

Common problems

The idea is to gradually convert this section into some simple procedures which serve to eliminate them.

Unreliable administrative system

Note: This section should not be interpreted as a pejorative statement about actual Wikibooks administrators, but rather a statement about the 'system' in which they act. The actual administrators tend to do their jobs well, but there just aren't enough of them. See also Badly defined scope for acceptable content and NPOV (Neutral Point Of View). There has always been an issue that some users will create non-NPOV material or other meaningless nonsense in the main namespace. Due to the current open nature of the Wikibooks project, there is no way to prevent this. Only an administrator can delete these pages, and in order for them to do so without getting into trouble, the pages must go through a voting process (i.e. Wikibooks:Votes for deletion), with the exception of obvious vandalism and newbie experiments.
Consequently, the administrative system only really works where there are a sufficient number of users participating in these votes, and enough active administrators to get things sorted out in a reasonable amount of time. Until this is sorted out, it's most practical to pretend there are no administrators, and to find solutions which any user is capable of implementing. There are murmurings on Meta to address this too. Looking at the latest beta of MediaWiki, it seems as if a new voting system is being worked on as well, although this may be for a different purpose. Also, there has always been the general feeling that a user's pages (e.g. User:Aya and User talk:Aya), and any subpages thereof, do not have to be NPOV, and that the user may maintain strict editorial control of those pages. For example, if I were to add the comment "Some users believe this user to be a complete ass" to a user's personal page, it couldn't really be argued to be non-NPOV, but changes of this nature are generally reverted.

- Solutions

The obvious solution would be to allow this non-NPOV material providing it's housed in the originating user's own pages, and in fact many users already do this, to start work on a community project before it is ready to be transferred to the main namespace and continued by the community. This is the paradigm I have tried to propagate, and in the process I have separated these projects on the Main page into "Community Projects" and "Personal Projects". As to whether or not these personal pages deserve to be linked from the main page is another issue, but if they are ultimately intended to become community projects, I don't see this as a problem, since it helps the material to become more NPOV, by allowing other users to comment on it.

Newbies

Newbies experimenting with the system, unaware of the damage they are causing.

- Solutions

The current Wikibooks:Sandbox solution really only allows them to experiment with editing a single page.
This is by no means the entire scope of the wiki-experience. They may still accidentally create loads of pages that just need to be cleaned up later. Perhaps they want to experiment with the "move page" feature on an important page. Perhaps they've been directed here from another site, and have no idea what it's all about. Get them off the site, and onto a safe sandbox site ASAP. Arguably there should be a link in the 'navigation' panel just for newbies. Try to direct them to another wiki system to play around in, like a beta version. In fact, they are the ideal candidates to thoroughly test the error-cases for the MediaWiki software, so getting them to help beta-test the next version would be a bonus.

- Suggested sandbox wiki:
  - Controlled by m:User:Brion VIBBER (allegedly)

If that site can be guaranteed to remain up 24/7, that's exactly what we need. If we can get the DNS name sandbox.wikimedia.org to point to the same IP address, and co-ordinate with the developers to ensure that this site always contains the latest bleeding-edge release of MediaWiki, we can get some free beta-testers while minimizing site damage. We also need to make sure all Wikimedia sites include this link. This could prove more difficult, since there are hundreds of them.

Vandals

Vandals, intent on deliberately sabotaging the site.

- Solutions

User blocks and IP blocks help, but we could never stop a determined vandal without seriously compromising the ease of use of the site for the genuine contributor. The only real solution seems to be for other users to keep their eyes open, and regularly check out the 'Recent changes' list. Perhaps a formalisation of this process would help.

Unnoticed vandalism / newbie experiments

If a page is not obviously covered with nonsense, but content has been removed, or changed to be incorrect, no one may spot it, and the offending changes will not be reverted. A future editor may not spot it either, and make their additions.
The more edits which then subsequently occur, the more awkward it will be to reverse the damage. This can be quite irritating. You can't just revert back to the version prior to the damage, since you would be accused of vandalising the page yourself by removing the content of subsequent edits.

- Solutions

Two possibilities come to mind, one changing software, the other wetware (i.e. the brain):

- We could implement a system allowing you to reverse-apply the context-diffs from the offending edit or range of edits, with a single click, where possible, in the same way that CVS automatically merges in changes made by other users while you are editing a document (this would also be neat). The offending edits will remain in the history, but a new revision will be added which appears as if you removed the offending edits by hand. This will not be possible in all cases, but would work with many common offenses such as linkspamming and blanking of sections.
- We could make the user more responsible for the complete document they submit, rather than just the edits they've made. So, if you want to edit a page, you must first revert any vandalism before you add anything new. If you don't, you'll be considered a vandal yourself, and might get your account/IP blocked. Do people feel this is too harsh?

Policy visibility

A potential author, wanting to 'play by the rules', but who can't/won't look to see what the rules are.

- Solutions

The rules/standards/policies/whatever pages need to be much more visible. I'm not currently sure of the best implementation for this. The page Help:Editing is quite visible, since it's linked to from every edit page. Perhaps this should contain a warning that the user should check out the site policies before continuing, if they wish to avoid the possibility that their edits will be reverted or deleted by another user.

Badly defined scope for acceptable content

e.g. 'no original research' from WB:WIN.
Obviously a user would be wasting their time, as well as the admins' time, to add something which is only going to be subsequently deleted.

- Solutions

Can people please forget about the phrase 'no original research', and mentally revert it to the original 'no primary research' (less vague), or even something less vague still. See also: User talk:Aya/Wikibooks/A critique of Wikibooks#Be bold. I'll wait to see if KelvSYC wants to sort it out before I change anything. 'Primary research' refers to the sorts of theses published by graduate students to propose a genuinely new theory. It would be used, perhaps, to refer to, say, Newton's Laws of Universal Gravitation at any point prior to their being commonly accepted in Physics. These theses often involve coining new words and word-phrases which are not commonly used elsewhere, in order to refer to the new concepts they describe. There's nothing wrong with this, per se, but if everyone did it, then language use could easily become too confusing to be of any practical benefit (q.v. Neo). This is the primary reason that these things should not be allowed. Surely, devising a strategy for a mission in, say, GTA:SA is 'original research' and 'primary research' in the sense that you have devised it yourself. The important distinction is that it is done using commonplace terms (within its own scope), and is focused on a very popular computer game (the bestselling of all time, IIRC), rather than some wacky new theory that someone like Eddie Izzard might come up with, say, that bees are actually made of jam. Personally I don't have a problem with this stuff either, and arguably it has sneaked into a lot of other documents already. In fact, I'd go as far as to say that I interpret almost everything I read as 'primary research', unless it clearly fits in with 'common sense', which I will define as the sorts of things commonly taught in schools.
So Physics is almost certainly not 'primary research', whereas the mission walkthroughs section of Grand Theft Auto: San Andreas almost certainly is. Maybe the policy should be phrased "this site should only contain factual information", with the standard dictionary definition of 'factual', and then let memetic evolution allow the users to decide what that actually is. Arguably this appears to be the common goal of all Wikimedia projects. The policy would allow any user to correct/revert/delete anything which has been agreed to not be 'factual'. I've tried to avoid using the term 'fictional', since this has different connotations than simply being the antithesis of 'factual'.

NPOV (Neutral Point Of View)

See: Wikibooks:Neutral point of view

See also the much more comprehensive w:Wikipedia:Neutral point of view. Would it be sufficient to link the former to the latter? Arguably, this is just a special case of the previous problem. Based on some lengthy discussions from other users in the past, and my own interpretation, I think this requires a better definition.

- What is a 'neutral point of view' anyway?

To think from a neutral point of view, to me, almost requires you to imagine you are an alien visitor from another planet, trying to make an objective analysis of the human race, and their beliefs and customs.

- Bias

Take the more biased point of view:

- Microsoft Windows is the best operating system.

If you see that in a Wikibooks module, and do not agree, just remember you should really interpret that as:

- The user who wrote this believes that Microsoft Windows is the best operating system.

So you could replace it with:

- At least one person believes that Microsoft Windows is the best operating system.

But since it's likely that any opinion is shared between at least a few people, you could also word it with the more vague:

- Some people believe that Microsoft Windows is the best operating system.
If you're vaguely aware of the statistics, you could even replace it with one of:

- Most people believe that Microsoft Windows is the best operating system.
- Most people don't believe that Microsoft Windows is the best operating system.

And if you happen to know the exact statistic (I just made this statistic up):

- 63% of people believe that Microsoft Windows is the best operating system.

Note that now, you are primarily talking about people, not an operating system.

- True by definition

What about this:

- A human being is a mammal.

Again, you could apply the same logic as the previous example, but it would seem a bit ridiculous to do so. You'd probably end up with:

- More than 99% of people believe that a human being is a mammal.

The difference here is that this statement is 'true by definition'. That is to say, the word-phrase 'human being' is defined in most dictionaries to fall within the bounds of the definition of the word 'mammal', at least as long as the species retains its mamma. This is not the case for the MS Windows example, at least not in any dictionaries I've ever read.

- Ambiguous statements

Back to the MS Windows example:

- Microsoft Windows is the best operating system.

In addition to bias, it also suffers from ambiguity. How is the word 'best' defined within this context? To keep it vaguely factual, I'd probably interpret that as:

- Microsoft Windows is the most popular operating system.

That is to say, used by the most people. This is a common definition of 'best' within a similar context, but others might interpret it as:

- Microsoft Windows is the most stable and user-friendly operating system.

At which point it becomes slightly more contentious. Probably best to avoid the word 'best' anyway. :-)

- But, 'I believe'

What if the user had instead written:

- I believe that Microsoft Windows is the best operating system.

Well.
It's hardly a neutral point of view, but arguably it shouldn't be deleted; instead it should (as before) be replaced with:

- Some people believe that Microsoft Windows is the best operating system.

Thus making it neutral.

Indecisive deletion policy

It's actually a good idea to bring suspect pages into the 'Staff lounge', rather than just adding them to VfD. If something is contentious enough to warrant a VfD, then perhaps it needs to be discussed first to ensure everyone interprets it in the same way. Consequently many VfDs end up forever in voting while the users try to see if they are 'on the same wavelength'.

Wikibooks:Staff lounge

It's akin to a talk page, but it has far too broad a scope for what is acceptable there. Consequently, the page invariably becomes massive, there are no standards for when to archive the page, and it ends up being too overwhelming for new users.

- Solutions

A set of pages in the same vein, only more akin to the popular web-forum convention of subdividing different topics into different pages. Provisional subdivisions (incomplete):

- General forum - A place for all questions which don't neatly fit into any other category. Questions could be moved at a later date, but this may be offputting to
- Wiki markup forum - For questions relating to wiki markup, and the correct use of HTML on a wiki.
- Votes for deletion - This is in effect a talk page already. Adding it to a list of these forum-like talk pages could only serve to make it more visible.
- Votes for undeletion - Ditto

Stub books

It's important to distinguish more objectively the differences between books, modules, and pages. It seems that many are interpreting a 'book' to mean the same as 'an article on Wikipedia'. I guess they're similar, but modules should be reserved for things beyond the scope of Wikipedia (although this is also poorly defined - perhaps it should be "anything more than 32k of text on as many subpages as you like").
Bookshelves out-of-date

See: All bookshelves, etc. These should be updated to link to the main page of all new books. These indices are already inaccurate, and need an overhaul. Create more taxonomical and normative categories to remove duplicated entries. Make it obvious which pages represent the main page of a distinct book.

The definitive book listing ought to be one where everyone can agree where books go. For this reason, I suggest using the w:Dewey Decimal Classification, since it was created entirely for this purpose, and is extremely comprehensive. Someone has already made a start with:

- Template:Dewey 000
- Template:Dewey 100
- Template:Dewey 200
- Template:Dewey 300
- Template:Dewey 400
- Template:Dewey 500
- Template:Dewey 600
- Template:Dewey 700
- Template:Dewey 800
- Template:Dewey 900

The dewey index page at page site could be useful.

- Problems

The Dewey system is not quite standard across the world. It is apparently the most internationally recognised, but at least the section pertaining to religion varies depending on the most popular (government-imposed?) religion in each country. The DDC was written by a US citizen, so most of the religion section is devoted solely to Christianity (too much of it in my view), whilst a very small subsection is provided for alternate religions and ideologies. In countries where Christianity is not the primary religion, the vast majority of this section is devoted to the primary religion, and the smaller subsection is used for Christianity and other religions.

Another problem is evolution. The DDC is not designed to evolve well. The generalities section (circa Dewey 000) generally houses books pertaining to computers. I have also seen these classified in the physics and maths section in some libraries (circa Dewey 500). As computers become more important in our day-to-day lives, these books will become increasingly common and popular.
The existent bookshelf alternative may prove to be more successful in the long term. Its top-level categories are almost identical to those of the DDC system already. Furthermore, it is better able to evolve than the DDC. Perhaps DDC is not such a great idea after all.

Main page out-of-date

See: Talk:Main Page

Wikibooks:Browse out-of-date

Also it contains many red links, which is generally a bad thing.

Stub pages

I'm not sure why people have a problem with this. It seems reasonable to me that a user may take several days to get all their content into a meaningful place, so these should be permitted to exist for at least a certain amount of time.

Skeletons

A skeleton is a Wikibook which consists of a contents page linked to many other non-existent pages. The user who creates them will often be overwhelmed by the size of the scope they have created for themselves to fill in, and often abandons the project. Newbies also have a tendency to click on these links, and type in any old nonsense.

- Solutions

We need a good policy on starting new books. By having a stricter policy, the endless debates in VfD can be minimised. The important thing is that the policy should try to leave the book in a reasonable state at all times, since almost every book is abandoned by its authors before it can be considered complete. See: New book policy

Scope overlap with other wikis

Scope overlap with Wikipedia, and forking from Wikipedia.

- Solutions

Make a tighter definition of the distinction between Wikipedia and Wikibooks.
It's obvious that there's going to be crossover, so for subject X:

- If you can't be bothered to put in the effort to create a good book about X:
  - If there is a Wikipedia article about X, add it to that
  - If not, create a new Wikipedia article about X
- If you can be bothered to put in the effort to create a good book:
  - If there is a Wikipedia article already about X, make sure your book contains at least the information present there
  - Note heavily on the Wikipedia article that a book now exists on the subject, and all new information should be put there

I shall define a good book to mean one which has more content than is generally acceptable for a Wikipedia article (probably around 32k of text). Where an extensive and popular Wikipedia article already exists with the same name, I see nothing wrong with forking the content from there, provided it is noted heavily in the Wikipedia article.

Double redirects

Are they really a problem? Need to check this out.

Orphaned pages

Including orphaned redirect pages. These are mostly redundant, and are just taking up unnecessary space. They should be speedily deleted.

Inconsistent use of capitalization

Wiki page names are case-sensitive, so one user can create a book "A history of computers", and another could create a book "A History of Computers", and they may never notice they're both writing the same book. Are title caps preferable? This needs to be standardized, or maybe page names shouldn't be case-sensitive? The problem below is similar.

Inconsistent use of naming

Similar to the above, what about the two book names "Computing History" and "A History of Computing", or the endless conceivable permutations which are logically the same concept? Wikipedia simply uses redirects. Is this appropriate for Wikibooks too? Would these all be listed on the bookshelves?
Inconsistent use of talk pages

Mostly they talk about the page they belong to, but there are odd cases such as Wikibooks:Staff lounge, which is a talk page in its own right. Also check out the way they've been used in Christianity/John 1, and similar pages.

- Solutions

I believe the logical distinction between a page and its associated talk page is that the page should serve as the information, and the talk page should serve as the reasoning why the information is the way it is. In a way, this reasoning is vital for preventing an edit war on the information page. For works-in-progress, you often see this reasoning in the information page itself. This practice should not be discouraged.

For pages like Wikibooks:Staff lounge, where the informational page is effectively being used as a talk page, perhaps the actual talk page could be used to keep presentable information on the way things are done within that page (i.e. the two pages are conceptually swapped over), although I suppose it could also be used as a meta-talk page. It just seems a bit silly to talk about talking on a different page, when it could all be said on the main page.

As for Christianity/John 1 etc., this practice ought to be frowned upon. It has already appeared in VfD. This is a special case which needs to be tightened up in the annotated texts policy. In my opinion, the main page should not be allowed to exist alone, since it is clearly raw text from the Bible. The talk pages should perhaps be appended to the main page, then cleared, since Wikipedians currently have no place to comment on the content of the page.

Archiving talk pages

Talk pages, and other pages which serve a similar function, have a tendency to get long, and contain discussions which are no longer being monitored by anybody.

The 32k page limit

Which browsers actually have a problem with this? If it really is a problem, it should be addressed in policy.
Provisional policies and guidelines

This is an attempt to standardize, in the style of nomic, what is currently popular convention, in the hope of converting Wikibooks into something more akin to direct democracy rather than anarchy. Another analogy would perhaps be the U.S. Constitution and its Amendments, although that is representative democracy, not direct democracy. The goal is to prevent edit wars, amongst other things.

Outline

- Use of terminology

What's a good word for documents of this nature? "Policy" seems to be the wiki standard, as well as "guideline" for more trivial things. It may be worth promoting all guidelines to policies anyway. If people interpret "policy" as "I must do this", and "guideline" as "I might do this", then guidelines would clearly be more effective if they were labelled "policies". The psychological interpretation of language is a key point. Perhaps the word "procedure" is better, since it implies activity, rather than politics.

- Wikibooks:Policies and guidelines - This page should contain no actual policies, but instead serve as a useful index to all other policy documents. The definitive index should always be the templated categories. This page will also contain guidelines.
- Policy documents
  - Wikibooks:Core policies - This will contain meta-policies, which serve to indicate how policies are created/modified/removed, but can also be used for policies which don't require a whole document on the subject.
  - Wikibooks:Editing policy - As it is. etc.

- Alternatives

Maybe the problem is simply that by breaking up the policies into separate documents, users are too lazy to read them, since each page can take a long time to load. Based on the overheads of HTML and HTTP, it is certainly true to say a single larger document will be quicker to load than several smaller ones. Perhaps a single page is better, although each section may have to be extremely terse to make it digestible.
Since there seems to be a lot of policy forking going on between wikis, maybe the whole lot should be copied to meta and deleted from all sites, including policy for individual projects such as Wikibooks and Wikipedia.

- Visibility

I'm thinking of those signs you see on trash bins that say "keep <locale> tidy"; the locale of course depends on where you happen to be at the time. An image of that, but reading "keep wikibooks tidy", to splat on various pages, assuming we can link it to an appropriate page.

How to write effective policy documents

- Know your audience

This is en.wikibooks.org, so users here are most likely going to be from countries with a large English-speaking population and reasonable internet access. Using w:List_of_official_languages and the CIA factbook:

- India (1,080,264,388 - also Hindi)
- USA (295,734,134 - ~82.1%)
- Pakistan (162,419,946 - <8%)
- Nigeria (128,765,768)
- Philippines (87,857,473 - also Filipino)
- UK (60,441,457)
- South Africa (44,344,136 - ~8.2% - also many others)
- Kenya (33,829,590 - also Swahili)
- Canada (32,805,041 - ~59.3% - also French)
- Australia (20,090,437 - ~79.1%)
- Hong Kong (6,898,686)
- Papua New Guinea (5,545,268 - <2%)
- Singapore (4,425,720 - ~23% - also many others)
- New Zealand (4,035,461 - also Maori)
- Republic of Ireland (4,015,676 - also Irish)

The number in brackets is total population, which is not necessarily the same as the English-speaking population. Countries with fewer than 2 million people are not listed. This list is no indication of internet access. Also, some countries' languages are official not by law, but only by custom. This distinction is irrelevant within the scope of this document.

- Use unambiguous language

Technically impossible, since all people interpret language slightly differently, but bear it in mind. Some useful tricks are:

- Link the word to a definition in Wikibooks:Glossary, and tighten the definition there.
- Link the word to a page in Wikipedia.
Many words and word-phrases already have disambiguation pages there, so this may also get the point across.

- Use examples

So, for example, when describing the correct use of naming, give an example of a book that conforms. This may be far more meaningful than any amount of explanation could ever achieve.

- The 'wall-of-text' effect

When faced with a huge paragraph, people tend to only read the first couple of sentences, then let their mind fill in the rest. Short, bullet-pointed sentences are much easier to digest. Use whitespace to break things up, and try to make policies terse, while their justification can be as verbose as is necessary to justify that policy.

- The Bible metaphor

Easily one of the most influential books in western civilization, and in many ways like an early version of the now-commonplace legislative set of rules comprising the social contract. I'm not sure if the 11th commandment was ever intended to be "Thou shalt not infringe on intellectual property rights.", but that seems to be the way things have become.

Its memetic popularity can be attributed to its use of two distinct psychological tools. The first is wording it to make readers believe that they will suffer the torment of burning in eternal damnation if they don't read and follow the rules. The second is wording it to encourage the reader to propagate the meme to the minds of other people.

The practical wiki-equivalent of eternal damnation would be things like blocking user accounts and IP addresses, and having contributions deleted or reverted. Obviously, only the latter can be performed by a regular user, and since there are relatively few admins at the moment, this may prove to be the most effective deterrent.
- The Selfish Gene & Prisoner's Dilemma metaphor

Wording the rules to imply that a positive consequence will occur to the user who follows them will serve better than implying that a positive consequence will occur to the remainder of the Wikibooks community. In short, you get better results by appealing to an individual's sense of selfishness than to their sense of altruism.

Core policies

- Definition of terms

This is extremely important. In order to communicate unambiguously, we need to agree on the same terminology. Wikibooks:Glossary is a good starting point for this.

- Current policy definitions

I had already added some of this to Wikibooks:Policies and guidelines, and created the templates Template:Enforced, Template:Proposed, Template:Rejected to make it more similar to Wikipedia, but perhaps it was not such a good idea. I shall refrain from making any further changes until this is sorted out.

There are three kinds of policy:

- Enforced policy - This will be enforced by the Wikibooks community.
- Proposed policy - This is a policy proposed to become an enforced policy, but still undergoing discussion.
- Rejected policy - This is a proposed policy which has been rejected by the community. It should remain to remind people why it was rejected.

When the term policy is used without one of these qualifiers, it is assumed to mean enforced policy. A guideline is the much less formal equivalent of a policy. Guidelines generally don't require enforcement, because they're mostly obvious social behavioural guidelines for various situations. If they become problematic in the future, they may be promoted to a full policy document.

- Existing systems

Perhaps we should use the standards from RFC 2119, since they are tighter definitions of commonplace English words: (1) MUST, (2) MUST NOT, (3) SHOULD, (4) SHOULD NOT, and (5) MAY.
Translated into wiki terms, this would mean that breaking 1 and 2 would be grounds for banning, 3 and 4 would be grounds for reverting/page renaming/merging, and 5 is just a suggestion. e.g.

- You MUST NOT vandalise pages.
- You SHOULD sign posts on a talk page.
- You MAY write a new book. :-)

In a way, a wiki is a much better system for "Requests For Comments (RFCs)", since each page has a talk page precisely for this purpose. Maybe this would be a better metaphor, but it might confuse it with the IETF documents on which it was based. On the subject of RFCs, the success of the internet tends to suggest the IETF did a good job of writing those documents. They are very precise, and some are so well written you'd have a hard job arguing with them, except perhaps if you beg to differ on the definition of a fundamental principle like an "electron".

- Provisional nomic-esque policies

I will have to change these:

- Every core policy is mutable (i.e. it can be changed).
  - No legal system can adapt to the changing nature of the universe without being flexible.
- Every core policy must have at least one justification.
  - The tragedy of the commons effect would suggest that people are far less likely to follow standards without adequate justification.
  - It will prevent newbies from asking, "why is X a policy?"
- The process for changing the core policies should be by submitting the proposed changes to the relevant talk page. The voting process begins at this time. Each registered user gets one vote, either 'for' or 'against'. The user may in addition specify a justification for their vote, even though it may have the secondary consequence of persuading subsequent voters to vote the same way. The voting period will last for one week. After this time, if the number of 'for' votes is greater than or equal to the number of 'against' votes, the change is considered accepted, and the user who proposed the change may now apply it to the appropriate page.
This includes the case where there are no votes at all.

- By specifying a time limit for voting, things can still happen even when the site is rarely visited.

- Issues to resolve
  - Is one week a good length of time to allow for voting?
  - Should we have another set of less-mutable standards which require a minimum number of votes to ratify?
  - Better ideas?

Anything wrong with just:

- If you think it's wrong, just change it
- If your change gets reverted, talk about it
- If you still don't agree, vote on it

Seems about as pro-active as I can think of.

Navigation panel policy

Since the navigation panel is visible on all pages (is this true for all available CSS pages?), it is a useful tool for helping people find what they are looking for, especially if they have been directed to a specific page on this site, rather than to the main page or portal. These links can't be changed by regular users, so perhaps a page should be put in place to specify them, while allowing its talk page to serve as a place to discuss what they should be. Currently they are:

- Wikiversity -> Wikiversity
- Community Portal -> Wikibooks:Community Portal
- Recent changes -> Special:Recentchanges
- Random module -> Special:Random
- Help -> Help:Contents
- Donations ->

I'm thinking one should link to a page devoted to people who are new to the wiki experience, and that this page should provide an obvious link to the sandbox wiki.

Book writing policy

(a.k.a. how to make sure that your edits won't get reverted by anyone, or your pages deleted by an administrator.) Obviously, it's a complete waste of time to work on a book which will subsequently get reverted or deleted.

- Writing a new book from scratch

- Before you start a new book, check there isn't one already which covers the scope you have in mind.
By adding content to an existing book (especially if you use the correct subpage convention), rather than creating a new 'stub' book, your change will more likely be accepted by the Wikibooks community, and you will be doing a 'good thing' in trying to keep similar content in the same book.

- When starting a new book, the most important thing is to choose a good title for it. Should you subsequently abandon the project, this title will in many ways determine its future scope, since it then becomes up to other users to determine what information belongs in it, based on its title.

- Once you have a title for your book, try to start out with the intention of writing everything on a single page. Perhaps start with a single paragraph to explain the scope of the book. If you don't provide even this much, your work will likely get declared a 'stub', and deleted at a later date. To avoid this, you generally have a lot more freedom in the sorts of content you can place on your user page, or a subpage thereof. Some users prefer to start their book as a subpage of their user page, then move it to the main namespace when they think it is of sufficient quality to 'publish'. If you start your book in the main namespace, you will more likely find that other users will edit it, and might be prepared to help out with writing it.

- As ideas for 'chapters' in the book form in your mind, start using the '==' subheading notation to subdivide it, and try to provide at least a sentence for each subheading to explain what should go in that section. Once you have at least three or four of these, the page will automatically generate a table of contents for your book. If you like, you can further subdivide with '===' sub-subheadings, etc.

- Once your page reaches the point where some of the sections contain a substantial enough amount of content to warrant a whole page on the subject, you can start playing around with subpages.
Using the '/' subpage convention will save you a lot of effort, since it automatically provides links to its parent pages. Try to avoid creating links to pages that don't exist; these often get clicked on by newbies, who seem to think that "fsghfgpouhgp" is good enough for the content. Also try to avoid 'stub' pages which contain only a single sentence. Many wiki users consider these to be evil, and will often mark them for deletion, especially when they do not use the '/' subpage convention to indicate they are subpages of another book.

- Transferring existing content to form a new book

- First, make sure you are legally allowed to put the source onto the wiki at all. See: Wikibooks:Copyrights
- If the source text is already complete, and you have no intention to change it, it should be put on Wikisource instead. This site is for books in development. See also: Wikibooks:Annotated texts
- If the source text needs work to convert it to use wiki markup, it might be worth temporarily housing the page as a subpage of your user page. So-called data dumps are generally frowned upon.

Editing policy

Deletion policy

- Don't add a page to VfD unless it hasn't been edited for at least 1 week.
  - Someone trying to build up a book from scratch may take a while to get any significant amount of content in.

Namespace use policy

Personally I find the namespacing system to be a little inflexible, since you can't dynamically add new ones. However, it might be worth formalizing things like:

- All pages about Wikibooks itself, rather than actual books, should be in the 'Wikibooks' namespace.
- Every page name in the main namespace which clearly contains no ':' or '/' to identify it as a page in a book should be considered a 'book' in its own right.
- If a page appears to be part of another book, please rename (move) the page to use the bookname/pagename convention.
- The point is, if someone can't be bothered to collate the pages of their own book, they clearly need the '/' convention to provide this automatic linking for them.

There are also some pages in the 'Help' namespace, with no clear distinction as to what belongs there, compared to what should belong in the 'Wikibooks' namespace.

Page naming policy

- On the subject of '/' vs. ':'

There are two key differences between the two. The first is a simple typographical distinction, and the other is the way the wiki software behaves when rendering the page.

Typographically, it may be preferable to use the '/' syntax since:

- It distinguishes it from the ':' used for namespaces, which operate in a different fashion, and may serve to confuse users as to their function. e.g.
  - The corresponding talk page for Wikibooks:Administrators is Wikibooks talk:Administrators
  - The corresponding talk page for Programming:C is not Programming talk:C, but Talk:Programming:C.
- It's common URL convention anyway. There are dozens of RFCs on the subject.
- On my keyboard, I don't have to press SHIFT to make one appear. Okay, a bit selfish, but I suspect that 99% of en.wikibooks.org users have the same keyboard layout as me in that regard.

Behaviourally, however, the two are not nearly as equivalent, since using '/' will create a link to its parent pages, whereas ':' will not. The user may not want this link, since they may be using a far more sophisticated template page for navigation purposes. Unless there is a way to optionally suppress the link added using '/', you can argue that both systems need to be retained for flexibility. If there is such a system, then I'd argue for dropping the ':' syntax, to be used solely as a means to easily identify which namespace a page is in.

There are practical considerations as well. If we do decide to use one over the other, is anyone actually going to change all the existing books to be conformant?
See also: User talk:Aya/Wikibooks/A critique of Wikibooks#The dangers of page naming

- On the use of hierarchy generally

There is also a great deal of flexibility in the use of hierarchy generally. See: User talk:Aya/Wikibooks/A critique of Wikibooks#The dangers of hierarchies

Category use policy

There doesn't seem to be anything about this at the moment, and I've not really put a lot of thought into it, so I'll put in:

- Use categories at your own discretion.

As for hierarchical categories such as... ...or the same, but using a '/' instead of a ':' - be aware that the use of these delimiters is not interpreted by the software in any way, so it is merely a typographical distinction. For the same reasons as page naming, I would suggest the use of '/' rather than ':' to avoid the confusion between concrete namespaces and artificial scopes.

As for the conceptual 'scoping' of categories using this notation, I guess it's reasonable to do so where said categories would otherwise serve to pollute the main category namespace. This may be more necessary when creating categories based on fictional universes such as Pokémon, rather than the one we actually live in. If you merely wish to provide hierarchy to real-world concepts, just choose a fairly unambiguous name, and use sub-categories to create hierarchy. That is, if you add a category to another category, it is considered a sub-category of that category. Implying hierarchy through the name will only make it more awkward to modify that hierarchy later. A reasonable example might be:

- Category:Cooking
  - Category:Recipes - 'recipes' is probably unambiguous enough
  - Category:Kitchen equipment

The problem is, categories duplicate the functionality offered by the '/' subpage notation. The same thing could have been achieved by structuring the book thus:

- Cookbook
- Cookbook/Recipes
- Cookbook/Recipes/Fish - notice here, the scope allows us to drop the word 'recipes' from the end.
- Cookbook/Recipes/Meat
- Cookbook/Recipes/Vegan
- Cookbook/Kitchen equipment

I guess it's arguable which is better. You could even write your entire book in the Category namespace, I guess, just to exploit its subcategory abilities. This section has been disputed on the talk page.

Image use policy

Administration policy

Page protection policy

In a perfect world, no pages would need to be protected; in fact, this open attitude should be strongly encouraged. Many users can be trusted to improve the formatting of a page without changing its content, and to update links which become out-of-date. In the world we live in, however, it seems apparent that some pages need to be protected.

I think the real problem is not the opportunist vandals, since it probably takes them more time to vandalise a page than it does for another user to revert it. The real problem occurs when these vandalism attempts are scripted. So far it seems this is limited to automated linkspamming software that attempts to get higher page rankings for its pages, most likely on the Google search engine. If this is the case, then merely protecting high-profile pages, such as those displayed for the URLs wikibooks.org and en.wikibooks.org, should suffice. The first links to Wikibooks portal (which is currently not protected, and has been targeted by linkspammers in the past); the second links to Main page, which is heavily protected and monitored.

This kind of scripting could evolve into much more sophisticated vandalism attacks, which could be incredibly annoying for the average user to revert, and would most likely require a complete database rollback. These sorts of attacks could be detected and prevented, but the current system in place may not be sufficient forever. The future solution would most likely involve the Apache server keeping an eye out for many edits in a very short space of time, all coming from a single IP address.
Having said that, some of these scripts are quite cunning. I've seen what I refer to as 'distributed linkspamming attacks', which come quite slowly from multiple IP addresses, to make them look like legitimate edits. In the meantime, the current system of blocking IP addresses will have to suffice. It might be worth formalising a procedure for checking recent edits for obvious vandalism. See: Edit review policy

Edit review policy

Some sort of simple procedure for those who can be bothered to review edits for vandalism, linkspam, new pages not conforming to naming policy, uploading of illegal images, etc. For example: if you can't be bothered to check all edits made, perhaps just check those made by non-registered users (vandalism attacks will most likely come from these).

- Useful links
  - Special:Specialpages - All special pages (these need better descriptions)
  - Special:Allpages - The definitive place, I suppose, to find every page.
  - Special:BrokenRedirects
  - Special:Categories - Categories in red probably need to be created, or removed from the containing page.
  - Special:Deadendpages - Probably a good place to find pages of a book which need to be linked back. Annoyingly, pages created with the '/' convention are automatically linked back, yet they still appear here.
  - Special:DoubleRedirects
  - Special:Longpages
  - Special:Newimages - Check for uploading of illegal images
  - Special:Newpages - Check for conformity of naming
  - Special:Lonelypages - Orphaned pages
  - Special:Recentchanges - Obviously
  - Special:Shortpages - For stubs
  - Special:Unusedcategories - These should probably be deleted
  - Special:Wantedpages - Perhaps references to these should be removed in skeleton documents.
http://en.wikibooks.org/wiki/User:Aya/Wikibooks/A_critique_of_Wikibooks
Netdev features mess and how to get out from it alive

- Author: Michał Mirosław <mirq-linux@rere.qmqm.pl>

Part I: Feature sets

Long gone are the days when a network card would just take and give packets verbatim. Today's devices add multiple features and bugs (read: offloads) that relieve an OS of various tasks like generating and checking checksums, splitting packets, classifying them. Those capabilities and their state are commonly referred to as netdev features in Linux kernel world.

There are currently three sets of features relevant to the driver, and one used internally by network core:

- netdev->hw_features set contains features whose state may possibly be changed (enabled or disabled) for a particular device by user's request. This set should be initialized in ndo_init callback and not changed later.
- netdev->features set contains features which are currently enabled for a device. This should be changed only by network core or in error paths of ndo_set_features callback.
- netdev->vlan_features set contains features whose state is inherited by child VLAN devices (limits netdev->features set). This is currently used for all VLAN devices whether tags are stripped or inserted in hardware or software.
- netdev->wanted_features set contains feature set requested by user. This set is filtered by ndo_fix_features callback whenever it or some device-specific conditions change. This set is internal to networking core and should not be referenced in drivers.

Part II: Controlling enabled features

When current feature set (netdev->features) is to be changed, new set is calculated and filtered by calling ndo_fix_features callback and netdev_fix_features(). If the resulting set differs from current set, it is passed to ndo_set_features callback and (if the callback returns success) replaces value stored in netdev->features. NETDEV_FEAT_CHANGE notification is issued after that whenever current set might have changed.
The following events trigger recalculation:

1. device's registration, after ndo_init returned success
2. user requested changes in features state
3. netdev_update_features() is called

ndo_*_features callbacks are called with rtnl_lock held. Missing callbacks are treated as always returning success.

A driver that wants to trigger recalculation must do so by calling netdev_update_features() while holding rtnl_lock. This should not be done from ndo_*_features callbacks. netdev->features should not be modified by the driver except by means of the ndo_fix_features callback.

Part III: Implementation hints

- ndo_fix_features:

  All dependencies between features should be resolved here. The resulting set can be reduced further by networking-core-imposed limitations (as coded in netdev_fix_features()). For this reason it is safer to disable a feature when its dependencies are not met instead of forcing the dependency on.

  This callback should not modify hardware nor driver state (it should be stateless). It can be called multiple times between successive ndo_set_features calls.

  The callback must not alter features contained in the NETIF_F_SOFT_FEATURES or NETIF_F_NEVER_CHANGE sets. The exception is NETIF_F_VLAN_CHALLENGED, but care must be taken as the change won't affect already configured VLANs.

- ndo_set_features:

  Hardware should be reconfigured to match the passed feature set. The set should not be altered unless some error condition happens that can't be reliably detected in ndo_fix_features. In this case, the callback should update netdev->features to match the resulting hardware state. Errors returned are not (and cannot be) propagated anywhere except dmesg. (Note: successful return is zero, >0 means silent error.)

Part IV: Features

For the current list of features, see include/linux/netdev_features.h. This section describes the semantics of some of them.

- Transmit checksumming

  For a complete description, see the comments near the top of include/linux/skbuff.h.
  Note: NETIF_F_HW_CSUM is a superset of NETIF_F_IP_CSUM + NETIF_F_IPV6_CSUM. It means that the device can fill a TCP/UDP-like checksum anywhere in the packets, whatever headers there might be.

- Transmit TCP segmentation offload

  NETIF_F_TSO_ECN means that hardware can properly split packets with the CWR bit set, be it TCPv4 (when NETIF_F_TSO is enabled) or TCPv6 (NETIF_F_TSO6).

- Transmit UDP segmentation offload

  NETIF_F_GSO_UDP_L4 accepts a single UDP header with a payload that exceeds gso_size. On segmentation, it segments the payload on gso_size boundaries and replicates the network and UDP headers (fixing up the last one if less than gso_size).

- Transmit DMA from high memory

  On platforms where this is relevant, NETIF_F_HIGHDMA signals that ndo_start_xmit can handle skbs with frags in high memory.

- Transmit scatter-gather

  These features say that ndo_start_xmit can handle fragmented skbs: NETIF_F_SG — paged skbs (skb_shinfo()->frags), NETIF_F_FRAGLIST — chained skbs (skb->next/prev list).

- Software features

  Features contained in NETIF_F_SOFT_FEATURES are features of the networking stack. Drivers should not change behaviour based on them.

- LLTX driver (deprecated for hardware drivers)

  NETIF_F_LLTX is meant to be used by drivers that don't need locking at all, e.g. software tunnels. This is also used in a few legacy drivers that implement their own locking; don't use it for new (hardware) drivers.

- netns-local device

  NETIF_F_NETNS_LOCAL is set for devices that are not allowed to move between network namespaces (e.g. loopback). Don't use it in drivers.

- VLAN challenged

  NETIF_F_VLAN_CHALLENGED should be set for devices which can't cope with VLAN headers. Some drivers set this because the cards can't handle the bigger MTU. [FIXME: Those cases could be fixed in VLAN code by allowing only reduced-MTU VLANs. This may be not useful, though.]

- rx-fcs

  This requests that the NIC append the Ethernet Frame Checksum (FCS) to the end of the skb data.
  This allows sniffers and other tools to read the CRC recorded by the NIC on receipt of the packet.

- rx-all

  This requests that the NIC receive all possible frames, including errored frames (such as bad FCS, etc.). This can be helpful when sniffing a link with bad packets on it. Some NICs may receive more packets if also put into normal PROMISC mode.

- rx-gro-hw

  This requests that the NIC enable Hardware GRO (generic receive offload). Hardware GRO is basically the exact reverse of TSO, and is generally stricter than Hardware LRO. A packet stream merged by Hardware GRO must be re-segmentable by GSO or TSO back to the exact original packet stream. Hardware GRO is dependent on RXCSUM, since every packet successfully merged by hardware must also have its checksum verified by hardware.
30 October 2006 14:03 [Source: ICIS news]

MADRID (ICIS news)--Russian potash producer Uralkali was forced to permanently close its first mine division (BKPRU-1) at Berezniki following a flood, the company said.

The flood occurred on 19 October following a break in a section of an old part of the mine, which caused an inflow of brine and hydrogen sulphide emissions.

As a result of the closure, Uralkali's capacity to extract potash ore has been reduced by around 20%. However, its processing plant at Mine 1, which has the capacity to produce 900,000 tonnes/year of pink standard Muriate of Potash (MOP) and 500,000 tonnes/year of white standard MOP, has not been affected.

The refineries at Mines 3 and 4 are undergoing expansion at present. Processing plant 3 is being expanded by 880,000 tonnes/year of MOP. The first phase was due for completion by the end of 2006 and the remainder by 2010. The expansion at processing plant 4 was scheduled for completion in 2009, when its capacity would increase by 1.25m tonnes/year.

The current expansion of these production capacities will be brought forward to help offset the loss of output at Mine 1, the company added.

Uralkali has revised its MOP production forecast for 2007 to 5m tonnes from 6.2m tonnes.
Thomas Gschwind
Benjamin A. Schmit
Abteilung für Verteilte Systeme
Technische Universität Wien
Argentinierstraße 8/E1841
A-1040 Wien, Austria, Europe
{tom,benjamin}@infosys.tuwien.ac.at

Every day, the web is applied to more and more new application domains. In some cases people try to convert services that historically have been handled by a different mechanism, such as USENET News, to the web. In other cases, such as email, they are complemented by a mechanism that allows users to access these services via a web front-end. Due to this success and the increasing number of application domains, web server performance is of major importance [8,11], and unresponsive and slow web sites may send users seeking alternatives [5].

Traditionally, web services have been implemented using CGI programs in the form of compiled C and C++ programs or Perl scripts that were executed by the web server. Although these approaches work perfectly fine for simple dynamic web content, they are cumbersome to use if a whole business process should be modeled as a web application. This stems from the fact that they do not take care of the state management between subsequent web requests. These issues, however, are taken care of by newer approaches such as the Java Servlet Technology [22], the Python Zope Server [14], or dedicated application servers (e.g., Bea Weblogic or JBoss).

One advantage of servlets is that they are executed as part of a servlet environment that, unlike CGI scripts, need not be restarted at each invocation. To protect one servlet from another, these environments use programming languages that take care of memory management issues. Another advantage is that these languages come bundled with standardized libraries providing support for numerous different tasks. These advantages, however, have a price. Legacy applications that make use of C or C++ cannot be easily integrated.
Although technologies such as SWIG [4] or JNI [16,25] simplify the integration of legacy applications into scripting languages they still require developers of such systems to deal with different systems. Sometimes the choice of language is mandated by management or the developers simply prefer the use of C or C++. Another advantage of C and C++ is that these languages provide better performance and a finer grained integration of the operating system's security mechanisms. This is probably one of the reasons why the Apache HTTP Server and the Microsoft Internet Information Server, the two most prominent web servers with a combined market share of over 85% [20], are written in C and C++ respectively. In this paper, we present a servlet environment that is completely implemented in C++. To the best of our knowledge, our C/C++ Servlet Environment (CSE) is the first servlet environment that uses the potential of C++. Similar to the servlet engines implemented in Java, the architecture of our servlet environment offers adequate protection between the individual servlets to be executed. Hence, our servlet engine provides the following advantages over those using Java or scripting languages: This paper is structured as follows. In Section 2 we present the requirements for a servlet environment as well as the advantages and challenges of using C++. Section 3 shows how these challenges have influenced the design and implementation of the C/C++ Servlet Environment, and application development for the CSE is discussed in Section 4. Related work is presented in Section 5 and in Section 6 we compare the performance of our approach with that of other approaches. Future work is discussed in Section 7 and we draw our conclusions in Section 8. The main goals during the design of the C/C++ Servlet Environment were security, stability, ease of use, parallelism, and performance. 
Before we can have a closer look at these requirements, however, it is necessary to give a brief overview of the typical architecture of a web site. This architecture is depicted in Figure 1. A web site must include a web server that handles requests from multiple clients. Requests that cannot be handled by the web server itself are forwarded to a CGI program or a servlet engine where a web service is executed. The web service in turn may contact a database or application server for persistent data storage. Since HTTP is stateless [7], the client has to transmit the application's current state or a session identifier with each request. In the latter case, a mapping from session identifier to state has to be maintained by the servlet engine.

CGI programs have the drawback that they have to be executed anew for each client's request. Hence, they need to be started at each request and then have to read in their configuration data and session state from persistent storage. A servlet engine, on the other hand, is running all the time and keeps several servlets as well as their session state in memory. Servlets need to maintain the client's session state (for instance, the contents of a shopping basket) because web clients only provide limited functionality to maintain this kind of data.

Since many servlet engines execute several different servlets within the same process, stability is a major concern. If one servlet crashes, the other servlets have to continue to run unaffected. For a servlet engine, it is very important to recognize and handle this kind of failure since there is no way of judging the stability of a user-supplied servlet from within the servlet engine. Stability is one reason why Java and interpreted languages in general are popular for this task. If a Java servlet crashes, only its thread of execution is terminated and all the other servlets continue to run.
Hence, the worst that can happen is that a servlet consumes excessive resources such as processor time, memory in the form of unused but still referenced Java objects, or network bandwidth. The price for these benefits is that all servlets are executed with the same privileges and that the operating system's security mechanisms need to be re-implemented as part of the servlet engine. Another drawback is a slight performance overhead, since Java does not allow developers to write code on a level as low as it can be written with C and C++.

The disadvantage of C and C++ is that these programming languages are not as safe. Bugs in C and C++ programs typically take down the whole process, and they tend to become apparent at a much later time (typically at a point of execution that is unrelated to the place that caused the error). Therefore, even if the application is catching the signal that a segmentation violation has occurred, recovery is difficult. Hence, the design of a servlet engine written in these languages has to solve those challenges.

Security is necessary to minimize the chance that servlets can be exploited by an intruder. Our architecture reuses the security mechanisms that have been built into the operating system. It allows sandboxes, and thus servlets, within the same servlet environment to be executed with different user privileges. The advantage of this approach is that system administrators can use standard access privilege mechanisms and do not have to get familiar with a new security management system, which can lead to misunderstandings. Additionally, our approach requires only a single security mechanism to be checked for possible vulnerabilities. A disadvantage of C and C++, however, is that such programs are open to buffer overflow attacks if they have not been implemented carefully.
The threat and impact of such attacks can be minimized by using the C++ Standard Template Library, which has been designed to minimize such programming mistakes, and by using operating system mechanisms such as a non-executable user stack area. No programming language or servlet environment, however, can guarantee that a program or servlet cannot be exploited by an intruder. The final responsibility is always up to the developer.

Ease of use is another important aspect for a servlet engine. Even though there are servlet engines for Java that provide good performance, these servlet engines are of little use to C or C++ programmers that want to use existing code for their web applications. Using C and C++ in combination with such servlet engines requires the use of the Java Native Interface (JNI) [16,25], the conversion between C and Java data types, and dealing with low-level details of both languages. Although systems such as SWIG [4] can be used to help with this issue, they do not solve the problem of having to deal with two different languages. Hence, a C++ servlet environment is more convenient to use for developers that have to deal with C or C++ code.

A good servlet engine must not only be easy to use but has to provide good performance as well. This is apparent from the fact that the two most popular web servers are implemented in C and C++ respectively. A convenient servlet environment that requires load balancing among multiple servers to be able to handle the incoming requests loses much of its original appeal. This is one of the reasons why we have developed the CSE. The CSE was designed for optimal performance.

Parallelism is, on a server machine, a prerequisite for performance and scalability. On a network, the upper bound of the access time to a service can be high and thus forbids handling requests serially.
The CSE introduces parallelism by providing several services on a single machine, by partitioning a service into several parts with simple interfaces between them, and by replicating those parts. Additionally, compared to using interpreted languages that use a global interpreter lock for thread locking [28, Section 8.1], the approach of using C++ has the advantage of a fine-grained thread locking mechanism as provided by linux-threads or pthreads [15].

Figure 2 shows the architecture of our C/C++ Servlet Environment. It consists of an Apache module, a servlet server, and several sandbox processes. The CSE uses the Apache server to handle incoming HTTP requests, the servlet server is used for the management of the sandbox processes, and the sandboxes are responsible for the execution of the servlets. We use different sandbox processes because this protects a servlet in one sandbox from an unstable one in another sandbox. Additionally, this approach allows administrators to execute different sandboxes with different user privileges. Although we have used C++ for the implementation, our architecture can be easily reused for a servlet environment implemented in plain C.

We use the Apache web server [1] to handle HTTP requests. This approach has several advantages over implementing a new web server for our servlet engine such as provided by the Tomcat Servlet Engine [2]. Another advantage of the modular design we have chosen is that only the web server's Servlet Module has to be re-implemented if a different web server has to be used for a given web site. The Apache Module registers the URLs that are implemented by the servlet engine's servlets and the C++ Server Pages. Requests to other URLs such as static web pages or images are handled by Apache itself without consulting our module. Servlet requests are forwarded to the servlet server, which assigns them to one of the sandboxes. One risk of this approach is that the servlet server might become a bottleneck.
Hence, in future versions of CSE, we plan to extend the Apache module such that the requests are sent to the appropriate sandbox directly. Going a step further and executing the servlets by the Apache module directly, however, is not possible since that would compromise the web server's stability.

The Servlet Server is responsible for the management of the sandboxes. To do that, it processes requests forwarded by the Apache module and determines, based on the servlet registry, the sandbox that is responsible for the servlet's execution. The Servlet Registry maintains a mapping of the servlets and the sandboxes they should be executed in. This mapping is defined within the server's configuration file.

If a C++ Server Page (CSP) is used, the CSP Manager converts the CSP document into a servlet and compiles it. Several Compiler Threads can be started by the server in order to compile CSPs when they are first requested or when they have changed. They are synchronized so that no more than a single thread for a single servlet can run at a time. The compiled servlet is put into a cache directory, along with the servlet source code, a configuration file containing the destination sandbox, and an error output file which also serves as a timestamp of the last compilation attempt. A CSP is only recompiled if its timestamp is newer than the timestamp of its error output file. We do not use the CSP's object file since it is only created if the compilation is successful. If the sandbox does not exist yet, it will be created after compilation. Then, the destination sandbox is configured to host the new servlet. If, for some reason, however, CSP compilation during run-time is undesired or impossible, it is possible to compile the CSPs before starting the server.

A Guardian Thread watches over the state of the sandboxes and restarts them if an unstable web application crashes.
Instead of using a separate thread that sleeps most of the time, it should also be possible on UNIX systems to catch the child signal. From the performance point of view, however, this makes no difference.

In order to minimize downtimes of the servlet server, its configuration can be changed dynamically. The configuration file tells the servlet registry which servlets should be loaded into which sandboxes. When the application server processes a request, it first checks the timestamp of the configuration file. If there was a change (or there was no request after startup yet), the configuration file is read, and the internal configuration data is updated. Part of the configuration data, such as the location of the shared object files, is passed on to the sandboxes where it is needed. This update process is also invoked when a sandbox has to be restarted by the guardian thread.

Several separate tasks, the Sandboxes, handle the actual execution of the servlets. Each sandbox may encapsulate one or more web applications consisting of one or more servlets. The purpose of using several sandboxes is to take care of the session management and to provide a barrier for unstable servlets. The ability to execute several servlets within the same sandbox increases the scalability of our servlet environment. This approach allows developers to place servlets that frequently need to interact with each other into the same sandbox and hence enables a more efficient communication between the servlets and reduces the number of context switches required. The sandbox processes read requests from an input socket and execute them. The servlets themselves are loaded as dynamic shared objects. C++ Server Pages are translated into servlets which in turn are compiled into shared objects. Shared objects can be loaded into a running program on demand by a system call. The disadvantage, however, is that the server's original programmer can never be sure about their stability.
If a servlet crashes while processing a request, the servlet server's guardian thread notices the crash and restarts and reinitializes the failed sandbox. If a request comes in while the destination sandbox is down, it is buffered and executed as soon as the sandbox is up and running again. This architecture has already proved its usefulness: our original implementation contained an error that would crash every sandbox after a little more than 1000 requests. We did not discover this error until we started running the benchmarks because the crashed sandboxes were always restarted, and (except for a slight performance loss) no problem was visible. Additionally, since each sandbox is executed within its own process, system administrators may choose to run different sandboxes with different user privileges. This architecture provides a finer-grained access control than Java servlet environments, which execute all servlets within the same process and thus with the same user privileges.

The ability to provide persistent storage is of major importance for most web applications. A voting application, for example, has to manage the votes cast by its users. The persistence mechanism of the CSE can be used internally by the sandbox's session management if the servlet's sessionType attribute is set to DatabaseSession. Alternatively, it may be used directly by the application developer. The C++ Servlet Environment has been designed so that the persistence mechanism can be replaced easily. The persistence mechanism is encapsulated by a traits class [19,24] with which its classes are parameterized. A traits class can be compared to a set of callback functions with the difference that they are known during compile time and thus leave more room for optimization.
This approach allows application developers to choose a persistence level that fits their needs best, e.g., a database such as MySQL [30], or a file-based database such as PSTL, a persistent implementation of the Standard Template Library [10]. We also provide a general-purpose database interface which is a set of template classes that offer general database handling functions, along with data-types used within a database. They do not contain functions to access specific databases but are able to use concepts like SQL statements and cursors. The template classes provided are: In order to access a given database server, a concrete database driver (a traits class) for that database must be written. If the database has a C++ interface, this task is usually trivial because most work has already been done at the general-purpose database interface. A concrete database driver for the successful database MySQL is already available. The C++ Servlet Environment provides two approaches for writing web applications similar to those available in Java Servlet Environments: Servlets that use only C/C++ and C++ Server Pages that use C/C++ code embedded in HTML code. A Servlet is implemented as a C++ class inheriting from the Servlet class as shown in Figure 3. Subsequently this class is compiled into a shared object. Servlets can access the parameters and cookies that have been passed as part of the web request and output an HTML page onto their output stream. 
    #include <cse/cse.h>

    class HelloServlet : public Servlet {
    protected:
        int counter;

    public:
        HelloServlet() : counter(0) { session=false; }
        virtual ~HelloServlet() { }

        virtual void service(const ServletRequest& rq, ServletResponse& re,
                             Session* session=NULL) {
            re << "<html><head><title>Hello World</title></head>" << endl
               << "<body><h1>Hello World</h1>" << endl
               << "<p>Servlet request count: " << ++counter << "</p>" << endl
               << "</body></html>" << endl;
        }
    };

    Servlet* factory() { return new HelloServlet(); }

The most important methods and attributes of the Servlet class are:

The ServletRequest class encapsulates information about the current request. It provides access to the HTTP request parameters using the getParameters() method. Such parameters may be passed as part of the URL in case of a GET request and as part of the request body in case of a POST request. The request parameters are decoded and returned as a map using the parameter names as keys. Cookies sent with the HTTP request can be obtained with the getCookies() method. They are automatically transformed into Cookie objects. The return type is a vector of these objects. Unless a cookie has been changed, there is no need to include it in the response sent back to the client. Among other information, the ServletRequest class also provides the URL of the request (getURL()) and whether the request used the GET or POST method (getMethod()). The ServletResponse object contains a stream to which the output of the servlet is written.

Session data is provided through the Session class. Each session has a name and an ID. Together with the Sandbox name, they uniquely identify the session. The name distinguishes between different types of sessions within a single web application. The session IDs are assigned in a random order, which makes guessing them almost impossible and thus enhances data security. The setParameter() and getParameter() methods allow a servlet to store and retrieve session data.
Depending on the kind of session, these data are stored within the memory or within a database. The session identifier must be passed on between subsequent requests to the web server. This can be done using cookies. The function getCookie() provides a cookie (as defined in [12]) that contains the session information. Cookies, however, do not work with all web clients. If the servlet programmer cannot rely on them, the methods asLink() and asForm() transform the session designation into a string suitable for links (in URL-encoded format) or for forms (as a hidden field).

C++ Server Pages (CSPs) are stored in the web server's document root directory along with static HTML pages and can be identified by their .csp extension. When they are requested for the first time, these pages are converted into servlets and subsequently into shared objects.

    <%#vector%>
    <%!vector<string> strings;%>
    <%
    // check whether a string should be removed/added
    ServletRequest::Map::const_iterator mi;
    if((mi=rq.getParameters().find("remove"))!=rq.getParameters().end())
        strings.erase(strings.begin()+atoi(mi->second.c_str()));
    if((mi=rq.getParameters().find("string"))!=rq.getParameters().end())
        strings.push_back(mi->second);
    %>
    <html><head><title>TableServlet</title></head><body>
    <h1>TableServlet</h1>
    <p>This servlet stores strings within a table.</p>
    <form method=get action=TableServlet.csp><p>Please enter a string:
    <input type=text size=64 name=string></input><input type=submit value="Go!">
    </input></p></form>
    <p><table border=1>
    <tr><th colspan=2>Strings entered to date: <%=strings.size()%></th></tr>
    <% for (vector<string>::iterator i=strings.begin(); i!=strings.end(); ++i) { %>
    <tr>
    <td><%=*i%></td>
    <td><a href="TableServlet.csp?remove=<%=i - strings.begin()%>">remove</a></td>
    </tr>
    <% } %>
    </table></p>
    </body></html>

CSPs are HTML documents enriched with special tags containing, among other things, C++ code. A sample CSP is shown in Figure 4.
The tags used to identify C++ declarations and code are described below. Except for the first two tags, they have been designed similar to the JavaServer Pages (JSP) specification [27] to increase readability for people familiar with JSPs.

The example shown in Figure 4 first includes the vector header file and declares a vector containing strings. The second block checks for the parameters passed to the CSP and, based on these parameters, removes or adds a new string to the vector. The remaining part of the servlet is used to display a form to add a new string and a table with the strings currently stored in the vector. Additionally, the strings are supplied with links to allow them to be removed.

     1  #include <vector>
     2  #include <cse/cse.h>
     3  #include <string>
     4
     5  class CSPServlet : public Servlet {
     6  protected:
     7      virtual void print(ostream& os) const;
     8      vector<string> strings;
     9
    10  public:
    11      CSPServlet() : Servlet() { }
    12      virtual ~CSPServlet();
    13      virtual void service(const ServletRequest& rq, ServletResponse& re,
    14                           Session* session= NULL);
    15  };
    16
    17  // ... helper functions ...
    18
    19  void CSPServlet::service(const ServletRequest& rq, ServletResponse& re,
    20                           Session* session=NULL) {
    21      re << "
    22  ";
    23      // check whether a string should be removed/added
    24      ServletRequest::Map::const_iterator mi;
    25      if((mi=rq.getParameters().find("remove"))!=rq.getParameters().end())
    26          strings.erase(strings.begin()+atoi(mi->second.c_str()));
    27      if((mi=rq.getParameters().find("string"))!=rq.getParameters().end())
    28          strings.push_back(mi->second);
    29      re << "
    30  <html><head><title>TableServlet</title></head><body>
    31  <h1>TableServlet</h1>
    32  <p>This servlet stores strings within a table.</p>
    33  <form method=get action=TableServlet.csp><p>Please enter a string:
    34  <input type=text size=64 name=string></input><input type=submit value=\"Go!\">
    35  </input></p></form>
    36  <p><table border=1>
    37  <tr><th colspan=2>Strings entered to date: ";
    38      re << strings.size();
    39      re << "</th></tr>
    40  ";
    41      for (vector<string>::iterator i=strings.begin(); i!=strings.end(); ++i) {
    42          re << "
    43  <tr>
    44  <td>";
    45          re << *i;
    46          re << "</td>
    47  <td><a href=\"TableServlet.csp?remove=";
    48          re << i - strings.begin();
    49          re << "\">remove</a></td>
    50  </tr>
    51  ";
    52      }
    53      re << "
    54  </table></p>
    55  </body></html>
    56  ";
    57  }

After the example servlet has been deployed, our CSE translates it into a C++ servlet as shown in Figure 5. Include directives of the C++ Server Page are converted into include pre-processor macros at the beginning of the file (line 1), definitions are converted into attribute and member function definitions (line 8). Code and expression directives are used to form the service() member function and HTML code is converted into statements sending it unmodified to the web client (lines 19-57).

Since we have started with the implementation of our C++ Servlet Environment, other developers have also recognized the need for a servlet environment for C++.
The commercial vendor Rogue Wave has developed Bobcat [21], a C++ servlet engine that has an API similar to that of the Java Servlet Specification [26]. Unfortunately, its evaluation license contains a non-disclosure agreement. Hence, we cannot include their product in this paper and have to assume that it is not yet ready for a production system. Ape Software, an Indian company, has developed Servlet++ [3] under a BSD-like free license, which also supports a basic form of C++ Server Pages. Unlike CSE, it does not contain a stand-alone server for the execution of the servlets but is implemented completely within an Apache module. Hence, an unstable servlet can compromise the stability of the Apache server itself. The Servlet++ module supports C++ servlets with an interface similar to that of the Java Servlet class. Servlets are loaded as shared objects. Unlike CSE, servlets and C++ Server Pages have to be compiled manually. Also, Servlet++ currently lacks session management, a fundamental necessity for every servlet environment, and hence we left it out of the comparison in section 6. C Server Pages [9] is a servlet engine that has been designed with goals somewhat similar to those of the C++ Servlet Environment, but does not use a free license (commercial use is non-free). However, this system uses no sandboxes to encapsulate its servlets, so that it is less secure. There is currently no support for dynamic configuration, which means that each servlet has to be compiled manually (using a tool to transform C Server Pages into servlets and a C++ compiler), and the server then has to be restarted. C Server Pages is implemented as a CGI script, but the author has also built an Apache module which is, unfortunately, not yet available for download. Micronovae [17] is developing a C++ Server Pages engine which works together with Microsoft's Internet Information Server (IIS). Like the previous system, it does not use the concept of sandboxes. 
Additionally, it only provides support for C++ Server Pages but not for servlets. Unfortunately, their download (beta version) includes no source code, so that our information about this system is solely based on our experiences with it. The Weblet Application Server [29] seems to be a servlet engine for C++ developed by Webletworks. Unfortunately, we have been unable to contact the web server of the application server's vendor for several months now and were also unable to obtain a copy or other information about the system through other web sites. Besides C++ servlet engines, there are numerous such engines for Java and scripting languages. One such servlet engine is the Apache Tomcat [2] servlet engine, a subproject of the Apache Jakarta Project. It has complete support of the JavaServer Pages and Java Servlet specifications and is included in Sun's reference implementation. Although Tomcat contains its own web server, it can also cooperate with other web servers like the Apache HTTP Server. JavaServer Pages may be compiled by the built-in Java compiler or by alternative Java compilers such as Jikes. Jetty [18] is a Java Servlet engine developed by the Australian company Mort Bay. It can cooperate with the Apache HTTP Server, but Mort Bay suggests using the included web server. Jetty is available under a free license and offers both Java Servlets and JavaServer Pages. Like Tomcat, Jetty is configured using a set of XML files and may use alternative Java compilers for the compilation of JavaServer Pages. Swill [13] is a lightweight programming library that provides a simple web server for C and C++. Unlike our servlet environment, its goal is not to provide a full-fledged web server that allows the execution of multiple web applications. Instead, it allows developers to embed the web server into their own programs. This web server can be used to control the embedding application and to display the application's results using a web browser. 
Zope [14] is a servlet environment that allows developers to use Python for servlet development. It includes a web server and a web administration front-end. Initial performance results have shown that Zope cannot compete with the top servlet engines. We assume that this is due to the performance penalty incurred by the Python interpreter. Zope is probably the right environment for Python programmers maintaining small web sites. For the evaluation of the C++ Servlet Environment's performance we have implemented a benchmark suite that tests various aspects of the different servlet engines. The tests were performed with the built-in web server. All servlets were compiled using the individual engine's default servlet compiler before the execution of a benchmark. Static Page Access. In this benchmark we measure how long it takes to request a static HTML page together with 40 embedded images 100 times. The primary goal of this benchmark is to measure the throughput of the servlet engine's web server. Bulletin Board System. We have implemented a small bulletin board system that stores its entries in the file system. It allows users to create messages, read them (using a session for remembering the least recently read message), and delete them again. The test creates 100 messages of 320 characters each. These messages are then requested 50 times in batches of 10 messages. Finally, the messages are deleted again, resulting in another 100 requests. The time taken for these 800 requests (message creation is verified with a second request) was measured. The goal of this benchmark is to check how well the servlet engine maintains the client's session state. Dynamic Page Access. This benchmark uses a shopping cart servlet that we have implemented for each servlet engine. For the measurement of this benchmark, we request the start page once to obtain the cookie containing the session information. 
Then, we add an item to the shopping cart, display the shopping cart, remove the item, and display the shopping cart again. The empty shopping cart page has 2048 bytes. A test run consists of 50 consecutive requests, with a total of 250 dynamic pages served. This benchmark is intended to measure the servlet engine's performance in a typical everyday situation. Parallel Page Access. For this request, we used the previous benchmark and accessed the server simultaneously from a varying number of clients, each running on a different machine. We have measured the time needed by a single client to execute the same number of requests as in the previous test. The other clients in the benchmark were started before we started the client to be measured and also were terminated afterwards. The goal of this benchmark is to see how well the servlet engines can cope with increasing load. Mandelbrot Calculation. This test measures the efficiency of calculation-intensive servlets by computing the Mandelbrot fractal. It accepts the size of the image, the area of the fractal to be calculated, and the maximum recursion depth as parameters. Although we support the generation of an xpm graphics file the output has been suppressed during this benchmark since we wanted to measure the ``number crunching'' performance only. A test run consists of 10 accesses to the servlet, calculating a picture of the Mandelbrot set at a resolution of 1024x768 pixels and a maximum of 256 iterations per pixel. System Library Call. Since a main reason for the design and implementation of the CSE was the possibility to easily integrate legacy applications into servlets, this test evaluates the inter-operation with legacy C and C++ code. For the test, we created a small servlet that calls functions from a shared library. A similar servlet might e.g. poll sensor data with high frequency in order to calculate a mean value. For C and C++ programs this is a straight-forward task. 
Java programs, however, have to use the Java Native Interface [16] which requires a JNI wrapper function to be written for each C or C++ function to be invoked. This benchmark measures how much performance gets lost at that interface. The hardware for the performance tests consisted of two computers with AMD Duron/800 MHz processors running Linux. They were linked via a 100 Mbit/s switch in order to ensure constant network bandwidth. For the dynamic and parallel tests, we used 1-8 identical Intel Pentium II/350 MHz computers as clients, with the same server as above. The results of our tests are shown in Table 1. We have included Tomcat as Sun's Java reference implementation, Jetty as an independent implementation in Java, and CSP and Micronovae as other C/C++ solutions. Since little information about the inner workings of Micronovae is available, we can only speculate why it performs better or worse than the other systems. As the static page access benchmark shows, using Apache as a front-end was the right choice. Neither Tomcat nor Jetty performed as well. Although they can be set up to be used in combination with Apache, this setup is complicated. Mort Bay even recommends using Jetty for static documents as well. In our evaluation, the Windows Internet Information Server was slightly faster than Apache on Linux. We assume that this effect is caused by the operating system. The dynamic and parallel access benchmark, whose result is shown in Figure 6, reveals why Tomcat has become Sun's reference implementation. Both Java implementations perform much better than we would have assumed initially. Although we knew that our implementation leaves room for improvements, this was a surprise to us. The CSE and Jetty scale equally well, but not as well as Tomcat and slightly worse than Micronovae. The CSE does not perform as well because in its current implementation the servlet server might be a bottleneck, as we have mentioned in Section 3.1. 
Jetty is slower because its implementation does not appear to be as well optimized as Tomcat's. CSP scales worst in our benchmark. Obviously, using the CGI for servlet execution is, at best, only an option when C/C++ applications must be integrated into a web server and performance is not an issue. In the bulletin board system test all systems were relatively close together, which indicates that bulk transfers in Java are about as fast as in C/C++. CSP's bad result is likely caused by the fact that it creates and initializes a new task for every servlet invocation. What distinguishes this benchmark from the others is that the bulletin board system's entries are stored on the file system. Hence we assume that Micronovae performs worse than the other approaches due to differences in the file system implementation between Linux and Windows. The C/C++ based systems have a clear advantage when performing CPU intensive computations, as shown by the Mandelbrot calculation. It seems that the Java just-in-time compiler is unable to optimize the code as well as the GNU compiler. In the direct comparison, the compiler from Microsoft Visual C++ which is used within Micronovae performs worse than its GNU equivalent, but we do not know what optimizations are performed. The library benchmark shows severe limitations of the Java-based systems. In our scenario, C code is more than 16 times faster when many function calls into a legacy C application need to be done. It seems that the Java Native Interface (JNI) which handles C/C++ library calls has not been optimized at all. CSP takes slightly more time than the CSE but is still an order of magnitude faster than the Java-based systems. The very good performance of Micronovae is probably caused by a different shared library mechanism provided by the Windows platform. The current implementation of the CSE has some room for optimization. This becomes apparent by looking at the architecture presented in Section 3. 
Requests to the individual servlets are delegated to the sandboxes by the servlet server. This is a potential bottleneck and could be solved by letting the Apache module itself delegate the requests to the individual sandboxes. Additionally, our current implementation does not yet allow the simultaneous execution of a servlet's service() method. This stems from the fact that we do not yet honor a servlet's threadSafe attribute, which results in a loss of performance. During our tests we identified that our servlet engine is not yet capable of handling binary content correctly. We assume that this bug is located within the Apache module that passes the result of the sandboxes back to the clients. Our current assumption is that we do not handle the NULL character correctly since it indicates the end of a C string. In future versions we also plan to implement a test environment for servlets and C++ Server Pages that supports testing outside the CSE. To test the servlet, the environment would be linked to the servlet and could be debugged like a stand-alone C++ application. In future versions of the CSE, we also plan to submit it to the BOOST web site [6] which provides a collection of free peer-reviewed portable C++ source libraries. The focus of BOOST is on libraries that work well with the C++ Standard Library and are suitable for eventual standardization. The contribution of this paper is an architecture that enables the implementation of high-performance web applications using C or C++ while providing the stability known from Java servlet engines. We have also implemented a C++ Servlet Environment using this architecture. Although our implementation has not yet been optimized and provides enough room for further performance improvements, it offers performance similar to that of the top servlet engines available today. Our C++ Servlet Environment uses an API which is based on that provided by Java Servlets and JavaServer Pages. 
This design choice has the advantage that developers familiar with this technology will immediately be able to write C++ Servlets and C++ Server Pages. Providing a servlet environment for C++ is important since it allows developers to reuse existing C++ code without having to use the Java Native Interface [16,25] and hence without having to deal with different languages and the conversion of different type systems. As we have explained in Section 5, this need has recently been identified by other researchers as well. Although similar, these products are slightly incompatible to each other. Hence, we think that the standardization of a C++ Servlet Environment will be important for the future of this technology. The C/C++ Servlet Engine is freely available under the GNU General Public License. A more detailed description of the design and implementation of the CSE can be found in [23]. The CSE as well as its documentation is available for download from the CSE homepage at. We would like to thank Dave Beazley, Andreas Grünbacher, and the many anonymous reviewers for their helpful comments. We also gratefully acknowledge the financial support provided by the USENIX Advanced Computing Systems Association and by the European Union as part of the EASYCOMP project (IST-1999-14191). Thomas Gschwind and Benjamin A. Schmit
http://static.usenix.org/legacy/publications/library/proceedings/usenix03/tech/freenix03/full_papers/gschwind/gschwind_html/index.html
In a recent newsgroup posting, Jake Segers asked how to display the results of FOR XML queries generically when the schema is inlined using the XMLDATA option. The XMLDATA option inlines an XDR schema within the result. He wanted null values in SQL Server, which are represented in XML as the absence of the node, to be represented within an HTML table as an empty cell. Jake's solution used RAW mode for the XML query, so we could guarantee the structure (AUTO mode queries define the structure according to the joined tables in the order they appear in the SELECT list). He was unable to alter the solution to use EXPLICIT mode queries, so we turned to an XSLT solution to accommodate them. The problem is two-fold. First, you need to use the schema as the basis of what to display, not the data that is present (the data may not be there). Simply setting up match patterns for the attributes in the row elements would not work, as the missing attributes are not accommodated. Second, the actual namespace may change in subsequent FOR XML queries. This is problematic: since you cannot reliably look for nodes within a given namespace, you must treat the nodes as if the namespace is irrelevant. Here is what I came up with. A variable is used to look up the actual column names in the XDR schema's attributes section.
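The stylesheet below is one way such a namespace-agnostic transform might look — a sketch, not the solution from the original post. It walks the inline XDR schema's AttributeType declarations via local-name(), so the varying inline-schema namespace never has to be named, and it emits an empty cell whenever a row lacks the corresponding attribute.

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <table>
      <!-- Header row driven by the schema, not the data -->
      <tr>
        <xsl:for-each select="//*[local-name()='AttributeType']">
          <th><xsl:value-of select="@name"/></th>
        </xsl:for-each>
      </tr>
      <!-- One table row per FOR XML RAW 'row' element -->
      <xsl:for-each select="//*[local-name()='row']">
        <xsl:variable name="row" select="."/>
        <tr>
          <xsl:for-each select="//*[local-name()='AttributeType']">
            <!-- Missing attribute => empty string => empty cell -->
            <td><xsl:value-of
                 select="$row/@*[local-name()=current()/@name]"/></td>
          </xsl:for-each>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>
```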
http://blogs.msdn.com/b/kaevans/archive/2003/03/20/4094.aspx?Redirected=true
On Tue, 18 Aug 1998, Linus Torvalds wrote:
[snip]
> 2.1.116 has a few patches that it _shouldn't_ have had and that made
> it into the final release by mistake (the page aging code shouldn't
> have been ifdeffed out), but it seems
[snip]

Are you referring to this part of the patch?

+++ linux/mm/filemap.c	Tue Aug 18 13:21:49 1998
@@ -172,10 +172,12 @@
 			break;
 		}
 		age_page(page);
+#if 0
 		if (page->age)
 			break;
 		if (page_cache_size * 100 < (page_cache.min_percent * num_physpages))
 			break;
+#endif
 		if (PageSwapCache(page)) {
 			delete_from_swap_cache(page);
 			return 1;

Thank you,
--Craig

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at
https://lkml.org/lkml/1998/8/19/14
This is the mail archive of the cygwin mailing list for the Cygwin project.

Hello Mark,

unfortunately I must correct my statement from Friday. The program works, but only if the connections to the server are established in a loop inside the program. If the program ends and you start it anew, a connection is not possible for a long time. You have to wait before you can establish a new connection. Actually only our approaches in the original bindresvport() seem to work for all cases.

You have proposed to use the static variable usecount in bindresvport(). But how is the value of the variable handled if the program starts anew? Is it possible to get a used port number and run into EADDRINUSE?

Greetings
Raimund

-----Original Message-----
From: cygwin-owner@cygwin.com [mailto:cygwin-owner@cygwin.com] On behalf of PAULUS, Raimund, TI-ABN
Sent: Friday, 2 February 2018 13:58
To: cygwin@cygwin.com
Subject: Re: RPC clnt_create() address already in use

Hi Mark,

it works. Maybe it's the best solution for the problem.

Greetings
Raimund

-----Original Message-----
From: cygwin-owner@cygwin.com [mailto:cygwin-owner@cygwin.com] On behalf of Mark Geisert
Sent: Friday, 2 February 2018 09:11
To: cygwin@cygwin.com
Subject: Re: RPC clnt_create() address already in use

Mark Geisert wrote:
> Corinna Vinschen wrote:
>> On Jan 31 00:15, Mark Geisert wrote:
>>> PAULUS, Raimund, TI-ABN wrote:
>>>> Hi Mark,
>>>>
>>>> in my email () I described 2 approaches. I prefer nr 1.
>>>> Here is the part of the source in bindresvport.c:
>>>> [...]
> [...]
>>
>> I'm a bit puzzled here in terms of using your own bindresvport.
>> Cygwin has implemented bindresvport{_sa} for quite some time, 2006 or earlier.
>
> Yeesh; I did not know that. Thanks for pointing that out. So that
> means there's another possible way to try solving the OP's issue: by
> using Cygwin's bindresvport* in place of the one supplied with libtirpc. 
> > If we see the OP's issue with Cygwin's bindresvport*, I think it makes > more sense to patch libtirpc than to change Cygwin's bindresvport*. > The crux of OP's issue is that libtirpc's code expects to see > EADDRINUSE errors from bind() whereas on Cygwin they aren't often seen until you connect(). > > I'll look into using Cygwin's bindresvport() in the next day or two. My testing shows that OP's original issue goes away when libtirpc is compiled to use Cygwin's bindresvport() directly rather than using its own version of that function. Raimund, could you try this newest possible solution? Before the first #include in bindresvport.c, add the line #ifndef __CYGWIN__ and at the end of the file, add the line #endif Then rebuild your libtirpc and your test programs linking against it, then run your tests. If this proves to solve your original problem then I'll submit a patch of libtirpc to the Cygwin package maintainer. Thank you, ..mark -- Problem reports: FAQ: Documentation: Unsubscribe info:
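Mark's suggested experiment amounts to compiling libtirpc's own implementation out on Cygwin, so the linker falls back to Cygwin's bindresvport(). Schematically, with the file's contents abbreviated, the change looks like this:

```c
/* libtirpc src/bindresvport.c -- sketch of the proposed change */
#ifndef __CYGWIN__          /* added before the first #include */

/* ... the file's original contents, unchanged ... */

#endif /* !__CYGWIN__ */    /* added at the end of the file */
```

On Cygwin the whole translation unit then compiles to nothing, and calls to bindresvport() resolve to Cygwin's own implementation instead.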
https://sourceware.org/legacy-ml/cygwin/2018-02/msg00026.html
django-hosts 0.4.1 Dynamic and static hosts support for Django. This Django app routes requests for specific hosts to different URL schemes defined in modules called "hostconfs". For example, if you own example.com but want to serve specific content at api.example.com and beta.example.com, add the following to a hosts.py file: from django_hosts import patterns, host host_patterns = patterns('path.to', host(r'api', 'api.urls', name='api'), host(r'beta', 'beta.urls', name='beta'), ) This causes requests to {api,beta}.example.com to be routed to their corresponding URLconf. You can use your urls.py as a template for these hostconfs. Patterns are evaluated in order. If no pattern matches, the request is processed in the usual way, ie. using the standard ROOT_URLCONF. The patterns on the left-hand side are regular expressions. For example, the following ROOT_HOSTCONF setting will route foo.example.com and bar.example.com to the same URLconf. from django_hosts import patterns, host host_patterns = patterns('', host(r'(foo|bar)', 'path.to.urls', name='foo-or-bar'), ) Installation First, install the app with your favorite package manager, e.g.: pip install django-hosts Alternatively, use the repository on Github. Then configure your Django site to use the app: Add 'django_hosts' to your INSTALLED_APPS setting. Add 'django_hosts.middleware.HostsMiddleware' to your MIDDLEWARE_CLASSES setting. Create a module containing your default host patterns, e.g. in the hosts.py file next to your urls.py. Set the ROOT_HOSTCONF setting to the dotted Python import path of the module containing your host patterns, e.g.: ROOT_HOSTCONF = 'mysite.hosts' Set the DEFAULT_HOST setting to the name of the host pattern you want to refer to as the default pattern. It'll be used if no other pattern matches or you don't give a name to the host_url template tag. django-hosts uses versiontools to manage version numbers following PEP 386. 
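The installation steps above can be collected into a single settings fragment. The module path 'mysite.hosts' is the README's own placeholder, and 'api' matches the name given in the first hosts.py example:

```python
# settings.py -- the pieces from the installation steps above

INSTALLED_APPS = [
    # ... your other apps ...
    'django_hosts',
]

MIDDLEWARE_CLASSES = [
    'django_hosts.middleware.HostsMiddleware',
    # ... your other middleware ...
]

# Dotted path to the module containing the host patterns (mysite/hosts.py)
ROOT_HOSTCONF = 'mysite.hosts'

# Name of the host pattern used when no other pattern matches
DEFAULT_HOST = 'api'
```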
http://pypi.python.org/pypi/django-hosts/0.4.1
World2 - A simple object level Stem cell.

This cell is an extension of the World1 cell. In this example, instead of a single class cell with a fixed response value, we can now create multiple cells (registered objects), each with their own private data. The world_cmd method will return the planet's name stored in the cell. This cell illustrates the basic way to construct objects in Stem. Object cells require an attribute specification that describes the information we want to exist independently in each object cell when it is created. The following is the attribute specification used in World2:

$attr_spec = [
    {
        'name'    => 'planet',
        'default' => 'X',
    },
];

This specification indicates that this cell has an attribute named planet. It will default to the value of X if this attribute is not specified in the configuration arguments for this cell. Some of the attribute specification tags are name, type, default, required, class, and help. For more information on cell configuration please see Stem Object and Cell Creation and Configuration Design Notes and Stem Cell Design Notes.

This is a minimal Stem constructor with the usual name new. You can invoke any other method as a constructor from a configuration by using the 'method' field:

sub new {
    my ( $class ) = shift;

    my $self = Stem::Class::parse_args( $attr_spec, @_ );
    return $self unless ref $self;

    return ( $self );
}

To create a Stem object cell we call the Stem::Class::parse_args routine and pass it the object cell attribute specification and the rest of the arguments passed into this constructor. The rest of the arguments come from the args field in the configuration for this cell. The parse_args function then returns the newly created object to the caller, which is usually the configuration system, but it could be any other code as well. An important observation to make here is the Stem error handling technique. Errors, in Stem, are propagated up the call stack by returning an error string rather than a reference. 
This is the typical Stem way of determining whether or not an error condition has occurred. Constructors or subroutines which normally return objects or references will return a string value as an error message. This is always checked by the caller and will usually be passed up the call stack until a top level subroutine handles it.

The following Stem configuration file is used to bring a World2 object level cell into existence in the Stem environment.

[
    class => 'Console',
],
[
    class => 'World2',
    name  => 'first_planet',
    args  => [],
],
[
    class => 'World2',
    name  => 'second_planet',
    args  => [ planet => 'venus', ],
],

As explained in World1.pm, we create a Stem::Console cell to allow for the creation of a Stem console to manually send command messages and display their responses. We also create two object level World2 cells. The first, we name first_planet; its planet attribute defaults to 'X'. The second, we name second_planet and set its planet attribute to 'venus'. Using the args specifier in the cell configuration indicates that we are creating an object cell rather than a class cell. It indicates to the Stem cell creation environment that we wish to execute the constructor of the specified class to create an object of the class rather than using the Stem module as a class itself. Using object cells allows us to instantiate multiple objects, each with its own values, address, and subsequent behavior.

Execute run_stem world2 from the command line to run this configuration. You will be greeted with the Stem> prompt. It is now possible to send a message manually into the system. Type the following at the Stem prompt:

reg status

This will show the status of the local Stem hub. You will notice the two entries for the object cells created by the configuration file under the object cell section. Now execute the same command as you did in World1:

first_planet hello
Hello, World! (from X)

second_planet hello
Hello, World! 
(from venus) As in World1, the above triggers the hello_cmd method. However, now we are triggering the hello_cmd method on separate object cells rather than a single class cell. Stem Cookbook Part 1 Stem Cookbook Part 3 World2 Module
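The string-as-error convention described above shapes every call site that creates a cell. A minimal caller-side sketch (the die() handling here is illustrative, not from the cookbook):

```perl
use strict;
use warnings;

# Stem constructors return an error *string* instead of a reference on
# failure, so a simple ref() test distinguishes the two cases.
my $cell = World2->new( planet => 'venus' );
die "could not create cell: $cell" unless ref $cell;
```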
http://search.cpan.org/~uri/Stem-0.11/Cookbook/World3.pm
#include <sys/types.h>
#include <sys/wait.h>

If pid is greater than (pid_t)0, it specifies the process ID of the child process for which status is requested. If waitpid() returns because the status of a child process is available, then that status may be evaluated with the macros defined by wstat(3XFN).

If waitpid() returns because the status of a child process is available, it returns a value equal to the process ID of the child process for which status is reported. If waitpid() returns due to the delivery of a signal to the calling process, -1 is returned and errno is set to EINTR. If waitpid() was invoked with WNOHANG set in options, it has at least one child process specified by pid for which status is not available, and status is not available for any process specified by pid, then 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.

The waitpid() function will fail if:

With options equal to 0 and pid equal to (pid_t)-1, waitpid() is identical to wait(2).

See attributes(5) for descriptions of the following attributes:

intro(2), exec(2), exit(2), fork(2), pause(2), ptrace(2), sigaction(2), wait(2), signal(3C), attributes(5), siginfo(3HEAD), wstat(3XFN)
http://www.shrubbery.net/solaris9ab/SUNWaman/hman2/waitpid.2.html
Hi everybody,

As I am developing using JESS, I have some doubts. Here is one more... Consider that I have one fact and the two rules below. They are automatically created using Java and Rete. Actually, I used STORE and a bean because I want to retrieve (FETCH) a list of results. For example, I know that when running this algorithm, if I used "printout" I would receive as response: fournitureBeton creationEscalierBeton.

Actually I want to retrieve these values. Probably my use of STORE is not correct and you have another simple solution. The problem here is that even though I declared my bean as global, which should mean it is instantiated only one time, as I saw from the answers I received, it is passed through two times when I call it. Thus I can conclude that it is creating two instances of my bean, one for creationEscalierBeton and another for fournitureBeton. This is the reason that the "StringBuffer" inside the bean only has one result. Actually I want to retrieve an ordered list telling which rule fired first, and so on. I want to retrieve it as System.out.println("Order List: " + rete.fetch("RESULT")); Do you know another way to FETCH the RESULT value, since this RESULT should be a list of the names of my rules? If in place of the set and store I used printout, everything works as I need. However, the problem is that I need to retrieve these results. 
Best regards,
Daniela

(assert (existPlancher))

(defglobal ?*obj* = (new MarchePublicBeanPlan))

(defrule creationEscalierBeton
    (existPlancher)
    (betonFournit)
    =>
    (assert (existEscalierBeton))
    (set ?*obj* action "creationEscalierBeton")
    (store RESULT (get ?*obj* action)))

(defrule fournitureBeton
    (not (betonFournit))
    =>
    (assert (betonFournit))
    (set ?*obj* action "fournitureBeton")
    (store RESULT (get ?*obj* action)))

I have this bean:

import java.io.Serializable;

public class MarchePublicBeanPlan implements Serializable {

    private String action;
    public StringBuffer plan;

    public MarchePublicBeanPlan() {
        System.out.println("************* Inside BEAN******************");
        action = null;
        plan = new StringBuffer();
    }

    public void setAction(String action) {
        System.out.println("************ACTION:" + action);
        plan.append(action);
    }

    public String getAction() {
        System.out.println("************PLAN:" + plan.toString());
        return plan.toString();
    }
}

And this is the answer I get when I rete.run it.

####################STARTING RETE################
************* Inside BEAN******************
************* Inside BEAN******************
************ACTION:fournitureBeton
************PLAN:fournitureBeton
Order List: "fournitureBeton"

--------------------------------------------------------------------
To unsubscribe, send the words 'unsubscribe jess-users [EMAIL PROTECTED]'
in the BODY of a message to [EMAIL PROTECTED], NOT to the list
(use your own address!) List problems? Notify [EMAIL PROTECTED]
--------------------------------------------------------------------
http://www.mail-archive.com/jess-users@mailgate2.sandia.gov/msg00616.html
Red Hat Bugzilla – Bug 220815 ambiguous references in c++ Last modified: 2007-11-30 17:07:39 EST Description of problem: Hi, it seems this problem is related/similar to an already known one: <>. Compiling the supplied sample creates an error where the compiler is not able to properly resolve a symbol. Note that I am not 100% sure whether this is a bug or an error on my side (resulting from poor C++ standard knowledge. Comments welcome in this case as well) Version-Release number of selected component (if applicable): $ rpm -q gcc gcc-4.1.1-43.el5 $ uname -a Linux ls3108v1 2.6.18-1.2747.el5 #1 SMP Thu Nov 9 18:56:16 EST 2006 ia64 ia64 ia64 GNU/Linux How reproducible: compile the following with $> c++ -c: /************/ namespace _STL { struct __true_type {}; } using namespace _STL; struct __true_type { }; namespace std { typedef __true_type __type; } /************/ this produces the following output: test.cpp:11: error: reference to ‘__true_type’ is ambiguous test.cpp:7: error: candidates are: struct __true_type test.cpp:3: error: struct _STL::__true_type test.cpp:11: error: reference to ‘__true_type’ is ambiguous test.cpp:7: error: candidates are: struct __true_type test.cpp:3: error: struct _STL::__true_type test.cpp:11: error: ‘__true_type’ does not name a type Steps to Reproduce: 1. c++ -c <sample.cpp> 2. 3. Actual results: Expected results: Additional info: Names brought in by a using directive are found by name lookup as if they were in the nearest namespace enclosing both the using directive and the nominated namespace. Thus, both definitions of __true_type are found in the same namespace, and ambiguity ensues.
https://bugzilla.redhat.com/show_bug.cgi?id=220815
Hi Codeforces! I've recently noticed a lack of simulated annealing tutorials, so I decided to make one. It all started when I was trying to "cheese" 1556H - DIY Tree after the contest. In general, simulated annealing is a pretty niche topic, but it can sometimes lend you unintended solutions for very hard problems. It's also very useful in contests like Google Hashcode, where you can add simulated annealing to an already good solution to make it a lot better. Simulated annealing's name and terms are derived from physical annealing, the process of letting metals or glass cool down and harden while removing internal stresses. An Overview Simulated Annealing is an approximation algorithm. It's generally useful in problems with low constraints (i.e. $$$n \leq 50$$$ or $$$n \leq 100$$$) where you need to find the minimum/maximum of something over all possible states (and there are usually way too many of them to check). In general, it's good at finding the global maximum/minimum of a function. For the rest of the blog, I'm going to assume that you want to find the minimum of the function (the maximum case is equivalent). A similar algorithm, that is easier to understand, is called hill-climbing. The way hill-climbing works is that you start with a random state, you find any neighbor of the state, and set the current state to that neighbor if the value of the neighbor state is less than the value of the current state. A "neighbor" of some state is another state that you can get to by applying some small transformation to the previous state (I'll explain more about neighbors in the future). If you look at the image down below, the hill climbing algorithm would look like this: - Start at some random $$$x$$$-value - Change $$$x$$$ by either $$$-1$$$ or $$$+1$$$ (pick the smaller one). In this case $$$x-1$$$ and $$$x+1$$$ are the neighbors of the state. - Repeat until both $$$x-1$$$ and $$$x+1$$$ are larger. 
The issue with this algorithm is that it often gets stuck in a local minimum instead of a global minimum. Simulated annealing helps fix this issue by sometimes allowing a step to a worse neighbor, which could allow one to reach the global minimum even if it isn't the same as the local minimum. At each step of the algorithm, you consider some neighboring state and probabilistically decide whether you move to the next state or keep the current state. You repeat this step until you exhaust the time limit.

The crux of simulated annealing is the acceptance probability function, which determines whether you'll take some step or not, depending on the temperature (a value that determines how big of a "bad" step you are going to take). In other words, the higher the temperature, the worse of a state you allow the algorithm to go to (as determined by the acceptance probability function). As iterations progress, the temperature gets lower and lower, which causes the algorithm to take smaller and smaller steps until it converges at a global minimum. Here is some pseudocode:

    Let s = random(state)        // gets any random valid state
    Let best = s
    while elapsed_time() <= time_limit:
        Let t = temperature(elapsed_time() / time_limit)  // returns temperature given the percent done
        Let next = neighbor(s)   // neighbor of a state
        if value(s) < value(best):
            best = s
        // P -> acceptance probability function: returns the probability that you'll
        // move to the next state given the two values and the current temperature
        if P(value(s), value(next), t) >= random(0, 1):
            s = next
    print(value(best))

In the following sections I'll describe what each function means, common examples of each function, and properties that each function should have.

Neighbor Function

A neighbor function is just a function that takes in a state and returns a random new state after doing some transformation to the previous state. An important property of the neighbor function is that it doesn't change the answer by too much.
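To make the pseudocode concrete, here is a direct Python translation, run on a toy one-dimensional objective. The objective $$$x^2 + 10\sin(x)$$$ (which has a global minimum near $$$x \approx -1.3$$$ plus worse local minima), the iteration budget standing in for the time limit, and all constants are my own illustrative choices, not something from a specific problem:

```python
import math
import random

def value(x):
    # Toy objective with several local minima; global minimum near x = -1.3.
    return x * x + 10 * math.sin(x)

def neighbor(x):
    # Small random perturbation of the current state.
    return x + random.uniform(-1.0, 1.0)

def P(old, new, temp):
    # Metropolis acceptance: always take improvements, sometimes take worse states.
    if new < old:
        return 1.0
    return math.exp((old - new) / temp)

def anneal(iterations=50000, t0=100.0, a=0.9995):
    random.seed(0)                    # fixed seed so runs are repeatable
    s = random.uniform(-10.0, 10.0)   # start from a random valid state
    best = s
    t = t0
    for _ in range(iterations):       # iteration budget instead of a wall clock
        nxt = neighbor(s)
        if value(s) < value(best):
            best = s
        if P(value(s), value(nxt), t) >= random.random():
            s = nxt
        t *= a                        # geometric cooling
    if value(s) < value(best):
        best = s
    return best

best = anneal()
print(best, value(best))
```

While the temperature is high the walk explores almost freely and `best` records the lowest-valued state seen; as $$$t$$$ decays, the acceptance test degenerates into plain hill climbing.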
Let's start with the example of TSP. An example of a valid neighbor function for TSP is one where you swap the order of two random cities in a state to get a random neighbor of the state. This only affects the values for those two cities, so the answer doesn't change much.

Another important property of the neighbor function (i.e. how you decide what your neighbors are) is that it should be possible to get from any state to any other state in a small number of steps. In the case of TSP, it only takes a linear number of swaps to get from any state to any other state; the hard part of the problem is deciding which swaps to take (which simulated annealing takes care of).

Note: the second condition is important for the accuracy of simulated annealing. For example, simulated annealing wouldn't really work well on a 2-d graph (like the picture I have above — that was purely to demonstrate the existence of local minima), as it takes a lot of steps to move from one end of the graph to the other.

Temperature Function

The temperature function decides how "willing" the algorithm will be to move to a worse state. In other words, if the temperature is 0, you'd only ever move to better states (which becomes the hill-climbing algorithm). If the temperature is infinity, you'd move to any state, regardless of how good/bad it is. In simulated annealing, you want the temperature to go from something high to something low. Initially, you want to be able to explore the different possible states, so you have a better chance at reaching the global minimum. At the end, when most of the time is used already, you want to keep taking better and better states, hoping that you're already close to the global minimum.

There are 3 common temperature reduction rules:

- $$$t = t \cdot a$$$
- $$$t = t - a$$$
- $$$t = \frac{t}{1 + a \cdot t}$$$

Where $$$a$$$ is just some constant that you choose.
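As a quick numerical sketch of how the three rules behave (the starting temperature and the constants $$$a$$$ here are arbitrary values I picked):

```python
def geometric(t, a):
    # t = t * a, with a slightly below 1
    return t * a

def linear(t, a):
    # t = t - a, a small positive constant
    return t - a

def inverse(t, a):
    # t = t / (1 + a * t): decays fast at first, then slower and slower
    return t / (1 + a * t)

t_geo, t_lin, t_inv = 100.0, 100.0, 100.0
for _ in range(100):
    t_geo = geometric(t_geo, 0.99)
    t_lin = linear(t_lin, 0.5)
    t_inv = inverse(t_inv, 0.001)

print(round(t_geo, 3), round(t_lin, 3), round(t_inv, 3))  # → 36.603 50.0 9.091
```

Note how differently they cool from the same $$$t_0 = 100$$$: after 100 steps the geometric rule has dropped to about a third, the linear one to exactly half, and the inverse rule to under a tenth.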
The most common one in my experience is the geometric one (the 1st one), where you start with $$$t_0 = 10^9$$$ or $$$t_0 = 10^5$$$ or some other high value and set $$$a = 0.99999$$$.

Acceptance Probability Function

This is the crux of simulated annealing. The acceptance probability function is the function that determines the probability of going from one state to a neighbor state. Let $$$P(old, new, temp)$$$ be the probability of transitioning from a state with value $$$old$$$ to a state with value $$$new$$$ if the current temperature is $$$temp$$$. If we were to use hill climbing, the function would look like this:

    def P(old, new, temp):
        if new < old:
            return 1.0
        else:
            return 0.0

This is because you never transition to the next state if its value is worse. The most common function that's used for simulated annealing is the Metropolis-Hastings algorithm. This changes the acceptance probability function to:

    def P(old, new, temp):
        if new < old:
            return 1.0
        return exp((old - new) / temp)

Now, if the new state is worse you still might transition to it, which makes it more likely that you reach a global minimum. Another important thing to note is that as the temperature decreases, the probability that you transition to a new (worse) state also decreases, which is exactly what we wanted.

Example Problems

- Try implementing TSP
- USACO subsequence reverse
- DIY Tree
- AHC001 A (suggested by TheScrasse)

Thanks for reading! This was my first tutorial blog, so let me know if there's anything that should be added/changed, I'm always open to suggestions. If you have any other example problems, let me know and I'll add them to the list. Additionally, thanks to moo. for proofreading the post!
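Putting all three pieces together for the TSP example from the Neighbor Function section: random two-city swaps as the neighbor function, geometric cooling, and Metropolis-Hastings acceptance. The instance (30 random points in a unit square) and every constant below are my own toy choices:

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length(order, pts):
    # Total length of the closed tour visiting pts in the given order.
    n = len(order)
    return sum(dist(pts[order[i]], pts[order[(i + 1) % n]]) for i in range(n))

def solve_tsp(pts, iterations=20000, t0=10.0, a=0.9995):
    n = len(pts)
    s = list(range(n))
    random.shuffle(s)                 # random starting tour
    cur_len = tour_length(s, pts)
    best, best_len = s[:], cur_len
    t = t0
    for _ in range(iterations):
        i, j = random.randrange(n), random.randrange(n)
        s[i], s[j] = s[j], s[i]       # neighbor: swap two random cities
        new_len = tour_length(s, pts)
        if new_len < cur_len or random.random() < math.exp((cur_len - new_len) / t):
            cur_len = new_len         # accept the neighbor
            if cur_len < best_len:
                best, best_len = s[:], cur_len
        else:
            s[i], s[j] = s[j], s[i]   # reject: undo the swap
        t *= a                        # geometric cooling
    return best, best_len

random.seed(42)
pts = [(random.random(), random.random()) for _ in range(30)]
best, best_len = solve_tsp(pts)
print(best_len)
```

Recomputing the full tour length after every swap keeps the sketch short; in a real solution you'd update only the edges adjacent to the two swapped cities, making each step O(1).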
http://codeforces.com/blog/entry/94437
Using Extension Methods in .NET Framework 2.0

The code base I am currently working with has a collection class that inherits from ArrayList. The purpose of this class is to offer a unique list of objects and to provide case-insensitive comparisons when adding a unique item to the collection. The collection does not need to be unique, but a way to add unique items to the list is required.

My task the last week was to become familiar with the code base and identify possible refactorings. Using NDepend, I was able to determine that this class is typically used with strings. One possible refactoring I suggested was to replace this custom class with List<string>.

Doing a global search and replace to change this custom ArrayList class with List<string> would not preserve the uniqueness behavior. Our first stab at tackling this problem was to create a helper method that added an item to a given list only if that list did not already contain the item. We wrote a macro to quickly turn the old references into the new helper references. The code looked something like this:

    tableList.AddUnique("item_table")
    ListHelper.AddUnique(tableList, "item_table")

This seemed to get us around the uniqueness hurdle. It was a bit verbose, but it got the job done and would eventually allow us to get rid of the custom collection class altogether. We started hacking away, taking turns at the keyboard with three sets of eyes making sure we were hitting the right items and evaluating between List<string> and the custom class. After a few hours of locating problem areas and fixing them with a handy macro we created out of sheer boredom, the hit squad decided this was not working. I mentioned that if we were using Visual Studio 2008, we could write an extension method that would simply attach an AddUnique() method to List<string> itself.

I left the 8 hour marathon triple-programming session with the goal of playing around with implementing extension methods in Visual Studio 2008.
Since I was not familiar with actually writing them, I fired up my IDE and wrote a couple tests for what I wanted to happen.

    [Test]
    public void Can_Add_Unique_String_To_List_Of_String()
    {
        var list = new List<String>();
        list.AddUnique("one");
        list.AddUnique("one");
        Assert.AreEqual(1, list.Count);
    }

    [Test]
    public void Can_Add_Case_Insensitive_Unique_String_To_List_Of_String()
    {
        var list = new List<string>();
        list.AddUnique("one");
        list.AddUnique("ONE");
        Assert.AreEqual(1, list.Count);
    }

I flipped over to my Extensions library and added a ListOfStringExtensions class. To get the code to compile I quickly stubbed out the method like this:

    public static void AddUnique(this List<string> list, string item) { }

Using the ReSharper unit test runner, I verified that my tests indeed fail. Then I moved on to implementing the method logic. My first attempt at implementing the method looked something like this:

    public static void AddUnique(this List<string> list, string item)
    {
        foreach(var s in list)
        {
            if(s == item)
                return;
        }
        list.Add(item);
    }

This allowed my first test to pass, but not the second. So I modified the implementation like so:

    public static void AddUnique(this List<string> list, string item)
    {
        foreach(var s in list)
        {
            if(String.Compare(s, item, true) == 0)
                return;
        }
        list.Add(item);
    }

Now both tests pass. I continued down this TDD path until I had a nice set of extension methods and unit tests that satisfied all the requirements to get rid of our custom collection class. In the end I created AddUnique, AddUniqueRange, CaseInsensitiveContains, IsUnique and ToUniqueList extension methods, which all work nicely together with a full suite of unit tests.

The next task at hand was to get the extension methods working in projects that were targeting the 2.0 framework. I set my unit test class to target 2.0 and verified that they still worked. I got a warning as soon as I set the framework target that I had references to projects targeting a different framework, but the tests ran fine.
I then changed the target of my extension library to 2.0 and got this nasty error at compile time:

    Cannot define a new extension method because the compiler required type 'System.Runtime.CompilerServices.ExtensionAttribute' cannot be found. Are you missing a reference to System.Core.dll?

Hrm.. ExtensionAttribute seems to be a 3.x feature. But I had started out this adventure reading ScottGu's blog where he says that extension methods are language syntactical sugar and should work fine with the 2.0 framework. So I fired up Chrome and hit Google. The first result for my query happened to be my good friend Nate Kohari's blog. Nate is the creator of Ninject, the amazing dependency injection framework. He also recently released his new web site IdeaVine. He is an awesome guy, and as usual he had already blazed the path I was walking down. According to Nate, all I needed to do was add an attribute class in a specific namespace to get around the compiler issue.

    //override the .net 3.5 compiler services for .net 2.0 compatibility
    //see:
    namespace System.Runtime.CompilerServices
    {
        [AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
        public class ExtensionAttribute : Attribute { }
    }

Everything compiles and tests run fine targeting the 2.0 framework. In the morning I plan to verify this by running sample code on a fresh box with only the 2.0 framework installed. As always, you can get my full source code here.

About Author

I am a passionate engineer with an interest in shipping quality software, building strong collaborative teams and continuous improvement of my skills, team and the product.
https://iamnotmyself.com/2008/09/22/using-extension-methods-in-net-framework-2-0/
import "github.com/elves/elvish/pkg/eval"

Package eval handles evaluation of parsed Elvish code and provides runtime facilities.

Package files: args_walker.go boilerplate.go editor.go env_list.go eval.go exception.go external_cmd.go frame.go glob.go go_fn.go interrupts.go module_math.go must.go ns.go op.go options.go port.go process_unix.go purely_eval.go pwd.go resolve.go source.go stacktrace.go state.go std_ports.go testutils.go unwrap.go

valueCanOnlyAssignList = errors.New("can only assign compatible values")

var (
    ClosedChan = make(chan interface{})
    // BlackholeChan is a channel writes onto which disappear, suitable for use as
    // a placeholder channel output.
    BlackholeChan = make(chan interface{})
    // DevNull is /dev/null.
    DevNull *os.File
    // DevNullClosedChan is a port made up from DevNull and ClosedChan,
    // suitable as a placeholder input port.
    DevNullClosedChan *Port
)

var (
    // NoArgs is an empty argument list. It can be used as an argument to Call.
    NoArgs = []interface{}{}
    // NoOpts is an empty option map. It can be used as an argument to Call.
    NoOpts = map[string]interface{}{}
)

ErrArityMismatch is thrown by a closure when the number of arguments the user supplies does not match with what is required.

ErrMoreThanOneRest is returned when the LHS of an assignment contains more than one rest variable.

ErrNotInSameProcessGroup is thrown when the process IDs passed to fg are not in the same process group.

ErrStoreNotConnected is thrown by dir-history when the store is not connected.

IsBuiltinSpecial is the set of all names of builtin special forms. It is intended for external consumption, e.g. the syntax highlighter.

var MathNs = NewNs().AddGoFns("math", map[string]interface{}{
    "abs":   math.Abs,
    "ceil":  math.Ceil,
    "floor": math.Floor,
    "round": math.Round,
})

MathNs contains essential math functions.

OK is a pointer to the zero value of Exception, representing the absence of exception.

Cause returns the Cause field if err is an *Exception. Otherwise it returns err itself.
ComposeExceptionsFromPipeline takes a slice of Exception pointers and composes a suitable error. If all elements of the slice are either nil or OK, nil is returned. If there is exactly one non-nil, non-OK Exception, it is returned. Otherwise, a PipelineError built from the slice is returned, with nil items turned into OK's for easier access from Elvish script.

EachExternal calls f for each name that can resolve to an external command. TODO(xiaq): Windows support.

GetCompilationError returns a *diag.Error and true if the given value is a compilation error. Otherwise it returns nil and false.

InTempHome is like util.InTestDir, but it also sets HOME to the temporary directory and restores the original HOME in cleanup. TODO(xiaq): Move this into the util package.

MustCreateEmpty creates an empty file, and panics if an error occurs. It is mainly useful in tests.

MustMkdirAll calls os.MkdirAll and panics if an error is returned. It is mainly useful in tests.

MustWriteFile calls ioutil.WriteFile and panics if an error occurs. It is mainly useful in tests.

NewCompilationError creates a new compilation error.

NewExternalCmdExit constructs an error for representing a non-zero exit from an external command.

Styled turns a string, a ui.Segment or a ui.Text into a ui.Text by applying the given stylings.

Test runs test cases. For each test case, a new Evaler is created with NewEvaler.

TestWithSetup runs test cases. For each test case, a new Evaler is created with NewEvaler and passed to the setup function.

type AddDirer interface {
    // AddDir adds a directory with the given weight to some storage.
    AddDir(dir string, weight float64) error
}

AddDirer wraps the AddDir function.

type Callable interface {
    // Call calls the receiver in a Frame with arguments and options.
    Call(fm *Frame, args []interface{}, opts map[string]interface{}) error
}

Callable wraps the Call method.

type Closure struct {
    ArgNames []string
    // The name for the rest argument. If empty, the function has fixed arity.
    RestArg     string
    OptNames    []string
    OptDefaults []interface{}
    Op          effectOp
    Captured    Ns
    SrcMeta     *Source
    DefBegin    int
    DefEnd      int
}

Closure is a closure defined in Elvish script. Each closure has its unique identity.

Call calls a closure.

Equal compares by address.

Hash returns the hash of the address of the closure.

Index supports the introspection of the closure. Supported keys are "arg-names", "rest-arg", "opt-names", "opt-defaults", "body", "def" and "src".

IterateKeys calls f with all the valid keys that can be used for Index.

Kind returns "fn".

Repr returns an opaque representation "<closure 0x23333333>".

Editor is the interface that the line editor has to satisfy. It is needed so that this package does not depend on the edit package.

EnvList is a variable whose value is constructed from an environment variable by splitting at pathListSeparator. Changes to it are also propagated to the corresponding environment variable. Its elements cannot contain pathListSeparator or \0; attempting to put any in its elements will result in an error.

Get returns a Value for an EnvPathList.

Set sets an EnvPathList. The underlying environment variable is set.

Eval evaluates an Op using the specified ports.

EvalInTTY evaluates an Op in the current terminal. It uses the stdin, stdout and stderr to build the ports, relays SIGINT from the terminal to ev.intCh, and puts Elvish in the foreground after evaluation finishes. TODO(xiaq): This function can only be used to evaluate an Op, and cannot be used to call functions with stdPorts. Make the Evaler initialize a stdPorts on construction, instead of in this function, so that NewTopFrame does not require the caller to supply the ports.

EvalSourceInTTY evaluates Elvish source code in the current terminal.

InstallBundled installs a bundled module to the Evaler.

InstallDaemonClient installs a daemon client to the Evaler.

InstallModule installs a module to the Evaler so that it can be used with "use $name" from script.
func (ev *Evaler) PurelyEvalPartialCompound(cn *parse.Compound, upto *parse.Indexing) (string, error)

Index supports introspection of the exception. Currently the only supported key is "cause".

IterateKeys calls f with all the valid keys that can be used in Index.

Kind returns "exception".

PPrint pretty-prints the exception.

Repr returns a representation of the exception. It is lossy in that it does not preserve the stacktrace.

If the command was stopped rather than terminated, the Pid field contains the pid of the process.

func (exit ExternalCmdExit) Error() string

Flow is a special type of error used for control flows.

Control flows.

PPrint pretty-prints the flow "error".

Repr returns a representation of the flow.

Call calls a function with the given arguments and options. It does so in a protected environment so that exceptions thrown are wrapped in an Error.

func (fm *Frame) CallWithOutputCallback(fn Callable, args []interface{}, opts map[string]interface{}, valuesCb func(<-chan interface{}), bytesCb func(*os.File)) error

CallWithOutputCallback calls a function with the given arguments and options, feeding the outputs to the given callbacks. It does so in a protected environment so that exceptions thrown are wrapped in an Error.

func (fm *Frame) CaptureOutput(fn Callable, args []interface{}, opts map[string]interface{}) (vs []interface{}, err error)

CaptureOutput calls a function with the given arguments and options, capturing and returning the output. It does so in a protected environment so that exceptions thrown are wrapped in an Error.

Close releases resources allocated for this frame. It always returns a nil error. It may be called only once.

func (ctx *Frame) ExecAndUnwrap(desc string, op valuesOp) ValuesUnwrapper

ExecAndUnwrap executes a ValuesOp and creates an Unwrapper for the obtained values.
func (fm *Frame) ExecWithOutputCallback(op Op, valuesCb func(<-chan interface{}), bytesCb func(*os.File)) error

ExecWithOutputCallback executes an Op, feeding the outputs to the given callbacks.

ResolveVar resolves a variable. When the variable cannot be found, nil is returned.

SetLocal changes the local scope of the Frame.

GlobPattern is an ephemeral Value generated when evaluating tilde and wildcards.

func (gp GlobPattern) Concat(v interface{}) (interface{}, error)
func (gp GlobPattern) Equal(a interface{}) bool
func (gp GlobPattern) Hash() uint32
func (gp GlobPattern) Index(k interface{}) (interface{}, error)
func (GlobPattern) Kind() string
func (gp GlobPattern) RConcat(v interface{}) (interface{}, error)
func (gp GlobPattern) Repr(int) string

GoFn uses reflection to wrap arbitrary Go functions into Elvish functions.

NewGoFn creates a new GoFn instance.

Call calls the implementation using reflection.

Equal compares identity.

Hash hashes the address.

Kind returns "fn".

Repr returns an opaque representation "<builtin $name>".

Inputs is the type that the last parameter of a Go-native function can take. When that is the case, it is a callback to get inputs. See the doc of GoFn for details.

Ns is a map from names to variables.

NewNs creates an empty namespace.

Repr(indent int) string

Repr returns a representation of the pipeline error, using the multi-error builtin.

Port conveys a data stream. It always consists of a byte band and a channel band.

Close closes a Port.

Fork returns a copy of a Port with the Close* flags unset.

PwdVariable is a variable whose value always reflects the current working directory. Setting it changes the current working directory.

func (PwdVariable) Get() interface{}

Get returns the current working directory. It returns /unknown/pwd when it cannot be determined.

func (pwd PwdVariable) Set(v interface{}) error

Set changes the current working directory.
RawOptions is the type of an argument a Go-native function can take to declare that it wants to parse options itself. See the doc of GoFn for details.

type Source struct {
    Type SourceType
    Name string
    Root bool
    Code string
}

Source describes a piece of source code.

NewInteractiveSource returns a Source for a piece of code entered interactively.

NewInternalGoSource returns a Source for use as a placeholder when calling Elvish functions from Go code. It has no associated code.

NewModuleSource returns a Source for a piece of code used as a module.

NewScriptSource returns a Source for a piece of code used as a script.

SourceType records the type of a piece of source code.

const (
    InvalidSource SourceType = iota
    // A special value used for the Frame when calling Elvish functions from Go.
    // This is the only SourceType without associated code.
    InternalGoSource
    // Code from an internal buffer.
    InternalElvishSource
    // Code entered interactively.
    InteractiveSource
    // Code from a file.
    FileSource
)

func (t SourceType) String() string

TestCase is a test case for Test.

That returns a new TestCase with the specified source code. Multiple arguments are joined with newlines.

DoesNotCompile returns an altered TestCase that requires the source code to fail compilation.

DoesNothing returns t unchanged. It is used to mark that a piece of code should simply do nothing. In particular, it shouldn't have any output and should not error.

Prints returns an altered TestCase that requires the source code to produce the specified output in the byte pipe when evaluated.

Puts returns an altered TestCase that requires the source code to produce the specified values in the value channel when evaluated.

PutsStrings returns an altered TestCase that requires the source code to produce the specified strings in the value channel when evaluated.

Throws returns an altered TestCase that requires the source code to throw the specified exception when evaluated.
ThrowsAny returns an altered TestCase that requires the source code to throw any exception when evaluated.

ValueUnwrapper unwraps one Value.

func (u ValueUnwrapper) Any() (interface{}, error)
func (u ValueUnwrapper) CommandHead() (Callable, error)
func (u ValueUnwrapper) Fd() (int, error)
func (u ValueUnwrapper) FdOrClose() (int, error)
func (u ValueUnwrapper) Int() (int, error)
func (u ValueUnwrapper) NonNegativeInt() (int, error)
func (u ValueUnwrapper) String() (string, error)

ValuesUnwrapper unwraps []Value.

func (u ValuesUnwrapper) One() ValueUnwrapper

One unwraps the value to be exactly one value.

☞ When evaluating closures, async access to global variables and ports can be problematic.

Package eval imports 40 packages (graph) and is imported by 9 packages. Updated 2020-03-28.
https://godoc.org/github.com/elves/elvish/pkg/eval
Installation

Re: Installation

Updated and simplified documentation to match the 0.9.9-3 release.

Re: Installation

This is great stuff for a noob like me ;) I'm waiting for the next part of the tutorial, since I need to implement OpenGL inside a window. Oh, just a minor note: could you show how to get GLControl running in an application like this? I tried out the other tutorial on this subject, but since I am on OSX I couldn't add the GLControl as a component to the form (the first part of the tutorial).

Re: Installation

Just add the GLControl as a class field. Of course, you can modify the control's size, dock style, anchoring, name, vsync and anything else you'd like. (The simplest approach is to add a glControl.Dock = DockStyle.Fill; so that it covers the whole form.)

Re: Installation

Fiddler, that code you pasted doesn't seem to work. It complains about the GLControl namespace not being found, and when I fix it (which I think should be correct), it just flashes the window and crashes with the following output:

Using Snow Leopard and the latest versions of Mono and OpenTK :)

Re: Installation

VeliV, I'd suggest creating a new bug report for this issue. I just committed a fix, but I cannot actually test that GLControl is now working correctly. It would really help if you could install a subversion client and test with OpenTK from SVN.

Re: Installation

Yah, had some problems with subversion, since the solution and project files aren't included in it (couldn't open the project in Mono). But I managed around it. But it still crashes. Here is the output:

I'll go add an issue, although I am not really sure what to put in it :D

Re: Installation

The stack trace should be enough. You can generate the missing project files by running Build.exe from a terminal ("mono Build.exe", just press enter).
Build the debug version of OpenTK (so that the stack trace will include line numbers) and replace the OpenTK reference in your project with the new one (assembly version should read 0.9.9-4). You might have to remove the current reference before adding the new one to make this work. If it still crashes, please post the new stack trace, which will contain line numbers instead of [0x00000] values.

Re: Installation

I feel quite dumb for asking this, but how do I draw something in it? :P I got stuff working in GameWindow, but I don't know where to put the GL commands with GLControl :)

Re: Installation

Add a paint event handler after creating the GLControl:

    glControl1.Paint += new System.Windows.Forms.PaintEventHandler(this.glControl1_Paint);

And then add your OpenGL commands there:

Re: Installation

Thanks, works now.
http://www.opentk.com/node/1329
Created on 2012-04-13 23:07 by vdjeric, last changed 2012-12-26 19:26 by asvetlov. This issue is now closed.

When dealing with a new connection, SocketServer.BaseRequestHandler.__init__ first calls the request handler (self.handle below) and then calls cleanup code which closes the connection (self.finish below).

class BaseRequestHandler:
    def __init__(self, request, client_address, server):
        < ... snip ... >
        try:
            self.handle()
        finally:
            self.finish()

The issue arises when a client disconnects suddenly during the self.handle() call. The handler may attempt to write data to the disconnected socket. This will cause an exception (which is correct), but somehow data will still be added to the connection's buffer and self.wfile.closed will be False! As a result, BaseRequestHandler.finish() will attempt to flush the connection's buffer, and this will raise another exception which can't be handled without modifying the library code.

----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 62718)
  ...41, in __init__
    self.finish()
  File "C:\Python27\lib\SocketServer.py", line 694, in finish
    self.wfile.flush()
  File "C:\Python27\lib\socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 10053] An established connection was aborted by the software in your host machine
----------------------------------------

I've provided a toy server below; you can reproduce the issue by submitting a request to it with curl and then immediately killing curl:

curl -d "test"

Toy server code:
===========================

import BaseHTTPServer
import SocketServer
import time

class ThreadedHTTPServer(BaseHTTPServer.HTTPServer):
    pass

class RequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_POST(self):
        try:
            length = int(self.headers["Content-Length"])
            request = self.rfile.read(length)
            print "Sleeping. Kill the 'curl' command now."
            time.sleep(10)
            print "Woke up. You should see a stack trace from the problematic exception below."
            print "Received POST: " + request
            self.send_response(200)  # <------- This somehow adds to the connection's buffer!
            self.end_headers()
        except Exception as e:
            print "Exception: " + str(e)  # <----- This exception is expected

httpd = ThreadedHTTPServer(("127.0.0.1", 8000), RequestHandler)
httpd.serve_forever()
httpd.server_close()
===========================

Thanks for the report. Several things are going on here:

1. Even though socketserver's StreamRequestHandler uses an unbuffered wfile for the socket:

"""
class StreamRequestHandler(BaseRequestHandler):
    [...]
"""

data is internally buffered by socket._fileobject:

"""
def write(self, data):
    data = str(data)  # XXX Should really reject non-string non-buffers
    if not data:
        return
    self._wbuf.append(data)
    self._wbuf_len += len(data)
    if (self._wbufsize == 0 or
            self._wbufsize == 1 and '\n' in data or
            self._wbuf_len >= self._wbufsize):
        self.flush()
"""

Usually this doesn't turn out to be a problem, because if the object is unbuffered the buffer is flushed right away. But in this specific case it is a problem, because a subsequent call to flush() will try to drain the data buffered temporarily, which triggers the second EPIPE from StreamRequestHandler.finish().

Note that Python 3.3 doesn't have this problem.

While this is arguably bad behavior, I don't feel comfortable changing it in 2.7 (either by changing the write() and flush() methods, or by just checking that the _fileobject is indeed buffered before flushing it). Moreover, this wouldn't solve the problem at hand in case the user chose to use a buffered connection (StreamRequestHandler.wbufsize > 0).

2. I think the root cause of the problem is that the handler's finish() method is called even when an exception occurred during the handler, in which case nothing can be assumed about the state of the connection:

"""
class BaseRequestHandler:
    [...]
        self.setup()
        try:
            self.handle()
        finally:
            self.finish()
"""

Which is funny, because it doesn't match the documentation:

"""
.. method:: RequestHandler.finish()

   Called after the :meth:`handle` method to perform any clean-up actions required. The default implementation does nothing. If :meth:`setup` or :meth:`handle` raise an exception, this function will not be called.
"""

So the obvious solution would be to change the code to match the documentation, and not call finish() when an exception was raised. But that would be a behavior change, and could introduce resource leaks. For example, here's StreamRequestHandler's finish() method:

"""
def finish(self):
    if not self.wfile.closed:
        self.wfile.flush()
    self.wfile.close()
    self.rfile.close()
"""

While in this specific case it wouldn't lead to an FD leak (because the underlying socket is closed by the server code), one could imagine a case where it could have a negative impact, so I'm not sure about changing this.

Finally, you could get rid of this error by overriding the StreamRequestHandler.finish() method, or by catching the first exception in the handle() method and closing the connection explicitly.

So I'd like to know what others think about this :-)

Thank you for taking a look, Charles-François. I should note that catching the first exception in the request handler and then calling self.wfile.close() wouldn't fully solve the issue. The self.wfile.close() call would throw another broken pipe exception (which is ok & can be caught), but the BaseHTTPServer code would also throw an exception when it tries to flush the (now closed) wfile after returning from the do_POST request handler.
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 50611)
38, in __init__
    self.handle()
  File "C:\Python27\lib\BaseHTTPServer.py", line 340, in handle
    self.handle_one_request()
  File "C:\Python27\lib\BaseHTTPServer.py", line 329, in handle_one_request
    self.wfile.flush() #actually send the response if not already done.
  File "C:\Python27\lib\socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
AttributeError: 'NoneType' object has no attribute 'sendall'
----------------------------------------

I think this needs serious consideration. There needs to be a "socket error" case cleanup path that releases resources but ignores further socket errors. Clearly, the finish call is intended to be called, and I think the documentation is in error. However, the finish call should also be able to cope with the connection having been reset and handle such errors as may occur. Please consider the attached patch and see if it solves the issue.

> Please consider the attached patch and see if it solves the issue.

The patch looks OK (although I'd prefer a BSD errno example, such as ECONNRESET, instead of a winsock one). We should also update the documentation that states that finish() won't be called if handle() raises an exception.

> although I'd prefer a BSD errno example, such as ECONNRESET, instead of a winsock one.

New patch includes documentation change. If 2.7 is still in bugfix mode, then this patch could probably be accepted.

Oops, I misread. ECONNABORTED is fine.

> New patch includes documentation change.

LGTM.

> If 2.7 is still in bugfix mode, then this patch could probably be accepted.

I guess so. So, should I commit this? The change is really trivial.

> So, should I commit this? The change is really trivial.

LGTM. This should be applied to 2.7.
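For readers who want the gist of the change without reading the diff: the idea is simply to swallow socket errors raised while flushing a connection whose peer is already gone. Below is a minimal, self-contained Python 3 sketch of that idea; the stand-in file objects are mine for illustration and are not the real socketserver code.

```python
import io

class BrokenPipeFile(io.BytesIO):
    # Stand-in for a socket file object whose peer has reset the
    # connection: flushing buffered data fails with EPIPE.
    def flush(self):
        raise OSError(32, "Broken pipe")

def finish(wfile, rfile):
    # Sketch of the idea behind the patch: flush on close, but
    # ignore socket errors from a connection the peer already reset.
    try:
        if not wfile.closed:
            wfile.flush()
        wfile.close()
    except OSError:
        pass  # the peer is gone; nothing left worth flushing
    rfile.close()

wfile, rfile = BrokenPipeFile(), io.BytesIO()
finish(wfile, rfile)   # no exception escapes
print(rfile.closed)    # -> True
```

The committed fix wraps the flush in just such a try/except, per the changeset message "Ignore socket errors raised when flushing a connection on close."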
New changeset df51cb946d27 by Kristján Valur Jónsson in branch '2.7':
Issue #14574: Ignore socket errors raised when flushing a connection on close.

Not clear to me: does Python 3 have the same bug?

Semantically, but I am told that due to py3k's different file buffering, those errors don't percolate through. According to Charles-Francois' post from Apr 14th: "Note that Python 3.3 doesn't have this problem."

> Semantically, but I am told that due to py3k's different file buffering, those errors don't percolate through. According to Charles-Francois' post from Apr 14th:
> "Note that Python 3.3 doesn't have this problem."

It's not affected by the problem, but it's really just accidental, so it would probably be better to apply this to py3k as well.

Ok, I'll have a look.

New changeset 7e5d7ef4634d by Kristján Valur Jónsson in branch '3.2':
Issue #14574: Ignore socket errors raised when flushing a connection on close.

New changeset 7734c3020a47 by Kristján Valur Jónsson in branch '3.3':
Merge with 3.2

New changeset 2d1cfbaef9a2 by Kristján Valur Jónsson in branch 'default':
Merge with 3.3

There, applied the same changes to the 3.x branches. Apparently the DOC folder wasn't branched off for 3.3, or something; at least, there is no separate 3.3 and 'default' version of socketserver.rst. Or maybe it is just Hg that got the better of me.

Everything is fine. Close the issue.
http://bugs.python.org/issue14574
Agenda See also: IRC log

<scribe> scribe: simonstey
<Arnaud> PROPOSED: Approve minutes of the 10 March 2016 Telecon:
<pfps> minutes look OK
RESOLUTION: Approve minutes of the 10 March 2016 Telecon:
<Arnaud> PROPOSED: Open ISSUE-134 knowing inverse, ISSUE-135 and/or syntactic sugar, ISSUE-136 Property pair names, ISSUE-137 language tag
<hknublau> +1
+1
<kcoyle> +1
<Dimitris> +1
<hsolbrig> +1
<ericP> +1
RESOLUTION: Open ISSUE-134 knowing inverse, ISSUE-135 and/or syntactic sugar, ISSUE-136 Property pair names, ISSUE-137 language tag

issue-80
<trackbot> issue-80 -- Constraint to limit IRIs against scheme/namespace, possibly with dereferencing -- open
<trackbot>
Arnaud: we talked about it last week; eric sent out an email about how stem works in shex.. where do we stand now?
ericP: was I about to send a proposal on how this would look in shacl?
Arnaud: thought this would be a low hanging fruit

issue-129
<trackbot> issue-129 -- Existential constraints should be consistent -- open
<trackbot>
Arnaud: dimitris raised that issue
... we should spend a little bit of time discussing it; some votes were already cast
Dimitris: the definition of existential constraints might not be the best one
... but e.g. hasValue works differently from other constraint types in shacl
... while others only work over existing values, this does not
pfps: holger had a perfect description for that
... i see no possibility of changing the meaning of hasValue since it works exactly as it should
Arnaud: so you are saying there is nothing to be fixed?
pfps: no.. espec. the meaning of the newer ones needs some fiddling
... the problem doesn't go away if you get rid of the existential ones
... e.g., equals, minCount have a clear meaning and people would scream at us if we change it
ericP: i'm wondering whether people actually understand the implications of what you are saying?
pfps: when people do db querying, they get confused if they get no answers for a query
Arnaud: example: there must not be a property with a certain value less than x
... does the value need to exist?
Dimitris: this is not an implementation issue; that's the easy part
... I was worried about users actually understanding the meaning of hasValue
... it could e.g., be changed to sh:requiredValue
<pfps> The definition for hasValue is "The property sh:hasValue can be used to verify that the focus node has a given RDF node among the values of the given predicate." This seems to be very obvious.
jamsden: what's the confusion here?
<pfps> For sh:in "The property sh:in exclusively enumerates the values that a property may have. When specified, each value of the given property must be a member of the specified list."
<pfps> Both seem quite obvious and the right definition
pfps: there is no such thing as undefined in SPARQL
... it's there or not
... the current definitions seem pretty obvious to me
jamsden: same for me
<Dimitris> Jim, the problem is that sh:in ("foo") and sh:hasValue "foo" behave differently
<pfps> In RDF values are not associated with properties.
kcoyle: the cardinality constraints are on the property and not the value, right?
<hsolbrig> how about "includesValue"
pfps: in shacl, every time you are doing something you have a property in hand
<hsolbrig> "has" tends to be fuzzy whether we're dealing with a set or an individual
<hsolbrig> or, to be more orthogonal with "in", just "includes"
<ericP> { [ sh:predicate :foo; sh:hasValue 1; sh:hasValue 2 ] } \ { <s> :foo 1,2,3 }
<hsolbrig> +q
<ericP> { [ sh:predicate :foo; sh:in ( 1 2 ) ] } \ { <s> :foo 1,2,3 } => fail
pfps: e.g. sh:in looks at each of the values separately (for each of those values..)
<ericP> +1 to harold's proposal
<ericP> though i'm curious about use cases for this
<ericP> maybe ditch sh:hasValue ?
pfps: two triples that have the same S, P but different Os resemble something like multivalued properties
<hknublau> hasValue is very common in filters
<kcoyle> I think hasValue has lots of uses -- unless I misunderstand it
<pfps> I agree with Holger that hasValue would be very common in filters
hsolbrig: the confusion comes from the fact that its meaning can be understood as "its value includes"
... or "its value is"
<ericP> hknublau, can you describe how hasValue is used in filters?
<hknublau> "Every person who has bornIn = USA must not travel to Cuba"
Arnaud: so how do we make progress here?
pfps: whoops.. my proposal is simple -> do nothing
Arnaud: let's give it another week, there is no rush on closing now
Arnaud: there was extensive discussion on that on the mailing list

issue-65
<trackbot> issue-65 -- Consistency and cohesiveness of nomenclature (e.g., shapes, scopes, and constraints) -- open
<trackbot>
issue-120
<trackbot> issue-120 -- The spec must be more precise and consistent about when a resource is a shape, a class, and an instance of a class -- closed
<trackbot>
scribe: those issues might be related to that topic
pfps: I'm pretty sure that SPARQL isn't using "instance" anywhere in its spec
hknublau: we certainly need to improve the wording
<ericP> pfps, SPARQL 1.1 uses instance for the notion of "instance mapping"
<ericP> (and a few instances of "for instance")
hknublau: we have some redundancies in the document that were meant to support understanding but might have ended up confusing people
<pfps> the overhead will be about 100 words in a long document
hknublau: I do not agree with pfps that we are violating any ???
Arnaud: we need to make sure that the spec isn't wrong
... for me the downside of pfps' proposal is that it might be a bit painful having to write "SHACL instance" every time
... but at the same time I'm sensitive to his proposal
...
since people might not read the document from the beginning to the end
pfps: I think we need to be crystal clear about the difference between SHACL instance and RDF(S) instance
Dimitris: we could try to remove all uses of instance, but we'll have to see
pfps: I found a hole in the metamodel.. which is kind of disturbing
... when a property is both a target of an inversepropertyconstraint as well as a propertyconstraint, it behaves strangely
[... pfps writing down an example ...]
<pfps> sh:shape sh:property [ a sh:InversePropertyConstraint ; sh:predicate ex:foo ; sh:minCount 2 ]
[ericP & pfps discussing the example]
<ericP> validating <X> as the above:
hknublau: it's unfinished but not broken
... both of your examples are invalid shape graphs
<pfps> sh:shape sh:property [ a sh:PropertyConstraint ; a sh:InversePropertyConstraint ; sh:predicate ex:foo ; sh:minCount 2 ]
<ericP> <Y> :foo <X>. <Z> :foo <X>. <X> :foo <Y>. <X> :foo <Z> .
hknublau: what we could do is make propertyconstraint and inversepropconstraint disjoint
... so a constraint can't be both at the same time
<ericP> <Y> :foo <X>. <Z> :foo <X>. <X> :foo <W>. <X> :foo <K> .
ericP: in shex we just have a flag that says whether something is forwards or backwards
pfps: people can do a lot of silly, stupid and/or smart things in RDF
... you don't want to have to deal with defending your syntax against stupid/silly/.. proposals just because you weren't explicit enough in specifying what's allowed and what's not
Arnaud: I want people to investigate pfps' proposal
<ericP> pfps, did OWL address this by saying that parsing OWL from RDF fails if there is more than one way to parse it?
Arnaud: I think pfps has made a fair amount of effort in providing information about his proposal
<pfps> OWL solves this problem by making strong requirements on graphs that are valid OWL ontologies
<Arnaud>
<pfps> I believe that Holger said that syntax with positional arguments was an anti-pattern
Arnaud: I want to step back from discussing specific issues and discuss pfps' proposal
ericP: maybe pfps wants to give us a short description of his proposal now?
<ericP> pfps: 1) sh:property/invprop. you have to pull out the property and put it in a list; the benefit is that you don't have to worry about not knowing in which direction you have to go
... 2) sh:pattern is a little bit odd right now
<ericP> current: ex:MyShape a sh:Shape ; sh:constraint [ a sh:Shape ; sh:predicate ex:myProperty ; sh:class ex:Person ; sh:in ( ex:Susan ex:Bill ) ] .
<ericP> pfps: ex:MyShape a sh:Shape ; sh:fillers ( ex:myProperty [ a sh:Shape ; sh:class ex:Person ; sh:in ( ex:Susan ex:Bill ) ] ) .
<ericP> current: ex:MyShape a sh:Shape ; sh:property [ a sh:Shape ; sh:predicate ex:myProperty ; sh:class ex:Person ; sh:in ( ex:Susan ex:Bill ) ] .
<ericP> pfps: ex:MyShape a sh:Shape ; sh:propValue ( ex:myProperty [ a sh:Shape ; sh:class ex:Person ; sh:in ( ex:Susan ex:Bill ) ] ) .
pfps: you can't have two properties inside the square brackets to combine together
... currently it's painful to repeat things
... 3) you can actually put e.g. sh:minCount anywhere in the shape
<ericP> ex:PersonShape sh:minCount 1e10 .
pfps: there are no more node/property/invpropertyconstraints anymore
... there are only shapes; very similar to shex
Arnaud: I'm wondering whether the WG thinks we should spend time on looking into this or not
<Arnaud> STRAWPOLL: continue investigating Peter's proposal, there may be something there (+1: agree, 0: not sure, -1: disagree)
<ericP> +1
<hsolbrig> +1
+1
<hknublau> -1
<Labra> +1
<kcoyle> +1
<pfps> +1 (surprise!)
<Dimitris> 0- (I think there are good elements but not user friendly)
<jamsden> +0 but we might adopt some bits and pieces
<pfps> I note that I have updated my previous SHACL implementation for this new syntax. It is about 80% complete.
<jamsden> I don't see the motivation for such a significant change at this late date
<pfps> to be fair, RDF makes for complex, long, and hard-to-understand syntax
<jamsden> these issues just aren't that compelling to me
<jamsden> what is the process for assessing, evaluating and deciding on a resolution?
Arnaud: I encourage everybody to have a read of pfps' proposal

iovka's email: document:
iovka: I'm a formal methods person; so in order to understand SHACL I generated an abstraction of SHACL
... I used Presburger arithmetic for capturing shex (not needed for shacl)
... a future goal is to come up with a transformation between shex <-> shacl
... I would need some support from someone who's more familiar with shacl than me for checking whether I captured shacl correctly or not
Arnaud: I'm quite grateful for what iovka is doing
pfps: the reason why I jumped on hasValue is that I'm not sure whether the current formalism is actually capable of capturing it (haven't looked at it though)
iovka: I had a brief look at it today and it appears to be one of the easiest constraints
<ericP> iovka expressed [] sh:hasValue 1 as ShEx as <S> EXTRA :foo { :foo [1] }
<pfps> my time is likely to be very limited for a while starting very soon
now I will have a read
<Arnaud> trackbot, end meeting
<iovka> quit
http://www.w3.org/2016/03/17-shapes-minutes.html
Things used in this project

Story

This post is all about using the elecrow Crowtail Serial Wifi connector as a WiFi peripheral to get the MSP430 connected to the internet in a cheap and easy fashion! The eight dollar Crowtail uses the ESP8266 WiFi module to communicate with the twelve dollar MSP4305529 LaunchPad through AT commands over UART. A list of the AT commands can be found here. In order to keep the connections neat, the LaunchPad was interfaced with the Grove Base booster pack, which is available in the Seeed for LaunchPad Starter Kit. This is not necessary for interfacing the Crowtail to the MSP430. The temperature and humidity sensor from the Seeed kit is also used in order to provide data for the LaunchPad to publish.

Two different connection methods:

LaunchPad as a web server: The ESP8266 is capable of acting as a WiFi access point. Through AT commands, the LaunchPad can be configured to accept TCP connections and then write data. In the code below, crowtailCode, if the user presses PUSH1 on the MSP4305529 LaunchPad, the LaunchPad will enter server mode. There will then be an open WiFi network with a name of "AI_THINKER _XXXX". The IP address where the LaunchPad is publishing data is shown in the serial monitor in response to the command AT+CIFSR, in this case 192.168.4.1. If the user opens a web browser and navigates to this address, the ESP8266 will accept a connection and then send data through a TCP connection. The data can be controlled in code; in this case it is publishing the temperature and humidity. The example page being published is a simple text page with code to refresh every five seconds. Any HTML code could be printed over serial to the device and then accessed. An example of the webpage hosting results can be seen below:

LaunchPad as a Client:
If the user presses PUSH2 with the code below, the LaunchPad will connect to Thing Speak as a client and, through an API, publish the temperature and humidity data to the website. This data can then be monitored from any web connection. In order to configure this to work with Thing Speak, a free API key must be acquired, and the SSID and WiFi password must be changed in code. An example of the Thing Speak data monitoring is shown below.

Final Thoughts:

This device itself could be used as a cheap and easy weather monitoring station! The results could be accessed over a web interface or even published to a smart watch, so the user could know the temp and humidity for their house or for their exact location. The ESP8266 interface also opens the door for many other IoT applications. It provides both an ability to host connections and an ability to connect to other services, and is an extremely low cost option. It pairs very well with a low power, low cost LaunchPad to enable anyone to develop for an increasingly Internet of Things focused world!
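To make the client-mode request concrete outside the Arduino sketch, here is a small, hedged Python illustration of the same Thing Speak update URL that the firmware assembles in its dataWrite string; the API key and sensor values are placeholders, not real credentials.

```python
from urllib.parse import urlencode

# Placeholder values for illustration only.
api_key = "YOUR_API_KEY"
temperature = 72.5   # degrees F, as read from the DHT sensor
humidity = 40.0      # percent

# ThingSpeak's /update endpoint takes the key plus numbered fields,
# mirroring the GET string the Arduino code sends over the ESP8266.
query = urlencode({"api_key": api_key,
                   "field1": temperature,
                   "field2": humidity})
url = "https://api.thingspeak.com/update?" + query
print(url)
```

Any HTTP-capable client that can issue this GET would publish the same two fields the LaunchPad does.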
Media: Code

crowtailCode.ino (C/C++)

#include <DHT.h>

#define DEBUG true //Prints AT responses to serial terminal, set to False if you do not want to print
#define tempHumidity 23 //Input analog pin for the temp humidity sensor
#define button1 PUSH1
#define button2 PUSH2
#define LED2 GREEN_LED
#define LED RED_LED

String</head><body>";
String webpage = header + "<h1>Weather Values:</h1><p>Temperature = " + String(temperature) + "F<br>Humidity = " + String(humidity) + "%</p></body></html>";
int length = webpage.length();
String cipSend = "AT+CIPSEND=";
cipSend += connectionId;
cipSend += ",";
cipSend += length;
cipSend += "\r\n";
sendData(cipSend,500,DEBUG); //Set up TCP connection
sendData(webpage,500,DEBUG); //Send the data
String closeCommand = "AT+CIPCLOSE=";
closeCommand += 5; // append connection id
closeCommand += "\r\n";
sendData(closeCommand,1000,DEBUG); //close TCP connection
}
}
}
}

void thingSpeak(){
while(digitalRead(button1)){
float temperature = dht.readTemperature(true); //Set to false if you want to read in Celsius
float humidity = dht.readHumidity();
sendData("AT+CIPSTART=\"TCP\",\"184.106.153.149\",80\r\n",500,DEBUG); // turn on server on port 80
String dataWrite = "GET /update?api_key=" + apiKey + "&field1=" + String(temperature) + "&field2=" + String(humidity) + "\r\n";
String cmd = "AT+CIPSEND=";
int length = dataWrite.length();
cmd += length;
cmd += "\r\n";
sendData(cmd,500,DEBUG); //Set up TCP connection
sendData(dataWrite,1000,DEBUG); //Send data to thing speak
long int time = millis();
while((time+5000)>millis() && digitalRead(button1)){
//delay for 5 seconds before sending data or until button1 is pressed
}
}
}

String sendData(String command, const int timeout, boolean debug)
{
String output = "";
Serial1.print(command); // send the read character to the Serial1
long int time = millis();
while( (time+timeout) > millis()) //wait for a response until timeout is up
{
while(Serial1.available())
{
// The esp has data so display its output to the serial window
char c = Serial1.read(); // read the next character.
output += c;
}
if ((output.indexOf("FAIL") != -1) || (output.indexOf("OK") != -1)) { // if a response was received, break from the timeout loop
break;
}
}
if (debug) {
Serial.print(output);
}
return output;
}

Credits: Chris Roberts
https://www.hackster.io/ctroberts/20-wifi-connected-hardware-solution-with-esp8266-fb5995
Low level event reactor for Pyjoyment with asyncio support.

Project description

Pyjo-Reactor-Asyncio

Low level event reactor with asyncio support for Pyjoyment.

Pyjoyment

An asynchronous, event-driven web framework for the Python programming language. Pyjoyment provides its own reactor, which handles I/O and timer events in its own main event loop, but it supports other loops, e.g. libev or asyncio.

See asyncio

This module provides infrastructure for writing single-threaded concurrent code using coroutines, multiplexing I/O access over sockets and other resources, running network clients and servers, and other related primitives. The asyncio module was designed in PEP3156. For a motivational primer on transports and protocols, see PEP3153.

Trollius

Trollius is a port of the asyncio project (PEP3156) to Python 2. Trollius works on Python 2.6-3.5.

See Examples

Non-blocking TCP client/server

import Pyjo.Reactor.Asyncio
import Pyjo.IOLoop

# Listen on port 3000
@Pyjo.IOLoop.server(port=3000)
def server(loop, stream, cid):
    @stream.on
    def read(stream, chunk):
        # Process input chunk
        print("Server: {0}".format(chunk.decode('utf-8')))

        # Write response
        stream.write(b"HTTP/1.1 200 OK\x0d\x0a\x0d\x0a")

        # Disconnect client
        stream.close_gracefully()

# Connect to port 3000
@Pyjo.IOLoop.client(port=3000)
def client(loop, err, stream):
    @stream.on
    def read(stream, chunk):
        # Process input
        print("Client: {0}".format(chunk.decode('utf-8')))

    # Write request
    stream.write(b"GET / HTTP/1.1\x0d\x0a\x0d\x0a")

# Add a timer
@Pyjo.IOLoop.timer(3)
def timeouter(loop):
    print("Timeout")

    # Shutdown server
    loop.remove(server)

# Start event loop
Pyjo.IOLoop.start()
https://pypi.org/project/Pyjo-Reactor-Asyncio/
Note: In Python 2.7, use from __future__ import print_function to run these examples.

In Python, by default, the key and value pairs in a dictionary are stored as hashes; therefore dictionaries do not retain the order in which values were added, and cannot be ordered.

v_Dict = {}
v_Dict["First"] = 99
v_Dict["Second"] = 45
v_Dict["Third"] = 234
print(v_Dict)
## {'Second': 45, 'Third': 234, 'First': 99}

No order at all. To maintain an ordered dictionary one must use the collections module's OrderedDict(). Normally one would declare a dictionary to be ordered like so:

from collections import OrderedDict

v_Dict = OrderedDict()
v_Dict["First"] = 99
v_Dict["Second"] = 45
v_Dict["Third"] = 234
print(v_Dict)
## OrderedDict([('First', 99), ('Second', 45), ('Third', 234)])

An OrderedDict() maintains the order in which pairs were added. However, from time to time I find that OrderedDict() is not up to the task. Like here with zip:

from collections import OrderedDict

d_Dict = OrderedDict()
v_Keys = [45, 50, 20]
v_Values = [3.0, 5.0, 2.0]
d_Dict = dict(zip(v_Keys, v_Values))
print(d_Dict)
## {50: 5.0, 20: 2.0, 45: 3.0}

Order is lost again. Note that dict(zip(...)) rebinds d_Dict to a plain dict, so the OrderedDict() declared above is simply discarded. To get order back, one must sort the dictionary's pairs from within OrderedDict(). Like so:

from collections import OrderedDict

v_Keys = [45, 50, 20]
v_Values = [3.0, 5.0, 2.0]
d_Dict = dict(zip(v_Keys, v_Values))
d_Dict = OrderedDict(sorted(d_Dict.items()))
print(d_Dict)
## OrderedDict([(20, 2.0), (45, 3.0), (50, 5.0)])

Order is restored. I hope someone finds this beneficial.
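A small footnote on the zip example above: if the goal is to keep the original insertion order (45, 50, 20) rather than sorted order, the zipped pairs can be fed straight into OrderedDict, with no plain dict in the middle. A short sketch:

```python
from collections import OrderedDict

v_Keys = [45, 50, 20]
v_Values = [3.0, 5.0, 2.0]

# Building the OrderedDict directly from the zipped pairs keeps
# the insertion order: no unordered plain dict in between.
d_Dict = OrderedDict(zip(v_Keys, v_Values))
print(list(d_Dict.items()))
## [(45, 3.0), (50, 5.0), (20, 2.0)]
```

So OrderedDict handles zip just fine; it is only the detour through dict() that loses the order.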
https://www.daniweb.com/programming/software-development/tutorials/498051/python-ordering-a-dictionary-after-the-fact
Index Links to LINQ

It can be helpful to start from the beginning when working with new technologies. This post explains how to create a minimal WPF application that produces a single window with a gradient in it, as shown in Figure 1. The point of this exercise is to build the app from scratch, choosing File | New Project | Empty Project rather than File | New Project | WPF Application. The benefit of this exercise is simply to see what ingredients go into the production of a minimal WPF program.

Figure 1: A Simple WPF window with a gradient.

Figure 2: The References section from the SimpleWpf project.

The complete source code to this project is shown in Listing 1. You can see that an Application object is created in the Main method, and that a few minimal fields of the window are filled out in the constructor. I also create a WPF LinearGradientBrush and set it as the Background for the window.

using System;
using System.Windows;
using System.Windows.Media;

namespace Project1
{
    class Program : Window
    {
        [STAThread]
        static void Main(string[] args)
        {
            Application application = new Application();
            application.Run(new Program());
        }

        public Program()
        {
            Width = 320;
            Height = 260;
            Title = "A Colorful Window";
            Background = new LinearGradientBrush(Colors.AliceBlue, Colors.Aquamarine,
                new Point(0, 0), new Point(1, 1));
        }
    }
}

I should perhaps end by reminding you that the simplest way to create a WPF application in Visual Studio is to choose File | New Project | WPF Application. I have shown you this alternative technique simply because I hope you find it interesting or entertaining. It also illustrates that it is possible, though not necessarily recommended, to build WPF applications without using XAML.

The complete Source is on my LINQ Farm. Here is a direct link to the download.
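As a point of comparison for the code-only approach, here is a sketch of how the same window could be described declaratively in XAML. This markup is illustrative only and is not part of the original project; the gradient's Point arguments map to StartPoint and EndPoint.

```xml
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        Title="A Colorful Window" Width="320" Height="260">
  <Window.Background>
    <LinearGradientBrush StartPoint="0,0" EndPoint="1,1">
      <GradientStop Color="AliceBlue" Offset="0" />
      <GradientStop Color="Aquamarine" Offset="1" />
    </LinearGradientBrush>
  </Window.Background>
</Window>
```

Either form produces the same gradient window; the C# version just makes every step explicit.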
Wow, I didn't know that you can create a working WPF app without using the Visual Studio WPF project template! I thought you'd need some .g.cs autogeneration magic - but apparently not!! Nice one Charlie!

I was looking for something like this... Sometimes the best way to learn something is to learn from scratch, and I believe WPF is one of those things. Thank you very much.

Those looking for more information about generating WPF applications from scratch might want to check Charles Petzold's book "Applications = Code + Markup". Petzold tends to teach how to do things from scratch, and this one is no exception. There are a lot of benefits to knowing the XAML designer and project templates aren't magic, so I recommend it!

I guess WPF requires the following namespace DLLs: PresentationCore.dll, PresentationFramework.dll, WindowsBase.dll, System.dll. Totally different from Forms, I guess. Which is kind of interesting, as it is a parallel development to Windows.Forms with no dependencies, at least that is what I understood.

I like how the C++/CLI to get a blank WPF window is just: Application().Run(%Window()); It makes a very nice change from the equivalent in the Windows 3.1 edition of Petzold!
http://blogs.msdn.com/charlie/archive/2008/06/14/wpf-farm-simple-wpf.aspx
Network.Mom.Patterns.Basic.Client

Description: Client side of Client/Server

Synopsis / Documentation

withClient :: Context -> Service -> String -> LinkType -> (Client -> IO a) -> IO a

request :: Client -> Timeout -> Source -> SinkR (Maybe r) -> IO (Maybe r)

A 'hello world' Example:

import qualified Data.Conduit as C
import qualified Data.ByteString.Char8 as B
import Network.Mom.Patterns.Basic.Client
import Network.Mom.Patterns.Types

main :: IO ()
main = withContext 1 $ \ctx ->
         withClient ctx "test" "tcp://localhost:5555" Connect $ \c -> do
           mbX <- request c (-1) src snk
           case mbX of
             Nothing -> putStrLn "No Result"
             Just x  -> putStrLn $ "Result: " ++ x
  where src = C.yield (B.pack "hello world")
        snk = do
          mbX <- C.await
          case mbX of
            Nothing -> return Nothing
            Just x  -> return $ Just $ B.unpack x

checkResult :: Client -> Timeout -> SinkR (Maybe r) -> IO (Maybe r)
http://hackage.haskell.org/package/patterns-0.1.1/docs/Network-Mom-Patterns-Basic-Client.html
Example of optimizations “breaking” multithreaded code. static class Test { static int stop = 0; static void Main() { var t = new Thread(Worker); t.Start(); Console.WriteLine($"Running ({FrameworkVersion})"); Thread.Sleep(5000); stop = 1; Console.WriteLine("Waiting"); t.Join(); } static void Worker(object o) { Console.WriteLine("Started"); var x = 0; while (stop == 0) x++; Console.WriteLine("Stopped"); } static string FrameworkVersion => #if NETCOREAPP1_1 System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription; #else Environment.Version.ToString(); #endif } The code is pretty straightforward. Separate thread is started executing Worker method where the while loop spins until the stop variable is set. This variable is set after 5 seconds in the main thread. Thus, one might expect the code to stop after 5 seconds. Surprisingly if you run this code with full optimizations (Release build) and without debugger attached the loop in Worker method never stops. How is it possible? As I said above, the problem is the optimizations. Let’s have a look at what’s actually being executed. The code differs slightly – although the end behavior is the same – whether it’s running in 64bit or 32bit. .NET Framework 4.0.30319.42000 in 32bit (MS x86 JIT) ; while (stop == 0) 02820556 mov eax,dword ptr ds:[00EF43CCh] 0282055B test eax,eax 0282055D jne 02820563 0282055F test eax,eax 02820561 je 0282055F Whoa. That’s very well optimized. The stop variable is, first, loaded into EAX register and then used only from there in test- je loop which represents the while loop in code. It makes sense, the stop is never modified there, so why bother loading it from memory all the time. It’s also worth mentioning the x increment was optimized out, because it’s not used at all. Changing the code to Console.WriteLine($"Stopped ({x})"); gives us slightly different assembly. 
; var x = 0; 01480557 xor esi,esi ; while (stop == 0) 01480559 mov eax,dword ptr ds:[010E43CCh] 0148055E test eax,eax 01480560 jne 01480567 ; x++; 01480562 inc esi ; while (stop == 0) 01480563 test eax,eax 01480565 je 01480562 Although the assembly is slightly different now (the x variable is held in ESI register), the optimization is still there. .NET Framework 4.0.30319.42000 in 64bit (RyuJIT) ; var x = 0; 00007FF800440616 xor ecx,ecx ; while (stop == 0) 00007FF800440618 mov eax,dword ptr [7FF80033476Ch] 00007FF80044061E test eax,eax 00007FF800440620 jne 00007FF800440628 ; x++; 00007FF800440622 inc ecx ; while (stop == 0) 00007FF800440624 test eax,eax 00007FF800440626 je 00007FF800440622 The code is the same as in 32bit. Only the x was not completely eliminated, it’s kept in ECX register, although never used. Hence the problem is still there, because the value is tested from register and never re-fetched from memory. Using the x makes the code basically the same as in 32bit. ; var x = 0; 00007FF80043062C xor esi,esi ; while (stop == 0) 00007FF80043062E mov ecx,dword ptr [7FF80032476Ch] 00007FF800430634 test ecx,ecx 00007FF800430636 jne 00007FF80043063E ; x++; 00007FF800430638 inc esi ; while (stop == 0) 00007FF80043063A test ecx,ecx 00007FF80043063C je 00007FF800430638 The ECX register is no longer used and ESI is used instead, because the message will use x. .NET Core 4.6.25211.01 on 64bit (RyuJIT) ; var x = 0; 00007FF7E4A410B7 xor esi,esi ; while (stop == 0) 00007FF7E4A410B9 mov rcx,7FF7E48E4E90h 00007FF7E4A410C3 mov edx,1 00007FF7E4A410C8 call 00007FF84453DE80 00007FF7E4A410CD mov ecx,dword ptr [7FF7E48E4EC4h] 00007FF7E4A410D3 test ecx,ecx 00007FF7E4A410D5 jne 00007FF7E4A410DD ; x++; 00007FF7E4A410D7 inc esi ; while (stop == 0) 00007FF7E4A410D9 test ecx,ecx 00007FF7E4A410DB je 00007FF7E4A410D7 No surprise here. Still the same optimization. It’s very close to the assembly the .NET Framework on 64bit with x used produced. 
You might also notice the mov- mov- call sequence there and wonder: Why it’s there? What is doing? What’s at 00007FF84453DE80? You’re not alone. I ask the same questions. But my current knowledge and skills can’t give me answers. According to debugger there’s no code at 0x00007FF86294DE80. Maybe somebody reading this can shed some light on it. Conclusion Optimizations are great. The compiler or JIT or … can make the code run much faster. But with a great tool comes also great change of injuring self. That’s why it’s so important to understand memory barriers and volatile and locking and proactively looking for places where these need to be. In this example, no matter .NET Framework or .NET Core or JIT the unexpected behavior happens. And even if it didn’t in one case, one shouldn’t rely on that and rather have the code correct.
https://www.tabsoverspaces.com/233629-example-of-optimizations-breaking-multithreaded-code
CC-MAIN-2020-50
refinedweb
743
67.04
Javascript Integration with Docker

📌 In this task, you have to create a web application for Docker (one of the great containerization tools, which provides the user Platform as a Service (PaaS)) by showing your own creativity and UI/UX design skills to make the web portal user friendly.

📌 This app will help the user to run all the docker commands like:
👉 docker images
👉 docker ps
👉 docker run
👉 docker rm -f
👉 docker exec
👉 add more if you want. (Optional)
👉 Make a blog/article/video explaining this task step by step.

⚙️ Task 7.2 - 📌 Write a blog explaining the use-case of JavaScript in any of your favorite industries.

Task 7.1

In this part of the task, I have integrated JavaScript with Docker so that users can use Docker and its commands more easily and effectively. To start this task we first have to start our VM and OS (RHEL8) and install the 'httpd' service in our OS. 'httpd' is the Apache HyperText Transfer Protocol (HTTP) server program. It is designed to be run as a standalone daemon process. When used like this it will create a pool of child processes or threads to handle requests.

yum install httpd

Then we have to disable our firewall and start the httpd service.

systemctl disable firewalld
setenforce 0
systemctl start httpd

Now we have to go into the directory "/var/www/html" and create a file using the 'gedit' command.

cd /var/www/html
gedit <file_Name>.html

This file contains all the front end of our website and should also include the script that connects our server to the OS. The script part given below is from the code that I have shared in my GitHub repository below.

<script>
function lw() {
    var xhr = new XMLHttpRequest();
    i = document.getElementById("in1").value;
    xhr.open("GET", "" + i, true);
    xhr.send();
    xhr.onload = function () {
        var output = xhr.responseText;
        document.getElementById("d1").innerHTML = output;
    }
}
</script>

Now change the directory to "/var/www/cgi-bin" and make a Python file using the 'gedit' command with a ".py" extension to write our backend code.
cd /var/www/cgi-bin
gedit <file_Name>.py

This file contains the backend code (in Python).

#!/usr/bin/python3
import cgi
import subprocess
import time

print("content-type: text/html")
print()
print("Hello from backend")
print()

f = cgi.FieldStorage()
cmd = f.getvalue("x")
o = subprocess.getoutput("sudo " + cmd)
print(o)

After this, so that any non-root user can access Docker from my server, I made the changes below, which let a non-root user access the Docker services through the "sudo" command. In the terminal, we have to go to the file path '/etc/group'.

vim /etc/group

Now we have to make the following change to allow our non-root users to access docker. Next we have to open "/etc/sudoers" and make the following changes over there.

vim /etc/sudoers

We have to make our Python file executable by the guest user. For this, we go into the directory where it is stored and use the command given below.

chmod +x <file_name>.py

Now we are all set and can use our application in a Windows web browser (Chrome or Edge) with the help of the URL.

http://<IP Address>/<file_Name>.html
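One thing worth flagging: the backend above passes whatever the browser sends straight to sudo via subprocess.getoutput, which runs it through a shell as root, so any command at all can be injected. A small validation step could restrict execution to the Docker subcommands the task actually exposes. This is a hedged sketch of my own; the helper names and whitelist are not part of the original task:

```python
import shlex
import subprocess

# Docker subcommands the web portal is meant to expose (from the task list).
ALLOWED_SUBCOMMANDS = {"images", "ps", "run", "rm", "exec"}

def validate_docker_command(cmd):
    """Return the argv list for a permitted docker command, or None."""
    try:
        argv = shlex.split(cmd)   # split like a shell, without invoking one
    except ValueError:
        return None
    if len(argv) < 2 or argv[0] != "docker":
        return None
    if argv[1] not in ALLOWED_SUBCOMMANDS:
        return None
    return argv

def run_docker(cmd):
    """Run a validated docker command (still via sudo, but with no shell)."""
    argv = validate_docker_command(cmd)
    if argv is None:
        return "command not allowed"
    return subprocess.run(["sudo"] + argv, capture_output=True, text=True).stdout

print(validate_docker_command("docker ps"))          # ['docker', 'ps']
print(validate_docker_command("rm -rf /"))           # None: not a docker command
print(validate_docker_command("docker ps; reboot"))  # None: 'ps;' is not whitelisted
```

In the CGI script, a call to run_docker(cmd) would then replace the direct subprocess.getoutput("sudo " + cmd) line.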
https://ds1887534.medium.com/javascript-integration-with-docker-6cf42758e108?source=read_next_recirc---------3---------------------0f89e3b0_4fba_4419_9bc6_9947ec89384d-------
Computer Science Archive: Questions from June 22, 2011

- Anonymous asked: A disk is divided into 4 partitions and there are 4 logical drives present in the computer, and every partition is assigned 1 logical drive. You are required to write the steps involved to calculate the LBA address of the third partition table. (0 answers)

- Anonymous asked: In the data part of the Master Boot Record, 66 bytes are used to represent the 4 different partitions in the... (0 answers)

- (2 answers)

- javaXpert asked:

<style>
textarea {
    width: 100%;
    padding: 5px;
    border: 1px solid #ccc;
    font-family: 'Cuprum', arial, serif;
    font-size: 14px;
}
</style>
<div class="content">
<h1> - Insert Form</h1>
<?php showBlla(); ?>
<form method="post" name="form" id="admin_form" action="admin/blla/">
    <ul class="rounded">
        <li>
            <input type="text" class="text" name="title" id="title" placeholder="Title" />
        </li>
        <li>
            <tr><td>Message:</td><td><textarea name='message' rows='10'></textarea></td></tr>
        </li>
    </ul>
    <input type="submit" name="submit" id="submit" value="submit" class="button2 large" />
</form>
</br>
<a href="admin">« home</a>
<div class="vspacer"></div>
</div>

----------------------------------------------------------------------------------------------------------

<?php
function showBlla($titulli, $message)
{
    $qry = "INSERT INTO blla(titulli, message) VALUES('$_POST[titulli]','$_POST[message]')";
    $result = @mysql_query($qry);
    $r = mysql_fetch_array($result);
}
?>

----------------------------------------------------------------------------------------------------------

CREATE TABLE IF NOT EXISTS `form` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `title` varchar(255) NOT NULL,
    `message` longtext NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
----------------------------------------------------------------------------------------------------------

P.S. If you could give me a little help fixing this function so I can call it and have it fetch the data back into the web page I want to show it in. The data is actually saving into the DB, but when I reload the page it inserts blank data into the MySQL db. (0 answers)

- Anonymous asked: (1 answer)

- Neednerds08219 asked: Using Java, write a program FixPhone.java that fixes phone number entry errors. Ask for the phone number. First, check if the phone number is fixable; if not, ask for another phone number or display an invalid result. How to fix: check the length of the characters, check the location of dashes, check the location of parentheses. If it is fixable, fix it; if not, ask for another phone number over and over again. The program never stops until the user terminates it by typing "quit" (make it case-insensitive). (Hint: equalsIgnoreCase, use an infinite loop mechanism.) Fix it by stripping all the dashes and parentheses, then validate the input. Right format of phone#: (XXX)XXX-XXXX (4 answers)

- AngryPepper4181 asked: Watson elementary school contains 30 classrooms numbered 1 through 30. Each classroom can contain a... (2 answers)

- AngryPepper4181 asked: The city of Cary is holding a special census. The city has collected data on cards that each hold t... (0 answers)

- Anonymous asked: Write a program with Easy68k (for the 68000 microprocessor) to read a decimal number from the keyboard and display it on the 7-segment display array. The number is only required to be two digits. I know how to write a program using the stored "tasks" in Easy68k that will display what I type on the keyboard on the view output screen.
The problem I am having is displaying the number from the keyboard on the seven-segment displays. Below is the code for reading text from the keyboard:

        ORG $1000
START:  MOVE.B #2,D0     ; set task number for reading text from keyboard
        MOVEA.L #$5000,A1 ; set the location of the text saving in memory
        TRAP #15          ;
        TRAP #15
        END START         ; last line of source

I also know how to write a program to set values and a counter on the seven-segment display. I simply have no idea how to tie keyboard input into it. Any time I try, I just get a weird lower-case r shape appearing in the display. (0 answers)

- Anonymous asked: (0 answers)

- alienbob21 asked: Write out complete code in the C++ language that will run the program. I'd really appreciate it. You are to design and implement a program which validates dates expressed in the form of the string "mm-dd-yyyy", e.g. "3-30-2012", and computes the day of the year to which they refer; e.g., the previous example specifies the 90th day of the year (31 days in January plus 29 days in February (2012 is a leap year) plus 30, the number of the day of March). A further note about leap years: A year is a leap year if it is divisible by 4, but not by 100. For example, 1996 and 2012 are leap years because they're divisible by 4, but not by 100. A year that is divisible by 100 is a leap year if it is also divisible by 400. For example, 1600 and 2000 are divisible by 400 and are leap years. However, 2200 is not a leap year because 2200 is not divisible by 400. Your program should allow one or two digits for the month and day, i.e. it is not necessary to require January first be expressed as "01-01-2011": "1-1-2011" suffices, but there would be nothing wrong with "01-01-2011" either. Your program will read candidate strings for validation from the file "dates.txt" and output a line for each to the console.
This output should echo the input and specify either the day of the year or give an explanation why it fails to meet the syntax criteria. For example:

5-22-1951 is the 142 day of the year.
5-22-1952 is the 143 day of the year.
5-22-2000 is the 143 day of the year.
5-22-1900 is the 142 day of the year.
1-1 has an error: missing 2nd dash.
1-1-1- has an error: too many dashes.
2-30-2011 has an error: Invalid day.
13-1-1999 has an error: Invalid month.
1-0-1066 has an error: Invalid day.
-1-1 has an error: Dashes in wrong position.
mm-dd-yyyy has an error: Bad month digit.

The exact text of your error messages does not matter as long as the error is explained well, and it does not matter which error is reported in the case of multiple errors (e.g. "0y-80" has an invalid month, a non-numeric character, a missing dash, an invalid day and is missing a dash and year.) Your program must use functions to do its work. As always, there are a variety of ways to proceed. The following is one way to break the problem down into a manageable, testable set of functional components. You must implement and use these functions, following these prototypes. NOTE: DO NOT write to the console in any of the following functions: they should silently return results up the chain to main where a determination can be made of how to use that result.

bool isLeapYear(int year);
/* Returns true if the specified year is a leap year; returns false otherwise. */

int getNumberOfDaysInMonth(int month, int year);
/* Returns the number of days in the specified month for the specified year. It is assumed these two values have already been validated as being a valid month (1 through 12) and year (non-negative).
*/

bool isGoodDayOfTheMonth(int month, int day, int year);
/* Returns true if the specified day is a valid day of the specified month in the specified year; returns false otherwise. */

int checkValidDate(int month, int day, int year);
/* Returns GOOD_DATE if the specified month, day and year specify a valid date; otherwise returns INVALID_MONTH if the month is invalid, INVALID_DAY if the day is invalid and INVALID_YEAR if the year is invalid. */

int checkDashes(string date, size_t &firstDash, size_t &secondDash);
/* Determines whether the date string contains an appropriate set of dashes, e.g. passing "2-16-2011" in the date string would return GOOD_DATE. The two size_t reference parameters are to be assigned the string position of the first and second dash, respectively. Possible error returns are MISSING_DASH, MISSING_SECOND_DASH, TOO_MANY_DASHES and DASH_IN_WRONG_POSITION. */

int convertSubstringToNumber(string date, size_t firstPos, size_t secondPos);
/* Extracts the characters from the date string beginning at the location specified by firstPos and continuing up through the locations specified by secondPos to form an integer return value. It is assumed that the date string has already been validated, i.e. the positions are valid and the substring's characters are all numeric. */

int checkDigits(string date, size_t firstDash, size_t secondDash);
/* Uses the two locations in the date string as the bounds to check for strictly numeric characters. All three date components, month, day and year, are checked for validity. Returns GOOD_DATE if they are all numeric, otherwise returns BAD_MONTH_DIGIT, BAD_DAY_DIGIT, or BAD_YEAR_DIGIT as appropriate. */

int extractDateComponents(string date, int &month, int &day, int &year);
/* Stores the month, day and year specified by the date parameter string into the three associated reference parameter integers. This function should perform all of its work by calling functions already described.
Functions checking for error conditions should be called in an appropriate order to avoid passing unverified input to functions noted as assuming good input. Returns GOOD_DATE if successful, otherwise returns error values obtained from calling lower-level functions. */

int computeDayOfYear(int month, int day, int year);
/* Given the specified month, day and year, computes the number of the day in the year. It is assumed the three input parameters specify a valid date. */

int determineDayOfYear(string date);
/* Computes the day of the year for the specified date string. Returns that value if the date is valid; otherwise returns an error code indicating how the string was invalid. */

You must use the following error condition definitions:

#define GOOD_DATE 0
#define INVALID_MONTH -1
#define INVALID_DAY -2
#define INVALID_YEAR -3
#define MISSING_DASH -4
#define MISSING_SECOND_DASH -5
#define TOO_MANY_DASHES -6
#define DASH_IN_WRONG_POSITION -7
#define BAD_MONTH_DIGIT -8
#define BAD_DAY_DIGIT -9
#define BAD_YEAR_DIGIT -10
#define MISSING_DIGITS_BETWEEN_DASHES -11

It should be possible to implement these functions "from the bottom up," in the approximate order described. A "test first" development approach is encouraged: write a series of calls to the function under development in the main() function, using various combinations of valid and invalid data as appropriate. Compare the results of calling the function with what you would expect the proper result to be. When your expectations are not met, you should have a good idea of where in the function to look. Once all your expectations are met, you can go on to the next function, etc. The more completely you test the lower-level functions, the less prone you are to having to look at more than the function you are currently implementing, because whatever functions it calls should have already been demonstrated to work.
If you DO find a problem in a lower-level function, add a test to its repertoire demonstrating the proper behavior and fix it before continuing to implement the function which called it and uncovered the untested failure. A test file, dates.txt, has been supplied to help your testing and implementation. Think about how each of these inputs should be handled. Remember: NO CONSOLE OUTPUT should be generated by the functions prototyped here! (1 answer)

- Anonymous asked: (1 answer)

- Anonymous asked: (0 answers)

- AngryPepper4181 asked: string HEA... start (0 answers)

- Neednerds08219 asked: Using Java, write a program FixPhone.java that fixes phone number entry errors (misplaced dashes and parentheses). Ask for the phone number and validate it; if the phone number is fixable, fix it, and if not, ask for another phone number over and over, or display an invalid result. How to validate: create a method:

public class FixPhone {
    //******isNumber()*******
    public static boolean isNumber(String str) //hint: use a counter
    {
        while (n < str.length())
        ...
    }
}

If it is valid, return true; if not valid, report an invalid phone number. How to fix: Strip the user's input of dashes and parentheses. Check the length of the characters. If it is fixable, fix it to this format: (XXX)XXX-XXXX. If not, ask for another phone number over and over again; the program never stops until the user terminates it by typing "quit" to exit the system (make it case-insensitive). (Hint: equalsIgnoreCase, use an infinite loop mechanism & boolean methods.) (0 answers)

- Anonymous asked: (1 answer)
- (3 answers)
- (1 answer)
- : It has to be C++ and written in either Visual Studio 2008 or 2010 (0 answers)
- Anonymous asked: (2 answers)
- Anonymous asked: 9 cents p... Problem Description. (1 answer)

- Anonymous asked: var... Derive the cube class from the base square class.
Assume the square class has a protected member variable representing the side, called side, declared as a double with a default value of 1.0. It also has a public function called calcVal that evaluates the area of a square as side * side. In your derived class, have the default value for side be 1. For the cube class, include a public function calcVal that evaluates the volume of the cube. (Hint: The volume of the cube is side * square::calcVal.) (1 answer)

- Anonymous asked: I watched videos on digital technology. With all the remixes, copyright laws are questioned. Should the law allow these creative expressions? How can the creative culture of user-generated content be revived? Why is it important, or not? How should those that create the videos, music and images be protected? Should digital rights management technologies be utilized, and why? I also need advice on what you think is the best topic to write my research paper on. Artificial intelligence? It's such a broad topic and I've been researching for days! THANKS (0 answers)

- Anonymous asked: I am using JSP and HTML. I submitted a form and all of the data disappears. When it sees &lt;input name=&quot;firstname&quot;/&gt;, that's all that it outputs. How do I output an <input> tag so that it holds the correct value (if one was supplied)? (1 answer)

- Anonymous asked: Translate the following algorithm into assembly language: if X > 12 Then X = 2 * X + 4 ELSE X = X + Y (0 answers)

- Anonymous asked: (0 answers)
http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2011-june-22
On Thu, Oct 08, 2009 at 11:46:00PM +0800, Wu Zhangjin wrote: > > Are the non-memory parts marked as reserved? > > > No, so, is that a need to mark them? Initially all pages are marked as reserved. Which seems to be good enough for x86: $ cat /proc/iomem 00000000-0009efff : System RAM 0009f000-0009ffff : reserved 000c0000-000cffff : pnp 00:0d 000e0000-000fffff : pnp 00:0d 00100000-7fe5b7ff : System RAM [...] The 0x9f000 - 0x9ffff range is the good old ISA I/O memory range (classic MDA/CGA/VGA etc.), that is non-memory yet: #ifdef CONFIG_FLATMEM #define pfn_valid(pfn) ((pfn) < max_mapnr) #endif /* CONFIG_FLATMEM */ is sufficient on x86 so I think something else must be wrong. Ralf
https://www.linux-mips.org/archives/linux-mips/2009-10/msg00065.html
Model Question Paper

Subject Code: BC0053
Subject Name: VB.NET & XML
Credits: 4
Marks: 140

Part A (One mark questions)

1. The .Net Framework class library consists of segments of pre-written code called __________ which provide major functions needed for developing .Net applications.
A. objects
B. classes
C. variables
D. constants

2. The _____________ namespace contains the classes used to create forms.
A. System.Windows.Forms
B. System.Windows.Classes
C. System.Windows.Controls
D. System.Windows

3. The _______________ manages the execution of .Net programs by providing essential functions such as memory management, code execution, security and other services.
A. .Net Framework class library
B. Windows Forms classes
C. Operating System
D. Common Language Runtime (CLR)

4. You should press __________ to run the application without debugging.
A. Ctrl + F5
B. F5
C. F4
D. F2

5. VB.Net uses the ____________ operator to access the member variables and methods of a class.
A. -> operator
B. + Operator
C. Assignment (=) operator
D. . (Dot)

6. The __________________ statement, when turned on, does not allow the use of any variable without proper declaration.
A. Option
B. Option explicit
C. Explicit
D. Option Strict

7. The ___________ statement is used to perform a series of specific checks.
A. if
B. if…then
C. if…then…else
D. Select…Case

8. In the ________ looping structure the loop will continue as long as the condition remains true.
A. Do…Loop While
B. For…Next
C. Do…While
D. For Each

9. To access the values in an array, we use the indexing operator by passing _______________.
A. a float value
B. an integer
C. a double
D. array object

10. The ____________ you select determines the initial files, assembly references, code, and property settings added to the project.
A. Options
B. names
C. options template
D. project template

11.
The TabIndex property is a number that represents the control's position in the tab order, beginning with _________.
A. 1
B. 0
C. -1
D. 2

12. An object's _________ consists of a clearly defined set of properties, methods, and events.
A. property
B. method
C. name
D. interface

13. An _________ contains information relevant to the error, exposed as properties of the object.
A. Error object
B. Exception class
C. Exception object
D. Error Class

14. The __________ block contains code that runs when the Try block finishes normally, or when a Catch block receives control and then finishes.
A. Try
B. Finally
C. throw
D. catch

15. The ________ statement will, under a given circumstance, break out of the Try or Catch block and continue at the Finally block.
A. Exit Try
B. Try
C. Catch
D. Finally

16. To connect to an Access database we need a ____________ connection object.

This note was uploaded on 03/01/2012 for the course I.T ICc 231 taught by Professor Ramon during the Fall '10 term at Institute of Management Technology.
https://www.coursehero.com/file/6835229/BC0053-VB-Net-and-XML-MQP/
Dec 27

Reading the contents of a web page is easy in C# with the System.Net.WebClient class:

using System.Net;
using System.Windows.Forms;

string url = "";
string result = null;
try
{
    WebClient client = new WebClient();
    result = client.DownloadString( url );
}
catch (Exception ex)
{
    // handle error
    MessageBox.Show( ex.Message );
}

The web page is read into the 'result' string. Note the URL you pass to the DownloadString method must have the http:// prefix, otherwise it will throw a WebException.

A more complicated but also more flexible solution is to use the System.Net.HttpWebRequest class, which enables you to interact directly with servers using HTTP:

using System.Net;
using System.IO;
using System.Windows.Forms;

string result = null;
string url = "";
HttpWebResponse response = null;
StreamReader reader = null;
try
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create( url );
    response = (HttpWebResponse)request.GetResponse();
    reader = new StreamReader( response.GetResponseStream() );
    result = reader.ReadToEnd();
}
catch (Exception ex)
{
    // handle error
    MessageBox.Show( ex.Message );
}
finally
{
    if (reader != null) reader.Close();
    if (response != null) response.Close();
}

You can also add headers in both WebClient and HttpWebRequest. Take a look at and
hello Can I place a text file (xyz.txt) in my own url and then whereever I go I can download it into string[] using c# ? can u suggest the code ? @denny: The code above will do it for you. Just replace the url string literal, such as: string url = “”; Thanks a bunch man you helped me a lot with that simple example i appreciate it ! israeli guy Thanks man , that should come in handy at some point 🙂 ha ha Excellent
http://www.csharp411.com/read-a-web-page-in-c/
I'm following the Model View Presenter (MVP) pattern similar to Antonio Leiva's example found here: antoniolg/github. I've been playing around with it quite a bit and I was wondering how I would start a service from the interactor layer. Normally I've been putting my Retrofit calls inside the interactor, but I was wondering if there is a way to start a service from the interactor so I could run my Retrofit calls in the service instead. The problem here is that I don't have the activity context to start the service, and it kind of defeats the purpose of MVP if I were to expose the context to the interactor. I'm also not quite sure if this is even a good thing to be doing (starting services from the interactor). I was thinking about starting services from the presenter layer instead, but I'm running into dead ends on how I should be approaching this. If there's a way around this, please help a fellow out, or enlighten me if this is not a good approach.

Define a class, for example MyApp, that extends Application, and define a method like getInstance that returns the Application object. Then add the name attribute of this class to the application tag in the manifest. You can then call this method inside your use case to get a Context object and start your service.

public class MyApp extends Application {

    private static MyApp instance;

    @Override
    public void onCreate() {
        super.onCreate();
        instance = this;
    }

    @Override
    public void onTerminate() {
        super.onTerminate();
        instance = null;
    }

    public static MyApp getInstance() {
        return instance;
    }
}

Tags: android, java, service
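An alternative to reaching for a global Application singleton is to keep the interactor free of Android types entirely and hand it a small abstraction it can call when a service needs starting. The names below (ServiceStarter, SyncInteractor) are hypothetical, not from any framework; in the real app, the Android layer would implement ServiceStarter with a Context and call startService there:

```java
// A tiny seam the interactor depends on instead of android.content.Context.
interface ServiceStarter {
    void startSyncService();
}

// The interactor stays a plain object: easy to construct and unit-test.
class SyncInteractor {
    private final ServiceStarter starter;

    SyncInteractor(ServiceStarter starter) {
        this.starter = starter;
    }

    void refresh() {
        // Instead of running the network call here, delegate to a service.
        starter.startSyncService();
    }
}

public class MvpSketch {
    public static void main(String[] args) {
        // In a unit test (or here), a fake starter simply records the call.
        final boolean[] started = {false};
        SyncInteractor interactor = new SyncInteractor(() -> started[0] = true);
        interactor.refresh();
        System.out.println("service start requested: " + started[0]);
    }
}
```

This keeps the dependency flowing inward (the interactor knows an interface, not Android), which is the usual reason for adopting MVP in the first place.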
https://exceptionshub.com/java-how-do-i-start-a-service-from-my-interactor-using-the-mvp-pattern-in-android.html
Improved Scaffolding for Ruby on Rails

Rails can help with the creation of our database tables, and we need three: one to hold information on our soccer players, another for squad data and another to maintain medical conditions. For the sake of simplicity, let's assume that each player belongs to one squad and can have a single medical condition (or none at all). Let's tell Rails about the tables:

ruby script/generate model player
ruby script/generate model squad
ruby script/generate model condition

Models in Rails let us talk to our data from our Web application. Each of the above commands produces eight lines of output while Rails does its thing. Note that each contains a file generated in the db/migrate directory. These are our database migrations. At this point, things get less SQL-centric and more Rails-like, as Rails provides a database-independent way to define our tables. To see this in action, edit the db/migrate/xxxxxxxxx_create_players.rb file (where xxxxxxxxx is a unique date/time string generated by Rails), changing the self.up method to look like this:

def self.up
  create_table :players do |t|
    t.integer :squad_id, :condition_id
    t.string :name, :address, :contact_tel_no
    t.date :date_of_birth
    t.timestamps
  end
end

This is the high-level Rails way of telling your database to create a table. Each column in the table gets a unique name and a data type. Note that in addition to the columns you might expect each player to have (name, address and so on), we add in two integer columns that will link to the squad and condition tables. What's cool about using migrations is that it does not matter which database you are using; Rails generates the correct database-specific SQL statements as required and when needed. Let's define the other two tables.
Edit db/migrate/xxxxxxxxxx_create_squads.rb, changing the self.up method as follows:

def self.up
  create_table :squads do |t|
    t.string :name
    t.timestamps
  end
end

And, finally, change db/migrate/xxxxxxxxxx_create_conditions.rb to have a self.up method that looks like this:

def self.up
  create_table :conditions do |t|
    t.string :name
    t.timestamps
  end
end

Now for the fun part. Type the following at the command prompt:

rake db:migrate

Output similar to the following should scroll by on screen:

(in /home/barryp/rails/soccer_club)
== CreatePlayers: migrating =====================
-- create_table(:players)
   -> 0.1916s
== CreatePlayers: migrated (0.1918s) ============
== CreateConditions: migrating ==================
-- create_table(:conditions)
   -> 0.0183s
== CreateConditions: migrated (0.0185s) =========
== CreateSquads: migrating ======================
-- create_table(:squads)
   -> 0.0309s
== CreateSquads: migrated (0.0311s) =============

What's happened is that Rails has connected to the back-end database and created the three required tables. Note that there's no programmer-written SQL code in sight! Rails handles all the down-and-dirty SQL details. For those readers who don't believe me, log in to PostgreSQL as soccer_manager and bask in the glory of the table schema that Rails has created for you.

At this point, it would be normal to use Rails to generate some scaffolding code, then reach for a CSS reference to pretty up the whole thing. This is doable, but it takes time. For now, let's use Rails to generate empty controllers with these three commands:

ruby script/generate controller player
ruby script/generate controller squad
ruby script/generate controller condition

Each of these commands produces seven lines of output. Note that a Ruby file is generated in the app/controllers directory. These are source code files that will contain any business logic we want to add to our Rails application. We will do this in a little while.
To complete the default Rails setup, we need to specify our table relationships. Edit app/models/player.rb to look like this:

class Player < ActiveRecord::Base
  belongs_to :condition
  belongs_to :squad
end

ActiveScaffold is written and maintained by a dedicated group of Rubyists who live at activescaffold.com/team. ActiveScaffold is a Rails plugin, and as such, gets installed into an existing Rails project, so let's do that first. From the top-level directory of your Rails application, type the following (which should be entered on a single line):

git clone git://github.com/activescaffold/active_scaffold.git \
  vendor/plugins/active_scaffold && \
  rm -rf vendor/plugins/active_scaffold/.git

This command fetches ActiveScaffold and installs it into your Rails application. When this process completes, a new directory has been created within the vendor/plugins/ directory of your Rails application called active_scaffold. For the plugin to work its magic, we need to create an application-level layout that will be used throughout our Rails application. Here's a bare-bones layout, which we need to create in the app/views/layouts directory and which is called application.rhtml:

<html>
  <head>
    <title>Soccer Club Database System</title>
    <%= javascript_include_tag :defaults %>
    <%= active_scaffold_includes %>
  </head>
  <body>
    <%= yield %>
  </body>
</html>

This is a straightforward, essentially empty, HTML page. Take note of the code included within the <%= and %> tags. These tags allow us to execute Ruby code from within an HTML template. The first set of such tags adds a set of JavaScript routines to our page; the second pulls in the ActiveScaffold goodness, and the third executes the Ruby yield method. Any layouts that are created within our application (whether manually by us or dynamically by Rails or ActiveScaffold) will be wrapped in the application.rhtml layout, with their content replacing the invocation of yield as required.
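Back at the model layer for a moment: the article wires up only the belongs_to side of the associations. Rails does not require the inverse declarations for the demo to work, but adding them lets you traverse the relationship the other way (squad.players, condition.players). A sketch:

```ruby
# app/models/squad.rb
class Squad < ActiveRecord::Base
  has_many :players
end

# app/models/condition.rb
class Condition < ActiveRecord::Base
  has_many :players
end
```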
With the default layout created, we need to edit each of our existing controllers to switch on ActiveScaffold. Here's how the app/controllers/player_controller.rb file should appear after this edit:

class PlayerController < ApplicationController
  active_scaffold :player
end

Add a similar line of code to the app/controllers/squad_controller.rb and app/controllers/condition_controller.rb files, then start your Rails application:

ruby script/server

Fire up your browser and load the page. Take a look at Figure 1, which shows the default ActiveScaffold player listing, and it looks great. Note that ActiveScaffold has spotted the links between the three tables and pulled in the appropriate data values. Note also that I've added some sample data to my Web app.

Unfortunately, the ordering of the columns leaves a little to be desired, and this is never more evident than when we view the default ActiveScaffold player form, as shown in Figure 2. This form displays the table columns in alphabetical order, which is not what we want. In addition, the subforms that provide access to the squad and medical condition data are cool, but what we want is a simple drop-down list for our application. Thankfully, adjusting ActiveScaffold's default behaviors is not difficult, as we shall see in a few moments.

Another problem (which you may have noticed if you've been following along) is that the date range associated with the date_of_birth value is very restrictive, using 1997 as the earliest start year. As all of our soccer players were born in the early 1990s, we need some way to adjust the start year for any entered dates. ActiveScaffold (together with Rails) can help here.

Getting the article code to work with PostgreSQL
A reader, Ken Shaffer, emailed with an issue he had getting the example code (above) to work with his version of PostgreSQL. I initially suggested he try it with MySQL just to get things going but, to Ken's credit, he soldiered on and sorted out his problems. Here's a copy of Ken's e-mail to me letting me know what he did. Both Ken and I hope this information will be of use to other readers experiencing similar problems.

----------- start of Ken's email -----------

Hi Paul,

I did succeed in getting a compatible set of gems and activescaffold for running the soccer club demo. Open source is so dynamic that it's sometimes tricky getting compatible versions of things -- my hat's off to the people putting Linux distributions together.

For the Ubuntu 8.10 rails 2.1 (rake, rubygems) installed via the Synaptic Package Manager, use the ruby_pg gem instead of the postgres gem. I understand the postgres gem has not been supported since 2006. Also avoid the pg gem, which is not necessary.

The activescaffold install now needs to pick out the rails 2.1 (previous) version. The following failed to get the rails 2.1 version (got the default 2.2 version):

script/plugin install \
  git://github.com/activescaffold/active_scaffold.git \
  -r rails-2.1

Found a code snippet (attached below) which worked. Another (more painful) approach which worked is to update all the rails 2.1.0 versions to the 2.2.2 versions (and also update rubygems). The default activescaffold install will then work.

The below code snippet succeeded in getting the activescaffold version compatible with rails 2.1, from Comment 1 by mr.gaffo, Nov 18, 2008:

---snip -----
Either (in vendor plugin):

git clone git://github.com/activescaffold/active_scaffold.git
[cd to active_scaffold]
git checkout origin/rails-2.1
rm -rf .git

Or, pull down the newest rails-2.1 tarball from github.
--snip------

Again, thanks for a great article. It was the combination of postgresql and rails that caught my interest.
Feel free to pass along any info, no attribution needed.

Ken

----------- end of Ken's email -----------

Thanks for that, Ken!

Paul.

Paul Barry
Introduction: Animatronic Monkey

This is also one of the animals that is smuggled and suffers from mistreatment in captivity. Many find it cute and funny. However, when buying one of these, you will be financing and encouraging the smuggling of wild animals. Unlike what famous people say, if you blow something here, you will not earn a monkey. Here in Brazil we have tough laws to punish the smuggling of wild animals.

Working with Arduino:

Step 1: Materials:

- figure of the monkey (free download):
- 4 sheets of A4 paper (the file holds only one figure, but I printed it poster-size, four pages per sheet, to make the monkey a little bigger)
- pieces of balsa wood
- a piece of depron or a styrofoam tray
- two servomotors (one standard, the other mini), each with a crosshead
- one bolt with nut and washer (about 6 cm)
- a piece of steel wire (I used a clothespin)
- a strip of velcro (6 x 3 cm)
- programmed Arduino, breadboard, wires, batteries, servo lead connection
- glue, hot glue, instant glue, scissors, ruler, pliers, etc.

Step 2: Assembling the Skeleton

The skeleton was made with two servos, pieces of balsa wood, a bolt, a nut and a steel rod.

Step 3: Assembling the Parts of the Monkey

After printing, cut and glue the parts. Be careful not to glue the head on the body. At the bottom, I put a piece of depron. Notice that one edge is rectangular. This was used to fit the tail of the monkey.

Step 4: Wearing the Skeleton

I put a piece of styrofoam in the head to be able to glue the crosshead of the servomotor. Then I "covered" the skeleton with the monkey's body, attached the head and glued the base. To have access to the internal components, I put a strip of velcro and made a hole in the chest of the monkey.

Step 5: Connecting to Arduino

I just use two ports on the Arduino to control it. In this example, I used ports 0 and 1.
Software:

int pos = 0;        // servo position variable
int anival = 0;     // variable that will choose the animal
int randomval2 = 0; // pause time variable

#include <Servo.h>  // servo library

Servo macacoc;      // creating the monkey body servo object
Servo macacop;      // creating the monkey neck servo object

void setup()
{
  macacoc.attach(0); // attaching the monkey's body servo to pin 0
  macacop.attach(1); // attaching the monkey's neck servo to pin 1
}

void loop()
{
inicio:
  randomval2 = random(4);         // pick the pause length at random
  randomval2 = randomval2 * 1000; // convert to milliseconds: 0, 1, 2 or 3 seconds
  macacop.write(90);
  for(pos = 130; pos < 170; pos += 1)
  {
    macacoc.write(pos);
    delay(30);
  }
  delay(2000);
  for(pos = 170; pos >= 130; pos -= 1)
  {
    macacoc.write(pos);
    delay(40);
  }
  for(pos = 90; pos < 140; pos += 1)
  {
    macacop.write(pos);
    delay(15);
  }
  for(pos = 140; pos > 90; pos -= 1)
  {
    macacop.write(pos);
    delay(15);
  }
  for(pos = 90; pos < 140; pos += 1)
  {
    macacop.write(pos);
    delay(10);
  }
  for(pos = 140; pos > 90; pos -= 1)
  {
    macacop.write(pos);
    delay(10);
  }
  delay(randomval2);
  goto inicio;
}

4 Comments

10 years ago on Introduction
These animals are so cool. Great job, Peck!

Reply, 10 years ago on Introduction
Thanks, Mary!

10 years ago on Introduction
Cute monkey. I like the idea as well as how you did it.

Reply, 10 years ago on Introduction
Thank you! The hard part is making the first one. Then the imagination flies.
Using units in python

Posted January 19, 2013 at 09:00 AM | categories: units, python
Updated March 23, 2013 at 09:45 AM

I think an essential feature in an engineering computational environment is properly handling units and unit conversions. Mathcad supports that pretty well. I wrote a package for doing it in Matlab. Today I am going to explore units in python. There are a number of packages which support units to some extent; the last one I found, scimath, looks most promising.

import numpy as np
from scimath.units.volume import liter
from scimath.units.substance import mol

q = np.array([1, 2, 3]) * mol
print q

P = q / liter
print P

[1.0*mol 2.0*mol 3.0*mol]
[1000.0*m**-3*mol 2000.0*m**-3*mol 3000.0*m**-3*mol]

That doesn't look too bad. It is a little clunky to have to import every unit, and it is clear the package is saving everything in SI units by default.

Let us try to solve an equation. Find the time that solves this equation.

\(0.01 = C_{A0} e^{-kt}\)

First we solve without units. That way we know the answer.

import numpy as np
from scipy.optimize import fsolve

CA0 = 1.0  # mol/L
CA = 0.01  # mol/L
k = 1.0    # 1/s

def func(t):
    z = CA - CA0 * np.exp(-k*t)
    return z

t0 = 2.3

t, = fsolve(func, t0)
print 't = {0:1.2f} seconds'.format(t)

t = 4.61 seconds

Now, with units. I note here that I tried the obvious thing of just importing the units, and adding them on, but the package is unable to work with floats that have units. For some functions, there must be an ndarray with units, which is practically what the UnitScalar code below does.
import numpy as np
from scipy.optimize import fsolve

from scimath.units.volume import liter
from scimath.units.substance import mol
from scimath.units.time import second
from scimath.units.api import has_units, UnitScalar

CA0 = UnitScalar(1.0, units = mol / liter)
CA = UnitScalar(0.01, units = mol / liter)
k = UnitScalar(1.0, units = 1 / second)

@has_units(inputs="t::units=s", outputs="result::units=mol/liter")
def func(t):
    z = CA - CA0 * float(np.exp(-k*t))
    return z

t0 = UnitScalar(2.3, units = second)

t, = fsolve(func, t0)
print 't = {0:1.2f} seconds'.format(t)
print type(t)

t = 4.61 seconds
<type 'numpy.float64'>

This is some heavy syntax that in the end does not preserve the units. In my Matlab package, we had to "wrap" many functions like fsolve so they would preserve units. Clearly this package will need that as well. Overall, in its current implementation this package does not do what I would expect all the time.

Copyright (C) 2013 by John Kitchin. See the License for information about copying.
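The "wrapping" fix mentioned above can be pictured as a helper that strips the units off before calling a unit-unaware solver and re-attaches them to the result. The sketch below does not use scimath or scipy; the Quantity class and the unit_fsolve name are illustrative inventions, with a simple bisection standing in for fsolve:

```python
# Pure-Python illustration of wrapping a unit-unaware solver so that it
# "preserves" units: do the numerics on bare floats, then tag the answer.
import math

class Quantity:
    """A minimal value-plus-units pair (illustrative, not scimath)."""
    def __init__(self, value, units):
        self.value = value
        self.units = units
    def __repr__(self):
        return "{0:.5f} {1}".format(self.value, self.units)

def unit_fsolve(func, t0, units):
    """Solve func(t) = 0 by bisection on plain floats, tag the result."""
    lo, hi = 0.0, 2 * t0.value + 10.0   # bracket assumed to contain the root
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if func(lo) * func(mid) <= 0:   # sign change in [lo, mid]
            hi = mid
        else:
            lo = mid
    return Quantity((lo + hi) / 2.0, units)

CA0, CA, k = 1.0, 0.01, 1.0             # bare numbers inside the wrapper
f = lambda t: CA - CA0 * math.exp(-k * t)

t = unit_fsolve(f, Quantity(2.3, "s"), "s")
print(t)   # approximately 4.60517 s, i.e. ln(100) seconds
```

A real wrapper would pull the units off the UnitScalar arguments automatically rather than hard-coding them, but the shape of the fix is the same.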
[Conditional("DEBUG")]
private void DiagnosticLog(string message)
{
    Console.WriteLine(message);
}

private static void Main(string[] args)
{
    DiagnosticLog("app start");
    Console.WriteLine("1 + 2 = " + (1 + 2).ToString());
}

namespace ConsoleApplication67
{
    class Program
    {
        private static decimal _account = 100;

        static void Main(string[] args)
        {
            decimal toDebit = 10;
            decimal balance = Withdraw(toDebit);
        }

        private static decimal Withdraw(decimal amount)
        {
            bool accountHasFunds = CheckThatAccountHasFunds();
            if (accountHasFunds)
            {
#if DEBUG
                DebitAccount(amount);
#endif
            }
            return _account;
        }

        private static bool CheckThatAccountHasFunds()
        {
            return (_account > 0);
        }

        private static void DebitAccount(decimal amount)
        {
            _account -= amount;
        }
    }
}

The question as I understand it is about how to supply a default/standard set of data under certain conditions, and at other times to use 'real' data. Your example above doesn't change the source; now I have added an extra boolean to my if() sentences checking if System.Diagnostics.Debugger.IsAttached is set. You could use the #if to turn code sections on and off. There is also a /define compiler option, so you can have two compiler options, e.g. RELEASE and your customised RELEASE_STATIC; the first is the default release version, the second defines your custom flag to turn code blocks on and off.
Once this is done, you have 3 ways of compiling your application. To set up Special, go into the project's Properties Window, go to the Build tab and select the Special configuration at the top. In the Conditional compilation symbols, define one as SPECIAL=true.

Now, in your code, everywhere you were working with System.Diagnostics.Debugger.IsAttached, use:

#if SPECIAL
    // YourCode
#endif

When you select SPECIAL as the configuration at the top of the code window, the compiler will compile that code. But if you select any other configuration (Release or Debug) that does not have the SPECIAL compilation constant defined, its value is seen as False, and the piece of code in the #if will not compile. You can see it in the code window because it is grayed out in the configurations in which it won't compile.

Compiling with the Release configuration gives you an application that does not contain your extra code, while compiling with the Special configuration does. Two versions of your application, same source code.

"If however you want to have something that works on the whole project, there is a simpler approach."

Erm, your 'simpler' approach is what I suggested. Two configurations for compiling with a flag set at compile time, available for the complete set of files.

Yes, you suggested conditional compilation, but you did it through a compiler option, something that, in my experience, very few programmers use because most of these can be set more easily through the project's properties. And the link you gave shows how to define the options in the code, which is a pain when you need to do it in many files, when you have to change the values often, and/or when you have many configurations with different combinations of values for the same options.
I simply wanted to point to a third way of doing it, the one I almost always see when reviewing code, and add a more complete idea of how to use it in code, with a brief example that is not as confusing as the one in your link. It can lead one to think that the thing is always used in combination with DEBUG.

Having said that, I have looked and seen that Microsoft has changed things since the last time I used this functionality, admittedly quite some years ago. Previously one could see the complete compiler command line on a separate tab of the project properties - now that is hidden, and one only sees the option to enter the custom defines in a textbox.

Do note: it is important that your method return void. If your function has a return type other than void, then you must use the #if syntax.

In the above, you would only see the "app start" message if the application were compiled with a DEBUG compilation symbol. It works the same as if you did #if DEBUG. The call to DiagnosticLog will be stripped out of the code during compilation if the symbol is not defined. It's a slightly cleaner syntax, IMO.

The #if has 3 advantages over it.

The first one is that it can be applied to only part of a method. Often, a #if will deal with only one or two lines. Creating methods for these situations would be overkill.

It also has the possibility of having a #else, which is also something that often happens with conditional compilation. Having to create 2 methods with different attributes to do so would also, in my opinion, be overkill, and make the code less fluid.

The editor grays out code that is not compiled with the configuration that is currently selected. This makes it automatic and thus easier to see what will really be there when the application compiles. With the attribute, you have to be very careful of spotting the attribute in order to see that a method is there in the code but will not compile.
Oversimplified example, but it should stress the importance of care in using that functionality in such a manner. It is usually not something that applies to a simple call to a method. And as I said sooner, in such a situation, the Conditional attribute is probably the best thing.

I'm having trouble thinking about how to use your suggestion (turning complete functions on/off rather than isolating blocks of code with #if, #else, #endif) for this case - using static data for development/testing rather than 'real' data. Can you give a quick air code / logic of how you would do that?

Are you referring to something other than what I posted above?

The question as I understand it is about how to supply a default/standard set of data under certain conditions, at other times to use 'real' data. I just can't see an easy way to do that by turning complete functions on/off via compilation - which is what your suggestion refers to.

ps. Your example above doesn't change the source, it just runs an action automatically on what is provided.

pps. Having written that, a possible but rather ugly way has come to mind:

[Conditional("STATIC_DATA")]
private void ProvideStaticData()
{
    // generate the standard data
}

xxx()
{
    // read data from database
    // fill with actual sales values
    // now throw all the above away should the following function exist
    ProvideStaticData();
}

I agree, but maybe I don't understand what you are saying. This is from the follow-up comment to the question, providing clarifying information about the task in hand:

"Yes, my code is written to check for it and bypass data collector tasks that are not available while debugging. Instead static data is returned so that it may be used for debugging purposes and further development of this code and code that uses the solution."
I'm making an archery game and I need to shoot 5 arrows and assign a point value to each spot of the target. Bullseye is worth 9 and each ring out from there is worth 2 less. I am having trouble figuring out how to assign a score to the target. I'm writing it in the getScore function to be called later on by main. Any advice? Thanks! Here's part of my code

def drawTarget(win):
    center = Point(0,0)  # Center of target
    e = Circle(center,5) # largest circle
    e.setFill("white")
    e.draw(win)
    d = Circle(center,4)
    d.setFill("black")
    d.draw(win)
    c = Circle(center,3)
    c.setFill("blue")
    c.draw(win)
    b = Circle(center,2)
    b.setFill("red")
    b.draw(win)
    a = Circle(center,1) # smallest (bullseye) circle
    a.setFill("yellow")
    a.draw(win)

# allow user to place an arrow shot with the mouse and return the score associated
# with that shot. Place a small circle to show where the arrow hit.
def getScore(win):
    x = win.getMouse()
    c = Circle(x,.1)
    c.setFill("green")
    c.draw(win)
    p1 = x.getX()
    p2 = x.getY()
    #score = math.sqrt((p1)**2),p2**2)
    return 0 # needs to be replaced by real score
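For what it's worth, here is one possible way the commented-out line could grow into a scoring rule, assuming the rings are the unit-spaced circles from drawTarget (bullseye of radius 1 worth 9, each ring outward worth 2 less, and 0 off the target). The graphics calls are left out so only the math on the click coordinates is shown; the function name is made up:

```python
# Score a shot from its (x, y) click coordinates on the target centered
# at (0, 0). Rings are 1 unit wide, so int(distance) picks the ring index.
import math

def score_for(x, y):
    distance = math.sqrt(x ** 2 + y ** 2)  # distance from target center
    if distance >= 5:                      # outside the largest circle
        return 0
    ring = int(distance)                   # 0..4: which ring was hit
    return 9 - 2 * ring                    # 9, 7, 5, 3, 1

print(score_for(0.5, 0))   # bullseye: 9
print(score_for(0, 3.5))   # fourth ring out: 3
```

Inside getScore, this would amount to computing the distance from p1 and p2 and returning the ring value instead of 0.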
A significant portion of the data that is generated today is unstructured. Unstructured data includes social media comments, browsing history and customer feedback. Have you found yourself in a situation with a bunch of textual data to analyse, and no idea how to proceed? Natural language processing in Python can help.

The objective of this tutorial is to enable you to analyze textual data in Python through the concepts of Natural Language Processing (NLP). You will first learn how to tokenize your text into smaller chunks, normalize words to their root forms, and then, remove any noise in your documents to prepare them for further analysis.

Let's get started!

Prerequisites

In this tutorial, we will use Python's nltk library to perform all NLP operations on the text. At the time of writing this tutorial, we used version 3.4 of nltk. To install the library, you can use the pip command on the terminal:

pip install nltk==3.4

To check which version of nltk you have in the system, you can import the library into the Python interpreter and check the version:

import nltk
print(nltk.__version__)

To perform certain actions within nltk in this tutorial, you may have to download specific resources. We will describe each resource as and when required. However, if you would like to avoid downloading individual resources later in the tutorial and grab them now in one go, run the following command:

python -m nltk.downloader all

Step 1: Convert into Tokens

A computer system cannot find meaning in natural language by itself. The first step in processing natural language is to convert the original text into tokens. A token is a combination of continuous characters, with some meaning. It is up to you to decide how to break a sentence into tokens. For instance, an easy method is to split a sentence by whitespace to break it into individual words. In the NLTK library, you can use the word_tokenize() function to convert a string to tokens.
However, you will first need to download the punkt resource. Run the following command in the terminal:

nltk.download('punkt')

Next, you need to import word_tokenize from nltk.tokenize to use it.

from nltk.tokenize import word_tokenize
print(word_tokenize("Hi, this is a nice hotel."))

The output of the code is as follows:

['Hi', ',', 'this', 'is', 'a', 'nice', 'hotel', '.']

You'll notice that word_tokenize does not simply split a string based on whitespace, but also separates punctuation into tokens. It's up to you if you would like to retain the punctuation marks in the analysis.

Step 2: Convert Words to their Base Forms

When you are processing natural language, you'll often notice that there are various grammatical forms of the same word. For instance, "go", "going" and "gone" are forms of the same verb, "go". While the necessities of your project may require you to retain words in various grammatical forms, let us discuss a way to convert various grammatical forms of the same word into its base form. There are two techniques that you can use to convert a word to its base.

The first technique is stemming. Stemming is a simple algorithm that removes affixes from a word. There are various stemming algorithms available for use in NLTK. We will use the Porter algorithm in this tutorial.

We first import PorterStemmer from nltk.stem.porter. Next, we initialize the stemmer to the stemmer variable and then use the .stem() method to find the base form of a word.

from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()
print(stemmer.stem("going"))

The output of the code above is go. If you run the stemmer for the other forms of "go" described above, you will notice that the stemmer returns the same base form, "go". However, as stemming is only a simple algorithm based on removing word affixes, it fails when the words are less commonly used in language. When you try the stemmer on the word "constitutes", it gives an unintuitive result.
print(stemmer.stem("constitutes"))

You will notice the output is "constitut".

This issue is solved by moving on to a more complex approach towards finding the base form of a word in a given context. The process is called lemmatization. Lemmatization normalizes a word based on the context and vocabulary of the text. In NLTK, you can lemmatize sentences using the WordNetLemmatizer class.

First, you need to download the wordnet resource from the NLTK downloader in the Python terminal.

nltk.download('wordnet')

Once it is downloaded, you need to import the WordNetLemmatizer class and initialize it.

from nltk.stem.wordnet import WordNetLemmatizer
lem = WordNetLemmatizer()

To use the lemmatizer, use the .lemmatize() method. It takes two arguments: the word and the context. In our example, we will use "v" for context. Let us explore the context further after looking at the output of the .lemmatize() method.

print(lem.lemmatize('constitutes', 'v'))

You would notice that the .lemmatize() method correctly converts the word "constitutes" to its base form, "constitute". You would also notice that lemmatization takes longer than stemming, as the algorithm is more complex.

Let's check how to determine the second argument of the .lemmatize() method programmatically. NLTK has a pos_tag function which helps in determining the context of a word in a sentence. However, you first need to download the averaged_perceptron_tagger resource through the NLTK downloader.

nltk.download('averaged_perceptron_tagger')

Next, import the pos_tag function and run it on a sample sentence.

from nltk.tag import pos_tag
sample = "Hi, this is a nice hotel."
print(pos_tag(word_tokenize(sample)))
[('Hi', 'NNP'), (',', ','), ('this', 'DT'), ('is', 'VBZ'), ('a', 'DT'), ('nice', 'JJ'), ('hotel', 'NN'), ('.', '.')]

How do you decode the context of each token? Here is a full list of all tags and their corresponding meanings on the web. Notice that the tags of all nouns begin with "N", and those of all verbs begin with "V". We can use this information in the second argument of our .lemmatize() method.

def lemmatize_tokens(sentence):
    lemmatizer = WordNetLemmatizer()
    lemmatized_tokens = []
    for word, tag in pos_tag(sentence):
        if tag.startswith('NN'):
            pos = 'n'
        elif tag.startswith('VB'):
            pos = 'v'
        else:
            pos = 'a'
        lemmatized_tokens.append(lemmatizer.lemmatize(word, pos))
    return lemmatized_tokens

sample = "Legal authority constitutes all magistrates."
print(lemmatize_tokens(word_tokenize(sample)))

The output of the code above is as follows:

['Legal', 'authority', 'constitute', 'all', 'magistrate', '.']

This output is on expected grounds, where "constitutes" and "magistrates" have been converted to "constitute" and "magistrate", respectively.

Step 3: Data Cleaning

The next step in preparing data is to clean the data and remove anything that does not add meaning to your analysis. Broadly, we will look at removing punctuation and stop words from your analysis.

Removing punctuation is a fairly easy task. The punctuation object of the string library contains all the punctuation marks in English.

import string
print(string.punctuation)

The output of this code snippet is as follows:

'!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~'

In order to remove punctuation from tokens, you can simply run:

for token in tokens:
    if token in string.punctuation:
        # Do something

Next, we will focus on removing stop words. Stop words are commonly used words in language like "I", "a" and "the", which add little meaning to text when analyzing it. We will therefore remove stop words from our analysis. First, download the stopwords resource from the NLTK downloader.
nltk.download('stopwords')

Once your download is complete, import stopwords from nltk.corpus and use the .words() method with "english" as the argument. It is a list of 179 stop words in the English language.

from nltk.corpus import stopwords
stop_words = stopwords.words('english')

We can combine the lemmatization example with the concepts discussed in this section to create the following function, clean_data(). Additionally, before comparing if a word is part of the stop words list, we convert it to lower case. This way, we still capture a stop word if it occurs at the start of a sentence and is capitalized.

def clean_data(tokens, stop_words = ()):
    cleaned_tokens = []
    for token, tag in pos_tag(tokens):
        if tag.startswith("NN"):
            pos = 'n'
        elif tag.startswith('VB'):
            pos = 'v'
        else:
            pos = 'a'
        lemmatizer = WordNetLemmatizer()
        token = lemmatizer.lemmatize(token, pos)
        if token not in string.punctuation and token.lower() not in stop_words:
            cleaned_tokens.append(token)
    return cleaned_tokens

sample = "The quick brown fox jumps over the lazy dog."
stop_words = stopwords.words('english')
clean_data(word_tokenize(sample), stop_words)

The output of the example is as follows:

['quick', 'brown', 'fox', 'jump', 'lazy', 'dog']

As you can see, the punctuation and stop words have been removed.

Word Frequency Distribution

Now that you are familiar with the basic cleaning techniques in NLP, let's try and find the frequency of words in text. For this exercise, we'll use the text of the fairy tale, The Mouse, The Bird and The Sausage, which is available freely on Gutenberg. We'll store the text of this fairy tale in a string, text.

First, we tokenize text and then clean it using the function clean_data that we defined above.

tokens = word_tokenize(text)
cleaned_tokens = clean_data(tokens, stop_words = stop_words)

To find the frequency distribution of words in your text, you can use the FreqDist class of NLTK. Initialize the class with the tokens as an argument.
Then use the .most_common() method to find the commonly occurring terms. Let us try and find the top ten terms in this case.

from nltk import FreqDist
freq_dist = FreqDist(cleaned_tokens)
freq_dist.most_common(10)

Here are the ten most commonly occurring terms in this fairy tale.

[('bird', 15), ('sausage', 11), ('mouse', 8), ('wood', 7), ('time', 6), ('long', 5), ('make', 5), ('fly', 4), ('fetch', 4), ('water', 4)]

Unsurprisingly, the three most common terms are the three main characters in the fairy tale.

The frequency of words may not be very important when analysing text. Typically, the next step in NLP is to generate a statistic, TF-IDF (term frequency - inverse document frequency), which signifies the importance of a word in a list of documents.

Conclusion

In this post, you were introduced to natural language processing in Python. You converted text to tokens, converted words to their base forms and finally, cleaned the text to remove any part which didn't add meaning to the analysis.

Although you looked at simple NLP tasks in this tutorial, there are many techniques to explore. One may wish to perform topic modelling on textual data, where the objective is to find a common topic that a text might be talking about. A more complex task in NLP is the implementation of a sentiment analysis model to determine the feeling behind any text.

What procedures do you follow when you are given a pile of text to work with? Let us know in the comments.
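The TF-IDF statistic mentioned above can also be computed by hand, which makes the definition concrete. The sketch below is a minimal pure-Python illustration (the toy documents and the function name are made up), not the library implementation a real project would use:

```python
# TF-IDF by hand: a term's frequency within one document, scaled down by
# how many documents in the corpus contain it.
import math

def tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)                # term frequency in doc
    containing = sum(1 for d in docs if term in d) # documents containing term
    idf = math.log(len(docs) / containing)         # inverse document frequency
    return tf * idf

docs = [
    ["bird", "sausage", "mouse", "bird"],
    ["bird", "wood", "water"],
    ["mouse", "wood", "fetch"],
]

# "bird" appears in 2 of the 3 documents, so it is weighted down;
# "sausage" appears in only 1, so it is weighted up.
print(tf_idf("bird", docs[0], docs))
print(tf_idf("sausage", docs[0], docs))
```

Even though "bird" occurs twice in the first document and "sausage" only once, "sausage" ends up with the higher score, which is exactly the "importance across documents" behavior TF-IDF is meant to capture.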
Unity Multiplayer Games

Multiplayer is everywhere. It's a staple of AAA games and small-budget indie offerings alike. Multiplayer games tap into our most basic human desires. Whether it be teaming up with strangers to survive a zombie apocalypse, or showing off your skills in a round of "Capture the Flag" on your favorite map, no artificial intelligence in the world comes close to the feeling of playing with a living, breathing, and thinking human being.

Unity3D has a sizable number of third-party networking middleware options aimed at developing multiplayer games, and is arguably one of the easiest platforms to prototype multiplayer games. The first networking system most people encounter in Unity is the built-in Unity Networking API. This API simplifies a great many tasks in writing networked code by providing a framework for networked objects rather than just sending messages. This works by providing a NetworkView component, which can serialize object state and call functions across the network. Additionally, Unity provides a Master server, which essentially lets players search among all public servers to find a game to join, and can also help players in connecting to each other from behind private networks.

In this article, we will cover:

- Introducing multiplayer
- Introducing UDP communication
- Setting up your own Master server for testing
- What a NetworkView is
- Serializing object state
- Calling RPCs
- Starting servers and connecting to them
- Using the Master server API to register servers and browse available hosts
- Setting up a dedicated server model
- Creating a Pong clone using Unity networking

Introducing multiplayer games

Before we get started on the details of communication over the Internet, what exactly does multiplayer entail in a game?
As far as most players are concerned, in a multiplayer game they are sharing the same experience with other players. It looks and feels like they are playing the same game. In reality, they aren't. Each player is playing a separate game, each with its own game state. Trying to ensure that all players are playing the exact same game is prohibitively expensive. Instead, games attempt to synchronize just enough information to give the illusion of a shared experience.

Games are almost ubiquitously built around a client-server architecture, where each client connects to a single server. The server is the main hub of the game, ideally the machine responsible for processing the game state, although at the very least it can serve as a simple "middleman" for messages between clients. Each client represents an instance of the game running on a computer. In some cases the server might also have a client of its own; for instance, some games allow you to host a game without starting up an external server program.

While an MMO (Massively Multiplayer Online) game might connect directly to one of a known set of servers, many games do not have prior knowledge of the server IPs. For example, FPS games often let players host their own servers. In order to show the user a list of servers they can connect to, games usually employ another server, known as the "Master Server" or alternatively the "Lobby server". This server's sole purpose is to keep track of game servers which are currently running, and to report a list of these to clients. Game servers connect to the Master server in order to announce their presence publicly, and game clients query the Master server to get an updated list of game servers currently running.

Alternatively, this Master server sometimes does not keep track of servers at all. Sometimes games employ "matchmaking", where players connect to the Lobby server and list their criteria for a game.
The server places this player in a "bucket" based on their criteria, and whenever a bucket is full enough to start a game, a host is chosen from these players and that client starts up a server in the background, which the other players connect to. This way, the player does not have to browse servers manually and can instead simply tell the game what they want to play.

Introducing UDP communication

The built-in Unity networking is built upon RakNet. RakNet uses UDP communication for efficiency. UDP (User Datagram Protocol) is a simple way to send messages to another computer. These messages are largely unchecked, beyond a simple checksum to ensure that the message has not been corrupted. Because of this, messages are not guaranteed to arrive, nor are they guaranteed to arrive only once (occasionally a single message can be delivered twice or more), or in any particular order. TCP, on the other hand, guarantees each message to be received exactly once, and in the exact order it was sent, although this can result in increased latency (messages must be resent several times if they fail to reach the target, and messages must be buffered when received, in order to be processed in the exact order they were sent).

To solve this, a reliability layer must be built on top of UDP. This is known as rUDP (reliable UDP). Messages can be sent unreliably (they may not arrive, or may arrive more than once), or reliably (they are guaranteed to arrive, only once per message, and in the correct order). If a reliable message was not received or was corrupt, the original sender has to resend the message. Additionally, messages will be stored rather than immediately processed if they are not in order. For example, if you receive messages 1, 2, and 4, your program will not be able to handle those messages until message 3 arrives. Allowing unreliable or reliable switching on a per-message basis affords better overall performance.
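To make the buffering behavior concrete, here is a minimal sketch of a reliable-ordered receive path. It is purely illustrative: the OrderedReceiver class and its names are invented here, and are not part of RakNet or Unity. Messages that arrive ahead of the next expected sequence number are held back, and released in order once the gap is filled.

```csharp
using System.Collections.Generic;

// Hypothetical reliable-ordered receive buffer, for illustration only.
public class OrderedReceiver
{
    private int nextExpected = 1;
    private readonly Dictionary<int, string> held = new Dictionary<int, string>();

    // Returns the messages that become deliverable after receiving 'seq'.
    public List<string> Receive( int seq, string message )
    {
        var deliverable = new List<string>();

        // a sequence number below the next expected one is a duplicate; drop it
        if( seq < nextExpected )
            return deliverable;

        held[seq] = message;

        // release every consecutive message starting from the next expected one
        while( held.ContainsKey( nextExpected ) )
        {
            deliverable.Add( held[nextExpected] );
            held.Remove( nextExpected );
            nextExpected++;
        }
        return deliverable;
    }
}
```

With this scheme, receiving messages 1, 2, and 4 delivers 1 and 2 immediately, while 4 waits in the buffer; once 3 arrives, 3 and 4 are delivered together, matching the example above.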
Messages such as player position are better suited to unreliable messages (if one fails to arrive, another will arrive soon anyway), whereas damage messages must be reliable (you never want to accidentally drop a damage message, and having them arrive in the same order they were sent reduces race conditions). In Unity, you can serialize the state of an object (for example, you might serialize the position and health of a unit) either reliably or unreliably (unreliable is usually preferred). All other messages are sent reliably.

Setting up the Master Server

Although Unity provides its own default Master Server and Facilitator (which are connected to automatically if you do not specify your own), it is not recommended to use these for production. We'll be using our own Master Server, so you know how to connect to one you've hosted yourself. Firstly, go to the following page: We're going to download two of the listed server components: the Master Server and the Facilitator, as shown in the following screenshot:

The servers are provided in full source, zipped. If you are on Windows using Visual Studio Express, open up the Visual Studio .sln solution and compile in the Release mode. Navigate to the Release folder and run the EXE (MasterServer.exe or Facilitator.exe). If you are on a Mac, you can either use the included XCode project, or simply run the Makefile (the Makefile works under both Linux and Mac OS X).

The Master Server, as previously mentioned, enables our game to show a server lobby to players. The Facilitator is used to help clients connect to each other by performing an operation known as NAT punch-through. NAT is used when multiple computers are part of the same network, and all use the same public IP address. NAT will essentially translate public and private IPs, but in order for one machine to connect to another, NAT punch-through is necessary.
You can read more about it here:

The default port for the Master Server is 23466, and for the Facilitator is 50005. You'll need these later in order to configure Unity to connect to the local Master Server and Facilitator instead of the default Unity-hosted servers. Now that we've set up our own servers, let's take a look at the Unity Networking API itself.

NetworkViews and state serialization

In Unity, game objects that need to be networked have a NetworkView component. The NetworkView component handles communication over the network, and even helps make networked state serialization easier. It can automatically serialize the state of a Transform, Rigidbody, or Animation component, or in one of your own scripts you can write a custom serialization function.

When attached to a game object, a NetworkViewID will be generated for the NetworkView. This ID serves to uniquely identify a NetworkView across the network. An object can be saved as part of a scene with NetworkView attached (this can be used for game managers, chat boxes, and so on), or it can be saved in the project as a prefab and spawned later via Network.Instantiate (this is used to generate player objects, bullets, and so on). Network.Instantiate is the multiplayer equivalent of GameObject.Instantiate: it sends a message over the network to other clients so that all clients spawn the object. It also assigns a network ID to the object, which is used to identify the object across multiple clients (the same object will have the same network ID on every client).

A prefab is a template for a game object (such as the player object). You can use the Instantiate methods to create a copy of the template in the scene.

Spawned network game objects can also be destroyed via Network.Destroy. It is the multiplayer counterpart of GameObject.Destroy. It sends a message to all clients so that they all destroy the object. It also deletes any RPC messages associated with that object.
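As a quick sketch of how these calls fit together, the following spawns a networked object when the client joins a game. The playerPrefab field here is hypothetical; it would be assigned in the Inspector and must be a prefab with a NetworkView attached.

```csharp
using UnityEngine;
using System.Collections;

// Illustrative sketch: spawn a networked player object after joining a game.
public class ExampleSpawnPlayer : MonoBehaviour
{
    // hypothetical prefab with a NetworkView component attached
    public GameObject playerPrefab;

    void OnConnectedToServer()
    {
        // spawns the prefab on all clients; every client sees the same network ID
        // (the last argument is the network group, 0 by default)
        Network.Instantiate( playerPrefab, Vector3.zero, Quaternion.identity, 0 );
    }
}
```

The spawned object could later be removed on every client with Network.Destroy.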
NetworkView has a single component that it will serialize. This can be a Transform, a Rigidbody, an Animation, or one of your own components that has an OnSerializeNetworkView function. Serialized values can either be sent with the ReliableDeltaCompressed option, where values are always sent reliably and compressed to include only changes since the last update, or they can be sent with the Unreliable option, where values are not sent reliably and always include the full values (not the change since the last update, since that would be impossible to predict over UDP). Each method has its own advantages and disadvantages. If data is constantly changing, such as player position in a first-person shooter, in general Unreliable is preferred to reduce latency. If data does not often change, use the ReliableDeltaCompressed option to reduce bandwidth (as only changes will be serialized).

NetworkView can also call methods across the network via Remote Procedure Calls (RPC). RPCs are always completely reliable in Unity Networking, although some networking libraries allow you to send unreliable RPCs, such as uLink or TNet.

Writing a custom state serializer

While initially a game might simply serialize Transform or Rigidbody for testing, eventually it is often necessary to write a custom serialization function. This is a surprisingly easy task.
Here is a script that sends an object's position over the network:

```csharp
using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkSerializePosition : MonoBehaviour
{
    public void OnSerializeNetworkView( BitStream stream, NetworkMessageInfo info )
    {
        // we are currently writing information to the network
        if( stream.isWriting )
        {
            // send the object's position
            Vector3 position = transform.position;
            stream.Serialize( ref position );
        }
        // we are currently reading information from the network
        else
        {
            // read the first Vector3 and store it in 'position'
            Vector3 position = Vector3.zero;
            stream.Serialize( ref position );

            // set the object's position to the value we were sent
            transform.position = position;
        }
    }
}
```

Most of the work is done with BitStream. This is used to check if NetworkView is currently writing the state, or if it is reading the state from the network. Depending on whether it is reading or writing, stream.Serialize behaves differently. If NetworkView is writing, the value will be sent over the network. However, if NetworkView is reading, the value will be read from the network and saved in the referenced variable (thus the ref keyword, which passes Vector3 by reference rather than by value).

Using RPCs

RPCs are useful for single, self-contained messages that need to be sent, such as a character firing a gun, or a player saying something in chat. In Unity, RPCs are methods marked with the [RPC] attribute. These can be called by name via networkView.RPC( "methodName", ... ). For example, the following script prints to the console on all machines when the space key is pressed.
```csharp
using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkCallRPC : MonoBehaviour
{
    void Update()
    {
        // important - make sure not to run if this networkView is not ours
        if( !networkView.isMine )
            return;

        // if space key is pressed, call RPC for everybody
        if( Input.GetKeyDown( KeyCode.Space ) )
            networkView.RPC( "testRPC", RPCMode.All );
    }

    [RPC]
    void testRPC( NetworkMessageInfo info )
    {
        // log the IP address of the machine that called this RPC
        Debug.Log( "Test RPC called from " + info.sender.ipAddress );
    }
}
```

Also note the use of NetworkView.isMine to determine ownership of an object. All scripts will run 100 percent of the time regardless of whether your machine owns the object or not, so you have to be careful to avoid letting some logic run on remote machines; for example, player input code should only run on the machine that owns the object.

RPCs can either be sent to a number of players at once, or to a specific player. You can either pass an RPCMode to specify which group of players should receive the message, or a specific NetworkPlayer to send the message to. You can also specify any number of parameters to be passed to the RPC method. RPCMode includes the following entries:

- All (the RPC is called for everyone)
- AllBuffered (the RPC is called for everyone, and then buffered for when new players connect, until the object is destroyed)
- Others (the RPC is called for everyone except the sender)
- OthersBuffered (the RPC is called for everyone except the sender, and then buffered for when new players connect, until the object is destroyed)
- Server (the RPC is sent to the host machine)

Initializing a server

The first thing you will want to set up is hosting games and joining games. To initialize a server on the local machine, call Network.InitializeServer. This method takes three parameters: the number of allowed incoming connections, the port to listen on, and whether to use NAT punch-through.
The following script initializes a server on port 25005 which allows 8 clients to connect:

```csharp
using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkInitializeServer : MonoBehaviour
{
    void OnGUI()
    {
        if( GUILayout.Button( "Launch Server" ) )
        {
            LaunchServer();
        }
    }

    // launch the server
    void LaunchServer()
    {
        // Start a server that enables NAT punch-through,
        // listens on port 25005,
        // and allows 8 clients to connect
        Network.InitializeServer( 8, 25005, true );
    }

    // called when the server has been initialized
    void OnServerInitialized()
    {
        Debug.Log( "Server initialized" );
    }
}
```

You can also optionally enable an incoming password (useful for private games) by setting Network.incomingPassword to a password string of the player's choice, and initialize a general-purpose security layer by calling Network.InitializeSecurity(). Both of these should be set up before actually initializing the server.

Connecting to a server

To connect to a server you know the IP address of, you can call Network.Connect.
The following script allows the player to enter an IP, a port, and an optional password, and attempts to connect to the server:

```csharp
using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkingConnectToServer : MonoBehaviour
{
    private string ip = "";
    private string port = "";
    private string password = "";

    void OnGUI()
    {
        GUILayout.Label( "IP Address" );
        ip = GUILayout.TextField( ip, GUILayout.Width( 200f ) );

        GUILayout.Label( "Port" );
        port = GUILayout.TextField( port, GUILayout.Width( 50f ) );

        GUILayout.Label( "Password (optional)" );
        password = GUILayout.PasswordField( password, '*', GUILayout.Width( 200f ) );

        if( GUILayout.Button( "Connect" ) )
        {
            int portNum = 25005;

            // failed to parse port number - a more ideal solution is to
            // limit input to numbers only; a number of examples can be
            // found on the Unity forums
            if( !int.TryParse( port, out portNum ) )
            {
                Debug.LogWarning( "Given port is not a number" );
            }
            // try to initiate a direct connection to the server
            else
            {
                Network.Connect( ip, portNum, password );
            }
        }
    }

    void OnConnectedToServer()
    {
        Debug.Log( "Connected to server!" );
    }

    void OnFailedToConnect( NetworkConnectionError error )
    {
        Debug.Log( "Failed to connect to server: " + error.ToString() );
    }
}
```

Connecting to the Master Server

While we could just allow the player to enter IP addresses to connect to servers (and many games do, such as Minecraft), it's much more convenient to allow the player to browse a list of public servers. This is what the Master Server is for. Now that you can start up a server and connect to it, let's take a look at how to connect to the Master Server you downloaded earlier.

First, make sure both the Master Server and Facilitator are running. I will assume you are running them on your local machine (IP is 127.0.0.1), but of course you can run these on a different computer and use that machine's IP address.
Keep in mind, if you want the Master Server publicly accessible, it must be installed on a machine with a public IP address (it cannot be in a private network). Let's configure Unity to use our Master Server rather than the Unity-hosted test server. The following script configures the Master Server and Facilitator to connect to a given IP (by default 127.0.0.1):

```csharp
using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkingConnectToMasterServer : MonoBehaviour
{
    // Assuming Master Server and Facilitator are on the same machine
    public string MasterServerIP = "127.0.0.1";

    void Awake()
    {
        // set the IP and port of the Master Server to connect to
        MasterServer.ipAddress = MasterServerIP;
        MasterServer.port = 23466;

        // set the IP and port of the Facilitator to connect to
        Network.natFacilitatorIP = MasterServerIP;
        Network.natFacilitatorPort = 50005;
    }
}
```

Registering a server with the Master Server

Now that you've configured the Master Server, it's time to register a server with it. This is easy to do. Immediately after making a call to Network.InitializeServer, make another call to MasterServer.RegisterHost. This call connects to the Master Server and tells it to display our server in the public game list.

The RegisterHost function takes three parameters, all strings: gameTypeName, gameName, and comment. The game type name is used to separate different game listings from each other. For example, if two games use the same Master Server, they would both supply different game type names in order to avoid getting listings for the other game. The game name is the name of the host server, for example "John's server". The comment is a general-purpose data string; essentially anything can be stored here. For example, you could store data about the server (such as map rotation, available modes, and so on) and display these to the user while they browse the lobby.
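Putting those three parameters together, a registration call might look like the following sketch. The game type name, server name, and comment string are placeholder values; only the call shape matters.

```csharp
using UnityEngine;
using System.Collections;

// Illustrative sketch: register with the Master Server right after starting up.
public class ExampleRegisterHost : MonoBehaviour
{
    void OnServerInitialized()
    {
        // gameTypeName, gameName, comment; the game type name must match
        // whatever clients later pass to MasterServer.RequestHostList
        MasterServer.RegisterHost( "GameTypeNameHere", "John's server", "map=Arena;mode=Classic" );
    }

    // the Master Server reports results back through this event callback
    void OnMasterServerEvent( MasterServerEvent msevent )
    {
        if( msevent == MasterServerEvent.RegistrationSucceeded )
            Debug.Log( "Registered with Master Server" );
    }
}
```

Packing key-value data into the comment string, as sketched here, is one way to surface map and mode information in the lobby.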
Because RegisterHost is a separate call from InitializeServer, you can simply omit the call to RegisterHost to implement private or LAN-style servers.

Browsing available servers

To browse the available servers, call MasterServer.RequestHostList. This takes one single parameter: the game type name (this is the same game type name you passed to RegisterHost). It does not return anything; instead, the result will be asynchronously downloaded, and the last known list of servers can be accessed via MasterServer.PollHostList. Additionally, to ensure you aren't using old data, you can call MasterServer.ClearHostList. For example, if the user hits the Refresh button in the lobby you might clear the host list and then request a new list from the Master Server. The following script shows a lobby for users to browse available servers and connect to them:

```csharp
using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkingBrowseServers : MonoBehaviour
{
    // are we currently trying to download a host list?
    private bool loading = false;

    // the current position within the scrollview
    private Vector2 scrollPos = Vector2.zero;

    void Start()
    {
        // immediately request a list of hosts
        refreshHostList();
    }

    void OnGUI()
    {
        if( GUILayout.Button( "Refresh" ) )
        {
            refreshHostList();
        }

        if( loading )
        {
            GUILayout.Label( "Loading..." );
        }
        else
        {
            scrollPos = GUILayout.BeginScrollView( scrollPos, GUILayout.Width( 200f ), GUILayout.Height( 200f ) );

            HostData[] hosts = MasterServer.PollHostList();
            for( int i = 0; i < hosts.Length; i++ )
            {
                if( GUILayout.Button( hosts[i].gameName, GUILayout.ExpandWidth( true ) ) )
                {
                    Network.Connect( hosts[i] );
                }
            }

            if( hosts.Length == 0 )
            {
                GUILayout.Label( "No servers running" );
            }

            GUILayout.EndScrollView();
        }
    }

    void refreshHostList()
    {
        // let the user know we are awaiting results from the master server
        loading = true;
        MasterServer.ClearHostList();
        MasterServer.RequestHostList( "GameTypeNameHere" );
    }

    // this is called when the Master Server reports an event to the client -
    // for example, server registered successfully, host list received, etc.
    void OnMasterServerEvent( MasterServerEvent msevent )
    {
        if( msevent == MasterServerEvent.HostListReceived )
        {
            // received the host list, no longer awaiting results
            loading = false;
        }
    }
}
```

The preceding code will list available servers registered to the Master Server. Clicking one of the buttons will call the Network.Connect function and connect to the corresponding server, and clicking on Refresh will display a Loading... message while results are fetched from the Master Server. There are a number of improvements and other tweaks that can be made to this code, left as an exercise for the reader:

- Refresh the host list every few seconds. This should be done transparently, without displaying a "Loading" message.
- Allow the user to add servers to a "favorites" list (possibly saved as CSV to PlayerPrefs), if your game allows players to run dedicated servers.
- If the user attempts to connect to a password-protected game (HostData.passwordProtected is true), display a password entry field.
- Save game information such as map, mode, and so on in the Comments field when registering a server, and allow the user to filter server results.
Setting up a dedicated server model

Many games allow players to host their own dedicated servers, as separate applications from the game client. Some games even allow players to modify the behavior of the server through scripting languages, allowing player-run servers to employ novel behaviors not originally designed into the game. Let's see how we can set up a similar system in Unity. I will not be covering modding, although readers can look up Lua scripting in Unity; there are a number of resources on the topic.

Servers in Unity

Most games have a specialized "server" build, which contains much the same code as the client, designed to run as a dedicated server. This allows the server to process the same logic as the client. Unity, however, does not directly support this concept out of the box. Unity Pro does allow builds to be run in "headless mode", which runs the game without initializing any graphics resources, but the server runs the exact same code as the client. The game must be designed to operate in both server and client mode. To do this, we'll take advantage of a compiler feature known as "conditional compilation". This allows us to wrap code in special tags which lets us strip out entire sections of code when compiling. This way, our server-only code will only be included in server builds, and our client-only code will only be included in client builds.

Compiler directives

The first thing we will do is figure out how the application knows whether it is a client or a server. We will use a compiler directive to do this. If you are using Unity 4, you can go to Edit | Project Settings | Player and under Other Settings is a section that allows you to define these. However, for any version prior to Unity 4, you'll have to define these yourself. To do this, create a new text file in the Assets folder and name it smcs.rsp. Open Notepad and type:

-define:SERVER

This creates a global symbol define for your C# scripts.
You would use the symbol like this:

```csharp
#if SERVER
// code in here will not be compiled if SERVER isn't defined
#endif
```

You might consider writing an editor script which replaces the contents of this file (when compiling for the client, it would replace SERVER with CLIENT, and vice versa). It is important to note that changes to this file will not automatically trigger a recompile; after changing the file you should re-save one of your scripts. Your editor script might do this automatically, for example it could call AssetDatabase.Refresh( ImportAssetOptions.ForceUpdate ).

Now that we can detect whether the application was built as a server or a client, we'll need some way for the server to act as autonomously as possible. The server should have a configuration file which allows the user to set, for example, network settings before the server runs. This book will not cover how to load the configuration file (XML or JSON are recommended), but once these settings are loaded the server should immediately initialize and register itself with the Master Server using the data in the configuration file (for example, server name, maximum connections, listen port, password, and so on).

Setting up a server console without Pro

Usually, a game server is a console application. This is nearly possible in Unity if you have purchased a Pro license, by appending the -batchmode argument to the executable (in practice, Unity does not create a console window; instead the game simply runs in the background). If you do have Pro, feel free to skip this section. However, if you own a free license, you'll need to get a bit creative. We want the server to use as few resources as possible. We can create a script that turns off rendering of the scene when running in server mode. This won't completely disable the rendering system (as running in command line would), but it does significantly reduce the GPU load of the server.
```csharp
using UnityEngine;
using System.Collections;

public class DisableServerCamera : MonoBehaviour
{
#if SERVER
    void Update()
    {
        // culling mask is a bitmask - setting all bits to zero means render nothing
        camera.cullingMask = 0;
    }
#endif
}
```

This script can be attached to a camera, and will cause that camera to not render anything when running on the server. Next we're going to set up a console-type display for our server. This "console" will hook into the built-in Debug class and display a scrolling list of messages. We'll do this via Application.RegisterLogCallback.

```csharp
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

// contains data about the logged message
struct LogMessage
{
    public string message;
    public LogType type;
}

public class CustomLog : MonoBehaviour
{
    // how many past log messages to store
    public int MaxHistory = 50;

    // a list of stored log messages
    private List<LogMessage> messages = new List<LogMessage>();

    // the position within the scroll view
    private Vector2 scrollPos = Vector2.zero;

    void OnEnable()
    {
        // register a custom log handler
        Application.RegisterLogCallback( HandleLog );
    }

    void OnDisable()
    {
        // unregister the log handler
        Application.RegisterLogCallback( null );
    }

    void OnGUI()
    {
        scrollPos = GUILayout.BeginScrollView( scrollPos, GUILayout.ExpandWidth( true ), GUILayout.ExpandHeight( true ) );

        // draw each debug log - switch colors based on log type
        for( int i = 0; i < messages.Count; i++ )
        {
            Color color = Color.white;
            if( messages[i].type == LogType.Warning )
            {
                color = Color.yellow;
            }
            else if( messages[i].type != LogType.Log )
            {
                color = Color.red;
            }
            GUI.color = color;
            GUILayout.Label( messages[i].message );
        }

        GUILayout.EndScrollView();
    }

    void HandleLog( string message, string stackTrace, LogType type )
    {
        // add the message, remove entries if there are too many
        LogMessage msg = new LogMessage();
        msg.message = message;
        msg.type = type;
        messages.Add( msg );
        if( messages.Count >= MaxHistory )
        {
            messages.RemoveAt( 0 );
        }

        // scroll to the newest message by setting to a huge amount
        // will automatically be clamped
        scrollPos.y = 1000f;
    }
}
```

Now the user can see the debug information being printed as the server runs, which is very useful indeed. You should strive for as much code reuse as possible. In fact, if your game allows players to host a game from inside the client, most of the same code will already work, with a few minor differences:

- As previously mentioned, the server starts up automatically with a configuration loaded from the user-editable files (unlike the client).
- The server does not spawn any player objects of its own, unlike the client.
- The server does not have any UIs or menus to display to the user beyond the log dump. Beyond starting up the server and shutting it down, there is zero interaction with the server application.

There are a few tricks to loading networked levels in the Unity game engine. If you just use Application.LoadLevel, you'll encounter a number of issues; specifically, you may find that a client connecting to the game won't see any objects that were instantiated via Network.Instantiate. The reason for this is that the level loading process doesn't happen instantly; it actually takes two frames to complete. This occurs after the list of networked objects has been received, so the load process will delete them.

Note that Application.LoadLevel is purely client side. Unity imposes no limitations on which level a client or server loads in a networked game. In fact, it's entirely possible to have different levels loaded within a networked session, and this is what Network.SetLevelPrefix is for. Each of these levels is assigned some kind of "ID" that uniquely identifies the level. Before loading the level you would call Network.SetLevelPrefix. This essentially separates players into channels, so all players with level prefix 0 are separate from players with level prefix 1, for example.
Note that if your game needs all clients to load the same level, you'll have to ensure this yourself. If a client has a different level loaded than the host, without setting the level prefix to something different than the host's, the client might see some odd situations, such as players floating or sunk into the ground (a player could be standing on a bridge in one level, and a different level at the same position might have a building; so the player would appear to be clipped into the building).

The correct way to load levels in a networked game is to first disable the network queue, load the level, wait two frames, and then re-enable the network queue. This means any incoming messages will not be processed, and will instead be buffered until the new level has completely finished loading. Let's write a simple network level loader that will handle all of this for us. It's designed as a singleton, so we don't need one present in the scene (one will automatically be created):

```csharp
using UnityEngine;
using System.Collections;

public class NetworkLevelLoader : MonoBehaviour
{
    // implements singleton-style behavior
    public static NetworkLevelLoader Instance
    {
        get
        {
            // no instance yet? Create a new one
            if( instance == null )
            {
                GameObject go = new GameObject( "_networkLevelLoader" );

                // hide it to avoid cluttering up the hierarchy
                go.hideFlags = HideFlags.HideInHierarchy;

                instance = go.AddComponent<NetworkLevelLoader>();

                // don't destroy it when a new scene loads
                GameObject.DontDestroyOnLoad( go );
            }
            return instance;
        }
    }
    private static NetworkLevelLoader instance;

    public void LoadLevel( string levelName, int prefix = 0 )
    {
        StopAllCoroutines();
        StartCoroutine( doLoadLevel( levelName, prefix ) );
    }

    // do the work of pausing the network queue, loading the level,
    // waiting two frames, and then unpausing
    IEnumerator doLoadLevel( string name, int prefix )
    {
        Network.SetSendingEnabled( 0, false );
        Network.isMessageQueueRunning = false;
        Network.SetLevelPrefix( prefix );

        Application.LoadLevel( name );
        yield return null;
        yield return null;

        Network.isMessageQueueRunning = true;
        Network.SetSendingEnabled( 0, true );
    }
}
```

You can now replace any calls to Application.LoadLevel with NetworkLevelLoader.Instance.LoadLevel. For example, the server might call an RPC which loads the level via the helper class we just wrote, as a buffered RPC, so that all clients connecting will automatically load the level.

Creating a multiplayer Pong game

Now that we've covered the basics of using Unity Networking, we're going to apply them to creating a multiplayer Pong clone. The game will play pretty much as standard Pong. Players can choose their name, and then view a list of open servers (full rooms will not be shown). Players can also host their own game. Once in a game, players bounce a ball back and forth until it hits the opponent's side. Players get one point for this, and the ball will reset and continue bouncing. When a player hits 10 points, the winner is called, the scores are reset, and the game continues. While in a match with no other players, the server will inform the user to wait.
If a player leaves, the match is reset (if the host leaves, the other player is automatically disconnected).

Preparing the Field

First, create a cube (by navigating to GameObject | Create Other | Cube) and scale it to 1 x 1 x 4. Name it Paddle and set the Tag to Player. Check the Is Trigger box on the collider. Our ball will detect when it hits the trigger zone on the player paddle, and reverse direction. We use triggers because we don't necessarily want to simulate the ball realistically with the Unity physics engine (we would get far less control over the ball's physics, and it may not behave exactly as we would like).

We will also line our playing field with trigger boxes. For these you can duplicate the paddle four times and form a large rectangle outlining the playing field. The actual size doesn't matter so much, as long as the ball has room to move around. We will add two more tags for these boundaries: Boundary and Goal. The two boxes on the top and bottom of the field are tagged as Boundary, the two boxes on the left and right are tagged as Goal. When the ball hits a trigger tagged Boundary, it reverses its velocity along the z axis. When the ball hits a trigger tagged Player, it reverses its velocity along the x axis. And when a ball hits a trigger tagged Goal, the corresponding player gets a point and the ball resets.

Let's finish up the playing field before writing our code:

- Firstly, set the camera to Orthographic and position it at (0, 10, 0). Rotate it 90 degrees along the x axis until it points straight down, and change its Orthographic Size to a value large enough to frame the playing field (in my case, I set it to 15). Set the camera's background color to black.
- Create a directional light that points straight down. This will illuminate the paddles and ball to make them pure white.
- Finally, duplicate the player paddle and move it to the other half of the field.

The Ball script

Now we're going to create the Ball script.
We'll add the multiplayer code later; for now this is offline only:

// move the ball in the current direction
Vector2 moveDir = currentDir * currentSpeed * Time.deltaTime;
transform.Translate( new Vector3( moveDir.x, 0f, moveDir.y ) );
}

void OnTriggerEnter( Collider other )
{
    if( other.tag == "Boundary" )
    {
        // vertical boundary, reverse Y direction
        currentDir.y *= -1;
    }
    else if( other.tag == "Player" )
    {
        // player paddle, reverse X direction
        currentDir.x *= -1;
    }
    else if( other.tag == "Goal" )
    {
        // ...
    }
}

To create the ball, as before we'll create a cube. It will have the default scale of 1 x 1 x 1. Set the position to the origin (0, 0, 0). Add a Rigidbody component to the cube, untick the Use Gravity checkbox, and tick the Is Kinematic checkbox. The Rigidbody component is used to let our ball receive the OnTriggerEnter events. Is Kinematic is enabled because we're controlling the ball ourselves, rather than using Unity's physics engine. Add the new Ball component that we just created and test the game. It should look something like this:

You should see the ball bouncing around the field. If it hits either side, it will move back to the center of the field, pause for 3 seconds, and then begin moving again. This should happen fairly quickly, because the paddles aren't usable yet (the ball will often bounce right past them).

The Paddle script

Let's add player control to the mix. Note that at the moment the player paddles will both move in tandem, with the same controls.
This is OK; later we'll disable the player input based on whether or not the network view belongs to the local client (this is what the AcceptsInput field is for):

void Update()
{
    // does not accept input, abort
    if( !AcceptsInput )
        return;

    // get user input
    float input = Input.GetAxis( "Vertical" );

    // move paddle
    Vector3 pos = transform.position;
    pos.z += input * MoveSpeed * Time.deltaTime;

    // clamp paddle position
    pos.z = Mathf.Clamp( pos.z, -MoveRange, MoveRange );

    // set position
    transform.position = pos;
}
}

You can now move the paddles up and down, and bounce the ball back and forth. The ball will slowly pick up speed as it bounces, until it hits either of the goals. When that happens, the round resets.

Keeping score

What we're going to do now is create a scorekeeper. The scorekeeper will keep track of both players' scores, and will later keep track of other things, such as whether we're waiting for another player to join:

using UnityEngine;
using System.Collections;

public class Scorekeeper : MonoBehaviour
{
    // the maximum score a player can reach
    public int ScoreLimit = 10;

    // ...
}

Now that our scorekeeper can keep score for each player, let's make the goals and add points with a Goal script. It's a very simple script, which reacts to the GetPoint message sent from the ball upon collision to give the other player a point:

using UnityEngine;
using System.Collections;

public class Goal : MonoBehaviour
{
    // the player who gets a point for this goal, 1 or 2
    public int Player = 1;

    // the Scorekeeper
    public Scorekeeper scorekeeper;

    public void GetPoint()
    {
        // when the ball collides with this goal, give the player a point
        scorekeeper.AddScore( Player );
    }
}

Attach this script to both goals. For player 1's goal, set Player to 2 (player 2 gets a point when the ball lands in player 1's goal); for player 2's goal, set Player to 1 (player 1 gets a point when the ball lands in player 2's goal). The game is almost completely functional now (aside from multiplayer).
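The Scorekeeper listing above is truncated in this text. As a sketch only (not the author's original code; the p1Score/p2Score fields and the reset behavior are assumptions based on the surrounding description), AddScore and the score-limit check might look like this:

```csharp
using UnityEngine;

public class Scorekeeper : MonoBehaviour
{
    // the maximum score a player can reach
    public int ScoreLimit = 10;

    // assumed field names for each player's current score
    private int p1Score = 0;
    private int p2Score = 0;

    public void AddScore( int player )
    {
        // give the appropriate player a point
        if( player == 1 )
            p1Score++;
        else
            p2Score++;

        // if either player reached the limit, reset both scores
        // (the article says the scores reset and the game continues)
        if( p1Score >= ScoreLimit || p2Score >= ScoreLimit )
        {
            p1Score = 0;
            p2Score = 0;
        }
    }
}
```

The Goal script below calls AddScore with the winning player's number, so this is the single place where the win condition is checked.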
One problem is that we can't tell that points are being given until the game ends, so let's add a score display.

Displaying the score to the player

Create two 3D Text objects as children of the scorekeeper. Name them P1Score and P2Score, and position them on each side of the field. Let's make the scorekeeper display the player scores:

using UnityEngine;
using System.Collections;

public class Scorekeeper : MonoBehaviour
{
    // the maximum score a player can reach
    public int ScoreLimit = 10;

    // ...
}

The score is now displayed properly when a player gets a point. Be sure to give it a test run: the ball should bounce around the field, and you should be able to deflect the ball with the paddle. If the ball hits player 1's goal, player 2 should get 1 point, and vice versa. If one player gets 10 points, both scores should reset to zero, the ball should move back to the center of the screen, and the game should restart. With the most important gameplay elements complete, we can start working on multiplayer networking.

Networking the game

For testing purposes, let's launch a network game as soon as the level is loaded:

using UnityEngine;
using System.Collections;

public class RequireNetwork : MonoBehaviour
{
    void Awake()
    {
        if( Network.peerType == NetworkPeerType.Disconnected )
            Network.InitializeServer( 1, 25005, true );
    }
}

If we start this level without hosting a server first, it will automatically do so for us, ensuring that the networked code still works. Now we can start converting our code to work in multiplayer.
Let's start by networking the paddle code:

// the position read from the network, used for interpolation
private Vector3 readNetworkPos;

void Start()
{
    // if this is our paddle, it accepts input
    // otherwise, if it is someone else's paddle, it does not
    AcceptsInput = networkView.isMine;
}

void Update()
{
    // does not accept input, interpolate towards the network position
    if( !AcceptsInput )
    {
        transform.position = Vector3.Lerp( transform.position, readNetworkPos, 10f * Time.deltaTime );
        // don't use player input
        return;
    }

    // get user input
    float input = Input.GetAxis( "Vertical" );

    // move paddle
    Vector3 pos = transform.position;
    pos.z += input * MoveSpeed * Time.deltaTime;

    // clamp paddle position
    pos.z = Mathf.Clamp( pos.z, -MoveRange, MoveRange );

    // set position
    transform.position = pos;
}

void OnSerializeNetworkView( BitStream stream )
{
    // writing information, push current paddle position
    if( stream.isWriting )
    {
        Vector3 pos = transform.position;
        stream.Serialize( ref pos );
    }
    // reading information, read paddle position
    else
    {
        Vector3 pos = Vector3.zero;
        stream.Serialize( ref pos );
        readNetworkPos = pos;
    }
}
}

The paddle will detect whether it is owned by the local player or not. If not, it will not accept player input; instead it will interpolate its position towards the last position value read over the network. By default, network views serialize the attached transform. This is OK for testing, but should not be used for production. Without any interpolation, the movement will appear very laggy and jerky: positions are sent a fixed number of times per second (15 by default in Unity Networking) in order to save on bandwidth, so snapping to the received position 15 times per second will look jerky. In order to solve this, rather than instantly snapping to the new position, we smoothly interpolate towards it.
In this case, we use the frame delta multiplied by a constant (larger is faster, smaller is slower), which produces an easing motion; the object starts out approaching the target value quickly, slowing down as it gets closer. When serializing, the paddle either sends the current transform position or reads the position and stores it, depending on whether the stream is for writing or for reading.

Now, add a Network View to one of your paddles, drag the Paddle component attached to the paddle into the Observed slot, and make it a prefab by dragging it into your Project pane. Next, delete the paddles in the scene, and create two empty game objects where the paddles used to be positioned. These will be the starting points for each paddle when spawned.

Spawning paddles

Next, let's make the scorekeeper spawn these paddles. The scorekeeper, upon a player connecting, will send an RPC to them to spawn a paddle:

// ...

void OnPlayerConnected( NetworkPlayer player )
{
    // when a player joins, tell them to spawn
    networkView.RPC( "net_DoSpawn", player, SpawnP2.position );
}

[RPC]
void net_DoSpawn( Vector3 position )
{
    // spawn the player paddle
    Network.Instantiate( paddlePrefab, position, Quaternion.identity, 0 );
}
}

At the moment, when you start the game, one paddle spawns for player 1, but player 2 is missing (there's nobody else playing). However, the ball eventually flies off toward player 2's side, and gives player 1 a free point.

The networked ball

Let's keep the ball frozen in place when there's nobody to play against, or if we aren't the server.
We're also going to add networked movement to our ball:

// don't move the ball if there's nobody to play with
if( Network.connections.Length == 0 )
    return;

// move the ball in the current direction
Vector2 moveDir = currentDir * currentSpeed * Time.deltaTime;
transform.Translate( new Vector3( moveDir.x, 0f, moveDir.y ) );
}

void OnTriggerEnter( Collider other )
{
    // bounce off the top and bottom walls
    if( other.tag == "Boundary" )
    {
        // vertical boundary, reverse Y direction
        currentDir.y *= -1;
    }
    // bounce off the player paddle
    else if( other.tag == "Player" )
    {
        // player paddle, reverse X direction
        currentDir.x *= -1;
    }
    // if we hit a goal, and we are the server, give the appropriate player a point
    else if( other.tag == "Goal" && Network.isServer )
    {
        // ...
    }
}

void OnSerializeNetworkView( BitStream stream )
{
    // write position, direction, and speed to the network
    if( stream.isWriting )
    {
        Vector3 pos = transform.position;
        Vector3 dir = currentDir;
        float speed = currentSpeed;
        stream.Serialize( ref pos );
        stream.Serialize( ref dir );
        stream.Serialize( ref speed );
    }
    // read position, direction, and speed from the network
    else
    {
        Vector3 pos = Vector3.zero;
        Vector3 dir = Vector3.zero;
        float speed = 0f;
        stream.Serialize( ref pos );
        stream.Serialize( ref dir );
        stream.Serialize( ref speed );
        transform.position = pos;
        currentDir = dir;
        currentSpeed = speed;
    }
}
}

The ball will stay put if there's nobody to play against, and if the player we're playing against leaves, the ball will reset to the middle of the field. The ball will also work correctly on multiple machines at once (it is simulated on the server, and its position and velocity are relayed to clients). Add a Network View to the ball and have it observe the Ball component.

Networked scorekeeping

There is one final piece of the puzzle: keeping score. We're going to convert our AddScore function to use an RPC, and if a player leaves we will also reset the scores:

// ...

// nobody has joined yet, display "Waiting..."
// for player 2
Player2ScoreDisplay.text = "Waiting...";
}
}

void OnPlayerConnected( NetworkPlayer player )
{
    // when a player joins, tell them to spawn
    networkView.RPC( "net_DoSpawn", player, SpawnP2.position );

    // change player 2's score display from "Waiting..." to "0"
    Player2ScoreDisplay.text = "0";
}

void OnPlayerDisconnected( NetworkPlayer player )
{
    // player 2 left, reset scores
    p1Score = 0;
    p2Score = 0;

    // display each player's score
    // display "Waiting..." for player 2
    Player1ScoreDisplay.text = p1Score.ToString();
    Player2ScoreDisplay.text = "Waiting...";
}

void OnDisconnectedFromServer( NetworkDisconnection cause )
{
    // go back to the main menu
    Application.LoadLevel( "Menu" );
}

[RPC]
void net_DoSpawn( Vector3 position )
{
    // spawn the player paddle
    Network.Instantiate( paddlePrefab, position, Quaternion.identity, 0 );
}

// call an RPC to give the player a point
public void AddScore( int player )
{
    networkView.RPC( "net_AddScore", RPCMode.All, player );
}

// give the appropriate player a point
[RPC]
void net_AddScore( int player )
{
    // ...
}
}

Our game is fully networked at this point. The only problem is that we do not yet have a way to connect to the game. Let's write a simple direct connect dialog which allows players to enter an IP address to join.

The Connect screen

The following script shows the player IP and Port entry fields, and the Connect and Host buttons. The player can directly connect to an IP and port, or start a server on the given port. By using direct connect we don't need to rely on a master server, as players connect to games directly via IP. If you wanted to, you could easily create a lobby screen instead of using direct connect (allowing players to browse a list of running servers instead of manually typing an IP address).
To keep things simpler, we'll omit the lobby screen in this example:

using UnityEngine;
using System.Collections;

public class ConnectToGame : MonoBehaviour
{
    private string ip = "";
    private int port = 25005;

    void OnGUI()
    {
        // let the user enter an IP address
        GUILayout.Label( "IP Address" );
        ip = GUILayout.TextField( ip, GUILayout.Width( 200f ) );

        // let the user enter a port number
        // port is an integer, so only numbers are allowed
        GUILayout.Label( "Port" );
        string port_str = GUILayout.TextField( port.ToString(), GUILayout.Width( 100f ) );
        int port_num = port;
        if( int.TryParse( port_str, out port_num ) )
            port = port_num;

        // connect to the IP and port
        if( GUILayout.Button( "Connect", GUILayout.Width( 100f ) ) )
        {
            Network.Connect( ip, port );
        }

        // host a server on the given port, only allow 1 incoming connection (one other player)
        if( GUILayout.Button( "Host", GUILayout.Width( 100f ) ) )
        {
            Network.InitializeServer( 1, port, true );
        }
    }

    void OnConnectedToServer()
    {
        Debug.Log( "Connected to server" );

        // this is the NetworkLevelLoader we wrote earlier in the chapter - it pauses the network,
        // loads the level, waits for the level to finish loading, and then unpauses the network
        NetworkLevelLoader.Instance.LoadLevel( "Game" );
    }

    void OnServerInitialized()
    {
        Debug.Log( "Server initialized" );
        NetworkLevelLoader.Instance.LoadLevel( "Game" );
    }
}

With this, we now have a complete, fully functional multiplayer Pong game. Players can host games, as well as join them if they know the IP. When in a game as the host, the game will wait for another player to show up before starting. If the other player leaves, the game will reset and wait again. As a client, if the host leaves, the game goes back to the main menu.
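As a sketch of the lobby-screen alternative mentioned above, Unity's legacy MasterServer API can list running hosts instead of requiring players to type an IP. The "Pong" game type string and the class name are assumptions, not from the original article:

```csharp
using UnityEngine;

public class ServerList : MonoBehaviour
{
    void Start()
    {
        // ask the master server for all hosts registered under our game type
        MasterServer.RequestHostList( "Pong" );
    }

    void OnGUI()
    {
        // show a Join button for every host that still has room
        foreach( HostData host in MasterServer.PollHostList() )
        {
            // skip full rooms, as the article's game description requires
            if( host.connectedPlayers >= host.playerLimit )
                continue;

            if( GUILayout.Button( host.gameName ) )
                Network.Connect( host );
        }
    }
}
```

For this to work, the hosting side would also need to register itself, e.g. by calling MasterServer.RegisterHost( "Pong", someGameName ) right after Network.InitializeServer succeeds, so the game shows up in other players' lists.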
Summary

In this article, we covered:

- The basics of UDP and reliable/unreliable communication
- Setting up a lobby server
- What a Network View is
- How to serialize object state
- How to send reliable RPCs
- Hosting game servers and connecting to them
- Registering servers with the lobby
- The basics of dedicated servers
- How to load levels in a networked game

Resources for Article:

Further resources on this subject:

- Unity Game Development: Interactions (Part 1) [Article]
- Unity Game Development: Welcome to the 3D world [Article]
- Unity Game Development: Interactions (Part 2) [Article]

About the Author: Alan R. Stagner

Alan R. Stagner is an independent developer with a passion for Unity3D game development. Introduced to programming by his father, he sought out different ways to create games in a variety of languages. Most recently, he found the Unity game engine, was instantly hooked, and discovered his love of multiplayer game development. He has also dabbled in database and server programming from time to time, mostly involving PHP and MySQL, with recent forays into ASP.NET.
/* deflate.c -- compress data using the deflation algorithm
 * Copyright (C) 1992-1993 Jean-loup Gailly
 * This is free software; you can redistribute it and/or modify it under the
 * terms of the GNU General Public License, see the file COPYING.
 */

/*
 *  PURPOSE
 *
 *      Identify new text as repetitions of old text within a fixed-
 *      length sliding window trailing behind the new text.
 *
 *  DISCUSSION
 *
 *      The "deflation" process depends on being able to identify portions
 *      of the input text which are identical to earlier input (within a
 *      sliding window trailing behind the input currently being processed).
 *
 *      The most straightforward technique turns out to be the fastest for
 *      most input files: info-zippers for bug reports and testing.
 *
 *  REFERENCES
 *
 *      APPNOTE.TXT documentation file in PKZIP 1.93a distribution.
 *
 *  INTERFACE
 *
 *      void lm_init (int pack_level, ush *flags)
 *          Initialize the "longest match" routines for a new file
 *
 *      ulg deflate (void)
 *          Processes a new input file and return its compressed length. Sets
 *          the compressed length, crc, deflate flags and internal file
 *          attributes.
 */

#include <ctype.h>
#include "tailor.h"
#include "gzip.h"
#include "lzw.h" /* just for consistency checking */

#ifdef RCSID
static char rcsid[] = "$Id: deflate.c,v 0.15 1993/06/24 10:53:53 jloup Exp $";
#endif

/* ===========================================================================
 * Configuration parameters
 */

/* Compile with MEDIUM_MEM to reduce the memory requirements or
 * with SMALL_MEM to use as little memory as possible. Use BIG_MEM if the
 * entire input file can be held in memory (not possible on 16 bit systems).
 * Warning: defining these symbols affects HASH_BITS (see below) and thus
 * affects the compression ratio. The compressed output
 * is still correct, and might even be smaller in some cases.
*/ #ifdef SMALL_MEM # define HASH_BITS 13 /* Number of bits used to hash strings */ #endif #ifdef MEDIUM_MEM # define HASH_BITS 14 #endif #ifndef HASH_BITS # define HASH_BITS 15 /* For portability to 16 bit machines, do not use values above 15. */ #endif /* To save space (see unlzw.c), we overlay prev+head with tab_prefix and * window with tab_suffix. Check that we can do this: */ #if (WSIZE<<1) > (1< BITS-1 error: cannot overlay head with tab_prefix1 #endif #define HASH_SIZE (unsigned)(1< = HASH_BITS */ unsigned int near prev_length; /* Length of the best match at previous step. Matches not greater than this * are discarded. This is used in the lazy match evaluation. */ unsigned near strstart; /* start of string to insert */ unsigned near match_start; /* start of matching string */ local int eofile; /* flag set at end of input file */ local unsigned lookahead; /* number of valid bytes ahead in window */ unsigned near max_chain_length; /* To speed up deflation, hash chains are never searched beyond this length. * A higher limit improves compression ratio but degrades the speed. */ local unsigned int. */ local int compr_level; /* compression level (1..9) */ unsigned near good_match; /* Use a faster search when the previous match is longer than this */ /* Values for max_lazy_match, good_match and max_chain_length, depending on * the desired pack level (0..9). The values given below have been tuned to * exclude worst case performance for pathological files. Better values may be * found for specific files. 
*/ typedef struct config { ush good_length; /* reduce lazy search above this match length */ ush max_lazy; /* do not perform lazy search above this match length */ ush nice_length; /* quit search above this match length */ ush max_chain; } config; #ifdef FULL_SEARCH # define nice_match MAX_MATCH #else int near nice_match; /* Stop searching when current match exceeds this */ #endif local config configuration_table[10] = { /* good lazy nice chain */ /* 0 */ {0, 0, 0, 0}, /* store only */ /* 1 */ {4, 4, 8, 4}, /* maximum speed, no lazy matches */ /* 2 */ {4, 5, 16, 8}, /* 3 */ {4, 6, 32, 32}, /* 4 */ {4, 4, 16, 16}, /* lazy matches */ /* 5 */ {8, 16, 32, 32}, /* 6 */ {8, 16, 128, 128}, /* 7 */ {8, 32, 128, 256}, /* 8 */ {32, 128, 258, 1024}, /* 9 */ {32, 258, 258, 4096}}; /* maximum compression */ /* Note: the deflate() code requires max_lazy >= MIN_MATCH and max_chain >= 4 * For deflate_fast() (levels <= 3) good is ignored and lazy has a different * meaning. */ #define EQUAL 0 /* result of memcmp for equal strings */ /* =========================================================================== * Prototypes for local functions. */ local void fill_window OF((void)); local ulg deflate_fast OF((void)); int longest_match OF((IPos cur_match)); #ifdef ASMV void match_init OF((void)); /* asm code initialization */ #endif #ifdef DEBUG local void check_match OF((IPos start, IPos match, int length)); #endif /* =========================================================================== * Update a hash value with the given input byte * IN assertion: all calls to to UPDATE_HASH are made with consecutive * input characters, so that a running hash key can be computed from the * previous key instead of complete recalculation each time. */ #define UPDATE_HASH(h,c) (h = (((h)< 9) error("bad pack level"); compr_level = pack_level; /* Initialize the hash table. 
*/ #if defined(MAXSEG_64K) && HASH_BITS == 15 for (j = 0; j < HASH_SIZE; j++) head[j] = NIL; #else memzero((char*)head, HASH_SIZE*sizeof(*head)); #endif /* prev will be initialized on the fly */ /* Set the default configuration parameters: */ max_lazy_match = configuration_table[pack_level].max_lazy; good_match = configuration_table[pack_level].good_length; #ifndef FULL_SEARCH nice_match = configuration_table[pack_level].nice_length; #endif max_chain_length = configuration_table[pack_level].max_chain; if (pack_level == 1) { *flags |= FAST; } else if (pack_level == 9) { *flags |= SLOW; } /* ??? reduce max_chain_length for binary files */ strstart = 0; block_start = 0L; #ifdef ASMV match_init(); /* initialize the asm code */ #endif lookahead = read_buf((char*)window, sizeof(int) <= 2 ? (unsigned)WSIZE : 2*WSIZE); if (lookahead == 0 || lookahead == (unsigned)EOF) { eofile = 1, lookahead = 0; return; } eofile = 0; /* Make sure that we always have enough lookahead. This is important * if input comes from a device such as a tty. */ while (lookahead < MIN_LOOKAHEAD && !eofile) fill_window(); ins_h = 0; for (j=0; j = 1 */ #ifndef ASMV /* For MSDOS, OS/2 and 386 Unix, an optimized version is in match.asm or * match.s. The code is functionally equivalent, so you can use the C version * if desired. */ int longest_match(cur_match) IPos cur_match; /* current match */ { unsigned chain_length = max_chain_length; /* max hash chain length */ register uch *scan = window + strstart; /* current string */ register uch *match; /* matched string */ register int len; /* length of current match */ int best_len = prev_length; /* best match length so far */ IPos limit = strstart > (IPos)MAX_DIST ? strstart - (IPos)MAX_DIST : NIL; /* Stop when cur_match becomes <= limit. To simplify the code, * we prevent matches with the string of window index 0. */ /* The code is optimized for HASH_BITS >= 8 and MAX_MATCH-2 multiple of 16. * It is easy to get rid of this optimization if necessary. 
*/ #if HASH_BITS < 8 || MAX_MATCH != 258 error: Code too clever #endif #ifdef UNALIGNED_OK /* Compare two bytes at a time. Note: this is not always beneficial. * Try with and without -DUNALIGNED_OK to check. */ register uch *strend = window + strstart + MAX_MATCH - 1; register ush scan_start = *(ush*)scan; register ush scan_end = *(ush*)(scan+best_len-1); #else register uch *strend = window + strstart + MAX_MATCH; register uch scan_end1 = scan[best_len-1]; register uch scan_end = scan[best_len]; #endif /* Do not waste too much time if we already have a good match: */ if (prev_length >= good_match) { chain_length >>= 2; } Assert(strstart <= window_size-MIN_LOOKAHEAD, "insufficient lookahead"); do { Assert(cur_match < strstart, "no future"); match =*)(match+best_len-1) != scan_end || *(us. */ scan++, match++; do { } while (*(ush*)(scan+=2) == *(ush*)(match+=2) && *(ush*)(scan+=2) == *(ush*)(match+=2) && *(ush*)(scan+=2) == *(ush*)(match+=2) && *(ush*)(scan+=2) == *(ush*)(match+=2) && scan < strend); /* The funny "do {}" generates better code on most compilers */ /* Here, scan <= window+strstart+257 */ Assert(scan <= window+(unsigned)++; /* We check for insufficient lookahead only every 8th comparison; * the 256th check will be made at strstart+258. 
*/ do { } while (*++scan == *++match && *++scan == *++match && *++scan == *++match && *++scan == *++match && *++scan == *++match && *++scan == *++match && *++scan == *++match && *++scan == *++match && scan < strend); len = MAX_MATCH - (int)(strend - scan); scan = strend - MAX_MATCH; #endif /* UNALIGNED_OK */ if (len > best_len) { match_start = cur_match; best_len = len; if (len >= nice_match) break; #ifdef UNALIGNED_OK scan_end = *(ush*)(scan+best_len-1); #else scan_end1 = scan[best_len-1]; scan_end = scan[best_len]; #endif } } while ((cur_match = prev[cur_match & WMASK]) > limit && --chain_length != 0); return best_len; } #endif /* ASMV */ #ifdef DEBUG /* =========================================================================== * Check that the match at match_start is indeed a match. */ local void check_match(start, match, length) IPos start, match; int length; { /* check that the match is indeed a match */ if (memcmp((char*)window + match, (char*)window + start, length) != EQUAL) { fprintf(stderr, " start %d, match %d, length %d\n", start, match, length); error("invalid match"); } if (verbose > 1) { fprintf(stderr,"\\[%d,%d]", start-match, length); do { putc(window[start++], stderr); } while (--length != 0); } } #else # define check_match(start, match, length) #endif /* =========================================================================== * Fill the window when the lookahead becomes insufficient. * Updates strstart and lookahead, and sets eofile if end of input file. * IN assertion: lookahead < MIN_LOOKAHEAD && strstart + lookahead > 0 * OUT assertions: at least one byte has been read, or eofile is set; * file reads are performed for at least two bytes (required for the * translate_eol option). */ local void fill_window() { register unsigned n, m; unsigned more = (unsigned)(window_size - (ulg)lookahead - (ulg)strstart); /* Amount of free space at the end of the window. 
*/ /* If the window is almost full and there is insufficient lookahead, * move the upper half to the lower one to make room in the upper half. */ if (more == (unsigned)EOF) { /* Very unlikely, but possible on 16 bit machine if strstart == 0 * and lookahead == 1 (input done one byte at time) */ more--; } else if (strstart >= WSIZE+MAX_DIST) { /* By the IN assertion, the window is not empty so we can't confuse * more == 0 with more == 64K on a 16 bit machine. */ Assert(window_size == (ulg)2*WSIZE, "no sliding with BIG_MEM"); memcpy((char*)window, (char*)window+WSIZE, (unsigned)WSIZE); match_start -= WSIZE; strstart -= WSIZE; /* we now have strstart >= MAX_DIST: */ block_start -= (long) WSIZE; for (n = 0; n < HASH_SIZE; n++) { m = head[n]; head[n] = (Pos)(m >= WSIZE ? m-WSIZE : NIL); } for (n = 0; n < WSIZE; n++) { m = prev[n]; prev[n] = (Pos)(m >= WSIZE ? m-WSIZE : NIL); /* If n is not on any hash chain, prev[n] is garbage but * its value will never be used. */ } more += WSIZE; } /* At this point, more >= 2 */ if (!eofile) { n = read_buf((char*)window+strstart+lookahead, more); if (n == 0 || n == (unsigned)EOF) { eofile = 1; } else { lookahead += n; } } } /* =========================================================================== * Flush the current block, with given end-of-file flag. * IN assertion: strstart is set to the end of the current match. */ #define FLUSH_BLOCK(eof) \ flush_block(block_start >= 0L ? (char*)&window[(unsigned)block_start] : \ (char*)NULL, (long)strstart - block_start, (eof)) /* =========================================================================== * Processes a new input file and return its compressed length. This * function does not perform lazy evaluationof matches and inserts * new strings in the dictionary only for unmatched strings or for short * matches. It is used only for the fast compression options. 
*/ local ulg deflate_fast() { IPos hash_head; /* head of the hash chain */ int flush; /* set if current block must be flushed */ unsigned match_length = 0; /* length of best match */ prev_length = MIN_MATCH-1; while (lookahead != 0) { /* Insert the string window[strstart .. strstart+2] in the * dictionary, and set hash_head to the head of the hash chain: */ INSERT_STRING(strstart, hash_head); /* Find the longest match, discarding those <= prev_length. * At this point we have always match_length < MIN_MATCH */ if (hash_head != NIL &&; } if (match_length >= MIN_MATCH) { check_match(strstart, match_start, match_length); flush = ct_tally(strstart-match_start, match_length - MIN_MATCH); lookahead -= match_length; /* Insert new strings in the hash table only if the match length * is not too large. This saves time but degrades compression. */ if (match_length <= max_insert_length) { match_length--; /* string at strstart already in hash table */ do { strstart++; INSERT_STRING(strstart, hash_head); /* strstart never exceeds WSIZE-MAX_MATCH, so there are * always MIN_MATCH bytes ahead. If lookahead < MIN_MATCH * these bytes are garbage, but it does not matter since * the next lookahead bytes will be emitted as literals. */ } while (--match_length != 0); strstart++; } else { strstart += match_length; match_length = 0; ins_h = window[strstart]; UPDATE_HASH(ins_h, window[strstart+1]); #if MIN_MATCH != 3 Call UPDATE_HASH() MIN_MATCH-3 more times #endif } } else { /* No match, output a literal byte */ Tracevv((stderr,"%c",window[strstart])); flush = ct_tally (0, window[strstart]); lookahead--; strstart++; } if (flush) FLUSH_BLOCK(0), block_start = strstart; /* Make sure that we always have enough lookahead, except * at the end of the input file. We need MAX_MATCH bytes * for the next match, plus MIN_MATCH bytes to insert the * string following the next match. 
*/ while (lookahead < MIN_LOOKAHEAD && !eofile) fill_window(); } return FLUSH_BLOCK(1); /* eof */ } /* =========================================================================== * Same as above, but achieves better compression. We use a lazy * evaluation for matches: a match is finally adopted only if there is * no better match at the next window position. */ ulg deflate() { IPos hash_head; /* head of hash chain */ IPos prev_match; /* previous match */ int flush; /* set if current block must be flushed */ int match_available = 0; /* set if previous match exists */ register unsigned match_length = MIN_MATCH-1; /* length of best match */ #ifdef DEBUG extern long isize; /* byte length of input file, for debug only */ #endif if (compr_level <= 3) return deflate_fast(); /* optimized for speed */ /* Process the input block. */ while (lookahead != 0) { /* Insert the string window[strstart .. strstart+2] in the * dictionary, and set hash_head to the head of the hash chain: */ INSERT_STRING(strstart, hash_head); /* Find the longest match, discarding those <= prev_length. */ prev_length = match_length; prev_match = match_start; match_length = MIN_MATCH-1; if (hash_head != NIL && prev_length < max_lazy_match &&; /* Ignore a length 3 match if it is too distant: */ if (match_length == MIN_MATCH && strstart-match_start > TOO_FAR){ /* If prev_match is also MIN_MATCH, match_start is garbage * but we will ignore the current match anyway. */ match_length--; } } /* If there was a match at the previous step and the current * match is not better, output the previous match: */ if (prev_length >= MIN_MATCH && match_length <= prev_length) { check_match(strstart-1, prev_match, prev_length); flush = ct_tally(strstart-1-prev_match, prev_length - MIN_MATCH); /* Insert in hash table all strings up to the end of the match. * strstart-1 and strstart are already inserted. 
*/ lookahead -= prev_length-1; prev_length -= 2; do { strstart++; INSERT_STRING(strstart, hash_head); /* strstart never exceeds WSIZE-MAX_MATCH, so there are * always MIN_MATCH bytes ahead. If lookahead < MIN_MATCH * these bytes are garbage, but it does not matter since the * next lookahead bytes will always be emitted as literals. */ } while (--prev_length != 0); match_available = 0; match_length = MIN_MATCH-1; strstart++; if (flush) FLUSH_BLOCK(0), block_start = strstart; } else if (match_available) { /* If there was no match at the previous position, output a * single literal. If there was a match but the current match * is longer, truncate the previous match to a single literal. */ Tracevv((stderr,"%c",window[strstart-1])); if (ct_tally (0, window[strstart-1])) { FLUSH_BLOCK(0), block_start = strstart; } strstart++; lookahead--; } else { /* There is no previous match to compare with, wait for * the next step to decide. */ match_available = 1; strstart++; lookahead--; } Assert (strstart <= isize && lookahead <= isize, "a bit too far"); /* Make sure that we always have enough lookahead, except * at the end of the input file. We need MAX_MATCH bytes * for the next match, plus MIN_MATCH bytes to insert the * string following the next match. */ while (lookahead < MIN_LOOKAHEAD && !eofile) fill_window(); } if (match_available) ct_tally (0, window[strstart-1]); return FLUSH_BLOCK(1); /* eof */ }
http://read.pudn.com/downloads64/sourcecode/windows/file/224321/deflate.c__.htm
Project moved to GitHub as 7.0beta

Motivated by earlier suggestions, a lot of improvements have been made to the library.

What's new compared to 6.x?

- huge refactor: the one-file code has been split into parts
- completely new CSS processing: @import rules and url() references are now processed
- Composer support
- demo app
- moved to GitHub

Visit ***

NLSClientScript prevents the duplicated linking of javascript files when a view is updated by ajax, e.g. when paging or sorting a gridview, ajax-submitting a form, or any custom ajax update of a part of a view. The extension does not prevent the multiple loading of CSS files; I simply couldn't find a way to manage that cleanly (too long to explain here). A typical case of the issue this extension fights: when you render Jui widgets via CHtml::ajax, the js files used by the widget are loaded as many times as you render such a widget in a view. The unnecessary bandwidth usage is the smaller problem; the bigger problem is that, e.g., loading jquery.js again may reset js objects set by previously loaded ui-related js files. That can cause js errors, and the view may stop working. Using NLSClientScript helps to avoid all of this. From 6.0, it optionally merges/caches and minifies the registered js and css files.
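The mechanism described above can be sketched in a few lines of plain JavaScript. This is an illustrative sketch only, not NLSClientScript's actual code (the real extension also normalizes urls, handles inline snippets, and hooks into jQuery's ajax pipeline); `filterAjaxResponse` and `loadedScripts` are hypothetical names:

```javascript
// Simplified sketch of duplicate-script filtering (NOT NLSClientScript's
// actual implementation). We keep a registry of already-loaded script urls
// and drop <script src="..."> tags from an ajax response if the url is known.
const loadedScripts = new Set(['/assets/abc/jquery.js']); // loaded on first render

function filterAjaxResponse(html) {
  return html.replace(
    /<script[^>]*\bsrc=["']([^"']+)["'][^>]*>\s*<\/script>/gi,
    (tag, src) => {
      if (loadedScripts.has(src)) return ''; // already loaded: strip the tag
      loadedScripts.add(src);                // first occurrence: keep and register
      return tag;
    }
  );
}

const response =
  '<div>widget</div>' +
  '<script src="/assets/abc/jquery.js"></script>' +
  '<script src="/assets/abc/dialog.js"></script>';

// jquery.js is stripped (it would otherwise reset jQuery plugins already
// bound on the page); dialog.js survives and is now registered.
console.log(filterAjaxResponse(response));
```

This is exactly why re-loading jquery.js is the dangerous case: the filter keeps the page's single jQuery instance, and all plugin state attached to it, intact across partial updates.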
History

- 6.7 - fixed toAbsUrl() and optimized the init() method (reported + fix by le_top)
- 6.6 - fixed a regexp in normUrl (reported by le_top)
- 6.5 - fixed buggy behavior when several xhr "script"-dataType requests were started for the same script; eliminated the deprecated jQuery.browser reference
- 6.4 - followed the change of the registerScriptFile() arguments in Yii 1.1.14; removed/added some comments
- 6.3 - serious bug fixed: filtering duplications (usually) failed when js merging was applied to the response of an xhr request; new params: mergeIfXhr, mergeJsExcludePattern, mergeJsIncludePattern, mergeCssExcludePattern, mergeCssIncludePattern, resMap2Request (see the phpdoc comments in the source for more info); appended an extra ; to the js files; some other small improvements
- 6.21 - fixed another bug: merged files were re-generated on every request when the appVersion parameter was used
- 6.2 (see the updated Usage) - fixed a serious bug that broke the original functionality when merging happened (duplicates couldn't be recognized); added a new parameter, appVersion
- 6.1 (see the updated Usage) - fixed several bugs (serverBaseUrl composing; merging css files by media correctly); added parameters mergeAbove, curlTimeOut, curlConnectionTimeOut
- 6.0 (see the updated Usage) - added optional merge and minify functionality; to keep the simplicity of the single-file extension, embedded JSMin.php from
- 5.0 (see the updated Usage) - in 4.0RC I found an issue I couldn't work around: it also registered script tags appearing in html/css/js comments and in input field/textarea values, so I had to drop the native source analysis by regexp. Fortunately, the solution found in 5.0 looks like the best one so far. Tested successfully in IE7+ and the latest FF, Chrome, Opera. Reports about testing are welcome as always.
- 4.0 RC - refactored; hopefully all bugs reported about 3.x have been eliminated
- 3.6 - fixed a typo
- 3.5 (see the updated Usage) - handling the special case when updating a table by tr tag; further IE fixes; added 2 new parameters: ignoredPattern and processedPattern; general refactoring
- 3.4 - fixed a non-script-rendering bug in IE
- 3.3 - removes the occasional ...?_=3767454656434-like timestamps from the url keys used to store/identify the loaded scripts; fixed the accidental renaming of NLSClientScript to EClientScript
- 3.2 - fixed accessing the HEAD element for IE; compressed the js code (the full source is still there in the php source)
- 3.1 - the extension now prevents the duplicated loading of css files as well
- 3.0 - brand new approach, dramatically simplifying the extension and its usage, based on the great idea of Eirik Hoem; see the Usage below!
- 2.1 - dirty fix for a rendering bug of jquery.js v1.6.1 affecting bInlineJs=true mode in Yii 1.1.8
- 2.0 - brand new approach: the resource hash is stored on the server side, in the webuser state, and all this info is deleted when a non-ajax request comes in; no $.ajax usage - better performance; the extension no longer requires jquery.js and jquery.yii.js to be linked initially; js/css files can be linked from another domain; new parameter bInlineJs - if true, the scriptFile method will insert the js file content into the html instead of linking the file, which can result in even better performance
- 1.3 - added cache:true to the ajax js load; compressed the core js code
- 1.2 - fixed a js error when the app is not in YII_DEBUG mode
- 1.1 - hash key generated on the server side; two hash key modes: PATH and CONTENT; shortened client-side code
- 1.0 - base version

If you are interested in the details, see the comments in the source.

Requirements

Yii 1.x

Limitations

The extension identifies scripts by their paths, so it does not prevent loading the same script content from different paths. So e.g.
if you published the same js file into different asset directories, NLSClientScript considers those to be different files and won't prevent loading the several instances. The extension also doesn't watch whether a js/css file has changed: if the merge functionality is enabled and some file changes, you need to delete the cached merged file manually, otherwise you'll keep getting the old merged one.

Usage (v6.3+)

1. Set the class for the clientScript component in /protected/config/main.php, like

~~~
[php]
...
'components'=>array(
    ...
    'clientScript' => array(
        'class' => 'your.path.to.NLSClientScript',
        //'excludePattern' => '/\.tpl/i', //js regexp, files with matching paths won't be filtered if set to other than 'null'
        //'includePattern' => '/\.php/', //js regexp, only files with matching paths will be filtered if set to other than 'null'
        'mergeJs' => true, //def:true
        'compressMergedJs' => false, //def:false
        'mergeCss' => true, //def:true
        'compressMergedCss' => false, //def:false
        'mergeJsExcludePattern' => '/edit_area/', //won't merge js files with matching names
        'mergeIfXhr' => true, //def:false, if true -> attempts to merge the js files even if the request was xhr (if all other merging conditions are satisfied)
        'serverBaseUrl' => '', //can be optionally set here
        'mergeAbove' => 1, //def:1, only "more than this value" files will be merged
        'curlTimeOut' => 10, //def:10, see curl_setopt() doc
        'curlConnectionTimeOut' => 10, //def:10, see curl_setopt() doc
        'appVersion' => 1.0, //if set, it will be appended to the urls of the merged scripts/css
    ),
    ...
)
~~~

For more information about the parameters, see the header comment of NLSClientScript.php.

2. Use Yii::app()->getClientScript() in the standard way to link js and css files/snippets. Example:

~~~
[php]
$cs = Yii::app()->getClientScript();
$systemJsPath = Yii::app()->getAssetManager()->publish(
    Yii::getPathOfAlias('system.web.js'), false, -1, false
);
$cs->registerScriptFile($systemJsPath . '/ext/yii_ext.js');
$cs->registerScriptFile($systemJsPath . '/ext/plugins/jquery.form.js');
~~~

3. DOES NOT WORK FROM v5.0: if you want to do a custom ajax request with "dataType"="json" and there are fields of the response you want to update your page with, filter that part "by hand" with $.ajaxSettings.dataFilter, e.g.:

~~~
[php]
echo CHtml::ajaxLink('custom update', array('/site/testupdate'), array(
    'dataType' => 'json',
    'success'=>'js:function(data){ $("#cont").html($.ajaxSettings.dataFilter(data.content)); }',
));
~~~

nice one

I'll check it out for sure. What I do is to extend the controller's render and renderPartial methods to accomplish the same result, but your way is better.

just looks wonderful

gonna take a look at this one for sure. thanks a lot and keep up the good work!

thx

thank you guys, let me know if any issue comes up with it.

Great work

I use another way for it, like the fullajax concept, but in a jquery way.

not working for me

This extension is promising. I'm facing the mentioned troubles with my ajax-loaded gridview and its ajax events: the same requests keep repeating after refreshing the grid. I've just installed the extension but it doesn't seem to be working; the same thing still happens. What could I be doing wrong? Thanks

more info needed

Hi hav3fun, thx for the report. Can you send me (at least the relevant part of) your source code, pls - check my mail address in my profile. I'll try to debug.

more info needed2

also please specify exactly what request keeps repeating: what is requested and what is the response (use Firebug)

works but sadly not useful in my case

This extension works for me, but sadly the final "rendertime" of the page seems to be increased (sometimes dramatically). I have an application with several js files which are also used for the layout. With this extension the js execution time is increased, so the not-yet-finished (or not-yet-positioned) layout is shown. I also had some trouble with the ajax caching.
Every visit to the page would load the js file again and again, because every request had a ?_=1308645861658-like parameter in the url. I don't know if this comes from some other ajax setup I have used, but if I add cache:true to the script ajax call, this issue is solved. Sadly the execution time is too high - I guess it comes from the ajax lazyload.

1.3 released

Thx horizons, pls check the new version 1.3. To be honest, I couldn't do much about the speed of the ajax loading in the current version (only that way can a remote js be loaded synchronously and dynamically; e.g. appending a script tag to the head/body cannot provide a synchronous load). I also have to remark that I didn't experience so much speed loss with the ajax load.

Not working in _dev mode

Hey. Nice extension. But sadly it's not working in the _dev mode, when YII_DEBUG is defined =(

help me to fix

Hi WallTearer, thanks for the report. Could you help me with the fixing by sending the following infos by mail (you can mail me by going to my profile page)? Thx

OK

I've sent you an email with some explanations

thanks + suggestion

hi nlac, Thank you for your great extension. I installed and used it with no problem. In the installation process I took another approach: instead of the 3rd step you mentioned (changing the index file), I just add the following code: to the onBeginRequest event of the application. Actually, I added a class to the main config file as the behaviors parameter, and bound the above code to onBeginRequest in that class. My approach has 2 main advantages over yours:

indeed

Yes arash.. a handler for the onBeginRequest event is definitely the right place for clearing the script cache instead. An additional reason: if an action registers a script and then invokes another action by e.g. forwarding, the script cache may NOT be cleared meanwhile.. I'll include it in the next version, thx man.

to WallTearer

Thanks for the mail report.
Unfortunately, I couldn't really get more info from it; I would especially have been interested in the code in the view/controller. I haven't ever met this issue. If you like, we can exchange mail addresses and you can share your code with me that way. I'm really curious, btw, what the issue is.

Good but..

If a user opens multiple pages and uses ajax on each of those pages, Yii::app()->user->setState('nlsLoadedResources', array()); will be cleared for the previously opened page?

for kernel

Thanks for the comment. Yes, that's true; this is a kind of limitation of this session-based caching. The perfect solution would be to have some kind of "view scope" like in JSF, but it isn't available in PHP (and I have no clue how it could be implemented). This limitation isn't present in the DOM-based caching (in nlsclientscript v<2.0), where the advantage is being able to cache js files from another domain, and the performance is better. I'll try to find some time in the next 1-2 weeks to create the next version, keeping the last comments in mind. Unfortunately, I'm extremely busy recently with work and other stuff :(

Hi, nlac

A solution I came up with is to identify each page by generating a (unique id / 'session id' / page token .. you name it) for it, and store the ID in state() or $_SESSION; AND, this ID will be destroyed on the browser's window.onUnload event. So individual ajax actions on different pages will go with their page ID to help nlsclientscript detect the ajax request context they came from. I will paste some code here sooner or later.

about v3.0

One drawback of client-side filtering of incoming data is more network traffic, i.e. already loaded js/css will be sent again to the client on each ajax request, which is not so good, especially for slow connections.

hi

Sending the script TAGs again - NOT the scripts - won't cause relevant network traffic. The already loaded scripts won't be loaded again; that's the point of the extension.
As you may have noticed, the extension currently doesn't care about css links, so css links are still allowed to be loaded more than once. However, that is not as big an issue as the js case; I'm still thinking about the best way to handle it.

thank you

I just want to thank you for providing such a good extension

thx

i hope you guys find it useful

I don't understand

Please, help somebody. 1 - The extension file name is NLSClientScript.php, but the class in it is named EClientScript. My standard Yii autoloader doesn't understand this, and when I try to view any page of my site I see the following: This problem has a really simple solution (I manually renamed the class EClientScript to NLSClientScript), but am I the only one who faced this problem? 2 - All versions after version 2.1 seem not to work for me! On the user profile on my local site, in Firefox with Firebug's Network tab open, I can see that the jquery file was loaded from. Next I click a link to open a CJuiDialog. After that, in Firebug's Network tab, I see one more jquery file loaded from. I see the same for some other files. Where am I wrong?

i'll check

Hi Alexey, issue 1 is because I accidentally left the class renamed (I use it with that other name in a project). The class name should of course be NLSClientScript. Issue 2, I guess, is caused by the random url parameter appended to the js url by the framework: the extension considers it to be a different file from the earlier-registered jquery.js. I'll inspect how to avoid it and will update soon.

let me know if ok

Alexey and maybe others, pls try 3.3 and let me know if there is still any issue with the extension. Although I couldn't reproduce the reported issue, I applied a possible fix.

to nlac

Thank you for the quick update, but nothing changed for me regarding issue 2. :( I looked into your code in Firebug and it seems I found what's wrong: my type is json. So this line ignores all following code and always returns data.

hmm..

that explains everything.
So I guess you make a custom ajax request, respond with a json putting the html in some field of it (capturing the output of a partial render, I guess), and update a container "by hand" in the response handler? My goal with that filtering there was to improve performance. Now I'm thinking about extending the API with a js function; calling it would make it possible to set that global filtering at any time + handle the filtering of the json.

about type=JSON

Yes, you are completely right! But this does not resolve the problem yet. :) I changed the condition to and was tracking it step by step. The line filled $._holder[0] with the data, but it seems this action also escapes and urlencodes all the HTML. So having data we get as the result. The next loop doesn't work as I expect, because on the first iteration I have and so on... :( Can you suggest anything?

providing a callback

The solution will be to provide a way to communicate with the global filter by setting a callback which can transform the "data" into filterable elements and can transform the filtered fragments back into the "data". I'll update soon.

dataType=json

see the updated Usage point 3

Version 3.4?

Sorry, it seems something has gone wrong? Yesterday evening I saw version 3.4 and instructions on how to use it for data type=json, but now it is hidden. I don't want to be annoying, but I want to describe my implementation of the ajax request. I use this code: so 'dataType'=>'json' is already defined. But I have one more question: even if you add a callback possibility for json responses, what is the benefit of using a new version with all the required callback functions instead of version 2.1 without any callbacks?

version 3.4

Forget 3.4; I added it yesterday, but some minutes later I realized that callbacks are unnecessary, so I removed that version (I was really tired..). The latest is 3.3, no callbacks. Alexey, just check the "success"=>... code in the Usage point 3 and apply that in your code.

Yes! It works great now!
And such a simple solution! Thank you for this work and for your attention to your customers.

Re: version 3.3

Does exactly what it promises, and with a very elegant solution. Excellent! The only thing I don't understand is why it's called NLSClientScript - was no more descriptive name available? (like EAjaxClientScript) :) And by the way, the changelog currently says it's called EClientScript..

table row problem

hi, I've switched from 2.1 to 3.3 today and found the following bug: in a form which is organized as a table, the user can add new rows via ajax. The server sends only a new table row (tr element) with some columns in it. The javascript filter takes this row and puts it via innerHTML into a div element, which kills all tr and td elements from the original code. It then filters the dom and returns the broken code. quick-fix: insert above line 74:

to marcovtwout

Hi, thanks. "NLS" actually points to "NLacSoft", which is my domain name. (I'll release some valuable content there in a few days! :)) "EClientScript" is a kind of standard extension name format; I use my extension with that name at the company I work for.

to stereochrome

Thanks, excellent catch and solution. I'll include it in the next release; I also have to fight other cases (dt, dd, li, caption, thead etc.)

for stereochrome

Hi, I tested the bug. Actually, I experienced such an issue only in IE (not exactly the same issue, but it can be related) and added a fix in 3.4. I'm curious whether it has been fixed in your case.

problem persists

Hi, unfortunately this new version does not fix the problem. I'm encountering this problem in Chrome 15 on Kubuntu and in other browsers. I've got a lot of work to do at the moment because it is the last workday for this year (yay), but I'm going to create a testcase for this problem in the new year.

for stereochrome

check 3.5 and also the demo on nlacsoft.net (it does exactly your case :))

works!

yep, 3.5 works as expected. thanks for your work and a happy new year!
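The tr/td problem above is a general browser behavior: `<tr>` and `<td>` fragments are silently dropped when parsed via innerHTML inside a `<div>`. The standard workaround (the same idea jQuery's own fragment parser uses internally) is to wrap the fragment in the required parent tags before parsing. A string-level illustration follows; `wrapFragment` and the wrapper map are hypothetical helpers, not the extension's actual code:

```javascript
// Sketch of the tr/td wrapping trick: browsers drop <tr>/<td> elements when
// such a fragment is parsed inside a <div>, so wrap the fragment in a valid
// parent chain first. The map is modeled after jQuery's internal "wrapMap".
const wrapMap = {
  tr: ['<table><tbody>', '</tbody></table>'],
  td: ['<table><tbody><tr>', '</tr></tbody></table>'],
  li: ['<ul>', '</ul>'],
};

function wrapFragment(html) {
  const m = /^\s*<(tr|td|li)\b/i.exec(html);
  if (!m) return html;                 // safe to parse inside a <div> as-is
  const [open, close] = wrapMap[m[1].toLowerCase()];
  return open + html + close;          // now valid for innerHTML parsing
}

console.log(wrapFragment('<tr><td>new row</td></tr>'));
// → <table><tbody><tr><td>new row</td></tr></tbody></table>
```

After parsing the wrapped markup, the filter can descend back to the original fragment's nodes and insert them into the table.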
Warning: "head" is not defined

Here it returns an error warning that "head" is not defined. This piece of code: should be?

thx

thimt8-yii, fixed in 3.6

Issue with the parser in Internet Explorer 7

Hello, I am using version 3.6 of your extension in my Yii project. It seems that the selection of script tags in the HTML source fails for some reason in Internet Explorer 7. In the loop through the SCRIPT or LINK elements we have now added an extra test case, to be sure that there is at least a non-empty SRC or HREF attribute defined (see the added code below). And I had to add the code again in the other loop: Is it possible that the jquery selector is not precise enough? Please let me know if there would be a fix for that issue in your next revision. Yann

thx hoplayann

thanks for the report, I'll check and will release a new version in a few days; there's also another issue that needs to be fixed.

HTML content is being duplicated

I have problems. When the extension is enabled, the HTML content is duplicated in ajax requests. Detail: the problem does not occur in IE, only in Firefox and Chrome, and only when the ajax return is the complete page, including the HTML tag. The javascript code behaves completely differently in IE vs Firefox and Chrome. The developer needs to fix the default behavior in all browsers. A workaround for Firefox and Chrome (but certainly not the ultimate solution):

Duplicate html content rendering

is #7098 fixed in the 3.6 version? I am still facing the problem when filtering in the cgridview

IE7 bug

IE7 has a problem with the "src" attribute of the script tag, as can be seen in So the update of the grid returns the alert "Parser error". The solution to the problem would be to change the script to:

guys please test 5.0

test results about usage of the latest version are welcome

Script error jquery undefined

I have installed version 5.0. It works on all my pages where jquery is included. But on my static pages (e.g.
impressum page) where I don't need jquery, I get the error 'jquery undefined'. What can I do? Must I include jquery on all my pages, or can I do this with an exclude pattern?

for klaus66

I guess that is not a static page where the issue occurs. On pages where no script file is registered, the component renders nothing. On your "static page" some script is registered that requires no jquery; that's when the issue occurs. So yes, for now all pages need jquery registered, since the component uses it. I'll fix it in the next release to register jquery automatically in every case.

Not really static page

You are right. My 'static' pages register a small js script, but as you said, they don't need jquery. I now register jquery for all my pages in my main layout view. I use the google lib and I think it is no problem for performance.

Version 5

Using version 5 seems to work as expected in IE7/8 and Chrome. I upgraded after IE7/8 issues in version 3.3. The only thing I would like to see is proper DocBlock commenting on the variables and functions in your extension. And actually, maybe a better name, because "NLSClientScript" doesn't say anything about what it does - something like "AjaxCompatibleClientScript" or maybe just "EClientScript" or "XClientScript" would be better.

5.0 not ok with 'dataFilter'

Hi, I just tried version 5.0 after previously using v3.6. I use it as in the example: $("#cont").html($.ajaxSettings.dataFilter(data.content)); . Apparently this does not work anymore, as 'dataFilter' is unknown in Chrome (for example) when using 5.0. It appears that 'dataFilter' is defined only when using 'IE'. =/

Even with the extension, when using AJAX the Bootstrap CSS files are reloaded on the page.

A problem & a suggested fix

Hello nlac, Thanks so much for such work, it was a total savior for me. I'm reporting that there's a conflict between nlsclientscript & the Minscript extension: the js file is loaded again & duplicated after the first ajax request..
That's because the normUrl function of nlsclientscript takes the minscript url in the form of: & returns > Notice the '&' at the end. The suggested fix would be to modify the normUrl function, replacing With

Fatal error in 6.0

$_SERVER['REQUEST_SCHEME'] is not always available, and will throw an undefined index notice. Omitting it is safe in most browsers, but I haven't checked how it affects other parts of your code:

serverBaseUrl issue

yes, I'm aware; will fix soon. Till then, set that param explicitly in config.php

import in css does not work

Thanks, nice extension, but the @import function of css does not work after minifying. Maybe you should use a full css minifier (e.g.) instead of a simple css minifier?

@Nafania

Hi, I'm not sure the problem is related to the minification. When the extension merges the files, it publishes the merged one into the /assets dir. That way, if you use a relative url in your @import (like "dir/xy.css" or "../dir/xy.css"), it will usually break. Have you tried using a url like "/dir/xy.css" in the @import rule (relative to the server root), or perhaps a full absolute url? About cssmin, I try to keep my extension lightweight, without merging in 3rd parties 10x larger than the extension itself, unless it is really necessary.

@Nafania

Well, I've just seen that it is able to merge the @import-ed css files into the parent one, according to its documentation. That's indeed a feature which makes me consider using it in a later version some way. Thanks for the tip...

Misfunction of NLSClientScript

I have a problem. I plugged in NLSClientScript and it worked well for my Yii app. Yet when I moved the code to the new host it doesn't work right; it issues smth. like: instead of What's wrong? How can I resolve it?

@Igor

I'm not sure what causes that right now. First make sure that and test again. If no success, go to my forum profile page and send me the following info (local menu "nlac"/Messenger/Compose new):

@Igor Savinkin

It's because CClientScript's scriptFiles property signature changed in 1.1.14.
(I'm sure you upgraded the Yii version when you moved to the new host.) The authors have to change a few lines of code: in the _mergeJs protected method it's necessary to change to

THANKS

Thanks! This really saved me ;)

Multiple gridview

In my code I render, through ajax, the same widget (cgridview) 3 times (ajax1, ajax2, ajax3) with different parameters. Most of the time it works well, but sometimes I get the error "jQuery(...).yiiGridView is not a function" ERROR IMAGE NO ERROR IMAGE

sourceMappingURL bug

sourceMappingURL not found when using compressed css

@Bianchi

So, if I understand correctly, several partials have been requested asynchronously, containing the same type of widget (?). If this is the case, I have an idea about the issue; I will test it and possibly fix it in some days, when I have time.

@nlac

You understood correctly, I'll wait for your fix :)

sourceMappingURL bug example

/css/bootstrap.css: /# sourceMappingURL=bootstrap.css.map / bootstrap.css.map is in the same path as bootstrap.css, but after compressing it searches for bootstrap.css.map in the /assets/ folder

@fad

First, there is no "compress" option in nlsclientscript, only mergeCss and compressMergedCss. I guess the problem is caused by using mergeCss. Just don't use that option in that specific case with mapping.

Problem with the "mergeCss" option

Thanks for your great job, nlsc! You saved my time. I was just stranded by the duplicated "jquery-ui.min.js". It works fine with JS. There is a small problem with the "mergeCss" option: if "mergeCss" is set to TRUE, some jquery-ui asset files may get a wrong uri (404 in the browser). For example: VALID: INVALID:

@yanhui + @fad

Yes, I'm aware of that problem; there is no url transformation in the content implemented when merging css. Since many people miss that feature, I'll try to work out a solution soon. Till that time, pls do not use the css merging.

@nlac

Maybe a compatibility problem with Linux? Your reply is appreciated; thanks for your attention, nlac.
nlsclientscript works fine on my development pc (win8.1/xampp1.8.3); today I found it doesn't work properly in my server environment (centos6.4/lampp1.8.3). The error messages in the browser are like the following:

- Uncaught ReferenceError: $ is not defined nls1858337492.js:79
- Uncaught ReferenceError: jQuery is not defined index.php:28

After some googling, I found someone who may have met the same problem.

@yanhui

I haven't experienced operating-system incompatibility regarding nlsclientscript yet. I contacted you for details through your forum profile.

GitHub

Would you consider moving the source code to GitHub and adding Composer support? :)

moving to github

good point, it would have several advantages. I registered it on my todo list :)

Issues - some solutions

I was still using 5.0, but for some reason my jQuery got loaded two times when fetching a CJuiDialog through ajax. So I was digging into that issue and decided to upgrade to the latest version to see if things were fixed. Actually, things got worse: my page layout and javascript were completely "scrambled". First, the function to combine files is on by default, and some "urls" were not found. Second, my production server serves compressed files, and they remain compressed after combining - they should be combined uncompressed! The missing urls issue could be fixed by fixing the way the absUrl is computed. The following code works for me: The changes are that if the url starts with '//', only the scheme is prefixed. Also, when the url starts with '/', the path is absolute from the host, but when the '/' is missing, the url is relative, so 'baseUrl' has to be added, which is also done above. I prefer using Yii methods if they exist, so in 'init()' I changed a few lines: So that works on my dev machine, but not on the server, where compressed content is combined. I haven't checked that issue yet; I have disabled combining again for now. I still have the dual jQuery loading, so I continue to look into that core issue.
Duplicate library load fixed...

The core mystery has been solved: my url had a trailing '&' after removing the '='. That is because I add a timestamp to the assets, which postfixes the urls with '?', which then becomes '?&='. So the normUrl function had to be updated with:

~~~
[javascript]
$.ajaxSettings.dataFilter(data)
~~~

?

@le_top

Thanks for the bug hunt, I added the fix. About the old trick you pointed out ($.ajaxSettings.dataFilter(data)): there's no need for that at all. From 5.0 the extension is rewritten in a totally different way, so you can work with any json ajax response without any post-processing necessary.

Fix(es)

Ok for dataFilter - I suppose that is why my JSON calls do not seem to give any trouble. Some of the logic is done behind the scenes, so I thought it did not apply. I also reported on the 'toAbsUrl' method, which is equally important; otherwise combining (merging) scripts/css is not functional for 'localhost/workspace/index.php' or '//blabla.google.com/blabla'. A resource like 'themes/mytheme/js/script.js' would not work without one of the fixes for toAbsUrl. Anyway, I also had some kind of compression issue where the combined file had compressed content - but only on the production server, so I suppose it has to do with compression of the content served by the production server and not with content served by third parties.

@le_top

Ok, somehow I skipped the toAbsUrl issue last time and focused only on the second comment.. the fixes seem to be fine and have been added now. About the compression issue: the compression is done by the embedded JSMin.php; that project is unfortunately abandoned, with no further bugfixes provided. Possible minification issues should be worked around, e.g. deal with originally minified files and just concatenate them with NLSClientScript.
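A url-normalization step of the kind discussed in this thread - dropping jQuery's cache-busting `_` timestamps and any dangling '?'/'&' left behind - might look like the sketch below. `normUrl` here is an illustration of the idea, not the extension's actual implementation:

```javascript
// Illustrative sketch of url normalization for duplicate detection
// (NOT NLSClientScript's actual normUrl). Two references to the same
// script should map to the same key, regardless of cache-busting noise.
function normUrl(url) {
  return url
    .replace(/([?&])_=\d+/g, '$1')  // drop jQuery's ?_=1308645861658 stamps
    .replace(/[?&]+$/, '')          // drop a dangling '?' or '&' at the end
    .replace(/\?&/, '?');           // collapse a '?&' left by removals
}

console.log(normUrl('/js/app.js?_=1308645861658')); // → /js/app.js
console.log(normUrl('/js/app.js?v=2&_=99'));        // → /js/app.js?v=2
```

Without a step like this, `xxx.js` and `xxx.js?_=...` are stored under different keys and the duplicate filter never matches, which is exactly the trailing-'&' bug described above.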
The compression issue is that the resulting file contains gzip compressed content, and is not related to content minimized (or compacted) by JsMin. problem in mergeJs to false Thanks for your work I have this code in widget: But when i set mergeJs to false: I get this error: How i can resolve it? Fix for compression bug Hi I found the solution for the compression bug. Just add the following line to the initCurlHandler method: @moa, @le_top @moa: when the error comes, when the page is loaded or at the first partial update? I guess the second case, i suspect the fancybox plugin has broken due to loading again a js file (maybe fancybox.pack or other). Please check with Firebug what js file is loaded twice - remember if you refer a js once xxx.js?v=2.14, later xxx.js or xxx.js?v=2.15, NLSClientScript will consider those files to be different and won't prevent to load again. Please normalize the url's, to be the same for the same js content. @le_top: nice, i'll test it and add to the next version, thx. CSS merging and relative resources Hi Now that merging is working, I'v done a few tests. Css merging has an important issue: relative resources are not rebased. Example: ~~~ [css] .grid-view table.items th { } ~~~ After merging, the navigators looks in the assets folder for 'bg.gif', but it is not there. Other issues with combining CSS files can be expected, and it might be a good idea to rely on an existing open source to do the job. @le_top Yep, i'm aware that url issue in the merged css, that was a topic in some earlier commments (btw not sure why only the top 20 comments are shown, near 90 comments are here). It's a plan to allow configuring a 3rd pary service to do the proper merging (or handle it properly without 3rd party, still not sure how much work it is). Anyway it requires effort, i can't say will be done in some weeks, but it is in my list (as other things...:( ). 
CSS urls

Hi, I suggest that you mention the limitation for the CSS at least in the notes above, and also in the comments of the source code (e.g., as a comment on the variable enabling the merge). Also, while the comments say that Js and CSS merging are disabled by default, they are actually active by default in the extension. I recommend keeping buggy/limited options inactive - so CSS merging should really be false, because relative URLs are essential (to the framework CSS for starters). If things do not work "immediately", many developers try and abandon the extension.

I understand the limitation of time completely - you'll likely add proper CSS merging when you are in need of it yourself, which is fully understandable. That's why I generally fix bugs myself; the difficulty is often getting them into the official release.

Very Cool

Very good extension - it solved my issue as well as boosted my application's speed.

CGridView/CListView

NLSClientScript is an excellent extension, but it does not work very well for the content of CGridView and CListView. I had to face that issue once more, and I decided to have a deeper look and found a solution.

First, we can rely on the 'afterAjaxUpdate' parameter of the CListView/CGridView like this:

That works, but the second time you try it the Ajax reload will happen twice, the third time three times, etc. The "issue" is with the core library, but it was not accepted as a bug three years ago, so here are the changes I apply after each Yii update for that (I just discovered the change needed for yiilistview.js, as I had not been using that until recently).

CButtonColumn: To remove the 'on' event on a grid before setting it.

jquery.yiigridview.js: Add lines to switch off event listeners before defining them.
-> There is a potential issue that other listeners using the same selector might be deleted.
-> However, the risk is low because the selector is specific and likely to be used by the yiiGridView only.
jquery.yiilistview.js:

~~~
[javascript]
$(document).off('click.yiiListView', settings.updateSelector);
$(document).on('click.yiiListView', settings.updateSelector, function(){
~~~

Have fun with it!
le_top

Hmm, hardcore stuff :) Interesting; as I remember, I definitely tested 7.0 with CGridView (not sure I did it with CListView) and didn't notice any trouble... Could you pls describe the original issue you experienced regarding the conflict of NLSClientScript and CGridView/CListView? It is still not clear to me, and to be honest I stopped digging into the further details from that point. I got confused by the "afterAjaxUpdate" thing - was it already an attempt to solve an issue, or was it the case where the issue happened? Sorry, maybe I'm too tired right now and missed something in the description :)

Gist to demonstrate issue

Hi, I created a GIST that you can use as a view ('$this->render('viewname');' in a controller will do it). This example has two grids with a CJuiButton that I inserted in the filter. The CJuiButton is fully functional if the jquery-ui is executed on it. For the purpose of the test:

EDIT: I just tested by dropping my gist inside your demo 'site/pages'. Bad behavior is as expected ;-).

@le_top Ok, I just updated the 7.0beta, due to fixing another issue - I suggest always using the latest version; I won't backport any fix to 6.x (and consequently I'd prefer to have new issues reported to git ;-). I tried your gist in the demo's environment with yii 1.16 and I can see the issue. I'm afraid there's nothing nlsclientscript could do about it; the issue is not about preventing the re-loading of already loaded scripts, but about how Yii's grid/jQuery handles the grid update. Exactly the same happens with or without using nlsclientscript. A trivial workaround I would apply for that specific issue is to set 'htmlOptions'=>array('onclick'=>'$("#mygridid").yiiGridView("update")') for the CJuiButton instead of 'onclick' as it is in the gist.
It fixes the case.

Fix for CJuiButton - which is just a "simple" example

Thanks for having a look. With my post I am mainly sharing a solution and hoping for some "idea" to integrate something in NLSClientScript. CJuiButton is quite easy as an example, and your suggestion does not make the button jQuery-UI'fy itself. It is more complex with jquery-ui dropdowns, date filter fields, ... .

One of the options that could be implemented in NLSClientScript is that it automatically applies the 'scripts' on the page only once, at the end of each ajax call. In other words, the script tags that were already "added" during the ajax call are not added again, and the other ones are "executed"/added. Anyway, this is what I do with the code I add for 'afterAjaxUpdate'. So if you do not see another solution, "we" just have to apply what I explained in my comment.

@le_top Well, the first versions of NLSClientScript filtered the received html data (removed the script tags already loaded, looking at their "src"). This is similar to your suggestion; it would now just have to be applied to script tags with no src, considering their content. There were several issues with that technique, therefore I switched to a cleaner approach from version 6. Anyway, I will think about some general solution, but this problem seems to be hard.

Cache-control: no-transform is required!

I just spent a long time debugging a particular case. To make a long story short, "Cache-control: no-transform" is required, and this can be set in the extension by adding the following code:

What went wrong? Some proxy (probably the antivirus) of the end user replaced the 'script' and 'link' tags with the actual content of the URLs that it cached. As a result, neither jQuery nor NLSClientScript could identify the duplicates, and 'jQuery' as well as the other code was loaded more than once. This was never observed earlier, as all my sites use 'https', preventing proxies from modifying the content.
However, this is a new setup without 'https' available. I determined that 'no-transform' could do the trick, and it does. It tells the proxy that it should not change the content - and it respects that.

Note: calling 'header' elsewhere without setting the second parameter to false can potentially remove this directive from the response, so make sure that you include 'no-transform' in the Cache-Control directive when you need it.

blank page

Hi team, when I'm running this extension, I'm getting a white blank page without any error?

Avoiding ajax timestamp

Hi, I've run into a few other issues:

a. The timestamp added by jQuery/ajax at the end of the URL breaks the use of the browser's cache.
b. [More annoying]: the scripts are loaded "asynchronously" and the order of execution is not guaranteed - not even with regard to the embedded script.

Solution for a:

Workaround for b:

So basically I replace the call to 'registerScript' with 'registerD3Script'. This adds a wrapper around the code which checks if D3 is loaded, and if not, it continues waiting for it.

For b, a better workaround/solution would consist of:

This all may explain some other issues I experienced in the past...

If you have any questions, please ask in the forum instead.
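The URL-normalization advice that runs through this thread (the trailing-'&' bug, and treating xxx.js?v=2.14 and xxx.js?v=2.15 as the same file) can be summed up in a few lines. This is not the extension's actual normUrl, just a sketch of the rule in Python for brevity; the function name is mine:

```python
def norm_url(url):
    """Strip the fragment and query string (and any stray trailing
    '?' or '&') so that xxx.js, xxx.js?v=2.14 and xxx.js?v=2.15 all
    map to the same duplicate-detection key."""
    base = url.split('#', 1)[0].split('?', 1)[0]
    return base.rstrip('?&')

print(norm_url('js/script.js?v=2.14'))
```

With a key like this, two references to the same script with different cache-busting suffixes are recognized as duplicates instead of being loaded twice.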
https://www.yiiframework.com/extension/nlsclientscript
- Transforming XML with XSLT
- Beginning an XSLT Style Sheet
- Creating the Root Template
- Outputting HTML Code
- Outputting a Node's Content

Beginning an XSLT Style Sheet

Every XSLT style sheet is an XML document in itself and therefore should begin with a standard XML declaration. Once that's out of the way, you must define the namespace for the style sheet.

To begin an XSLT style sheet:

1. Type <?xml version="1.0"?> to indicate that the XSLT style sheet is an XML document.
2. Next, type <xsl:stylesheet xmlns: to specify the namespace for the style sheet and declare its prefix (xsl).
3. Leave a few empty lines where you will create the style sheet.
4. Type </xsl:stylesheet> to complete the style sheet.

The header for a style sheet is almost always the same. You can just copy and paste this information from one style sheet to the next.

Tips

- There is no space in xsl:stylesheet. (It's not xsl:style sheet.) Nevertheless, I do use two words to refer to style sheets when talking about them in this book (as is the convention).
- If you use Internet Explorer 5 for XSLT processing, you'll probably have to use the following namespace declaration: <xsl:stylesheet xmlns:.
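Put together, the steps above produce a skeleton like the following. The namespace URI shown is the standard XSLT 1.0 namespace, supplied here because the declarations above are truncated:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- templates go here -->

</xsl:stylesheet>
```

Everything between the opening and closing xsl:stylesheet tags is where the templates will be created.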
https://www.peachpit.com/articles/article.aspx?p=20984&seqNum=2
To follow up on yesterday's post, here is some code to extract a bit field. First, a refresher:

The field marked in blue is the value we'd like to extract. One way to do this is to "manually" pick each of the pieces and assemble them into the desired form. Here is a little CoffeeScript test version:

```coffeescript
buf = new Buffer [23, 125, 22, 2]

for i in [0..3]
  console.log "buf[#{i}] = #{buf[i]} = #{buf[i].toString(2)} b"

x = buf.readUInt32LE 0
console.log "read as 32-bit LE = #{x.toString 2} b"

b1 = buf[1] >> 7
b2 = buf[2]
b3 = buf[3] & 0b111
w = b1 + (b2 << 1) + (b3 << 9)
console.log "extracted from buf: #{w} = #{w.toString 2} b"

v = (x >> 15) & 0b111111111111
console.log "extracted from int: #{v} = #{v.toString 2} b"
```

Or, if you prefer JavaScript, the same thing:

```javascript
var buf = new Buffer([23, 125, 22, 2]);

for (var i = 0; i <= 3; ++i) {
  console.log("buf["+i+"] = "+buf[i]+" = "+buf[i].toString(2)+" b");
}

var x = buf.readUInt32LE(0);
console.log("read as 32-bit LE = "+x.toString(2)+" b");

var b1 = buf[1] >> 7;
var b2 = buf[2];
var b3 = buf[3] & 0x7;
var w = b1 + (b2 << 1) + (b3 << 9);
console.log("extracted from buf: "+w+" = "+w.toString(2)+" b");

var v = (x >> 15) & 0xfff;
console.log("extracted from int: "+v+" = "+v.toString(2)+" b");
```

The output is:

```
buf[0] = 23 = 10111 b
buf[1] = 125 = 1111101 b
buf[2] = 22 = 10110 b
buf[3] = 2 = 10 b
read as 32-bit LE = 10000101100111110100010111 b
extracted from buf: 1068 = 10000101100 b
extracted from int: 1068 = 10000101100 b
```

This illustrates two ways to extract the data: "w" was extracted as described above, but "v" used a trick: first use built-in logic to extract an integer field which is too big (but with all the bits in the right order), then use bit shifting and masking to pull out the required field. The latter is much simpler, more concise, and usually also a little faster.
Here’s an example in Python, illustrating that second approach via “unpack”: import struct buf = struct.pack('BBBB', 23, 125, 22, 2) (x,) = struct.unpack('<l', buf) v = (x >> 15) & 0xFFF print(v) And lastly, the same in Lua, another popular language: require 'struct' require 'bit' buf = struct.pack('BBBB', 23, 125, 22, 2) x = struct.unpack('<I4', buf) v = bit.band(bit.rshift(x, 15), 0xFFF) print(v) So there you go, lots of ways to extract useful data from the RF12demo sketch output! Thanks for these 2 articles! I worked all weekend on a project and finally figured out what you were up to with the bit fields. I worked around this (and will now fix my code) by sending a “High Byte” and “Low Byte” for temperature and humidity from an SHT22 and reading the HEX output from Jeelink running the RF12demo. Here’s a sample: OKX 0201BF011500 02=Node, 01=Humidity High Byte, 1B=Humidity Low Byte, 01=Temperature High Byte, 15=Temperature Low Byte, 00=Voltage Low. Translated: Node 02, Humidity 447 (i.e. 44.7%), Temperature 283 (i.e. 28.3c), Battery is ok 00.
http://jeelabs.org/2013/09/06/decoding-bit-fields-part-2/
Fl_Help_Dialog

#include "Fl_Help_Dialog.h"
-lfltk_images / fltkimages.lib

The Fl_Help_Dialog widget displays a standard help dialog window using the Fl_Help_View widget.

The constructor creates the dialog pictured above.

The destructor destroys the widget and frees all memory that has been allocated for the current file.

Hides the Fl_Help_Dialog window.

Loads the specified HTML file into the Fl_Help_View widget. The filename can also contain a target name ("filename.html#target").

Set the screen position of the dialog.

Change the position and size of the dialog.

Shows the Fl_Help_Dialog window.

Sets or gets the default text size for the help view.

Sets the top line in the Fl_Help_View widget to the named or numbered line.

The first form sets the current buffer to the string provided and reformats the text. It also clears the history of the "back" and "forward" buttons. The second form returns the current buffer contents.

Returns 1 if the Fl_Help_Dialog window is visible.

Returns the position and size of the help dialog.
http://fltk.org/doc-1.1/Fl_Help_Dialog.html
Opened 13 years ago
Closed 13 years ago

#105 closed Bug (Duplicate)

_IEAttach does not work with embedded mode

Description

When trying to attach to an IE control that is embedded in another window, you get a "Subscript used with non-Array variable." error that points to IE.au3. Example:

```
#include <IE.au3>
$oIE = _IEAttach("AutoIt Help", "embedded")
MsgBox(0, "", $oIE)
```

```
C:\Program Files\AutoIt3\Include\IE.au3 (4088) : ==> Subscript used with non-Array variable.:
If IsObj($aRet[4]) Then
If IsObj($aRet
ERROR
Exit code: 1    Time: 3.574
```

Attachments (0)

Change History (1)

comment:1 Changed 13 years ago

Fixed in beta 3.2.11.0. Remember, always test with the latest version of AutoIt before reporting a bug. Resolving as duplicate since it's already fixed.
https://www.autoitscript.com/trac/autoit/ticket/105
On Mon, 04 Nov 2002, Alex Kirk wrote:

> Hello,
>
> I've got my MX records working on my NetBSD/Dreamcast, which I've also got
> Mixmaster installed on now (thanks to Peter Palfrader). I'm getting ready
> to debut it, but I'm having a bit of trouble: when I use
> "|/home/mix/Mix/mix -RM" as the contents of my .qmail file, Qmail sends the
> message into /var/mail/mix, not /home/mix/Maildir/new. I know Qmail's not
> malfunctioning on a general level, as "./Maildir/" as .qmail puts the
> message in there A-OK.
>
> I've got many reasons I'm worried about this: my drive is over NFS, which
> is a bad thing with mbox; I seem to remember my previous install ran with
> Maildir; and, probably most importantly, I'm not getting the usage.txt
> reply when I send blank messages to mix@...
>
> Is this something I did wrong when I installed Mixmaster (I realized just
> now there was no Maildir when I installed it)? Can I configure Mixmaster to
> use Maildir, or is this a Qmail problem?

Mixmaster (since b40) can read incoming mail from a Maildir-style mailfolder. Set

    MAILIN ~/Maildir/

in your mix.cfg, and set MAILINTIME to something reasonable, like 1 minute or so:

    MAILINTIME 60s

If you run Mixmaster in daemon mode, that's it. If you run it from cron, just call it regularly with the -M switch.

    -M, --remailer
        Check if it is time to perform the regular remailer actions:
        Send messages from the pool, get mail from POP3 servers and keep
        the internal files up-to-date. [ and read mail from MAILIN ]

For MAILBOX (where error messages etc. are stored) there is no Maildir support in our stable branch. It's in the unstable CVS, however. If there's a need I could backport it (it should be pretty trivial and non-intrusive). We just haven't done it because there was no real need until now, it seems.
You can also set an email address instead of MAILBOX, and mails will be sent there instead of stored on the filesystem.

    MAILBOX  A generic mail folder for non-remailer messages that are not
             stored in any of the following folders. If MAILBOX begins
             with a |, it specifies the path to a program. If it contains
             an @ sign, the message is forwarded to the given address
             (with an X-Loop: header to prevent mail loops). If it ends
             with a / it is treated as a Maildir, otherwise the message
             is appended to the given file name or written to standard
             output if MAILBOX is stdout. Default: mbox.

[ the part about Maildir is __not__ part of the released mixmaster as of 2.9b49 ]

HTH,
yours, peter

> 1.) gmake is broken on your Platform.
> Our Install script defaults to using "gmake" and then "make". If we
> use "make" right from the start, it builds nicely.

That's very interesting. I'll be sure to forward that point along to NetBSD/Dreamcast.

Actually, I'm intentionally going forward without ncurses support. Basically, I've run Mixmaster remailers enough in the past to be fond of the project, and also to know that the online FAQ/documentation could use a bit of updating (no offense, I'm sure the maintainer is just busy). Since I had success recently rewriting the NetBSD/Dreamcast FAQ & HOWTO (I'm a laid-off tech writer/programmer by trade), and I'm looking to continue beefing up my resume while doing good in the world, I figured I'd get some remailers running again and then submit patches to the documentation. Running Mixmaster minus ncurses only helps me be more expansive in what I can knowledgeably write about.

I'm going with Type II anyway, so no big deal. Thanks for fixing this so quickly. I'll be posting mix@... as a new remailer to the remops shortly.
Alex Kirk

[I've CCed mixmaster-devel]

[Alex tried to build mixmaster 2.9b40 on NetBSD/Dreamcast. Unfortunately the build failed - see his post to the remailer operators' list: ]

On Mon, 04 Nov 2002, Alex Kirk wrote:

> Let me know if you can get that to work, or if you have thoughts on how to
> fix it. I figure it's the best possible practical use I can think of for
> the box, and it'd be good to get whatever bug fixed anyway.

Good news first: it builds. There were in fact several problems.

1.) gmake is broken on your Platform. Our Install script defaults to using "gmake" and then "make". If we use "make" right from the start, it builds nicely.

The Install script needed a bit of tweaking to correctly detect that idea is missing, and to also not use gmake. Here is the patch:

[NetBSD has an idea.h, it also compiles nicely. Only at link time does the linker tell you that you need to link against the evil (for patented and non-free) crypt_idea, which of course is not installed by default (There might be a package or something, no you don't need it).]

    --- Install     2002-11-01 20:48:47.000000000 +0100
    +++ ../Install-netbsd   2002-11-04 22:31:43.000000000 +0100
    @@ -665,10 +661,12 @@
     cat <<END >tmptst.c
     #include <openssl/idea.h>
     int main() {
    +  void *dummy;
    +  dummy = idea_cfb64_encrypt;
       exit(0);
     }
     END
    -if gcc $INC tmptst.c -c -o /dev/null
    +if gcc $INC tmptst.c -o /dev/null
     then
       DEF="$DEF -DUSE_IDEA"
     else
    @@ -705,7 +703,7 @@

     echo "Compiling. Please wait."
    -whereis gmake make
    +whereis make
     make=$found

     if [ "$system" = win32 ]

yours, peter
http://sourceforge.net/p/mixmaster/mailman/mixmaster-devel/?viewmonth=200211&viewday=4
in reply to Best practices with globals and subroutine arguments I've been following the general disparagement of using global variables on PM for at least a year. I am especially curious when I read comments (as above) to the effect that global variables are unconditionally poor practice. Global variables have a respectable place in the annals of computer science. In recent history, the linux kernel is a large project that uses global variables. Linus originally elected this design to avoid the message-passing overhead of a microkernel. That was a technical decision, and kernel programmers seem to be bearing the maintenance burden fairly well. So are the maintainers of billions of lines of COBOL, also no doubt using global variables. Is the vociferous advocacy against global variables due to module authors' focus on separation of concerns, due to devotion to achieving reliability in large systems or what? My application, Audio::Nama started with a couple hundred lines of code. Now it has grown to having more than 200 global variables in the package's root namespace. Some 25 of these variables house widgets related to the Tk GUI. I could take them out of the root namespace and put them in a GUI namespace, but they'll still be globals. What benefit will I have from rewriting every statement referring to one of these variables? Configuration and state variables in the global namespace are convenient. The app has a shell that lets me dump variables and data structures at will. I have a useful test suite that helps me track regressions, and use version control to develop new functions. Does the convenience of globals and the inconvenience of other solutions figure into the 'globals are bad practice' calculation? Things have broken for many many reasons over the five year life of my project. Mistakenly assigning to a global variable may have happened only a couple times, typically when using '=' instead of '==' in a comparison. 
Some could criticize my app as being a ball of mud. Certainly high-level structures have come gradually. I've only introduced namespaces and classes when confronted with difficulties in reading or maintaining the code that I couldn't otherwise resolve. Global variables are an area I'm considering; however, I am still looking for alternatives and a motivation to shift over to other constructs from this straightforward way of addressing data.

Namespaces became practical for me to introduce when I discovered that I could use 'our' declarations in a single file. That allowed me to move code to other namespaces while still accessing variables in the root namespace. Although I want to refactor my code to be more reliable and easier to maintain, any steps will need to be sufficiently incremental. Over time I've found attempts at search-and-replace on variable names to be surprisingly error prone.
http://www.perlmonks.org/index.pl?node_id=843826
Splitting the ZF2 Components

Why split them at all?

"But you can already install components individually!"

True, but if you knew how that occurs, you'd cringe. We've tried a variety of solutions, and every single one has failed us at some point or another, typically when we move to a new minor version of the framework, but occasionally even on trivial bugfix releases. We've tried filter-branch with subdirectory-filter, we've tried subtree split, and even subsplit. We've used manual scripts that rsync the contents of each commit and create a reference commit. Our current version is a combination of several approaches, but we've found we must run it manually and verify the results before pushing, as we've had a number of situations, as recently as the 2.4.0 release, where contents were not correct.

On top of all this, there's another concern: why do all components get bumped in version, even when no changes are present? As an example, a number of components have had zero new features since the 2.0 release; they're either stable, or have smaller user bases. It doesn't make sense to bump their versions, but they get bumped regardless whenever we do a new release of the framework. When we start considering a new major version of the framework, it doesn't necessarily make sense to bump such components, as there will be literally zero breaking changes, and, in many cases, no new features. In other cases, such as the EventManager, ServiceManager, and a handful of other components, we know that these will require major versions due to necessary architectural changes. However, as long as we're still developing minor release branches of the framework, we cannot have meaningful development on those features due to the complexities of keeping changes in sync between branches.

In short, we'd like to be able to version the individual components separately, in their own cycles.
On top of that, when we look at maintenance, having a monolithic repository poses a challenge: we have to limit the number of developers with commit rights to ensure that those who can commit are aware of the impact a change might have across the framework. This means that a number of developers with time and energy to spend on improving a single component or small subset of components are hampered by how quickly their changes can be reviewed by the maintainers.

Splitting the components gives us the opportunity to expand the number of contributors with commit access. The framework itself can pin to specific versions of components, and maintainers with commit access to the framework can review and change those versions based on integration and smoke tests. In the meantime, a larger set of contributors can be gradually improving the individual components, and users can selectively adopt those new versions into their applications, on their own review cycles.

In the end:

- We get components that follow Semantic Versioning properly.
- We get accelerated development in components that need it.
- We expand the number of active, able maintainers.
- We enable users to adopt new features at their own pace.
- We retain framework stability.

The Goal

Since we branched ZF2 development, our repository has looked something like the following:

```
.coveralls.yml
.gitattributes
.gitignore
.php_cs
.travis.yml
bin/
CHANGELOG.md
composer.json
CONTRIBUTING.md
demos/
INSTALL.md
library/
    Zend/
        {component directories}
LICENSE.txt
README-GIT.md
README.md
resources/
tests/
    _autoload.php
    Bootstrap.php
    phpunit.xml.dist
    run-tests.php
    run-tests.sh
    TestConfiguration.php.dist
    TestConfiguration.php.travis
    ZendTest/
        {component directories}
```

The structure follows PSR-0, with each component below the library/Zend/ directory.
The goal is to have individual component repositories, each with the following structure:

```
.coveralls.yml
.gitattributes
.gitignore
.php_cs
.travis.yml
composer.json
CONTRIBUTING.md
src/
LICENSE.txt
phpunit.xml.dist
phpunit.xml.travis
README.md
test/
    bootstrap.php
    {component test cases}
```

In the above structure, note the following differences:

- Source and unit test files now follow PSR-4, and can be found directly beneath the new src/ and test/ directories (which replace library/ and tests/, respectively), without any directory nesting based on namespace (unless any subnamespaces are present).
- The README.md file will need to be specific to the component. Additionally, it can incorporate what was in the INSTALL.md file originally.
- The composer.json file will need to be for the component, not the framework. Additionally, we don't currently list dev/testing dependencies in our component repos, so those will need to be added.
- The TestConfiguration.php.* files define constants referenced by the unit tests; those can be migrated to the phpunit.xml.* files, which we can move to the project root to simplify testing.
- The .travis.yml file can be streamlined, as we're now only testing one component.
- Most testing infrastructure can be removed, as it's around simplifying running tests for individual components within the larger framework.
- The Bootstrap.php gets renamed to bootstrap.php to avoid being confused with unit test files.
- README-GIT.md gets replaced with a lengthier CONTRIBUTING.md file.

On top of all this, we had the following requirements:

- The components MUST have full history from 2.0.0rc7 forward. This is so those working on the components can see the why and who behind commits.
- Commit messages MUST reference original issues and pull requests on the ZF2 repository; again, this is to facilitate the why behind changes.
- Ideally, history should contain only history for the given component.
- The directory structure in each commit, including (and especially!) tags, MUST follow the proposed structure.

How we got there

One of the huge benefits to using Git is the ability to rewrite history. (It's also one of its scariest features.) It provides a number of facilities for doing so, from rebase to grafts to subtree to filter-branch. In our component split research, we evaluated several solutions.

Grafts

Grafts provide a way to merge two different lines of history together, but, for our purposes, also allow us to prune history. Why would we do this? Because we don't really need history prior to 2.0.0 development at this point. In large part, this is because it's irrelevant; files were moved around and changed so much between forking from the 1.X tree and 2.0 that tracing the history is quite difficult.

I eventually found a methodology for pruning that looks like this:

```
$ echo bb50be26b24a9e0e62a8f4abecce53259d707b61 > .git/info/grafts
$ git filter-branch --tag-name-filter cat -- --all
$ git reflog expire --expire=now --all
$ git gc --prune=now --aggressive
$ rm .git/info/grafts
```

It's supposed to essentially remove history before the given sha1. What I found was that by itself, I noticed little to no change in the repository, other than size; I could still reach earlier commits. However, when coupled with the final techniques we used, it meant that we effectively saw no commits prior to this point.

subtree

git subtree is a "contributed" git command; it's not available in default distributions of git, but often available as an add-on package; if you install git from source, it's in the contrib tree, where you can compile and install it. Subtree provides a rich set of functionality around dealing with repository subtrees, allowing you to split them off, add subtrees from other projects, and even push commits back and forth between them. At first blush, it seems like an ideal, simple solution:

- Split each of the library/ and tests/ component subtrees into their own branches.
- Create a new repository, and add each of the above as subtrees.

```
$ git clone zendframework/zf2
$ git init zend-http
$ cd zf2
$ git subtree split --prefix=library/Zend/Http -b src
$ git subtree split --prefix=tests/ZendTest/Http -b test
$ cd ../zend-http
$ # add in basic assets, and create initial commit
$ git remote add zf2 ../zf2
$ git subtree add --prefix=src/ zf2 src
$ git subtree add --prefix=test/ zf2 test
```

Indeed, if you do the above, when done, the directory looks exactly like it should! However, the history is all wrong; if you check out any tags, you get the full ZF2 tree for the tag. As such, subtree fails one of the most important criteria right off the bat: that each commit and tag represent only the component.

subdirectory-filter

subdirectory-filter is one of the git filter-branch strategies. It operates similarly to subtree, but also rewrites history. We used this approach when splitting the various "service" (API wrapper) components from the main repository prior to the first ZF2 stable release. The basic idea is similar to that of subtree; the difference is that you have to begin with separate checkouts for each of the source and tests.

```
$ git clone zendframework/zf2 zend-http-src
$ git clone zendframework/zf2 zend-http-test
$ cd zend-http-src
$ git filter-branch --subdirectory-filter library/Zend/Http --tag-name-filter cat -- --all
$ cd ../zend-http-test
$ git filter-branch --subdirectory-filter tests/ZendTest/Http --tag-name-filter cat -- --all
$ cd ..
```
```
$ git init zend-http
$ cd zend-http
$ # add in basic assets, and create initial commit
$ git remote add -f src ../zend-http-src
$ git remote add -f test ../zend-http-test
$ git merge -s ours --no-commit src/master
$ git read-tree -u --prefix=src/ src/master
$ git commit -m 'Merging src tree'
$ git merge -s ours --no-commit test/master
$ git read-tree -u --prefix=test/ test/master
$ git commit -m 'Merging test tree'
```

Again, this looks great at first blush; all the contents for the given component are rewritten perfectly. But when you start looking at previous tags and commits, you see an interesting picture: based on the commit and which remote you added first, you'll see a completely different directory structure. Like subtree, this fails our criteria that the repo be in a usable state at any given commit.

tree-filter

Like subdirectory-filter, tree-filter is a filter-branch strategy. tree-filter allows you to rewrite the tree contents any way you want, while retaining the commit message and metadata. This turned out to be what we were looking for! However, there were a few more pieces we needed to address:

- Rewriting commit messages referencing issues and pull requests to link to the main ZF2 repository.
- Pruning empty commits.
- Ensuring tags contain the expected tree.

Fortunately, filter-branch has other strategies for just these purposes:

- msg-filter allows you to rewrite commit messages.
- commit-filter provides tools for detecting and removing empty commits.
- tag-name-filter ensures that tag references are rewritten when the parent commits change or are removed.
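The message rewrite in particular is easy to prototype outside of git. Here is a Python sketch of the intended substitution (the function name is mine, and the backreferences are spelled out explicitly): a bare issue or pull request reference like #123 becomes zendframework/zf2#123, while identifiers preceded by a letter are left alone.

```python
import re

def rewrite_refs(message, repo="zendframework/zf2"):
    """Prefix bare #123 issue/PR references so they keep pointing at
    the original ZF2 repository after commits move to a component repo."""
    return re.sub(r'(^|[^a-zA-Z])#([1-9][0-9]*)',
                  r'\g<1>{}#\g<2>'.format(repo), message)

print(rewrite_refs("Fixes #100; see also #2491"))
```

Feeding sample commit messages through a helper like this is a quick sanity check before committing to a multi-hour filter-branch run.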
So, what we ended up with was something like the following:

git filter-branch -f \
    --tree-filter "php /path/to/tree-filter.php" \
    --msg-filter "sed -re 's/(^|[^a-zA-Z])(#[1-9][0-9]*)/\1zendframework\/zf2\2/g'" \
    --commit-filter 'git_commit_non_empty_tree "$@"' \
    --tag-name-filter cat \
    -- --all

/path/to/tree-filter.php is a script that contains the logic for re-arranging the directory structure, as well as rewriting the contents of files as necessary (e.g., rewriting the contents of composer.json, or filling in the name of the component in the CONTRIBUTING.md).

The msg-filter looks for issue and pull request identifiers (a # character followed by one or more digits), and rewrites them to reference the main repository (so that #123 becomes zendframework/zf2#123). The commit-filter checks to see if the repository contents have changed in this commit, and, if not, instructs git to ignore the commit (and, since tree-filter always executes before commit-filter, the comparison is always between rewritten trees). The tag-name-filter MUST be present, and essentially just ensures that the tag is rewritten; if absent, tags are not rewritten, and refer to the original contents!

Stumbling blocks

We had a few stumbling blocks getting the above to work.

The first was that, for purposes of testing, we had to specify a commit range, instead of -- --all. This was necessary because of the size of the repo; at ~27k commits, running over every single commit can take between 5 and 12 hours, depending on git version, HDD vs ramdisk, speed of I/O, etc. For small subsets, we could get consistent results. When we expanded the range, we started seeing strange errors, such as some tags not getting written. To compound the situation, we also made a last-minute change to only do history from the 2.0.0rc7 tag forward, and this is when things completely fell apart. A large number of tags would not get rewritten, the set of malformed tags varied between components, and we couldn't figure out why.
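A msg-filter is just a shell command that reads a commit message on stdin and writes the rewritten message to stdout, so this kind of rewrite can be sanity-checked in isolation. A quick sketch (sample commit message invented for illustration; assumes GNU sed's -r extended-regex flag):

```shell
# Feed a sample commit message through a msg-filter-style sed expression.
# The pattern matches "#<digits>" not preceded by a letter and prefixes it
# with the upstream repository name.
msg='Fixes #123; see also discussion on #4'
printf '%s' "$msg" | sed -re 's/(^|[^a-zA-Z])(#[1-9][0-9]*)/\1zendframework\/zf2\2/g'
# Fixes zendframework/zf2#123; see also discussion on zendframework/zf2#4
```

Note that the leading capture group is re-emitted via \1 so that the character before the reference (a space, a parenthesis, etc.) is preserved.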
At a certain point, I recalled that git stores commits as a tree, and that's when I realized what was happening: when we specified a commit range, we were essentially specifying a specific path through the commits. If a tag was made on a branch falling outside that path, it would not get rewritten. This meant that the only way to get consistent results that met our criteria was to run a test over the full history. Fortunately, sometime around that point, a community member, Renato, suggested I try a run using a tmpfs filesystem — essentially a ramdisk. This sped up runs by a factor of 2, and I was able to validate my hypothesis within an evening.

Another stumbling block was empty commits. We originally used filter-branch's --prune-empty switch, but found it was generally unreliable when used with tree-filter. The solution to this problem is the commit-filter as listed above; it did a stellar job.

Empty merge commits

There was one lingering issue, however: when inspecting the filtered repository, we still had a large number of empty merge commits that had nothing to do with the component. After a lot of searching, I found this gem:

$ git filter-branch -f \
>     --commit-filter '
>         if [ z$1 = z`git rev-parse $3^{tree}` ]; then
>             skip_commit "$@";
>         else
>             git commit-tree "$@";
>         fi' \
>     --tag-name-filter cat -- --all
$ git reflog expire --expire=now --all
$ git gc --prune=now --aggressive

The above uses a commit-filter which compares the tree of the commit being written ($1) against the tree of its first parent ($3, resolved via rev-parse); if the two are identical, the commit introduces nothing for this component, and it is skipped (removed). The reflog expire and gc commands clean up and remove any objects in the repository that are now no longer reachable.

Final Solution

With a working graft, tree-filter, and commit-filter in place, we could finally proceed. We created a repository containing all scripts we needed, as well as the assets necessary for rewriting the component repository trees.
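The tree-equality test at the heart of that commit-filter is easy to verify on a scratch repository: a commit whose tree matches its first parent's tree is exactly the kind of commit the filter skips. A minimal demonstration (repository, file names, and messages invented for illustration):

```shell
# Build a throwaway repo with one real commit and one empty commit, then
# compare each commit's tree against its parent's tree, the same way the
# commit-filter does with $1 and $3^{tree}.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m 'initial'
echo content > file.txt
git add file.txt
git -c user.email=a@b -c user.name=demo commit -q -m 'real change'
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m 'empty commit'

# HEAD is the empty commit: its tree is identical to its parent's tree,
# so the commit-filter's test would skip it.
[ "$(git rev-parse 'HEAD^{tree}')" = "$(git rev-parse 'HEAD^^{tree}')" ] && echo "HEAD is empty"

# HEAD^ is the real commit: trees differ, so it would be kept.
[ "$(git rev-parse 'HEAD^^{tree}')" != "$(git rev-parse 'HEAD^^^{tree}')" ] && echo "HEAD^ has changes"
```

The `rev^{tree}` syntax peels a commit to the tree it points at, which is what makes the string comparison in the filter possible.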
We then had a tool that could be executed as simply as:

$ ./bin/split.sh -c Authentication 2>&1 | tee authentication.log

And with that, we could sit back and watch the component get split, and push the results when done. You can see the work in our component-split repository.

But what about the speed?

"But didn't you say it takes between 5 and 12 hours to run per component? And aren't there something like 50 components? That would take weeks!"

You're quite astute! And for that, we had a secret weapon: a community contributor, Gianluca Arbezzano, working for an AWS partner, Corley, which sponsored splitting all components in parallel at once, allowing us to complete the entire effort in a single day. I'll let others tell that story, though!

The results

I'm quite pleased with the results. The ZF2 repository has ~27k commits, 67 releases, and over 700 contributors; a clean checkout is around 150MB. As a contrast, the rewritten zend-http component repository ended up with ~1.7k commits, 50 releases, ~160 contributors, and a clean checkout clocks in at 5.4MB! So the individual components are substantially leaner! Additionally, they contain all the QA tooling necessary to start developing against for those wanting to patch issues or create features, making development a simpler process.

The lessons learned:

- tree-filter is your friend, if your restructuring involves more than one directory and/or adding or removing files.
- tag-name-filter MUST be used anytime you use filter-branch; otherwise your tags may end up invalid!
- filter-branch should be used on ranges sparingly, and ideally only if you're not worried about tags. In most cases, you want to run over the entire history.
- commit-filter is your best option for ensuring empty commits of any type are stripped, particularly if you're using tree-filter; the --prune-empty flag is not terribly reliable.
- Always do a full test run.
It's tempting to use a commit range to verify that your filters work, but the results will differ from running over the entire history. Which leads to:

- Schedule plenty of time, particularly if your repository is large. Those full test runs will take time, and, if you follow the scientific process and make one change at a time, you may need quite a few iterations to get your scripts right.

All-in-all, this was a stressful, time-consuming, thankless task. But I am quite happy with the results; our components look like they are and were always developed as first-class components, and have a rich history referencing their original development as part of the encompassing framework.

Kudos!

I cannot thank Gianluca and Corley enough for their generous efforts! What looked like a task that would take days and/or weeks happened literally overnight, allowing us to complete a major task in Zend Framework 3 development, and setting the stage for a ton of new features. Grazie!
https://mwop.net/blog/2015-05-15-splitting-components-with-git.html
ontheheap (Member)
Content Count: 1571
Community Reputation: 798 (Good)
Rank: Contributor

Happy pride everyone!
ontheheap replied to Zahlman's topic in GDNet Lounge

This type of crap makes my blood boil. Hopefully those scumbags will at least lose their jobs for this, but I doubt it. Maybe someday we'll live in a society where we all have respect and compassion for fellow humans.. I doubt I'll live to see it, but someday...

how to split a string in c++?
ontheheap replied to Storyyeller's topic in For Beginners's Forum

There's boost tokenizer and boost string algorithm. Example:

#include <iostream>
#include <string>
#include <vector>
#include <boost/tokenizer.hpp>
#include <boost/algorithm/string.hpp>

using namespace std;
using namespace boost;

int main()
{
    // here's a string in the format you described, plus
    // two empty vectors v1 and v2
    string text = "A,B,C::D,E,F";
    vector<string> v1;
    vector<string> v2;

    // tokenize the string by the "::" character
    // and get an iterator to the first tokenizer result
    char_separator<char> sep("::");
    tokenizer<char_separator<char>> tokens(text, sep);
    tokenizer<char_separator<char>>::iterator it = tokens.begin();

    // split (defined in algorithm/string.hpp) will tokenize
    // the string pointed to by it and store each token in
    // the specified container
    split(v1, string(*it), is_any_of(","), token_compress_on);
    ++it;
    split(v2, string(*it), is_any_of(","), token_compress_on);
}

Edit: Added links.

Learning Physics
ontheheap replied to guyver23's topic in Math and Physics

Have you checked out Khan Academy? He has a bunch of short video lectures on all sorts of topics, including physics lectures on everything from projectile motion to electromagnetism (basically what you would cover in two semesters of physics).
Quick question in SDL
ontheheap replied to Joshuad's topic in For Beginners's Forum

Make sure your main function is defined as:

int main( int argc, char* argv[] )

Make sure you are linking against SDL.lib and SDLmain.lib

Edit: Fixed the wrong version of "your" above ;)

Correct game C++ OOP impl. in main?
ontheheap replied to programering's topic in General and Gameplay Programming

Quote: Original post by Sneftel
Everyone who posted in this thread needs to read about the bike shed.

That was great. "... the amount of noise generated by a change is inversely proportional to the complexity of the change." =)

using string to trim filename from path
ontheheap replied to SelethD's topic in For Beginners's Forum

Quote: Original post by SelethD
I tried system.io.path, and its not working for me. I need simply to know how to remove the text from a String after the last occuring '/'

What do you mean "its not working" for you? Please post the relevant code. I suppose you could do it with String.LastInstanceOf and String.SubString.

using string to trim filename from path
ontheheap replied to SelethD's topic in For Beginners's Forum

Quote: Original post by SelethD
I am new to C# and I have never used the String class much. I have a complete path in a String

String myPath = "C:\\directory\\subdirectory\\file.dat";

what can i do with myPath to get it to hold the data... "C:\\direcotory\\subdirectory\\"
Thanks for your help, I have googled this and all I find is a buch of VB stuff.

Check out System.IO.Path. I believe the method is GetDirectoryName.

c++ sending keypresses to website
ontheheap replied to Belgium's topic in For Beginners's Forum

Look into InternetExplorer objects in VBScript.
Example:

Set IE = CreateObject("InternetExplorer.Application")
IE.Visible = False
IE.Silent = True
IE.Navigate ""
Do While IE.Busy
Loop
IE.Document.f.q.Value = "gamedev"
IE.Document.f.submit()
Do While IE.Busy
Loop
IE.Visible = True

Need text wrapping code
ontheheap replied to XTAL256's topic in General and Gameplay Programming

A comment system we use at work has limitations on the number of characters that will fit on a line, as well as the number of lines that will fit on a screen. I wrote an HTA script to make posting the comments easier. Here's my text-wrapping code.. written in VBScript so should be easy to follow and translate to whatever language you need. Each line gets stored as a new array entry, then the array can be looped through and a new line started at the beginning of each element.

Function WrapComment()
    strComment = UCase("CLRC Updated by " & mainform.workerNumber.value & ", " & mainform.comment.value)
    strComment = Replace(strComment, vbCrLf, " ")
    strDirtyComment = ""
    strWrapText = "(((%%%WRAPTEXT.$$--</ br>--$$.WRAPTEXT%%%)))"
    arrWordsTmp = Split(strComment, " ")
    intRunningLength = 0
    intWrapLength = LineSizeCLRC

    ' Any single word (that is, block of characters without a space) that is
    ' longer than the line size gets broken up into words the size of 1 line
    tmpStr = ""
    For Each Word In arrWordsTmp
        If Len(Word) >= intWrapLength Then
            ' figure out how many lines it will take up
            numOfLinesNeeded = Len(Word) / intWrapLength
            temp = Round(numOfLinesNeeded)
            If temp < numOfLinesNeeded Then
                numOfLinesNeeded = temp + 1
            Else
                numOfLinesNeeded = temp
            End If

            ' Break the long word up into chunks that will fit on a clrc comment line
            For t = 1 To numOfLinesNeeded
                If t = numOfLinesNeeded Then
                    tempStr = tempStr + Mid(Word, ((t-1)*intWrapLength)+1, intWrapLength)
                Else
                    tempStr = tempStr + Mid(Word, ((t-1)*intWrapLength)+1, intWrapLength) + " "
                End If
            Next
        Else
            tempStr = tempStr + Word + " "
        End If
    Next
    tempStr = Trim(tempStr)
    arrWords = Split(tempStr, " ")
    arrTrailingCharacters = Split(tempStr, " ")
    For x = LBound(arrTrailingCharacters) To UBound(arrTrailingCharacters)
        arrTrailingCharacters(x) = " "
    Next

    For x = LBound(arrWords) To UBound(arrWords)
        intRunningLength = intRunningLength + Len(arrWords(x) & " ")
        If intRunningLength = intWrapLength Then
            arrTrailingCharacters(x) = strWrapText
            intRunningLength = 0
        End If
        If intRunningLength >= intWrapLength And x > 0 Then
            arrTrailingCharacters(x-1) = strWrapText
            intRunningLength = Len(arrWords(x) & " ")
        End If
    Next

    For x = LBound(arrWords) To UBound(arrWords)
        strDirtyComment = strDirtyComment & arrWords(x) & arrTrailingCharacters(x)
    Next

    arrDirtyComment = Split(strDirtyComment, strWrapText)
    i = 0
    For Each commentLine In arrDirtyComment
        ReDim Preserve CommentArray(i)
        CommentArray(i) = commentLine
        i = i + 1
    Next
    If i = 0 Then
        ReDim Preserve CommentArray(0)
        CommentArray(0) = ""
    End If
End Function

Please help me understand some basic electrical concepts
ontheheap replied to Drakkcon's topic in GDNet Lounge

OMG I would have liked to have known about this site when I took physics last semester! I didn't really understand anything in the book we used.

when to get a Wii
ontheheap replied to Funkymunky's topic in GDNet Lounge

Quote: Original post by CyberSlag5k
Quote: Original post by Thevenin
Is it true that you can play and browse online with the Wii for free (using an existing internet connection)? That Microsoft Live subscription nonsense was/is rediciously overpriced.
I believe it costs $5 to active the browser, but after that it's free. Though I think right now the browser is in beta mode, and is free for the time being. Don't quote me on that, though.

Yeah, it's supposedly going to cost 500 Wii points ($5.00 USD) but it's free until June 2007. Right now it's just a trial version, with the final version available in March.

Wii remote... on the PC?
ontheheap replied to evolutional's topic in General and Gameplay Programming

I've tried out DarwiinRemote on my Macbook (in OSX).
It worked ok (it was a little hard to control the pointer). I'm downloading the software on wiiscript right now to check it out.

Edit 1: Meh. My WiiMote is able to connect to Windows but none of the GlovePIE scripts work. I'm having trouble getting the BlueSoleil bluetooth software working (which seems to be the recommended software if MS bluetooth doesn't work with Glove). Oh well, I'll spend more time on this later to see if I can get it working.
[Edited by - ontheheap on December 23, 2006 1:13:29 PM]

Am I and idiot or is programming just hard
ontheheap replied to monsterenergy's topic in For Beginners's Forum

You declare buymenu() as returning type int, but you have no return statement. I'm surprised it even compiles. VC++ Express issues an error. Based on what buymenu() does, I think you should make its return type void. Or, you could get the user's input within buymenu() and then return the menu option selected.

void buymenu()
{
    // stuff
    return;
}

Or:

int buymenu()
{
    // stuff
    int choice;
    cin >> choice;
    return choice;
}

WiiMote with candles no sensor bar
ontheheap replied to Alpha_ProgDes's topic in GDNet Lounge

Quote: Original post by Alpha_ProgDes
Quote: Original post by Michalson
I would try this but I already have my Wii turned on to make this post.
Is there a WiiBoard or do you use the WiiMote to type?

WiiMote with OnScreen keyboard. It has that "intellitype" thing where it guesses what word you want as you input letters. It's pretty easy to "type" on, but it would be much better if you could just use a wireless keyboard.

Help - please make my game idea not suck..
ontheheap replied to ArchangelMorph's topic in Game Design and Theory

Event Two reminds me of one of the American Gladiator games, called Assault. It was one of my favorite games on the show.
https://www.gamedev.net/profile/66098-stembro/
Hello, I would like to set a default value for the time tracking fields for certain users (customers), because I do not want those customers to see the real value (it seems that it is not possible to hide the field entirely).

I tried to attach the following behaviour to the field I want to initialise with a default value, but the behaviour is never executed. Do you know why? When I add a "Comment" behaviour it is executed, but when I add one of the time tracking fields as a behaviour it is never executed. I don't know why.

So what I tried instead was to execute the code inside the comment behaviour, so that the Groovy code sets the value of the time tracking field to zero:

def estimatedTime = getFieldById("tt_single_values_orig");
estimatedTime.setFormValue(0);

But it does not work for some reason. It seems that the id is wrong. What is the id for manipulating the original estimate field? Is there any way to do that? Maybe I'm doing something wrong in my script?

Greetings,

Your comment about the Groovy code caught my attention in that it isn't supported in JIRA OnDemand. You might want to re-label this so that the JIRA gurus will see it. Here is the link for the OnDemand restricted functions if interested:

Sorry I couldn't be more help.

Cheers,
Jason
https://community.atlassian.com/t5/Jira-questions/Se-time-tracking-value-to-0-for-customer/qaq-p/236151
Warp

Performs a perspective warp based on ground control points to align images.

plantcv.transform.warp(img, refimg, pts, refpts, method="default")

returns image after warping

- Parameters:
    - img - binary or grayscale image to warp
    - refimg - image used as a reference for the warp
    - pts - coordinate points on img. 4 pairs should be given as a list of tuples
    - refpts - corresponding coordinate points on refimg. 4 pairs should be given as a list of tuples
    - method - method of calculating the transformation matrix. Available options are 'default', 'ransac', 'lmeds', 'rho', which correspond to the OpenCV methods and vary based on how they handle outlier points
- Context:
- Example use:

A mask derived from a RGB image can be used to segment an NIR image which is difficult to segment otherwise.

Input image: a mask derived from a RGB image, 2056x2454.

An image from a SWIR camera is used as the reference image to define the transformation; it is 7000x5000. In this case we know the field of view of the two images is the same, so we can use the image corners to define the transformation. In other cases you might need to establish corresponding control points in each image.

from plantcv import plantcv as pcv

# Set global debug behavior to None (default), "print" (to file),
# or "plot" (Jupyter Notebooks or X11)

mrow, mcol = mask.shape
vrow, vcol, vdepth = grayimg.shape

img_warped = pcv.transform.warp(img=mask, refimg=grayimg,
                                pts=[(0, 0), (mcol - 1, 0), (mcol - 1, mrow - 1), (0, mrow - 1)],
                                refpts=[(0, 0), (vcol - 1, 0), (vcol - 1, vrow - 1), (0, vrow - 1)],
                                method='default')

Here is the warped mask:

Here is the warped mask overlayed on the reference image:
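Since both point lists in the example are just the four image corners in the same clockwise order, a tiny helper (not part of PlantCV; written here only for illustration) can build them from an image's height and width:

```python
def corner_points(rows, cols):
    """Return the four corner coordinates of a rows x cols image as
    (x, y) tuples, clockwise from the top-left, matching the order
    used for pts/refpts in the warp example above."""
    return [(0, 0), (cols - 1, 0), (cols - 1, rows - 1), (0, rows - 1)]

# e.g. for the 2056 x 2454 mask:
print(corner_points(2056, 2454))
# [(0, 0), (2453, 0), (2453, 2055), (0, 2055)]
```

Note the x/y vs row/column swap: the tuples are (x, y), so the column count feeds the first element.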
https://plantcv.readthedocs.io/en/latest/transform_warp/
From: Steven Watanabe (steven_at_[hidden])
Date: 2007-12-20 11:13:21

AMDG

Ion Gaztañaga <igaztanaga <at> gmail.com> writes:
>
> Hi all,
>
> The formal review of the Unordered library started December 7, we have a
> few nice reviews, but they are not enough. If you are interested in the
> library (and I *know* you are), please take some time to review it.

Sorry I didn't do a full review earlier. I just have a few implementation comments.

allocator.hpp

line 118:
typedef typename Allocator::value_type value_type;
Is there a reason you're not using allocator_value_type?

lines 165 and 217:
reset(ptr_);
I don't think you want ADL here.

hash_table.hpp

line 66: float_to_size_t:
I don't think the test used is correct. The following program prints "0" under msvc 8.0:

#include <limits>
#include <iostream>

int main() {
    std::cout << static_cast<std::size_t>(
        static_cast<float>(std::numeric_limits<std::size_t>::max())) << std::endl;
}

hash_table_impl.hpp

lines 137-140: Do you want ADL for hash_swap?

line 865: Is there a reason not to use cached_begin_bucket_?

line 1282:
return float_to_size_t(ceil(
should qualify ceil with std::

line 1361:
rehash_impl(static_cast<size_type>(floor(n / mlf_ * 1.25)) + 1);
*std::*floor?

The implementation files use BOOST_DEDUCED_TYPENAME but unordered_map and unordered_set use typename. Could you make it consistent?

I'm getting a lot of warnings on the tests from msvc 8.0 with /W4 because minimal::ptr/const_ptr only defines operator+(int) and the internals call operator+ with a size_t. Is + required to work for the size_type or should it be cast to the difference_type explicitly? Also, minimal::ptr should use std::ptrdiff_t rather than int.

In Christ,
Steven Watanabe

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2007/12/131817.php
In my previous post, about exporting data to Excel from an API using React, there were comments like "how to add custom headers and styles to the sheet!?". So, considering these comments, in this post I decided to show the solution for the first issue, which is adding custom headers to your Excel sheet. Two ways of adding custom headers will be shown.

First way

Setup

Create a new project:

npx create-react-app react-data-to-excel

Run project locally:

npm start

Let's dive into the next step.

Install libraries

For this project we need to install the following libraries:

npm install xlsx file-saver axios

- xlsx - library for parsing and writing various spreadsheet formats
- file-saver - library for saving files on the client-side
- axios - promise based HTTP client for the browser and node.js. We will use it for fetching data from the server.

Components

Inside your project create a component ExportToExcel.js

import React from 'react'
import * as FileSaver from "file-saver";
import * as XLSX from "xlsx";

export const ExportToExcel = ({ apiData, fileName }) => {
  const fileType =
    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet;charset=UTF-8";
  const fileExtension = ".xlsx";

  const exportToCSV = (apiData, fileName) => {
    const ws = XLSX.utils.json_to_sheet(apiData);
    const wb = { Sheets: { data: ws }, SheetNames: ["data"] };
    const excelBuffer = XLSX.write(wb, { bookType: "xlsx", type: "array" });
    const data = new Blob([excelBuffer], { type: fileType });
    FileSaver.saveAs(data, fileName + fileExtension);
  };

  return (
    <button onClick={(e) => exportToCSV(apiData, fileName)}>Export</button>
  );
};

Update your App.js

import React from 'react'
import axios from 'axios'
import './App.css';
import { ExportToExcel } from './ExportToExcel'

function App() {
  const [data, setData] = React.useState([])
  const fileName = "myfile"; // name of the downloaded file

  React.useEffect(() => {
    // Fetch rows from an API and reshape them so that the object keys
    // become the Excel headers. The original snippet was truncated here,
    // so the endpoint below is a stand-in example.
    axios.get('https://jsonplaceholder.typicode.com/posts').then((res) => {
      const customHeadings = res.data.map((item) => ({
        "Article Id": item.id,
        "Article Title": item.title,
      }));
      setData(customHeadings);
    });
  }, []);

  return (
    <div className="App">
      <ExportToExcel apiData={data} fileName={fileName} />
    </div>
  );
}

export default App;

According to the official SheetJS CE docs, by default json_to_sheet creates a worksheet with a header row.
This first way of adding headers works by reshaping each array item into an object, based on our needs. The headers for the Excel file come from the object keys we define; in our case the headers will be "Article Id" and "Article Title".

Run project

npm start

Once the project has started successfully, click the button to download the Excel file.

Result

Second way

There is no need to reshape the array inside App.js. Just add this line

XLSX.utils.sheet_add_aoa(ws, [["Name", "Birthday", "Age", "City"]], { origin: "A1" });

inside your ExportToExcel.js file:

const exportToCSV = (apiData, fileName) => {
  const ws = XLSX.utils.json_to_sheet(apiData);

  /* custom headers */
  XLSX.utils.sheet_add_aoa(ws, [["Name", "Birthday", "Age", "City"]], { origin: "A1" });

  const wb = { Sheets: { data: ws }, SheetNames: ["data"] };
  const excelBuffer = XLSX.write(wb, { bookType: "xlsx", type: "array" });
  const data = new Blob([excelBuffer], { type: fileType });
  FileSaver.saveAs(data, fileName + fileExtension);
};
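Stripped of React, the first way boils down to a plain map over the API rows; the sample rows below are invented for illustration, and json_to_sheet would then use the resulting object keys as the header row:

```javascript
// Reshape API rows so that the object keys become the Excel headers.
// `rows` stands in for the data returned by the API.
const rows = [
  { id: 1, title: "First post" },
  { id: 2, title: "Second post" },
];

const customHeadings = rows.map((item) => ({
  "Article Id": item.id,
  "Article Title": item.title,
}));

console.log(Object.keys(customHeadings[0]));
// [ 'Article Id', 'Article Title' ]
```

Renaming, dropping, or reordering columns all happen in this one map, which is why the first way needs no changes to ExportToExcel.js.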
You should definitely mention just HOW BIG the xlsx module is. Its over 400kb of minified javascript. Thats nothing you should casually load alongside your app just because someone might want to save an excel file. And if you want to create a React table component, it should not be built with excel export included in the component. That logic definitely belongs someplace else. There may be a rare case of when there is really no other way than generating the excel file in the browser - but the way your article is written most people get the idea that there is no downside at all to that approach. "Just install those three modules" sounds simple but in the background you are getting truckloads of additional code into your application. Your article seems to be aimed at beginners. So big red warning: you would normally NOT want to do it this way, but there MAY be reasons to do so. I'm not the author by the way, just a passerby. I agree it has an impact on performance and should have more consideration in a real product implementation, but again, that was not the point of this article (at least in the way I interpret it). The table example was simply based on MUI data grid, which was on top of my head at that time. There is one big catch though, you must define cell format yourself. Let's say you already had an Excel sheet (with styling), and want to use that sheet to store data. An approach is to open your existing sheet, copy all existing cell information, write API data, then save to a new file (in-place replacement is not available AFAIK). However, xlsxwill nullify all existing style of the sheet so you must manually define cell style upon exporting. Apparently, this feature is only available in the Pro version, so it is a huge blockage IMO. How can I show values into rows rather than showing it in column Nice question, I will research on this than answer to you Sure,Thank you
https://dev.to/jasurkurbanov/how-to-export-data-to-excel-from-api-using-react-incl-custom-headers-5ded
able.)) Express Delivery...is not available today (just normal delivery) The Only Newspaper That Is Not Controlled By The Cabal Who Are You What Are You Do- AAAAAA May, 4th 2013 - May the Fourth be with you! • Issue 182 • Sponsered by (and biased towards) the Rebel Alliance. This is not "Wednesday or Thursday" Are you back, then? Or did you just find a cyber-cafe in your tropical vacation paradise? Spıke Ѧ 01:21 5-May-13 - I'm back now, spent Thursday riding in the car (4 or 5 hour drive) back to my parents house, spent Thursday night over there since it was late. On Friday my uncle came in from New Jersey to visit us so I stayed around at my parents house long enough to see him (my parents live about an hour an a half drive from my place), and by the time he got in and we finished dinner, it was late so I spent Friday night over. My mom went out to an event on Saturday with her sorority (yes, it's a non-collegiate sorority for "education and entertainment purposes") so I stuck around until she got back so I could say goodbye, and in the meantime borrowed her computer. Also, yes, there was a computer room where we were staying near the ocean (Long Shores WA)and I did check in a couple times but didn't log in, looked like everything was under control when I checked in for about a half hour at a time. Thought briefly about logging in to make a forum comment but decided it wasn't important. There were only two computers for the whole resort building though, so I couldn't stay on those computers too long. But yes, now I'm back. And my cat is happy to see me. -- Simsilikesims(♀UN) Talk here. 04:59, May 5, 2013 (UTC) Hello. Welcome:03, May 5, 2013 (UTC) - Thanks, it's good to be back, where I can goof around on my computer AND watch TV at the same time, and have access to all my mp3's. -- Simsilikesims(♀UN) Talk here. 06:17, May 5, 2013 (UTC) I have always said that pristine beaches and fascinating local culture and cuisine are overrated! 
(Yes, I have been around computers when the concept of "background processing" while you do work was sorcery, and that multiple windows with moving contents was unthinkable.) The joint looks like it's lousy with Admins today, which is good, because I am in my third of three all-day sessions working a baseball tournament. No such major distractions between now and the start of our tiny regular summer season in June (which I don't work but only watch...depending on who quits between now and then). Spıke Ѧ 10:59 5-May-13 Ban Patrol Why list an IP on Ban Patrol for creating the one-word article ("Hello.") and then ban him yourself? Complete paperwork? The first thing I thought of when I saw him and deleted the one-word article was our "Hello. And goodbye" vandal whom I took care of at SpamRegex. I suspect he can easily change IPs anyway. Spıke Ѧ 01:39 6-May-13 - I wasn't the one who listed him on Ban Patrol. I was tempted to leave him be, but I decided he could stand to read HTBFANJS and learn a quick lesson. If he is the "Hello. And goodbye" vandal, he more than likely could change IPs, but that is something to be dealt with when it happens. -- Simsilikesims(♀UN) Talk here. 01:50, May 6, 2013 (UTC) I see; I misread it, as it was you who provided the details. No problem and no harm done by the two-hour ban. Spıke Ѧ 02:15 6-May-13 My new friend Tonight's vandal has identified himself as the site's old "Juicy taste of dia**hea" vandal, who predates me. There are defenses against him everywhere, all of which assume he will not alter his attack strategy. Rollback doesn't work when he hits the same page from two different IPs; there are about two dozen of these left in RecentChanges/hide-patrolled. But my T-Mobile service lets me work at only about one-twentieth of his pace. Meanwhile, another Anon is trying to soften up ScottPat for a move to the Fork. I suspect my old mentor MrN9000, as he behaves as a dick while appealing to me personally not to do the same in response. 
Sleazy, like everything else about the fork. Sleazier than Wikia. Spıke Ѧ 02:01 8-May-13 - It could be ANYONE from the fork who left the message on ScottPat's page. Unfortunately, this "new Uncyclopedia" is beginning to seem more and more like "New Coke" (if you remember that marketing fiasco - nobody liked it and they took it off the market). True, the censoring here can be draconian, but at least it keeps people from becoming preoccupied with the thing censored and then that becoming their primary focus. True, the content warning here "breaks the joke", but Wikia has required it due to legal issues I suspect, plus possibly one of their executives got offended by some of the content here and wanted to make it clear that Wikia does not officially endorse any of this. At the fork, they don't have problems with people going over the admins heads and complaining to wikia; thus no content warning there is required, but they are also at risk for potential lawsuits, plus they have their expensive server which is not paid for yet with the donations they have received thus far. One big lawsuit could sink their site easily, we aren't exposed in the same way. The fact that they are still coming here to cherry pick our editors is sleazy indeed, but it also means they aren't attracting enough new users on their own. -- Simsilikesims(♀UN) Talk here. 02:13, May 8, 2013 (UTC) - Whoops, accidentally rolled back your edits to this page while removing the fresh batch of steaming vandalism. I guess I deserve 49 lashes for not being observant enough as to which edits I was reverting. ◄► Tephra ◄► 03:24, May 8, 2013 (UTC) - Would you like your lashes with a wet noodle or a feather? -- Simsilikesims(♀UN) Talk here. 03:25, May 8, 2013 (UTC) - A wad of cash if possible. Also, I don't seem to have the ability to rollback all of the vandal's edits. I rolled back all that I could, but the ones that I haven't done would require me to manually undo them... why would that be? 
◄► Tephra ◄► 03:30, May 8, 2013 (UTC)
- If someone else (including the vandal using another IP) edits after the vandal, then rollback only works on the last editor of the page, and edits by someone other than the last editor of the page have to be undone manually, or one can go to the history and edit a version of the page prior to the vandalism. -- Simsilikesims(♀UN) Talk here. 04:15, May 8, 2013 (UTC)
- Well I knew that... However I just checked the history of one of the edits and saw that I had reverted an older edit which had been vandalized twice more and reverted twice before I finally reverted the original hit. So yes, you are right, I just hadn't realized the page had been edited four times between my revert and what I was reverting. Whew... I hope that made sense. I am not used to dealing with this level of malicious behavior, although once I did have to deal with a coordinated attack from twenty different vandals at once on another wiki where I am a bureaucrat. ◄► Tephra ◄► 04:52, May 8, 2013 (UTC)
This UnSignpost may be unsuitable for some viewers!
The Newspaper Made Entirely From Recycled Internet Memes May 10th, 2013 • Issue 183 • Uncyclopedia set to have a bright future. Sunglasses stocks are running low.
Update Otherwall
I queried the author about this at User talk:Rpsingh. Would you take a look and decide what action is warranted? Spıke Ѧ 20:25 9-May-13
- I think the ICU on it is sufficient. The name probably isn't really anybody's real name, so I don't think that this is vanity, but it really needs to have more humor in it, not read like a dry biography. After all, this is a humor site, not a fiction site. A hoax about a person who doesn't exist isn't exactly humor material. -- Simsilikesims(♀UN) Talk here. 06:02, May 10, 2013 (UTC)
Now blanked by yet another Anon (undone by Graphium, a new user who has done some good policing).
As I told him, protecting this article against the too-many-Anons-spoiling-the-soup is not a solution; that would also freeze out Rpsingh for a couple more days. Speaking of food again, I recently went to WP:Cooking utensils to verify (for the evolving Cap'n Crunch) that the military weapon I had in mind was called a whisk. For some reason the entire table at Wikipedia struck me as hilarious and desperately in need of satire, for the benefit of everyone from prissy cooks to sloppy cooks to malevolent cooks. Spıke Ѧ 10:09 10-May-13
- If you want to make an article on Cooking Utensils, go right ahead; I would consider myself in the realm of lazy cooks (heat up a microwave dinner and call it done). But yes, the topic is ripe for satire. I don't know how it would be done without resorting to a list-like format though. -- Simsilikesims(♀UN) Talk here. 17:51, May 10, 2013 (UTC)
Hi, is the email in your preferences right? I've mailed a couple of times recently about some srs stuff (well, a bit). Can you let me know if there's another address I should use? Many thanks -- sannse (talk) 23:50, May 9, 2013 (UTC)
- The email @yahoo.com should be the correct one, stuff gets lost in there though because I am not checking it constantly and I get more offers than I need. -- Simsilikesims(♀UN) Talk here. 05:25, May 10, 2013 (UTC)
- PS Found your email and replied to it. -- Simsilikesims(♀UN) Talk here. 17:52, May 10, 2013 (UTC)
Autopatrolled for Anton199
I agree with you that Anton is a good team player. I had not given him Autopatrolled because his English is still spotty enough that I like to check his edits. However, he mostly edits a few specific projects in collaboration with ScottPat, so this is not a big problem. Spıke Ѧ 19:11 10-May-13
- I do not usually have my preferences set to see only unpatrolled edits on Recent Changes, and I like to see what our established editors are up to, so I can still check on these edits from time to time.
Usually someone will come along and fix grammar and spelling eventually, if I don't happen to see it. I'm sorry if that someone happens to be you, I don't like to see you having to do all the work, but as you can understand, time and energy are limited and I do like to do things outside this site. Now that I'm back, and not working full-time, I should be on the site more often (when I'm not playing Facebook games or editing Castleville wiki which has basically one active editor, which is me, plus an admin (not me)). -- Simsilikesims(♀UN) Talk here. 19:20, May 10, 2013 (UTC)
Wow: one Indian and one Chief. That could be us except that our Chief saw fit to elect two new Chiefs. I only filter for unpatrolled when I am willing to turn on JavaScript, which slows my T-Mobile USB suppository to its knees. Snippy often patrols overnight, and I've given ScottPat some instruction; however, during my baseball absence last weekend, we fell a weekend behind. Spıke Ѧ 20:57 10-May-13
Thank you!
I'm probably the first active user to log on for a while since you've been on. I just woke up, checked my Uncyc. watchlist and it is full of pages that you have reverted an IP attack on. Thanks, you have done a tremendous job and I still see more pages on my watchlist that need reverting so I'll help you:14, May 16, 2013 (UTC)
Please ignore above as I'm pretty sure the reverting isn't you but a bot of some type. If you have come to this page to thank Sims then you have misunderstood the situation like I did. The bot with Simsie's name is simply reverting the last IP edit on each page; however, below that there are many other vandal edits. All the pages it says Simsie has reverted haven't actually been fixed. More on village dump in new forum.:37, May 16, 2013 (UTC)
- The vandalism is so extensive that I have started by rolling back edits of the last vandal to visit the page. The rollback tool apparently has its limits; I had to manually roll back more edits beyond that.
I am slowly working through the unpatrolled edits, and hope to have all this cleaned up eventually. Will check the forum. -- Simsilikesims(♀UN) Talk here. 06:40, May 16, 2013 (UTC)
That explains why. I presumed that the rollbacks were a bot as rolling it back to the last IP hasn't worked and there are many that you have edited that still have the vandalism:46, May 16, 2013 (UTC)
PS - Thanks for the work you've done to revert the vandals., May 16, 2013 (UTC)
Vandal repair
This (1800) works for me (but only because I'm at McDonald's drinking coffee). 1500 doesn't work. Spıke Ѧ 00:43 17-May-13
- I have been hiding unpatrolled edits in my preferences in order to work on this vandal issue. However, the default is 500, so I took a peek at the 1800 to see if progress is being made. Unfortunately, no light at the end of the tunnel yet. -- Simsilikesims(♀UN) Talk here. 00:49, May 17, 2013 (UTC)
I don't agree; somewhere between 1500 and 1800 the pre-vandalism unpatrolled changes appear, and 1800 gives the same list as 2000 (or at least it ends in the same place). Spıke Ѧ 00:56 17-May-13
I'm sorry, that's not true; am now seeing a tunnel past the "end of the tunnel." Spıke Ѧ 00:58 17-May-13
Which means some of the pages I "patrolled" to keep them from popping back up as "unpatrolled" will do so anyway later. Spıke Ѧ 01:29 17-May-13
- At least it takes me less time when I realize that you have already fixed a page I am patrolling when I go in to fix it. -- Simsilikesims(♀UN) Talk here. 01:37, May 17, 2013 (UTC)
We are down under 1500 now. Spıke Ѧ 03:35 20-May-13
- Yes, thank heavens the show 1800 tool now shows pre-vandal edits to be patrolled. Next goal: to get it under 500. -- Simsilikesims(♀UN) Talk here. 03:37, May 20, 2013 (UTC)
I got it down to 650--nothing left but the Forums. Spıke Ѧ 21:43 20-May-13
- I now have patrolled all the rest of the vandal edits from May 15th. There are still some pre-vandal edits to patrol from May 15th, and May 16th to go.
-- Simsilikesims(♀UN) Talk here. 00:28, May 21, 2013 (UTC)
You are awesome! Your PC is too. But mine was yard-sale cheap. Spıke Ѧ 00:35 21-May-13
- I've got two like that in my back room (one that runs XP and still goes online occasionally, and one that runs Win 98 and never goes online anymore). Both hard drives of those computers are full or nearly so, however. The XP in the back room has a hard drive so full I can't install Service Pack 3 on it either. The notebook I'm using now I got in January last year, so it's not spanking new, but I don't have to worry about the confusing new Windows 8 interface either. I like computers to act like computers, not like tablets. -- Simsilikesims(♀UN) Talk here. 00:41, May 21, 2013 (UTC)
"Like computers, not like cartoons" is the way I would put it. Spıke Ѧ 01:00 21-May-13
Hey, I see a bunch of "Unpatrolled" flags still on May 15th edits. As that sign by the toilet says, The job isn't done until the paperwork is complete! Spıke Ѧ 17:20 21-May-13
- I just finished patrolling the rest of the pre-vandal edits on May 15th just now - if you are still showing unpatrolled edits from that date when you refresh the page, something isn't in sync here. -- Simsilikesims(♀UN) Talk here. 19:00, May 21, 2013 (UTC)
The difference is between "patrolling the edits" and marking the edits as patrolled. [1] If you are saying you are sure the vandalism from May 15th is undone, I can go through and clear the flags. But you have the sexier computer. Spıke Ѧ 19:11 21-May-13
- I did mark them as patrolled, and they show marked on my computer. Strange. Also, all forums that are protected have been patrolled and marked patrolled. I don't know why your computer doesn't show them as marked. Maybe I should experiment with another browser to see if I see unmarked results on another browser. -- Simsilikesims(♀UN) Talk here. 19:14, May 21, 2013 (UTC)
You marked what as patrolled? and how?
If you used the "patrol these changes" JavaScript button for RecentChanges, you are only marking the edits that appear on the report, and if the report was not long enough to show the entire vandal attack, it is earlier hits that are now coming up in the report. Spıke Ѧ 19:19 21-May-13 - Yes, I did use the "patrol these changes" JavaScript button. But when I refresh my page to show recent changes, I am not showing any new changes from the 15th to mark patrolled. I plan to experiment with using Chrome instead of Firefox to see if I can see any "unpatrolled" changes that are coming up as unmarked. -- Simsilikesims(♀UN) Talk here. 19:23, May 21, 2013 (UTC) Using a different browser might render the report differently on your screen (colors, fonts) but will not change the information in the report; and there is no reason you should not be getting the same report I am if you type. There are screenfuls of edits between 17:26 and 18:34 (your time) on the 15th that are unpatrolled, and on several of these, the page is not completely repaired. Spıke Ѧ 19:46 21-May-13 - I tried using Chrome instead of Firefox, but am still not showing any unpatrolled edits from the 15th. Which pages are you showing? -- Simsilikesims(♀UN) Talk here. 21:06, May 21, 2013 (UTC) Again, your particular browser renders a report for display on your screen; but what's in the report is done at Wikia. The first three relevant pages in my report: All show unpatrolled edits at 18:34 your time. The pages themselves contain vandalism at the top. There are hundreds more. PS--I mentioned the current confusion at User talk:Furry. Spıke Ѧ 21:24 21-May-13 PPS--Hmm, if you have your account set to display UTC rather than local time, these would be on the 16th. Spıke Ѧ 21:25 21-May-13 - That would explain everything. These are indeed showing up as the 16th rather than the 15th, and I do have my preferences set to display UTC rather than local time. 
I still have lots to do from the 16th (or the 15th local time), including those pages you mentioned above. -- Simsilikesims(♀UN) Talk here. 21:28, May 21, 2013 (UTC)
- UPDATE: I have got the vandal edits down to only 00:49 and 00:50 from the 16th, hopefully throttling should prevent so many edits from occurring at once from IP's in the future. -- Simsilikesims(♀UN) Talk here. 17:14, May 22, 2013 (UTC)
Yes, throttling is working so well that even I was able to fix stuff on Monday faster than our vandal was able to break stuff. Spıke Ѧ 17:25 22-May-13
- So glad to hear it! I think I finally got the last of that vandal mess cleaned up now. Routine patrolling can resume as normal. -- Simsilikesims(♀UN) Talk here. 18:23, May 22, 2013 (UTC)
Confirming that. Thanks again. Spıke Ѧ 18:46 22-May-13
What do you think about the quality of this article?
In its construction state, what do you think about the humour, the structure and the images so far in my article about League of Legends? I will send it in for a real review when it's finished. (Saddex (talk) 12:49, May 17, 2013 (UTC))
- User:SPIKE attempted to review it, but he doesn't know too much about gaming. So, even in its state of construction, how would you rate the article, on 0-50? (Saddex (talk) 19:23, May 17, 2013 (UTC))
- For a proper review out of 50 please put this article on pee review. Then ask Simsie to review:11, May 17, 2013 (UTC)
- This article has a lot of redlinks. It is useless to compare it to Dota2 when the reader doesn't know what Dota2 is. Also, I am not familiar with this particular game, though I have some idea about multiplayer games. I have not actually played World of Warcraft, for instance; mainly my experience with multiplayer games is with those originating on Facebook, plus some Ragnarok a few years ago. I am also not familiar with the expression "GLHF" and without this the parody of this expression falls flat.
We do have an article on N00B that you can link to in your article, and I'm not sure exactly what you are parodying when you refer to "Op". In IRC, this refers to a channel operator (admin); is it similar? I do recall a two-player game I played years ago on a Macintosh called "Don't Fence Me In" which has similar gameplay to the game you are suggesting here - it was on a black and white screen, each player controlled a moving line, and the lines were constantly moving and expanding, you could control the direction of movement, and the idea was to make sure your opponent ran out of space before you did. This was back in the 1980's of course. So thanks for bringing back the memories there, but your article does need some more work to make it funnier. -- Simsilikesims(♀UN) Talk here. 01:09, May 18, 2013 (UTC)
- The coloured lines were a completely random idea by me, and aren't even close to the gameplay of the real game. GL HF stands for "Good Luck Have Fun". The "overpowered;op" thing is just what many people blame when they can't admit they failed and got killed by another player. I will add more about the GL HF thing, such as "some dumb people thinks it means "Good luck have fun...". I think LoL players will get a good laugh. Nobody here has said anything of the images yet though. (Saddex (talk) 01:21, May 18, 2013 (UTC))
- Rather than saying "dumb people", it would be funnier if you said "ignorant people". The former means having a low IQ, the latter means simply lacking knowledge or being unaware. You don't want to insult some of your target audience. -- Simsilikesims(♀UN) Talk here. 01:27, May 18, 2013 (UTC)
Major reconstruction
I decided to rewrite large parts of the article. It's still not finished, but what do you think? (Saddex (talk) 21:57, May 19, 2013 (UTC))
- Starting the article out with an overused "yo mama" joke doesn't help it much.
The hoax (I had to check the Wikipedia article on League of Legends) about the development team splitting off from Blizzard is a nice touch, but not sure if it is exactly funny to nonplayers of the game. I do like how you explained what this Dota2 business was about. The hard thing here will be to try to make the article appeal to both players and nonplayers, without making it too long. If it was going to be a feature, it would have to be hilariously funny to both players and nonplayers, which could prove impossible in this case. But I think you are on the right track now, since those who do not play League of Legends are unlikely to look up the article unless they get it by roll of the die by hitting "random article". For the future, you will probably have to monitor the article to make sure that IP's don't (A) vandalize the article, (B) add vanity to the article, and (C) try to modify the article to reflect the truth. Especially C, though all articles have issues with A and B. -- Simsilikesims(♀UN) Talk here. 00:06, May 20, 2013 (UTC)
Trombone
Anon uploaded this in one gulp; it is in bad shape and comes complete with a {{Fix}} tag. Where did it come from? the mirror site? Spıke Ѧ 19:28 17-May-13
- Suspicion confirmed; it did indeed come from carlb's mirror. Apparently this article had a fix tag that expired. That, or it got VFD. -- Simsilikesims(♀UN) Talk here. 00:57, May 18, 2013 (UTC)
Yup, the huff log suggests that the Chief deleted it last August. Can't plant a deleted article here and abandon it; it's gone. Separately, did you get my email? Wolverhampton is with us right now, but impeded by the throttle, and Frosty is handling him in real time. Romartus and ScottPat repaired many articles, probably not most of the old Forums, but lots of patrol flags are still set. Spıke Ѧ 01:06 18-May-13
Transistor
As you saw, Transistor redirects to MOSFET transistor.
MOSFET (metal-oxide-semiconductor field-effect transistor) adds utterly nothing to the humor of that article (and is dated; MOS gave way to complementary-MOS, or CMOS, in the 1980s). What if I move the meat to Transistor, ditch "MOSFET", and huff the old page? Spıke Ѧ 02:03 18-May-13
- That would be fine with me; I probably wouldn't have thought to look for "MOSFET transistor", and besides, if it gave way to CMOS in the 80s, that was almost before my time. I don't have an extensive electronics background, besides. -- Simsilikesims(♀UN) Talk here. 02:09, May 18, 2013 (UTC)
Denza252
Did you notice on User talk:Romartus that Denza252 claims to have just returned from Taiwan and implies that the brouhaha with IRC pranks and sockpuppets was all an impostor? Spıke Ѧ 02:38 18-May-13
- Yes, I did. Either he is joking, or he needs to create a new account. He begged me on IRC to unblock him, so I gave him 2 more weeks, down from infinity, to think about fixing his signature and so on. I will go respond on that page. -- Simsilikesims(♀UN) Talk here. 03:41, May 18, 2013 (UTC)
- There needs to be a film made about this. It is such a complicated saga no one understands what the heck is going:12, May 18, 2013 (UTC)
- Agreed, and I think it has something to do with the Illuminati, or some disgruntled IRC server monkey... I can't be arsed to find out. Oh, and this is Denza252's new account, so, direct all queries to me --The Slayer of Zaramoth DungeonSiegeAddict510 02:45, May 23, 2013 (UTC)
The UnSignpost has arrived...Quick hide!
The Newspaper the Whole Family Must Enjoy! May 18th, 2013 • Issue 184 • Vandalpedia strikes back! Luckily the Jedi will return.
Active admins
I replaced my name in the list at UN:AA with my signature file a while ago, by way of personalization, and I invite you to do the same. Especially, it identifies you as a lady and some readers contemplating whom to make a request of might be more comfortable with that.
Spıke Ѧ 23:12 18-May-13
- I have done so; now those looking for help via that page will also find the link to my talk page via my signature. -- Simsilikesims(♀UN) Talk here. 02:25, May 19, 2013 (UTC)
Genius Factor Games
In my opinion, {{Construction}} is too good for this. Anon saddled us with an article solely for the joy of ranting about a game company and calling Ted Nugent a "fat piece of shit." It seems clear the author will never improve it, it doesn't belong in mainspace, and there is no way to userspace it. I don't want to delete it over your tag, but please! it's beggin' for it! TheDarthMoogle agreed but put it on VFD, for which he got a 2-hour technical foul. Spıke Ѧ 12:29 20-May-13
- Nearly 24 hours have elapsed with no work done on it, so I am inclined to agree. Since there are three users now that agree that this article is below standards and unlikely to improve (SPIKE, DarthMoogle, and myself), I will huff it myself. -- Simsilikesims(♀UN) Talk here. 21:16, May 20, 2013 (UTC)
User Uncycloperson
I criticized new user Uncycloperson for edits that were excessively based on Uncyclopedia memes. He appended a CONGRATULATIONS template to his talk page which struck me as an upright middle finger. Later, he uploaded a photo of someone he "met once" who taught him a sad lesson about web hook-ups, at File:Random Prissy Rich Girl That Can Be Used.jpg. I deleted it with a summary that photos of non-famous people indeed Cannot Be Used. His next work was Gay Rights, which Mhaille saw fit to delete. He is now on Mason-Dixon line, a listy and unfunny collection of stereotypes about redneck racist Southerners and "the sudden urge to lynch a nigger" that has not improved since you tagged it. Not suggesting you do anything except keep an eye on him. Spıke Ѧ 01:00 21-May-13
- That Congrats template is a result of the article Scam and one of the tricks I added into it. I doubt it was added there by any sense of spite.
The rest of his edits come across as pretty basic noobishness. I'd suggest give him a prod and a poke to try and get him to ease off a few of his excesses, but not much more at this stage. • Puppy's talk page • 02:25 21 May 2013 On the contrary, I tend to think things mean things, and not necessarily what you meant when you wrote it. Meanwhile, there are more interesting people to prod and poke. By the way, did you notice that Wolverhampton returned a few minutes ago, and that I was able to ban and revert, even working through T-Mobile? Spıke Ѧ 02:30 21-May-13 PS--Thank you for giving him a poke yourself. Wolvy, by the way, is no longer editing articles at the start; and had some of his edits marked bot edits. What does this mean? and how does Anon get to run a bot here? Spıke Ѧ 02:38 21-May-13 - I know as much as you do on Wolfgang. Maybe it's time to bring in VSTF? As for things mean things - prefer assume good faith. The poke will allow him to indicate more one way or t'other. • Puppy's talk page • 02:42 21 May 2013 I take your point. Sannse is anxious that we invite in VSTF. I replied to email today that we appreciated the help Furry gave us during the previous attack. Not sure how it would be interpreted if VSTF's patrolling extended to matters of content and taste, and were done in a way we would not have done. Spıke Ѧ 02:46 21-May-13 - From what I've seen elsewhere they're respectful of the way things are done locally. They work on obvious trolls, so taste shouldn't be an issue. And most of the members I've chatted to in the past are approachable, so if things are done poorly they're open to listen. I say bring them in - if it ends up being an issue we can always ask that they leave again. • Puppy's talk page • 02:53 21 May 2013 - Also - Hi Sims! • Puppy's talk page • 02:54 21 May 2013 Well, it is late here, and the baseball game was not interesting from the start (from the local point of view). Please keep an eye on the site. 
Spıke Ѧ 02:59 21-May-13
Thanks
Hey simsie, thanks for unbanning me. A care package of 1 pie has been dispatched to you. I will resume editing over the summer; expect small edits right now. --The Slayer of Zaramoth DungeonSiegeAddict510 03:06, May 22, 2013 (UTC)
- Congrats on fixing your signature (FINALLY). -- Simsilikesims(♀UN) Talk here. 03:09, May 22, 2013 (UTC)
Sacrafice someone?
Sacrafice [sic] the most spelling-deficient Uncyclopedian? After I just nominated him for Writer of the Month??? Spıke Ѧ 00:43 23-May-13
- Actually, that was just a joke, meant in the spirit of the joke just before it. Hopefully nobody's sacrificing anybody. Besides, I hadn't noticed ScottPat's spelling mistakes. Shabidoo's spelling mistakes were a little too obvious in the sentence prior to my forum post. And yes, I really was district alternative in my school's spelling bee years ago. -- Simsilikesims(♀UN) Talk here. 00:50, May 23, 2013 (UTC)
- I got it. ScottPat must not have "apollagised" to you yet. (So, no one ever taught you not to write, "That isn't me"?) Spıke Ѧ 01:19 23-May-13
- Nope, and nope. Plus I have probably learned some bad habits along the way. -- Simsilikesims(♀UN) Talk here. 02:58, May 23, 2013 (UTC)
- Did Spike just spell something in English-English? Wow! Well there's a first time for everything. Thanks for voting for VFH:Emu War Simsie and I'm glad someone else finds the topic of car insurance amusing. (Do you have those bloody annoying car insurance ads on TV?). Also thanks for voting on writer of the month. I'm not sure where you said sacrafice [sic.] so I haven't a clue what Spike's on about in terms of apolagising.:44, May 23, 2013 (UTC)
- PS - And yes my spelling is awful but that is because I write hurrid:47, May 23, 2013 (UTC)
- PPS or PSS - I've seen the forum now I thought Simsie wanted to sacrifice Al, May 23, 2013 (UTC)
- We do indeed have annoying car insurance ads here.
For a while GEICO was flipping between the gecko and the caveman, and Allstate has someone talking in a bass (or deep tenor) voice like the Allstate ad person does. Nationwide has a commercial where a lady sings "Nationwide is on your side" off-key alongside the Nationwide insurance guy, hence my little addition to the article there. Of course, all this will be outdated in a few years. Then there's MetLife with the Peanuts comic strip characters, but I wasn't sure where to go with that, nor do I know whether they have offices in the UK. Also, Shabidoo was the one who started all this business about Al being likely to be the first one sacrificed (and he badly misspelled sacrifice), so I thought I would joke about Shabidoo being the one to be sacrificed. Hope this clears things up a little. -- Simsilikesims(♀UN) Talk here. 02:16, May 26, 2013 (UTC)
- Yeah thanks. We have none of those specific car insurance adverts mentioned. We have adverts that advertise companies that compare car insurance such as the "Go Compare" opera man jumping out of bushes and singing in people's faces and the "Compare the Meerkat" meerkats from Russia (I know!) telling you not to compare the:43, May 26, 2013 (UTC)
Sotir
If you are going to do something about this new article, you should first review the entire year-long career of author Maistor310, including Deleted User Contributions. There is a series of photo uploads and articles about obscure Bulgarians no one has ever heard of. The articles have been deleted; the photos should too, and this seems to be more of the same. I first posted to Romartus, as he dealt with this guy more recently; so did Xamralco. Spıke Ѧ 22:15 24-May-13
Review
I sent in my League of Legends article for Pee review. More should be added to the 'Bugs' section; however, I don't know what more to add there right now. (Saddex (talk) 23:18, May 25, 2013 (UTC))
VFH
Thanks for voting for the Ukraine article and the Boxes article.
Much appreciated.:34, May 29, 2013 (UTC)
The UnSignpost hath cometh
In Pure Russian Fashion, The Newspaper That Reads
Hello
I am with you for another couple of hours. I've had a talk with Pennyfeather regarding the crap that gets to remain in mainspace versus the crap that doesn't. Separately, ScottPat and I went to work on sporking and perverting Wikipedia's featured article of the day, which will certainly not be ready for an Uncyclopedia tit-for-tat before tomorrow. Spıke Ѧ 01:47 1-Jun-13
- Thanks, I'll go check out the discussion. I look forward to more good articles on here! -- Simsilikesims(♀UN) Talk here. 01:48, June 1, 2013 (UTC)
Also, there is Fresh Meat for you at VFD. Spıke Ѧ 02:37 1-Jun-13
Armenian Federation
Where do you think this one was sporked from (all in one Big Gulp)? Do we want it? Spıke Ѧ 18:45 2-Jun-13
A quick question...
Am I allowed to slap a welcome template on the new users' talk pages, if the admins don't get to them? Just wondering; I won't do anything until I get a response. Not even a peep. --The Slayer of Zaramoth DungeonSiegeAddict510 18:01, June 3, 2013 (UTC)
- Yes, but please include a warm friendly message and links to the help pages.:07, June 3, 2013 (UTC)
- And get the inclusion of {{BASEPAGENAME}} to work, and don't drop names of specific Admins (or that of ScottPat, who isn't an Admin at all and is not some sort of official welcomer), and maybe-just-maybe look at what you've posted and see that it worked correctly before walking away. Spıke Ѧ 19:18 3-Jun-13
- In other words: Recommending specific Admins to new users is not "welcoming"; it is politicking. Spıke Ѧ 19:35 3-Jun-13
- Unless you are recommending me DSA, then I don't mind (do we have an article on False Modesty)? --:00, June 3, 2013 (UTC)
- I second Romartus on recommending me on your welcome page:22, June 3, 2013 (UTC)
- Sorry I didn't respond sooner - my internet connection was down Saturday, and I was away on Sunday.
But ScottPat and Spike are right; (1) Please feel free to put a welcome template on new users' talk pages if they haven't been welcomed yet. Usually I use {{subst:Welcome}} to get the BASEPAGENAME to work properly on the welcome template. An alternative I used some time ago is to simply copy and paste the text of the Welcome template from the Welcome template page onto the new user's page. But using subst is a quick shortcut. (2) Avoid name-dropping; you don't speak for the admins even if you are friendly with us, and recommending one admin over another or others is bad manners (it is impolite to the other admins). -- Simsilikesims(♀UN) Talk here. 00:19, June 4, 2013 (UTC)
Ah thanks for clarifying, and I made it unspecific to any 1 admin. --The Slayer of Zaramoth DungeonSiegeAddict510 00:55, June 4, 2013 (UTC)
Banned again!
I'm afraid I banned your little bundle-of-joy again. You may wish to intervene. Spıke Ѧ 23:10 4-Jun-13
- I reviewed his contributions, and your notes on his talk page. Looks like n00bishness to me, rather than malicious behavior. I would have warned him rather than blocked him for a week; dealing with userpage templates can be difficult for beginners. Still, he should have read the rules at QVFD; somehow he overlooked the part about the redirect template. As a first-time offense, this is something I might ban for one day as a consequence. -- Simsilikesims(♀UN) Talk here. 00:19, June 5, 2013 (UTC)
It is more than n00bishness, but less than malevolence: a chronic need for everyone else to pay attention to him, rather than either he or we writing funny stuff. I did warn him with two hours earlier in the day. It is not a first-time offense; remember that I permabanned him (as Denza252) and you brought him back. Do what you like with him, but he is high-maintenance. Spıke Ѧ 00:39 5-Jun-13
- Taking the scorefix hijinx into account, I changed the ban to 2 days from 1 week.
I think he will learn in time that to get the respect and attention he wants he will need to either write good funny stuff or do essential site maintenance (undoing unfunny IP edits for instance). We do not have a glut of users here, so I hate to drive one away just because he continues to act n00bish. -- Simsilikesims(♀UN) Talk here. 00:52, June 5, 2013 (UTC)
Very well. To be clear, there was no vote-rigging involved here. And he did good work on saving an article on VFD, then on the day in question, casting the decisive 5th vote on another article--but in both cases, with a loud Victory Lap, the latter involving a post at a closed vote. Even money says he will not learn what you expect him to. Spıke Ѧ 01:24, 13:05 5-Jun-13
Er, just to clarify, the edits I'm doing as an IP aren't ban evading, just doing some reverts on spam edits... I'm not touching any article that doesn't need a simple undo. And yes, I do think that the template fiasco was a bit n00bish on my part, but I didn't want to put it in the mainspace... sorry 'bout that --The Slayer of Zaramoth DungeonSiegeAddict510 20:12, June 6, 2013 (UTC)
Denza: (1) Don't tell us what ban evasion is. (2) You are not just policing articles but "removing some of my own profanity." (3) You are undoing edits and scolding editors in the Change Summary about their tastes in humor and asserting that you are the owner of the articles they are editing. Editing while banned is certainly ban evasion, and trying to maintain your control over articles is too. Spıke Ѧ 19:56 6-Jun-13
- Er... I should have phrased that better... What I should have said was "It wasn't for the purpose of ban evasion." Also, not to defy your authority, but I felt that some of the anon edits were a bit more spammy than you like. In addition, I thought that some content on my WIP article was less funny than I thought before, so I felt I should change it.
If I have offended you, I'm sorry --The Slayer of Zaramoth DungeonSiegeAddict510 20:12, June 6, 2013 (UTC)

- Consider this a warning: if you edit as an IP while banned, you risk getting your IP banned. If you continue to do the same thing you were banned for, you will definitely have your IP banned. Generally, when banned, don't act as if you "own" an article, since IP's don't own articles. In fact, you don't really "own" anything here, except what is in your userspace, since it is all released under Creative Commons, and may be added to by anyone. When you are unbanned, you can revert edits that don't make sense within the context of the article, or that are unfunny.
- What Spike is saying here is, simply take a wiki break while banned. Take some time to read existing articles (notably the Beginner's Guide and the features) rather than edit existing articles. Some of the editors here actually use MS Word or Wordpad to edit articles offline without directly editing the wiki, and they can plan the strategy for the articles that way, though the wiki formatting doesn't necessarily come out well that way. Finally, it is impolite to scold others about their tastes in humor: everyone has different tastes in what they think is funny or not funny. That is why we hold votes to feature or delete articles rather than just have a random admin do it. That is also why we have Pee Review, though we are behind in that department currently. -- Simsilikesims(♀UN) Talk here. 21:52, June 6, 2013 (UTC)

User Wakkoswish123

Wakkoswish123 is back, fresh off his ban and back to his old mischief, touting on Talk:Furry a "version 2" of the page, though his change mostly restores some crudeness and some point-of-view from his last edit to it. I reinstated his ban; would you please review the case? Also, we are mostly caught up on patrolling edits, but do you have an opinion on Armenian Federation (see above)?
Spıke Ѧ 12:09 5-Jun-13

- I have left my input on the talk page of the Furry article regarding Wakkoswish123's changes. I will go check out the Armenian Federation article. -- Simsilikesims(♀UN) Talk here. 18:53, June 5, 2013 (UTC)
- The Armenian Federation article appears to be about the quality of your average Uncyclopedia article. It could be funnier, but it does have some humor to it. It doesn't look like it was copied here from the fork or the mirror (I just looked into that), so it is original to this site as far as I can tell. I am letting it stay without a construction tag. -- Simsilikesims(♀UN) Talk here. 18:57, June 5, 2013 (UTC)

Fine on both counts; except:

- Wakkoswish123 cannot develop an alternative in userspace. Because I banned him. If you think he'll listen to you, unban him; but please review the ban log: When his last ban expired, he picked up right where he left off.
- Regarding your post-edit to Furry: Changing "inhabit" to "can be found inhabiting" is one of those encyclopedia clichés that set me off. Say it simply! Spıke Ѧ 19:04 5-Jun-13

- He is banned for six months; I have a feeling he will return after 6 months. If he hasn't learned his lesson then, he'll get a year. Hopefully he will see the reply on the talk page. I will undo my change, since you find it clichéd; I didn't see much difference, really, in the wording. -- Simsilikesims(♀UN) Talk here. 19:11, June 5, 2013 (UTC)

Infobox

Myself, I love the infoboxes at the country pages. They are filled with lots of random information. The image wasn't broken; it was some sort of bug on Wikia (I have experienced it on my "main" wiki, but it didn't take too long until they were visible). Just click on the "broken" sign, which will lead you to its page. I will upload a photoshopped flag later.
(Saddex (talk) 22:36, June 5, 2013 (UTC))

- There used to be lots of infoboxes on country pages; they were removed because IP's tend to add to the infobox lists, particularly the section on Imports and Exports, effectively ruining the infoboxes. Besides, the infoboxes even became the subject of editing wars, since people couldn't agree what should or shouldn't go in there. Also, in general, avoid editing featured articles like America; they are already good the way they are. Besides, adding a map takes away the focus of the article from the fast food angle and moves it to cartography. If the article had a more general focus, the map would have been ok, but the reader's attention is instantly pulled to the map and away from the text, which is undesirable. -- Simsilikesims(♀UN) Talk here. 22:45, June 5, 2013 (UTC)
- Yup. Bizzeebeever's documentation at Template:Infobox notes that the editor can use an Infobox for good or eee-vil. BB also begs the editor that he doesn't have to fill out every single field if he doesn't have something authentically clever and funny to put there. Indeed, the bigger the Infobox the more magnetic the attraction for Anon to add crap to it.
- Dittos on editing featured articles. You can bring them up to date, but it is presumptuous to think your arrival is going to "make it funny" at long last.
- Speaking of adding crap, I reverted virtually all of BetterSkatez2012's edits yesterday, explaining why on his talk page. He is back again, adding listcruft (I think not identical listcruft, but not better listcruft) at Roller coaster with Change Summary: please don't revert. Would you like to take a look? Spıke Ѧ 22:59 5-Jun-13
- However, a flag will still be added. I have planned to add a hamburger and a fat man on a tractor in it. Was the article in a perfect state? No. The first sentence, before I edited it, called America a "badass country". "Badass" can sometimes be defined as a somewhat positive term. No articles should be pro-subject.
They should joke and indirectly insult the subject via jokes, lies and randomness. (Saddex (talk) 23:04, June 5, 2013 (UTC))

- No, some articles can be pro-subject; they don't all have to insult the subject, as long as they are still funny. Take, for instance, George W. Bush, which deliberately praises the subject, who many think was one of America's worst presidents. Likewise, the article on Michael Jackson deliberately takes the fanboy/fangirl point of view, praising Michael Jackson and being funny by being totally blind to the pedophile charges. Both were collaborations, and were deliberately planned out with a comedy strategy. Pay attention to see if you can determine what comedy strategy was used before editing the article. You can add a flag to the America article provided (1) it isn't so large that it overpowers the text; and (2) the picture is quality, not a bad photoshop or worse, an MS Paint product. A feature article deserves a good image. See Uncyclopedia:How To Be Funny And Not Just Stupid THE IMAGE VERSION for more details. -- Simsilikesims(♀UN) Talk here. 23:12, June 5, 2013 (UTC)
- The featured version of the article (2008) began, "America is the name of the world's largest fast food restaurant." Someone later changed it to "largest badass country." This could be fanboyism and is unencyclopedic, but there was some fun going on here, as the first example of badassness was Bedford-Stuyvesant, a district no one feels is bad-ass. Your opinion above--or, in the Change Summary, It should instead be offensive to Americans--is absurd. There is no orthodoxy on whether an article should be flattering or unflattering. It should be encyclopedic and funny--and a dozen Uncyclopedians in 2008 voted that it was. In fact, an unflattering article is less likely to amuse its most typical reader unless it insults with delicacy and skill. Spıke Ѧ 23:17 5-Jun-13

Wait, I got an idea...
I have created a page in my user namespace, User:Saddex/America, in order to bring the infobox back to life, and just construct it for fun. Then I got an idea. Why not try to fill it with as much funny stuff as possible, lock it, and then mirror it to the America page? (Saddex (talk) 23:55, June 5, 2013 (UTC))

- Sure, you can have fun with it in your userspace, and edit the userspace version as much as you want, as long as you don't mess up the mainspace version. -- Simsilikesims(♀UN) Talk here. 23:57, June 5, 2013 (UTC)
- In fact, that is a great idea, especially if your point-of-view is different from the established article; and even a Featured Article can have a template pointing to "alternate version" or "adversarial version" and vice versa, plus disambiguation pages or whatever it takes. Spıke Ѧ 00:00 6-Jun-13
- If you, SPIKE, thought of a link to an alternate version, then it wasn't exactly that I meant. I wondered if we could fill the infobox in the userspace page with as much funny stuff as possible, then lock it when we are satisfied, and then mirror it to America. That would protect it from anons (with bad skills), spammers, fanboys and so on. (Saddex (talk) 00:06, June 6, 2013 (UTC))
- I see. A page-specific Infobox template could receive separate protection and does solve the problem of Anons extending it to infinity with crap. But giving protection to a piece of a page would be unusual. What I thought you meant solves the separate problem of you having a different comedy concept from the creators of the featured article. Spıke Ѧ 00:18 6-Jun-13
- Hmm, so as I was understanding what you meant, you were changing what you meant! Separately, a map that works in my Humble Opinion is at Isle of Man. Spıke Ѧ 00:29 6-Jun-13
- What do you think about the infobox now?
User:Saddex/America (Saddex (talk) 00:42, June 6, 2013 (UTC))

- I don't like the religion part - Islam is actually one of our religions, just not the top religion (which would probably be Protestantism, with Catholicism a close second). Dollarism, or maybe just capitalism, has some truth to it, and would be funnier. -- Simsilikesims(♀UN) Talk here. 00:51, June 6, 2013 (UTC)
- Chuck Norris and non-huffable kitten are meme-cruft. Get rid; you are not practicing comedy but imitation with them; they have nothing to do with America nor, any more, with Uncyclopedia. "Emperor" George W. Bush has been out of power and discredited for five years and obviously has nothing to do with anything, unless you are about taking sides. Putting either Obama or a Republican in the Infobox will date the article and make the reader wonder where you're coming from. "Ethnic groups"? Get rid! America as a melting-pot is well-documented; if you have something new to say about this, apart from specific stereotypes on specific groups such as Mexicans, you haven't done so, and it must be especially clever. "Largest city: Guantanamo Bay"? You know that's not true, and it's not funny unless you are getting into right-versus-left politics, and even then just stating the name isn't funny unless you have a point to make, and doing so will alienate half your readers unless you are unusually smooth at it. The funniest thing about Gitmo, unless you are an expert on war policy, is that Obama said he'd shut it and didn't, and you can't do anything with that in an Infobox. "Religion: Islam" is likewise not true and, as we say, "Untrue ≠ Funny." You are trafficking in stereotypes, perhaps convinced that stating one is all you have to do, or perhaps that it doesn't matter provided you insult the American reader. Please step back and plan a comedy concept for the entire page: a misimpression of America that all your humor will feed.
Spıke Ѧ 01:01 6-Jun-13

Listcruft

Thank you for intervening with BetterSkatez2012. I haven't studied what you changed at Listcruft, but if it is motivated by his listcruft, then you should also visit HTBFANJS (Sec. 8, Avoid Lists (nearly all the time)), a section I created recently to hold and unify advice mostly given in other places. Spıke Ѧ 23:47 5-Jun-13

- Listcruft is an article created to illustrate listcruft, and I decided to add a bit more to the lists there, which furthers the purpose of the article, and possibly updates it slightly. I already learned the hard way to avoid lists - one of the articles I created as a beginner a couple years ago was voted down on VFD - 100 Worst Ice Cream Flavors. Each piece of the list had at least one sentence to go with it too, but that wasn't enough to keep it from being killed as listcruft. I have mostly avoided listcruft ever since. -- Simsilikesims(♀UN) Talk here. 23:54, June 5, 2013 (UTC)

OK; all I'm saying is that, if you were after a clearer policy statement, you should make it where authors expect to see policy statements, at HTBFANJS. My only casualty was Peruvian slang, an unabashed attempt to explain some of the inside jokes at Peru--that is, to educate, not amuse. Spıke Ѧ 00:03 6-Jun-13

What do you think about this image? (Saddex (talk) 22:01, June 6, 2013 (UTC))

- I think it would work for an alternative version of the America page, especially for the infobox, but I'd rather not see it on the featured version. -- Simsilikesims(♀UN) Talk here. 22:07, June 6, 2013 (UTC)
- User:Saddex/America - What do you think about it right now? :P (Saddex (talk) 23:52, June 7, 2013 (UTC))

2 new questions from, you guessed it, DSA510!

Er, I have 2 questions (no, I'm not going to do a "you have 2 cows" joke), so, here they are, in glorious 720p, 1080p 4K definition!:

- Can I make a welcome template of my own to use for new users?
- Can I make a little comment symbol (to go along with the for, against, and abstain symbols)?
I'm fine if the answer to either question is NYET, and take your time to answer this, I'm in no hurry, yet. --The Slayer of Zaramoth DungeonSiegeAddict510 15:54, June 7, 2013 (UTC)

- I said I can wait... but I can't wait forever! --The Shield of Azunai DSA510My Edits! 04:50, June 11, 2013 (UTC) (New sig, btw)
- Yes, you can make a welcome template of your own to use for new users, but make sure it contains all the links to the same pages that the current welcome template links to (Beginner's Guide and by extension HTBFANJS, for instance) and make sure it is polite to new users. You can also make comment symbols to go along with the for, against and abstain symbols, but make sure it has the same function as a for symbol, or an against symbol. Making a new abstain symbol might be too confusing. Make sure your symbol isn't too big (it should be the same size as the for, against and abstain symbols), and make sure that the template is formatted correctly; test it well before using. Same with your custom welcome template. -- Simsilikesims(♀UN) Talk here. 05:28, June 11, 2013 (UTC)

Yomamen

New user Yomamen has created a user page which is either (1) a resumé or (2) subtle humor or (3) loads of laughs, but most found on someone's Facebook page. Denza and I debate it on User talk:Yomamen. Would you lend a third pair of eyes? Spıke Ѧ 21:16 7-Jun-13

Godaddy12121212

Thanks for your review of the last one; now another one for you. Even newer user Godaddy12121212: Who is he, and why is he developing template after template, seemingly to help us manage the QVFD process? Spıke Ѧ 21:54 7-Jun-13

As he's actually a user, I'm unsure whether to add him to QVFD or try to move stuff into his user-space. I would say QVFD, as the titles seem to be somewhat asking for it, but what do you two think? Sir Reverend P. Pennyfeather (fancy a chat?) CUN VFH PLS 22:13, June 7, 2013 (UTC)

- Sorry, just realised what he's doing. Do we really need the template?
It seems to be like VFD but without any element of democracy. Personally, I think the current system seems fine. Sir Reverend P. Pennyfeather (fancy a chat?) CUN VFH PLS 22:21, June 7, 2013 (UTC)

- It would just be sort of in-between the two deletion mechanisms. In most QVFD cases, the author's input should be discouraged, as they'd probably just rant before their inevitable ban. My vote, for what it matters, is to politely tell this new user that we prefer our own, well-established and functional mechanisms to an untried, impractical one. Sir Reverend P. Pennyfeather (fancy a chat?) CUN VFH PLS 22:28, June 7, 2013 (UTC)
- You can tell me if I was sufficiently polite on his talk page. I appreciate your comments. Wikipedia is bigger and slower than we are, and our process is biased toward deletion (and retrieval from the mag-tapes after-the-fact if the vandal bastard should register and ask for a copy in his userspace, which is always granted unless it's something like cyberbullying). Spıke Ѧ 23:23 7-Jun-13
- I think you've handled it just fine. I'm sure it was well meant, but it certainly is an odd thing to do, going around wikis adding unsolicited deletion templates. Sir Reverend P. Pennyfeather (fancy a chat?) CUN VFH PLS 08:43, June 8, 2013 (UTC)

The IP vandal is using a bot or something; I can't revert everything. Please help. --The Slayer of Zaramoth DungeonSiegeAddict510 04:37, June 10, 2013 (UTC)

- Crisis has been averted --The Shield of Azunai DSA510My Edits! 18:19, June 10, 2013 (UTC)

Team masturbation

This first effort of a new Uncyclopedian is not predictably awful. But it is awful. And especially, some of the red links worry me that it is the first in a planned "story arc." What, if anything, do you reckon should be done with it? Spıke Ѧ 23:30 10-Jun-13

- Eh, I have been looking at that article for a while, and I have been tempted to put it in the user's userspace... but I wasn't sure what to do. I think we should slap an ICU tag on it and wait.
--The Shield of Azunai DSA510My Edits! 23:42, June 10, 2013 (UTC)

- I agree with DungeonSiegeAddict510 here, so I put the ICU tag on it, and we'll see if it gets funnier and less meme-y. -- Simsilikesims(♀UN) Talk here. 04:33, June 11, 2013 (UTC)

Adventure Quest

With my blessing (at Talk:Adventure Quest), Uncycloperson nominated this on VFD, then undid his nomination, perhaps because of a typo. If he wants to go through with it, he should be allowed to despite the tag you placed on the article. Spıke Ѧ 20:45 11-Jun-13

Denza252

DungeonSiegeAddict510 uploaded a drawing of his with a scolding to other Uncyclopedians not to use it. I deleted the upload, and banned him for 2 hours for scolding us over a nonexistent rule. He has just apologized on my talk page as Denza252--it seems that he was gaming us with the time-consuming hoax that this account was permanently hacked and unusable. I have permabanned the Denza252 account and am ready to permaban DSA510 for ban evasion unless you have a much better idea. Spıke Ѧ 23:06 13-Jun-13

- I concur on permabanning the Denza252 account; it is either hacked and not being used by the original user, or else it is being used as a sock by DSA510. However, I would not go so far as permabanning DSA510 for ban evasion; I would recommend a couple weeks to a month, as an extension of the latest ban. Remember, Puppy on the Radio did ban evasion some time ago with his sockpuppet, but was banned for three months, not ultimately permabanned. Also, this ban evasion is not as serious as that incident with POTR since DSA510 is not attempting to evade the rules of a competition or rig any votes. I have just extended his ban myself. -- Simsilikesims(♀UN) Talk here. 23:13, June 13, 2013 (UTC)

(Although he did cast one vote on VFD with the Denza252 account, which I reverted as at least voting-while-banned.)
It is frustrating as he has assembled a nice record of good patrolling, which somehow never gets beyond the recurring need to call attention to himself with rulebreaking. I'll go with the lower of your range, as I never thought he would do good 'anything' in the first place. (Edit-conflicted and now moot.) If you think there is real ambiguity whether this is ban evasion, we should find out for sure. Spıke Ѧ 23:25 13-Jun-13

- I'm sorry to evade my ban like this but I DungeonSeigeAddict510 is not the real Denza. I am the real Denza and my account is Denza252. If you could PLEASE unban Denza252 and place his ban on the imposter I will switch back to Denza252 and I will never use this account again. Sorry to get you caught up in this Denza deboccle. DungeonMaster494 (talk) 23:29, June 13, 2013 (UTC)

(Edit-conflicted and superseded) You are going to love the post to my talk page just now. We cannot sort this out without checkusers. Spıke Ѧ 23:31 13-Jun-13

- I agree: We should run checkusers on Denza252, DungeonMaster494, and DungeonSeigeAddict510. I do think that this is getting too complicated and ambiguous. -- Simsilikesims(♀UN) Talk here. 23:35, June 13, 2013 (UTC)

Okay, for what it's worth, which I assume isn't all that much, the DungeonSiegeAddict510 account, from my experience, is the real person here. A few months ago a bunch of IRC users thought it would be funny to completely fuck around with Denza and I'm personally surprised at his determination to stick through it after the kind of abuse they've forced him to go through. I speak with him daily on the darthipedia IRC (as he's been put on auto-kick from the uncyclopedia IRC by one of said jokesters from before), and it's pretty easy to tell that the Denza I'm talking to every day is the socially-awkward and self-promoting good patroller that we've come to know here and on the fork as Denza. As far as I know and he has told me, DungeonSiegeAddict510 is the only account that he uses.
I was present during the time when his account was "hacked." He was actually fooled into giving up his password to another user for "analytical purposes" or something like that, which certainly doesn't do great testament to his logical reasoning abilities, but certainly the users tormenting him saw an opportunity and took it. He's active on the fork and a good contributor. Has even attempted some writing a few times. And there he's active under the same account, DungeonSiegeAddict510. Naturally I'm here because he told me about this situation on IRC, which he always does, about everything. On both wikis. It's really quite maddening. But he's not a malicious person. He just wanted to become a member of this community and he still does, and he's been punished for that by enterprising trolls who've allowed this situation to carry on a lot longer than their ages would bely of their maturity. You can do what you want with him as, like I've said and told Denza, I don't know how much weight my input holds, but this is what I know to be the truth and I don't like to see an honest contributor ram-rodded this way. So I'd ask that you cut the kid a break, although certainly do the checkusers and such. Thanks. -RAHB 00:16, June 14, 2013 (UTC)

- For what it's worth:
- My sockpuppetry wasn't for the purposes of competition/vote rigging - I actually scored my sock deliberately lower in a competition, and avoided any vote rigging. There was a bit more behind the ban for me at that stage though. I actually was permabanned for socking. The ban was modified due to me being a long term member with generally good history blah blah.
- Funnybony also had (has) a sock account that - if I remember rightly - belonged to his partner, but ended up double voting at one stage. He was asked to not do it again when it was discovered. That sock is still unbanned, but rarely active.
- MrN has a sock he uses regularly, as do other regular users. Some are more obvious socks than others.
- My major issue with socking is either vote rigging or drama inciting. I don't know of any vote rigging in this instance - being only here irregularly at the moment - but there's definite drama. I've never relied on IRC to be an effective communication channel - the lack of accountability is an issue - but a check user is probably the best action going forward. Even if there was an issue with an imposter sockpuppet (which is just an odd concept), I'd want to have all the information I could possibly have before making a decision on what to do. Especially as it appears we have a determined noob with some writing potential here. • Puppy's talk page • 12:37 14 Jun 2013
- I should also mention User:PortuguseOttersTryRadios while I'm at it. • Puppy's talk page • 12:43 14 Jun 2013

Puppy, my friend, dragging us through your personal criminal record is a distraction. I am willing to believe that pranksters at the Fork (perhaps the same ones who wound Cat up to argue that this website is Auschwitz because I ban everyone and it's scary) saw me ban DSA510 and pounced. One of them I have permabanned, as he cannot spell "Siege" whereas DSA510 sees the word all the time when he is not masturbating. I am convinced there is ambiguity on the ban evasion issue, though checkuser is a good idea. Spike sockpuppet (talk) 00:58, June 14, 2013 (UTC)

- I have unbanned DungeonSeigeAddict510 pending a checkuser, because it is apparent that the ban evasion may not have been done by the same person. Should the checkuser reveal that there was ban evasion, the ban will be reinstated with interest. -- Simsilikesims(♀UN) Talk here. 01:03, June 14, 2013 (UTC)

Correct on both counts. Meanwhile, we should turn the evening's gigantic diversion from writing funny stuff into an inside joke by adding to the Worst 100 Events of 2013 that I banned #494 merely for inability to spell Siege. Simsie, your own inability to spell Siege we will discuss later.
Spıke Ѧ 01:16 14-Jun-13

Not to keep harping on this, but Denza says his IP is still banned, probably from that #<insertnumberhere> thing that happens when people get banned. -RAHB 01:50, June 14, 2013 (UTC)

Now its my turn to talk. First of all, I would never use sockpuppets to apologize or edit during a ban, as I learned from my previous experience of editing while banned. (It got my school's IP banned.) Second of all, I have said quite a few times, and will mark it on my userpage after this, that I only use 2 IPs, save the occasional edit during a trip to somewhere, which I may or may not have done. (I honestly can't remember if I edited uncyc during vacation to somewhere or not.) I don't use proxies, ever. It goes against my code. Another detail about my usernames is that they have the gamertag effect, or some digits slapped on the back. Now, I only use 4 main number codes, 252, 242, 510, and 525. If anyone really wants to dig superdeep on this issue, there are a few accounts that will back this up. First of all, my IRC account is TheFakeazneD525, after my original nick Denza252 got stolen. My email address has 242 on it. My armorgames account is DarkeSpyne242 (with the profile pic that looks like a female ninja). I don't keep many accounts, save the email, this AG, and IRC, and nothing else (save a defective TvTropes account). Yet another thing I will say, to prove myself, is that I never write in the style that the person claiming to be Denza252 does; for example, in VFD, he was like "I agree with SPIKE 100%" or something. I never write like that. I would say Good X, interesting Y, and possibly a Per above, along with some of my own Denza-esque humor. Now, the fact that this account is technically a sockpuppet is a rare exception to my code. I did it, as there was no other way to get back on Uncyclopedia. Otherwise, I would never create a sockpuppet.
Also, the assumption that this was all a hoax and that I really did have access to the account Denza252 is actually a bit hurtful. I would never do such a thing, and the things that I had to do after that were long, and tiresome. Regarding the image, I may have been a bit too paranoid about it. I now feel that I should have said something along the lines of "You may use it, but do not abuse it" or something like that, instead of "Do not use without my consent." I had worked on that particular sketch for a while, and I sometimes get a bit too defensive of things that I have made. I will re-upload the image with a revised description. And also, I have been monitoring the recent changes this whole time, and I saw that SPIKE added a bit about banning a user for misspelling "Siege" (among other things, like account impersonation, and sockpuppeting, and hacking my account, I presume). I feel that there should be a small bit in that list about the whole "Denza Saga", as it is now named, as it is quite empty as of right now, and a few events wouldn't hurt. After all, they are the WORST reflections on 2013. Anyways, if I have missed anything, you can contact me on my talk page. --The Shield of Azunai DSA510My Edits! 02:53, June 14, 2013 (UTC)

- Jeez! you should run for President. Because all of your actions are justified except (1) the ones you can't remember, (2) the ones you simply had to take to return to Uncyclopedia during a ban, and (3) a few that in retrospect you might have done differently. And all the others are exceptions. And you might have a psychosis. Unlike all the other Admins you have bothered on both sides, I do not care whether this particular dog elected to have fleas (the trolls that follow you around); I simply do not want the dog and his fleas in the house. So I disagree that "its [your] turn to talk." "Its" your turn to do work that advances this website. PS--Am modifying your message to remove your e-mail address to make it harder for robot spammers.
Spıke Ѧ 12:07 14-Jun-13

- Agreed with SPIKE on there. You've been given several extra chances and done the extra campaigning, so to speak, of being able to even get into the house despite the fleas; now make it worth something. And also don't make me embarrassed of my glowing endorsement from earlier. "It's my turn to talk." Jeez, man, you'd think he'd get the picture by now >_< -RAHB 19:36, June 14, 2013 (UTC)

Gymleader Melchizedek

New user Gymleader Melchizedek seems not to want to help us produce funny pages, so much as to leave a personal mark at the top of a lot of them, always involving initial quotations, none of which are horrible but neither invite the reader into the page. A second opinion, please? Spıke Ѧ 15:38 14-Jun-13

- I reverted the quote left at Feminism and reviewed the video. The video seems ok, and the uploading of it may have been yet another bug. I also left a note on the user's talkpage below your note. Leaving quotes on pages is a typical n00b mistake. -- Simsilikesims(♀UN) Talk here. 21:14, June 14, 2013 (UTC)

Summer's here!

And so is the post Telling You Stuff You Already Knew, But With Different Words! June, 14th 2013 • Issue 186 •

This newspaper may not be able to tell you who Denza is, but then it can't tell you much anyway! He must think highly of it; there are two signatures! Like an edition of a newspaper with * * in the masthead. Spıke Ѧ 13:34 16-Jun-13

IFYMB!

I received a message from this user about lifting a three month ban and an apology for past actions. I decided to lift it but I suggest I will put him on probation for the last two months of his sentence. I will add this to UN:OFFICE to make it clear for future. :35, June 16, 2013 (UTC)

- I don't remember the issues and haven't yet looked them up, but I object. If the ban was warranted, then cancelling it on appeal of the bannee--an appeal made during an act of ban evasion--is improper.
I understand that the alternative is to shop for a willing Admin on the other side and use him as a proxy, and that's wrong too. (And cheers that RAHB limited his recent intervention to giving us helpful background information we didn't have, above.) This recurring issue is my only policy disagreement with youse in four months, and I don't claim that you should switch to my style; only ask yourselves why we can't have things (rules, bans, etc.) mean what they say. Spıke Ѧ 13:34 16-Jun-13

- Ok, I understand the objections for lifting the ban on IFYMB! It was a stupid thing for him to have done on your user pages, but he has apologised. I also apologise to both you and Sims for not asking you first. Hope we can close this issue and then judge the user on their subsequent actions post-ban. :32, June 16, 2013 (UTC)
- What he did was blatant vandalism, and I banned him before he could extend it to the rest of the wiki. It was reminiscent of what a vandal had done not long before on the fork. He has a record of positive contributions before the fork happened, so the UN:OFFICE rule is a good idea, but I thought I recalled his vandalizing more than just a couple user pages. I will have to check his contributions to be sure. I have cancelled a ban before on apology and appeal of a bannee, so I can understand your decision to do the same, Romartus. But this should not be done automatically; they have to be credible in their apology. As to rules, we have three of them, plus a number of "ignorable policies". All of the ignorable policies should be referenced in the Beginner's Guide. -- Simsilikesims(♀UN) Talk here. 01:39, June 17, 2013 (UTC)
- Thank you Sims. Acknowledged. --, June 17, 2013 (UTC)

ZhenBang

I gave feedback to new user ZhenBang but he bulled forward with uniformly vulgar edits, culminating in the awful UnNews:The IRS shall take your cash by force!, which would be "mercy-moved" immediately if I were still Editor-in-Chief. Would you care to try with him?
Spıke Ѧ 23:19 18-Jun-13 - I just left him some feedback on his UnNews on his talk page. -- Simsilikesims(♀UN) Talk here. 08:21, June 19, 2013 (UTC)

Checking In

Hey Sims, I sent you an email a few days back. Did you get it? Just wanted to make sure I sent it to the right email. (I used what you have stored with Wikia). Let me know, thanks! --Sarah Manley (talk) 21:34, June 19, 2013 (UTC)
Hi guys, So i'm thinking of importing a few japanese cars to resell in the market. What are the best selling japanese cars in pakistan right now? The mid-sized sedan cars i'm considering are Corolla Assista X, Corolla Axio, Premio. Smaller vehicles i was thinking of Vitz, Alto, Mira. Obviously smaller vehicles will have a lower profit margin but might be quicker to sell. So what cars would you guys recommend that are in high-demand in the market? I will be going for proper auctioned high-grade cars. I was also considering fielders but the market seems to be flooded with them at the moment.

660cc - Suzuki Alto, Daihatsu Mira
1000cc - Vitz, Passo, Belta
1300cc - Corolla X, Probox
1500cc - Axio, Corolla X, Fielder

Best selling 2008+ models:
(8-11 lac): Vitz & Mira
(upto 23 lac): Premio & Corolla X
(Upwards of 10 million): Prado & Land Cruiser

u wont b able to import corolla x anymore

Why is that? Was the production ceased in 2007, or is the Axio the new model of the X? I'm a little confused about it.

best selling JDM car in pakistan is the Prado BTW, more of them have been sold than any other import. you might also want to look into this prospect.

Hmmm yeah i was thinking of Prado's but maybe later on once i've understood the whole process. It will take longer to sell and i could get a couple of decent cars in the same amount.

i think u should concentrate on hybrid civic.. it should be a hot cake especially in days to come..

How about a nissan bluebird? Saw one today and the interior is much better than the corolla's. I would love to import a 2007 Lexus IS 250 as well, costs hardly 9 lacs in the UK. Unfortunately it's got a 2.2L engine, so with the duty it would be unfeasible. Any idea how much duty would be on a 2.2L car?

Vitz is the best selling Japanese import.

Well on doing some more research there's hardly any money to be made in Vitz, unless you import dozens of them at once. So i think i'm leaning towards a mid-sized sedan for now.
The Vitz market has shrunk......... now the Swift is taking its place....... yes.... it's better u import a 1300cc Vitz,,, even if its price is a bit higher than the Swift's....... it will b the fastest selling Japanese car....... but no doubt the japanese Alto is very much in demand these days....

import any car mentioned above.. just avoid that fugly probox...

how much is the bluebird costing you ?

Depends on many things, auction grade, mileage etc etc. Should be in the range of 1.5m i think.

660cc: Mira, Alto
1000cc: Vitz, Passo
1500cc: Axio, Fielder, Wish, Corolla X
#ifndef nsHTMLAtoms_h___
#define nsHTMLAtoms_h___

#include "nsIAtom.h"

#define NS_HTML_BASE_HREF "_base_href"
#define NS_HTML_BASE_TARGET "_base_target"

/**
 * This class wraps up the creation (and destruction) of the standard
 * set of html atoms used during normal html handling. These objects
 * are created when the first html content object is created and they
 * are destroyed when the last html content object is destroyed.
 */
class nsHTMLAtoms {
public:

  static void AddRefAtoms();

  /* Declare all atoms

     The atom names and values are stored in nsHTMLAtomList.h and
     are brought to you by the magic of C preprocessing.

     Add new atoms to nsHTMLAtomList and all support logic will be
     auto-generated.
   */
#define HTML_ATOM(_name, _value) static nsIAtom* _name;
#include "nsHTMLAtomList.h"
#undef HTML_ATOM
};

#endif /* nsHTMLAtoms_h___ */
stabs.c uses bfd_vma to hold enum members. That doesn't work when bfd_vma is 32 bits but the enum member needs 64. This is a general problem that applies to debug readers of all formats. Debug info needs to express integer values sized according to the source language, but binutils ties the internal representation to bfd_vma, which is determined by target machine arch. dwarf.h makes an effort by defining dwarf_vma:

#if __STDC_VERSION__ >= 199901L || (defined(__GNUC__) && __GNUC__ >= 2)
/* We can't use any bfd types here since readelf may define BFD64 and
   objdump may not. */
typedef unsigned long long dwarf_vma;
typedef unsigned long long dwarf_size_type;
#else
typedef unsigned long dwarf_vma;
typedef unsigned long dwarf_size_type;
#endif

I'd like to invent a universal longest-int type used by all debug-info readers, and standardize on a conditional. Is there any reason we shouldn't replace the test above with "#if BFD_HOST_64BIT_LONG_LONG"?

G
2.6 is Coming — Flask, Diagrams, Python 3.3 and more…

After Python 2.5 came 2.6, so we thought we would follow the numbering with PyCharm. Version 2.6 brings some big new features and many important improvements to the IDE. Check out the short descriptions below and download PyCharm 2.6 to try it out.

Diagrams for Python, Django, GAE

Python, with its clear structure, is good for reading code. But to get a good understanding of some code, a visual presentation still works better. Now PyCharm allows viewing class diagrams for any Python project (right-click a file and select Diagrams). Model relationship diagrams for Django, Google App Engine projects and SQLAlchemy ORM are also available under View->Model Dependency Diagram in the main IDE menu. Diagrams allow visual customization as well as printing and exporting to an image or SVG.

Web Development With Flask

PyCharm now supports the development of Web applications using the Flask microframework. Watch the screencast to see the Flask tutorial completed in action with many PyCharm productivity features showcased:
- Code completion and specific live templates (snippets) in Python code,
- Powerful code completion in SQL code,
- Strong Jinja 2 support for creating application views and a lot more.
Flask support is created as an external plugin for PyCharm. Its source code is available at GitHub. We encourage you to use it as a reference for implementing support for other frameworks.

Python 3.3

PyCharm has been updated to support the new language features of Python 3.3, including the new namespace packages.
What's new in Python 3.3 screencast »

Code Analysis & Inspections
- A number of enhancements in Python code inspections make PyCharm report far fewer false warnings.
- A new intention action (Alt+Enter) allows specifying the data type for a variable implicitly using annotations, docstrings or 'assert isinstance' checks.
- Improved Python type inference and auto-import routines in the editor.
Django

Many fixes and improvements in Django support, including specific code completion, code inspections, Django template editing and application running.

Database/SQL
- Live schema refactoring (rename table/column, drop table/column, new table/column + DDL export).
- Export query results as CSV, TSV, HTML, SQL INSERTs, SQL UPDATEs to a file or clipboard.
- Improved SQL completion and other enhancements.

IDE Enhancements

There are other changes that make using the IDE more fun:
- Possibility to run another run configuration or external tool as a "before run" action.
- Support for external merge tools. Configure it under IDE Settings | External Diff Tools.
- The 'Recent files' popup is now much more powerful and allows quickly navigating to IDE tool-windows.
for my assignment I am supposed to have a user input the name and price of items. However, they are to enter items an unlimited number of times until a sentinel value is used. I don't actually know how I'd go about doing this. The only way I know how to declare an object with user input is to use a scanner and then place that data within the arguments of a constructor. But that would only create a single object. Thanks!

import java.util.Scanner;

public class Item {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
    }
    private String name;
    private double price;
    public static final double TOLERANCE = 0.0000001;
    public Item(String name, double price) {
        this.name = name;
        this.price = price;
    }
    public Item() {
        this("", 0.0);
    }
    public Item(Item other) {
        this.name = other.name;
        this.price = other.price;
    }
    public String getName() {
        return name;
    }
    public double getPrice() {
        return price;
    }
    public void setName(String name) {
        this.name = name;
    }
    public void setPrice(double price) {
        this.price = price;
    }
    public void input(String n, double item) {
    }
    public void show() {
        // Code to be written by student
    }
    public String toString() {
        return "Item: " + name + " Price: " + price;
    }
    public boolean equals(Object other) {
        if (other == null)
            return false;
        else if (getClass() != other.getClass())
            return false;
        else {
            Item otherItem = (Item) other;
            return (name.equals(otherItem.name) && equivalent(price, otherItem.price));
        }
    }
    private static boolean equivalent(double a, double b) {
        return (Math.abs(a - b) <= TOLERANCE);
    }
}

As I understand it, you just want to initialize an array of objects. First you need to initialize the array:

int n = scanner.nextInt(); // you may get n some other way
Item[] items = new Item[n];

Then you can fill it with new instances of Item:

for (int i = 0; i < n; i++) {
    items[i] = new Item(); // constructor args may go here
}
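Since the assignment asks for an unlimited number of entries ended by a sentinel rather than a known count n, a growable ArrayList plus a read-until-sentinel loop fits better than a fixed-size array. A hedged sketch: the sentinel word "done", the "name price" input format, and the nested Item class are all assumptions for illustration, and a fixed string stands in for System.in so the example runs as-is.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Scanner;

public class ItemReader {
    // Minimal stand-in for the poster's Item class, so this sketch is self-contained.
    static class Item {
        final String name;
        final double price;
        Item(String name, double price) { this.name = name; this.price = price; }
        @Override public String toString() { return "Item: " + name + " Price: " + price; }
    }

    // Read "name price" pairs until the sentinel word "done" appears.
    static List<Item> readItems(Scanner input) {
        List<Item> items = new ArrayList<>();
        while (true) {
            String name = input.next();
            if (name.equals("done")) {   // sentinel value: stop reading
                break;
            }
            items.add(new Item(name, input.nextDouble()));
        }
        return items;
    }

    public static void main(String[] args) {
        // In the real program, pass new Scanner(System.in) instead.
        Scanner in = new Scanner("Milk 2.5 Bread 1.75 done").useLocale(Locale.US);
        List<Item> items = readItems(in);
        System.out.println(items.size());   // 2
        System.out.println(items.get(0));   // Item: Milk Price: 2.5
    }
}
```

useLocale(Locale.US) pins the decimal separator so nextDouble() parses "2.5" regardless of the system locale.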
Python is one of the most popular programming languages ever — its great productivity, flexibility and general-purpose nature efficiently address areas ranging from Internet applications to system uses. In this article, we look at extending Python with Ctypes, SWIG, Pyrex and Cython, and the pros and cons of each.

The ability to call C/C++ functions from Python code, through extensions, exposes lower-level "system" functionality that would otherwise not be available in Python. Also, you can optimise the performance-critical parts of your applications by creating those as C/C++ extension modules, which will be compiled to native code, yielding better performance than the interpreted Python byte-code.

How can we extend Python?

The standard and most widely used implementation of Python is CPython, which is programmed in C. CPython has a C API to create extensions, but this approach of extending Python is complicated, tedious, and error-prone. You have to manually code everything from data conversion to garbage collection. Leaving even a small detail unattended increases the likelihood of crashes. Given these problems, I will not discuss this approach to extending Python. If you're interested in it, then you could browse through the extension examples provided in the Python source directory.

The preferred way of extending Python is through extension modules created in the form of shared libraries. This way, you don't bloat Python itself with C/C++ chunks added to it, but load only the required extension module, on demand. Also, separate extension modules provide better modularity in the code of complicated or large applications. We will look at Ctypes, SWIG, Pyrex and Cython below; all of which allow you to create extensions in this manner.
Note: I used Puppy Linux 4.2.1 and Ubuntu 9.10 64-bit desktop edition to test the source code presented in the article, and to generate the screenshots.

Ctypes, a Python "insider"

Ctypes is a foreign function interface Python package, and part of the official Python distribution from version 2.5 onwards. Your Python code can use the Ctypes module to invoke C functions residing in shared libraries. Ctypes also does the data conversion between C and Python data types, and provides some general-purpose functions for working with shared libraries. You can create Python wrappers over your C shared libraries very easily, and code using Ctypes stands a better chance of being portable across all platforms on which Python runs. To see Ctypes in action, type the following code in a Python file and run it:

from ctypes import CDLL
slibc = 'libc.so.6'
hlibc = CDLL(slibc)
iret = hlibc.abs(-7)
print iret

This Python code calls the GNU/Linux standard C library's abs() function. You can dynamically load any C shared library that contains functions following the cdecl calling convention (which is the default C calling convention on the x86 architecture), by passing its name to the CDLL class constructor. Alternately, use the LoadLibrary() method of the CDLL class. Ctypes provides other classes to load shared libraries that export functions under other calling conventions, like stdcall and thiscall. I encourage you to explore those on your own if you're interested — I'm focusing on the cdecl calling convention in this article. If you use the wrong Ctypes library-loading class for the calling convention used in functions exported by a shared library, you will encounter an exception. To learn more about calling conventions, follow this Wikipedia link.
Try to call other standard C library functions like sleep(), time(), etc. You can also experiment with other libc functions that accept None or a single integer/long parameter. API documentation for libc is here.

The Ctypes utility function find_library() (in the util sub-module) finds the full name of any shared library given a library base name string (without the prefix "lib" and the extension and version number). The function returns None if it can't find a matching library. Run the following code:

from ctypes.util import find_library
llibs = ('bz2', 'c', 'm',)
for s in llibs:
    print (s + ': ' + str(find_library(s)))

The code finds the full names of the bzip2, standard C and math libraries, respectively. Though using Ctypes seems simple, there are enough ways to crash Python with it — in particular, passing unsupported Python data types in a Ctypes call to a library function. None, integers, longs, byte strings and Unicode strings are the only native Python data types/objects that you can directly pass as parameters in Ctypes function calls. (This was the reason we could pass an integer directly in the abs() call shown earlier.) None is passed as a C NULL pointer, byte strings and Unicode strings are passed as pointers to the memory block that contains their data; integers and longs are passed as the platform's default C int type.

To pass other data types into functions, you have to construct a proper data object for each variable that may not be directly passed. Ctypes' fundamental data type objects, which correspond to their counterparts in C, are: c_char, c_wchar, c_byte, c_short, c_ushort, c_int, c_uint, c_long, c_ulong, c_float, c_double, etc. Ctypes also has the basic pointer types, like c_char_p, c_wchar_p and c_void_p. You can instantiate these data objects, optionally passing an initialiser value of the corresponding Python data type to the constructor.
For example, hw=c_char_p("Hello, World") and us=c_ushort(-3) create the hw and us Ctypes data type instances. Let's move on from calling functions in existing libraries to making a shared library of our own, and then using its exported functions in a small Python application.

Note: To keep article length down, some source code is not included in-line; you'll need to download the zip archive containing the files. Extract the contents of the archive in a folder you set aside for this experimentation.

Compile the file testlib.c to generate a shared library, with the following command:

gcc -shared -fPIC -o testlib.so testlib.c

You should find the new shared library testlib.so in the current working directory. To make a call to the C function in the library from Python, run the Python file testlib.py.

Note: Make sure that the dynamic runtime linker finds the shared libraries created in the various code examples shown in the article — otherwise the programs will throw errors. Open a terminal and navigate to the downloaded source code folder to make it your current working directory. Now run the following command:

export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH

The above command will add your current working directory to the LD_LIBRARY_PATH. However, note that this is a temporary solution. To make the change permanent, edit your ~/.bashrc file with the relevant information.

Note how we passed the integer data from Python to the C function: for each integer, we used c_int() to create the integer object, and passed a pointer to it using byref(). (That is because the shared library expects a pointer to an integer, because it modifies the value stored in it.) We use the value attribute of the integer object to retrieve the value set by the dataModel() function in testlib.c. You can follow the same method for any supported data object/type. You can also manipulate C arrays, structures, unions, pointers, etc, through Ctypes.
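The byref() pattern can also be tried without the downloaded archive, using a standard C function that writes a result through a pointer. Here is a minimal sketch, assuming a Linux system where find_library('m') locates the math library; it uses frexp(), whose C signature is double frexp(double value, int *exp):

```python
from ctypes import CDLL, POINTER, byref, c_double, c_int
from ctypes.util import find_library

libm = CDLL(find_library('m'))

# Declare the C signature: double frexp(double value, int *exp);
# Without this, ctypes assumes every argument and result is a C int.
libm.frexp.argtypes = [c_double, POINTER(c_int)]
libm.frexp.restype = c_double

e = c_int(0)                   # a C int the library will write into
m = libm.frexp(8.0, byref(e))  # pass a pointer, just as dataModel() expects one
print(m, e.value)              # 8.0 == 0.5 * 2**4, so this prints: 0.5 4
```

The same argtypes/restype declarations are how you would describe any C function's signature to Ctypes before calling it.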
To explore more of these, follow the Python library documentation on Ctypes.

SWIG — a wrapper generator for Python extension modules

Ctypes is limited in some ways: you can extend Python only with functions written in C. Also, you need to create Python "wrapper" code around the call to the function, with the necessary conversions between Python types and Ctypes data objects. SWIG, the Simplified Wrapper and Interface Generator, is another way to extend Python through shared libraries. It's a powerful tool that can automatically generate Python bindings for C and C++ sources. Google extensively uses SWIG, which is proof of its capabilities. SWIG is not limited to Python use only: it can wrap C and C++ functionality for use in more than a dozen programming languages, including Ruby, Perl, PHP, Lua, Tcl and Java.

The easiest way to install SWIG on Ubuntu is to run the following command:

sudo apt-get install swig

To install SWIG from source, you need GCC and g++ (install the build-essential metapackage). Download the SWIG source tarball from its homepage and then run these commands:

tar -zxvf swig-version.tar.gz && cd swig-version
./configure && make && sudo make install

You also need Python development headers installed to create extension modules with SWIG. Now run the following:

sudo apt-get install python-dev

SWIG follows a layered approach to generate Python extension modules from C/C++ sources: it generates a C file that contains the lower-level code required for the extension module, and a Python file that contains the higher-level code. To generate an extension module from your C/C++ source, first you need to prepare an interface file that provides SWIG with information about the declarations and definitions of your data structures and routines. SWIG generates layered wrapper files from the interface file.
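To make the interface-file idea concrete, here is a hypothetical sketch of what a minimal one could look like; the real testmodule.i ships in the article's archive, and the helloSwig() declaration below is an assumed signature for illustration only:

```
/* testmodule.i — illustrative sketch, not the archive's actual file */
%module testmodule

%{
/* Code here is copied verbatim into the generated wrapper,
   so include whatever headers the wrapper needs to compile. */
#include "testmodule.hpp"
%}

/* Declarations below this point are what SWIG wraps for Python: */
const char *helloSwig(void);
```

Everything SWIG needs lives in this one file: the module name, the verbatim header block, and the signatures to wrap.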
Then you turn the generated C wrapper file and your source file into a shared library, and use your extension library through the generated Python wrapper (see the example session below). You could also automate generation of the shared library using Python's inbuilt distutils package.

Let's see SWIG in action, with a trivial C++ extension module. In the working directory where you extracted the source files archive, take a look at testmodule.i, testmodule.hpp and test.py. The SWIG interface file testmodule.i has the directive %module that specifies the extension module you intend to generate. You could also provide the module name with the -module switch to SWIG if you don't provide this directive in the interface file. The text between %{ and %} is put in the generated C/C++ wrapper file verbatim, so it is the place for all macros, headers, etc, required to build the module library. Declaration of module data structures and routines is done below this section. SWIG generates the wrapper files based upon these signatures.

Run the command swig -python -c++ testmodule.i to generate the wrapper files testmodule_wrap.cxx and testmodule.py in the working directory. Run the following command to build the shared extension library:

g++ -shared -fPIC -o testmodule.so testmodule.cpp testmodule_wrap.cxx -I(location of Python headers)

Finally, execute test.py to access the C++ routine helloSwig() from Python. The -c++ and -python switches instruct SWIG to generate a C++ wrapper for Python. You can change the wrapper filename from the default modulename_wrap with the -o switch. Also note that the name of the generated shared library is _modulename.so, as this is the Python naming convention for extension modules, and the generated Python wrapper module looks for it under this name.

Now we try something more serious. Look at the files testclass.i, testclass.hpp, testclass.cpp and testoop.py. Generate the extension shared library as described for the previous example.
Run testoop.py to instantiate and access a simple C++ class from Python code. As you have seen, to create extension modules with SWIG, you need the C/C++ source and header files of the code you want to wrap as a Python extension. You also need the Python header files to compile the wrappers. You need to write the interface file, too, but SWIG's capabilities can balance the extra effort required. These examples should give you enough of a boost to start with SWIG. SWIG is a very powerful tool that lets you use almost all the advanced features of C/C++ to create extension modules, and also has many powerful features of its own. Covering SWIG in detail would require a book in itself; you can explore it via the documentation provided on its home page.

Pyrex and Cython — creating extension modules with Python itself

Both Ctypes and SWIG wrap existing C/C++ code that you might have to write yourself. SWIG's interface file is extra work as well. Besides, you can't create new Python types with SWIG — and if something goes wrong, then debugging SWIG-generated C/C++ code is a daunting task. Creating new functionality and debugging it in C/C++ is complicated and tedious. Instead of all this, here's a different approach: create Python modules by writing your extension's code in Python itself! This code is then compiled to the equivalent C code before building into a shared library. Pyrex is a specialised language very similar to Python. It allows you to create C data types and functions in Python-like code. Cython is inspired by Pyrex; it is more feature-rich and optimised than Pyrex, is under very active development and is more frequently updated than Pyrex. Therefore, this section mainly explores Cython; you can apply working knowledge from here to Pyrex without much extra effort, should you choose to do so.
To install Pyrex, if you wish to, download the latest source tarball from its home page and run the following commands:

tar -zxvf Pyrex-version.tar.gz && cd Pyrex-version \
  && sudo python setup.py install

To install Cython, you again require the Python development headers (install with the command sudo apt-get install python-dev). Download the latest source tarball of Cython from its home page, and run:

tar -zxvf Cython-version.tar.gz && cd Cython-version \
  && sudo python setup.py install

To get a glimpse of Cython in action, compile the primes.pyx example file taken from the Cython official documentation with cython primes.pyx. To compile the generated primes.c, run:

gcc -shared -fPIC -o primes.so primes.c -I(path of Python headers)

To test the created extension module, run testprimes.py. Unlike SWIG, it doesn't matter if you name the shared library with or without an underscore before the module name: _primes.so or primes.so. The extension module in Python works for both these names. If you look at the source of primes.pyx, you will notice that we are mixing C and Python types in the Python routine to calculate prime numbers. In Cython, you declare C data types, and C struct, union or enum types, with the cdef keyword. You can also declare functions with cdef — see below.

There are two kinds of function definitions in Cython — Python functions, defined using the familiar Python def statement, and C functions, defined using the cdef statement. The first take Python objects as parameters and return Python objects, while Cython C functions can take either Python objects or C values as parameters, and can return either Python objects or C values. The small catch is that within a Cython module, Python functions and C functions can call each other freely — but you can only call Python functions from outside the module (from interpreted Python code).
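The two kinds of definitions can be sketched side by side; this is an illustrative fragment only (the file name demo.pyx and the function names are made up for the example):

```cython
# demo.pyx — illustrative sketch
cdef int fast_add(int a, int b):
    # C function: compiled with C calling conventions; callable only from
    # other Cython code in this module, not from interpreted Python.
    return a + b

def add(a, b):
    # Python function: this is what "import demo" exposes to Python callers.
    return fast_add(a, b)
```

After building such a module with cython and gcc as shown above, interpreted Python code could call demo.add() but would not see fast_add at all.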
So, any functions that you want to “export” from your Cython module should be declared as Python functions using def — or use the hybrid cpdef, which can be called from both outside the module and from within, but which uses the faster C calling conventions only when being called from other Cython code. Cython’s source code compiler translates Python code to the equivalent C code that is executed within the CPython runtime environment, but at the speed of compiled C, and with the ability to call directly into C libraries. Yet, it keeps the original interface of the Python source code, making it directly usable from Python code. These characteristics enable Cython to extend the CPython interpreter with fast binary modules, and also to interface Python code with external C libraries. The sample code was just an introduction to Cython; you can learn more about it (and Pyrex) from the documentation provided on the home pages. To sum up, extension modules greatly enhance the functionality and power of Python. You can create very innovative Python applications using the tools we’ve covered in this article — being limited only by your imagination. Feature image courtesy: Christopher John SSF. Reused under the terms of CC-BY 2.0 License.
This structure is used for recording information about the registered filters. It associates a name with the filter's callback and filter type.

#include <util_filter.h>

Documented members:
- Trace level for this filter
- Whether the filter is an input or output filter
- The function to call when this filter is invoked
- The type of filter, either AP_FTYPE_CONTENT or AP_FTYPE_CONNECTION. An AP_FTYPE_CONTENT filter modifies the data based on information found in the content. An AP_FTYPE_CONNECTION filter modifies the data based on the type of connection.
- The registered name for this filter
- The next filter_rec in the list
- Protocol flags for this filter
- Providers for this filter
$NetBSD: ChangeLog,v 1.46 2007/08/06 04:58:04 lukem Exp $ Mon Aug 6 04:55:19 UTC 2007 lukem * Release as "tnftp 20070806" * Add a NEWS file. * Reduce differences between NetBSD-ftp and local copy. * Merge NetBSD ftp from 20070605 to 20070722. Changes: - Document about:ftp and about:version. * Add autoconf check for (Dante) SOCKS5. (Needs run-time testing and more portability testing.) Mon Jul 23 11:44:42 UTC 2007 lukem * Don't use non-standard: u_char u_short u_int. Use uint32_t instead of u_int32_t. * Consistently use AS_CASE() and AS_IF() in configure.ac. * Don't use defined() with HAVE_DECL_xxx. Use `LL' instead of `L' suffix for fallback defines of LLONG_MIN and LLONG_MAX. Sun Jul 22 12:00:17 UTC 2007 lukem * Include <arpa/nameser.h> if available, and provide fallback #defines. * Sync with lib/libc/inet/inet_pton.c 1.3: * Sync to bind 9.3.x version * Update ISC copyright * Fix some lint * Sync with lib/libc/inet/inet_ntop.c 1.3: * Sync to bind 9.3.x version * Update ISC copyright * Use socklen_t instead of size_t * Use snprintf() instead of SPRINTF() * Improve detection of various boundary conditions * Sync to NetBSD glob.h 1.21, glob.c 1.16: * Standards compliance fix * De-lint * Don't overflow when DEBUG is defined. * Sync fgetln.c to tools/compat/fgetln.c 1.7: * Clause 3 removal. * Sync to config.guess 2007-07-22, config.sub 2007-06-28. * Consistency tweaks in configure help strings. * Add check for struct sockaddr.sa_len. Change tests for HAVE_foo to defined(HAVE_foo). Replace HAVE_SOCKADDR_SA_LEN with HAVE_STRUCT_SOCKADDR_SA_LEN. * Remove pretence of supporting SOCKS for now; no test system is available, and the old autoconf 2.13 support wasn't upgraded to 2.61. * configure.ac style consistency tweaks. Move autoconf aux files from ./ to build-aux/ * Remove duplicate HAVE_STRERROR replacement in tn. Thu Jun 7 04:47:47 UTC 2007 lukem * Merge NetBSD ftp from 20070510 to 20070605. 
Changes: - Enforce restriction that (http) proxied URL fetchs don't support being restarted at this time. Addresses NetBSD Problem Report 28697. - Display times in RFC2822 form rather than using ctime(3), since the former is more explicit about the timezone offset. - main: call tzset() to ensure TZ is setup for other <time.h> functions. - remotemodtime(): use strptime() to parse the reply. - fetch_url(): ensure struct tm is zeroed before calling strptime(). - Modify parse_url() to consistently strip the leading `/' off ftp URLs. Fixes NetBSD Problem Report 17617. - Use 'RFCnnnn' (with leading 0) instead of 'RFC nnnn', to be consistent with the style in the RFC index. - Refer to RFC3916 instead of 1738 or 2732. - Expand the list of supported RFCs in ftp(1) to contain the document name as well. Fri May 11 04:39:55 UTC 2007 lukem * Update INSTALL and COPYING. * Rename HAVE_QUAD_SUPPORT to HAVE_PRINTF_LONG_LONG, and only require support for 'long long' in that check rather than needing sizeof(off_t)>=8, as some systems have a separate off64_t when Large File Support is enabled. * config.guess: treat 'i86xen:SunOS:5.*' as 'i86pc:SunOS:5.*' Thu May 10 15:23:33 UTC 2007 lukem * Remove checks for util.h and libutil.h, and replacement for fparseln(), since fparseln() isn't used any more. * Merge NetBSD ftp from 20070418 to 20070510. Changes: - Switch from fparseln() to the internal getline() when parsing HTTP headers. Makes ftp a bit more portable (not needing fparseln()) at the expense of not supporting arbitrary long header lines, which I'm not concerned about because we don't support header line continuation either... - Replace references from draft-ietf-ftpext-mlst-NN to RFC 3659. - Fix misplaced const. - Implement copy_bytes() to copy bytes from one fd to another via the provided buffer, with optional rate-limiting and hash-mark printing, using one loop and handle short writes. Refactor sendrequest() and recvrequest() to use copy_data(). 
Addresses NetBSD Problem Report 15943. Wed May 9 05:24:55 UTC 2007 lukem * Fix typo in poll()-based implementation of usleep() replacement. Wed May 9 04:58:50 UTC 2007 lukem * Rename configure.in to configure.ac, as the latter is the preferred name in autoconf 2.61. * Convert from autoconf 2.13 to 2.61: * Use a consistent quoting mechanism. * Use modern autoconf macros, #define names, etc. * Search for more header files, and only #include if found. * Remove old-style config.h.in generation. This may fix various tests on platforms such as FreeBSD and OS X. * Add -Wl,-search_paths_first to LDFLAGS on OS X (Darwin) if the linker supports it. This is needed so we use our libedit rather than the system one. XXX: SOCKS support is currently disabled until I update the autoconf support. Mon Apr 23 06:04:26 UTC 2007 lukem * Merge NetBSD ftp from 20050610 to 20070418. Changes: - Add '-s srcaddr'. - Use IEC 60027-2 2^N based "KiB", "MiB" (etc) instead of 10^n "KB", "MB", ... - Recognize 307 redirect code. - Suppress printing non-COMPLETE reply strings when EPSV/EPRT fails and we fall-back to PASV/PORT. Should fix a problem with the emacs ftp wrapper. - Fix display of 'Continue with <cmd>' messages. - Prevent segfaults in .netrc parsing. - Flush stdout before each command; ftp as slave process on a pipe should work. - getpass() can return NULL in some implementations; cope. - Support '-q quittime' when waiting for server replies. - Various spelling & grammatical fixes in the manual. - Plug some memory leaks. - If a file upload (via -u) fails, return an non-zero exit value based on the index of the file that caused the problem (a la auto-fetch retrieval). 
- Coverity fixes for CIDs: 873 874 875 1447 1448 2194 2195 3610 - Don't remove trailing character during auth_url() - Fix progressbar display on narrow terminals (<43 columns) Fri Mar 16 06:00:14 UTC 2007 lukem * Change the return value of the replacement gai_strerror() from "char *" to "const char *", to match the current standards. Problem noted by Thomas Klausner. Thu Oct 26 07:24:22 UTC 2006 lukem * Correctly parse "AM" and "PM" in the replacement strptime(). Problem noted by Kathryn Hogg. Sat Jun 25 06:27:00 UTC 2005 lukem * Release as "tnftp 20050625" * Simplify the detection & replacement of dirname() and fparseln() and just use AC_REPLACE_FUNCS. (We don't care if the vendor has a working version in -lgen or -lutil instead of -lc; they'll get our replacement version in that case). Fixes build issue on older Darwin where the previous autoconf check wouldn't find dirname() in the default system libraries. * Only provide a prototype for dirname() if we can't find one in <libgen.h> * Search for NS_IN6ADDRSZ instead of IN6ADDRSZ, since we use the former and not the latter and older Darwin has the former. (This allows INET6 support to be enabled on Darwin 7.9.0) Mon Jun 13 09:22:13 UTC 2005 lukem * Tweak SOCKS5 support: acconfig.h: - fix a comment - ensure close() is replaced - list entries in the same order as aclocal.m4 (and the SOCKS5 FAQ) aclocal.m4: - ensure getpeername() is replaced - don't replace listen() twice Fri Jun 10 04:39:33 UTC 2005 lukem * Release as "tnftp 20050610" * Add dependencies on ${srcdir}/../tn and ../config.h * Merge NetBSD ftp from 20050609 to 20050610. Changes: - Implement getline() to read a line into a buffer. - Convert to use getline() instead of fgets() whenever reading user input to ensure that an overly long input line doesn't leave excess characters for the next input operation to accidentally use as input. - Zero out the password & account after we've finished with it. 
- Consistently use getpass(3) (i.e., character echo suppressed) when reading the account data. For some reason, historically the "login" code suppressed echo for Account: yet the "user" command did not! - Display the hostname in the "getaddrinfo failed" warning. - Appease some -Wcast-qual warnings. Fixing all of these requires significant code refactoring. (mmm, legacy code). Thu Jun 9 16:49:05 UTC 2005 lukem * src, libnetbsd: Excise RCSID block, rather than using #if 0 ... #endif. The point was to minimise RCSID conflicts, and the latter isn't helping there. * Merge NetBSD ftp from 20050531 to 20050609. Changes: - Only print the "Trying <address>..." message if verbose and there's more than one struct addrinfo in the getaddrinfo() result. - Don't use non-standard "u_int". Wed Jun 1 15:08:01 UTC 2005 lukem * Look for dirname(3), which may be in -lgen on IRIX, and replace it if not found. Wed Jun 1 11:48:58 UTC 2005 lukem * libnetbsd: - Don't use non-standard: u_char u_short u_int. - Use uint32_t instead of u_int32_t. - Don't use register. * libedit: Don't use non-standard uint or u_int. Tue May 31 02:23:08 UTC 2005 lukem * tn: need <libgen.h> for dirname(3) * Merge ftp from 20050513 to 20050531. Changes: - Helps if the definition of xconnect() matches its declaration.... - Fix some cast issues highlighted by Scott Reynolds using gcc 4 on OSX.4 - Use size_t instead of int where appropriate. - Make this compile on sparc64 (size_t != int). - Printf field widths and size_t don't always mix well, so cast to int. Fixes build problem for alpha. - Some const cleanups. - tab cleanup - Improve method used in fileindir() to determine if `file' is in or under `dir': realpath(3) on non-NetBSD systems may fail if the target filename doesn't exist, so instead use realpath(3) on the parent directory of `file'. Per discussion with Todd Eigenschink. - formatbuf(): fix %m and %M to use the hostname, not the username. 
- fetch_ftp(): preserve 'anonftp' across a disconnect() so that multiple ftp auto-fetches on the same command line login automatically. - auto_fetch(): use an initialized volatile int to appease IRIX cc. * Merge libedit from NetBSD 20050105 to 20050531. Changes include: - Rui Paulo: Incorrect tok_line and tok_str declarations. - Remove clause 3 from the UCB license. - Luke Mewburn: Don't abuse unconstify'ing a string and writing to it, because you'll core dump. Also remove extra const that gives pain to the irix compiler. - Make sure we flush after we prepare when we are unbuffered, otherwise the prompt will not appear immediately. - Terminate the arglist with a NULL instead of 0. (Shuts up gcc4.x) Sat May 28 13:19:38 UTC 2005 lukem * libnetbsd/strvis.c: - Sync to NetBSD's vis.c 1.33: Use malloc(3) instead of alloca(3). - Remove extraneous #endif Fri May 27 05:46:58 UTC 2005 lukem * libnetbsd/strvis.c: Sync to NetBSD's vis.c 1.30: Use a more standard TNF license. * libedit/sig.c: Include "src/progressbar.h" for xsignal_restart() prototype. * Ensure that fallback #define of __attribute__ is available. Fixes build problem on HP-UX with cc. Thu May 26 14:21:08 UTC 2005 lukem * Extend xpoll()'s HAVE_SELECT implementation to support POLLRDNORM, POLLWRNORM, and POLLRDBAND - the latter using exceptfds. Per discussion with Christos Zoulas. Mon May 16 13:33:27 UTC 2005 lukem * Pull in <poll.h> or <sys/poll.h> if they exist even if we're not using poll, as struct pollfd might exist in those. Fixes build problem on OSX.3. * Separate CPPFLAGS from CFLAGS. * Sync various files in libnetbsd with the original versions in NetBSD. Notable changes: - Convert 4 clause UCB license to 3 clause. - Use strlcpy instead of strcpy. - Update ISC copyright. - Use NS_INADDRSZ, NS_IN6ADDRSZ and NS_INT16SZ instead of equivalents without NS_ prefix. - Use socklen_t instead of size_t where appropriate. - Improve bounds checking. - Don't update the size of allocated storage until realloc succeeds. 
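The realloc rule in the last item above is a classic idiom worth spelling out: assign realloc's result to a temporary, and only update the tracked pointer and recorded size once the call succeeds, so a failure leaves the original allocation usable. As a standalone sketch (hypothetical helper, not lifted from libnetbsd):

```c
#include <stdlib.h>

/*
 * Grow *buf to newsize bytes. On failure, *buf and *bufsize are
 * left untouched -- the caller still owns a valid allocation of
 * the old size. Writing `*buf = realloc(*buf, newsize)` directly
 * would leak the old block and corrupt the bookkeeping on failure.
 */
static int
grow_buffer(char **buf, size_t *bufsize, size_t newsize)
{
	char *nbuf = realloc(*buf, newsize);

	if (nbuf == NULL)
		return -1;	/* *buf and *bufsize unchanged */
	*buf = nbuf;
	*bufsize = newsize;	/* update size only after success */
	return 0;
}
```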
- Fix comment about return value. - Reverse the order of two loop invariants to make 'strlcat(0, "foo", 0)' not get a SEGV. - Use Todd C. Miller's latest copyright notice (more loose). - Use "long long" instead of "quad" in various comments & constants. - Support VIS_HTTPSTYLE. - Implement svis(), strsvis(), strsvisx(), strunvisx(). * Prefer poll over select when implementing replacement usleep(). Sat May 14 04:44:35 UTC 2005 lukem * Release "tnftp 20050514" * Fail if we can't find a library with tgetent (needed for libedit). NetBSD PR pkg/28925. * Improve quoting when using various autoconf macros. * Merge NetBSD-ftp 20050513: - Correct the "optlen" argument passed to getsockopt(3) and setsockopt(3) in various places. Fixes a problem noted by Allen Briggs. - Improve warning printed when connect(2) for the data channel fails. Wed May 11 04:19:43 UTC 2005 lukem * Release "tnftp 20050511" Wed May 11 04:10:01 UTC 2005 lukem * Update the THANKS file. * Only use poll() to implement xpoll() if it's available, otherwise attempt to use select() if that's available, otherwise #error. * Detect if struct pollfd is available in <poll.h> or <sys/poll.h>. Improve consistency in use of autoconf macros. Wed May 11 02:42:08 UTC 2005 lukem * Merge NetBSD-ftp 20050511: - Use socklen_t instead of int as the 5th argument to getsockopt(). Improve invocation of setsockopt() and associated failure messages. Wed May 11 01:46:29 UTC 2005 lukem * Clean up RCSID usage in vendor-derived code, restoring original IDs where possible. Wed May 11 00:08:16 UTC 2005 lukem * Merge NetBSD-ftp 20050510: - Prevent an overly-long input line causing a core dump when editing is enabled. Issue noted by Ryoji Kanai in FreeBSD Problem Report # 77158. - Implement a timeout on the accept(2) in dataconn() and the connect(2) in xconnect() by temporarily setting O_NONBLOCK on the socket and using xpoll() to wait for the operation to succeed. 
The timeout used is the '-q quittime' argument (defaults to 60s for accept(2), and the system default for connect(2)). Idea inspired by discussion with Chuck Cranor. This may (indirectly) fix various problems with timeouts in active mode through broken firewalls. - Implement xpoll() as a wrapper around poll(2), to make it easier to replace on systems without a functional poll(2). Unconditionally use xpoll() instead of conditionally using select(2) or poll(2). - In fetch_url(), don't call freeaddrinfo(res0) too early, as we use pointers to its contents later in the function. Problem found by Onno van der Linden. - Fix ftp url reget when globs are being used. Provided by Mathieu Arnold <mat@FreeBSD.org>. - Factor out common string processing code eliminating static buffers, making functions that should be static be static, and cleaning up const usage. Added a guard against buffer overflow, but the domap function is a bit too complicated to tackle right now. - Clean up whitespace. - Expand description of http_proxy by suggesting the use of RFC 1738 '%xx' encoding for "unsafe URL" characters in usernames and passwords. Wed Jan 5 05:53:59 UTC 2005 lukem * For now, assume libedit is not up-to-date and use our own version. * Merge libedit from NetBSD 20020605 to 20050105. Changes include: - Improve vi-mode. - Delete-previous-char and delete-next-char without an argument are not supposed to modify the yank buffer in emacs-mode. - Improve incremental searching. - Improve memory allocation & usage. - Move UCB-licensed code from 4-clause to 3-clause. - Make the tokenization functions publically available. - Various tty access bug-fixes. - Improve readline emulation. Tue Jan 4 13:33:40 UTC 2005 lukem * Unixware 7.1.1 implements RFC 2133 (Basic Socket Interface Extensions for IPv6) but not the successor RFC 2553. The configure script detects this and decides that tnftp needs to compile its own version of getaddrinfo(). 
This produces the error message /usr/include/netdb.h:248: `getaddrinfo' previously defined here because Unixware provides an implementation of getaddrinfo() in netdb.h instead of a prototype declaration :-/. Since netdb.h cannot be omitted, we will always get this definition and tnftp's version of getaddrinfo will always create a conflict. This ugly preprocessor hack works around the problem. Hints for a better solution welcome. Fix from pkgsrc/net/tnftp. * Workaround poll() being a compatibility function on Darwin 7 (MacOSX 10.3) by adding a custom test for _POLL_EMUL_H_ which is defined in poll.h on some MacOSX 10.3 systems. Not all 10.3 systems have poll.h, so only do the poll() test if at least one of the header files is found. Fix from pkgsrc/net/tnftp. * Add a utimes() replacement (using utime()) for Interix. From pkgsrc/net/tnftp. Mon Jan 3 10:21:57 UTC 2005 lukem * Release "tnftp 20050103" * Merge NetBSD-ftp 20050103: - Forbid filenames returned from mget that aren't in (or below) the current directory. The previous behaviour (of trusting the remote server's response when retrieving the list of files to mget with prompting disabled) has been in ftp ~forever, and has been a "known issue" for a long time. Recently an advisory was published by D.J. Bernstein on behalf of Yosef Klein warning of the problems with the previous behaviour, so to alleviate concern I've fixed this with a sledgehammer. - Remember the local cwd after any operation which may change it. - Use "remotecwd" instead of "remotepwd". - Add (unsigned char) cast to ctype functions - Ensure that "mname" is set in ls() and mls() so that an aborted confirm() prints the correct name. Problem highlighted & suggested fix from PR [bin/17766] by Steve McClellan. - If an ftp auto-fetch transfer is interrupted by SIGINT (usually ^C), exit with 130 instead of 1 (or rarely, 0). This allows an ftp auto-fetch in a shell loop to correctly terminate the loop. Should fix PR [pkg/26351], and possibly others. 
- Save approximately 8K by not including http authentication, extended status messages and help strings when the appropriate options are set. - Move UCB-licensed code from 4-clause to 3-clause licence. Patches provided by Joel Baker in PR 22365, verified by Alistair Crooks. - Always decode %xx in a url's user & pass components. - Only remember {WWW,Proxy}-Authenticate "Basic" challenges; no point in tracking any others since ftp doesn't support them. - Improve the parsing of HTTP responses. - Don't base64 encode the trailing NUL in the HTTP basic auth response. Problem noted by Eric Haszlakiewicz. - Improve parsing of HTTP response headers to be more RFC2616 compliant, and skip LWS (linear white space; CR, LF, space, tab) at the end of lines and between the field name and the field value. This still isn't 100% compliant, since we don't support "multi line" responses at this time. This should fix PR [bin/22611] from TAMURA Kent (although I can't easily find an http server to reproduce the problem against.) - Fix a minor memory leak when parsing HTTP response headers. - Don't unnecessarily display a 401/407 error when running with -V. Fix from PR [bin/18535] by Jeremy Reed. - Don't warn about "ignored setsockopt" failures unless debugging is enabled. Suggested by Todd Vierling. - Allow empty passwords in auto-fetch URLs, per RFC 1738. Requested by Simon Poole. - correct URL syntax in comment - Note potentially surprising file-saving behaviour in case of HTTP redirects - -n is ignored for auto-fetch transfers - If connect(2) in xconnect() fails with EINTR, call select(2) on the socket until it's writable or it fails with something other than EINTR. This matches the behaviour in SUSv3, and prevents the problem where pressing ^T (SIGINFO, which is marked as restartable) during connection setup would cause ftp to fail with EADDRINUSE or EALREADY when the second connect(2) was attempted on the same socket. 
Problem found and solution provided by Maxime Henrion <mux@freebsd.org>. - Add -q to usage. From Kouichirou Hiratsuka in PR 26199. - PR/25566: Anders Magnusson: ftp(1) do not like large TCP windows. Limit it to 8M. Mon Oct 6 01:23:03 UTC 2003 lukem * configure.in improvements: - When testing for IN6ADDRSZ in <arpa/nameser.h>, pull in <sys/types.h> first. From Stoned Elipot <seb @ NetBSD> - Whitespace cleanup Mon Aug 25 11:45:45 UTC 2003 lukem * Release "tnftp 20030825" * Add autoconf test for <sys/syslimits.h>; Cygwin needs it for ARG_MAX. Per discussion with Eugene Kotlyarov <ekot@protek36.esoo.ru>. Thu Jul 31 07:30:00 UTC 2003 lukem * release "tnftp 20030731" * merge ftp from NetBSD 20030731 to 20030731b: - Work around broken ftp servers (notably ProFTPd) that can't even follow RFC 2389, and skip any amount of whitespace before a FEATure response. The RFC says 'single space' yet ProFTPd puts two. Noted by DervishD <raul@pleyades.net>. - Improve formatting of features[] debug dump. - Invalidate remote directory completion cache if any command which may change the remote contents completes successfully, including: del, mdel, ren, mkdir, rmdir, quote, and all upload commands. Patch from Yar Tikhiy <yar@freebsd.org>. * merge ftp from NetBSD 20030228 to 20030731: - $FTPUSERAGENT overrides the HTTP User-Agent header. Based on patch from Douwe Kiela <virtus@wanadoo.nl>. - Add about:tnftp - Fix URL in about:netbsd - netbsd.org->NetBSD.org - strlcpy fix in fetch.c - Uppercase "URL" - fix a bogus error message when given a HTTP URL with a trailing slash - groff fixes in man page - tweak progressbar.c copyright; the stuff jason did in util.c wasn't migrated to this file - Don't coredump when printing '%n' in the prompt if there's no username yet. Fix from Maxim Konovalov <maxim@freebsd.org> * Add test for HAVE_IN6ADDRSZ (which older Darwin is lacking), and only enable INET6 if it exists. Patch from Amitai Schlair <schmonz@schmonz.com>. 
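The ProFTPd workaround mentioned above comes down to lenient parsing: RFC 2389 specifies a single leading space before each feature name in a FEAT response, but some servers send two or more, so the client skips any amount of leading whitespace before looking at the name. A minimal hedged sketch (hypothetical helper, not the tnftp parser):

```c
#include <ctype.h>

/*
 * Return the start of the feature name on a FEAT response line,
 * tolerating any run of leading whitespace rather than insisting
 * on the single space RFC 2389 specifies.
 */
static const char *
feat_name(const char *line)
{
	while (*line != '\0' && isspace((unsigned char)*line))
		line++;
	return line;
}
```

Being liberal in what you accept here costs nothing and keeps FEAT usable against the broken servers.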
* Improve ipv6 check for older linux systems that don't provide sin6_scope_id. Patch from YAMANO Yuji <Yamano_Yuji@tk-bay.ogis-ri.co.jp>. Fri Feb 28 10:57:30 UTC 2003 lukem * tagged as "tnftp 2.0 beta1" Fri Feb 28 10:07:07 UTC 2003 lukem * renamed to `tnftp' (from `lukemftp') * renamed `libukem' to `libnetbsd' Mon Jun 17 06:50:13 UTC 2002 lukem * #if USE_GLOB_H, use <glob.h> instead of "ftpglob.h". Requested by Mike Heffner <mikeh@freebsd.org> Mon Jun 10 08:12:35 UTC 2002 lukem * crank FTP_VERSION from 1.6-beta1 to 1.6-beta2 * replace missing fseeko(), with a wrapper to fseek() which checks that the offset isn't > LONG_MAX * #include <regex.h> #if HAVE_REGEX_H Mon Jun 10 01:27:46 UTC 2002 lukem * check for and replace sa_family_t definition * don't bother checking for issetugid(); it was only used in the internal libedit to prevent $HOME/.editrc from being used if running set-id, and the newer libedit code wouldn't even read $HOME/.editrc if issetugid() wasn't available. as many target operating systems don't have issetugid(), and lukemftp isn't likely to be run set-id (and $HOME/.netrc is used in any case), the issetugid() check has been disabled in libedit. * add back cpp code which #defines REGEX #if HAVE_REGEX_H Wed Jun 5 14:39:11 UTC 2002 lukem * crank FTP_VERSION from 1.6alpha1 to 1.6-beta1 * implement replacement setprogname() * use getprogname() instead of __progname * convert to christos' replacement fgetln(), as it's better than mine * merge ftp from NetBSD 20020605 to 20020606: - use setprogname() - only support -6 if INET6 is defined Wed Jun 5 13:08:25 UTC 2002 lukem * don't bother checking if <glob.h> is usable (see below). * always compile in local glob; it's the best way to ensure that various security issues are fixed * update libukem/glob.c from NetBSD's __glob13.c rev 1.22 and rev 1.23 * merge libedit from NetBSD 20010413 to 20020606: - constify; passes all gcc and lint strict checks. 
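The fseeko() replacement mentioned above has a subtle safety requirement: fseek(3) takes a long, so on systems where off_t is wider the wrapper must refuse offsets that don't fit rather than silently truncating them. A sketch of that check (hypothetical stand-in named my_fseeko to avoid shadowing a real fseeko):

```c
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <sys/types.h>

/*
 * Fallback fseeko for systems that lack it: delegate to fseek(3),
 * but fail with EOVERFLOW if the off_t offset can't be represented
 * as a long, instead of truncating and seeking somewhere wrong.
 */
static int
my_fseeko(FILE *stream, off_t offset, int whence)
{
	if (offset > LONG_MAX || offset < LONG_MIN) {
		errno = EOVERFLOW;
		return -1;
	}
	return fseek(stream, (long)offset, whence);
}
```

On LP64 platforms the range check is vacuous, but on ILP32 systems with 64-bit off_t it is what prevents a >2GB offset from wrapping.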
- add config.h [Jason Evans], to create a portable version of libedit that can be easily compiled on other OS's. - PR/12963:Jason Waterman: Fix signed cast problems. - Fixed an __P remnant - Close quoting. - Generate <>& symbolically. - Punctuation and whitespace nits, fix a typo. - PR/14188: Anthony Mallet: Provide an opaque data pointer to client programs. - a couple of minor fixes. originally by Ruslan Ermilov <ru@FreeBSD.org>, highlighted to me by way of Mike Barcroft <mike@FreeBSD.org> (thanks!) - PR/14067: Anthony Mallet: Provide a programmatic way to set the read_char function via a new el_set() operation. Thanks, nicely done :-) - `existent', not `existant' - Don't use HAVE_ yet. - Fix a warning. - Remove an unused variable. - If term_init() fails, cleanup and return NULL. This avoids other lossage. Pointed by charles. - va_{start,end} audit: Make sure that each va_start has one and only one matching va_end, especially in error cases. If the va_list is used multiple times, do multiple va_starts/va_ends. If a function gets va_list as argument, don't let it use va_end (since it's the callers responsibility). Improved by comments from enami and christos -- thanks! - history_def_enter: fix off-by-one mistake in delete condition (the behaviour to keep at least one entry on the history list is retained). This fixes lib/9704 by Phil Nelson. * merge ftp from NetBSD 20020524 to 20020605: - when showing the final progress bar, replace "00:00 ETA" with the elapsed time. (suggested by simonb) - actually display transfer stats after a URL fetch. (bug introduced a *long* time ago) - update copyright & version * merge ftp from NetBSD 20001127 to 20020524: - Use "r+" instead of "r+w", since the latter is not standard. Noted by <Steve.McClellan@radisys.com> in private email. - Only send port number in HTTP/1.1 Host: request if port != 80. Fixes [bin/15415] from Takahiro Kambe <taca@sky.yamashina.kyoto.jp> - Fix bad mode passed by mls() to recvrequest(). 
Fixes [bin/16642] from <steve.mcclellan@radisys.com> - update copyrights - minor knf - invoke cmdtab.c_handler()s with argv[0] == c_name instead of the supplied name. that way the full (unambiguous) name is displayed in error messages and usage strings. - line2 may overrun if line is too long (> 200). be more careful on strcpy. - Handle URLs without files correctly (e.g., when using '-o -'). Fix from Anders Dinsen <anders@dinsen.net> in [bin/13768] - portnum is unsigned, use %u instead of %d - Add -4 to force IPv4 and -6 to force IPv6 address usage. From Hajimu UMEMOTO, via Mike Heffner of FreeBSD. - use u_char instead of char in base64_encode(). problem noticed by Jorgen Lundman in private mail. - don't make broken file with -R option. - handle "*" in Content-Range properly. - If no_proxy condition is true && urltype == FTP_URL_T, use fetch_ftp to retrieve - convert to use getprogname() - Fix description for "form", "mode", and "struct" commands. Inspired by [bin/16736] from Steve McClellan <steve.mcclellan@radisys.com> - Generate <>& symbolically. I'm avoiding .../dist/... directories for now. - Punctuation nits. - Whitespace cleanup. - put "site" in alphabetical order. noted by Mike Barcroft in private email - avoid buffer overrun on PASV from malicious server. - Large file ASCII mode support by using fseeko() instead of fseek(). From Andrey A. Chernov of FreeBSD, via Mike Heffner. - Deal with const'ification of el_parse(). 
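The Host: header rule noted earlier ("only send port number in HTTP/1.1 Host: if port != 80", plus the later RFC 2732 bracket notation for numeric IPv6 hosts) can be sketched as a single formatting helper. Hypothetical function, with a deliberately crude "contains a colon" IPv6 test for illustration:

```c
#include <stdio.h>
#include <string.h>

/*
 * Format an HTTP/1.1 Host: header. The port is omitted when it is
 * the default (80), and numeric IPv6 hosts are bracketed per
 * RFC 2732 so a trailing ":port" stays unambiguous.
 * Hypothetical helper, not the tnftp fetch code.
 */
static void
build_host_header(char *buf, size_t bufsize, const char *host,
    unsigned int port)
{
	int v6 = strchr(host, ':') != NULL;	/* crude IPv6 test */

	if (port == 80)
		snprintf(buf, bufsize, "Host: %s%s%s\r\n",
		    v6 ? "[" : "", host, v6 ? "]" : "");
	else
		snprintf(buf, bufsize, "Host: %s%s%s:%u\r\n",
		    v6 ? "[" : "", host, v6 ? "]" : "", port);
}
```

Omitting the default port matters because some servers treat "host" and "host:80" as different virtual hosts even though they should be equivalent.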
- call setlocale() on startup - display a limited progress bar (containing bytes xferred and xfer rate) when the file size is unknown - disable progress bar during remglob() Thu Mar 14 05:41:49 UTC 2002 lukem * ensure all AF_INET6 use is protected with #ifdef INET6 * remove unnecessary __attribute__ goop * libukem/snprintf.c: fix compile errors with gcc 3.x Tue Apr 17 08:07:29 UTC 2001 lukem * autoconf check for %q long long support in *printf() (instead of %ll), define and use HAVE_PRINTF_QD if so * ipv6 isn't compatible with socks, so disable the former * look for <libutil.h> (instead of <util.h>) and <arpa/nameser.h> * don't check for fparseln() twice * fix getaddrinfo() checks * crank FTP_VERSION from 1.5 to 1.6alpha1 * always ensure _PATH_BSHELL and _PATH_TMP are defined * prototype inet_pton() if it's missing * don't bother trying to use if_indextoname() in ip6_sa2str() (fixes problems on MacOS X) * in inet_pton(), pull in <arpa/nameser.h> for IN6ADDRSZ and INT16SZ, and define if missing Fri Apr 13 15:24:44 UTC 2001 lukem * only include <arpa/nameser.h> if we have it * update glob(3) to netbsd-current (20010329), adding support for GLOB_LIMIT and fixing various buffer overflows. * update editline from NetBSD 20000915 -> NetBSD 20010413 - Enlarge editline buffers as needed to support arbitrary length lines. This also addresses lib/9712 by Phil Nelson. - consistently check for allocation failures and return -1, if we could not get more memory. - add support for home and end keys. - improve debugging support - el_line_t: make 'limit' const Mon Nov 27 23:23:40 EST 2000 lukem * merge ftp from NetBSD-current (20001127): - implement "mreget"; as per "mget" but uses "reget" instead of "get" - add -N netrc and $NETRC, as methods to select an alternative .netrc file - cache local user name and home directory for further use - in mget(), use docase() instead of a local version to do the case conversion. 
- format string cleanups - be more explicit that $ftp_proxy and $http_proxy are not supported for interactive sessions - cope with 2553bis getnameinfo (always attach scope id) getnameinfo error check. - use NI_MAXHOST with getnameinfo. we can assume presence of getnameinfo. Tue Nov 7 00:16:23 EST 2000 lukem * libukem/snprintf.c had a non-functional `%s' due to a function declaration mismatch. problem found and fixed by Hubert Feyrer <hubert@feyrer.de> Wed Oct 11 14:06:19 EST 2000 lukem * released version 1.5 Tue Oct 3 10:22:36 EST 2000 lukem * crank to version 1.5 beta6 * merge ftp from NetBSD-current (20001003) - explicitly use SOCK_STREAM with socket() instead of res->ai_socktype, because it appears that linux with glibc doesn't set the latter correctly after one of getaddrinfo() or getnameinfo(). - clarify that $ftp_proxy only works for full URLs and can't be used for interactive connections. Mon Sep 25 21:52:12 EST 2000 lukem * crank to version 1.5 beta5 Sun Sep 24 13:31:19 EST 2000 lukem * merge ftp from NetBSD-current (20000924) - since everything else here uses ANSI C, we might as well replace __STRING() with the ANSI C stringization stuff... - base64_encode should be static. picked up by hp/ux(!) compiler - It appears that whilst Apache 1.3.9 incorrectly puts a trailing space after the chunksize (before the \r\n), Apache 1.3.11 puts *multiple* trailing spaces after the chunksize. I'm fairly certain that this is contrary to RFC 2068 section 3.6, but whatever... Found by David Brownlee <abs@mono.org> - always include <netdb.h>, not just when INET6 is defined. 
resolves PR [bin/10970] by Richard Earnshaw <rearnsha@cambridge.arm.com> - in progressmeter() perform the check for foregroundproc() a little earlier - removed unused variable `items' in list_vertical() Sat Sep 23 15:43:34 EST 2000 lukem * remove unused sverrno in warnx() and errx() * remove unused h_error in getnameinfo() * in getaddrinfo(), don't bother declaring in6_addrany[] and in6_loopback #ifndef INET6 Thu Sep 21 11:26:35 EST 2000 lukem * in getaddrinfo.c::str_isnumber(), use strtol() and check the result, instead of using strtoul() and not checking the result. * define INADDRSZ if it's not found (e.g., HP/UX doesn't seem to have it in <arpa/nameser.h>) Wed Sep 20 09:23:59 EST 2000 lukem * crank to version 1.5 beta4 Mon Sep 18 18:19:54 EST 2000 lukem * add AC_AIX test, which defines _ALL_SOURCE under AIX * use ANSI # stringization instead of __STRING() * define HAVE_RFC2553_NETDB if <netdb.h> defines AI_NUMERICHOST (et al) and has getaddrinfo(). (some systems only implement RFC2133) * don't bother with AC_C_CONST as we depend upon ANSI C elsewhere * when HAVE_RFC2553_NETDB isn't set, and we're #defining various EAI_, AI_, and NI_ items, #undef first in case a system partially implements these in <netdb.h> * look for tgetent() in -ltinfo before -lncurses, because ncurses 5.0 has been split up into multiple libraries. from Arkadiusz Miskiewicz <misiek@pld.org.pl> Fri Sep 15 01:09:10 EST 2000 lukem * don't bother defining __P() or __STRING() based on whether __STDC__ is available or not, since these aren't used any more * fix mkstemp() prototype * declare getpass() if necessary * we don't need the readline xxgdb hack in libedit... * convert to ansi declarations * use ansi prototypes instead of __P() * merge in changes from makelist 1.4 -> 1.6: - generate ansi prototypes instead of using __P(). 
noted by christos - fix a couple of comments - add -m option to makelist, which generates an mdoc table with the key bindings and their descriptions - manually add the output of 'sh ./makelist -m vi.c ed.c common.c' to a new section in editrc(5) called `EDITOR COMMANDS' * merge libedit from NetBSD-current (20000915) * convert to new style guide, which includes: - ansi prototypes & features (such as stdargs) - 8 space indents * history_def_set has a `const int' as a third arg, not an `int'. picked up by the ultrix compiler, reported by simonb@ ... * generate ansi prototypes instead of using __P(). noted by christos. fix a couple of comments * make xxgdb and a gdb linked with libedit's readline emulation work properly together. xxgdb communicates with a gdb running on a pty that it sets to -echo,-onlcr prior to forking the gdb process. GNU readline preserves the -echo setting while libedit was undoing it (setting the tty to a sane state and totally confusing xxgdb's parser). this diff simply disables libedit if both readline emulation and "stty -echo" are used/set. that is enough to make xxgdb work once again, but (XXX) this is not how GNU readline handles stty -echo (it does not echo anything, but editing commands like ^A,^K, etc. still work), so the readline emulation isn't perfect. Tue Aug 29 18:00:08 EST 2000 lukem * don't bother testing for #if __STDC__; just assume we have it... Mon Aug 28 22:45:08 EST 2000 lukem * refine tests for IPv6 #defines (EAI_, AI_, NI_, ...). should improve portability on systems which implement RFC 2133 but not RFC 2553. Wed Aug 9 02:12:51 EST 2000 lukem * use #if __STDC__ instead of #ifdef __STDC__ * only test 'case NETDB_INTERNAL:' if it's defined * fix support for --program-prefix et al * only include <arpa/nameser.h> in the files that need it, because the DELETE define in some system's implementations causes name collisions in libedit. 
Mon Aug 7 08:17:37 EST 2000 lukem * merge ftp from NetBSD-current (20000807) * implement parseport(), which takes a string and attempts to convert it to a numeric port number * use parseport() in parse_url() and hookup() * don't try and lookup the port number using getaddrinfo(), as it's too hard to separate a failed host name lookup from a failed service name lookup. this was causing lossage on systems that don't have `http' in services(5) (such as solaris), but only crept in when we started using getaddrinfo() unconditionally. Wed Aug 2 23:43:50 EST 2000 lukem * crank to version 1.5 beta3 * define NO_LONG_LONG not NO_QUAD * detect if struct sockaddr.sa_len exists (rather than relying upon #ifdef BSD4_4) * detect if socklen_t exists, and if not, typedef as unsigned int * detect if struct addrinfo exists, and if not declare it and #define associated EAI_, AI_, and NI_ defines. * look for & replace: getaddrinfo(), getnameinfo(), inet_ntop(), inet_pton() * look for gethostbyname2() * don't bother looking for hstrerror() or inet_aton() anymore * include <arpa/nameser.h> and <stddef.h> * define USE_SELECT instead of __USE_SELECT * always define HAVE_H_ERRNO * add Brian Stark to THANKS, for lots of AIX porting feedback * improve detection of sin_len for AIX (now part of sa_len test) * add functions needed by recent ftp import: getaddrinfo(), getnameinfo(), inet_ntop(), inet_pton() remove functions not needed anymore: hstrerror(), inet_aton() * use #if HAVE_ISSETUGID not #ifdef * update from NetBSD-current (20000802): - rename NO_QUAD to NO_LONG_LONG, QUAD* -> LL* and add ULL* (unsigned) equivalents. 
name change suggested by Klaus Klein <kjk@NetBSD.org> - change defined(BSD4_4) || HAVE_SIN_LEN tests into HAVE_SOCKADDR_SA_LEN, and set the latter if BSD4_4 exists Mon Jul 31 10:59:10 EST 2000 lukem * merge ftp from NetBSD-current (20000731) - we can't just rename BSD4_4 -> HAVE_SIN_LEN, since bsd systems define BSD4_4; change tests to test for either defined(BSD4_4) or HAVE_SIN_LEN - more KNF Sun Jul 30 16:55:09 EST 2000 lukem * merge ftp from NetBSD-current (20000730): - clean up NO_QUAD support: create helper #defines and use as appropriate:

	#define		NOQUAD		! NOQUAD
	-------		------		--------
	QUADF		"%ld"		"%lld"
	QUADFP(x)	"%" x "ld"	"%" x "lld"
	QUADT		long		long long
	STRTOL(x,y,z)	strtol(x,y,z)	strtoll(x,y,z)

- always use getaddrinfo() and getnameinfo() instead of maintaining two code paths. - rename __USE_SELECT to USE_SELECT - rename BSD4_4 to HAVE_SIN_LEN - replace union sockunion {} with struct sockinet {}, and modify the code accordingly. this is possibly more portable, as it doesn't rely upon the structure alignment within the union for our own stuff. Fri Jul 28 22:11:17 EST 2000 lukem * merge ftp from NetBSD-current (20000728): - no trailing , on last item (FEAT_max) in enum - rename "opts" to "remopts", so people used to "o host" don't get bitten Wed Jul 26 18:59:19 EST 2000 lukem * merge ftp from NetBSD-current (20000726): - add support for FEAT and OPTS commands with `features' and `opts'. (from RFC 2389). - add support for MLST & MLSD (machine-parseable listings) with 'mlst', 'mlsd' and 'pmlsd' (mlsd |$PAGER) commands. (from draft-ietf-ftpext-mlst-11) - rename remotesyst() to getremoteinfo(), and modify to parse the result from FEAT (if supported), and take into account the support for the various extensions such as MDTM, SIZE, REST (STREAM), MLSD, and FEAT/OPTS. - put each feature into one of the following categories: - known to work (explicit FEAT) - unknown but assume works until explicit failure, when it's then tagged as `known not to work'. - known not to work (FEAT succeeded but didn't return anything, or was unknown and then explicit failure) 
- known not to work (FEAT succeeded but didn't return anything, or was unknown and then explicit failure) assign results into features[] matrix. - add support to getreply() so that an optional callback will be called for each line received from the server except for the first and last. this is used in FEAT (and MLST) parsing. - modify various commands to check if REST (STREAM), MDTM and SIZE are explicitly or implicitly supported before using. - fix `syst' when verbose is off. - minor knf (indent goto labels by one space, etc). - simply various command usage handlers by assuming that argv != NULL except for quit() and disconnect(). - errx?/warnx? audit. do not pass variable alone, use %s. * check for issetugid() and don't use in libedit if it doesn't exist. * merge libedit from NetBSD-current (20000726): * Only look in home directory for .editrc. (Discussed with Christos.) * in glob.c #undef TILDE before redefining, because some AIX systems #define TILDE in <sys/ioctl.h> Mon Jul 10 00:28:51 EST 2000 lukem * released lukemftp 1.4 Thu Jun 15 23:28:49 EST 2000 lukem * merge ftp from NetBSD-current (20000615): * migrate the SYST parsing from setpeer() into a separate remotesyst(). call remotesyst() only when login has been successful some servers don't let you run SYST until you've successfully logged in. * in fetch_ftp(), always call setpeer() with autologin disabled, and use the following ftp_login() to DTRT. this prevents ftp from trying to login a second time if the first autologin fails when connecting to a remote site anonymously using autofetch. * reset unix_proxy and unix_server in cleanuppeer() * missed a function conversion in the KNF sweep... Mon Jun 12 01:16:12 EST 2000 lukem * change lukem to check !HAVE_STRDUP instead of !HAVE_STRSUP. fixes compile problem on systems which have strdup() as a macro. 
    * merge ftp from NetBSD-current (20000612):
      from itojun: better fix for previous (doesn't need in_addr_t or
      u_int32_t)

Sun Jun 11 12:19:52 EST 2000  lukem

    * merge ftp from NetBSD-current (20000611): portability fixes for
      lukemftp:
      * initconn(): use in_addr_t instead of u_int32_t when manipulating
        IPv6 addresses (and assume anything with ipv6 has in_addr_t; if
        not, i'll add an autoconf test for it)
      * ai_unmapped(): not all systems have sin_len; so only set
        #ifdef BSD4_4
      * fix some lint

Mon Jun 5 21:10:31 EST 2000  lukem

    * released lukemftp 1.3

Mon Jun 5 19:53:49 EST 2000  lukem

    * convert various support files to ANSI C
    * look for strtoll() instead of strtoq()
    * update COPYRIGHT, THANKS, NEWS
    * merge ftp from NetBSD-current (20000605):
      - fix ai_unmapped() to be a no-op in the !def INET6 case
      - display `(-INET6)' at the end of the version string if !def INET6
      - clarify in the man page that IPv6 support may not be present (for
        lukemftp :)
    * ensure <vis.h> has VIS_WHITE et al

Sun Jun 4 18:00:07 EST 2000  lukem

    * merge ftp from NetBSD-current (20000604):
      - Change `ls' to use the `LIST' and not `NLST' FTP protocol command.
        Now that, after many years of not caring, we find certain popular
        ftp servers are starting to obey RFC959 to the letter of the law
        and will only return a list of filenames (not directories or other
        filetypes) in the output of `NLST', then `LIST' is more useful in
        this case. (Note that the aforementioned pedanticness means that
        filename completion isn't as useful as it could be...)
        Fixes [bin/8937] by David A. Gatwood
        <dgatwood@deepspace.mklinux.org>
      - convert to ANSI KNF
      - Add support for `fget localfile', which reads a list of filenames
        to retrieve from localfile. Based on work by Darren Reed.
      - Update copyright dates.
      - s/strtoq/strtoll/ (the latter is standardised)
      - Add support for 'ftp -u url file ...', to upload a list of files
        to given url.
        Mostly based on [bin/10019] by Scott Aaron Bamford <sab@ansic.net>
      - convert IPv4 mapped address (::ffff:10.1.1.1) into real IPv4
        address before touching it. IPv4 mapped address complicates too
        many things in FTP protocol handling.
      - do not pass scoped IPv6 address notation on Host: directive, since
        scope identifier is local to the originating node. do not allow
        scoped IPv6 address notation in URL, if it is via proxy.
      - fixes from cgd:
        * sanity check a length (otherwise certain bogus responses can
          crash ftp)
        * allow a transfer encoding type of `binary'; certain firewall
          vendors return this bogus type...
      - make debugging output unambiguous on IPv6 numeric addrs (don't use
        host:port)
      - http://[::1]:8080/ is legal.
      - send Host: directive with RFC2732 bracket notation for IPv6
        numeric, otherwise "host:port" is ambiguous to servers
        (clarification will be submitted as update to RFC2732).
      - only use getaddrinfo() et al if both NI_NUMERICHOST *and* INET6
        are defined... (allows --disable-ipv6 in lukemftp's configure
        script to disable this as well, which is good for testing when it
        appears getaddrinfo() is borken)
      - updated comment on IPv4 mapped address. sync with kame.
      - Fix examples on using pipes in local filenames. AFAICT, ftp has
        always required `dir . |more' and not `dir |more', as the latter
        treats `|more' as the remote filename.
        Resolves [bin/9922] by Geoff Wing <mason@primenet.com.au>
      - ftp(1): treats IPv4 mapped destination as IPv4 peer, not native
        IPv6 peer. this does not support network with SIIT translator.
      - inhibit too-noisy message for scoped address data transfer (will
        be enabled in "debug" mode).
      - only use IPTOS_ setsockopt()s if they're defined (e.g, SunOS
        doesn't). from Havard.Eidnes@runit.sintef.no
      - allow IPv6 extended numeric address in host part.
        (draft-ietf-ipngwg-scopedaddr-format-01.txt). fixes PR 9616.
    * merge libedit from NetBSD-current (20000604):
      - use strtol() (instead of atoi()) for sane error detection

Wed May 31 19:24:53 EST 2000  lukem

    * merge libedit from NetBSD-current (20000531):
      - Fix refresh glitches when using auto-margin.
      - Don't dump core on empty .editrc files.
      - el_insertstr takes a "const char *" not "char *" now as it doesn't
        modify the argument.

Thu Feb 3 20:19:40 EST 2000  lukem

    * released lukemftp 1.2

Tue Feb 1 09:47:51 EST 2000  lukem

    * add --enable-ipv6 and --disable-ipv6 to configure
    * modify libedit/sig.? to use sigfunc instead of sig_t, and deprecate
      autoconf tests for retsigtype and sig_t. This fixes portability
      problems with Digital UNIX 5.0.
    * merge ftp from NetBSD-current (20000201):
      - define private type `sigfunc' as
            typedef void (*sigfunc) __P((int));
        and replace use of sig_t and void (*)(int). certain other OSes
        define sig_t differently to that (they add extra arguments), and
        it causes problems due to function mismatches, etc...

Wed Jan 26 22:54:38 EST 2000  lukem

    * search for tgetent() in -ltermcap then -lcurses and -lncurses
    * merge ftp from NetBSD-current (20000126):
      - roll back to using sscanf() instead of strptime() to parse
        `yyyymmddhhmmss' strings, since the latter technically can't parse
        dates without non alphanumerics between the elements (even though
        NetBSD's strptime() copes).

Tue Jan 25 19:09:37 EST 2000  lukem

    * merge ftp from NetBSD-current (20000125):
      - complete_ambiguous(): be consistent about completing unambiguous
        matches; if the word is already complete then return CC_REFRESH so
        that the higher layer may append a suffix if necessary.
        Fix from Launey Thomas <ljt@alum.mit.edu>
      - change references from draft-ietf-ipngwg-url-literal-01.txt to
        RFC2732
      - work around bug in apache 1.3.9 which incorrectly puts a trailing
        space after the chunksize. noted by Jun-ichiro itojun Hagino
        <itojun@itojun.org> in [bin/9096]
      - work around lame ftpd's that don't return a correct post-Y2K date
        in the output of `MDTM'.
        obviously the programmer of aforementioned lame ftpd's did
        something like
            "19%02d", tm->tm_year
        instead of
            "%04d", tm->tm_year + TM_YEAR_BASE
        fixes [bin/9289] by jbernard@mines.edu
    * merge libedit from NetBSD-current (20000125):
      - PR/9244: Kevin Schoedel: libedit dumps bindings inconsistently
      - PR/9243: Kevin Schoedel: libedit ignores repeat count
      - Add support for automatic and magic margins (from tcsh). This
        makes the rightmost column usable on all programs that use
        editline.

Tue Dec 21 08:59:22 EST 1999  lukem

    * update INSTALL notes for some systems
    * if sl_init() exists, check return value of sl_add() is int and
      compile in a replacement copy if it's not the case
    * don't look for <stringlist.h> - always use local prototypes; older
      NetBSD systems may have conflicting prototypes

Mon Dec 20 11:21:28 EST 1999  lukem

    * merge ftp from NetBSD-current (19991220):
      - Move version from ftp_var.h to version.h
      - Fix chunked support; probably broke after rate limiting was added.
        Problem noticed/debugging assisted by giles lean
        <giles@nemeton.com.au>.
      - remove unnecessary freeaddrinfo(res), since res0 was changed to be
        freed earlier in itojun's last commit. fixes [bin/8948].
      - remove `const char *reason'; it was being assigned but not used.
      - fix memory leak in fetch_url (no freeaddrinfo was there). sync
        with recent KAME.
      - separate out the main `data pump' loop into two: one that supports
        rate limiting and one that doesn't. simplifies the code, and
        speeds up the latter case a bit, at the expense of duplicating a
        few lines...

Sun Nov 28 18:20:41 EST 1999  lukem

    * merge ftp from NetBSD-current (19991128):
      - implement xsl_init() and xsl_add(); error checking forms of
        sl_{init,add}()
      - fix bug where the second press of <TAB> on an empty word (i.e,
        list all options) may have resulted in an strncmp() against NULL.
        (detected by _DIAGASSERT())
      - in cleanuppeer(), reset username to NULL after free()ing it.
        fixes [bin/8870] by Wolfgang Rupprecht <wolfgang@wsrcc.com>
      - complete_remote(): use remglob("", ...) instead of
        remglob(".", ...), for listings of the current working directory;
        some ftp servers don't like `NLST .'.
        [noted by Giles Lean <giles@nemeton.com.au>]
      - recvrequest(): treat remote=="" as remote==NULL when calling
        command(). (to support the above change)
      - support `[user@]' in `[user@]host' and `[user@]host[:][path]'.
        [based on idea (and initial code) from David Maxwell
        <david@fundy.ca>]
      - `idle' may be invoked without any args
      - reformat some comments
      - reformat usage string in program and man page
      - call updateremotepwd() after successful login, not after
        successful connect
      - always call setsockopt(, IPPROTO_IP, IP_TOS, ) (et al); using
        #if defined(IPPROTO_IP) doesn't work on certain foreign systems
        where enums instead of #defines are used...
        [noted by Matthias Pfaller <leo@dachau.marco.de>]

Mon Nov 15 23:01:58 EST 1999  lukem

    * released lukemftp 1.1

Mon Nov 15 09:07:01 EST 1999  lukem

    * merge libedit from NetBSD-current (19991115):
      - instead of using a private coord_t global variable to store the
        size of the rprompt, use the previously unused coord_t
        el->el_rprompt.p_pos

Sat Nov 13 14:42:22 EST 1999  lukem

    * support caching of results in AC_MSG_TRY_{COMPILE,LINK} autoconf
      tests
    * add NEWS file
    * clarify copyright statement in COPYING
    * merge ftp from NetBSD-current (19991113):
      - implement `set rprompt'; right side version of `set prompt'.
        depends on EL_RPROMPT support i added to editline(3).
      - allow $FTPPROMPT and $FTPRPROMPT to override defaults for the
        relevant prompts
      - move `%' formatting code from prompt() to expandbuf().
      - implement `%.' and `%c', similar to the same % codes in tcsh(1)
        (functionality I added to tcsh nearly 6 years ago), except that
        `%.' always does `...trailing' and `%c' always does
        `/<x>trailing'.
      - unknown `%foo' codes get printed as `%foo'
      - implement updateremotepwd(); update the global variable
        `remotepwd' to contain the remote working directory.
      - add `set prompt', a user configurable prompt. (defaults to
        `ftp> '). the following escape characters a la tcsh(1) are
        supported: %/, %m, %M, and %n.
      - add global var `username'; used by prompt code
      - fix a couple of minor memory leaks
      - bump version
      - prevent minor memory leak (unnecessary strdup)
      - implement restarting non-proxied http:// URLs (with -R).
      - fix a semicolon typo which stopped it from working
      - split the version string into product and version
      - be consistent about reporting the version between:
          + status command
          + about:version URL fetch
          + User-agent sent in http requests
      - hookup(): when using getservbyname() (when getaddrinfo() isn't
        available), if the provided port is a valid number use that rather
        than trying to do getservbyname() against it. fixes a problem on
        foreign systems noted by Chuck Silvers <chuq@chuq.com>
      - support `about:version'. also display the version in the output
        of `status'.
    * merge libedit from NetBSD-current (19991113):
      - implement printing a right-side prompt. code derived from similar
        work I wrote for tcsh(1) three years ago.
      - implement EL_RPROMPT, which allows setting/getting a function
        which returns a string to be used as the right-side prompt.
    * replace manually managed config.h.in with acconfig.h and use
      autoheader to generate the former.
    * add missing entry for `#undef write' in acconfig.h (for SOCKS)
    * configure.in:
      - use `LL' suffix on long long constant used to test
        snprintf("%lld")
      - test for EL_RPROMPT instead of EL_EDITMODE, since the former is a
        newer required feature
    * in makelist, set LC_ALL="C", in case the locale confuses awk.
      problem noted by Peter Seebach <seebs@plethora.net>

Wed Oct 27 07:00:00 UTC 1999  lukem

    * released 1.0
    * removed libedit/TEST/test.c; no need to distribute it

Mon Oct 25 21:59:54 EST 1999  lukem

    * released 1.0b7
    * put VERSION string into lukem, and display with the `status' command

Mon Oct 25 11:36:59 EST 1999  lukem

    * merge ftp from NetBSD-current (19991025):
      - fix up confirm() (broke `a' and `p' in last commit)
      - simplify main loop (don't need `top' variable any more)
      - use a struct sockaddr_in6.sin6_addr for the result from
        inet_pton(), rather than u_char buf[16]
      - add a few more comments
      new features:
      - add `usage'; displays the usage of a command. implemented by
        calling the c_handler() with argc = 0, argv = "funcname".
      - add `passive auto'; does the same as $FTPMODE=auto.
      - add `set [option value]'; display all options, or set an option
        to a value.
      - add `unset option'; unset an option.
      - add getoptionvalue() to retrieve an option's value, and replace a
        few global variables with calls to this.
      - implement cleanuppeer(), which resets various bits of state back
        to `disconnected'. call in disconnect() and lostpeer().
      - support completing on `options'.
      - improve recovery after a SIGINT may have closed the connection.
        XXX: there's still a couple to fix
      other stuff:
      - various consistency fixes in the man page.
      - ensure that the command usage strings in the code and man page
        match reality.
      - mput/mget: check that the connection still exists before each
        xfer.
      - minor cosmetic changes in confirm().
      - set code correctly in sizecmd() and modtime()
      - don't need \n in err() strings.
      - change lostpeer to take an argument (rather than casting
        (sig_t)lostpeer in signal handlers)
      - knf and whitespace police.
Sun Oct 24 17:02:59 EST 1999  lukem

    * merge libedit from NetBSD-current (19991024):
      - don't assume locales are not working - it may not be the case
      - re_refresh(): cast the character passed to re_addc() to unsigned
        char, so we don't end up calling isprint() with negative value
        when chars are signed and character value is >= 128
      - Fix pointer arithmetic (caused problems on LP64, including ftp
        dumping core when `edit' was turned off then on). Problem solved
        by David Huggins-Daines <dhd@eradicator.org>

Tue Oct 12 18:05:21 EST 1999  lukem

    * install man page from ${srcdir} not from .

Tue Oct 12 17:00:41 EST 1999  lukem

    * released 1.0b6
    * merge from NetBSD-current (19991012):
      a few user interface and cosmetic tweaks:
      - confirm(): move from util.c to cmds.c. display mnemonic string in
        its prompt. add support for `q' (terminate current xfer), `?'
        (show help list)
      - in various signal handlers, output a linefeed only if fromatty.
      - if fgets(stdin) returned NULL (i.e, EOF), clearerr(stdin) because
        you don't want future fgets to fail. this is not done for the
        fgets() in the main command loop, since ftp will quit at that
        point.
      - unless ftp is invoked with -a, don't retain the anonftp setting
        between hosts (`ftp somehost:' sets anonftp, but you don't want
        that to `stick' if you close that connection and open a new one).

Mon Oct 11 23:06:38 EST 1999  lukem

    * check for working const
    * reorganise addition of -lukem to LIBS (was being added twice)
    * merge from netbsd-current:
      - use sigjmp_buf instead of jmp_buf for sigsetjmp() buffer
    * libedit: don't bother generating & compiling editline.c, since its
      component parts are compiled anyway.

Sun Oct 10 12:08:39 EST 1999  lukem

    * released 1.0b5
    * in libedit, use xsignal_restart() (from src/util.c) instead of
      signal(); the latter isn't guaranteed to work on some foreign
      systems (e.g, IRIX) if sigaction() is used in the same program.
    * merge from netbsd-current:
      - use sigsetjmp()/siglongjmp() instead of setjmp()/longjmp(); the
        latter don't save the signal mask on some foreign systems.
      - ensure signal handlers don't use stdio and do reset errno if they
        don't exit with siglongjmp()
      - use a common SIGINT handler for {send,recv}request()
      - allow a second SIGINT during the "xfer aborted. waiting for remote
        to finish abort." stage. if this occurs, just call lostpeer() to
        close the connection. whilst this might be considered brutal, it's
        also extremely handy if you're impatient or there's lossage at the
        remote end.
    * add preformatted manual page
    * fix --enable-editline

Wed Oct 6 10:19:00 EST 1999  lukem

    * released 1.0b4
    * don't define SIGINFO to SIGQUIT if the former doesn't exist; the
      code now supports both as a method of getting the transfer stats
    * rototill signal handling in the actual data xfer routines, and
      specifically set SIGQUIT to psummary in each one, to override
      editline's handler

Tue Oct 5 23:48:29 EST 1999  lukem

    * factor out SIGINFO setting into a handler that is always active
      (but only prints out info if bytes > 0). only set the handler if
      SIGINFO is defined
    * hijack SIGQUIT to be the same as SIGINFO
    * in {recv,send}request(), factor a lot of duplicated code out into a
      `cleanup' section at the end
    * rework shell() a bit
    * enhancements from Marc Horowitz <marc@mit.edu> to improve connection
      timeouts:
      - implement xsignal_restart(), which only sets the SA_RESTART flag
        if specifically requested
      - xsignal() is now a wrapper to xsignal_restart().
        INFO, USR1, USR2 and WINCH are restartable, ALRM, INT, PIPE and
        QUIT are not
      - improve getreply()'s timeout code to take advantage of the above
    * improve wording of how globbing works for `classic' URLs (host:path)
      suggested by John Refling <johnr@imageworks.com> in relation to PRs
      [bin/8519] and [bin/8520]
    * always compile in the `edit' command even if NO_EDITCOMPLETE is
      defined; it's just a no-op in the latter case, which is more
      consistent to the users
    * always compile in about: support (i.e, remove NO_ABOUT). i'm
      entitled to some vanity in this program...
    * update copyrights

Mon Oct 4 10:57:41 EST 1999  lukem

    * Invoke ar with `cr' not `cq'
    * Use AC_PROG_RANLIB to find ranlib, and use it on the libraries
    * Remove `makelist' from dependency list for libedit files; re-running
      configure shouldn't result in rebuilding libedit
    * Add support for --{en,dis}able-editcomplete (defaults to enabled),
      which prevents libedit support from being compiled in.
      From Chris G. Demetriou <cgd@NetBSD.org>

Sun Oct 3 16:49:01 EST 1999  lukem

    * touch up the README
    * add COPYING, INSTALL, THANKS
    * whitespace consistency
    * in config.h, replace NO_QUAD with HAVE_QUAD_SUPPORT, and in lukem
      define the former if the latter is non zero
    * change test against GETPGRP_VOID from #ifdef to #if
    * snprintf(): in the truncation case, ensure that the length returned
      is the actual length, not the needed length

Sat Oct 2 00:41:34 EST 1999  lukem

    * fix more lossage with $(srcdir) / $(VPATH) stuff; seems to work now
      when configured in a separate directory
    * actually test the correct variable when determining whether to run
      AC_FUNC_GETPGRP

Fri Oct 1 19:32:22 EST 1999  lukem

    * released 1.0b3
    * use AC_PROG_MAKE_SET
    * determine setting of NO_QUAD with configure not lukem
    * if have long long and have snprintf, test that snprintf supports
      %lld.
      if it doesn't, use the private version
    * change strtoq from returning off_t to returning long long
    * updates from NetBSD mainline:
      - only try epsv once per connection (i.e, don't bother again if it
        fails)
      - improve description of rate command
      - fix up global vars; they're now externed in ftp_var.h except when
        main.c includes it
      - remove "pathnames.h"

Fri Oct 1 10:08:43 EST 1999  lukem

    * updates from NetBSD mainline:
      - fix determining of homedir
      - parse_url(): fix checking of portnum
      - move kame copyrights after bsd/tnfi ones
    * released 1.0b2
    * add %lld and %qd support to snprintf() for displaying long long's
    * support VPATH and srcdir

Thu Sep 30 17:19:35 EST 1999  lukem

    * released 1.0b1
    * fix from NetBSD mainline: in empty() FD_ZERO the correct variable

Wed Sep 29 23:34:33 EST 1999  lukem

    * major rework; reimport code from NetBSD-current 1999/09/29 into
      separate subdirectories and build from there. organisation is now:
          libedit   replacement libedit
          libukem   replacements for missing functions
          src       main ftp source

Mon Sep 27 00:43:12 EST 1999  lukem

    * released 1.0 a6

Sun Sep 26 17:17:05 EST 1999  lukem

    * released 1.0 a5

Sat Sep 25 00:58:28 EST 1999  lukem

    * released 1.0 a4

Fri Sep 24 17:07:07 EST 1999  lukem

    * released 1.0 a3

Fri Sep 24 16:18:29 EST 1999  lukem

    * released 1.0 a2

Tue Sep 21 11:38:49 EST 1999  lukem

    * import usr.src/bin/ftp and usr.src/lib/libedit sources from NetBSD
Quadrature encoder interface library.
Dependents: PreHeater-Rev2

QEI.h

Committer:  Hapi_Tech
Date:       2015-07-29
Revision:   2:d811f926cf4a
Parent:     1:aea205976bf8

File content as of revision 2:d811f926cf4a:

/**
 * @author Aaron Berk
 *
 * @section LICENSE
 *
 * Copyright (c) 2010 ARM.
 *
 * @section DESCRIPTION
 *
 * Quadrature Encoder Interface.
 *
 * A quadrature encoder consists of two code tracks on a disc which are 90
 * degrees out of phase. It can be used to determine how far a wheel has
 * rotated, relative to a known starting position.
 *
 * Only one code track changes at a time leading to a more robust system than
 * a single track, because any jitter around any edge won't cause a state
 * change as the other track will remain constant.
 *
 * Encoders can be a homebrew affair, consisting of infrared emitters/receivers
 * and paper code tracks consisting of alternating black and white sections;
 * alternatively, complete disk and PCB emitter/receiver encoder systems can
 * be bought, but the interface, regardless of implementation, is the same.
 *
 *               +-----+     +-----+     +-----+
 * Channel A     |     |     |     |     |     |
 *            ---+     +-----+     +-----+     +-----
 *               :  :
 *               :  +-----+     +-----+     +-----+
 * Channel B     :  |     |     |     |     |     |
 *            ------+     +-----+     +-----+     +--
 *               :  :
 *               |--|
 *               90deg
 *
 * The interface uses X2 encoding by default which calculates the pulse count
 * based on reading the current state after each rising and falling edge of
 * channel A.
 *
 *               +-----+     +-----+     +-----+
 * Channel A     |     |     |     |     |     |
 *            ---+     +-----+     +-----+     +-----
 *               ^     ^     ^     ^     ^
 *                  +-----+     +-----+     +-----+
 * Channel B        |     |     |     |     |     |
 *            ------+     +-----+     +-----+     +--
 *
 * Pulse count 0 1     2     3     4     5  ...
 *
 * This interface can also use X4 encoding which calculates the pulse count
 * based on reading the current state after each rising and falling edge of
 * either channel.
 *
 *               +-----+     +-----+     +-----+
 * Channel A     |     |     |     |     |     |
 *            ---+     +-----+     +-----+     +-----
 *               ^     ^     ^     ^     ^     ^
 *                  +-----+     +-----+     +-----+
 * Channel B        |     |     |     |     |     |
 *            ------+     +-----+     +-----+     +--
 *                  ^     ^     ^     ^     ^
 *
 * Pulse count 0 1  2  3  4  5  6  7  8  9  ...
 *
 * It defaults to X2 encoding.
 *
 * An optional index channel can be used which determines when a full
 * revolution has occurred.
 *
 * If a 4 pulses per revolution encoder was used, with X4 encoding,
 * the following would be observed.
 *
 *               +-----+     +-----+     +-----+
 * Channel A     |     |     |     |     |     |
 *            ---+     +-----+     +-----+     +-----
 *               ^     ^     ^     ^     ^     ^
 *                  +-----+     +-----+     +-----+
 * Channel B        |     |     |     |     |     |
 *            ------+     +-----+     +-----+     +--
 *                  ^     ^     ^     ^     ^
 *                     +--+        +--+
 * Index      ---------+  +--------+  +-------------
 *
 * Pulse count 0 1  2  3  4  5  6  7  8  9  ...
 * Rev.  count 0       1           2
 *
 * Rotational position in degrees can be calculated by:
 *
 * (pulse count / (X * N)) * 360
 *
 * Where X is the encoding type [e.g. X4 encoding => X=4], and N is the number
 * of pulses per revolution.
 *
 * Linear position can be calculated by:
 *
 * (pulse count / (X * N)) * (1 / PPI)
 *
 * Where X is the encoding type [e.g. X4 encoding => X=4], N is the number of
 * pulses per revolution, and PPI is pulses per inch, or the equivalent for
 * any other unit of displacement. PPI can be calculated by taking the
 * circumference of the wheel or encoder disk and dividing it by the number
 * of pulses per revolution.
 */

#ifndef QEI_H
#define QEI_H

/**
 * Includes
 */
#include "mbed.h"

/**
 * Defines
 */
#define PREV_MASK 0x1 //Mask for the previous state in determining direction
                      //of rotation.
#define CURR_MASK 0x2 //Mask for the current state in determining direction
                      //of rotation.
#define INVALID   0x3 //XORing two states where both bits have changed.

/**
 * Quadrature Encoder Interface.
 */
class QEI {

public:

    typedef enum Encoding {
        X2_ENCODING,
        X4_ENCODING
    } Encoding;

    /**
     * Constructor.
     *
     * Reads the current values on channel A and channel B to determine the
     * initial state.
     *
     * Attaches the encode function to the rise/fall interrupt edges of
     * channels A and B to perform X4 encoding.
     *
     * Attaches the index function to the rise interrupt edge of channel index
     * (if it is used) to count revolutions.
     *
     * @param channelA mbed pin for channel A input.
     * @param channelB mbed pin for channel B input.
     * @param index    mbed pin for optional index channel input,
     *                 (pass NC if not needed).
     * @param intRes   Pin mode (PullUp/PullDown/PullNone) to apply to the
     *                 channel inputs.
     * @param pulsesPerRev Number of pulses in one revolution.
     * @param encoding The encoding to use. Uses X2 encoding by default. X2
     *                 encoding uses interrupts on the rising and falling
     *                 edges of only channel A whereas X4 uses them on both
     *                 channels.
     */
    QEI(PinName channelA, PinName channelB, PinName index, PinMode intRes,
        int pulsesPerRev, Encoding encoding = X2_ENCODING);

    /**
     * Reset the encoder.
     *
     * Sets the pulses and revolutions count to zero.
     */
    void reset(void);

    /**
     * Read the state of the encoder.
     *
     * @return The current state of the encoder as a 2-bit number, where:
     *         bit 1 = The reading from channel B
     *         bit 2 = The reading from channel A
     */
    int getCurrentState(void);

    /**
     * Read the number of pulses recorded by the encoder.
     *
     * @return Number of pulses which have occurred.
     */
    int getPulses(void);

    /**
     * Read the number of revolutions recorded by the encoder on the index
     * channel.
     *
     * @return Number of revolutions which have occurred on the index channel.
     */
    int getRevolutions(void);

private:

    /**
     * Update the pulse count.
     *
     * Called on every rising/falling edge of channels A/B.
     *
     * Reads the state of the channels and determines whether a pulse forward
     * or backward has occurred, updating the count appropriately.
     */
    void encode(void);

    /**
     * Called on every rising edge of channel index to update revolution
     * count by one.
     */
    void index(void);

    Encoding       encoding_;

    InterruptIn    channelA_;
    InterruptIn    channelB_;
    InterruptIn    index_;

    int            pulsesPerRev_;
    int            prevState_;
    int            currState_;

    volatile int   pulses_;
    volatile int   revolutions_;

};

#endif /* QEI_H */