Data binding in depth
Important APIs
Note
This topic describes data binding features in detail. For a short, practical introduction, see Data binding overview.
Data binding is a way for your app's UI to display data, and optionally to stay in sync with that data. Data binding allows you to separate the concern of data from the concern of UI, and that results in a simpler conceptual model as well as better readability, testability, and maintainability of your app.
You can use data binding to simply display values from a data source when the UI is first shown, but not to respond to changes in those values. This mode of binding is called one-time, and it works well for a value that doesn't change during run-time. Alternatively, you can choose to "observe" the values and to update the UI when they change. This mode is called one-way, and it works well for read-only data. Finally, you can choose both to observe and to update, so that changes the user makes to values in the UI are automatically pushed back into the data source. This mode is called two-way, and it works well for read-write data. Here are some examples.
- You could use the one-time mode to bind an Image to the current user's photo.
- You could use the one-way mode to bind a ListView to a collection of real-time news articles grouped by newspaper section.
- You could use the two-way mode to bind a TextBox to a customer's name in a form.
Independent of mode, there are two kinds of binding, and they're both typically declared in UI markup. You can choose to use either the {x:Bind} markup extension or the {Binding} markup extension. And you can even use a mixture of the two in the same app—even on the same UI element. {x:Bind} is new for Windows 10 and it has better performance. All the details described in this topic apply to both kinds of binding unless we explicitly say otherwise.
Sample apps that demonstrate {x:Bind}
Sample apps that demonstrate {Binding}
- Download the Bookstore1 app.
- Download the Bookstore2 app.
Every binding involves these pieces
- A binding source. This is the source of the data for the binding, and it can be an instance of any class that has members whose values you want to display in your UI.
- A binding target. This is a DependencyProperty of the FrameworkElement in your UI that displays the data.
- A binding object. This is the piece that transfers data values from the source to the target, and optionally from the target back to the source. The binding object is created at XAML load time from your {x:Bind} or {Binding} markup extension.
In the following sections, we'll take a closer look at the binding source, the binding target, and the binding object. And we'll link the sections together with the example of binding a button's content to a string property named NextButtonText, which belongs to a class named HostViewModel.
Binding source
Here's a very rudimentary implementation of a class that we could use as a binding source.
If you're using C++/WinRT, then add new Midl File (.idl) items to the project, named as shown in the C++/WinRT code example listing below. Replace the contents of those new files with the MIDL 3.0 code shown in the listing, build the project to generate HostViewModel.h and HostViewModel.cpp, and then add code to the generated files to match the listing. For more info about those generated files and how to copy them into your project, see XAML controls; bind to a C++/WinRT property.
public class HostViewModel
{
    public HostViewModel()
    {
        this.NextButtonText = "Next";
    }

    public string NextButtonText { get; set; }
}
// HostViewModel.idl
namespace DataBindingInDepth
{
    runtimeclass HostViewModel
    {
        HostViewModel();
        String NextButtonText;
    }
}

// HostViewModel.h
// Implement the constructor like this, and add this field:
...
HostViewModel() : m_nextButtonText{ L"Next" } {}
...
private:
    std::wstring m_nextButtonText;
...

// HostViewModel.cpp
// Implement like this:
...
hstring HostViewModel::NextButtonText()
{
    return hstring{ m_nextButtonText };
}

void HostViewModel::NextButtonText(hstring const& value)
{
    m_nextButtonText = value;
}
...
That implementation of HostViewModel, and its property NextButtonText, are only appropriate for one-time binding. But one-way and two-way bindings are extremely common, and in those kinds of binding the UI automatically updates in response to changes in the data values of the binding source. In order for those kinds of binding to work correctly, you need to make your binding source "observable" to the binding object. So in our example, if we want to one-way or two-way bind to the NextButtonText property, then any changes that happen at run-time to the value of that property need to be made observable to the binding object.
One way of doing that is to derive the class that represents your binding source from DependencyObject, and expose a data value through a DependencyProperty. That's how a FrameworkElement becomes observable. FrameworkElements are good binding sources right out of the box.
A more lightweight way of making a class observable—and a necessary one for classes that already have a base class—is to implement System.ComponentModel.INotifyPropertyChanged. This really just involves implementing a single event named PropertyChanged. An example using HostViewModel is below.
...
using System.ComponentModel;
using System.Runtime.CompilerServices;
...
public class HostViewModel : INotifyPropertyChanged
{
    private string nextButtonText;

    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    public HostViewModel()
    {
        this.NextButtonText = "Next";
    }

    public string NextButtonText
    {
        get { return this.nextButtonText; }
        set
        {
            this.nextButtonText = value;
            this.OnPropertyChanged();
        }
    }

    public void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        // Raise the PropertyChanged event, passing the name of the property whose value has changed.
        this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}
// HostViewModel.idl
namespace DataBindingInDepth
{
    runtimeclass HostViewModel : Windows.UI.Xaml.Data.INotifyPropertyChanged
    {
        HostViewModel();
        String NextButtonText;
    }
}

// HostViewModel.h
// Add this field:
...
winrt::event_token PropertyChanged(Windows::UI::Xaml::Data::PropertyChangedEventHandler const& handler);
void PropertyChanged(winrt::event_token const& token) noexcept;

private:
    winrt::event<Windows::UI::Xaml::Data::PropertyChangedEventHandler> m_propertyChanged;
...

// HostViewModel.cpp
// Implement like this:
...
void HostViewModel::NextButtonText(hstring const& value)
{
    if (m_nextButtonText != value)
    {
        m_nextButtonText = value;
        m_propertyChanged(*this, Windows::UI::Xaml::Data::PropertyChangedEventArgs{ L"NextButtonText" });
    }
}

winrt::event_token HostViewModel::PropertyChanged(Windows::UI::Xaml::Data::PropertyChangedEventHandler const& handler)
{
    return m_propertyChanged.add(handler);
}

void HostViewModel::PropertyChanged(winrt::event_token const& token) noexcept
{
    m_propertyChanged.remove(token);
}
...
Now the NextButtonText property is observable. When you author a one-way or a two-way binding to that property (we'll show how later), the resulting binding object subscribes to the PropertyChanged event. When that event is raised, the binding object's handler receives an argument containing the name of the property that has changed. That's how the binding object knows which property's value to go and read again.
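You can watch this handshake outside of XAML with ordinary C#. The sketch below (the Demo harness is illustrative, not part of the binding engine) subscribes to PropertyChanged exactly the way a binding object does, and records which property name it was told to re-read:

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

public class HostViewModel : INotifyPropertyChanged
{
    private string nextButtonText;

    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    public string NextButtonText
    {
        get { return this.nextButtonText; }
        set { this.nextButtonText = value; this.OnPropertyChanged(); }
    }

    public void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}

public static class Demo
{
    public static string LastChangedProperty;

    public static void Run()
    {
        var vm = new HostViewModel();
        // A binding object subscribes like this; the event argument tells it
        // which property's value to go and read again.
        vm.PropertyChanged += (s, e) => LastChangedProperty = e.PropertyName;
        vm.NextButtonText = "Next";
    }
}
```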
So that you don't have to implement the pattern shown above multiple times, if you're using C# then you can just derive from the BindableBase base class that you'll find in the QuizGame sample (in the "Common" folder). Here's an example of how that looks.
public class HostViewModel : BindableBase
{
    private string nextButtonText;

    public HostViewModel()
    {
        this.NextButtonText = "Next";
    }

    public string NextButtonText
    {
        get { return this.nextButtonText; }
        set { this.SetProperty(ref this.nextButtonText, value); }
    }
}
// Your BindableBase base class should itself derive from Windows::UI::Xaml::DependencyObject. Then, in HostViewModel.idl, derive from BindableBase instead of implementing INotifyPropertyChanged.
Note
For C++/WinRT, any runtime class that you declare in your application that derives from a base class is known as a composable class. And there are constraints around composable classes. For an application to pass the Windows App Certification Kit tests used by Visual Studio and by the Microsoft Store to validate submissions (and therefore for the application to be successfully ingested into the Microsoft Store), a composable class must ultimately derive from a Windows base class. Meaning that the class at the very root of the inheritance hierarchy must be a type originating in a Windows.* namespace. If you do need to derive a runtime class from a base class—for example, to implement a BindableBase class for all of your view models to derive from—then you can derive from Windows.UI.Xaml.DependencyObject.
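For reference, a minimal BindableBase along the lines of the QuizGame sample's helper can look like the sketch below (this is an illustrative implementation, not the sample's exact code; the ItemViewModel class exists only to demonstrate usage):

```csharp
using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

public class BindableBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    // Assigns the new value and raises PropertyChanged,
    // but only if the value actually changed.
    protected bool SetProperty<T>(ref T storage, T value,
        [CallerMemberName] string propertyName = null)
    {
        if (EqualityComparer<T>.Default.Equals(storage, value)) return false;
        storage = value;
        this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        return true;
    }
}

// Illustrative consumer: setters become one-liners.
public class ItemViewModel : BindableBase
{
    private string title;

    public string Title
    {
        get { return this.title; }
        set { this.SetProperty(ref this.title, value); }
    }
}
```

Note the equality check: assigning the same value twice raises the event only once, which avoids needless UI updates.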
Raising the PropertyChanged event with an argument of String.Empty or null indicates that all non-indexer properties on the object should be re-read. You can raise the event to indicate that indexer properties on the object have changed by using an argument of "Item[indexer]" for specific indexers (where indexer is the index value), or a value of "Item[]" for all indexers.
A binding source can be treated either as a single object whose properties contain data, or as a collection of objects. In C# and Visual Basic code, you can one-time bind to an object that implements List(Of T) to display a collection that doesn't change at run-time. For an observable collection (observing when items are added to and removed from the collection), one-way bind to ObservableCollection(Of T) instead. In C++ code, you can bind to Vector<T> for both observable and non-observable collections. To bind to your own collection classes, use the guidance in the following table. If your data source supports incremental loading, then when the data binding engine requests more data, your data source must make the appropriate requests, integrate the results, and then send the appropriate change notifications in order to update the UI.
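The reason ObservableCollection(Of T) supports one-way collection binding is that it raises CollectionChanged whenever items are added or removed, and a binding object listens for exactly that event. A minimal illustration in plain C# (no XAML involved; the CollectionDemo harness is illustrative):

```csharp
using System.Collections.ObjectModel;
using System.Collections.Specialized;

public static class CollectionDemo
{
    public static NotifyCollectionChangedAction? LastAction;

    public static void Run()
    {
        var articles = new ObservableCollection<string>();
        // A one-way-bound items control subscribes to CollectionChanged
        // and updates its items in response.
        articles.CollectionChanged += (s, e) => LastAction = e.Action;
        articles.Add("Headline");
    }
}
```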
Binding target
In the two examples below, the Button.Content property is the binding target, and its value is set to a markup extension that declares the binding object. First {x:Bind} is shown, and then {Binding}. Declaring bindings in markup is the common case (it's convenient, readable, and toolable). But you can avoid markup and imperatively (programmatically) create an instance of the Binding class instead if you need to.
<Button Content="{x:Bind ...}" ... />
<Button Content="{Binding ...}" ... />
If you're using C++/WinRT or Visual C++ component extensions (C++/CX), then you'll need to add the BindableAttribute attribute to any runtime class that you want to use the {Binding} markup extension with.
Binding object declared using {x:Bind}
There's one step we need to do before we author our {x:Bind} markup. We need to expose our binding source class from the class that represents our page of markup. We do that by adding a property (of type HostViewModel in this case) to our MainPage page class.
namespace DataBindingInDepth
{
    public sealed partial class MainPage : Page
    {
        public MainPage()
        {
            this.InitializeComponent();
            this.ViewModel = new HostViewModel();
        }

        public HostViewModel ViewModel { get; set; }
    }
}
// MainPage.idl
import "HostViewModel.idl";

namespace DataBindingInDepth
{
    runtimeclass MainPage : Windows.UI.Xaml.Controls.Page
    {
        MainPage();
        HostViewModel ViewModel{ get; };
    }
}

// MainPage.h
// Include a header, and add this field:
...
#include "HostViewModel.h"
...
DataBindingInDepth::HostViewModel ViewModel();

private:
    DataBindingInDepth::HostViewModel m_viewModel{ nullptr };
...

// MainPage.cpp
// Implement like this. Be sure to create the view model instance in the
// constructor; otherwise ViewModel() returns nullptr.
...
MainPage::MainPage()
{
    InitializeComponent();
    m_viewModel = winrt::make<DataBindingInDepth::implementation::HostViewModel>();
}

DataBindingInDepth::HostViewModel MainPage::ViewModel()
{
    return m_viewModel;
}
...
That done, we can now take a closer look at the markup that declares the binding object. The example below uses the same Button.Content binding target we used in the "Binding target" section earlier, and shows it being bound to the HostViewModel.NextButtonText property.
<!-- MainPage.xaml -->
<Page x:Class="DataBindingInDepth.MainPage" ... >
    <Button Content="{x:Bind Path=ViewModel.NextButtonText, Mode=OneWay}" ... />
</Page>
Notice the value that we specify for Path. This value is interpreted in the context of the page itself, and in this case the path begins by referencing the ViewModel property that we just added to the MainPage page. That property returns a HostViewModel instance, and so we can dot into that object to access the HostViewModel.NextButtonText property. And we specify Mode, to override the {x:Bind} default of one-time. For other settings, see {x:Bind} markup extension.
To illustrate that the HostViewModel.NextButtonText property is indeed observable, add a Click event handler to the button, and update the value of HostViewModel.NextButtonText. Build, run, and click the button to see the value of the button's Content update.
// MainPage.xaml.cs
private void Button_Click(object sender, RoutedEventArgs e)
{
    this.ViewModel.NextButtonText = "Updated Next button text";
}
// MainPage.cpp
void MainPage::ClickHandler(IInspectable const&, RoutedEventArgs const&)
{
    ViewModel().NextButtonText(L"Updated Next button text");
}
Note
Changes to TextBox.Text are sent to a two-way bound source when the TextBox loses focus, and not after every user keystroke.
DataTemplate and x:DataType
Inside a DataTemplate (whether used as an item template, a content template, or a header template), the value of Path is not interpreted in the context of the page, but in the context of the data object being templated. When using {x:Bind} in a data template, so that its bindings can be validated (and efficient code generated for them) at compile-time, the DataTemplate needs to declare the type of its data object using x:DataType. The example given below could be used as the ItemTemplate of an items control bound to a collection of SampleDataGroup objects.
<DataTemplate x:DataType="data:SampleDataGroup">
    <StackPanel Orientation="Vertical" Height="50">
        <TextBlock Text="{x:Bind Title}"/>
        <TextBlock Text="{x:Bind Description}"/>
    </StackPanel>
</DataTemplate>
Weakly-typed objects in your Path
Consider for example that you have a type named SampleDataGroup, which implements a string property named Title. And you have a property MainPage.SampleDataGroupAsObject, which is of type object, but which actually returns an instance of SampleDataGroup. The binding
<TextBlock Text="{x:Bind SampleDataGroupAsObject.Title}"/> will result in a compile error because the Title property is not found on the type object. The remedy for this is to add a cast to your Path syntax like this:
<TextBlock Text="{x:Bind ((data:SampleDataGroup)SampleDataGroupAsObject).Title}"/>. Here's another example where Element is declared as object but is actually a TextBlock:
<TextBlock Text="{x:Bind Element.Text}"/>. And a cast remedies the issue:
<TextBlock Text="{x:Bind ((TextBlock)Element).Text}"/>.
If your data loads asynchronously
Code to support {x:Bind} is generated at compile-time in the partial classes for your pages. These files can be found in your
obj folder, with names like (for C#)
<view name>.g.cs. The generated code includes a handler for your page's Loading event, and that handler calls the Initialize method on a generated class that represents your page's bindings. Initialize in turn calls Update to begin moving data between the binding source and the target. Loading is raised just before the first measure pass of the page or user control. So if your data is loaded asynchronously, it may not be ready by the time Initialize is called. So, after you've loaded data, you can force one-time bindings to be initialized by calling
this.Bindings.Update();. If you only need one-time bindings for asynchronously-loaded data then it's much cheaper to initialize them this way than it is to have one-way bindings and to listen for changes. If your data does not undergo fine-grained changes, and if it's likely to be updated as part of a specific action, then you can make your bindings one-time, and force a manual update at any time with a call to Update.
Note
{x:Bind} is not suited to late-bound scenarios, such as navigating the dictionary structure of a JSON object, nor duck typing. "Duck typing" is a weak form of typing based on lexical matches on property names (as in, "if it walks, swims, and quacks like a duck, then it's a duck"). With duck typing, a binding to the Age property would be equally satisfied with a Person or a Wine object (assuming that those types each had an Age property). For these scenarios, use the {Binding} markup extension.
Binding object declared using {Binding}
If you're using C++/WinRT or Visual C++ component extensions (C++/CX) then, to use the {Binding} markup extension, you'll need to add the BindableAttribute attribute to any runtime class that you want to bind to. To use {x:Bind}, you don't need that attribute.
// HostViewModel.idl
// Add this attribute:
[Windows.UI.Xaml.Data.Bindable]
runtimeclass HostViewModel : Windows.UI.Xaml.Data.INotifyPropertyChanged
...

{Binding} assumes, by default, that you're binding to the DataContext of your markup page. So we'll set the DataContext of our page to be an instance of our binding source class (of type HostViewModel in this case). The example below shows the markup that declares the binding object. We use the same Button.Content binding target we used in the "Binding target" section earlier, and we bind to the HostViewModel.NextButtonText property.
<Page
    x:Class="DataBindingInDepth.MainPage"
    xmlns:local="using:DataBindingInDepth" ... >
    <Page.DataContext>
        <local:HostViewModel x:Name="viewModelInDataContext"/>
    </Page.DataContext>
    ...
    <Button Content="{Binding Path=NextButtonText}" ... />
</Page>
// MainPage.xaml.cs
private void Button_Click(object sender, RoutedEventArgs e)
{
    this.viewModelInDataContext.NextButtonText = "Updated Next button text";
}
// MainPage.cpp
void MainPage::ClickHandler(IInspectable const&, RoutedEventArgs const&)
{
    viewModelInDataContext().NextButtonText(L"Updated Next button text");
}
Notice the value that we specify for Path. This value is interpreted in the context of the page's DataContext, which in this example is set to an instance of HostViewModel. The path references the HostViewModel.NextButtonText property. We can omit Mode, because the {Binding} default of one-way works here.
The default value of DataContext for a UI element is the inherited value of its parent. You can of course override that default by setting DataContext explicitly, which is in turn inherited by children by default. Setting DataContext explicitly on an element is useful when you want to have multiple bindings that use the same source.
A binding object has a Source property, which defaults to the DataContext of the UI element on which the binding is declared. You can override this default by setting Source, RelativeSource, or ElementName explicitly on the binding (see {Binding} for details).
Inside a DataTemplate, the DataContext is automatically set to the data object being templated. The example given below could be used as the ItemTemplate of an items control bound to a collection of any type that has string properties named Title and Description.
<DataTemplate>
    <StackPanel Orientation="Vertical" Height="50">
        <TextBlock Text="{Binding Title}"/>
        <TextBlock Text="{Binding Description}"/>
    </StackPanel>
</DataTemplate>
Note
By default, changes to TextBox.Text are sent to a two-way bound source when the TextBox loses focus. To cause changes to be sent after every user keystroke, set UpdateSourceTrigger to PropertyChanged on the binding in markup. You can also completely take control of when changes are sent to the source by setting UpdateSourceTrigger to Explicit. You then handle events on the text box (typically TextBox.TextChanged), call GetBindingExpression on the target to get a BindingExpression object, and finally call BindingExpression.UpdateSource to programmatically update the data source.
The ElementName property is useful for element-to-element binding. The RelativeSource property has several uses, one of which is as a more powerful alternative to template binding inside a ControlTemplate. For other settings, see {Binding} markup extension and the Binding class.
What if the source and the target are not the same type?
If you want to control the visibility of a UI element based on the value of a boolean property, or if you want to render a UI element with a color that's a function of a numeric value's range or trend, or if you want to display a date and/or time value in a UI element property that expects a string, then you'll need to convert values from one type to another. There will be cases where the right solution is to expose another property of the right type from your binding source class, and keep the conversion logic encapsulated and testable there. But that approach is neither flexible nor scalable when you have large numbers, or large combinations, of source and target properties. In that case you have a couple of options:
- If using {x:Bind} then you can bind directly to a function to do that conversion
- Or you can specify a value converter which is an object designed to perform the conversion
Value Converters
Here's a value converter, suitable for a one-time or a one-way binding, that converts a DateTime value to a string value containing the month. The class implements IValueConverter.
public class DateToStringConverter : IValueConverter
{
    // Convert a DateTime value to a month string.
    // (The formatting logic here is a simple illustration.)
    public object Convert(object value, Type targetType, object parameter, string language)
    {
        DateTime date = (DateTime)value;
        return date.ToString("MMMM");
    }

    // ConvertBack is only needed for two-way bindings. This converter is
    // intended for one-time or one-way use, so it isn't implemented.
    public object ConvertBack(object value, Type targetType, object parameter, string language)
    {
        throw new NotImplementedException();
    }
}
// See the "Formatting or converting data values for display" section in the "Data binding overview" topic.
And here's how you consume that value converter in your binding object markup.
<UserControl.Resources>
    <local:DateToStringConverter x:Key="Converter1"/>
</UserControl.Resources>
...
<!-- "DatePropertyName" is a placeholder for a DateTime property on your binding source. -->
<TextBlock Text="{Binding DatePropertyName, Converter={StaticResource Converter1}}"/>
The binding engine calls the Convert and ConvertBack methods if the Converter parameter is defined for the binding. When data is passed from the source, the binding engine calls Convert and passes the returned data to the target. When data is passed from the target (for a two-way binding), the binding engine calls ConvertBack and passes the returned data to the source.
The converter also has optional parameters: ConverterLanguage, which allows specifying the language to be used in the conversion, and ConverterParameter, which allows passing a parameter for the conversion logic. For an example that uses a converter parameter, see IValueConverter.
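The conversion logic itself is ordinary C# and can be unit-tested on its own, apart from the IValueConverter plumbing. Here's a sketch of the DateTime-to-month conversion described above, with the language parameter applied via a culture (the method name, format string, and fallback culture are illustrative assumptions):

```csharp
using System;
using System.Globalization;

public static class DateConversion
{
    // Converts a DateTime to its full month name, formatted for the
    // given language tag (for example, "en-US").
    public static string ToMonthString(DateTime value, string language)
    {
        var culture = string.IsNullOrEmpty(language)
            ? CultureInfo.InvariantCulture
            : new CultureInfo(language);
        return value.ToString("MMMM", culture);
    }
}
```

A converter's Convert method can simply delegate to a helper like this, keeping the testable logic separate from the XAML-facing interface.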
Note
If there is an error in the conversion, do not throw an exception. Instead, return DependencyProperty.UnsetValue, which will stop the data transfer.
To display a default value to use whenever the binding source cannot be resolved, set the FallbackValue property on the binding object in markup. This is useful to handle conversion and formatting errors. It is also useful to bind to source properties that might not exist on all objects in a bound collection of heterogeneous types.
Note
Starting in Windows 10, version 1607, the XAML framework provides a built-in Boolean-to-Visibility converter. The converter maps true to the Visible enumeration value and false to Collapsed, so you can bind a Visibility property to a Boolean without creating a converter. To use the built-in converter, your app's minimum target SDK version must be 14393 or later. You can't use it when your app targets earlier versions of Windows 10. For more info about target versions, see Version adaptive code.
Function binding in {x:Bind}
{x:Bind} enables the final step in a binding path to be a function. This can be used to perform conversions, and to perform bindings that depend on more than one property. See Functions in x:Bind.
Resource dictionaries with {x:Bind}
The {x:Bind} markup extension depends on code generation, so it needs a code-behind file containing a constructor that calls InitializeComponent (to initialize the generated code). You re-use the resource dictionary by instantiating its type (so that InitializeComponent is called) instead of referencing its filename. Here's an example of what to do if you have an existing resource dictionary and you want to use {x:Bind} in it.
TemplatesResourceDictionary.xaml
<ResourceDictionary
    x:Class="ExampleNamespace.TemplatesResourceDictionary">
    <!-- The x:Key and x:DataType values below are illustrative; use your own template name and data object type. -->
    <DataTemplate x:Key="ItemTemplate" x:DataType="local:SampleItem">
        <Grid>
            <TextBlock Text="{x:Bind Name}"/>
        </Grid>
    </DataTemplate>
</ResourceDictionary>
TemplatesResourceDictionary.xaml.cs
using Windows.UI.Xaml.Data;

namespace ExampleNamespace
{
    public partial class TemplatesResourceDictionary
    {
        public TemplatesResourceDictionary()
        {
            InitializeComponent();
        }
    }
}
<Page x:Class="ExampleNamespace.MainPage"
    xmlns:examplenamespace="using:ExampleNamespace">
    <Page.Resources>
        <ResourceDictionary>
            ....
            <ResourceDictionary.MergedDictionaries>
                <examplenamespace:TemplatesResourceDictionary/>
            </ResourceDictionary.MergedDictionaries>
        </ResourceDictionary>
    </Page.Resources>
</Page>
Event binding and ICommand
{x:Bind} supports a feature called event binding. With this feature, you can specify the handler for an event using a binding, which is an additional option on top of handling events with a method on the code-behind file. Let's say you have a RootFrame property on your MainPage class.
public sealed partial class MainPage : Page
{
    ...
    public Frame RootFrame
    {
        get { return Window.Current.Content as Frame; }
    }
}
You can then bind a button's Click event to a method on the Frame object returned by the RootFrame property like this. Note that we also bind the button's IsEnabled property to another member of the same Frame.
<AppBarButton Icon="Forward" IsCompact="True" IsEnabled="{x:Bind RootFrame.CanGoForward, Mode=OneWay}" Click="{x:Bind RootFrame.GoForward}"/>
Overloaded methods cannot be used to handle an event with this technique. Also, if the method that handles the event has parameters then they must all be assignable from the types of all of the event's parameters, respectively. In this case, Frame.GoForward is not overloaded and it has no parameters (but it would still be valid even if it took two object parameters). Frame.GoBack is overloaded, though, so we can't use that method with this technique.
The event binding technique is similar to implementing and consuming commands (a command is a property that returns an object that implements the ICommand interface). Both {x:Bind} and {Binding} work with commands. So that you don't have to implement the command pattern multiple times, you can use the DelegateCommand helper class that you'll find in the QuizGame sample (in the "Common" folder).
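For reference, a minimal DelegateCommand in the spirit of the QuizGame sample's helper can look like the sketch below (this is an illustrative implementation, not the sample's exact code). It wraps plain delegates in the ICommand interface that {x:Bind} and {Binding} can bind Command properties to:

```csharp
using System;
using System.Windows.Input;

public class DelegateCommand : ICommand
{
    private readonly Action execute;
    private readonly Func<bool> canExecute;

    public event EventHandler CanExecuteChanged = delegate { };

    public DelegateCommand(Action execute, Func<bool> canExecute = null)
    {
        this.execute = execute;
        this.canExecute = canExecute;
    }

    // With no canExecute delegate supplied, the command is always enabled.
    public bool CanExecute(object parameter) =>
        this.canExecute == null || this.canExecute();

    public void Execute(object parameter) => this.execute();

    // Call this when the result of CanExecute may have changed,
    // so bound controls can re-query their enabled state.
    public void RaiseCanExecuteChanged() =>
        this.CanExecuteChanged(this, EventArgs.Empty);
}
```

A view model would expose a property such as `public ICommand GoForwardCommand { get; }` initialized with a DelegateCommand, and the button's Command property would bind to it.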
Binding to a collection of folders or files
You can use APIs in the Windows.Storage namespace to retrieve folder and file data, and bind an items control to the results. Remember to declare the picturesLibrary capability in your app package manifest, and confirm that there are pictures in your Pictures library folder.
Binding to data grouped by a key
If you take a flat collection of items (books, for example, represented by a BookSku class) and you group the items by using a common property as a key (the BookSku.AuthorName property, for example) then the result is called grouped data. When you group data, it is no longer a flat collection. Grouped data is a collection of group objects, where each group object has
- a key, and
- a collection of items whose property matches that key.
To take the books example again, the result of grouping the books by author name results in a collection of author name groups where each group has
- a key, which is an author name, and
- a collection of the BookSkus whose AuthorName property matches the group's key.
In general, to display a collection, you bind the ItemsSource of an items control (such as ListView or GridView) directly to a property that returns a collection. If that's a flat collection of items then you don't need to do anything special. But if it's a collection of group objects (as it is when binding to grouped data) then you need the services of an intermediary object called a CollectionViewSource which sits between the items control and the binding source. You bind the CollectionViewSource to the property that returns grouped data, and you bind the items control to the CollectionViewSource. An extra value-add of a CollectionViewSource is that it keeps track of the current item, so you can keep more than one items control in sync by binding them all to the same CollectionViewSource. You can also access the current item programmatically through the ICollectionView.CurrentItem property of the object returned by the CollectionViewSource.View property.
To activate the grouping facility of a CollectionViewSource, set IsSourceGrouped to true. Whether you also need to set the ItemsPath property depends on exactly how you author your group objects. There are two ways to author a group object: the "is-a-group" pattern, and the "has-a-group" pattern. In the "is-a-group" pattern, the group object derives from a collection type (for example, List<T>), so the group object actually is itself the group of items. With this pattern you do not need to set ItemsPath. In the "has-a-group" pattern, the group object has one or more properties of a collection type (such as List<T>), so the group "has a" group of items in the form of a property (or several groups of items in the form of several properties). With this pattern you need to set ItemsPath to the name of the property that contains the group of items.
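In plain C#, the two group-object shapes look like this. BookSku and the BookSkus property name are taken from the example in this section; the group class names are illustrative:

```csharp
using System.Collections.Generic;

public class BookSku
{
    public string Title { get; set; }
    public string AuthorName { get; set; }
}

// "Is-a-group": the group object derives from a collection type, so it
// is itself the group of items. No ItemsPath is needed on the
// CollectionViewSource.
public class AuthorIsAGroup : List<BookSku>
{
    public string Key { get; set; } // the author name
}

// "Has-a-group": the group object exposes its items as a property, so
// the CollectionViewSource needs ItemsPath="BookSkus" to find them.
public class AuthorHasAGroup
{
    public string Name { get; set; }
    public List<BookSku> BookSkus { get; } = new List<BookSku>();
}
```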
The example below illustrates the "has-a-group" pattern. The page class has a property named ViewModel, which returns an instance of our view model. The CollectionViewSource binds to the Authors property of the view model (Authors is the collection of group objects) and also specifies that it's the Author.BookSkus property that contains the grouped items. Finally, the GridView is bound to the CollectionViewSource, and has its group style defined so that it can render the items in groups.
<Page.Resources>
    <CollectionViewSource
        x:Name="AuthorHasACollectionOfBookSku"
        Source="{x:Bind ViewModel.Authors}"
        IsSourceGrouped="True"
        ItemsPath="BookSkus"/>
</Page.Resources>
...
<GridView ItemsSource="{x:Bind AuthorHasACollectionOfBookSku}" ...>
    <GridView.GroupStyle>
        <GroupStyle HeaderTemplate="{StaticResource AuthorGroupHeaderTemplateWide}" ... />
    </GridView.GroupStyle>
</GridView>
You can implement the "is-a-group" pattern in one of two ways. One way is to author your own group class. Derive the class from List<T> (where T is the type of the items). For example,
public class Author : List<BookSku>. The second way is to use a LINQ expression to dynamically create group objects (and a group class) from like property values of the BookSku items. This approach—maintaining only a flat list of items and grouping them together on the fly—is typical of an app that accesses data from a cloud service. You get the flexibility to group books by author or by genre (for example) without needing special group classes such as Author and Genre.
The example below illustrates the "is-a-group" pattern using LINQ. This time we group books by genre, and display the genre name in the group headers by binding the header template to the group's Key property.
using System.Linq;
...
private IOrderedEnumerable<IGrouping<string, BookSku>> genres;

public IOrderedEnumerable<IGrouping<string, BookSku>> Genres
{
    get
    {
        if (this.genres == null)
        {
            this.genres = from book in this.bookSkus
                          group book by book.genre into grp
                          orderby grp.Key
                          select grp;
        }
        return this.genres;
    }
}
Remember that when using {x:Bind} with data templates we need to indicate the type being bound to by setting an x:DataType value. If the type is generic then we can't express that in markup so we need to use {Binding} instead in the group style header template.
<Grid.Resources>
    <CollectionViewSource
        x:Name="GenreIsACollectionOfBookSku"
        Source="{x:Bind Genres}"
        IsSourceGrouped="True"/>
</Grid.Resources>
<GridView ItemsSource="{x:Bind GenreIsACollectionOfBookSku}">
    <GridView.ItemTemplate>
        <DataTemplate x:DataType="local:BookSku">
            <TextBlock Text="{x:Bind Title}"/>
        </DataTemplate>
    </GridView.ItemTemplate>
    <GridView.GroupStyle>
        <GroupStyle>
            <GroupStyle.HeaderTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Key}"/>
                </DataTemplate>
            </GroupStyle.HeaderTemplate>
        </GroupStyle>
    </GridView.GroupStyle>
</GridView>
A SemanticZoom control is a great way for your users to view and navigate grouped data. The Bookstore2 sample app illustrates how to use the SemanticZoom. In that app, you can view a list of books grouped by author (the zoomed-in view) or you can zoom out to see a jump list of authors (the zoomed-out view). The jump list affords much quicker navigation than scrolling through the list of books. The zoomed-in and zoomed-out views are actually ListView or GridView controls bound to the same CollectionViewSource.
When you bind to hierarchical data—such as subcategories within categories—you can choose to display the hierarchical levels in your UI with a series of items controls. A selection in one items control determines the contents of subsequent items controls. You can keep the lists synchronized by binding each list to its own CollectionViewSource and binding the CollectionViewSource instances together in a chain. This is called a master/details (or list/details) view. For more info, see How to bind to hierarchical data and create a master/details view.
Diagnosing and debugging data binding problems
Your binding markup contains the names of properties (and, for C#, sometimes fields and methods). So when you rename a property, you'll also need to change any binding that references it. Forgetting to do that leads to a typical example of a data binding bug, and your app either won't compile or won't run correctly.
The binding objects created by {x:Bind} and {Binding} are largely functionally equivalent. But {x:Bind} has type information for the binding source, and it generates source code at compile time. With {x:Bind} you get the same kind of problem detection that you get with the rest of your code. That includes compile-time validation of your binding expressions, and debugging by setting breakpoints in the source code generated as the partial class for your page. These classes can be found in the files in your obj folder, with names like (for C#) <view name>.g.cs. If you have a problem with a binding, turn on Break On Unhandled Exceptions in the Microsoft Visual Studio debugger. The debugger will break execution at the point of the failure, and you can then debug what has gone wrong. The code generated by {x:Bind} follows the same pattern for each part of the graph of binding source nodes, and you can use the info in the Call Stack window to help determine the sequence of calls that led up to the problem.
{Binding} does not have type information for the binding source. But when you run your app with the debugger attached, any binding errors appear in the Output window in Visual Studio.
Creating bindings in code
Note This section only applies to {Binding}, because you can't create {x:Bind} bindings in code. However, some of the same benefits of {x:Bind} can be achieved with DependencyObject.RegisterPropertyChangedCallback, which enables you to register for change notifications on any dependency property. The example below shows how to create a binding in code.
<TextBox x:Name="MyTextBox" Text="Text"/>
// C#
Binding binding = new Binding() { Path = new PropertyPath("Brush1") };
MyTextBox.SetBinding(TextBox.ForegroundProperty, binding);
' VB
Dim binding As New Binding() With {.Path = New PropertyPath("Brush1")}
MyTextBox.SetBinding(TextBox.ForegroundProperty, binding)
{x:Bind} and {Binding} feature comparison
10 October 2008 05:16 [Source: ICIS news]
By Hong Chou Hui
SINGAPORE (ICIS news)--Export orders at the biannual 104th China Import and Export Fair, a key barometer of global consumer confidence, are expected to take a beating in the wake of the global financial crisis, petrochemical producers and traders said on Friday.
The event, which is also known as the Canton Trade Fair, is China’s biggest and will take place from 15-19 October in Guangzhou, southern China. The last edition of the Canton Trade Fair in April chalked up about $2.28bn (€1.66bn) worth of overseas sales, a reduction of 24.4% from the event in October 2007.
A key sector of China's export market, the garment and clothing industry, is bracing itself for a poor showing at the coming trade fair.
“The global meltdown is surely going to reduce spending power in the US and Europe. Buyers will tighten their belts and they will be more prudent before purchasing,” said a northeast Asian producer of polyester fibres and yarns (POY) in Mandarin.
Echoing this sentiment was a trader of ethylene glycols in northeast Asia who said, “Consumers in the US and Europe would rather feed themselves than dress in the latest fashion because you can’t eat clothes.”
“Our company’s sales-to-output ratio is now 80% but this is pretty much as good as it gets because there isn’t going to be a lot of demand coming in next week from the US and Europe,” said an eastern China-based seller of POY in Mandarin.
She added that a lot of overseas buyers were choosing not to fly to China for the Canton Trade Fair to reduce their costs and were placing their orders via email or phone instead.
“Some of the buyers are placing their orders earlier to avoid a sudden upsurge in sentiment and prices after the event but at the rate feedstock prices are falling, they really have nothing to fear,” said the same northeast Asian POY maker.
One acrylonitrile butadiene styrene (ABS) trader in Hong Kong who is not planning to attend the event confirmed the expectations and said that orders from the fair should be poor this year due to the US/Europe credit crisis.
The global financial markets are still in the throes of a crisis of confidence, with the benchmark Dow Jones index falling by nearly 680 points overnight while Asian bourses started the day in a sea of red. Major stock indices around the world have taken severe beatings over the past month on the back of bad debt triggered by the sub-prime credit crisis in the US which began in October 2007.
Job losses in the US doubled month-on-month to 159,000 for September, according to statistics from the US Department of Labor’s website, making it the worst month for job losses since the financial maelstrom started.
To discuss issues facing the chemical industry go to ICIS connect
Clive Ong contributed to this.
Originally posted by khushhal yadav: Hi Ahmed Yehia It will give you compile time error.. as you can't have something like this Short s = 15; As s is a reference type of wrapper class Short while 15 is an integer(primitive data type) It's not allowed by compiler.. Regards..
Originally posted by Ahmed Yehia:
public class TestObj {
Short s = 15;
public static void main(String args[]) {
TestObj o1 = new TestObj();
TestObj o2 = new TestObj();
}
}
In the above program, should we assume 2 Short objects are created, or 1, since the value is less than 127? In other words, does nulling one object (eg o1) also make its associated Short eligible?
Originally posted by Ahmed Yehia: According to K&B in a similar question, chapter 3 Question #2: C is correct. Only one CardBoard object (c1) is eligible, but it has an associated Short wrapper object that is also eligible. could someone illustrate more on this issue [ June 26, 2007: Message edited by: Ahmed Yehia ]
Originally posted by Aks Rudra: I feel that the two objects eligible for GC are the two CardBoards C1 and C3.
The question seems to be interested in knowing about the CardBoards and not Shorts.
Originally posted by Manfred Klug: The question is, how many objects are eligible for garbage collection, and not how many CardBoards.
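For reference, Short s = 15; does compile on Java 5 and later: the compiler narrows the constant 15 to short and then boxes it via Short.valueOf(), which is required to cache values from -128 to 127. The following runnable check (reusing the thread's TestObj shape, with prints added) shows that both instances end up sharing one cached Short:

```java
public class TestObj {
    Short s = 15;  // compiles: constant 15 narrows to short, then boxes via Short.valueOf()

    public static void main(String[] args) {
        TestObj o1 = new TestObj();
        TestObj o2 = new TestObj();
        // Short.valueOf() caches -128..127, so both fields refer to the same instance:
        System.out.println(o1.s == o2.s);  // true
    }
}
```

So for values in the cache range, nulling o1 does not by itself make "its" Short eligible for collection: the cached instance is shared and strongly referenced by the cache.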
div() function in C++
This tutorial illustrates the working of the div() function in C++. We can use this function to find the quotient and remainder of a division. The div() function is defined in the cstdlib header file of C++. We will discuss more on this function here.
As the name suggests we use this function in our C++ program in division operation between two parameters. The syntax for this function is as follows:
div_t div(int a, int b);
ldiv_t div(long a, long b);
lldiv_t div(long long a, long long b);
Here a and b are the numerator and denominator respectively.
The return types for div() function are div_t, ldiv_t, and lldiv_t. These are structures and are defined as given below.
div_t:
struct div_t {
    int quot;
    int rem;
};
ldiv_t:
struct ldiv_t {
    long quot;
    long rem;
};
lldiv_t:
struct lldiv_t {
    long long quot;
    long long rem;
};
Here, quot is the quotient and rem is the remainder.
If the input parameters in div() function are a and b, then quot = a/b and rem = a%b. We can access these in the same way as we access an element of a structure in C++.
Here is the example program that illustrates the working of div() function.
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    div_t out = div(6, 4);
    cout << "quotient of 6/4 = " << out.quot << ".\n";
    cout << "remainder of 6/4 = " << out.rem << ".\n";
    return 0;
}
And the output of the program is:
quotient of 6/4 = 1.
remainder of 6/4 = 2.
Similarly, we can use the div() function for long and long long as well.
Thank you.
Also, read: Print only digits from a string in C++
Integrating J2ME devices into messaging.
Architectural requirements for messaging.
The Java-based messaging solution.
Figure 1. JMS clients connected to the JMS provider
JMS supports both one-to-one and broadcast message styles. One-to-one messaging exists between two communicating parties. The sender sends a message destined for one recipient. On the other hand, broadcast messaging is one-to-many, in which a message from a sender is destined for many recipients.
To coordinate the interaction with messaging clients, the provider uses administered objects. Messaging clients can access administered objects using the Java Naming and Directory Interface (JNDI) API (see Resources for more information). An administrator (one who administers the JMS provider) configures these objects. The messaging client then uses the objects to communicate with other clients through the provider. Let's look at an example to explain how this works.
The administrator configures at least one connection factory (a type of administered object) for the clients connected to the provider. The clients use the connection factory to create connections with the provider. Similarly, the administrator configures destination objects (another type of administered object). The sender clients use these objects to specify their message's target destination. There are two types of destination objects: queues used for one-to-one communication and topics used for broadcast communication. Therefore, a sender can send a message to a queue destination object, if the message is destined for one receiver. Similarly, a sender can send a message to a topic destination object, if it wants to send a message to several receiver clients.
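To make the distinction between the two destination types concrete, here is a minimal plain-Java sketch. This is an illustration only, not the JMS API: a queue delivers each message to exactly one receiver, while a topic broadcasts each message to every subscriber.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

// A topic broadcasts each published message to all subscribers.
class TopicDestination {
    private final List<Consumer<String>> subscribers = new ArrayList<>();
    void subscribe(Consumer<String> c) { subscribers.add(c); }
    void publish(String msg) { subscribers.forEach(s -> s.accept(msg)); }
}

// A queue hands each message to a single receiver, exactly once.
class QueueDestination {
    private final Queue<String> messages = new ArrayDeque<>();
    void send(String msg) { messages.add(msg); }
    String receive() { return messages.poll(); }
}

public class DestinationDemo {
    public static void main(String[] args) {
        QueueDestination orders = new QueueDestination();
        orders.send("order-1");
        System.out.println(orders.receive());  // order-1 (consumed once)
        System.out.println(orders.receive());  // null: already consumed

        TopicDestination news = new TopicDestination();
        news.subscribe(m -> System.out.println("reader1: " + m));
        news.subscribe(m -> System.out.println("reader2: " + m));
        news.publish("headline");              // both subscribers receive it
    }
}
```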
Figure 2 shows the following sequence of events that result in a message sent to a destination:
- The administrator creates a queue or a topic destination object on the provider.
- A sender client looks up the destination object of his choice.
- The sender writes a JMS message.
- The sender sends the message to the destination object.
Figure 2. The message-sending sequence
How does the receiver receive the messages sent to a particular queue or topic? First, the receiver makes a connection object using the connection factory on the provider. Next, the receiver creates a topic or a queue connection object (corresponding to the receiver's interest) using the connection created in the first step. Now the receiver can receive messages sent to the particular destination (queue or topic).
Logically, you can have only one receiver connected to queue connections, but any number of receivers connected to topic destinations. In Part 2 of this series, you will see how this works; for example, clients exchange messages with each other through a JMS provider.
J2ME devices as JMS clients
Now, try to determine if a J2ME device can act as a JMS client (a message producer or a consumer). To act as a JMS client, the J2ME device needs to implement the client-side set of JMS functions. This generally includes implementing the support of the following J2ME client features:
- The ability to use connection factories to establish connections with the provider. This means you need to implement the QueueConnectionFactory and TopicConnectionFactory classes in the J2ME device. This lets you create QueueConnection and TopicConnection objects using their respective connection factories.
- The ability to create and maintain Queue and Topic objects corresponding to the queues and topics maintained on the provider.
- The inclusion of J2ME client features such as message authoring, message sending and receiving, and session-related JMS client-side classes.
The J2ME device also must provide a JNDI client, which can hook into a server and ask it for JNDI lookup. The JNDI client enables the J2ME device to look up different administered objects used to communicate with the JMS provider. JMS clients look up connection factories and destination objects on the provider. After a client has a valid connection factory, it creates connections and sessions using the factory. Then, it writes JMS messages and uses the connection and session objects to send and receive messages.
Naturally, you can't expect a J2ME device with just 160 KB of memory to do all this. The JMS API was not designed with low-end, limited-resource devices in mind. Every machine running a JMS client might not be a powerful server, but it will at least be a desktop machine with reasonable processing capability.
The reduced processing capabilities and inherent mobility of a J2ME device make it difficult to run JMS client applications directly on it. However, you can still enable mobile devices to act as messaging clients. You can use a middleware entity (a proxy or a relay) to achieve messaging between mobile clients and enterprise systems.
The J2ME device won't actually need to do any heavy processing. It simply issues orders and commands to the relay, which do all the necessary processing. For example, the J2ME device will ask the relay to perform the JNDI lookup; the relay performs the lookup (for example, communication with the server) and sends the result back to the device. This configuration lets the low-resourced mobile device be part of a JMS network, without performing the heavy processing that normal JMS clients require.
You must design communication data formats to allow the J2ME device to issue commands to the relay and to accept the results. Although you can design all the required data formats from scratch, you don't need to here. You can use JXTA data formats for communication between a J2ME device and the relay. This way, instead of designing a proprietary set of data formats from scratch, you can use the framework of an existing technology to integrate J2ME devices into JMS networks. Moreover, you already have J2ME and relay-side open source implementations of JXTA communication available, which you can leverage for your purpose.
Shortly, I will start digging into the details of JXTA communication, but before I do, there's another small problem to discuss.
Network address-binding concepts
A J2ME device's reduced processing capability is not the only problem that makes it difficult to connect to a JMS network. Mobile devices are inherently non-static and might continue to change their physical network addresses. For example, a cell phone might be connected to different service providers depending on its current location. Therefore, you might not be able to allocate a fixed physical address to a J2ME device. To locate a physically static networked device, you can assign a permanent address to each one. This method, known as static or early network binding, allows devices to communicate with each other by using the target device's permanent network address.
The most common example of using early network bindings is the way Internet service providers (ISPs) provide connectivity to Internet users. An ISP has many IP addresses reserved for its users and simply assigns an IP address to each of its currently logged-on users. When a user sends a request (for example, the request for a Web site), its response is routed to the ISP, which in turn sends the response back to the requesting user.
But what happens if a user (for example, a cell phone user connected to the Internet) periodically changes its ISP or IP address? In this case, some responses destined for the requesting user might never reach him or her. For such users, you need some mechanism to locate the network address bindings dynamically. This is the concept of dynamic or late network address bindings. Dynamic address bindings are more complicated than static address bindings because the addresses change as the position and state of the target devices in the network changes.
JXTA provides stable and reliable connectivity to mobile devices that have dynamic network address bindings. It uses a layer of virtualization to cover up the network dynamism. Later in the article you'll see how JXTA accomplishes this.
To solve the problem of dynamic network connectivity, JXTA offers a special arrangement. The J2ME client always requests something from the server (the server never initiates contact). If the server needs to contact the client (for example, with the result of an earlier client command or order), it keeps its messages in a queue and waits for the client to contact or poll the server. When the client polls, the server sends queued messages to the client.
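The class names below are hypothetical, but the arrangement just described can be sketched in a few lines: the relay never contacts the client; it queues messages addressed to the client and drains the queue whenever the client polls.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Relay-side mailbox for one client: messages wait here until polled.
class RelayMailbox {
    private final Queue<String> pending = new ArrayDeque<>();

    void deliver(String msg) {   // relay receives a message destined for the client
        pending.add(msg);
    }

    List<String> poll() {        // client polls; relay drains the queue
        List<String> batch = new ArrayList<>(pending);
        pending.clear();
        return batch;
    }
}

public class PollDemo {
    public static void main(String[] args) {
        RelayMailbox box = new RelayMailbox();
        box.deliver("result-of-earlier-command");
        box.deliver("incoming-pipe-message");
        System.out.println(box.poll());  // both queued messages
        System.out.println(box.poll());  // []: queue was drained
    }
}
```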
JXTA already has an entity that can work as your middleware component for reducing the processing burden on the mobile devices as well as for providing dynamic address binding. This entity is called a JXTA relay; let's see how it works.
JXTA defines a set of protocols to enable an open framework for peer-to-peer computing. It provides a virtual layer over the network layer. This virtual layer works through the exchange of XML messages between the different JXTA network users.
The protocols defined by JXTA are independent of implementations. You can implement the protocols for any platform using any network topology and any type of network transport. This is why application developers can develop value-added JXTA applications by depending on the set of protocols. In fact, the idea of integrating a J2ME client into a JMS network using JXTA technology is an example of a value-added JXTA application.
The main objective for having a virtual layer is to hide physical addresses and low-level networking details from other entities in the network. Instead, all users and entities in the JXTA network have identifiers. The JXTA network can resolve identifiers to physical network addresses dynamically on the fly at run time. Therefore, JXTA network users are not identified by their network addresses.
All JXTA network users are peers to each other. Normally, different JXTA applications running across a JXTA network act as peers. Peers communicate with each other to perform different tasks (such as searching for new peers with common interests). Peer identifiers uniquely identify the peer on the JXTA network, and do not change from one session to the next. No matter which physical address (for example, an IP address) you use for communication over the JXTA network, your peer identifier remains the same.
Therefore, if you are trying to search for a friend over the JXTA network, you just need to know the peer identifier. No matter what IP address the friend is currently using, you will reach them if they are logged onto the JXTA network using their peer identifier. This is the main advantage of having this identifier-based virtualization.
JXTA also allows peers to join peer groups. Peers with common interests are logically grouped together. For example, if you are a JXTA peer with an interest in a particular sport, you might want to join other peers interested in the same sport. In a sports peer group, you might want to share sports news, exchange information with other peers, or do some collaborative work. Peer groups can also share the processing burden of computationally intensive tasks. And just like individual peers, peer groups are identified using identifiers. A peer can join any number of peer groups.
According to the JXTA set of protocols, every peer must join at least one peer group; otherwise, the peer cannot participate in any JXTA messaging. If a peer does not want to join any particular peer group, the peer joins a universal peer group called NetPeerGroup.
Both peers and peer groups work from the idea of advertisements. JXTA advertisements are XML documents that specify the details of peers, peer groups, and other JXTA resources. A JXTA advertisement contains useful information about a JXTA resource (including the name of the resource or service, its identifier, and description).
Peers specify their service details using these JXTA advertisements. For example, if you want to offer information regarding a particular topic, you would publish an appropriate advertisement over the JXTA network. Other peers search for your advertisements and discover you as a peer as well as the services you are hosting or offering. Naturally, advertisements are not perpetual. They have a lifetime, after which they expire.
Although I do not discuss the details of JXTA advertisements in this series, I do need to mention that all entities in a JXTA network know that other entities exist through their advertisements. Resources contains more details about the JXTA advertisements.
JXTA defines two special types of peers that play important roles in JXTA communication: rendezvous peers and router peers. Rendezvous peers provide a meeting point for peers. Peers can discover other peers by contacting rendezvous peers, who keep the advertisements cached with them. Therefore, you can send your advertisements to rendezvous peers and also search for other peers and peer groups (as well as the services they offer) at rendezvous peers.
Router peers store routing information to keep track of how to reach remote peers. Sometimes it's difficult to find a direct connection or network route from one peer to another; this is where router peers can help.
Apart from peers and peer groups, JXTA also defines another important resource known as a JXTA pipe. Pipe is a virtual communication channel used for communication between different peers. Pipes are also identified using identifiers, just like peers and peer groups. JXTA defines several types of pipes. I will use only two types of pipes: unicast and propagate.
Unicast pipes are used for communication between two peers (a sender and a receiver). A propagate pipe is used for communication between more than two peers (one sender and many recipients). Notice the resemblance between a JXTA unicast pipe and a JMS queue. Similarly, a JXTA propagate pipe is analogous to a JMS topic. You will see this parallel in Part 2 of this article, where I use JXTA for integrating J2ME clients into JMS networks.
Pipes are unidirectional, which means you can use a pipe either to send data from one end to the other or to receive data coming from the opposite end. When you want to receive incoming messages you first need to create a pipe. After you create a pipe, you open it for input and start listening for incoming messages from other peers. After you successfully open the pipe for input, the relay accepts messages and stores them until you're ready to retrieve them.
On the other hand, if you want to send a message to a peer over a pipe, you need to first search for your peer's pipe. After you find it, you can send the message. Assuming the peer is listening at the pipe, the peer will receive your message.
The JXTA relay can accept client commands and act upon the commands on the client's behalf. The relay acts as a junction between the JXTA network and peers (those that cannot directly communicate with the JXTA network). A J2ME client uses relays to share the XML authoring and processing burden.
JXTA has defined the data communication protocols that enable messaging between a relay and a client. It is worthwhile to note that a JXTA relay is not just meant to serve J2ME clients. A relay can serve any client that can communicate according to the specified protocols.
In any case, JXTA has provided a J2ME-based client-side implementation that implements all communication with the relay. It's called JXTA for J2ME, or JXME for short. Using JXME with a relay lets J2ME clients act as JXTA peers.
The JXTA relay receives commands from a J2ME client, performs what's necessary on the client's behalf, and represents the JXME client on the JXTA network. The JXME client sends commands and messages to the relay. The relay executes the command on the JXTA network and returns the results back to the JXME client. For example, Figure 3 shows the following sequence of events:
- A JXME client sends a request to a relay to create a pipe.
- The relay performs all the steps to create a JXTA pipe (for example, authoring the pipe advertisement and sending the advertisement to known rendezvous peers).
- The relay returns the newly created pipe's identifier to the JXME client.
Figure 3. JXTA relay -- JXME client interactions
Messaging between a relay and a JXME client
The JXME client sends several types of messages to the relay to perform different tasks. JXTA has defined a special message format that a mobile device uses to communicate with the relay. Before looking at the actual communication, I need to discuss the message format. Figure 4 shows the graphical representation of a typical message.
Figure 4. Representation of a JXME message
As you can see, a JXME message contains basically two things: a message header and a number of elements that give the message its actual meaning. Figure 4 graphically shows an element's different fields as well as the arrangement of a JXME message's different elements.
Both the header and each individual element of a JXME message consist of various fields, which I'll discuss. First, take a look at the JXME message's textual representation. Listing 1 shows the JXME message that Figure 4 graphically represents.
Listing 1. Textual representation of a typical JXME message
In actual practice, you would place all the data in a JXME message's single line, but Listing 1 shows the header and each individual element on a separate line for better readability.
You can compare Figure 4 and Listing 1 to find the values of the figure's different fields. The first line in Listing 1 is the message header. The different fields in the header are explained below:
- All JXME messages start with the string jxmg. This is shown as the message signature field in Figure 4's header.
- The 0 after jxmg specifies the JXME message's version number.
- The header's third field is a two-byte value, which is 01 in Listing 1. This value specifies the number of namespaces that this JXME message uses. In Listing 1, I have used only one namespace (which I will define in the header's fourth field). The message's different elements reside inside some namespace, and each element can belong to a different namespace. If you are familiar with the concept of namespaces in XML, you can see that JXME messages use the same concept: each element in a JXME message belongs to a certain namespace; if it does not, you can assume the element belongs to a so-called empty or default namespace.
- The fourth field in the header provides the declaration of namespaces used by this message. The namespace declaration field starts with a two-byte integer value (05 in Listing 1), which specifies the number of characters in the name of the first namespace (proxy). The actual name of the namespace (proxy) follows the two-byte integer value. Because you can have any number of namespaces in a JXME message, the fourth field can contain any number of namespace declarations. Each namespace declaration is a pair of a two-byte length value and a string that specifies the namespace name. Notice that the order of namespaces in the header is important. The first namespace (proxy in Listing 1) is automatically assigned the identifier 2. All subsequent entries get an identifier after an increment of 1 to the previous identifier. For example, if I create another namespace, say ourNamespace, it would have the identifier 3. The identifiers 0 and 1 are preassigned to the empty namespace and the jxta namespace respectively. I use namespace identifiers (instead of actual names) in the JXME message's individual elements. Mapping the namespace names to integer identifiers reduces the message size, which optimizes the use of bandwidth in wireless communication.
- The last two bytes 07 specify the number of elements in the message. As you can see in Listing 1, I have seven elements after the message header.
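As a sanity check of the header layout, the following sketch serializes a header like Listing 1's (one proxy namespace, seven elements). The one-byte width of the version field, and the use of one byte per character for namespace names, are assumptions; the article does not state them explicitly.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class JxmeHeader {
    // Builds a JXME message header as described above: the "jxmg" signature,
    // a version byte (width assumed), a two-byte namespace count, namespace
    // declarations (two-byte length + name), and a two-byte element count.
    static byte[] header(int version, String[] namespaces, int elementCount)
            throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeBytes("jxmg");          // message signature
        out.writeByte(version);          // version number (assumed one byte)
        out.writeShort(namespaces.length);
        for (String ns : namespaces) {
            out.writeShort(ns.length()); // length of the namespace name
            out.writeBytes(ns);          // the name itself
        }
        out.writeShort(elementCount);    // number of elements that follow
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] h = header(0, new String[] {"proxy"}, 7);
        // 4 ("jxmg") + 1 (version) + 2 (count) + 2 + 5 ("proxy") + 2 = 16
        System.out.println(h.length);  // 16
    }
}
```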
I've noted that Listing 1 includes seven elements. Each element inside a message conveys certain information. The different elements of a message together provide meaningful information to the message's recipient. For example, this information could be a request to create a peer group or to join one. The following points explain an individual element's different fields:
- Each element starts with a signature jxel, which indicates the start of a message element.
- The identifier of the element's namespace follows the element signature. In the message's first five elements, this field's value is 2, which specifies the proxy namespace (recall the header's fourth field, where I defined 2 as the identifier for the proxy namespace). The namespace identifier 1 in Listing 1's last two elements specifies the predefined namespace jxta.
- Next to the namespace identifier is a one-byte flag value. This byte's bits act as flags and specify the optional fields (such as MIME type and encoding) in the element. A value of 0 for this field means that I will use default values for the optional fields (the default MIME type is application/octet-stream and the default content encoding is base64).
- The element's name follows the flag byte. The name actually consists of two subfields; first is a two-byte value (07 in the first element) that specifies the number of characters in the element name. The length value is followed by the element name itself (request in the first element).
- The element name is followed by the element's actual contents. The contents field consists of two subfields: a four-byte value that specifies the number of characters in the contents field (0006 in the first element) and the content itself (create in the first element).
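Similarly, a single element can be serialized as just described. The one-byte width of the namespace-identifier field is an assumption; the other widths (one-byte flags, two-byte name length, four-byte content length) come from the field descriptions above.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class JxmeElement {
    // Writes one JXME element: the "jxel" signature, a namespace identifier
    // (width assumed to be one byte), a flag byte, the name (two-byte length
    // + chars), and the content (four-byte length + bytes).
    static byte[] element(int namespaceId, String name, String content)
            throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeBytes("jxel");           // element signature
        out.writeByte(namespaceId);       // e.g. 2 for the proxy namespace
        out.writeByte(0);                 // flags: 0 = default MIME type/encoding
        out.writeShort(name.length());
        out.writeBytes(name);
        out.writeInt(content.length());
        out.writeBytes(content);
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] e = element(2, "request", "create");
        // 4 ("jxel") + 1 + 1 + 2 + 7 ("request") + 4 + 6 ("create") = 25
        System.out.println(e.length);  // 25
    }
}
```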
Now, take a look at the various messages a mobile client sends to the relay to perform different operations. When a mobile client connects to a JXTA network, it first requests a peer identifier. The relay assigns a peer identifier in response to the request, and the client uses the peer identifier in subsequent communication. After the mobile client has the peer identifier, it requests a connection with the relay, and the relay confirms a connection in a response message. Figure 5 illustrates this sequence.
Figure 5. Request-response sequence for a JXME client entering the network for the first time
After a connection is established, a JXME peer can issue other requests to the relay to carry out its operations. Now, look at the messages in detail.
Requesting a peer identifier
As shown in Listing 2, the mobile client sends the request for a peer identifier as a URL in an HTTP message using the HTTP GET method. It stores the peer identifier that the relay returns for subsequent use.
Listing 2. The request for a peer identifier
The URL in the HTTP request is shown below:
Examining this URL, you find the following components:
- unknown-unknown -- This specifies that the requestor doesn't have a peer identifier.
- 0 -- This is the default time-out value in milliseconds. A value of 0 means that the client waits until it receives a response.
- -1 -- This is how long (in milliseconds) the JXME peer stays connected to the relay after the response to a peer identifier request has arrived. The value -1 means that the client disconnects immediately after receiving the response. This value is always -1 for JXME clients.
- http://172.16.0.37:2481 -- This is the IP address and the port number where the relay is listening.
- EndpointService:jxta-NetGroup -- This part is an identifier for the relay. This value is always the same in all JXME requests for peer identifiers.
- uuid-DEADBEEFDEAFBABAFEEDBABE0000000F05 -- This is a class identifier for the relay service. JXTA has defined class identifiers for different types of JXTA services.
- pid -- The last part specifies the actual command string. Here pid means that the client is asking for a new peer identifier.
Listing 3 shows a typical response from the relay to the above request. The relay sends the response as a JXTA message packed inside the HTTP response body.
Listing 3. A peer identifier response
The response contains four elements, two each in the
relay and
jxta namespaces:
- The first element specifies that it is a response to a peer identifier request.
- The second element specifies the peer identifier assigned to the peer.
- The third element tells the endpoint destination address. In this case, the destination is the JXME client.
- The last element specifies the endpoint source address. In this case, the source is the relay.
After the client requests a peer identifier, it requests a connection
with the JXTA relay. Listing 4 shows a typical connection request, which
is sent as an HTTP request message using the HTTP
GET method.
Listing 4. Connection establishment request
Listing 4 shows that you can easily extract the URL from the request.
This request is very much like the peer identifier request, with a few important things to note:
- The URL starts with a valid peer identifier (the same identifier that you received in response to the peer identifier request).
- You use a different command string, connect, to request a connection. In the request for a peer identifier the command string was pid.
- The value after the command string, 3600000, tells the time in milliseconds that the relay will store the incoming messages for the JXME client. If the client does not retrieve its messages within the specified time, the relay discards them.
In response to this message, a relay might send back information shown in Listing 5, which tells you that the connection has been successfully established.
Listing 5. A connection response
After the client connects with the relay, you can perform the following operations:
- Joining a peer group
- Searching for resources
- Creating a pipe
- Sending messages over a pipe
- Opening a pipe for input
To join a peer group, the client sends a join request message to the relay. The request and response sequence is depicted in Figure 6.
Figure 6. A request for joining a peer group and its response
Listing 6 shows a typical
join request.
Listing 6. The request for joining a peer group
This request message is composed of six elements. The first four
elements belong to the
proxy namespace and the
last two belong to the
jxta namespace. The
message's different elements are explained below:
- The request element specifies the request's purpose (joining a peer group).
- The id element holds the peer group identifier.
- The arg element specifies a password required to join the peer group. This helps to authenticate a mobile peer on the peer group. If no password is supplied (for example, when it's not required), an empty string is sent as this element's value.
- The requestId element holds an integer associated with the request and is used to match responses from the relay. Notice that the relay's responses come in any order; therefore, you need message identifiers to match requests with responses.
- The EndpointDestinationAddress and EndpointSourceAddress elements, respectively, specify the addresses of the relay and the requesting client.
Listing 7 shows the message that the relay sends back in response to the request to join a peer group.
Listing 7. The response to a join peer group request
The response element in the message shown in Listing 7 indicates that the request was successfully met and the requesting client has joined the new group, which means it can proceed to operate in that group.
You use the search request messages to search for pipes, peers, peer groups, and other JXTA resources. Figure 7 represents the search request and its responses from the relay. The number of messages a search request might return depends on the number of resources that successfully match the search criteria. This is represented by a dashed response line in Figure 7.
Figure 7. A request for searching a resource and its response
Now, take a look at a sample search message that issues a search request for a peer group.
Listing 8. The request for searching a peer group
I'll explain the elements in this message one by one:
- The request element specifies that it is a search request.
- The type element specifies the type of resource being searched for. The contents of the type element in Listing 8 include GROUP, which means you are searching for a peer group. Other values of this field might be PIPE (if you are searching for a pipe) and PEER (if you are searching for a peer).
- The attr element specifies the attribute of the searched resource. For example, if you are searching for a particular peer group by name, the content of the attr element is name, as shown in the listing above.
- The value element specifies the search query. For example, if you are searching for a peer group with a particular name (for example, myPeerGroup), you specify the name as the contents of the value element. You can also use the wildcard character (asterisk, *) in the query string to create pattern-matching queries.
- Different JXTA peers are expected to send messages in response to the search query. The threshold element is used to limit the number of responses that a single peer can send in response to the search query.
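To illustrate the wildcard behavior, here is a hedged sketch of how a "*" pattern in the value element might be matched against resource names. The translation to a regular expression is my own assumption for illustration; the relay's exact matching rules aren't spelled out here:

```java
public class WildcardQuery {

    // Translates a query containing '*' wildcards into a regular expression
    // and matches it against a resource name. Everything except '*' is
    // treated literally. Illustrative only.
    public static boolean matches(String query, String resourceName) {
        // Quote the query literally, then turn each '*' into ".*".
        String regex = java.util.regex.Pattern.quote(query)
                                              .replace("*", "\\E.*\\Q");
        return resourceName.matches(regex);
    }
}
```

With this sketch, the query "myPeer*" matches the group name "myPeerGroup", while an exact query must match the full name.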
Each response of the search message comes as a separate message. Listing 9 shows a typical response message to the search request.
Listing 9. Response for group search
This response contains a single search result. The
type element specifies the type of resource searched (peer group), the
name element specifies the resource name, and the
id attribute specifies the peer group identifier. The requesting client can now directly use the
identifier for further communication with the peer group. For example, the
client can now use the peer group identifier to join the group.
As previously mentioned, when you want to receive incoming messages over the JXTA network you first need to create a pipe. Other peers use your pipe to send messages to you. Figure 8 depicts the pipe creation request and its response from the relay.
Figure 8. A request for creating a pipe and its response
Listing 10 shows what a request message for creating a pipe looks like.
Listing 10. Pipe creation request
The request is composed of seven elements:
- The first element specifies that it is a request to create a resource.
- The second element tells the name of that resource.
- The requestId element is an integer associated with the request, which the request author uses to match responses from the relay.
- The type element specifies the resource's type (pipe, in this case). If you were creating a peer group, the value of this field would have been GROUP.
- The arg element specifies the pipe's type. For JXME clients, this is either Unicast or JxtaPropagate.
- The sixth and seventh elements specify the destination and source endpoint addresses respectively.
Listing 11 shows the message from the relay confirming a new pipe was created.
Listing 11. Pipe creation response
Most of the response message elements are similar to their counterparts in the request message. In fact, there are only two main differences:
- The first element tells you that this is a response message and its content indicates that the request was completed successfully.
- The id element gives you the identifier of the newly created pipe.
The peer group creation request messages are very similar to the pipe creation request messages. The only difference is in the
type element. If you want to create a new peer group, the content of the
type attribute is
GROUP. Also, you don't need the
arg element to create a peer group request.
Now, take a look at how to open a pipe for input (called listening in JXME terminology) in order to receive messages from other peers. To open a pipe for input, you need to send a listen request message to the relay. The relay starts listening for your messages, which you can retrieve later.
Figure 9 shows the request/response graphical representation.
Figure 9. The request and response sequence for opening a pipe for input
Listing 12 shows a typical listen request message.
Listing 12. Input pipe open request
The
request element in Listing 12 specifies
that the request is to listen on a certain pipe. The
id element specifies the identifier of the pipe on which the peer wants to listen for messages.
Listing 13 shows the relay's response to this message. It says that the request was successful, which means that the relay listens for messages sent on this pipe and stores them until retrieved.
Listing 13. The relay's response to a listen request
Sending a message over a pipe
Before you can send a message to a pipe, you first have to search for that pipe. A pipe search message is very similar to the peer group search message, so I won't go through it again.
Figure 10 shows that sending a message over a pipe is a simple request-response procedure.
Figure 10. The request and response for sending a message over a pipe
Listing 14 shows a message addressed to a specific pipe.
Listing 14. The request for sending a message
I have already explained the first three elements (request, requestId, and id). The sender and message elements belong to the empty namespace (the namespace identifier for these elements is 0). These two elements are application-specific, which means you can invent your own elements to go with the message being sent to a pipe.
You can also define your own namespace and include an application-specific element that belongs to your namespace in the message. For example, look at the element named fileReference, whose namespace identifier field has a value of 3. The message's header defines the namespace as records.
Listing 15 shows the relay's response to this message. The
response element in Listing 15 says that the request
was successful, which means that the relay has accepted the message for
delivery. Now the relay must dispatch the message to its destination.
Listing 15. The response for a message send request
Retrieving the response messages
JXME communication with the relay is asynchronous in nature, which means the relay does not send the response to a JXME client's request or message immediately. Other peers respond at their convenience and the relay stores incoming messages for the JXME client. The JXME client contacts the relay (a mechanism called polling) to receive its incoming messages.
This means that the JXME client always contacts the relay; the relay never initiates a connection. This is especially important because a J2ME device can only act like an HTTP client (by sending HTTP requests) and cannot act like an HTTP server (by listening for requests coming from clients).
JXME does not define any special request message that explicitly asks
the relay to send one or more response messages it has for the JXME
client. The JXME client continues to send requests (for example, a peer
group join request, a pipe creation request, or a search query) or
other outbound messages to the relay. The relay uses these requests as opportunities to send any incoming message to the JXME client in
response to a request. The JXME client uses the
requestId element in the response message to find out which request this response corresponds to.
What happens if the JXME client has no outbound message to send to the relay, but still wants to check for incoming messages? In this scenario, the client can send a simple HTTP request. The HTTP request doesn't contain any JXTA message or cause any processing at the relay's end. The relay simply sends an incoming message to the client in response to the request.
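The polling arrangement can be pictured as a per-client mailbox on the relay, drained one message at a time by client-initiated requests. The following stand-in is a simplified sketch of that idea, not the relay's actual implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class RelayMailbox {

    // Messages arriving from the JXTA network for one JXME client are
    // stored here until the client polls for them.
    private final Queue<String> stored = new ArrayDeque<>();

    // Called when another peer sends a message addressed to this client.
    public void deliver(String message) {
        stored.add(message);
    }

    // Called on each client-initiated HTTP request (with or without an
    // outbound JXTA message in its body). Returns null when nothing is
    // waiting, mirroring the behavior described for poll().
    public String poll() {
        return stored.poll();
    }
}
```

Note that only poll() ever runs on a client request: the relay never initiates contact, just as described above.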
In the next section, I'll go through the programming concepts required to implement JXTA-enabled applications.
I've discussed messaging between a JXTA relay and a mobile client. Now, it's time to demonstrate how to programmatically implement this messaging into a J2ME client. I don't need to implement the low-level details of JXME messaging because it is already available from the JXTA project Web site (see Resources for a direct link to the JXTA downloads Web site).
The JXME implementation consists of three main classes:
- The Element class
- The Message class
- The PeerNetwork class
The
Element class represents a single
element of the JXME message. You can use the
Element class to author individual elements of a JXME message. The JXME implementation uses the
Element class to author JXME messages. On the other hand, you use this class to author the elements of your own (customized) JXME messages.
When you want to author a JXME message's element, you instantiate an
Element object. The
Element constructor takes four parameters, as shown below.
The first parameter (
"message") specifies
the element name. The second parameter (
"hello!".getBytes()) contains the byte array, which represents the element's contents. The third parameter (
"peerNamespace") carries the namespace string (an empty string means the default or empty namespace). The last parameter (
null) specifies the MIME type of the data (
null means the default MIME type, which is
application/octet-stream).
The
Element class provides getter methods to
extract information from an
Element object.
There are four getter methods. The
getName()
method returns the element name. The
getData() method returns the element's contents. The
getNamespace() method returns the element namespace (if any, otherwise an empty string). The
getMimeType() method returns the element's MIME type.
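The constructor and the four getters can be summarized with a minimal stand-in class. This is not the JXME source; it simply mirrors the API behavior described above, including the defaulting of the namespace and MIME type:

```java
public class SimpleElement {

    private final String name;
    private final byte[] data;
    private final String namespace;
    private final String mimeType;

    // Mirrors the four-parameter Element constructor described above.
    public SimpleElement(String name, byte[] data,
                         String namespace, String mimeType) {
        this.name = name;
        this.data = data;
        // A null or empty namespace means the default (empty) namespace.
        this.namespace = (namespace == null) ? "" : namespace;
        // null selects the default MIME type, application/octet-stream.
        this.mimeType = (mimeType == null) ? "application/octet-stream"
                                           : mimeType;
    }

    public String getName()      { return name; }
    public byte[] getData()      { return data; }
    public String getNamespace() { return namespace; }
    public String getMimeType()  { return mimeType; }
}
```

Constructed with the example values from the text -- "message", "hello!".getBytes(), "peerNamespace", and null -- the getters return the name, content, namespace, and the default MIME type.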
The
Message class represents a complete JXME
message. To create a JXME message, I first create
Element objects corresponding to the message's total individual elements. I can then put the
Element objects in an array and pass the array to the
Message class's constructor.
The
Message class contains three getter
methods that help extract individual
Element
objects in the JXME message:
- The getElementCount() method returns an integer that represents the number of elements in the Message object.
- The getElement() method takes an integer, which represents an index. The method returns the Element object at that specific index.
- The getSize() method returns an integer. The integer value represents the size of the JXME message in bytes.
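These three getters can be mirrored with a small stand-in (again, not the JXME source; the inner Part type and the decision to count only content bytes in getSize() are simplifying assumptions of mine):

```java
public class SimpleMessage {

    // Minimal element representation for this sketch: a name plus content bytes.
    public static class Part {
        final String name;
        final byte[] data;
        public Part(String name, byte[] data) {
            this.name = name;
            this.data = data;
        }
    }

    private final Part[] elements;

    public SimpleMessage(Part[] elements) {
        this.elements = elements;
    }

    // Number of elements in the message.
    public int getElementCount() {
        return elements.length;
    }

    // Element at a zero-based index.
    public Part getElement(int index) {
        return elements[index];
    }

    // Size in bytes; this sketch counts content bytes only and ignores
    // the per-element header overhead described earlier.
    public int getSize() {
        int total = 0;
        for (Part p : elements) total += p.data.length;
        return total;
    }
}
```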
The
PeerNetwork class contains the methods
to allow JXME communication with the relay. The
PeerNetwork class is like a communication module that
internally uses different
Message and
Element objects and handles all communication with the relay.
Naturally, you would expect the PeerNetwork class to have methods that perform the various tasks described previously between the relay and the mobile client. The PeerNetwork class manages all the messaging and also handles other support tasks, such as maintaining the identity of the various messages exchanged between the relay and the client.
In short,
PeerNetwork is the most
important class that you can use as the JXME client-side implementation to
integrate a J2ME device into a JXTA network. In Part 2, I will use this
class to integrate a J2ME client into a JMS network using the JXTA
network. Following is a discussion on the various methods of the
PeerNetwork class.
The createInstance() method
You can instantiate the
PeerNetwork class by
using one of the two overloaded static factory methods called
createInstance(). The constructors of the
PeerNetwork class are private, so you cannot use them
for instantiation.
One of the two
createInstance() methods
takes only a single string parameter, which specifies the name you want for
the JXME peer. The
createInstance() method
internally performs the following tasks:
- It calls the PeerNetwork class's constructor, which takes two parameters: the peer name and the name of the peer group that you want to join. As there was just one parameter passed to the createInstance() method (which specifies the peer name), I don't have anything to specify for the peer group name. Therefore, the createInstance() method uses the default group name (NetPeerGroup). This means that if you don't want to join any peer group, you will automatically join the NetPeerGroup peer group.
- The constructor internally reduces the size of the group identifier (also called trimming). This helps in optimizing outgoing data traveling over the wireless network.
- Next, the createInstance() method instantiates an HttpMessenger object. The HttpMessenger class is part of the JXME implementation. It is a helper class, which other methods of the PeerNetwork class use to communicate with the relay. Apparently, the only purpose of introducing this class is to provide a layer of abstraction between the higher-level JXME classes (such as the PeerNetwork class) and lower-level J2ME/MIDP classes. For example, JXME comes with two different versions of the HttpMessenger class, one for CLDC and the other for CDC devices. This means that by providing the HttpMessenger class, JXME designers have made the higher-level JXME classes work on top of different flavors of J2ME.
The two-argument
PeerNetwork.createInstance() method is different from the one-argument method in only one way; it
takes the peer group identifier as the second argument. A call to the
two-argument
PeerNetwork.createInstance()
method results in joining the specified peer group (instead of the default
NetPeerGroup group in the case of the single
argument
PeerNetwork.createInstance() method).
The connect() method
After instantiating the
PeerNetwork object,
you call its
connect() method to
establish connection with the relay. You pass two parameters to the
connect() method, as shown in Listing 16:
Listing 16. PeerNetwork.connect()
The first parameter is a string that contains the relay URL. This is the IP address and port number where the relay is executing. You must know the address of the relay in order to connect to it.
The second argument is the peer identifier that wants to connect to the
relay. But, I don't know the peer identifier yet, so I want to request
the relay to issue me a peer identifier. That's why I have passed
null as the second parameter to the
connect() method call.
The
connect() method carries out the
following steps:
- The connect() method internally calls the connect() method of the HttpMessenger class and passes both arguments to it.
- The HttpMessenger.connect() method constructs the URL (shown in Listing 2) based on the information passed to it.
- Next, the HttpMessenger.connect() method authors the peer identifier request and sends the request to the relay.
- If the relay successfully returns a peer identifier, the HttpMessenger.connect() method sends the connection establishment request to the relay.
- After a successful connection, the PeerNetwork.connect() method returns the peer identifier in the form of a byte array. This peer identifier represents the JXME client in this communication session. The JXME client uses this identifier for subsequent communication with the relay as well as in any future sessions. Therefore, I need to save this identifier for subsequent use.
If you supply a peer identifier as the second parameter to the
PeerNetwork.connect() method, the method's internal
workings are slightly changed. Now the method does not need to issue a peer
identifier request, so it skips this part and directly issues a connection
establishment request.
Note that when you pass
null as the second
parameter value to the
PeerNetwork.connect()
method, you are assigned a new peer identifier. Next time, you
should use this peer identifier for connecting to the relay. Why? Because
if you keep on passing
null as the second
parameter value, you get a new identifier every time. This is like
creating a new mailbox every time you want to check your mail. You won't
be able to retrieve your incoming mail.
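The mailbox analogy can be captured in a pure-logic sketch. The relay stand-in below and the identifier format are my own illustrative assumptions, not JXME code:

```java
import java.util.UUID;

public class PeerIdReuse {

    // Stand-in for the relay side of PeerNetwork.connect(): a null peer
    // identifier means "assign me a fresh one"; an existing identifier is
    // accepted as-is, so the client keeps its "mailbox" on the relay.
    public static String connect(String peerId) {
        return (peerId == null) ? "uuid-" + UUID.randomUUID() : peerId;
    }
}
```

A client that persists the identifier returned by the first call and passes it to later calls keeps one identity; a client that keeps passing null gets a new identity -- and an empty mailbox -- every time.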
The create() method
You use the
create() method to
create a JXTA resource (for example, a pipe or a peer group). The method takes four
parameters:
- The type of resource to be created. If you want to create a pipe, you pass PeerNetwork.PIPE as the first parameter's value. Similarly, if you want to create a peer group, you pass PeerNetwork.GROUP.
- The name you want to give to the resource. For example, if you are creating a pipe that you want to name myPipe, you pass "myPipe" as the second parameter's value.
- A predefined identifier for the resource. You can pass null as the third parameter's value if you want to let the relay provide the identifier.
- An additional argument specifying the pipe's type. For example, if you are creating a unicast pipe, you pass PeerNetwork.UNICAST_PIPE as this field's value. If you are creating a propagate pipe, you pass PeerNetwork.PROPAGATE_PIPE as this field's value. Note that you pass null as this parameter's value if you are creating a peer group.
The
create() method authors a resource
creation message. It then hands the message over to a private helper method
named
sendMessage(), which adds the message
to the queue of outbound messages.
The method returns an integer that the application can use to match responses from the relay. This mechanism is needed because the responses from the relay might arrive in random order.
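Because responses may arrive in any order, the integer returned by create() is typically kept in a pending-request table until the matching response shows up. A hedged sketch of such a matcher (the class and method names are my own):

```java
import java.util.HashMap;
import java.util.Map;

public class PendingRequests {

    private final Map<Integer, String> pending = new HashMap<>();
    private int nextId = 1;

    // Record an outbound request and hand back the identifier to embed
    // in its requestId element.
    public int register(String description) {
        int id = nextId++;
        pending.put(id, description);
        return id;
    }

    // When a response arrives, its requestId element tells us which
    // outstanding request it answers; remove and return that request,
    // or null if the id is unknown or already matched.
    public String match(int requestId) {
        return pending.remove(requestId);
    }
}
```

The same pattern applies to the identifiers returned by the search() and listen() methods.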
The join() method
You can use the
PeerNetwork.join() method to
join a particular peer group. This method takes two parameters, the
identifier of the peer group that you want to join and an authentication
password. The current JXME implementation does not support passwords and ignores the password if one is provided.
The
join() method simply authors the join
request message and places it in a message queue. The message queue
contains all the messages (for example, a peer group joining request, pipe
creation request, or a search request) that you might have asked the
PeerNetwork class to send to the relay. The queue's outbound messages are sent to the relay when you call the
poll() method. I will describe how the
PeerNetwork.poll()
method operates further along in the article.
The poll() method
You can call the
PeerNetwork.poll() method
to poll the relay for incoming messages (I discussed the polling
mechanism in Retrieving the Response Messages). The following
code shows how a JXME client application polls for incoming messages.
The
poll() method takes just one parameter,
a time out value. This value specifies the time (in milliseconds) that
the application wants to wait for a message to arrive. A value of 0 means that the method waits indefinitely until a message arrives.
When a JXME client (a JXME peer) polls the relay, the
PeerNetwork.poll() method first examines its queue of unsent
outbound messages. If the method finds an unsent message in the queue,
it places the message in the request's HTTP body part and sends it to
the relay using the HTTP POST method.
Naturally, I expect the relay to send me some incoming message in
response to polling. The relay might not have incoming messages for
the JXME client. If it doesn't, the
poll()
method returns
null.
If the relay has a message for me, it responds with the response
message. The
poll() method internally parses the response, creates a
Message object
corresponding to the incoming message, and returns the
Message object to the calling application. Naturally, the
calling application would like to call various methods of the
Message object to extract information from it. I will
provide an example of this later in Listing 20.
The send() method
You use the
send() method to send your
messages to specific pipes. The method takes two parameters: the
message to be sent and the target pipe's identifier.
The
send() method internally passes the parameters to a private method of the
PeerNetwork class called
pipeOperation(). The
pipeOperation() method is a helper method that provides common functions used by all the pipe-based methods, such as
listen() and
send().
Recall that while discussing Listing 14, I mentioned a few application-specific elements (
sender,
message, and
fileReference) that form a customized message. The
pipeOperation() method takes the customized message passed to it and adds a few more elements to it (like the target pipe identifier, the request Id, and destination address elements in Listing 14).
The send() method finally places the message in the queue for dispatch to the relay.
The search() method
A J2ME application uses the
PeerNetwork.search() method to search different JXTA resources (like peer groups and pipes). The
search() method takes four parameters:
- The type of the resource to search for. For example,
PeerNetwork.GROUP, if you want to search for a peer group or
PeerNetwork.PIPE, if you want to search for a pipe.
- The name of the resource attribute against which the search query is matched. For example, "Name" if you want to search for a resource by specifying its name.
- A string specifying the search query. For example, "myPipeName" in case you are searching for a pipe named
myPipeName.
- A threshold value specifying the maximum number of responses a remote peer can send.
The
search() method authors the search
request according to the data provided as parameters. Recall that I
discussed the search messaging between the J2ME client and the relay in
Listings 8 and 9.
The
search() method then hands over the authored search query to the
sendMessage() method, which places the messages in the queue of outbound messages. The
search() method returns an integer value, which identifies the search request message. You use this identifier to match responses coming from the relay.
The listen() method
You use the
listen() method for opening the
pipe for input. The
listen() method takes
just one parameter, the pipe's identifier. The
PeerNetwork.listen() method internally uses the
pipeOperation() method for request authoring and then passes the
message to the
sendMessage() method. The
sendMessage() method places the final message in
the queue of unsent messages. The
listen()
method returns an integer identifier to be used for matching the
responses from the relay.
You've seen the major classes in the current JXME implementation. I'll now present a few programming examples to demonstrate how you can use these classes to do some of the tasks that you commonly encounter while developing JXME applications, including:
- How to connect to the relay
- How to create a peer group
- How to search for a peer group
- How to process JXME messages
- How to send and receive messages over pipes
I'll explain these tasks in a step-by-step manner, and this demonstration provides the basic ground for developing the set of classes that you will develop in Part 2 of the series in order to integrate J2ME clients into JMS applications.
You will develop two sample MIDlet applications (PeerGroupDemo and PipeDemo) to demonstrate client-side JXME application development. These MIDlets put the pieces together and show you how to use the JXME classes and methods I have already introduced. The PeerGroupDemo sample MIDlet demonstrates all peer group-related tasks and the PipeDemo MIDlet shows how to create, search, and use pipes.
The code listings that accompany the demonstrations only show the code specific to the demonstration and not the complete MIDlet application. However this article's source code download contains complete source code as well as compiled versions of both sample MIDlets. It also contains instructions in a readme file to show how to use the sample MIDlets.
How to connect to the relay
Both sample MIDlets need to connect to the relay before doing anything else. Therefore, I have written relay connection code in the constructors of both MIDlets. Listing 17 demonstrates the steps a JXME client has to perform in order to join the JXTA network through the JXTA relay.
Listing 17. How to connect to the relay
Listing 17 shows the following steps:
- Start up JXTA on the JXME client by creating an instance of the PeerNetwork class. To do this, use the single-parameter PeerNetwork.createInstance() method and specify only the peer name.
- Connect to the JXTA relay at the specified URL using the PeerNetwork.connect() method.
As a result of a successful connect request, the JXME peer joins NetPeerGroup. If, later on, a peer wants to join another peer group, it searches the peer group of interest and then joins the group.
The
PeerGroupDemo MIDlet demonstrates the use of peer groups. This MIDlet contains three methods:
createPeerGroup,
searchPeerGroup, and
joinPeerGroup, which demonstrate how to create, search, and join
peer groups, respectively. Another method named
processMessage() demonstrates how to process an incoming JXME message and extract useful information from it.
The
PeerGroupDemo constructor calls these methods in a sequence just for the sake of simple demonstration. First, it creates a peer group, then
searches for that group, and finally joins the group.
How to create a peer group
Listing 18 shows a method named
createPeerGroup, which demonstrates the steps you need to follow in order to create a peer group. You will find this method in the
PeerGroupDemo MIDlet.
Listing 18. How to create a peer group
The
createPeerGroup() method takes a string
type parameter, which is the name of the peer group that you want to
create. Listing 18 demonstrates that in order to create a peer group, you
have to make a call to the
PeerNetwork.create() method. You pass a string constant
PeerNetwork.GROUP as the first parameter to the
create() method call and the name of the peer group as the second
parameter. If you want to specify the identifier for the peer group you
are creating, you can pass the identifier as the third parameter. If
you want to let the relay decide the identifier, you can pass
null as the third parameter. You always pass
null as the fourth parameter while creating peer
groups, as this parameter is not needed for peer group creation.
You store the request identifier returned by the
create() method for future use to match the response to this request.
The next step is to call the
poll() method
of the
PeerNetwork object. The
poll() method sends the create peer group request from
the queue to the relay and also checks if any incoming messages for the
client are available. In case the relay has a message for you, the
poll() method returns a
Message object; otherwise, it returns
null.
If the
Message object is not
null, you need to check if the message is in response to the
create group request or a response to some other message. To do this, you need to process and check the Message object, so I have written a method named processMessage().
How to process JXME messages
Listing 19 presents the
processMessage()
method, which demonstrates a simple processing logic for a JXME message.
I have written the
processMessage() method
to demonstrate the use of various methods of the
Message and
Element classes.
The
processMessage() method takes three
parameters: a
Message object named
msg, an integer type request identifier named
reqId, and a string named
successCriterion.
The
processMessage() method checks whether
the
Message object is in response to the
request whose identifier is passed as the second parameter. If it is, the
processMessage() method probes the
Message object further to determine if the request
(for example, a peer group search request) was successfully met.
The success criterion is different for different types of requests.
That's why I have passed the third parameter named
successCriterion. The
processMessage()
method checks whether the success criterion string matches with the
contents of the message's
response element.
If it matches, the
processMessage() method
extracts the contents of an element named
id
and returns the contents. The contents of the
id element contain the identifier of the searched resource (for example, a
peer group).
You use the
processMessage() method to
process the responses to create, search, join, listen, and send
requests. The responses to join, listen, and send requests do not
return any resource identifier. In that case, the
processMessage() method
returns the contents of the
response element
instead of the
id element.
Listing 19. How to process JXME messages
Listing 19 demonstrates the following steps:
- First, call the
getElementCount()method of the
Messageobject, which returns the number of elements in the message.
- Now loop through each element by calling the
getElement()method one by one. The
getElement()method takes an integer parameter (a zero-based index value) and returns the
Elementcorresponding to the index.
- Next, process the
Elementin one or more
ifstatements. In Listing 19, I've used the
Elementclass's
getName(),
getNamespace(), and
getData()methods to extract the data from three particular message elements (
requestId,
response, and
id). All three elements belong to the
proxynamespace. I stored the
requestIdelement's contents in a variable named
requestIdentifierInResponse, the response element's contents in a variable named
responseString, and the
idelement's contents in a variable named
searchedResourceIdentifier. The
requestIdentifierInResponsevariable now holds the identifier of the request to which the
msgobject is a response. The
searchedResourceIdentifiervariable holds the identifier of the JXTA resource (for example, a peer group or a pipe) that someone has returned in response to the search query. The
responseStringvariable holds the string that carries information whether the request was successfully met or failed.
- The
requestIdelement specifies the request to which this
Messageobject is a response. Therefore, I have to match the value of the
requestIdentifierInResponsevariable with the
reqIdparameter. If they don't match, I don't need to process any further.
- If the
msgobject is really in response to the request identified by the
reqIdidentifier, I match the success criterion with the contents of the element named
response.
- If the success criterion matches, I simply return the contents of the
idelement. This element always contains the identifier of the resource searched in all responses to search queries.
- Finally, if the success criterion matches, but the
idelement is not present (which means the
searchedResourceIdentifiervariable is
null), I return the response element's contents. This step ensures that you can use the
processMessage() method to process responses to join, listen, and send requests that do not contain the
idelement.
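The steps above can be condensed into a runnable sketch. Element and Message here are minimal stand-ins that mimic only the accessor methods discussed (getElementCount, getElement, getName, getNamespace, getData); the real classes come with the JXME library. The exact matching rule for the success criterion (plain string equality below) and the numeric parsing of the requestId contents are assumptions of this sketch.

```java
// Minimal stand-ins for the JXME Message/Element API discussed above.
class Element {
    private final String name, namespace;
    private final byte[] data;
    Element(String name, byte[] data, String namespace) {
        this.name = name; this.data = data; this.namespace = namespace;
    }
    String getName() { return name; }
    String getNamespace() { return namespace; }
    byte[] getData() { return data; }
}

class Message {
    private final Element[] elements;
    Message(Element[] elements) { this.elements = elements; }
    int getElementCount() { return elements.length; }
    Element getElement(int index) { return elements[index]; }
}

class ProcessMessageSketch {
    // Returns the searched resource identifier, or the response contents
    // when no id element is present, or null when the message does not
    // answer reqId or does not meet the success criterion.
    static String processMessage(Message msg, int reqId, String successCriterion) {
        String requestIdentifierInResponse = null;
        String responseString = null;
        String searchedResourceIdentifier = null;

        for (int i = 0; i < msg.getElementCount(); i++) {
            Element element = msg.getElement(i);
            if (!"proxy".equals(element.getNamespace()))
                continue;                      // only the proxy namespace matters here
            String contents = new String(element.getData());
            if ("requestId".equals(element.getName()))
                requestIdentifierInResponse = contents;
            else if ("response".equals(element.getName()))
                responseString = contents;
            else if ("id".equals(element.getName()))
                searchedResourceIdentifier = contents;
        }

        // Not a response to our request: nothing further to process.
        if (requestIdentifierInResponse == null
                || Integer.parseInt(requestIdentifierInResponse) != reqId)
            return null;

        // The success criterion must match the response element's contents.
        if (responseString == null || !responseString.equals(successCriterion))
            return null;

        // Prefer the id element; fall back to the response contents for
        // join, listen, and send responses that carry no id element.
        return (searchedResourceIdentifier != null)
                ? searchedResourceIdentifier : responseString;
    }

    public static void main(String[] args) {
        Message msg = new Message(new Element[] {
            new Element("requestId", "7".getBytes(), "proxy"),
            new Element("response", "success".getBytes(), "proxy"),
            new Element("id", "urn:jxta:uuid-1234".getBytes(), "proxy")
        });
        System.out.println(processMessage(msg, 7, "success"));
    }
}
```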
You can use this example to implement message processing logic according to your own requirements. In Part 2 of the series, I will use these concepts to implement J2ME client-side messaging requirements.
How to search for a peer group
Now, I will demonstrate how you can search for a specific peer group.
Listing 20 shows a method named
searchPeerGroup(), which I have written to demonstrate how you can search for peer groups. This method is included in the PeerGroupDemo MIDlet.
Listing 20. How to search for a peer group
The
searchPeerGroup() method takes just one
parameter, which is the name of the peer group you want to search. The
method demonstrates the following steps:
- First, call the
PeerNetworkclass's
search()method. You pass on the four parameters to the
search()method call. As a result of this method call, the search message is authored and placed in the queue of outbound messages inside the
PeerNetworkobject. The
search()method returns an identifier for the search message. In Listing 20, I have stored the identifier in an integer variable named
messageID.
- Now you can call the
poll()method of the
PeerNetworkclass, which returns a
Messageobject.
- The next step is to check whether the object returned by the
poll()method is
null. If it isn't, you have received a message from the relay.
- Call the
processMessage()method to determine if the message is in response to the search query you just sent. If it is, the
processMessage()method returns the identifier of the peer group you were searching for.
How to join a peer group
Listing 21 shows a simple method named
joinPeerGroup(), which demonstrates how you join a peer group. The
joinPeerGroup() method takes just one parameter, the identifier of the peer group that you want to join.
Listing 21. How to join a peer group
Listing 21 shows that joining a peer group simply requires you to
call the
PeerNetwork class's
join() method. The
join() method
takes as parameter the group's identifier.
After calling the
join() method, you poll the relay and then call the
processMessage() method (already discussed in Listing 19) to check if you have joined the peer group.
After you have joined the group, you might also want to instantiate another
PeerNetwork object using the two-argument
constructor. I advise you to instantiate a new
PeerNetwork object for every peer group you join. This way, the
communication logic in your application remains straightforward and simple to understand.
This section demonstrates how JXTA pipes work. As you know, pipes are used for peer-to-peer communication, which involves sending and receiving messages over them. I have written a simple MIDlet application named PipeDemo to demonstrate pipe communication. You'll find the complete MIDlet in the source code for this article.
Listing 22 shows the PipeDemo MIDlet's constructor.
Listing 22. The PipeDemo constructor
As you can see from Listing 22, the
PipeDemo
constructor first instantiates the
PeerNetwork object and then calls its
connect() method. I have already discussed these points.
The constructor then calls five methods:
createPipe(),
listenToPipe(),
searchPipe(),
authorMessage(), and
sendMessage(). I have written these five methods so that you can easily understand how to perform the following tasks:
- How to create a pipe
- How to open a pipe for listening
- How to search for a pipe
- How to author a message
- How to send a message to a pipe
The final outcome of these method calls in the constructor is that you create your own pipe and start listening to it. Then you search for the same pipe, author a message, and send the message to your own pipe. Naturally, you receive the message you just sent.
This arrangement provides you with a single MIDlet that demonstrates all the major operations you need to do on pipes. You can use the basic concepts to build your own application logic. In Part 2 of the series, I will use these concepts to integrate a J2ME client into a JMS application.
Listing 23 shows the
createPipe() method
implementation and demonstrates how to create a pipe.
Listing 23. How to create a pipe
You can see from the
createPipe() method
that creating a pipe is very similar to creating a peer group. You just
have to call the
PeerNetwork class's
create() method. When you called the
create() method in Listing 18 to
create a peer group, you passed
PeerNetwork.GROUP as the first parameter's value. Now, you pass
PeerNetwork.PIPE as the first parameter's value.
Also note that when you created a peer group, you didn't need to
specify the last parameter to the
create() method,
so you passed
null. When you create a pipe,
however, you do need to pass the last parameter, which specifies the pipe
type. If you are creating a unicast pipe, you pass on
PeerNetwork.UNICAST_PIPE. If you are creating a propagate
pipe, you pass on
PeerNetwork.PROPAGATE_PIPE. The
create() method returns the identifier of the create pipe request message.
After calling the
create() method, you call the
poll() method, which returns a
Message object. You pass the
Message object as well as the create pipe request identifier to the
processMessage() method that I
discussed in Listing 19. The
processMessage() method returns the pipe identifier.
A successfully created pipe does not start receiving incoming messages until you open it and listen for messages. The next example demonstrates how to open a pipe and listen over it.
How to open a pipe for listening
Notice the
listenToPipe method shown in Listing 24.
Listing 24. How to open a pipe for listening
To open a pipe for listening, you need to call the
listen() method of the
PeerNetwork class, passing the pipe's identifier. The
listen() method returns the identifier of the listen request message it
authored.
After calling the
listen() method, you call the
poll() method. The
poll() method returns a
Message object, which you pass on to the
processMessage() method along with the listen request message identifier. If the
processMessage() method does not
return
null, it means your listen request was
successful and the pipe has been opened for input.
If you want to send a message to a peer, you need to know the identifier of the peer's pipe. Therefore, you first search for a pipe. After you know the pipe identifier, you can directly send a message over the pipe to the peer, as shown in the
searchPipe() method of Listing 25.
Listing 25. How to search for a pipe
The
searchPipe() method takes just one
parameter, the pipe's name. It calls the
search()
method of the
PeerNetwork class, passing the
four parameters mentioned earlier. Notice in Listing 25 that you have passed on
PeerNetwork.PIPE as the first parameter's value to the
search() method. This means you are searching for a pipe.
The
search() method returns the
identifier of the search query message. You then call the
poll() method, which returns a
Message object. You pass on the search query identifier and the
Message object to the
processMessage() method, which checks if the
Message object is a response to the search query and extracts the pipe
identifier from the
Message object.
How to author a JXME message
I have already mentioned that you use the
Element and
Message classes to author your JXME messages. I have written a simple method in Listing 26 named
authorMessage(), which demonstrates this process.
Listing 26. How to author a JXME message
The
authorMessage() method takes a string
parameter named
messageString, which
represents the message contents that you want to send over the pipe. In Listing
26, you have performed the following steps:
- Authored an array of elements.
- Instantiated an
Elementobject. Notice that you pass on a string (for example, "message") as the first parameter. This string specifies the name of the element, which wraps the actual contents of the message. The second parameter specifies the message contents (the
messageStringparameter), which should be supplied as a byte array (for example,
messageString.getBytes()). The third parameter specifies the element namespace, which is
nullif you want to create an element with no namespace. The last parameter specifies the message's MIME type (
nullif you want to use the default MIME type).
- Used the array of step 1 (which contains just one
Elementobject) to construct the
Messageobject.
The
authorMessage() method returns the
Message object, which you now send over the pipe.
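The authoring steps can be sketched as follows. Element and Message are again minimal stand-ins whose constructor shapes follow the description above (element name, byte-array payload, namespace, MIME type); the real JXME constructors may differ in detail, so treat this purely as an illustration of the sequence.

```java
// Hypothetical stand-ins mirroring the constructor shapes described above.
class Element {
    final String name;
    final byte[] data;
    final String namespace;   // null means "no namespace"
    final String mimeType;    // null means "default MIME type"

    Element(String name, byte[] data, String namespace, String mimeType) {
        this.name = name;
        this.data = data;
        this.namespace = namespace;
        this.mimeType = mimeType;
    }
}

class Message {
    private final Element[] elements;
    Message(Element[] elements) { this.elements = elements; }
    int getElementCount() { return elements.length; }
    Element getElement(int index) { return elements[index]; }
}

class AuthorMessageSketch {
    static Message authorMessage(String messageString) {
        Element[] elements = new Element[1];
        // "message" is the element name wrapping the actual contents;
        // the payload must be supplied as a byte array.
        elements[0] = new Element("message", messageString.getBytes(), null, null);
        return new Message(elements);
    }

    public static void main(String[] args) {
        Message msg = authorMessage("Hello over the pipe");
        System.out.println(msg.getElementCount());   // 1
        System.out.println(msg.getElement(0).name);  // message
    }
}
```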
How to send a message over a pipe
Listing 27 shows how you can send a message over a pipe using the
sendMessage() method.
Listing 27. How to send a message over a pipe
The
sendMessage() method takes a
Message object and a pipe identifier and sends the
message over the pipe. The
sendMessage() method
calls the
send() method of the
PeerNetwork class, passing the
Message object and the pipe identifier along with the method call.
This sends the message over the pipe.
You can now call the
poll() method to get the message you just sent over the pipe.
You now know why you cannot directly integrate J2ME clients into a JMS network. I've discussed messaging between a JXME client and a JXTA relay, and demonstrated the programmatic steps required in a J2ME MIDlet to communicate with a JXTA relay. The scene is now set to start developing a set of classes that use JXTA to integrate J2ME clients in a JMS application. This is called JXTA4JMS, which I will develop in Part 2 of this series. I will use the concepts presented in this article to build wireless connectivity for JMS clients.
- Download the source code of this article.
- For more information on JMS, see the JMS page at the Sun Microsystems Web site. Also, read this comprehensive tutorial on JMS for an introduction to the JMS API.
- Get more information on JNDI at the Sun Microsystems Web site.
- Visit the JXTA home page for information on Project JXTA objectives.
- Read Mobile P2P messaging, Part 2 (developerWorks, January 2003), which explains one of the samples that comes with the JXME download.
- Take the tutorial Introducing the Java Message Service (developerWorks, June 2004) to learn the basic programming techniques for creating JMS programs.
Faheem Khan is an independent software consultant specializing in Enterprise Application Integration (EAI) and B2B solutions. He can be reached at fkhan872@yahoo.com.
Is there any way to use ZeroMQ from a sublime plugin on windows? I've tried using binary builds from here: github.com/zeromq/pyzmq/downloads and they give initial impression of working, but when I try to create a connection everything falls apart (ST2 crashes).
File: pyzmq-2.1.7.1.win-amd64-py2.6.msi
Sublime: win64, 2177
Plugin code:
import zmq
c = zmq.Context()
s = c.socket(zmq.PUB)
s.bind("tcp://*:5555")
This works, but as soon as I do this on the other side, everything just blows up.
c = zmq.Context()
s = c.socket(zmq.SUB)
s.connect("tcp://127.0.0.1:5555")
I have the .mdmp / appcompat / WERInternalMetadata saved if necessary.
EDIT: Actually, it sometimes crashes on s.bind("tcp://*:5555") as well.
|
Note that there is a g++ compiler flag that you can specify to get additional error/warning messages. The command with the option is:
g++ -c -pedantic file.cpp
The term core is an old-fashioned term that was used to describe a particular type of memory - "core memory". So when the system reports a core dump it means that it has copied (dumped) the whole area of memory that your program was using at the time it crashed. By default, it puts all this information (and there is a LOT of it) into a file called core. Seasoned Unix programmers actually look inside this file to try to figure out where their program went off the track. The rest of us usually say "Oh darn..." and just delete the file.
A better solution is to use a debugger program so that you can control how your program is running. Here are some of the basic actions you want to perform:
Compile the program with the -g option:
g++ -g filename.cpp -o executable_file
Start gdb by typing in the following command:
gdb executable_file
where executable_file is the executable version of the program. Remember: if you do not use the -o option when you compile a program, the executable file will be called a.out.
The system prompt changes to the gdb prompt, and you are ready to start entering gdb commands. There are many commands available; some of the basic ones are break (set a breakpoint at a line or function), run (start the program), next (execute the next line, stepping over function calls), step (step into function calls), print (display the value of a variable), list (show source lines), continue (resume after a breakpoint), and quit (exit gdb).
As is the case with most debuggers, the best way to learn gdb is to actually use it on a program. There is a simple program you can use to do this in the lab exercise.
By now, you have learned how to declare a
char
variable to hold characters. You might also have learned that a
string (like a person's first or last name) in C++ can simply be
represented by a null-terminated sequence of characters stored in a
char array. These days you can use the
string type for strings. In the bad old days, before
circa 1994, you had to use character arrays, because there was no
native
string type in most C++ implementations. There is
still no string type in C. A string, represented as a character
array, is called a "C string".
Here's a quick little review of declaring and initializing C strings.
To declare a character array:
char s1[8];
To assign a value to an array when you declare it:
char s1[8] = "one";
Actually, if you are assigning a value at the same time as you are declaring the char array, you don't need to specify an array size:
char s1[] = "one";
Assigning a value either way results in a character array of finite length, established at compile time. Note: You cannot assign the value of a C string after the declaration as you do with other simple variable assignments, i.e.
s1 = "one"; // <- WRONG
To assign a value after the declaration, you need to use a C string function such as strcpy. We'll look at that next.
There are two different ways to refer to an array when its size is
not important. You can write
char arr[] or you can write
char *arr. You will need to know both methods to
understand the following discussion. The first way is what you were
taught in CS 110. The second can be used in declarations and in
function headers and prototypes – wherever you would see
char arr[]. They are equivalent. You use the same syntax to refer
to an element of both types of array. They are passed to functions the
same way too.
C++ supports a wide range of C string manipulation functions. Following are some of them:
strcpy
Use this function if you would like to copy
one string to another string. The syntax of this function is
strcpy(char *s1, const char *s2)
where s1 is the copy and s2 is the original.
Following is an example:
// File name: ~ftp/pub/class/170/ftp/cpp/StringClass/Copy1.cpp
// Purpose: Demonstrates the use of strcpy()

#include <cstring>
#include <iostream>
using namespace std;

#define COPY_SIZE 64 // maximum characters to copy

int main()
{
    const char *Original = "I am the original!";
    char Copy[COPY_SIZE + 1] = "I am the copy!";

    cout << "Before strcpy(): " << endl;
    cout << "Original = " << Original << endl;
    cout << "Copy = " << Copy << endl;

    strcpy(Copy, Original);

    cout << "After strcpy(): " << endl;
    cout << "Original = " << Original << endl;
    cout << "Copy = " << Copy << endl;

    return 0;
}
Here is the running result of the above program:
mercury[156]% CC -o Copy1 Copy1.cpp
mercury[157]% Copy1
Before strcpy():
Original = I am the original!
Copy = I am the copy!
After strcpy():
Original = I am the original!
Copy = I am the original!
mercury[158]%
strlen
Use this function if you would
like to know the length of a string. The syntax of this function is
strlen(const char *str). Following is a simple program demonstrating
the use of
strlen:
// File name: ~ftp/pub/class/170/ftp/cpp/StringClass/StrLen.cpp
// Purpose: Demonstrates the use of strlen()

#include <cstring>
#include <iostream>
using namespace std;

int main()
{
    const char *String = "I am the string!";

    cout << "Here is string:" << String << endl;
    cout << "Its length is: " << strlen(String) << " characters\n";

    return 0;
}
Here is the running result of the above program:
mercury[166]% CC -o StrLen StrLen.cpp
mercury[167]% StrLen
Here is string:I am the string!
Its length is: 16 characters
mercury[168]%
strncpy
strncpy is a variation of
strcpy. Its syntax is:
strncpy(char *s1, const char *s2, size_t n), where size_t
is an unsigned integer type. It copies at most n characters from s2
to s1. Following is an example:
// File name: ~ftp/pub/class/170/ftp/cpp/StringClass/Copy2.cpp
// Purpose: Demonstrates the use of strncpy()

#include <cstring>
#include <iostream>
using namespace std;

#define COPY_SIZE 64

int main()
{
    const char *Original = "I am the original!";
    char Copy[COPY_SIZE + 1] = "I am the copy!";

    cout << "Before strncpy(): " << endl;
    cout << "Original = " << Original << endl;
    cout << "Copy = " << Copy << endl;

    // calls strncpy and then terminates the copy;
    // Copy[COPY_SIZE] is the last valid index of the array
    strncpy(Copy, Original, COPY_SIZE);
    Copy[COPY_SIZE] = '\0';

    cout << "After strncpy(): " << endl;
    cout << "Original = " << Original << endl;
    cout << "Copy = " << Copy << endl;

    return 0;
}
Running result:
mercury[187]% CC -o Copy2 Copy2.cpp
mercury[188]% Copy2
Before strncpy():
Original = I am the original!
Copy = I am the copy!
After strncpy():
Original = I am the original!
Copy = I am the original!
For more string manipulating functions, refer to your text book or type
man strcpy at your Unix command prompt.
C String is one of the two ways that you can manipulate
strings in C++. The other way is through the
string class.
This allows strings to be represented as objects in C++.
The string
class gives you strings of
unbounded length
because the string space is allocated at
run-time.
Compare this to C Strings where the
finite string length
of char array variables is defined
at compile time.
Moreover, assignment, comparison, and concatenation of strings using the string class is arguably easier than using C Strings. However, you'll still see C String code around in older programs so it's best to know both.
Why, you may ask, do we need the string class when we already have C string? Here is why: null terminated strings (C strings) cannot be manipulated by any of the standard C++ operators. Nor can they take part in normal C++ expressions. For example, consider this fragment:
char s1[80], s2[80], s3[80];

s1 = "one";   // error
s2 = "two";   // error
s3 = "three"; // error
As the comments show, in C++, it not possible to use the assignment operator to give a character array a new value. To do this, you need to use the strcpy function that was discussed above:
strcpy(s1, "one");
strcpy(s2, "two");
strcpy(s3, "three");
There are also other reasons for using strings. For example, the null-terminated C string does not check whether an array index is out of bounds, and that contributes to many string problems encountered by C++ programmers, experienced and inexperienced alike.
Following is an example showing you how to use the string class to manipulate strings. The example gives you a feel for how string objects work; you do not need to remember how they are implemented internally.
// File name: ~ftp/pub/class/170/ftp/cpp/StringClass/StlString.cpp
// Purpose: Demonstrates the use of the string class

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string str1("This is a test");
    string str2("ABCDEFG");

    cout << "Initial strings: \n";
    cout << "str1: " << str1 << endl;
    cout << "str2: " << str2 << "\n\n";

    // demonstrate length()
    cout << "Length of str1 is: " << str1.length() << endl;

    // demonstrate insert()
    cout << "Insert str2 into str1:\n";
    // insert str2 into str1 starting from the 5th position
    str1.insert(5, str2);
    cout << str1 << "\n\n";

    // demonstrate erase()
    cout << "Remove 7 characters from str1: \n";
    // remove 7 characters from str1 starting from the 5th position
    str1.erase(5, 7);
    cout << str1 << "\n\n";

    // demonstrate replace
    cout << "Replace 2 characters in str1 with str2:\n";
    // starting from the 5th position of str1, replace 2 characters of
    // str1 with str2
    str1.replace(5, 2, str2);
    cout << str1 << endl;

    return 0;
}
Here is how it runs:
mercury[36]% CC -o StlString StlString.cpp
mercury[37]% StlString
Initial strings:
str1: This is a test
str2: ABCDEFG

Length of str1 is: 14
Insert str2 into str1:
This ABCDEFGis a test

Remove 7 characters from str1:
This is a test

Replace 2 characters in str1 with str2:
This ABCDEFG a test
mercury[38]%
|
Services
Web services separate out the service contract from the service interface. This feature is one of the many characteristic required for an SOA-based architecture. Thus, even though it is not mandatory that we use the web service to implement an SOA-based architecture, yet it is clearly a great enabler for SOA.
Web services are hardware, platform, and technology neutral. The producers and/or consumers can be swapped without notifying the other party, yet the information can flow seamlessly. An ESB can play a vital role in providing this
separation.
Binding Web Services
A web service's contract is specified by its WSDL, which gives the endpoint details needed to access the service. When we bind the web service to an ESB, the result is a different endpoint, which we can advertise to the consumer. When we do so, it is critical that we don't lose any information from the original web service contract.
Why Another Indirection?
There can be multiple reasons why we require another level of indirection between the consumer and the provider of a web service, by binding at an ESB. Systems exist today to support business operations as defined by the business processes. If a system doesn't support a business process of an enterprise, that system is of little use. Business processes are never static: if they remained static, there would be no growth or innovation, and the business would be doomed to fail.
Hence, systems or services should facilitate agile business processes. The good architecture and design practices will help to build “services to last” but that doesn’t mean our business processes should be stable. Instead, business processes will evolve by leveraging the existing services. Thus, we need a process workbench to assemble and orchestrate services with which we can “Mix and Match” the services. ESB is one of the architectural topologies where we can do the mix and match of services. To do this, we first bind the existing (and long lasting) services to the ESB. Then leverage the ESB services, such as aggregation and translation, to mix and match them and
advertise new processes for businesses to use.
Moreover, there are cross service concerns such as versioning, management, and monitoring, which we need to take care to implement the SOA at higher levels of maturity. The ESB is again one way to do these aspects of service orientation.
HTTP
HTTP is the World Wide Web (www) protocol for information exchange. HTTP is based on character-oriented streams and is firewall-friendly. Hence, we can also exchange XML streams (which are XML encoded character streams) over HTTP. In a web service we exchange XML in the SOAP (Simple Object Access Protocol) format over HTTP. Hence, the HTTP headers exchanged will be slightly different than a normal web page interaction. A sample web service request header is shown as follows:
GET /AxisEndToEnd/services/HelloWebService?WSDL HTTP/1.1
User-Agent: Java/1.6.0-rc
Host: localhost:8080
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive

POST /AxisEndToEnd/services/HelloWebService HTTP/1.0
Content-Type: text/xml; charset=utf-8
Accept: application/soap+xml, application/dime, multipart/related, text/*
User-Agent: Axis/1.4
Host: localhost:8080
Cache-Control: no-cache
Pragma: no-cache
SOAPAction: ""
Content-Length: 507
The first line contains a method, a URI and an HTTP version, each separated by one or more blank spaces. The succeeding lines contain more information regarding the web service exchanged.
ESB-based integration heavily leverages the HTTP protocol due to its open nature, maturity, and acceptability. We will now look at the support provided by the ServiceMix in using HTTP.
servicemix-http in Detail
servicemix-http is used for HTTP or SOAP binding of services and components into the ServiceMix NMR. For this, ServiceMix uses an embedded HTTP server based on Jetty.
In Chapter 3, you have already seen the following two ServiceMix components:
- org.apache.servicemix.components.http.HttpInvoker
- org.apache.servicemix.components.http.HttpConnector
As of today, these components are deprecated and their functionality is replaced by the servicemix-http standard JBI component. A few of the features of servicemix-http are as follows:
- Supports SOAP 1.1 and 1.2
- Supports MIME with attachments
- Supports SSL
- Supports WS-Addressing and WS-Security
- Supports WSDL-based and XBean-based deployments
- Support for all MEPs as consumers or providers
Since servicemix-http can function both as a consumer and a provider, it can effectively replace the deprecated HttpInvoker and HttpConnector components.
Consumer and Provider Roles
When we speak of the Consumer and Provider roles for the ServiceMix components, the difference is very subtle at first sight, but very important from a programmer perspective. The following figure shows the Consumer and Provider roles in the ServiceMix ESB:
The above figure shows two instances of servicemix-http deployed in the ServiceMix ESB, one in a provider role and the other in the consumer role. As it is evident, these roles are with respect to the NMR of the ESB. In other words, a consumer role implies that the component is a consumer to the NMR whereas a provider role implies the NMR is the consumer to the component. Based on these roles, the NMR will take responsibility of any format or protocol conversions for the interacting components.
Let us also introduce two more parties here to make the role of a consumer and a provider clear—a client and a service. In a traditional programming paradigm, the client interacts directly with the server (or service) to avail the functionality. In the ESB model, both the client and the service interact with each other only through the ESB. Hence, the client and the service need peers with their respective roles assigned, which in turn will interact with each other. Thus, the ESB consumer and provider roles can be regarded as the peer roles for the client and the service respectively.
Any client request will be delegated to the consumer peer who in turn interacts with the NMR. This is because the client is unaware of the ESB and the NMR protocol or format. However, the servicemix-http consumer knows how to interact with the NMR. Hence any request from the client will be translated by the servicemix-http consumer and delivered to the NMR. On the service side also, the NMR needs to invoke the service. But the server service is neutral of any specific vendor’s NMR and doesn’t understand the NMR language as such. A peer provider role will help here. The provider receives the request from the NMR, translates it into the actual format or protocol of the server service and invokes the service. Any response will also follow the reverse sequence.
servicemix-http XBean Configuration
The servicemix-http component supports XBean-based deployment. Since the servicemix-http component can be configured in both the consumer and provider roles, we have two sets of configuration parameters for the component. Let us look into the main configuration parameters:
servicemix-http as consumer: A sample servicemix-http consumer:
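The sample configuration itself did not survive in this copy; the following is a hedged sketch of what a servicemix-http consumer endpoint typically looks like in an xbean.xml. The service, endpoint, and locationURI values here are illustrative, not taken from the original sample:

```xml
<beans xmlns:http="http://servicemix.apache.org/http/1.0"
       xmlns:test="http://servicemix.apache.org/test">
  <!-- Consumer role: accepts client HTTP/SOAP requests and hands them to the NMR -->
  <http:endpoint service="test:MyConsumerService"
                 endpoint="consumer"
                 role="consumer"
                 locationURI="http://localhost:8192/Service/"
                 defaultMep="http://www.w3.org/2004/08/wsdl/in-out"
                 soap="true"/>
</beans>
```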
While configuring the provider component, you need to ensure that the service (IHelloWebService) and the endpoint (HelloWebService) match the service name and port elements of the WSDL you use, so that the correct WSDL is returned for the endpoint. Moreover, the service name will use the targetNamespace of the WSDL.
Web Service Binding Sample
We will now look at a complete sample of how to bind a web service to the ServiceMix. While doing so, we will also see how to use the Apache Axis client-side tools to generate stubs based on the binding at ServiceMix. Normally we point to the actual WSDL URL to generate client stubs, but in this example we will point the tools to the ServiceMix binding. Then the ServiceMix binding will act completely as the web service gateway visible to the external clients, thus shielding the actual web service in the background.
Sample Use Case
By using a web services gateway, you can use the intermediation to build and deploy the web services routing application. But keep in mind that the routing is just one of the various technical functionalities that you can implement at the gateway. For our sample use case, we have an external web service, deployed and hosted in a node remote to the ESB. In the ESB, we will set up a Web Services Gateway, which can proxy the remote web service. The entire setup is shown in the following figure:
Along with the previous discussion, we need the servicemix-http in the consumer and provider roles. MyConsumerService is a servicemix-http component in the consumer role and IHelloWebService is a servicemix-http component in the provider role. Both of them are shown in the following figure:
Let us now take a closer look at the gateway configured in the ESB. Here, we configure servicemix-http in both the consumer and provider roles and hook it to the NMR. Any client requests are intercepted by the consumer and the consumer then sends the request on behalf of the client to the NMR. From there the request will be routed to the destination web service through the provider. The message flow is marked in sequence in the following figure:
Deploy the Web Service
As a first step, change to the ch10\ServiceMixHttpBinding folder and run ant. The build.xml file will call the builds in the subprojects to build the web service as well as the ServiceMix subproject. The web service is built completely, and the war file can be found in the folder ch10\ServiceMixHttpBinding\01_ws\dist\AxisEndToEnd.war. To deploy the web service, drop this war file into your favorite web server's webapps folder and restart the web server, if necessary.
Now to make sure that your web service deployment works fine, we have provided a web service test client. To invoke the test client, execute the following commands:
cd ch10\ServiceMixHttpBinding\01_ws
ant run
We can also check the web service deployment by accessing the WSDL from the URL of the deployed service. Let us list the WSDL here, since we want to compare it with the WSDL accessed from the ServiceMix binding later to cross-check the similarities. This is provided in ch10.html.

Access WSDL and Generate Axis Stubs to Access the Web Service Remotely

Now for the really cool stuff. As we discussed earlier, we have set up ServiceMix as a separate web services gateway in front of the actual web service deployment. Now we have to check whether we can access the WSDL from ServiceMix. For this, we can point our browser at the consumer endpoint using the standard WSDL query string (?wsdl). Note that this URL points to the locationURI attribute configured for the consumer component, which is HelloWebService. The WSDL, placed in location ch10\ServiceMixHttpBinding\HelloWebService-esb.wsdl, matches the following code:
If we compare the two WSDLs, the major difference is in the service description section. Here, ServiceMix forms the service and port names taking values from the service and endpoint attributes of the consumer service—MyConsumerService and HelloWebService respectively.
If we are able to retrieve the WSDL, the next step is to use the Apache Axis tools to auto-generate the client-side stubs and binding classes, using which we can write simple Java client code to access the service through the HTTP channel. The Axis client classes are placed in the directory ch10. The client code is as follows:

import javax.xml.namespace.QName;

public class Client {

    private static String wsdlUrl = "";
    private static String namespaceURI = "axis.apache.binildas.com";
    private static String localPart = "MyConsumerService";

    protected void executeClient(String[] args) throws Exception {
        MyConsumerService myConsumerService = null;
        IHelloWeb iHelloWeb = null;
        if (args.length == 3) {
            myConsumerService = new MyConsumerServiceLocator(args[0],
                    new QName(args[1], args[2]));
        } else {
            myConsumerService = new MyConsumerServiceLocator(wsdlUrl,
                    new QName(namespaceURI, localPart));
        }
        iHelloWeb = myConsumerService.getHelloWebService();
    }

    public static void main(String[] args) throws Exception {
        Client client = new Client();
        client.executeClient(args);
    }
}
To build the entire Axis client codebase, assuming that ServiceMix is up and running, change directory to ch10 and run the ant build.

In this chapter, we introduced the servicemix-http JBI component. Then we looked at samples of binding web services to ServiceMix using the servicemix-http binding component. By doing so, we have, in fact, implemented a complete, functional web services gateway at the ESB.
A lot of times, we utilize this pattern to expose useful web services hosted deep inside corporate networks protected by multiple levels of firewalls. When we do so, the web services gateway is the access point for any external client. It should mimic the actual web service not only in providing the functionality but also in exposing the web services contract (WSDL). Now, do you want to improve the QoS attributes of your web service?
The next chapter will take you through a similar exercise by demonstrating how to access your HTTP-based web services through an MOM channel like JMS.
Source: https://javabeat.net/service-oriented-java-business-integration/
The new BitmapData ActionScript class is used to represent a bitmap object in memory. When you create a new instance of the class, a blank image is stored in memory. You can then manipulate this bare-bones bitmap with the various methods of the BitmapData class. Before you can use this class, however, you need to know your bitmaps inside out.
A bitmap is a digital image format that describes an image using a grid of color values. Each cell in the grid represents a pixel. Each pixel is drawn by a renderer with a specified color value to form an image. Bitmaps inside Flash Player are stored at 32-bit color depth. This means that the color of any given pixel is stored as a binary number that is 32 bits long. The color of a pixel in a 32-bit image can be one of roughly 16.7 million colors in a so-called true-color image. (It's called true color because the number of colors the human eye can detect is approximately 16.7 million.) Each color is made up of four color channels: the familiar Red, Green, and Blue channels plus an alpha channel, which is used to add varying levels of transparency to the color.
Color values used in conjunction with the BitmapData class should be represented in ActionScript using a 32-bit hexadecimal number: a sequence of four pairs of hexadecimal digits. Each hexadecimal pair defines the intensity of one of the four color channels (alpha, Red, Green, and Blue, in that order). The intensity of a color channel is a hexadecimal representation of a decimal number in the range 0–255; FF is full intensity (255), 00 is no intensity (0). You need to pad a channel so it is two digits in length, for example, 01 instead of 1. This ensures that you always have eight digits in a hexadecimal number. You should also ensure that you specify the hexadecimal number prefix, 0x. For example, white (full intensity on all channels) is represented in hexadecimal notation as 0xFFFFFFFF. Black, by contrast, is the opposite; it has no color in any of the Red, Green, and Blue channels: 0xFF000000. Note that the alpha channel (the first pair) is still at full intensity (FF). Full intensity on the alpha channel (FF) means fully opaque, while no intensity (00) means fully transparent. So a transparent white pixel has the color value 0x00FFFFFF.
It is often easier for people to remember particular colors in terms of alpha, Red, Green, and Blue (ARGB) values instead of Hexadecimal values. If that is the case with you, you should know how to convert from an ARGB to a hexadecimal value. The following ActionScript function will do just that:
function argbtohex(a:Number, r:Number, g:Number, b:Number):Number {
    return (a<<24 | r<<16 | g<<8 | b);
}
You can use it like this:
hex = argbtohex(255,0,255,0) // outputs a 32-bit green hexadecimal value as a base-10 number
To convert hexadecimal color values back to four-decimal numbers (one for each channel, ARGB) in the range of 0–255, use the following ActionScript function:
function hextoargb(val:Number):Object {
    var col:Object = {};
    col.alpha = (val >> 24) & 0xFF;
    col.red   = (val >> 16) & 0xFF;
    col.green = (val >> 8) & 0xFF;
    col.blue  = val & 0xFF;
    return col;
}
You can use it like this:
argb=hextoargb(0xFFFFCC00); alpha=argb.alpha; red=argb.red; green=argb.green; blue=argb.blue;
To create a bitmap at runtime using ActionScript, create an instance of the BitmapData class:
myBitmap=new flash.display.BitmapData(width,height,transparent,fillColor)
The BitmapData class is located in the flash.display package. To save yourself from typing the fully qualified path to the class every time you want to create a new instance, import the package first:
import flash.display.BitmapData
Then you can create an instance of the class:
myBitmap=new BitmapData(width,height,transparent,fillColor);
The BitmapData class constructor accepts four arguments: width, height, transparent, and fillColor. The width and height arguments specify the dimensions of the bitmap; if you think of the bitmap in terms of a grid, the width is the number of pixels in a row and the height is the number of rows. The transparent argument is a Boolean value (true/false) used to specify whether you intend for the bitmap to contain transparency. The fillColor argument specifies the default 32-bit color of each pixel in the bitmap. However, if you set the transparent parameter to false, the first 8 bits of the specified color are ignored. In that case, it is not necessary to pass a 32-bit hexadecimal number; instead you can pass a 24-bit hexadecimal number such as 0xFFFFFF for white pixels. The transparent and fillColor arguments are optional. If you omit them, the bitmap will be stored in memory with no transparency and each pixel will default to white (0xFFFFFFFF).
For example, if you wanted to create a black, fully opaque (no transparency) image that is 100 pixels wide by 100 pixels high, you would use the following code:
import flash.display.BitmapData myBitmap = new BitmapData(100,100,false,0xFF000000)
Note: The maximum dimensions of a bitmap in Flash Player are limited to 2880 pixels in either direction (width or height). If you attempt to create a BitmapData instance larger than this restriction, the bitmap will not be created. This limit is in place to keep people from creating Flash movies that gobble up the client's RAM; a bitmap that is 2880 x 2880 pixels will use roughly 32MB of RAM.
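The memory figures quoted above follow directly from the 4-bytes-per-pixel storage. A quick sketch of the arithmetic (in Java rather than ActionScript, since only the math matters here):

```java
public class BitmapMemory {
    // Each pixel is stored as 32-bit ARGB: 4 bytes per pixel.
    static long bytesForBitmap(int width, int height) {
        return (long) width * height * 4;
    }

    public static void main(String[] args) {
        // Largest bitmap Flash Player allows: 2880 x 2880 pixels.
        long max = bytesForBitmap(2880, 2880);
        System.out.println(max);                      // 33177600 bytes
        System.out.println(max / (1024.0 * 1024.0));  // a bit over 31.6 MB

        // The 500 x 500 example from the text: close to 1 MB.
        System.out.println(bytesForBitmap(500, 500)); // 1000000 bytes
    }
}
```

This is why the article's "roughly 32MB" and "close to 1MB" figures hold: they are just width × height × 4 bytes.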
After you have created an instance of the BitmapData class, you have numerous methods available that you can use to manipulate the bitmap. For example, you can apply filter effects, fill particular areas with color, modify the color palette, make certain areas transparent, and so on. The possibilities are endless. However, in this article I will cover the basics that will start you off with a good foundation.
An instance of the BitmapData class can eat up the viewer's memory quickly. Every pixel in a bitmap is stored using 4 bytes of memory (1 byte per color channel, ARGB). If you create a bitmap that is 500 x 500 pixels in size, it will take up close to 1MB of RAM. If you don't need a BitmapData object anymore, it is good practice to free the memory that the bitmap is using. The BitmapData class has a method that enables you to do just this, the dispose() method. You use it like this:

myBitmap.dispose()
Remember to keep your Flash movies tidy by cleaning up after yourself. If you don't need a bitmap anymore, then free the memory.
Source: http://www.adobe.com/devnet/flash/articles/image_api_02.html
In a prior article I wrote about calling Java classes from Microsoft .NET using a bridging technology called JNBridge. That works by creating proxy classes in .NET that call into the Java VM.

This is one approach to connecting Java and .NET, but another approach is taken in a project called IKVM. IKVM is an open source project which actually re-creates the Java VM in .NET! So rather than a bridging technology, it is instead a transforming technology. It also has a tool which will take a Java class and migrate its bytecode into .NET intermediate language (IL). I have not used IKVM to replicate complex Java classes, such as the Primavera Integration API, but I have used it on simple classes, and I will show a simple example here using Hello World.
IKVM is not actually a bridging technology, since Java never actually runs. Instead, it is an implementation of Java in .NET, which then runs as true .NET programs. IKVM consists of three main technologies: a Java virtual machine implemented in .NET, a .NET implementation of the Java class libraries, and tools (such as the ikvmc static compiler) that enable Java and .NET interoperability.
We will focus on "ikvmc.exe", the static compiler which turns java classes and jar files into .NET assemblies. From my point of view, I want to re-use pretty simple java tools in .NET, so this utility gets me through 90% of what I need.
In this example, I show how to create a simple Java class, compile it, and then transform it into a .NET dll using IKVM. Then I show how to code, compile, and call it from .NET.
Here is a very simple Java class:
package awi;
public class demo
{
public String Message = "Hello from Java!";
}
Compile it with the javac compiler:
javac demo.java -d ./
Then transform the compiled class into a .NET assembly (demo.dll) with the IKVM static compiler:

ikvmc awi\demo.class
Next we create a C# program which uses this dll.
using System;
using awi;
public class demo
{
static public void Main(string[] args)
{
Console.WriteLine("Hello from .NET!");
// create the proxy class
awi.demo foo = new awi.demo();
// this will output "Hello from Java!"
Console.WriteLine(foo.Message);
}
}
This code is compiled using the C# command line compiler, csc:
csc demo_net.cs /debug /r:demo.dll /r:ikvm.openjdk.classlibrary.dll
Run the program from the command line and it should say:
Hello from .NET!
Hello from Java!
IKVM is a neat package which can transform a Java class or jar file into a runnable .NET assembly. This example was very bare bones, but in the future I hope to test IKVM further, transforming more complex Java tools into .NET and using them in real-world applications.
Source: http://it.toolbox.com/blogs/daniel-at-work/using-ikvm-to-call-java-from-net-21993
Fills texture pixels with raw preformatted data.
This function fills texture pixel memory with raw data. This is mostly useful for loading compressed texture format data into a texture.
Passed data should be of required size to fill the whole texture according to its width, height, data format and mipmapCount; otherwise a UnityException is thrown. Mipmaps are laid out in memory starting from largest, with smaller mip level data immediately following. For example, a 16x8 texture of RGBA32 format with no mipmaps can be filled with a 512-byte array (16x8x4).
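The sizing rule above is just a sum over mip levels of width × height × bytes-per-pixel (4 for an uncompressed format like RGBA32), with each dimension halving per level but never dropping below 1. A sketch of that arithmetic (in Java rather than C#, purely to illustrate the math):

```java
public class TextureSize {
    // Total bytes needed for an uncompressed texture with the given
    // number of mip levels, at bytesPerPixel per texel (4 for RGBA32).
    // Mip dimensions halve each level, clamped to a minimum of 1.
    static int requiredBytes(int width, int height, int bytesPerPixel, int mipCount) {
        int total = 0;
        for (int i = 0; i < mipCount; i++) {
            total += Math.max(1, width >> i) * Math.max(1, height >> i) * bytesPerPixel;
        }
        return total;
    }

    public static void main(String[] args) {
        // The doc's example: 16x8 RGBA32 with no mipmaps -> 512 bytes.
        System.out.println(requiredBytes(16, 8, 4, 1)); // 512
        // Full mip chain for 16x8: 16x8, 8x4, 4x2, 2x1, 1x1 -> 684 bytes.
        System.out.println(requiredBytes(16, 8, 4, 5)); // 684
    }
}
```

Passing a buffer of any other size to LoadRawTextureData is what triggers the UnityException mentioned above. Compressed formats use fewer bytes per texel (PVRTC_RGBA4, for instance, is half a byte), so this helper applies only to uncompressed formats.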
For runtime texture generation, it is also possible to directly write into texture data via GetRawTextureData, which returns a Unity.Collections.NativeArray. This can be faster since it avoids the memory copy that LoadRawTextureData would do.
Call Apply after setting image data to actually upload it to the GPU.
See Also: SetPixels, SetPixels32, SetPixelData, Apply, GetRawTextureData, ImageConversion.LoadImage.
using UnityEngine;
public class ExampleScript : MonoBehaviour
{
    public void Start()
    {
        // Create a 16x16 texture with PVRTC RGBA4 format
        // and fill it with raw PVRTC bytes.
        Texture2D tex = new Texture2D(16, 16, TextureFormat.PVRTC_RGBA4, false);

        // Raw PVRTC4 data for a 16x16 texture. This format is four bits
        // per pixel, so data should be 16*16/2=128 bytes in size.
        // Texture that is encoded here is mostly green with some angular
        // blue and red lines.
        byte[] pvrtcBytes = new byte[] {
            0x30, 0x32, 0x32, 0x32, 0xe7, 0x30, 0xaa, 0x7f,
            0x32, 0x32, 0x32, 0x32, 0xf9, 0x40, 0xbc, 0x7f,
            0x03, 0x03, 0x03, 0x03, 0xf6, 0x30, 0x02, 0x05,
            0x03, 0x03, 0x03, 0x03, 0xf4, 0x30, 0x03, 0x06,
            0x32, 0x32, 0x32, 0x32, 0xf7, 0x40, 0xaa, 0x7f,
            0x32, 0xf2, 0x02, 0xa8, 0xe7, 0x30, 0xff, 0xff,
            0x03, 0x03, 0x03, 0xff, 0xe6, 0x40, 0x00, 0x0f,
            0x00, 0xff, 0x00, 0xaa, 0xe9, 0x40, 0x9f, 0xff,
            0x5b, 0x03, 0x03, 0x03, 0xca, 0x6a, 0x0f, 0x30,
            0x03, 0x03, 0x03, 0xff, 0xca, 0x68, 0x0f, 0x30,
            0xaa, 0x94, 0x90, 0x40, 0xba, 0x5b, 0xaf, 0x68,
            0x40, 0x00, 0x00, 0xff, 0xca, 0x58, 0x0f, 0x20,
            0x00, 0x00, 0x00, 0xff, 0xe6, 0x40, 0x01, 0x2c,
            0x00, 0xff, 0x00, 0xaa, 0xdb, 0x41, 0xff, 0xff,
            0x00, 0x00, 0x00, 0xff, 0xe8, 0x40, 0x01, 0x1c,
            0x00, 0xff, 0x00, 0xaa, 0xbb, 0x40, 0xff, 0xff,
        };

        // Load data into the texture and upload it to the GPU.
        tex.LoadRawTextureData(pvrtcBytes);
        tex.Apply();

        // Assign texture to renderer's material.
        GetComponent<Renderer>().material.mainTexture = tex;
    }
}
Source: https://docs.unity3d.com/ScriptReference/Texture2D.LoadRawTextureData.html
It's not the WSDL that's wrong, it's the schema. You must add an
<xsd:import> after line 03. In order to reference an element or type from
another namespace, you must both declare the namespace and import it. (I'm
assuming that you purposefully left out a bunch of namespace declarations in
the <wsdl:definitions> element to save space.)
01. <wsdl:definitions
02. <wsdl:types>
03. <schema elementFormDefault="qualified" targetNamespace="a.b.c">
03.5 <import namespace="x.y.z"/>
04. <element name="fault" type="tns2:MyFault"/>
05. </schema>
06. <schema elementFormDefault="qualified" targetNamespace="x.y.z">
07. <complexType name="MyFault">
08. ... definition here ...
09. </complexType>
10. </schema>
11. </wsdl:types>
On 3/30/06, Jarmo Doc <jarmod@hotmail.com> wrote:
>
> Can you recommend an easily-downloadable validator? The WSDL's not
> publicly
> available so web-sites that offer to validate based upon a URL won't work
> for me.
>
> I've run the WSDL through various XML validators and they all report fine.
> I don't have a WSDL validator though and haven't been able to locate
> anything good. There is one at the pocketsoap site but it seems to
> require
> you to download half a dozen other things and then tweak a bunch of
> scripts
> which is not exactly what I'd call convenient.
>
> PS the WSDL is very large, and proprietary, so I doubt I'll be able to
> upload the whole thing.
>
>
> >From: Dies Koper <dies@jp.fujitsu.com>
> >Reply-To: axis-user@ws.apache.org
> >To: axis-user@ws.apache.org
> >Subject: Re: Axis java2wsdl fails WebSphere validator
> >Date: Thu, 30 Mar 2006 10:58:35 +0900
> >
> >Have you tried running it through a validating parser?
> >Please try and post the result.
> >#I could try if you post the whole WSDL before I go home tonight.
> >
> >Regards,
> >Dies
> >
> >Jarmo Doc wrote:
> > > My WSDL document, generated directly from Java via Axis 1.3 java2wsdl,
> > > is structured something like this:
> > >
> > > 01. <wsdl:definitions
> > > 02. <wsdl:types>
> > > 03. <schema elementFormDefault="qualified" targetNamespace="a.b.c">
> > > 04. <element name="fault" type="tns2:MyFault"/>
> > > 05. </schema>
> > > 06. <schema elementFormDefault="qualified" targetNamespace="x.y.z">
> > > 07. <complexType name="MyFault">
> > > 08. ... definition here ...
> > > 09. </complexType>
> > > 10. </schema>
> > > 11. </wsdl:types>
> > >
> > > This WSDL appears to be quite acceptable to Axis' wsdl2java and to the
> > > gSOAP equivalent. However, WebSphere's wsdl2java does not like it.
> > > Specifically it complains that tns2:MyFault on line 04 cannot be
> >resolved.
> > >
> > > So, a few questions:
> > >
> > > 1. is the type definition for MyFault on line 07 correct? I had
> assumed
> > > that it was implicitly in namespace x.y.z because that is the
> > > targetNamespace of the enclosing schema (line 06) and hence does not
> > > need to be explicitly decorated with tns2.
> > >
> > > 2. Generally, is it valid to reference MyFault in schema #2 from
> schema
> > > #1 in this way?
> > >
> > > 3. Any idea if this is a bug in WebSphere wsdl2java or in Axis
> >java2wsdl?
> > >
> > > Thanks.
> >
> >
>
>
Source: http://mail-archives.apache.org/mod_mbox/axis-java-user/200603.mbox/%3Cbf414ee60603300525m42d0fa41xf41460dbf84b29d2@mail.gmail.com%3E
02 April 2012 09:03 [Source: ICIS news]
SINGAPORE (ICIS)--
Taiyo’s 310,000 tonne/year PVC unit at
The company also operates a 100,000 tonne/year plant at
Taiyo Vinyl is still unable to offer PVC materials for export and can only supply to the domestic market because it is operating its PVC units at reduced rates amid a shortage of feedstock vinyl chloride monomer (VCM), according to the source.
The producer obtains its feedstock VCM from its parent company, Tosoh Corp, whose 1.2m tonne/year VCM facility at Nanyo has remained shut since November 2011 following an explosion.
Source: http://www.icis.com/Articles/2012/04/02/9546723/japans-taiyo-vinyl-to-shut-osaka-pvc-plant-in-july.html
RL-ARM User's Guide (MDK v4)
#include <rtl.h>
OS_RESULT os_evt_wait_or (
U16 wait_flags, /* Bit pattern of events to wait for */
U16 timeout ); /* Length of time to wait for event */
The os_evt_wait_or function waits for one or more of the specified event flags. The function returns when at least one of the events specified in wait_flags has occurred or when the timeout expires. The event flag or flags that caused the os_evt_wait_or function to complete are cleared before the function returns. You can identify those event flags with the os_evt_get function later.
The os_evt_wait_or function is in the RL-RTX library. The prototype is defined in rtl.h.
Note

The os_evt_wait_or function returns a value to indicate whether an event occurred or the timeout expired.
See also: os_evt_get, os_evt_wait_and
#include <rtl.h>
__task void task1 (void) {
  OS_RESULT result;

  /* Wait until event flag 0x0001 or 0x0002 is set, with a timeout of
     50 system ticks. The flag mask and timeout are example values;
     the original listing is truncated at this point. */
  result = os_evt_wait_or (0x0003, 50);
  if (result == OS_R_EVT) {
    /* At least one of the event flags occurred */
  }
  else {
    /* Timeout expired */
  }
}
Source: https://www.keil.com/support/man/docs/rlarm/rlarm_os_evt_wait_or.htm
Recently I received a copy of the Windows Vista Resource Kit.
This is the best resource kit I have seen since the box set was released for Windows 2000. This book is worth buying as it is a great resource to have on all things Windows Vista. Among the 1500+ pages of goodness are several chapters dedicated to networking topics and to network troubleshooting on Windows Vista. The CD that accompanies the book includes many useful tools, scripts and resources.
If you are an old hand at networking and troubleshooting network issues you might be thinking that you can get by without another resource kit on your shelves. Maybe you can. But...this book contains information on most of the new technologies built into Windows Vista that you haven't read about in past resource kits from Microsoft. LOTS of things have changed between Windows XP and Windows Vista. It’s worth a look.
You can read more about it here:
Recently I recorded this TechNet webcast focusing on the new network diagnostics built into Windows Vista:
This webcast focuses on the Network Diagnostics Framework (NDF) and the Network Connectivity Status Indicator (NCSI). These features will help you determine if you are connected to a local network and the Internet, and get you re-connected when there are common problems.
This webcast is slightly more technical than the Support webcast that I recorded in December.
Recently my team published a whitepaper on the Network Diagnostics that we built into Windows Vista. The target audience for the paper is IT Professionals.
The paper covers the Network Diagnostics Framework (NDF) and the Network Connectivity Status Indicator (NCSI) in depth including related registry keys/values and event log IDs.
This is a great resource if you really want to get the most out of these features.
Recently I recorded this webcast focusing on the new network diagnostics built into Windows Vista:
During this 45 minute webcast I introduce two new features in Windows Vista: the Network Diagnostics Framework (NDF) and the Network Connectivity Status Indicator (NCSI).
The webcast is worth a look if you are interested in advances in network troubleshooting or if you simply want to get a head start learning about some of the new features in Windows Vista.
For the better part of the last two years I have been working as a Program Manager on the Network Experience team in the Core Operating System Division at Microsoft. I have been working on the network diagnostics built into Windows Vista with a team of talented and dedicated Developers, Testers and Program Managers.
These are v1 features that I hope will help every Consumer user running Windows Vista overcome the most common networking issues that they typically experience. I also hope that experienced support folks can also leverage these features to reduce the amount of work they have to do to isolate and fix common networking issues. I plan to make a few blog posts to evangelize these features in the near future.
Why did we build network diagnostics into Windows Vista? In the past, in order to troubleshoot a network issue a knowledgeable individual would have to use several support tools to gather information, test hypotheses, and identify how to fix an issue. As you can see from some of my other blog posts, I developed many support tools in the past to help experienced troubleshooters do just that.
There are several issues that limit the usefulness of support tools:
One of the goals we had for network diagnostics in Windows Vista was to mitigate the need for users to use network support tools when they encountered common network related issues. This simplifies the troubleshooting process for both Consumers and IT Pros and makes network diagnostics more accessible for everyone. We wanted to simplify the input and output that users had to deal with (this is much harder to do than it sounds). Since these network diagnostics are built into Windows Vista, the output is localized in all the languages that Windows Vista supports and being built-in will also improve our ability to service network diagnostics over time.
I will introduce you to some of these new features included in Windows Vista in my upcoming blog posts.
Depending on the operating system and/or specific sniffer behavior, tools that detect sniffers can generate false positive and false negative results.
Both of these tools essentially have the same functionality:
Both tools require the .Net Framework to run. This means you need the .Net Framework installed on the system you run Promqry or PromqryUI from, but not on the remote systems you want to query. If you don’t have the .Net Framework already installed, you can get it from here: The “general users” install package will be sufficient for most users.
You can get both versions of Promqry (for free) from the download center on microsoft.com using these links:
A command line version:
A version with a GUI:
I hope you find these tools useful.
One of the most popular tools that I have developed over the last few years is DNSLint. This tool shipped with the Support Tools on the Windows Server 2003 CD and has been available for download from the download center on microsoft.com for a few years.
I frequently get asked where the idea for this tool came from, so I thought I would post the story here. I developed this tool when I worked on the Enterprise Platform Support Networking team in Product Support Services (here at Microsoft). One of the services that this team supports is DNS. After Windows 2000 shipped it seemed like almost every customer I talked to needed help designing a DNS namespace or had implemented a design that they needed help with. The primary reason for this is that Windows 2000 requires DNS for Active Directory and many customers were upgrading from Windows NT 4.0 which did not require DNS for domain building purposes. Many people needed help with DNS during this transition period.
After spending hours with nslookup troubleshooting lame delegation issues, I decided to build a tool to automate the process and save everyone some time. DNSLint was born. Travis Adams from the Enterprise Platform Support Directory Services team asked me to add a feature to help troubleshoot Active Directory replication issues caused by missing or inconsistent DNS records. Then I added a feature that allows you to query all the DNS records specified in an input file. With this feature you can check all the DNS records for all of your critical servers (domain controllers, web servers, SQL servers, etc) on every DNS server that should know about them in a very short time frame.
I receive e-mail about DNSLint weekly. A frequently asked question I get about this tool is not a technical question at all: Is the Lint in DNSLint an acronym and if so…what does it mean? Just to clear this up…lint is something you find in your blue jeans after they come out of the dryer. When you find lint, it is useless and sort of embarrassing…so you quickly throw it away. Not unlike outdated or inaccurate DNS records for important systems. ;>)
You can read all the technical details about DNSLint in this Knowledge Base article:
I also recorded a webcast for your listening and viewing pleasure:
Also, there are lots of good DNS resources at the DNS center:
When Mark Minasi wrote this very flattering article about me, I felt it was only right to develop the tool that he had been waiting years for.
Why does Mark, and many other people, think that NetBIOS name resolution is still so important? You still run WINS on your network…right? The vast majority of customers that I talk to, still run a WINS service somewhere on their network.
Many people anticipated the demise of WINS when Windows 2000 was released. They had a vision of a simplified network where the only types of name resolution problems to troubleshoot were related to DNS. No more name registration problems, replication issues, record tomb stoning riddles or secure channel troubleshooting to do. This was going to be a world where every application was directory aware and discoverable via protocols like DNS and LDAP.
There are several reasons why WINS is still necessary on most networks. The biggest reason I can think of is that many applications still use NetBIOS to provide some functionality to users. In the past I tried to compile a list of such applications, but this was a daunting task because an application’s use of NetBIOS can be very subtle.
As it turns out, for most administrators, the question of whether to use WINS or not is an easy one to answer and doesn’t require an exhaustive list of applications that use NetBIOS. Two of the most popular applications that have shipped with different Windows operating systems over the years are Network Neighborhood and My Network Places. These applications are used heavily by administrators and the end-users that they support. End-users love these applications.
If you or your users use these applications, you are probably going to want to use WINS to help populate the lists of network resources that these applications present to the user. These lists are generated and maintained by the NetBIOS Browsing mechanism built into the Windows operating system. In fact, if you run applications that allow the user to open and/or save data across the local network or pick a computer to connect to, i.e. select a server or workstation from a list of network resources, then it’s a good chance those applications use the NetBIOS Browsing mechanism to populate those lists. Where NetBIOS Browsing is used, WINS is typically involved too. WINS helps facilitate the distribution of the browse lists of network resources, to all the Windows systems on a network.
Some applications are now using mechanisms other than NetBIOS to populate these types of lists. But, upon close inspection, it may surprise you how many applications that you use still rely on the NetBIOS Browsing mechanism and WINS.
If you have been considering retiring a WINS server on your network it would be prudent to determine how much it is being used before stopping the service. One method that many customers have found effective is to use Performance Monitor on the WINS server. When WINS is installed on a server, some performance monitor counters for WINS are also installed. These counters can tell you how many queries and responses the WINS server is handling.
If it turns out that you still need to run the WINS service, there are a few Resource Kit tools to help you manage it and troubleshoot problems. As I mentioned above, I developed a tool that may help you with troubleshooting and name registration/record availability confirmation tasks.
Nblookup.exe is a tool that is modeled closely on the nslookup.exe utility used to troubleshoot DNS issues. It is relatively small (around 102 KB) and does not have an installation program (just copy it into any directory and run it). It allows you to query WINS servers for name registration records, just like nslookup allows you to query DNS servers for DNS records. Unlike most other WINS tools you may have used, nblookup does not require an authenticated connection to the WINS server, i.e. you don't have to run this tool in an administrator context. I also added some features, such as the ability to query/verify large numbers of records very quickly using an input file. This makes it very easy to quickly determine whether all of the important systems/applications on your network are registered and discoverable using all of the WINS servers on your network.
This tool is not part of any resource kit, but it can be downloaded (for free) from microsoft.com, where you can also read all the details in the accompanying Knowledge Base article.
Mark Minasi was kind enough to include nblookup as one of his "The Magnificent Six" list of tools. Did I mention that Mark is a great author? ;>)
If you still run WINS, and you probably do, it may be worth your while to add nblookup to your toolkit.
operator== (hash_multimap)
Visual Studio 2015
Tests if the hash_multimap object on the left side of the operator is equal to the hash_multimap object on the right side.
The comparison between hash_multimap objects is based on a pairwise comparison of their elements. Two hash_multimaps are equal if they have the same number of elements and their respective elements have the same values. Otherwise, they are unequal.
In Visual C++ .NET 2003, members of the <hash_map> and <hash_set> header files are no longer in the std namespace, but rather have been moved into the stdext namespace. See The stdext Namespace for more information.
// hash_multimap_op_eq.cpp
// compile with: /EHsc
#include <hash_map>
#include <iostream>

int main( )
{
   using namespace std;
   using namespace stdext;
   hash_multimap <int, int> hm1, hm2, hm3;

   for ( int i = 0 ; i < 3 ; i++ )
   {
      hm1.insert ( pair <int, int> ( i, i ) );
      hm2.insert ( pair <int, int> ( i, i * i ) );
      hm3.insert ( pair <int, int> ( i, i ) );
   }

   if ( hm1 == hm2 )
      cout << "The hash_multimaps hm1 and hm2 are equal." << endl;
   else
      cout << "The hash_multimaps hm1 and hm2 are not equal." << endl;

   if ( hm1 == hm3 )
      cout << "The hash_multimaps hm1 and hm3 are equal." << endl;
   else
      cout << "The hash_multimaps hm1 and hm3 are not equal." << endl;
}
The hash_multimaps hm1 and hm2 are not equal. The hash_multimaps hm1 and hm3 are equal.
I was given a project to complete:
Help determine how much time someone has left to meet a deadline
- Ask a user to enter the deadline for their project
- Tell them how many days they have to complete the project
- For extra credit, give them the answer as a combination of weeks & days (Hint: you will need some of the math functions from the module on numeric values)
import datetime
currentday=datetime.date.today()
#set variable to receive deadline for project
deadLine = 0
deadLine = raw_input('when is the deadline for your project? (dd/mm/YYYY) ')
deadLine=datetime.dateime.strptime(deadLine, '%d/%m/%Y').date()
daysLeft= deadLine-currentday
print 'Number of days left for your project is : '
print daysLeft
when is the deadline for your project? (dd/mm/YYYY) 21/10/2016
Traceback (most recent call last):
File "C:\Users\Oluwaseun Okungbowa\Desktop\Video editing and python programming\projectdeadline.py", line 7, in <module>
deadLine=datetime.dateime.strptime(deadLine, '%d/%m/%Y').date()
AttributeError: 'module' object has no attribute 'dateime'
#import the datetime class
import datetime
#declare and initialize variables
strDeadline = ""
totalNbrDays = 0
nbrWeeks = 0
nbrDays = 0
#Get Today's date
currentDate = datetime.date.today()
#Ask the user for the date of their deadline
strDeadline = input("Please enter the date of your deadline (mm/dd/yyyy): ")
deadline = datetime.datetime.strptime(strDeadline,"%m/%d/%Y").date()
#Calculate number of days between the two dates
totalNbrDays = deadline - currentDate
#For extra credit calculate results in weeks & days
nbrWeeks = totalNbrDays.days / 7
#The modulo will return the remainder of the division
#which will tell us how many days are left
nbrDays = totalNbrDays.days%7
#display the result to the user
print("You have %d weeks" %nbrWeeks + " and %d days " %nbrDays + "until your deadline.")
Please enter the date of your deadline (mm/dd/yyyy): 10/21/2016
Traceback (most recent call last):
File "C:\Users\Oluwaseun Okungbowa\Desktop\Video editing and python programming\projectdeadlineteachers.py", line 16, in <module>
deadline = datetime.datetime.strptime(strDeadline,"%m/%d/%Y").date()
TypeError: strptime() argument 1 must be string, not int
Install python3 so that you and your tutor are on the same page.
However, if you do decide to stick with python 2.7, this will fix your problem.
Your problem is in this line
#Ask the user for the date of their deadline
strDeadline = input("Please enter the date of your deadline (mm/dd/yyyy): ")
Here's an example of what I mean
>>> input()
5
5
>>> input()
10/2
5
>>> input()
10/2/2016
0
Python is treating your date as arithmetic division of integers. Change input() to raw_input() to accept the string, i.e.
strDeadline = raw_input("Please enter the date of your deadline (mm/dd/yyyy): ")
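Putting the fixes together, here is a sketch of the deadline arithmetic as a function (the function and variable names are mine, not from the thread); isolating the calculation from the input-reading makes it behave the same on Python 2 and 3:

```python
import datetime

def time_left(deadline_str, today, fmt="%m/%d/%Y"):
    """Return (weeks, days) remaining between today and the deadline.

    deadline_str is e.g. "10/21/2016"; today is a datetime.date.
    """
    deadline = datetime.datetime.strptime(deadline_str, fmt).date()
    total = (deadline - today).days
    # Floor division gives whole weeks; modulo gives the leftover days.
    return total // 7, total % 7
```

For example, from October 1st to October 21st, 2016 there are 20 days, i.e. 2 weeks and 6 days.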
Copyright © 2007 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document is an introduction to RDFa, a method for achieving precisely this kind of structured data embedding in XHTML. The normative specification of RDFa may be found in [RDFa-SYNTAX], the joint work of the RDF-in-XHTML Task Force, which expects to advance the RDFa Syntax to Recommendation status and then publish a final version of this Primer as a W3C Working Group Note.
This version of the RDFa Primer is a substantial update to the previous version, representing several design changes since the previous version was published. These are summarized in the Changes section.
Comments on this Working Draft are welcome and may be sent to public-rdf-in-xhtml-tf@w3.org; please include the text "comment" in the subject line. All messages received at this address are viewable in a public archive.
1 Purpose and Preliminaries
2 Simple Data: Publishing Events and Contacts
2.1 The Basic XHTML, before RDFa
2.2 Publishing An Event
2.3 Publishing Contact Information
2.4 The Complete XHTML with RDFa
2.5 Working Within a Fragment of the XHTML
3 Advanced Concepts: Custom Vocabularies, Document Fragments, Complex Data, ...
3.1 Creating a Custom Vocabulary and Using Compact URIs
3.2 Qualifying Other Documents and Document Chunks
3.3 Data Types
3.4 Layers of Structure — Subresources
3.5 Using @src on img, and the use of @rev
3.6 Overriding @href and @src
4 RDF Correspondence
4.1 Events and Contact Information
4.2 Custom Vocabularies and Datatypes
4.3 Layered Data and Subresources
4.4 Using @src on img, and the use of @rev
4.5 Overriding @href and @src
5 Case Studies
6 Acknowledgments
7 Bibliography
RDFa allows structured data to be expressed using existing XHTML attributes and a handful of new ones. Where data, such as a photo caption, is already present on the page for human readers, the author need not repeat it for automated processes to access it. A Web publisher can easily reuse data fields, e.g. an event's date, defined by other publishers, or create new ones altogether. RDFa gets its expressive power from RDF [RDFPRIMER], though the reader need not understand RDF before reading this document.
For simplicity, instead of using RDF terminology, we use the word "field" to indicate a unit of labeled information, e.g. the "first name" field indicates a person's first name.
RDFa uses Compact URIs, which express a URI using a prefix, e.g. dc:title, where dc: stands for the Dublin Core namespace URI. In this document, for simplicity's sake, the following prefixes are assumed to be already declared: dc for Dublin Core [DC], foaf for Friend-Of-A-Friend [FOAF], cc for Creative Commons [CC], and xsd for XML Schema Definitions [XSD].
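As a rough illustration (not part of the Primer itself), this prefix mechanism can be modeled as a simple lookup; the namespace URI below is illustrative, not one of the Primer's real vocabularies:

```python
# Toy model of compact-URI (CURIE) expansion: a prefix map turns a
# compact name like "ex:title" into a full URI. The mapping here is
# illustrative; a real document declares its own prefixes.
PREFIXES = {
    "ex": "http://example.org/vocab#",
}

def expand(curie, prefixes=PREFIXES):
    """Expand a compact URI such as "ex:title" into a full URI."""
    prefix, _, reference = curie.partition(":")
    return prefixes[prefix] + reference
```

So expand("ex:title") yields "http://example.org/vocab#title": the prefix is replaced by the URI it was declared to stand for.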
We use standard XHTML notation for elements and attributes: both are denoted using a fixed-width lowercase font, e.g. div, and attributes are differentiated using a preceding '@' character, e.g. @href.
For an overview of the use cases for RDFa, the user should consult [RDFa-USECASES]. For in-depth details on RDFa, including how to implement an RDFa parser, the reader should consult [RDFa-SYNTAX].
Jo keeps a private blog for her friends and family.
Jo is organizing one last summer Barbecue, which she hopes all of her friends and family will attend. She blogs an announcement of this get-together at her private blog, which also includes her contact information:
<html> <head><title>Jo's Friends and Family Blog</title></head> <body> ... <p> I'm holding one last summer Barbecue, on September 16th at 4pm. </p> ... <p class="contactinfo"> Jo Smith. Web hacker at <a href=""> Example.org </a>. You can contact me <a href="mailto:jo@example.org"> via email </a>. </p> ... </body> </html>
RDFa allows Jo to express this structured data using existing XHTML attributes and a small handful of additional attributes. Since this is a calendar event, Jo will specifically use the iCal vocabulary [ICAL-RDF] to denote the data's structure.
The first step is to reference the iCal vocabulary within the XHTML page, so that Jo's friends' Web browsers can look up the calendar concepts and make use of them:
<html xmlns: ...
Then, Jo declares a new event:
<p instanceof="cal:Vevent"> ... </p>
Note how @instanceof is used here to define the type of data being expressed. The use of this attribute on the p element ensures that, by default, data expressed inside this element refers to the same event. There, Jo can set up the event fields, reusing the existing XHTML. For example, the event summary can be labeled as:
I'm holding <span property="cal:summary">one last summer Barbecue</span>,
The @property attribute on the span indicates the data field cal:summary. The existing content, "one last summer Barbecue", is the value of this field. Sometimes, this is not the desired effect. Specifically, the start time of the event should be displayed pleasantly ("September 16th"), but should be represented in a machine-parsable way: 20070916T1600-0500, the standard ISO 8601 format used by iCal (which is cumbersome for human readers). In this case, the markup needs only a slight modification:
<span property="cal:dtstart" content="20070916T1600-0500"> September 16th at 4pm </span>
The actual content of the span, "September 16th at 4pm", is ignored by RDFa parsers, which instead read from the explicit @content. The full markup is then:
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.0//EN" ""> <html xmlns: > ... </body> </html>
Note that Jo could have used any other XHTML element, not just span, to mark up her event. In other words, when the event information is already laid out in the XHTML using elements such as h1, em, div, etc., Jo can simply add the @instanceof data type declaration, @property, and optionally @content to mark up the event.
(For the RDF-inclined reader, the RDF triples that correspond to the above markup are available in Section 4.1 Events and Contact Information.)
Now that Jo has published her event in a human-and-machine-readable way, she realizes there is much data on her blog that she can mark up in the same way. Her contact information, in particular, is an easy target:
... <p class="contactinfo"> Jo Smith. Web hacker at <a href=""> Example.org </a>. You can contact me <a href="mailto:jo@example.org"> via email </a>. </p> ...
Jo discovers the vCard RDF vocabulary [VCARD-RDF], which she adds to her existing page. The vCard format is a standard representation for contact information. Thus, Jo uses the prefix
contact to designate this vocabulary. Note that adding the vCard vocabulary is just as easy and does not interfere with the already added iCal vocabulary:
<html xmlns: ...
Jo then sets up her vCard using RDFa, by deciding that the p will be the container for her vCard. She notes that the vCard schema does not require declaring a vCard type. Instead, it is recommended that a vCard refer to a Web page that identifies the individual. For this purpose, Jo can use @about, a new attribute introduced by RDFa indicating that all contained XHTML pertains to Jo's designated URI. The @about value is inherited from ancestor elements in the XHTML: in this case, all HTML contained within p will apply to the @about on the p element.
... <p class="contactinfo" about=""> <!-- everything here pertains to --> </p> ...
"Simple enough!" Jo realizes, noting that RDFa does not interfere with her existing markup, in particular the
@class she uses for styling. She adds her first vCard fields: name, title, organization and email.
... <p class="contactinfo" about=""> <span property="contact:fn">Jo Smith</span>. <span property="contact:title">Web hacker</span> at <a rel="contact:org" href=""> Example.org </a>. You can contact me <a rel="contact:email" href="mailto:jo@example.org"> via email </a>. </p> ...
Notice how Jo was able to use @rel directly within the anchor element to designate her organization and email address. @rel indicates the type of relationship between the current URI, designated by @about, and the target URI, designated by @href. Specifically, contact:org indicates a relationship of type "vCard organization", while contact:email indicates a relationship of type "vCard email".
For simplicity's sake, we have slightly abused the vCard vocabulary above: vCard technically requires that the type of the email address be specified, e.g. work or home email. In Section 3.4 Layers of Structure — Subresources, we show how @rel can be used without a corresponding @href, in order to create subresources and provide correct markup for expressing data such as full vCards.
Jo's complete XHTML with RDFa is thus:
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.0//EN" ""> <html xmlns: > ... <p class="contactinfo" about=""> <span property="contact:fn">Jo Smith</span>. <span property="contact:title">Web hacker</span> at <a rel="contact:org" href=""> Example.org </a>. You can contact me <a rel="contact:email" href="mailto:jo@example.org"> via email </a>. </p> ... </body> </html>
If Jo changes her email address link, her organization, or the description of her event, RDFa parsers will automatically pick up these changes in the marked up, structured data. The only place where this doesn't happen is when @content overrides the value displayed in the element's content, in which case the @content value must be updated along with the displayed value. This limitation is inevitable when human and machine readability are at odds.
(Once again, the RDF-inclined reader will want to consult the resulting RDF triples 4.1 Events and Contact Information.)
What if Jo does not have complete control over the XHTML of her blog? For example, she may be using a content management system which makes it particularly difficult to add the vocabularies in the html start-tag without adding them to every page on her site. Or, she may be using a Web provider that doesn't allow her to change the header of the page to begin with.
Fortunately, RDFa uses compact URIs, where prefixes can be declared using standard XML namespace conventions. Thus, vocabularies can be imported "locally" to an XHTML element. Jo's blog page could express the exact same structured data with the following markup:
<html> <head> <title>Jo's Friends and Family Blog</title> </head> <body> ... <p instanceof="cal:Vevent" xmlns: I'm holding <span property="cal:summary"> one last summer Barbecue, </span> on <span property="cal:dtstart" content="20070916T1600-0500"> September 16th at 4pm. </span> </p> ... <p class="contactinfo" about="" xmlns: <span property="contact:fn"> Jo Smith </span>. <span property="contact:title"> Web hacker </span> at <a rel="contact:org" href=""> Example.org </a>. You can contact me <a rel="contact:email" href="mailto:jo@example.org"> via email </a>. </p> ... </body> </html>
In this case, each p only needs one vocabulary: the first uses iCal, the second uses vCard. When needed, more than one vocabulary can be imported into any element, not just html. This makes copying and pasting XHTML with RDFa much easier. In particular, it allows Web widgets to carry their own RDFa in a self-contained manner.
RDFa can do much more than the simple examples described above. This next section explores some of its advanced capabilities:
All field names and data types in RDFa are URIs; dc:title, for example, is the compact form of the "Dublin Core title" field's URI. In RDFa, we often use compact versions of those URIs, declared via prefixes. This helps keep the markup short and clean:
<div xmlns: <span property="dc:title">Yowl</span>, created by <span property="dc:creator">Mark Birbeck</span>. </div>
Shutr, a photo-sharing website, expects that users will be able to easily develop programs that download photo pages marked up with RDFa, extract the structured data, and provide new functionality, e.g. "image newsreaders", family photo-album screensavers, etc. Eventually, RDFa-enabled browsers can even pick up this structured data as the user surfs.
Some structured-data vocabularies define relationships (including full equivalence) between concepts that develop organically in the community.
Other publishers can then choose to reuse Shutr's published concepts where they see fit. Using RDF, Shutr can add information about the concepts at each concept's URI, indicating a human-readable description for the concept, what other concepts it may be related to and in what way.
Shutr may choose to present many photos in a given XHTML page. In particular, at an album's URI, all of the album's photos will appear inline. Structured data about each photo can be included simply by specifying @about, which indicates the resource that fields refer to within that XHTML element.
<ul> <li> <a href="/user/markb/photo/23456"> <img src="/user/markb/photo/23456_thumbnail" /> </a> <span about="/user/markb/photo/23456" property="dc:title">Sunset in Nice</span> </li> <li> <a href="/user/markb/photo/34567"> <img src="/user/markb/photo/34567_thumbnail" /> </a> <span about="/user/markb/photo/34567" property="dc:title">W3C Meeting in Mandelieu</span> </li> </ul>
This same approach applies when the field value is a URI. For example, each photo in the album has a creator and may have its own license. We can simplify the markup by using @about to refer to the photo once, then adding as many fields as we want for that photo between the start- and end-tags of the element with the @about value:
<ul> <li about="/user/markb/photo/23456"> <a href="/user/markb/photo/23456"> <img src="/user/markb/photo/23456_thumbnail" /> </a> <span property="dc:title">Sunset in Nice</span> taken by photographer <a property="dc:creator" href="/user/markb">Mark Birbeck</a>, licensed under a <a rel="cc:license" href=""> Creative Commons Non-Commercial License </a>. </li> <li about="/user/markb/photo/34567"> <a href="/user/markb/photo/34567"> <img src="/user/markb/photo/34567_thumbnail" /> </a> <span property="dc:title">W3C Meeting in Mandelieu</span> taken by photographer <a property="dc:creator" href="/user/stevenp">Steven Pemberton</a>, licensed under a <a rel="cc:license" href=""> Creative Commons Commercial License </a>. </li> </ul>
Shutr also lets users describe their cameras on a separate page, with the @id XHTML attribute used to designate each camera.
RDFa can be used to add structured data in this situation, too.
Consider the page at /user/markb/cameras, which, as its URI implies, lists Mark Birbeck's cameras. Its XHTML contains:
<ul> <li id="nikon_d200"> Nikon D200, 3 pictures/second. </li> <li id="canon_sd550"> Canon Powershot SD550, 5 pictures/second. </li> </ul>
and the photo page will then include information about which camera was used to take each photo:
<ul> <li> <img src="/user/markb/photo/23456_thumbnail" /> ... using the <a href="/user/markb/cameras#nikon_d200">Nikon D200</a>, ... </li> ... </ul>
The RDFa syntax for formally specifying the relationship is exactly the same as before, as expected:
<ul> <li about="/user/markb/photo/23456"> ... using the <a rel="shutr:takenWithCamera" href="/user/markb/cameras#nikon_d200">Nikon D200</a>, ... </li> ... </ul>
Then, the XHTML snippet at /user/markb/cameras is:
<ul> <li id="nikon_d200" about="#nikon_d200"> <span property="dc:title">Nikon D200</span> <span property="shutr:frameRate">3 pictures/second</span> </li> <li id="canon_sd550" about="#canon_sd550"> <span property="dc:title">Canon Powershot SD550</span> <span property="shutr:frameRate">5 pictures/second</span> </li> </ul>
Notice, again, how text can serve both for the human and machine readable versions: there is no need to keep a separate file up-to-date.
The RDF interpretation of the above markup can be found in 4.2 Custom Vocabularies and Datatypes.
Data Types
Earlier, we saw how to use @content to provide a machine-readable value. Adding a datatype is only one more attribute: @datatype. For example, when expressing the date on which a photo was taken:
<ul> <li about="/user/markb/photo/23456"> ... taken on <span property="dc:date" content="2007-05-12" datatype="xsd:date"> May 12th, 2007 </span> ... </li> </ul>
RDFa uses XML schema data types [XSD-DT].
The RDF interpretation of the above markup can be found in 4.2 Custom Vocabularies and Datatypes.
Layers of Structure — Subresources
A Shutr photo page may also include a caption describing the people depicted in the photo:
<div> This photo depicts Mark Birbeck (mark@example.org) and Steven Pemberton (steven@example.org). </div>
The simplest way to mark this up without attempting to provide unique identities for photo subjects is to define subresources, effectively new resources that are not given a name. (In RDF, we call these blank nodes.) The following markup will do just that:
<div about="/user/markb/photo/23456"> This photo depicts <span rel="foaf:depicts"> <span property="foaf:firstname">Mark</span> <span property="foaf:lastname">Birbeck</span> (<span property="foaf:mbox">mark@example.org</span>) </span> and <span rel="foaf:depicts"> <span property="foaf:firstname">Steven</span> <span property="foaf:lastname">Pemberton</span> (<span property="foaf:mbox">steven@example.org</span>). </span> </div>
The above markup uses the FOAF (Friend-Of-A-Friend) vocabulary, which includes the field foaf:depicts relating a photograph to a person depicted in it. The use of @rel without @href triggers the definition of a new subresource, which then becomes the value of the foaf:depicts field. The RDF interpretation of this markup can be found in 4.3 Layered Data and Subresources.
Using @src on img, and the use of @rev
Shutr authors may notice that, in a number of cases, the URI of the photos it displays inline using the img element is actually the same as the @about for marking up the photo's fields. In order to minimize markup, RDFa allows authors to make use of @src on an img element: it behaves just like @href.
Consider Mark's profile page on Shutr, which lists all of his albums and cameras. This page will likely include a picture of Mark himself:
<div> <h1>Mark Birbeck's Photos</h1> <img src="/user/markb/profile_photo.jpg" /> ... </div>
Shutr may want to indicate that this is Mark's photo, using the FOAF field foaf:img defined specifically for this purpose. This can be accomplished as follows:
<div about="/user/markb"> <h1>Mark Birbeck's Photos</h1> <img rel="foaf:img" src="/users/markb/profile_photo.jpg" /> ... </div>
Shutr then notes that the profile photo is not only Mark's profile photo, it also happens to depict Mark, since Mark obviously appears in his own profile photo (hopefully). This requires expressing an inverse relationship, where the field is actually added to the image's URI, not to Mark's profile.
For this purpose, RDFa provides @rev, which can be applied to img or any other element. @rev functions much like @rel, except the direction of the relationship is reversed. That is, while @rel tells us that Mark has a FOAF image at /user/markb/profile_photo.jpg, @rev tells us that this /user/markb/profile_photo.jpg depicts Mark.
<div about="/user/markb"> <h1>Mark Birbeck's Photos</h1> <img rel="foaf:img" rev="foaf:depicts" src="/user/markb/profile_photo.jpg" /> ... </div>
In other words, Mark has, as his main image, /user/markb/profile_photo.jpg, which of course happens to depict Mark.
The RDF interpretation of the above markup can be found in 4.4 Using @src on img, and the use of @rev.
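As a toy illustration (not from the Primer itself), the direction flip between @rel and @rev can be written out explicitly; the function names are mine:

```python
# @rel emits (about, field, target); @rev emits the reversed triple
# (target, field, about). Both take the current resource (@about),
# the field name, and the target URI (@href or @src).
def rel_triple(about, field, target):
    """Triple generated by @rel: the current resource is the subject."""
    return (about, field, target)

def rev_triple(about, field, target):
    """Triple generated by @rev: subject and object are swapped."""
    return (target, field, about)
```

With about="/user/markb" and the image URI as the target, rev_triple makes the image the subject, which is exactly the foaf:depicts reading described above.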
Overriding @href and @src
When the displayed content is not quite the correct machine-readable data, we used @content to override it. In some cases, the navigable link is not quite the right machine-readable data, either. Consider, for example, the case where the displayed photo on Mark Birbeck's profile is, in fact, just a thumbnail, while his official FOAF image is the full-sized version. In this case, and in any case where @href or @src appears and needs to be overridden by another URI, another RDFa attribute, @resource, is used.
The XHTML written above can then be transformed to:
<div about="/user/markb"> <h1>Mark Birbeck's Photos</h1> <img rel="foaf:img" resource="/user/markb/profile_photo.jpg" src="/user/markb/profile_photo_thumbnail.jpg" /> ... </div>
Here, the loaded image will use the thumbnail, but an RDFa-aware browser will know that the machine-readable data only cares about the full-sized version specified in
@resource.
@resource can be particularly useful in cases where the URI is not navigable in any way, e.g. a book's ISBN number represented as URN:ISBN:0-395-36341-1.
The RDF interpretation of this markup can be found in 4.5 Overriding @href and @src.
RDF [RDF] is the W3C standard for interoperable structured data. Though one need not be versed in RDF to understand the basic concepts of RDFa, it helps to know that RDFa is effectively the embedding of RDF in XHTML.
Briefly, RDF is an abstract generic data model. An RDF statement is a triple, composed of a subject, a predicate, and an object. For example, the following triple has /photos/123 as subject, dc:title as predicate, and the literal "Beautiful Sunset" as object:
</photos/123> dc:title "Beautiful Sunset" .
A triple effectively relates its subject and object by its predicate: the document /photos/123 has, as title, "Beautiful Sunset".
Structured data in RDF is represented as a set of triples. The notation above is called N3 [N3]. URIs are written using angle brackets, literal string values are written in quotation marks, and compact URIs are written directly (with prefixes declared earlier).
A collection of triples is often called a graph because when the same URIs appear as the subjects and objects of multiple triples, the triples can be treated as a collective unit that is greater than the sum of its parts. (While it is not unusual to have a program create a visual representation of one of these graphs, it's actually named after the computer science data structure graph, not the visual presentation of one.)
All subjects and predicates are nodes of such a graph, while objects can be nodes or literal string values. Nodes can be URIs, or they can be blank, in which case they are not addressable by other documents. Blank nodes, denoted _:bnodename, are particularly useful when expressing layered data without having to assign URIs to intermediate nodes.
In Section 2.2 Publishing An Event, Jo published an event without giving it a URI. The RDF triples extracted from her markup are:
_:blanknode0 rdf:type cal:Vevent; cal:summary "one last summer Barbecue"; cal:dtstart "20070916T1600-0500" .
In plain English, these triples mean that an unnamed item, _:blanknode0, is a calendar event, with event summary "one last summer Barbecue" and start time "20070916T1600-0500".
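As a sketch (assuming nothing beyond the triple model just described, and using no real RDF library), a graph can be modeled as a set of (subject, predicate, object) tuples and queried directly:

```python
# Toy triple store mirroring the event triples above.
BLANK = "_:blanknode0"

graph = {
    (BLANK, "rdf:type", "cal:Vevent"),
    (BLANK, "cal:summary", "one last summer Barbecue"),
    (BLANK, "cal:dtstart", "20070916T1600-0500"),
}

def objects(graph, subject, predicate):
    """Return every object o such that (subject, predicate, o) is in the graph."""
    return {o for s, p, o in graph if s == subject and p == predicate}
```

Querying objects(graph, BLANK, "cal:summary") recovers the event summary, exactly the "field lookup" reading of a triple given above.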
In Section 2.3 Publishing Contact Information, Jo published contact information. The RDFa is parsed to generate the following RDF triples:
<> contact:fn "Jo Smith"; contact:title "Web Hacker"; contact:org <>; contact:email <mailto:jo@example.org>.
In plain English, these triples mean that the data item designated by the URL <> has full name "Jo Smith", job title "Web Hacker", organization example.org, and email contact jo@example.org.
The XHTML+RDFa in the 3.2 Qualifying Other Documents and Document Chunks yields the following triples:
</user/markb/photo/23456> dc:title "Sunset in Nice" . </user/markb/photo/34567> dc:title "W3C Meeting in Mandelieu" .
The more complete example, including licensing information, yields the following triples:
</user/markb/photo/23456> dc:title "Sunset in Nice" ; dc:creator "Mark Birbeck" ; cc:license <> . </user/markb/photo/34567> dc:title "W3C Meeting in Mandelieu" ; dc:creator "Steven Pemberton" ; cc:license <> .
The example that links a photo to the camera it was taken with corresponds to the following triple:
</user/markb/photo/23456> shutr:takenWithCamera </user/markb/cameras#nikon_d200> .
while the complete camera descriptions yields:
<#nikon_d200> dc:title "Nikon D200" ; shutr:frameRate "3 pictures/second" . <#canon_sd550> dc:title "Canon Powershot SD550" ; shutr:frameRate "5 pictures/second" .
Finally, @datatype as in 3.3 Data Types indicates a datatype as follows:
</user/markb/photo/23456> dc:date "2007-05-12"^^xsd:date .
The subresources example in 3.4 Layers of Structure — Subresources, with photos annotated with the individuals depicted, correspond to RDF blank nodes as follows:
</user/markb/photo/23456> foaf:depicts _:span0 ; foaf:depicts _:span1 . _:span0 foaf:firstname "Mark" ; foaf:lastname "Birbeck" ; foaf:mbox "mark@example.org" . _:span1 foaf:firstname "Steven" ; foaf:lastname "Pemberton" ; foaf:mbox "steven@example.org" .
Note specifically how the blank node, in this example, is both the object of a first triple, and the subject of a number of follow-up triples. This is specifically the point of the layered markup approach: to create an unnamed subresource.
Using @src on img, and the use of @rev
The use of @src on an image, as per 3.5 Using @src on img, and the use of @rev, yields exactly the same triple as if @src were @href:
</user/markb> foaf:img </user/markb/profile_photo.jpg> .
@rev specifies a triple with the subject and object reversed:
</user/markb> foaf:img </user/markb/profile_photo.jpg> . </user/markb/profile_photo.jpg> foaf:depicts </user/markb> .
Overriding @href and @src
Where @resource is present, as per 3.6 Overriding @href and @src, the same triples are generated, with the value of @resource replacing the value of @href. Even though the @href points to /user/markb/profile_photo_thumbnail.jpg, the corresponding triple is:
</user/markb> foaf:img </user/markb/profile_photo.jpg> .
Since Working Draft #3 of this document:
The class attribute is no longer used to indicate rdf:type, as this was found to be too confusing. We now use the new instanceof attribute.
I'll preface this question with I'm new to C# and Xamarin, and working on my first Xamarin Forms app, targeting iOS and Android.
The app is using HttpClient to pass requests to an api, and the headers in the response return session cookies that are used to identify the current user. So once I've received an initial response and those cookies have been stored in the CookieContainer, I want to store that CookieContainer in some global scope so it can be reused and passed with all subsequent requests.
I've read that attempting to serialize the cookie data can be problematic, and HttpOnly cookies are included in the response, which I apparently can't access in the CookieContainer. Because of this, I've also tried enumerating through the values returned in the Set-Cookie header and storing them as a comma-delimited string, then using SetCookies on the CookieContainer with that string for subsequent requests. But that seems overly complex, and trying to do so results in a consistent, vague error, "Operation is not valid due to the current state of the object.", when requests are made. So I'm hoping to simply reuse the entire CookieContainer object.
So more pointedly, my questions are:
Where is an appropriate place to store a CookieContainer object so that it will persist throughout the app's lifecycle, and preferably still be available when the app goes into the background and is resumed. Is simply declaring it as a static variable in my WebServices class good enough?
If I do reuse the CookieContainer in this way, will the individual cookies be automatically updated on subsequent requests if more are added or the values of existing ones sent by the server change?
Here's a snippet from the method we're currently using (excluding my attempts at parsing the cookies into a string, which hopefully is unnecessary):
HttpResponseMessage httpResponse;
var cookieContainer = new CookieContainer(); // want to set/store this globally
var baseAddress = new Uri("");
using (var handler = new HttpClientHandler() { CookieContainer = cookieContainer })
using (HttpClient client = new HttpClient(handler) { BaseAddress = baseAddress })
{
    // has a timeout been set on this Web Service instance?
    if (TimeoutSeconds > 0)
    {
        client.Timeout = new System.TimeSpan(0, 0, TimeoutSeconds);
    }
    // Make request (the "uri" var is passed with the method call)
    httpResponse = await client.GetAsync(uri);
}
Answers
Can't seem to edit my initial post any more, but want to make a couple of corrections:
Serializing seems to be problematic because of not being able to read/access the HttpOnly cookies. Please correct me if I'm wrong here. I've been having difficulty implementing serialization since the concept is new to me, so any direction on that would be super helpful. I've been having trouble with examples I've found, in part because of namespace references not being recognized in Xamarin studio. For example, I get an unknown resolve error trying to include System.Runtime.Serialization.Formatters.Binary. I was able to serialize into a json string, but I did not see the HttpOnly cookies there so didn't go any further.
It is actually a requirement, not just a preference, that the cookies persist in the app even if it is shut down and restarted, so it seems I need to store in something like Application.Current.Properties (as opposed to just a static member of the class). But it seems you can't store an object that way.
I do something similar. This is all off memory so I am sorry for the generalized answer. Anyway, I extract the cookie from the CookieStore object when I initially authenticate the user. I store it in my app settings (I use the Settings Plugin by James Montemagno). So whenever I do an API request, I plug in the cookie I saved and use that. If the request fails, I check why and if it is due to the cookie being timed out, I reauth the user and store a new cookie. Isn't the best solution but it is something that works.
Thanks for the suggestion, Travis. I'll look into that plugin - if that allows us to store an object it may be what I'm looking for, since Application.Current.Properties only seems to allow primitive types.
I think it is just the basic types. You will probably have to disassemble it and save its individual pieces, then reassemble them when you try to get the data back. Although, I only save the cookie string. I don't save everything within the CookieStore object.
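To make the "disassemble and reassemble" idea concrete, here is a minimal sketch in Python (not C#/Xamarin) using the standard http.cookiejar module. The same pattern applies to .NET's CookieContainer: flatten each cookie's essential fields to primitives, persist them, and rebuild the container on startup. All names below are illustrative, not part of any Xamarin API.

```python
import json
from http.cookiejar import Cookie, CookieJar

def cookies_to_json(jar):
    """Flatten each cookie into a plain dict of its essential pieces."""
    return json.dumps([
        {"name": c.name, "value": c.value, "domain": c.domain,
         "path": c.path, "secure": c.secure, "expires": c.expires}
        for c in jar
    ])

def cookies_from_json(data):
    """Rebuild a CookieJar from the saved pieces."""
    jar = CookieJar()
    for d in json.loads(data):
        jar.set_cookie(Cookie(
            version=0, name=d["name"], value=d["value"],
            port=None, port_specified=False,
            domain=d["domain"], domain_specified=bool(d["domain"]),
            domain_initial_dot=d["domain"].startswith("."),
            path=d["path"], path_specified=bool(d["path"]),
            secure=d["secure"], expires=d["expires"],
            discard=False, comment=None, comment_url=None, rest={}))
    return jar

# Round-trip one cookie through the flat representation.
jar = CookieJar()
jar.set_cookie(Cookie(0, "session", "abc123", None, False, "example.com",
                      True, False, "/", True, False, None, False, None,
                      None, {}))
restored = cookies_from_json(cookies_to_json(jar))
print([c.name for c in restored])  # ['session']
```

Note that, as discussed above, HttpOnly cookies may not be enumerable from the container in the first place, which is the real obstacle regardless of serialization format.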
@TravyDale how do you put the cookies into your API Request? I can save them and iOS accesses them, but I can't get Android to recognize them. Thanks!
@ChristineBlanda did find a way to that? (in all platforms)
def dataTableDataAsJSON = {
def list = []
if (params.sort == 'crap') params.remove('sort')
def demoList = Demo.list(params)
response.setHeader("Cache-Control", "no-store")
demoList.each {
list << [
id: it.id,
name: it.name,
birthDate: grailsUITagLibService.dateToJs(it.birthDate),
age: it.age,
netWorth: it.netWorth,
isDumb: it.isDumb,
dataUrl: g.createLink(action: 'dataDrillDown') + "/$it.id"
]
}
def data = [
totalRecords: Demo.count(),
results: list
]
render data as JSON
}
As you can see, you send the 'birthDate' Date object into "dateToJs", and it will transform it into a string that will be recognized by the DataTable as a date in the view, and pulled back out into a JavaScript Date object for formatting.
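The dateToJs helper above belongs to grailsUITagLibService, and its exact output format isn't shown here. One common approach, sketched below in Python under that assumption, is to serialize dates as epoch milliseconds, which JavaScript's new Date(ms) constructor accepts directly:

```python
from datetime import datetime, timezone

def date_to_js_millis(dt):
    """Serialize a datetime as epoch milliseconds; JavaScript's
    `new Date(ms)` constructor accepts this value directly."""
    return int(dt.replace(tzinfo=timezone.utc).timestamp() * 1000)

# Convert any datetime fields in a row dict before rendering as JSON.
row = {"name": "Demo", "birthDate": datetime(1970, 1, 2)}
payload = {k: date_to_js_millis(v) if isinstance(v, datetime) else v
           for k, v in row.items()}
print(payload)  # {'name': 'Demo', 'birthDate': 86400000}
```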
Cobalt Strike 3.3 – Now with less PowerShell.exe
May 18, 2016
The fourth release in the Cobalt Strike 3.x series is now available. There’s some really good stuff here. I think you’ll like it.
Unmanaged PowerShell
How do you get your PowerShell scripts on target, run them, and get output back? This is the PowerShell weaponization problem. It’s unintuitively painful to solve in an OPSEC-friendly way (unless your whole platform is PowerShell).
Cobalt Strike tackled this problem in its September 2014 release. Beacon’s PowerShell weaponization allows operators to import scripts, run cmdlets from these scripts, and interact with other PowerShell functionality. Beacon’s method is lightweight. It doesn’t touch disk or require an external network connection. It has a downside though: it relies on powershell.exe.
In December 2014, Lee Christensen came out with an Unmanaged PowerShell proof-of-concept [blog post]. Unmanaged PowerShell is a way to run PowerShell scripts without powershell.exe. Lee's code loads the .NET CLR, reflectively loads a .NET class through that CLR, and uses that .NET class to call APIs in the System.Management.Automation namespace to evaluate arbitrary PowerShell expressions. It's a pretty neat piece of code.
This release integrates Lee’s work with Beacon. The powerpick [cmdlet+args] command (named after Justin Warner’s early adaptation of Lee’s POC) will spawn a process, inject the Unmanaged PowerShell magic into it, and run the requested command.
I’ve also added psinject [pid] [arch] [command] to Beacon as well. This command will inject the Unmanaged PowerShell DLL into a specific process and run the command you request. This is ideal for long-running jobs or injecting PowerShell-based agents (e.g., Empire) into a specific process.
I took a lot of care to make powerpick and psinject behave the same way as Beacon’s existing powershell command (where possible). All three commands are friendly to long-running jobs and they will return output as it’s available. All three commands can also use functions from scripts brought into Beacon with the powershell-import command.
More One-Liners for Beacon Delivery
One of my favorite Cobalt Strike features is PowerShell Web Delivery. This feature generates a PowerShell script, hosts it, and gives back a one-liner that you can use to download and execute a Beacon payload. These one-liners have many uses: they seed access in assume breach engagements, they help turn an RDP access or command execution vulnerability into a session, and they’re great for backdoors.
Cobalt Strike 3.3 extends this feature. The PowerShell Web Delivery dialog is now Scripted Web Delivery with one-liners to download and run payloads through bitsadmin, powershell, python, and regsvr32. Each of these options is a different way to run a Cobalt Strike payload.
The bitsadmin option downloads and runs an executable. The python option will download and run a Python script that injects Beacon into the current python process. The regsvr32 option uses a combination of an SCT file with VB Script and a VBA macro to inject Beacon into memory. The regsvr32 option is based on research by Casey Smith and I really didn’t appreciate the power of this until I played with it more.
Search and Filter Tables with Ctrl+F
This release adds Ctrl+F to tables. This feature allows you to filter the current table on a column-by-column basis. Even when this feature is active, updates to the table will still show in real-time, if they match your criteria.
The feature is built with special search syntax for different column types. For example, you can specify CIDR notation or address ranges to filter host columns. You can use ranges of numbers to filter number columns. And, you can use wildcard characters in string columns.
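The post doesn't specify the matcher internals, but the described behavior can be sketched roughly as follows (Python, with assumed semantics: CIDR or exact match for host columns, "lo-hi" ranges for number columns, and shell-style wildcards for string columns):

```python
import fnmatch
import ipaddress

def match_host(value, pattern):
    """Match an IP against CIDR notation, falling back to exact match."""
    try:
        return ipaddress.ip_address(value) in ipaddress.ip_network(pattern)
    except ValueError:
        return value == pattern

def match_number(value, pattern):
    """Match a number against a 'lo-hi' range or a single value."""
    if "-" in pattern:
        lo, hi = pattern.split("-", 1)
        return int(lo) <= value <= int(hi)
    return value == int(pattern)

def match_string(value, pattern):
    """Match a string column with * and ? wildcards."""
    return fnmatch.fnmatch(value, pattern)

print(match_host("10.0.1.7", "10.0.0.0/16"))   # True
print(match_number(445, "1-1024"))             # True
print(match_string("WIN-DC01", "WIN-*"))       # True
```

Because each row is checked independently against the per-column predicates, new rows arriving in real time can be tested and shown immediately, matching the live-update behavior described above.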
*phew*. That’s a lot. Would you believe there’s more? Check out the release notes to see a full list of what’s new in Cobalt Strike 3.3. Licensed users may use the update program to get the latest. A 21-day Cobalt Strike trial is also available.
Well-timed release of these features. I recently attended a defender's talk that made fun of Cobalt Strike and how easy it has been to catch. Hopefully users of your platform will recognize the need for these techniques, both for stealth and for control bypass. For example, powerpick was intended to also be a way to wiggle (i.e., lockpick) into a system protected with AppLocker. regsvr32 is another method intended to bypass application whitelisting. bitsadmin may make it appear to sysadmins that a Windows host is merely updating, which bypasses software firewalls (all of which whitelist it by default for obvious reasons); but bitsadmin can also be used to download executables or exfiltrate data. Welcome these excellent additions!
Congrats for releasing new CS !
I know this is out of topic but today I tried and failed at bypassuac from within beacon on Windows 10 machine (had local admin RID 500 access but in medium integrity level). CS version was 3.2. Is it possible to bypass UAC in this scenario using CS 3.3 beacon ? I don’t have access to CS 3.3 at the moment (and thus am posting this).
Thanks in advance
I’d give 3.3 a try. I significantly reworked Bypass UAC in 3.3 for this exact reason. The causes of failure were… interesting.
1. Prior to 3.3, Beacon executed the Bypass UAC attack from an x86 context. To deal with x64 Windows, I would disable the WOW64 file system redirection. This step fails in some contexts though. For example, I’ve seen this step fail when Beacon lives in an x86 powershell.exe context with an x64 powershell.exe parent. The remediation here was to port the Bypass UAC attack to a Beacon post-exploitation job module. Beacon’s post-exploitation job launcher transparently deploys x64 modules on x64 systems.
Bonus: The UAC attack now provides feedback.
2. Cobalt Strike uses a different DLL artifact for UAC bypass on Windows 10 versus other versions of Windows. The alternate UAC artifact has the constraint that it cannot block DllMain. This will cause the attack to fail on Windows 10. Easy enough, right? On Windows 10 the GetVersionEx function lies about the Windows version if it’s called from a process that is not manifested for Windows 8.1 or Windows 10. This leads to very inconsistent results. If Beacon is run via a PowerShell one-liner, it will report itself as Windows 10. If Beacon is run via an executable, it will report itself as Windows 8.1. This broke my logic to choose the right artifact and led to a situation where this attack would fail on Windows 10 as well. I mitigated this in 3.3.
The above are fresh in my head as I was able to zero in on these conditions very recently (8pm last night).
I just tested the scenario with CS 3.3 and confirm that it works fine on Win10 (i.e. got high mandatory level) !
Great job!
Thanks
Porting LinuxBIOS to the AMD SC520
The build process builds a binary image that is loaded to a Flash part. LinuxBIOS provides a utility, flash_rom, for this purpose. Alternatively, you can use the MTD drivers in the Linux kernel.
The layout of a typical ROM image is shown in Figure 2. The top 16 bytes contain two jump vectors, a jump to the fallback and a jump to the normal. LinuxBIOS always jumps to the fallback first. If all is well, it jumps back to the jump to normal vector at the top of memory, and from there to the normal image. If the fallback code detects problems or if the CMOS settings indicate that fallback BIOS should be run, the fallback BIOS runs.
Enough overview, let's get to work. To build support for a new board, we start with the mainboard first, and the easiest way to do this is to pick a similar mainboard. Because the Digital Logic ADL855 is much like the SC520, we start with that. We can clone much of the directory structure of the ADL855 for the SC520 board.
The basic naming process for directories in LinuxBIOS is to name the type of resource, in this case, mainboard; the vendor, here digitallogic; and the part name, in this case, msm586seg. Before we start the mainboard configuration file, we need to know what's on this mainboard. We don't have to get everything at first; in fact, we can leave a lot out simply to get something to work. Typically, the best approach is to make sure you know what drives the serial port and make sure you get that. To get DRAM up, you need to make sure you set up whatever device drives the SMBUS. None of these chips are in the right state when the board is turned on; you need to set a few bits to get things going.
For figuring this all out, you have a few choices. Almost always, the easiest thing to do is boot Linux and type lspci. For work with this type of board, it's easiest to have a CompactFlash part with a small Linux distribution installed so you can boot long enough to run the lspci command. You can use lspci to dump configuration space registers too, which sometimes is invaluable for discovering how to set control bits the vendor might have forgotten to tell you about. The setpci command also is handy for probing bits and learning the effects of setting and clearing them. On several boards, we've used setpci to probe the chipsets to find undocumented enable lines for onboard devices.
Although lspci shows discrete devices, on the SC520 they are integrated into the part. In the old days, we would create a new resource even if the part was integrated into the CPU. We have decided, based on previous experience, that if a part is integrated into the CPU, we do not consider it a separate resource. Therefore, there are no separate directories for the north and south bridge. The code for these devices is supported in the CPU device. The LinuxBIOS code base is flexible in this way. A given BIOS can be implemented with different types of parts, but in fact none of them are required.
Our first step in getting the resources set up for the mainboard is to name the CPU and set up the directory for it. The code for a given CPU is contained in the src/cpu directory. Luckily, the CPU in this case is an x86 system, so there is no need to add an architecture directory.
This article traces development from our point of view—a LinuxBIOS developer. If you want to develop a new tree, however, you can clone the LinuxBIOS arch repository, do development and submit patches to a developer. We will check your patches and help get them into the repository. In most cases with new developers, if their code is good, we allow them to become developers for our team.
We create a directory, src/cpu/amd/sc520, and populate it with files to support the CPU. We are not going to show all the commands for everything we do in this port, but for this first change, we show the commands to give you flavor of how it works. Even this simple part explains a lot of the important aspects of how LinuxBIOS is constructed:
cd src/cpu/amd
mkdir sc520
tla commit
This sets up the directory; now we need to populate it. The src/amd/socket_754 directory is a good candidate for providing model files, so we use them:
cd sc520
cp ../socket_754/* .
This gives us an initial set of files:
rminnich@q:~/src/freebios2/src/cpu/amd/sc520> ls chip.h Config.lb socket_754.c
The chip.h file defines a simple data structure that is linked into the BIOS image by the Makefile, which is generated by the config tool. For this part, it's basically empty:
rminnich@q:~/src/freebios2/src/cpu/amd/sc520> cat chip.h
extern struct chip_operations cpu_amd_socket_754_ops;
struct cpu_amd_socket_754_config {
};
What does this mean? First, we create an instance of a struct called chip_operations for this part, called cpu_amd_socket_754_ops. This is a generic structure, used by all chips. This generic structure looks like this:
/* Chip operations */
struct chip_operations {
        void (*enable_dev)(struct device *dev);
#if CONFIG_CHIP_NAME == 1
        char *name;
#endif
};
The chip_operations structure, in src/include/device/device.h, defines a generic method of accessing chips. It currently has two structure members: a function pointer to enable the device, enable_dev; and an optional name, used for debug prints, called name. Notice that in the style of the Linux kernel, C preprocessor-enabled code is controlled by testing the value of a preprocessor symbol, not by testing whether it is defined. As you can see, the enable_dev function takes a pointer to a device struct.
Why do we do this? Although there is one chip_operations structure for a type of chip, there is a device structure for each possible instance of a chip. We say possible because a device structure is defined for each chip that may exist in a system. Consider an SMP motherboard, which has from one to four or even eight CPUs; not all the CPUs may be there. Part of the job of the enable function is to determine whether the chip is even there.
The device struct looks like this:
struct device {
        struct bus *bus;        /* bus this device is on; for
                                 * bridge devices, it is the
                                 * upstream bus */
        device_t sibling;       /* next device on this bus */
        device_t next;          /* chain of all devices */
        struct device_path path;
        unsigned vendor;
        unsigned device;
        unsigned int class;     /* 3 bytes: (base, sub, prog-if) */
        unsigned int hdr_type;  /* PCI header type */
        unsigned int enabled : 1;        /* set if we should
                                          * enable the device */
        unsigned int initialized : 1;    /* set if we have
                                          * initialized the device */
        unsigned int have_resources : 1; /* set if we have read
                                          * the device's resources */
        unsigned int on_mainboard : 1;
        unsigned long rom_address;
        uint8_t command;

        /* Base registers for this device. I/O, MEM and Expansion ROM */
        struct resource resource[MAX_RESOURCES];
        unsigned int resources;

        /* links are (downstream) buses attached to the
         * device; usually a leaf device with no children
         * has 0 buses attached and a bridge has 1 bus */
        struct bus link[MAX_LINKS];
        /* number of buses attached to the device */
        unsigned int links;

        struct device_operations *ops;
        struct chip_operations *chip_ops;
        void *chip_info;
};
This is a pretty complicated structure, and we don't go into all the issues here. During the configuration step, the LinuxBIOS configuration tool instantiates a struct device for each chip by writing C code to a file in the build directory. The C code that the config tool generates has initial values so that the array of device structures forms a tree, with sibling and child nodes. The LinuxBIOS hardwaremain() function walks this tree, starting at the root, and performs device probing and initialization.
The last structure member is a void *—that is, a pointer that can point to anything. The next-to-last element is a chip_operations pointer. As part of the creation of the initialized C structures, the config tool fills in the chip_info and chip_operations pointer with a pointer to the per-chip configuration structure and per-chip-type structure. Thus, each device in the tree has pointers to structures for the type of chip and the individual instance of the chip. The enable structure member, which is a function pointer, for the type of chip is called with a pointer to the structure for the device for each instance of the chip. The device structure has a lot of generic structure members, as you can see, and it has a pointer to a structure for nongeneric chip components.
For each chip, we optionally can provide declarations of both structures, but it is not required. The chip_operations structure, or the type-of-chip structure, has a type fixed by LinuxBIOS itself; the chip_info structure has a structure fixed by the chip. The enable function in the chip_operations structure can be un-initialized, in which case there is no enable function to call for the chip—the chip is always enabled. That is the case for the SC520 CPU—there is only one, and it is always there.
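To make the traversal described above concrete, here is a hypothetical Python sketch of the pattern: a tree of device nodes, each optionally carrying a per-chip enable hook, walked from the root. This is an illustration of the idea, not LinuxBIOS code; all names are invented.

```python
class Device:
    """Minimal stand-in for LinuxBIOS's struct device: each node may
    have children (downstream buses) and an optional enable hook."""
    def __init__(self, name, enable_dev=None, children=()):
        self.name = name
        self.enable_dev = enable_dev   # analogue of chip_ops->enable_dev
        self.children = list(children)
        self.enabled = False

def walk_and_enable(dev, log):
    # Call the chip's enable hook if one exists; a chip with no hook
    # (like the SC520 CPU) is considered always enabled and present.
    if dev.enable_dev:
        dev.enable_dev(dev)
    dev.enabled = True
    log.append(dev.name)
    for child in dev.children:
        walk_and_enable(child, log)

# A toy tree: mainboard at the root, with CPU and super I/O children.
root = Device("mainboard", children=[
    Device("cpu/amd/sc520"),
    Device("superio", children=[Device("serial")]),
])
order = []
walk_and_enable(root, order)
print(order)  # ['mainboard', 'cpu/amd/sc520', 'superio', 'serial']
```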
Now we need to change these files to match the SC520. We show them before and after to give you an idea how it looks.
chip.h changes to look like this:
extern struct chip_operations cpu_amd_sc520_ops;
struct cpu_amd_sc520_config {
};
The enable_dev pointer is empty and is not called. We leave it empty for now but may fill it in later as needed. Similarly, there are no special structure members for the chip_info structure.
The C code looks like this:
#include <device/device.h>
#include "chip.h"

struct chip_operations cpu_amd_socket_754_ops = {
        CHIP_NAME("socket 754")
};
The changes are simple; we rename the file to sc520.c and then change it to this:
#include <device/device.h>
#include "chip.h"

struct chip_operations cpu_amd_sc520_ops = {
        CHIP_NAME("AMD SC520")
};
The final file is the Config.lb file. Here we get our first glance at what a configuration file looks like. The original file looks like this:
uses CONFIG_CHIP_NAME
if CONFIG_CHIP_NAME
        config chip.h
end
object socket_754.o
dir /cpu/amd/model_fxx
The first line declares that we are using the CONFIG_CHIP_NAME option. The language requires that we declare the variables we are going to use before we use them. In the case of this file that seems trivial, but in longer files this requirement is really useful. Second, if we are using the CONFIG_CHIP_NAME option, we use the chip.h file. Notice that nothing is set in chip.h unless we were using the CHIP_NAME macro, which is why this test is there. We declare any object files produced in this directory, in this case, socket_754. Finally, we include another directory using the dir keyword. The naming scheme in the config language for other directories is that the pathname is relative if it does not start with a /. Otherwise, it is rooted at the source of the LinuxBIOS source tree. In this case, the dir directive points to src/cpu/amd/model_fxx. As it happens, this is code for Opteron and is of no use to the SC520. After modifying this file for the SC520, it looks like this:
uses CONFIG_CHIP_NAME
if CONFIG_CHIP_NAME
        config chip.h
end
object sc520.o
That's about it. We've now set up support for the SC520 CPU.
Warren Stringer wrote:
> Hey JJ, Kelly, I've combined both of your replies,
>
> On Jun 23, 2008, at 2:48 AM, Shannon -jj Behrens wrote:
>
>> Let's make it really fun:
>>
>> from thirdparty import *
>>
>> class A(B):
>>     def bar():  # Assuming we have implicit self.
>>         foo()
>>
>> Now, did foo come from:
>>
>> * The parent class B?
>> * Something in thirdparty?
>> * Something in thirdparty that someone else shoved in there dynamically?
>> * Something in builtins that someone else shoved in there dynamically?
>> * Some global in the current module that's a few hundred lines down?
>> * Some global in the current module that someone else shoved in from
>>   a third-party module? ;)
>>
>> Using self cuts down on the possibilities which:
>>
>> a) Saves the interpreter from doing a bunch of useless dict lookups.
>> b) Saves the reader from trying to figure out what's going on.
>
> OK, I'm sure that I'm missing something. Couldn't this be resolved with
> something like:
>
> 1) B.foo()
> 2) super.foo()  # or ..foo() with dot_as_self convention expanded to
>    emulate a file system

Give me back my polymorphism!

> 3) thirdparty.foo()

So we're removing the "from thirdparty import foo" syntax in order to solve the ambiguity?

> 4) __builtins__.foo()

My eyes, they burn.

> Moreover, since we're talking about not using self for items that are
> local to the class, the example should really be:
>
> class A(B):
>     def foo:
>         ...  # foo statements
>         ...  # other statements
>     def bar():
>         foo()
>
> In this case, the resolving of 'foo' ranges from blindingly easy (when
> the distance from foo's declaration is short), to search-ably easy (when
> the distance from foo's declaration is long)

OOP without polymorphism OR inheritance? I don't want to live in your world. :)

Kelly

--

> On Jun 22, 2008, at 11:53 PM, Kelly Yancey wrote:
>>
>> I admit, I have had to call self by another name on a couple of
>> occasions. Specifically, I had a factory method in one class that
>> returned instances of a dynamically-created class. The returned
>> objects had methods that referenced both their own instance's
>> attributes and attributes of the factory method's parent object (via
>> closure). Since there were two objects involved, I used self to refer
>> to the factory-generated object itself, and had the factory method use
>> a different name for its own object reference argument so the
>> generated object could still access both.
>> Complicated? Sure.
>> Possible with implicit self? Nope
>
> As with other languages, 'self' can still be available; simply not
> required.
>
> On Jun 23, 2008, at 2:48 AM, Shannon -jj Behrens wrote:
>> As a blanket statement, polymorphism can be confusing. When I see
>> "self", at least it's a good sign that I should start looking at
>> either the class's documentation or one of its ancestor's
>> documentation.
>>
>> -jj
>
> These days, I do a search within the IDE, a spotlight search (Mac), or a
> Google Desktop search (PC). With several IDEs, I can right click to find
> the definitions. Some tools provide an accurate call graph. Many of
> these tools weren't available when Python was first created.
> Thirty-three years ago, most of my debugging involved reading code
> printouts from a KSR 35 teletype. Since then, my habits have changed.
>
> My guess is that:
> 1) 9 times out of 10 the var in self.var is blindingly obvious,
> 2) 99 times out of 100 it is obvious after a 10 second search,
> 3) when non-obvious, it may indicate an obscure namespace that
>    a) has ballooned out to a complexity that should be refactored
>    b) wouldn't be solved by simple code reading aides, like self
>
> \~/
In the polynomial class, p(n) might mean either the coefficient of the n-th power of the polynomial, or the evaluation of the polynomial at n. The meaning of this subscripted referencing is determined by the subsref method.
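As a rough analogue in Python (a hypothetical class, not part of Octave), the two readings of p(n) map naturally onto two distinct special methods; in Octave, subsref performs this same disambiguation in one place by inspecting the index type:

```python
class Polynomial:
    """Coefficients stored lowest power first, mirroring the Octave
    polynomial class example."""
    def __init__(self, coeffs):
        self.poly = list(coeffs)

    def __call__(self, x):
        # p(x): evaluate the polynomial at x.
        return sum(c * x**n for n, c in enumerate(self.poly))

    def __getitem__(self, n):
        # p[n]: the coefficient of the n-th power.
        return self.poly[n]

p = Polynomial([1, 2, 3])      # 1 + 2x + 3x^2
print(p(2))    # 17  (evaluation)
print(p[2])    # 3   (coefficient)
```

Octave has only one call-like syntax, p(n), which is why a single subsref method must decide between the two interpretations.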
Perform the subscripted element selection. For example, extract the first two columns of a matrix:

val = magic (3)
     ⇒ val = [ 8 1 6
               3 5 7
               4 9 2 ]
idx.type = "()";
idx.subs = {":", 1:2};
subsref (val, idx)
     ⇒ [ 8 1
         3 5
         4 9 ]

Note that this is the same as writing val(:,1:2).
If idx is an empty structure array with fields ‘type’ and ‘subs’, return val.
See also: subsasgn, substruct.
Perform the subscripted assignment operation. For example:

val = magic (3);
idx.type = "()";
idx.subs = {":", 1:2};
subsasgn (val, idx, 0)
     ⇒ [ 0 0 6
         0 0 7
         0 0 2 ]

Note that this is the same as writing val(:,1:2) = 0.
If idx is an empty structure array with fields ‘type’ and ‘subs’, return rhs.
See also: subsref, substruct.
Query or set the internal flag for subsasgn method call optimizations. If true, Octave will attempt to eliminate the redundant copying when calling subsasgn method of a user-defined class.
When called from inside a function with the "local" option, the variable is changed locally for the function and any subroutines it calls. The original variable value is restored when exiting the function.
Note that the subsref and subsasgn methods always receive the whole index chain, while they usually handle only the first element. It is the responsibility of these methods to handle the rest of the chain (if needed), usually by forwarding it again to subsref or subsasgn.
If you wish to use the end keyword in subscripted expressions of an object, then you need to define the end method for the class. For example, the end method for our polynomial class might look like
function r = end (obj, index_pos, num_indices)
  if (num_indices != 1)
    error ("polynomial object may only have one index");
  endif
  r = length (obj.poly) - 1;
endfunction
which is a fairly generic end method that has a behavior similar to the end keyword for Octave Array classes.

If obj is a class object defined with a class constructor, then subsindex is the overloading method that allows the conversion of this class object to a valid indexing vector. It is important to note that subsindex must return a zero-based real integer vector of the class "double". For example, if the class constructor
Method of a class to construct a range with the : operator. For example:

a = myclass (…);
b = myclass (…);
c = a : b
See also: class, subsref, subsasgn.
Performance testing for SharePoint Server 2013
Applies to: SharePoint Server 2013 Standard, SharePoint Server 2013 Enterprise
Topic Last Modified: 2013-12-18
Summary:Learn about how to plan and execute performance testing of a SharePoint Server 2013 environment.
This article describes how to test the performance of SharePoint Server 2013. The testing and optimization stage is a critical component of effective capacity management. You should test new architectures before you deploy them to production, and you should conduct acceptance testing in conjunction with monitoring best practices to ensure the architectures you design achieve the performance and capacity targets. This allows you to identify and optimize potential bottlenecks before they affect users in a live deployment. If you are upgrading from an Office SharePoint Server 2007 environment and plan to make architectural changes, or are estimating user load of the new SharePoint Server 2013 features, then testing is particularly important to make sure your new SharePoint Server 2013-based environment will meet performance and capacity targets.
Once you have tested your environment, you can analyze the test results to determine what changes need to be made in order to achieve the performance and capacity targets you established in Step 1: Model of Capacity planning for SharePoint Server 2013.
These are the recommended substeps.
Deploy to the production environment.
Before you read this article, you should read Capacity management and sizing overview for SharePoint Server 2013.
In this article:
Verify that your plan includes:
Hardware that is designed to operate at expected production performance targets. Always measure the performance of test systems conservatively.
If you have custom code or custom components, it is important to test the performance of those components in isolation first to validate their performance and stability. After they are stable, you should test the system with those components installed and compare performance to the farm without them installed. Custom components are often a major culprit in performance and reliability problems in production systems.
Know the goal of your testing. Understand ahead of time what your testing objectives are. Is it to validate the performance of some new custom components that were developed for the farm? Is it to see how long it will take to crawl and index a set of content? Is it to determine how many requests per second your farm can support? There can be many different objectives during a test, and the first step in developing a good test plan is deciding what your objectives are.
Understand how to measure for your testing goal. If you are interested in measuring the throughput capacity of your farm for example, you will want to measure the RPS and page latency. If you are measuring for search performance then you will want to measure crawl time and document indexing rates. If your testing objective is well understood, that will help you clearly define what key performance indicators you need to validate in order to complete your tests.
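As one concrete way to compute the throughput indicators just mentioned, here is a small Python sketch that reduces a parsed request log to requests per second (RPS) and 95th-percentile latency. The log format and function names are assumptions for illustration, not SharePoint or Visual Studio load-test tooling:

```python
def throughput_and_latency(requests):
    """requests: list of (start_seconds, latency_seconds) tuples,
    e.g. parsed from an IIS log or a load-agent result file.
    Returns (requests per second, 95th-percentile latency)."""
    if not requests:
        return 0.0, 0.0
    starts = [s for s, _ in requests]
    window = max(starts) - min(starts) or 1.0   # avoid divide-by-zero
    rps = len(requests) / window
    lat = sorted(l for _, l in requests)
    p95 = lat[int(0.95 * (len(lat) - 1))]       # nearest-rank percentile
    return rps, p95

# Synthetic log: 100 requests spread over ~10 seconds,
# with latency climbing from 50 ms toward 150 ms.
reqs = [(t * 0.1, 0.05 + 0.001 * t) for t in range(100)]
rps, p95 = throughput_and_latency(reqs)
print(round(rps, 1))  # 10.1
```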
Once your test objectives have been decided, your measurements have been defined, and you have determined what the capacity requirements are for your farm (from steps 1 and 2 of this process), the next objective will be to design and create the test environment. The effort to create a test environment is often underestimated. It should duplicate the production environment as closely as possible. Some of the features and functionality you should consider when designing your test environment include:
Authentication: Decide whether the farm will use Active Directory Domain Services (AD DS), forms-based authentication (and if so with what directory), claims-based authentication, etc. Regardless of which directory you are using, how many users do you need in your test environment and how are you going to create them? How many groups or roles are you going to need and how will you create and populate them? You also need to ensure that you have enough resources allocated to your authentication services that they don't become a bottleneck during testing.
DNS: Know what the namespaces are that you will need during your testing. Identify which servers will be responding to those requests and make sure you've included a plan that has what IP addresses will be used by which servers, and what DNS entries you will need to create.
Load balancing: Assuming you are using more than one server (which you normally would or you likely wouldn't have enough load to warrant load testing), you will need some kind of load balancer solution. That could be a hardware load balancing device, or you could use software load balancing like Windows NLB. Figure out what you will use and write down all of the configuration information you will need to get it set up quickly and efficiently. Another thing to remember is that load test agents typically resolve a URL to an IP address only once every 30 minutes. That means that you should not use a local hosts file or round robin DNS for load balancing because the test agents will likely end up going to the same server for every single request, instead of balancing across all available servers.
Test servers: When you plan your test environment, you not only need to plan for the servers for the SharePoint Server 2013 farm, you also need to plan for the machines needed to execute the tests. Typically that will include 3 servers at a minimum; more may be necessary. If you are using Visual Studio Team System (Team Test Load Agent) to do the testing, one machine will be used as the load test controller. There are generally 2 or more machines that are used as load test agents. The agents are the machines that take the instructions from the test controller about what to test and issue the requests to the SharePoint Server 2013 farm. The test results themselves are stored on a SQL Server-based computer. You should not use the same SQL Server-based computer that is used for the SharePoint Server 2013 farm, because writing the test data will skew the available SQL Server resources for the SharePoint Server 2013 farm. You also need to monitor your test servers when running your tests, the same way as you would monitor the servers in the SharePoint Server 2013 farm, or domain controllers, etc. to make sure that the test results are representative of the farm you're setting up. Sometimes the load agents or controller can become the bottleneck themselves. If that happens then the throughput you see in your test is typically not the maximum the farm can support.
SQL Server: In your test environment, follow the guidance in the sections "Configure SQL Server" and "Validate and monitor storage and SQL Server performance" in the article Storage and SQL Server capacity planning and configuration (SharePoint Server 2013).
Dataset validation: As you decide what content you are going to run tests against, remember that in the best case scenario you will use data from an existing production system. For example, you can back up your content databases from a production farm and restore them into your test environment, then attach the databases to bring the content into the farm. Anytime you run tests against made up or sample data, you run the risk of having your results skewed because of differences in your content corpus.
If you do have to create sample data, there are a few considerations to keep in mind as you build out that content:
All pages should be published; nothing should be checked out
Navigation should be realistic; don't build beyond what you would reasonably expect to use in production.
You should have an idea of the customizations the production site will be using. For example, master pages, style sheets, JavaScript, etc. should all be implemented in the test environment as closely as possible to the production environment.
Determine how many SharePoint groups and/or permission levels you are going to need, and how you are going to associate users with them.
Figure out whether you'll need to do profile imports, and how long that will take.
Determine whether you'll need Audiences, and how you'll create and populate them.
Determine whether you need additional search content sources, and what you will need to create them. If you won't need to create them, determine whether you'll have network access to be able to crawl them.
Determine whether you have enough sample data – documents, lists, list items, etc. If not, create a plan for how you will create this content.
Have a plan for enough unique content to adequately test search. A common mistake is to upload the same document – maybe hundreds or even thousands of times – to different document libraries with different names. That can impact search performance because the query processor will spend an inordinate amount of time doing duplicate detection that it wouldn't otherwise have to do in a production environment with real content.
After the test environment is functional, it is time to create and fine-tune the tests that will be used to measure the performance capacity of the farm. This section will at times make references specifically to Visual Studio Team System (Team Test Load Agent), but many of the concepts are applicable irrespective of which load test tool you use. For more information, see Visual Studio Team System on MSDN.
You can also use the SharePoint Load Test Kit (LTK) in conjunction with VSTS for load testing of SharePoint 2010 farms. The Load Test Kit generates a Visual Studio Team System 2008 load test.
The Load Test Kit is included in the Microsoft SharePoint 2010 Administration Toolkit v1.0, available from the Microsoft Download Center.
A key criterion to the success of the tests is to be able to effectively simulate a realistic workload by generating requests across a wide range of the test site data, just as users would access a wide range of content in a production SharePoint Server 2013 farm. In order to do that, you will typically need to construct your tests such that they are data driven. Rather than creating hundreds of individual tests that are hard-coded to access a specific page, you should use just a few tests that use data sources containing the URLs for those items to dynamically access that set of pages.
In Visual Studio Team System (Team Test Load Agent), a data source can come in a variety of formats, but a CSV file format is often easiest to manage and transport between development and test environments. Keep in mind that creating CSV files with that content might require the creation of custom tools to enumerate the SharePoint Server 2013-based environment and record the various URLs being used.
You may need to use tools for tasks like:
Creating users and groups in Active Directory or other authentication store if you're using forms based authentication
Enumerating URLs for sites, lists and libraries, list items, documents, etc. and putting them into CSV files for load tests
Uploading sample documents across a range of document libraries and sites
Creating site collections, webs, lists, libraries, folders and list items
Creating My Sites
Creating CSV files with usernames and passwords for test users; these are the user accounts that the load tests will execute as. There should be multiple files so that, for example, some contain only administrator users, some contain other users with elevated privileges (like author / contributor, hierarchy manager, etc.), and others are only readers, etc.
Creating a list of sample search keywords and phrases
Populating SharePoint groups and permission levels with users and Active Directory groups (or roles if you are using forms based authentication)
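To make the URL-enumeration step above concrete, here is a minimal Python sketch of turning a list of enumerated URLs into the kind of CSV data source a load test can bind to. The URLs, header name, and file name are hypothetical; in a real farm you would enumerate the URLs with a custom tool or script.

```python
import csv

def write_url_data_source(urls, path):
    """Write a one-column CSV that a data-driven web test can bind to."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Url"])  # header row the web test parameter binds to
        for url in urls:
            writer.writerow([url])

# Hypothetical URLs standing in for a real enumeration of the farm
urls = [
    "http://sharepoint/sites/teamA/Shared%20Documents/spec.docx",
    "http://sharepoint/sites/teamB/Lists/Tasks/AllItems.aspx",
]
write_url_data_source(urls, "urls.csv")
print(open("urls.csv").read())
```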
When creating the web tests, there are other best practices that you should observe and implement. They include:
Record simple web tests as a starting point. Those tests will have hard-coded values in them for parameters like URL, ID's, etc. Replace those hard-coded values with links from your CSV files. Data binding those values in Visual Studio Team System (Team Test Load Agent) is extremely easy.
Always have validation rules for your test. For example, when requesting a page, if an error occurs you will often get the error.aspx page in response. From a web test perspective it appears as just another positive response, because you get an HTTP status code of 200 (successful) in the load test results. Obviously an error has occurred though so that should be tracked differently. Creating one or more validation rules allows you to trap when certain text is sent as a response so that the validation fails and the request is marked as a failure. For example, in Visual Studio Team System (Team Test Load Agent) a simple validation rule might be a ResponseUrl validation – it records a failure if the page that is rendered after redirects is not the same response page that was recorded in the test. You could also add a FindText rule that will record a failure if it finds the word "access denied", for example, in the response.
Use multiple users in different roles for tests. Certain behaviors such as output caching work differently depending on the rights of the current user. For example, a site collection administrator or an authenticated user with approval or authoring rights will not get cached results because we always want them to see the most current version of content. Anonymous users, however, will get the cached content. You need to make sure that your test users are in a mix of these roles that approximately matches the mix of users in the production environment. For example, in production there are probably only two or three site collection administrators, so you should not create tests where 10% of the page requests are made by user accounts that are site collection administrators over the test content.
Parsing dependent requests is an attribute of a Visual Studio Team System (Team Test Load Agent) web test that determines whether the test agent should attempt to retrieve just the page, or the page and all associated requests that are part of the page, such as images, style sheets, scripts, etc. When load testing, we usually ignore these items for a few reasons:
After a user hits a site the first time these items are often cached by the local browser
These items don't typically come from SQL Server in a SharePoint Server 2013-based environment. With BLOB caching turned on, they are instead served by the Web servers so they don't generate SQL Server load.
If you regularly have a high percentage of first time users to your site, or you have disabled browser caching, or for some reason you don't intend to use the blob cache, then it may make sense to enable parsing dependent requests in your tests. However, this is really the exception, not the rule, for most implementations. Be aware that if you do turn this on it can significantly inflate the RPS numbers reported by the test controller. These requests are served so quickly that it may mislead you into thinking that there is more capacity available in the farm than there actually is.
Remember to model client application activity as well. Client applications, such as Microsoft Word, PowerPoint, Excel and Outlook generate requests to SharePoint Server 2013 farms as well. They add load to the environment by sending the server requests such as retrieving RSS feeds, acquiring social information, requesting details on site and list structure, synchronizing data, etc. These types of requests should be included and modeled if you have those clients in your implementation.
In most cases a web test should only contain a single request. It's easier to fine-tune and troubleshoot your testing harness and individual requests if the test only contains a single request. Web tests will typically need to contain multiple requests if the operation being simulated is composed of multiple requests. For example, to test checking out a document, editing it, checking it in, and publishing it, you will need a test with multiple steps. It also requires preserving state between the steps – for example, the same user account should be used to check the document out, make the edits, and check it back in. Those multi-step operations that require state to be carried forward between each step are best served by multiple requests in a single web test.
Test each web test individually. Make sure that each test is able to complete successfully before running it in a larger load test. Confirm that all of the names for web applications resolve, and that the user accounts used in the test have sufficient rights to execute the test.
Web tests comprise the requests for individual pages, uploading documents, viewing list items, etc. All of these are pulled together in load tests. A load test is where you plug in all of the different web tests that are going to be executed. Each web test can be given a percentage of time that it will execute – for example, if you find that 10% of requests in a production farm are search queries, then in the load test you would configure a query web test to run 10% of the time. In Visual Studio Team System (Team Test Load Agent), load tests are also how you configure things like the browser mix, network mix, load patterns, and run settings.
There are some additional best practices that should be observed and implemented for load tests:
Use a reasonable read/write ratio in your tests. Overloading the number of writes in a test can significantly impact the overall throughput of a test. Even on collaboration farms, the read/write ratios tend to have many more reads than writes.
Consider the impact of other resource intensive operations and decide whether they should be occurring during the load test. For example, operations like backup and restore are not generally done during a load test. A full search crawl is not usually run during a load test, whereas an incremental crawl may be normal. You need to consider how those tasks will be scheduled in production – will they be running at peak load times? If not, then they should probably be excluded during load testing, when you are trying to determine the maximum steady state load you can support for peak traffic.
Don't use think times. Think times are a feature of Visual Studio Team System (Team Test Load Agent) that allow you to simulate the time that users pause between clicks on a page. For example a typical user might load a page, spend three minutes reading it, then click a link on the page to visit another site. Trying to model this in a test environment is nearly impossible to do correctly, and effectively doesn't add value to the test results. It's difficult to model because most organizations don't have a way to monitor different users and the time they spend between clicks on different types of SharePoint sites (like publishing versus search versus collaboration, etc.). It also doesn't really add value because even though a user may pause between page requests, the SharePoint Server 2013-based servers do not. They just get a steady stream of requests that may have peaks and valleys over time, but they are not waiting idly as each user pauses between clicking links on a page.
Understand the difference between users and requests. Visual Studio Team System (Team Test Load Agent) has a load pattern setting where it asks you to enter the number of users to simulate. This doesn't have anything to do with application users; it's really just how many threads are going to be used on the load test agents to generate requests. A common mistake is thinking that if the deployment will have 5,000 users for example, then 5,000 is the number that should be used in Visual Studio Team System (Team Test Load Agent) – it is not! That's one of the many reasons why when estimating capacity planning requirements, the usage requirements should be based on number of requests per second and not number of users. In a Visual Studio Team System (Team Test Load Agent) load test, you will find that you can often generate hundreds of requests per second using only 50 to 75 load test "users".
Use a constant load pattern for the most reliable and reproducible test results. In Visual Studio Team System (Team Test Load Agent) you have the option of basing load on a constant number of users (threads, as explained in the previous point), a stepped load pattern of users, or a goal based usage test. A stepped load pattern is when you start with a lower number of users and then "step up" by adding additional users every few minutes. A goal based usage test is when you establish a threshold for a certain diagnostic counter, like CPU utilization, and the test attempts to drive the load to keep that counter between a minimum and maximum threshold that you define for it. However, if you are just trying to determine the maximum throughput your SharePoint Server 2013 farm can sustain during peak load, it is more effective and accurate to just pick a constant load pattern. That allows you to more easily identify how much load the system can take before starting to regularly exceed the thresholds that should be maintained in a healthy farm.
Each time you run a load test remember that it is changing data in the database. Whether that's uploading documents, editing list items, or just recording activity in the usage database, there will be data that is written to SQL Server. To ensure a consistent and legitimate set of test results from each load test, you should have a backup available before you run the first load test. After each load test is complete the backup should be used to restore the content back to the way it was before the test was started.
I am trying to sort strings using the stdlib qsort. I have created two sort functions, sort1 and sort2. sort1's input argument is char** and sort2's input argument is char[][]. My program crashes when I use the sort1 function to sort an array of strings.
#include "stdafx.h"
#include <stdlib.h>
#include <string.h>
int compare(const void* a, const void* b)
{
const char *ia = (const char *)a;
const char *ib = (const char *)b;
return strcmp(ia, ib);
}
//program crashes
void sort1(char **A, int n1) {
int size1 = sizeof(A[0]);
int s2 = n1;
qsort(A,s2,size1,compare);
}
//works perfectly
void sort2(char A[][10], int n1) {
int size1 = sizeof(A[0]);
int s2 = n1;
qsort(A,s2,10,compare);
}
int _tmain(int argc, _TCHAR* argv[])
{
char *names_ptr[5] = {"norma","daniel","carla","bob","adelle"};
char names[5][10] = {"norma","daniel","carla","bob","adelle"};
int size1 = sizeof(names[0]);
int s2 = (sizeof(names)/size1);
sort1(names_ptr,5); //doesnt work
sort2(names,5); //works
return 0;
}
The qsort function receives a pointer to the thing being sorted. In sort2 you are sorting arrays of 10 char. In sort1 you are sorting pointers to char.

So the compare function is wrong for sort1, because the arguments are pointers to pointers to char (converted to void *), but you cast them to pointer to char.

sort2 works because converting a pointer to an array of 10 char into a pointer to char (via void *) produces a pointer that points to the first character of those 10.
You need to use a different compare function for each of these two cases, because you are sorting different things.
Yesod.ReCAPTCHA
Synopsis
- class YesodAuth master => YesodReCAPTCHA master where
- recaptchaPublicKey :: GHandler sub master Text
- recaptchaPrivateKey :: GHandler sub master Text
- insecureRecaptchaBackdoor :: GHandler sub master (Maybe Text)
- recaptchaAForm :: YesodReCAPTCHA master => AForm sub master ()
- recaptchaMForm :: YesodReCAPTCHA master => MForm sub master (FormResult (), [FieldView sub master])
- recaptchaOptions :: Yesod master => RecaptchaOptions -> GWidget sub master ()
- data RecaptchaOptions = RecaptchaOptions {
Documentation
class YesodAuth master => YesodReCAPTCHA master where
Class used by yesod-recaptcha's fields. It should be fairly easy to implement a barebones instance of this class for your foundation data type:

instance YesodReCAPTCHA MyType where
    recaptchaPublicKey  = return "[your public key]"
    recaptchaPrivateKey = return "[your private key]"
You may also write a more sophisticated instance. For example, you may get these values from your settings.yml instead of hardcoding them. Or you may give different keys depending on the request (maybe you're serving two different domains in the same application).
The YesodAuth superclass is used only for the HTTP request. Please file a bug report if you think that this YesodReCAPTCHA may be useful without YesodAuth.
Minimum complete definition: recaptchaPublicKey and recaptchaPrivateKey.
Methods
recaptchaPublicKey :: GHandler sub master Text
Your reCAPTCHA public key.
recaptchaPrivateKey :: GHandler sub master Text
Your reCAPTCHA private key.
insecureRecaptchaBackdoor :: GHandler sub master (Maybe Text)
A backdoor to the reCAPTCHA mechanism. While doing automated tests you may need to fill a form that is protected by a CAPTCHA. The whole point of using a CAPTCHA is disallowing access to non-humans, which hopefully your test suite is.
In order to solve this problem, you may define

insecureRecaptchaBackdoor = return (Just "<secret CAPTCHA>")

Now, whenever someone fills in <secret CAPTCHA> as the CAPTCHA, the yesod-recaptcha library will not contact reCAPTCHA's servers and will instead blindly accept the secret CAPTCHA.
Note that this is a *huge* security hole in the wrong hands. We do not recommend using this function on a production environment without a good reason. If for whatever reason you must use this function on a production environment, please make use of its access to GHandler in order to return Just only when strictly necessary. For example, you may return Just only when the request comes from localhost and read its contents from a secret file accessible only by SSH which is afterwards removed.

By default, this function returns Nothing, which completely disables the backdoor.
recaptchaAForm :: YesodReCAPTCHA master => AForm sub master ()
A reCAPTCHA field. This AForm returns () because CAPTCHAs give no useful information besides having been typed correctly or not. When the user does not type the CAPTCHA correctly, this AForm will automatically fail in the same way as any other yesod-form widget fails, so you may just ignore the () value.
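Not part of the original documentation, but as a rough sketch of how recaptchaAForm might be combined with other fields (the commentForm name and the "Comment" label are illustrative assumptions, and the usual yesod-form and Control.Applicative imports are assumed):

```haskell
-- Illustrative sketch only: protect a simple applicative form with a
-- CAPTCHA. 'areq' and 'textField' come from yesod-form; '<*' keeps the
-- Text result while still running (and validating) the CAPTCHA field.
commentForm :: YesodReCAPTCHA master => AForm sub master Text
commentForm = areq textField "Comment" Nothing <* recaptchaAForm
```

Because AForm is an Applicative, the CAPTCHA field composes with the other fields and any CAPTCHA failure fails the whole form.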
recaptchaMForm :: YesodReCAPTCHA master => MForm sub master (FormResult (), [FieldView sub master])
Same as recaptchaAForm, but instead of being an AForm, it's an MForm.
recaptchaOptions :: Yesod master => RecaptchaOptions -> GWidget sub master ()
Define the given RecaptchaOptions for all forms declared after this widget. This widget may be used anywhere, on the head or on the body.

Note that this is not required to use recaptchaAForm or recaptchaMForm.
data RecaptchaOptions
Options that may be given to reCAPTCHA. In order to use them on your site, use recaptchaOptions anywhere before the form that contains the recaptchaField.

Note that there's an instance for Default, so you may use def.
Constructors
Instances
If you are using Master Pages in an ASP.NET application and you need to add an attribute to the
<BODY> tag from a Content Page -- for instance, to set a client script function for the
onload event of the page -- you will find that you can't do it directly because the
<BODY> tag is in the Master Page, not in your Content Page.
Make the
<BODY> tag on the Master Page a
public property, so you can access it from any Content Page. First, promote the
<BODY> tag in the Master Page to an ASP.NET server control. Change:
<BODY>
to:
<BODY id="MasterPageBodyTag" runat="server">
Now that the body tag is a server control, you can configure access to it as a public property in the Master Page code behind file:
using System.Web.UI.HtmlControls;

public partial class MyMasterPage : System.Web.UI.MasterPage
{
    public HtmlGenericControl BodyTag
    {
        get { return MasterPageBodyTag; }
        set { MasterPageBodyTag = value; }
    }
    ...
Note that the
MasterPageBodyTag server control is of type
System.Web.UI.HtmlControls.HtmlGenericControl. To demonstrate this, just set a breakpoint in the
Page_Load function in the code behind file, run the ASP.NET project in debug mode to that point, and execute
?MasterPageBodyTag.GetType().ToString() in the Immediate Window. To use this property from a Content Page, first declare the type of your Master Page in your Content Page's ASPX file:
<%@ MasterType TypeName="MyMasterPage" %>
Then somewhere in your Content Page's code behind file, use the Master Page's
BodyTag property to add an attribute to the
<BODY> tag:
protected void Page_Load(object sender, EventArgs e)
{
    Master.BodyTag.Attributes.Add("onload", "SayHello()");
    ...
This example, of course, assumes that there is a
SayHello() client script in this Content Page. Running the application to the Content Page and then viewing the source code in the browser will show that the
onload="SayHello()" attribute was added to the
<BODY> tag. This technique should work for any HTML tag in the Master Page that you wish to access from a Content Page.
Use this dialog to manage PL/SQL functions available to Discoverer Administrator. To register functions, you can:
import PL/SQL functions defined in the database (use the Import button)
This method is recommended because the function details are entered automatically after you select a function to import.
enter PL/SQL function details manually (use the New button)
You might use this option if the list of functions in the database is very long and you want to reduce the search time by entering the function details yourself.
For more information, see:
"Why do you need PL/SQL functions?"
"How to register custom PL/SQL functions automatically"
"How to register custom PL/SQL functions manually"
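For context (this example is not part of the dialog reference itself), a custom PL/SQL function of the kind you might register could be as simple as the following; the function name, parameters, and owner are illustrative assumptions:

```sql
-- Illustrative only: a trivial custom function that could be registered
-- in Discoverer Administrator with return data type VARCHAR2.
CREATE OR REPLACE FUNCTION full_name (
  first_in IN VARCHAR2,
  last_in  IN VARCHAR2
) RETURN VARCHAR2
IS
BEGIN
  RETURN first_in || ' ' || last_in;
END full_name;
/
```

After creating such a function in the database, you would import or register it in this dialog, supplying its owner, package (if any), and return data type.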
This field displays a list of Oracle-supplied PL/SQL functions and user-defined PL/SQL functions that have already been registered for use with Discoverer Administrator. When the Available in Discoverer Plus check box is selected, these registered functions are also available to Discoverer end users to use in calculations.
Attributes for <function selected in Functions field>
This area displays information about the selected PL/SQL function.
Use this field to enter a name for the function. The Function Name can be different from the Display Name.
Use this field to enter a name for the function. This name will be visible in the Discoverer Administrator in the Edit Calculation window, and in Discoverer Plus if the Available in Discoverer Plus check box has been selected.
Use this field to enter the user ID of the owner of the function. Changing this value enables you to move the reference from a development environment to a production environment.
Use this field to enter the package that contains this function. A package is used in the Oracle database to group many functions together by category for easier management.
Use this drop down list to select the database that stores the function. This is a particularly useful feature if you are using distributed databases.
Use this drop down list to enter the data type of the data returned by the function. For example, a function might return a character string (char) or a number.
Use this field to enter additional information about the function.
Use this field to change the unique name that Discoverer uses to identify EUL and workbook objects. When matching objects common to different EULs, Discoverer uses identifiers to locate objects in different EULs that refer to the same business object.
Available in Discoverer Plus
Use this check box to enable Discoverer end users to use this function in calculations.
Use this button to test the validity and accuracy of the information you have entered. The Discoverer Administrator generates a test SQL statement that includes the PL/SQL function and tests the query. This test validates the existence of the function (by properly naming and locating it), its owner, and tests the data return type.
Use this button to register a new PL/SQL function. A default function name is generated and fields are initialized with default values. You can then edit the default values to configure the new function as required.
Use this button to remove the PL/SQL function currently selected in the Functions field.
Use this button to display the "Import PL/SQL Functions dialog", where you search for and select the PL/SQL functions (defined in the database) that you want to import. After you import PL/SQL functions, you register them so that Discoverer Administrator can use them. For more information, see "How to register custom PL/SQL functions automatically".
This dialog becomes read-only if the current user does not have the Create/Edit Business Area privilege (for more information, see "Privileges dialog: Privileges tab").
This tutorial is a continuation of the SGDK article "Create an image".
Import the sprite.
Download the mug sprite from MEGA and put it in the res folder in your project's directory.
Open the previous project, and in resources.res, add the following line.
SPRITE spr_cup "cup.png" 8 8 BEST
The syntax here is as follows
type name path_to_file width_in_tiles height_in_tiles compression_type
That is, the sprite cup.png will be referred to by the name spr_cup.
The size of the sprite is 64×64 pixels, or 8×8 in tiles (1 tile = 8×8 pixels)
Compression chose the best BEST.
As with images, all resource information is in recomp.txt located in SGDK/bin
Write the code.
Copy the following code.
#include <genesis.h>
#include "resources.h"

Sprite* cup_obj;

s16 x = 0;
s16 y = 0;

int main()
{
    VDP_drawImage(BG_A, &img, 0, 0);

    SPR_init();
    VDP_setPalette(PAL3, spr_cup.palette->data);
    cup_obj = SPR_addSprite(&spr_cup, x, y, TILE_ATTR(PAL3, 0, FALSE, FALSE));

    while(1)
    {
        SPR_update();
        SYS_doVBlankProcess();
    }
    return (0);
}
Now, let’s break it down.
Sprite* cup_obj;
The Sprite type, as the name implies, stores sprites. The asterisk * shows that we are creating a pointer, so we will pass the sprite resource by reference with &. We will store the sprite in this variable.
s16 x = 0; s16 y = 0;
We created variables to store the sprite's coordinates.

The type s16 is a signed 16-bit number (it can hold + or - values); s8 and s32 follow the same principle.

There is also the type u16 – an unsigned 16-bit number; u8 and u32 follow the same principle.
SPR_init();
SPR_init – initializes the sprite engine (allocates space in VRAM for sprites)
Always call SPR_init at the very beginning, before adding sprites with SPR_addSprite.
VRAM (Video Random Access Memory) – 64 KB of RAM that stores the image tiles.
VDP_setPalette(PAL3, spr_cup.palette->data);
In this line, we set the colors in the 4th palette (counting starts from zero); the colors are taken from the mug sprite.
Sega Genesis supports 4 palettes of 16 colors (PAL0-PAL3), and stores them in CRAM.
cup_obj = SPR_addSprite(&spr_cup, x, y, TILE_ATTR(PAL3, 0, FALSE, FALSE));
SPR_addSprite adds a sprite to the screen, the syntax is as follows.
SPR_addSprite(sprite, x, y, tile_attributes);
Consider the tile_attributes.
TILE_ATTR(palette, priority, flip_vertical, flip_horizontal)
- Palette – Specify the palette that the tiles will use (in our case, the sprite)
- Priority – sets the priority of the sprite (tile), i.e. a sprite with a smaller number will overlap a sprite with a larger one.
- The rest are clear from their names.
In our case
TILE_ATTR(PAL3, 0, FALSE, FALSE)
- We used the 4th palette, in which, just above, we placed the palette taken from the mug sprite.
- The priority is 0.
- On the x-axis, we will not flip the sprite.
- Nor on the y-axis.

So we created a sprite and stored it in the cup_obj variable.
SPR_update();
Updates and displays sprites on the screen.
SYS_doVBlankProcess();
SYS_doVBlankProcess – does all the behind-the-scenes processing; you need it when you are using sprites, music, or the joystick. In general, it is always needed.
Now, compile and run. You should get the following.
Okay, all that's left is to make this cup move. Let's make it bounce off the boundaries of the screen.
Move the mug.
Add variables responsible for the speed and size of the sprite.
s16 x_spd = 3;
s16 y_spd = 3;
u16 cup_width = 64;
u16 cup_height = 64;
In the while loop, add the following code.
x += x_spd;
y += y_spd;

if(x > 320-cup_width || x < 0)
    x_spd *= -1;
if(y > 240-cup_height || y < 0)
    y_spd *= -1;

SPR_setPosition(cup_obj, x, y);
The sprite is constantly moving at a given speed.
x += x_spd;
y += y_spd;
If it has gone beyond the boundaries of the screen, then change the speed to the opposite.
if(x > 320-cup_width || x < 0)
x_spd *= -1;
if(y > 240-cup_height || y < 0)
y_spd *= -1;
And set the sprite position to x,y.
SPR_setPosition(cup_obj, x, y);
As a result, we got a cup sprite that bounces off the walls.
Final result.
https://under-prog.ru/en/sgdk-move-the-sprite-across-the-screen/
Hi,
If we render a partial with a name such that there also exists an instance variable with the same name (e.g. @identity) in the controller, then passing :object => nil to render() and calling (!identity).inspect in ‘_identity.html.erb’ prints false, while identity.inspect prints nil.
Here is a more specific description -
Let there be a controller named Check.
class CheckController < ApplicationController
def index
@identity = nil
end
end
views/check/index.html.erb contains -
<%= render :partial => ‘identity’, :object => nil %>
views/check/_identity.html.erb contains -
identity is <%= identity.inspect %> !identity is <%= (!identity).inspect %>
calling localhost:3000/check/ prints -
identity is nil
!identity is false
NOTE:
- There must be a controller instance variable with the same name as the partial.
- I have been able to trace the problem down to between lines 31 and 42 of file
I’m using -
Rails 2.2.2
Ruby 1.8.6 (2007-09-14 patchlevel 111) [i386-mswin32]
RubyGems 1.3.1
Windows XP Home Ed. SP3
I’ve also uploaded a sample app folder at
- thanks
https://www.ruby-forum.com/t/bug-nil-returns-false-for-partial-instance-variable-in-specific-conditions/155061
I have a project utilizing ASP.NET MVC and Razor page layouts. The page in question will be a survey whose questions, datatypes, and answers have been configured by an admin user and retrieved from a database. For example:
public class ExampleViewModel
{
    // the user-defined question
    public string Question1Text { get; set; }

    // this is an enum with "Text", "YesNo", "DropDown"
    public AnswerType Question1Type { get; set; }

    // this would hold options for the drop down list
    public string Question1Options { get; set; }

    // the user-input answer
    public string Question1Answer { get; set; }
}
What I am not sure about is how to structure the Razor view to create the appropriate type of form input field depending on the AnswerType. I seem to recall something about creating templates for the various DataType() annotations, but I am not sure where to start looking at that, and whether it applies in this case.
--------------Solutions-------------
You want to use Templated Helpers - Here is a good walkthrough -
In the helper itself you can do stuff like:
@if (model.AnswerType is xxx)
{
<button> xxx </button> - or your html
}
etc
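Fleshed out, such a templated helper might look roughly like the following. The file location, template name, and the use of a single question slot are assumptions made for illustration; a real survey would more likely loop over a collection of question models.

```razor
@* Views/Shared/EditorTemplates/ExampleViewModel.cshtml -- hypothetical sketch *@
@model ExampleViewModel

@if (Model.Question1Type == AnswerType.Text)
{
    @Html.TextBoxFor(m => m.Question1Answer)
}
else if (Model.Question1Type == AnswerType.YesNo)
{
    @Html.DropDownListFor(m => m.Question1Answer,
        new SelectList(new[] { "Yes", "No" }))
}
else if (Model.Question1Type == AnswerType.DropDown)
{
    @Html.DropDownListFor(m => m.Question1Answer,
        new SelectList(Model.Question1Options.Split(',')))
}
```

The view would then invoke it with @Html.EditorForModel() or @Html.EditorFor(m => m), letting MVC pick the template by type name.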
http://www.pcaskme.com/form-fields-generated-based-on-variable-types/
Nominal types, such as classes and interfaces, give us names to "hang" other names on - methods, data member names, nested types and classes, etc. Interfaces/mixins/traits provide some challenges vs. the well known single inheritance "v-table" of virtual methods, but I found an interesting IBM paper on efficient execution of interface defined methods.
So my question is: What about structural types? The structural types used for (among other things) the "static duck typing" that Scala most recently has brought to general attention?
What is a means for a) correct and b) efficient execution of method resolution, data member access, etc.?
To make it interesting, consider two examples.
-- One structurally typed argument
fun print-name(thing : { .name(-> Str) }) =
print(thing.name());
-- Arg is List of structurally typed things
fun print-names(things : [ { .name(-> Str) } ]) =
for x <- things in
print(x.name());
I considered the first case and thought that the compiler could pass a hidden function parameter, or some indexes that provide a route to a method in a table of the parameter.
But then I considered the second case, and my (relatively speedy) scheme fell apart. In the second case, any given item in the list could (1) be some object whose class defines a virtual or final .name method, noting that final methods don't typically show up in vtable/RTTI type data; (2) it might be an object that implements a .name method required by an interface; (3) it might be an object who gets an actual .name method definition from a mixin/trait; (4) or it could be some object itself instantiated (via new) directly from some unnamed structural type.
I don't see a way to distinguish this mess of cases until runtime, particularly in the second example of a list of structurally typed objects, and how to compile such code for efficient execution.
I tried CiteSeerX but didn't turn up any papers directly addressing approaches to analysis and compilation.
Any pointers? Thanks VERY much in advance.
Scott
p.s. (I'm very interested in this question because I think an answer might also help me to efficiently compile a set of general, functional record operators and record type operators described in a paper by Cardelli.)
In a dynamic language, the problem you describe tends to evaporate due to the large amount of data readily available in the environment. You also wouldn't need to specify '{.name(->Str)}' for your functions.
In statically compiled languages, however, you're ignoring one of the capabilities you possess: you're free to lose data that isn't leveraged by a client of the object.
This 'collection' you've developed prior to passing it into the 'print_names' method will have a statically determined type just like everything else, and that type might look something like [{.name(->string), .favorite_color(->(red|blue|green)), .quest(->Goal)}].
The necessary type of this collection could be annotated by a programmer or simply inferred from the set of clients that ultimately observe the collection. And, ultimately, each object with a reference interred to this collection will need to expose the associated parameters.
If there are no side-effects, it's worth noting that the simple implementation would be for the collection to simply map to a collection of flat records of possibly lazily computed values. That's what Haskell would do. However, it is clear that you're assuming an object-oriented collection with 'methods' that are statically determined and whose names aren't available to a dynamic dispatcher.
In C++ this sort of thing is sometimes done by inverting templates:
// ...these usually constructed with macros...
template<class T, class R> struct get_name {
    static R v(void* pT) { return ((T*)pT)->name(); } };
template<class T, class R> struct get_quest {
    static R v(void* pT) { return ((T*)pT)->quest(); } };
template<class T, class R> struct get_favorite_color {
    static R v(void* pT) { return ((T*)pT)->favorite_color(); } };

class NFCQ {
    void* pObject;
    string (*pfnGetName)(void*);
    Color (*pfnGetFavoriteColor)(void*);
    Goal (*pfnGetQuest)(void*);
public:
    template<typename T> NFCQ(T& obj)
        : pObject(&obj)
        , pfnGetName(&get_name<T,string>::v)
        , pfnGetFavoriteColor(&get_favorite_color<T,Color>::v)
        , pfnGetQuest(&get_quest<T,Goal>::v) {}

    string name() { return (*pfnGetName)(pObject); }
    Color favorite_color() { return (*pfnGetFavoriteColor)(pObject); }
    Goal quest() { return (*pfnGetQuest)(pObject); }
};
One could use NFCQ to wrap any object, whether it be final or dynamic, with something that grabs its name, color, quest, etc. One could even make a collection (std::list<NFCQ>). The use of template computed methods acts a great deal like a typeclass with a default derivation that can optionally be specified by the programmer:
template<> struct get_name<MyObject,string> {
    static string v(void* pO) {
        return ((MyObject*)pO)->MyObjectName();
    }
};
Anyhow, sprucing this solution up a bit, providing greater safety than is achieved by use of 'void*', etc. is entirely possible, and is essentially what you'd end up doing for collections of arbitrary types in structural typing. Properly, what you have is a collection of [(objectRef,typeclassRefA,typeclassRefB,typeclassRefC)] four-tuples that are known by the compiler and accessed in the code.
But before you jump on a solution like this one, you should ask yourself: to what degree do I really need objects with side-effects that don't even know their own method names? Why not just stick a message dispatch table on every object that will potentially receive messages from an unknown source, and be done with it? Am I even taking the right approach here?
You can always map method names to unique integers to reduce lookup times. It has always been my experience that increasing uniformity among components of the language buys you features, and the increased simplicity often makes automated optimizations viable that gain back most of what you lost. If you are developing a new language, go ahead and stick vtables on objects for reference to 'final' fields and such if it simplifies this other stuff... you're still free to optimize the message dispatch if you know the exact type of an object statically, and you won't need to deal with this slow 'typeclass' approach to collections of objects (which buys you back a bit of performance and space).
but my current hobby project is a structurally typed language which is pretty much this sort of thing.
For the collection of things, I don't particularly have a problem with a variety of options. Something in the list is guaranteed to be a sub-type of {.name->string} because structurally its type has (at least) a member name that returns string. If the source of that member was the type itself, or a trait/mix-in, or a super-type... that's all already done before the compiler does parameter checking to let an object into the list.
Granted, most of the mess still can't be resolved until runtime. You don't know specifically what method will be invoked since the runtime type might overload it, just that to get there you do member-access 'name', invoke. That invocation then looks at the compile time type, and the runtime type to do proper dispatch to the most appropriate method (based on some rule system you setup). That dispatch (at least for me) can be cached if desired since types are constant entities. For my current [naive] prototype that is... 'fast enough'. It's not in the top targets for optimization and even hitting many small methods the dispatch remains a tiny fraction of the processing time.
Morrow (previously mentioned here and here) uses a form of evidence to index the values of a record. Perhaps it'll help you.
I wonder if you could solve your problem by putting the "hidden parameters" everywhere, in addition to function parameter lists. So, in your list, each element would actually be the regular object pointer along with the hidden parameters.
var people : [ { .age(-> Int), .name(-> Str) } ]
people <- ...
// Each element in 'people' is actually a 3-tuple
// (object, age_func, name_func)
Width subtyping becomes difficult, but it might be workable:
var things : [ { .name(-> Str) } ]
things <- people
// Use the same data as 'people', but have an
// indicator at the beginning of 'things' that
// says which of the elements of the three
// tuple are actually relevant. In our case,
// it's the first and third.
I haven't worked this out past this simple example, so there could be a point at which this scheme fails to work.
Also, Daan Leijen's paper on extensible records seems related.
For statically type-checked languages, an efficient implementation is outlined in Xavier Leroy's paper on ZINC.
Below is a rough summary from my memory, somewhat generalized to non-ML languages.
Let's assume your language has some notion of a named function signature used for resolving the function being invoked via an interface. This named signature is presumably a (function_name, function_signature) pair or the type-annotated/mangled function name or its moral equivalent. In the case of ZINC, Caml/OCaml doesn't allow function overloading, so the name is sufficient as the named signature. Your language may use something more exotic, but presumably it has some notion of function equivalence classes by which function call sites are annotated for resolution purposes.
As I remember Leroy's scheme, your linker/class loader needs to keep an enumeration of all named function signatures that may be invoked through interfaces. (In your case, this may be all member functions.) When a class is loaded, the linker/loader needs to assign a unique integer to each of the named signatures that may be called via an interface, and find (preferably the smallest) a modulus which maps these integers to unique small integers via modular division. This modulus and a table of function pointers (similar to a vtable) are then associated with the class by the class loader. (I imagine you would make them the first and second slots in every vtable to avoid bloating each class instance.) The linker/loader also needs to patch every interface call site with the integer corresponding to the named signature of the called function.
Function resolution is as follows: the caller finds the modulus and the function pointer table for the target. The unique integer for the called function equivalence class is then converted into an offset into the table by modular division. The function pointer is retrieved from the table and invoked.
In the worst case, the modulus ends up being equal to one more than the largest unique integer assigned to the functions exported by the class. However, Leroy's paper contains an analysis which estimates that the mean is much much smaller than the worst case. (It may average a small multiple of the number of exported functions. I forget.)
In essence, this scheme is a hash table of function pointers. However, compile-time type checking has determined that an entry will exist, so the call site doesn't have to deal with the case of missing entries in the table. Also, the linker/loader has sized each class's table such that there are no collisions, so the call site doesn't have to deal with this case.
Modular division is slow on most CPUs. Presumably, one could find another parameterized compression function that is more efficiently calculated than modular division, or the linker/loader could restrict itself to moduli that can be efficiently implemented as a small number of fast operations.
If the majority of functions in your language are invoked via interfaces, it may be most efficient to get rid of the traditional vtable entirely and put the modulus at the start of the function pointer table, and store a pointer to the function table where the vtable pointer would normally be stored. If I remember correctly, this approach is what is actually outlined in Leroy's paper.
I use binary search on tuples of types and methods, O(log(N)).
I didn't fully understand your post. As I understand it, the size of the lookup table becomes #(methods) * #(types). Can your comment be abbreviated as: Leroy gives an O(1) lookup at the expense of growing the lookup table? Or is it a hashed map?
[Forget it, got it]
http://lambda-the-ultimate.org/node/3173
Agenda
See also: IRC log
<scribe> Scribe: Roy Fielding
<scribe> ScribeNick: Roy
DC: we haven't published anything in
a while
... a working draft sort of thing
NM: I volunteer to scribe next week
<dorchard> possible regrets, may be on a plane
DC: regrets for next week
VQ: Any objections the minutes of 12 April telcon?
[no objections]
RESOLUTION: Accept minutes of 12 April
DC: it would be great if they used a
URI, but they don't
... unclear if this is a new issue or an old one
VQ: Can it be addressed in the
context of one of the existing issues [scribe missed which
ones]
... preference for issue 41
NM: 41 carries too much baggage already, perhaps 9 (no)
NW: perhaps if it is hard to find the issue, it deserves a new one
<DanC> "The decision to identify XML namespaces with URIs was an architectural mistake that has caused much suffering for XML users and needless complexity for XML tools. "
<Zakim> DanC, you wanted to note
DC: it seems we have failed to convince at least one person
VQ: inclined to proceed with a new issue
RF: okay by me
<DanC> issue makingNewTerms ...
<DanC> issue linkRelationshipNames
DC: trying to think up a name
NM: preamble about media types would indicate relationship to issue 9, but I guess that was just a lead in -- we should be clearer that this is about the short name issue
VQ: just link relationships, or broader?
NW: would prefer broader issue of short names
<DanC> standarizedAttributeValues
<DanC> hmm
RF: standardized attribute values in
content
... I meant attribute values in general, not just in XML syntax
<DanC> standardizedFieldValues
RF: works for me
<Norm> Works for me
[no objections]
RESOLUTION: create new issue standardizedFieldValues-51
DC: does anyone else think they should use URIs?
<Norm> Interestingly Atom "gets this right" AFAICS. "foo" =>
RF: I proposed the IANA-based registry used by Atom relationships
DC: what does it mean to add a relation name? Can they be a URI? How do you get an IANA name?
<Norm> I suppose someone should define the mechanism for adding to the registry, writing an RFC maybe?
DC: suppose I just introduce a short name
NM: is it formally defined as a relative URI?
RF: it is formally defined as a suffix of the IANA base URI if the value is given as a short name instead of a URI
<scribe> ACTION: DanC to introduce new issue standardizedFieldValues-51 [recorded in]
<DanC> (hmm... "standardized" is perhaps narrower than I'd like, but no matter)
VQ: shall we wait and see the feedback from the introduction before continuing?
[agree]
NW: CR was published, in the process
of implementation reports
... inclined to continue this until PR
DC: looking at how this impacts other
(existing) specs and what tests are needed
... trying to address concern about W3C having many individual specs that don't always work well together
... for example, Chris looked at this and provided examples where various specs (like CSS) should be updated/revised to reflect the change
NW: I don't think it would be appropriate for CSS to say anything about xml:id because the current [algorithm?] will pick up the new id automatically because it starts with the infoset
NM: I think both views of this are
right, there is a case to be said that the infoset way is
architecturally better
... OTOH, Dan is right as well and we need to provide [details missing]
NW: agrees in general
<Norm> ACTION: Norm to point out to the Core WG that it would be good to get the CSS working group to buy into xml:id [recorded in]
<DanC> (the corresponding concern applies to xpath etc.)
VQ: I guess we can conclude that we should not close the issue? Do we agree?
DC: yes, but would like to hear from other TAG members
NW: Xpath 2 has support for xml:id construction, Xpath 1 can support it providing that it starts with an infoset
DC: surprised, so that means something that used to conform will no longer conform?
NW: both still conform
[technical discussion of XML processing continues by NW, NM, and DC, specifically regarding how many XML technologies are now specified by way of the infoset rather than a specific document format, and how that impacts conformance testing]
RF: This infoset issue may lead to problems in regard to the other discussion about binary XML and the perceived notion that XML == text.
VQ: let's get back to the specific issue at hand
<DanC>
DC: I would like for this section C to have a test case prior to going into effect
NW: would like Dan to send mail to
public-xml to that effect
... I don't know how to construct a test for CSS, but the introduction of xml:id does not change historical documents and its presence will be ignored by parsers ignorant of xml:id. The CSS spec doesn't need to say anything about that.
VQ: I suggest revisiting this after Norm completes action ... are there any other specs beyond CSS that are impacted?
DC: six specs are listed
VQ: let's move on
VQ: Henry is not here, but can Ed provide his feedback?
Ed: I sent HT some feedback this morning but haven't heard back yet (my delay)
VQ: we need to reply to the WG by the
end of this month
... and work on a longer document
Ed: HT is working on the longer document ... after feedback, will have a better idea how to proceed toward sending comments to WG
VQ: should we prepare something specific for the XRI team?
Ed: yes, they deserve a direct feedback as opposed to a general reference
RF: agree on direct response (as well as later general document)
VQ: running out of time, will we have time to make a TAG decision?
DC: we already have general feedback in the form of the webarch doc
DO: we need to take a look at the
examples given and explain how we can solve those problems using
URI, HTTP, etc.
... on my blog, I got comments about change of ownership of a domain and broke it down into examples
... that show how the points can be responded to
<DanC>
<dorchard>
<Zakim> DanC, you wanted to propose: 1. XRI follows the pattern of an administrative hierarchy of delegation ending with a path, 2. http/dns handle that case, and is ubiquitously deployed
DC: I wonder what parts of that argument would fail to convince?
DO: what's not established is
"http/dns handle that case"
... they wrote a document that shows (in their mind) why 2) is not the case. What we need to do is come up with examples that show an alternative interpretation/solution to the examples they provided in the documents.
DC: did their writing convince you?
DO: it did give me pause to wonder about the two scenarios already mentioned
Ed: most cases of domain change can be handled by redirects
NM: is it obvious to people on this call that redirect is something that we can point to for longevity of URIs?
DC: yes, one of the many reasons why
HTTP is better for this type of thing
... I am willing to try to make that case (am writing a related article)
VQ: can you do that such that we have something to approve next week?
<DanC> (my target is now end of day weds)
<DanC> ACTION DanC to elaborate on "http/dns handle the case of an administrative hierarchy followed by a path"
Ed: we should be able to have enough material to reply next week
Ed: I have this afternoon set aside for this
<DanC> sounds like ed's action continues
Ed: should it be in finding form or just an email?
NM: perhaps less formality is desired for a response to a WG
Ed: agree
DC: endPointRefs-47
<DanC>
DC: suppose we just withdrew this from our list?
NM: what about the general concern
that they are using something other than URIs for general
identity?
... what WSA did was remove the distinction that indicated a parameter was being used for identity, but they didn't remove the mechanism itself. Some people still use that feature for the purpose of identification.
DO: similar to cookies in that WSA
does not prevent the use of those fields for the sake of passing
identifying data
... but not all such fields are used in that way
<DanC> (hmm... [[ WS-Addressing is conformant to the SOAP 1.2 [SOAP 1.2 Part 1: Messaging Framework] processing model ...]] )
NM: I think these questions are still present, and though not directly tied to this issue it may be our last chance to deal with non-URI addressing
VQ: we are out of time
ADJOURN
http://www.w3.org/2001/tag/2005/04/19-minutes.html
Slashdot
Why not Ruby?
flounder_p queries: "I have recently started playing with the Ruby programming language and think it's really great. I was just wondering why you guys think Ruby has not caught on more in the open source community than it has? How many of you guys are using it? Will it ever catch on or will it always be looked at as yet another scripting language? Don't get me wrong scripting languages are great (and I live by Perl) but I still hope to see Ruby catch on more. I would like to hear opinions on things on why Ruby is good or bad not on why OOP is good or bad. We have already had that discussion here." On a side note, a little birdy tells me that BlackAdder has plans for Ruby support in its next beta.
1995: Who needs Java when we have C? (Score:3)
Or 'who needs Linux when we have UNIX®', 'who needs Netscape when we have Mosaic', etc.
Don't write Ruby off until you play with it. And, having played with it, I've written it off. I was looking for something on the client side that was more powerful than JavaScript but not as hefty as Java. Perl moved across the wire would be beautiful and that was the goal of the Penguin [cpan.org] module. Alas, it seems to have withered on the vine.
Be a little more open. It'll keep you young.
Less than truthful advertising... (Score:3)
"Python types are more limited (no inheritance/subclassing; cannot add methods to existing types)." What are they smoking? No inheritance in Python? Python has had inheritance since day one, as far as I know. Cannot add methods to existing types? No? How about:
class chair:
    "Class with no methods"
    pass

def siton(obj):
    print "You sit on the chair"

chair.siton = siton
Just an example. I don't know Perl or Tcl well enough to comment, but when I find mistakes in simple factual claims, I get a whole lot more skeptical of other claims that I am myself unable to verify.
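The same point holds in current Python 3 syntax (the snippet above is Python 2); a minimal runnable version:

```python
class Chair:
    """A class defined with no methods."""
    pass

def sit_on(self):
    return "You sit on the chair"

# A method can be attached to an existing class after its definition...
Chair.sit_on = sit_on

# ...and user-defined classes support inheritance as usual.
class Armchair(Chair):
    pass

print(Armchair().sit_on())  # → You sit on the chair
```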
Re:Object Oriented programming is overrated (Score:3)
When you domain model is appropriate for OOP, then by all means use an OO language. Sure, you can do it in C. But you could also do it in assembler. If you're going to use OO, then use a language that supports and facilitates OO.
Re:Namespaces (Score:3)
What annoys me more, is 'OO' languages which tie namespaces to modules or objects, instead of the other way 'round.
-- Crutcher --
#include <disclaimer.h>
Re:Because Ruby Rocks! :-) (Score:3)
Such orthogonality has aesthetic merit, but is bad for performance. There are a lot of things one can do to reduce the cost, but there is a cost.
Variable punctuation is evil regardless of whether it determines scope or type. Sure, some people like putting an MFC-ish "m_" before member names etc., and they should be free to do so, but they shouldn't be forced to do so by the language.
Again the performance issue rears its ugly head, and also the issue of assignments etc. having side effects. Sure, it can be "cool" to overload access/modification, e.g. to enforce range/consistency limits or to create "magic" variables such as r/theta when what you're really storing is x/y. However, the cost and potential for abuse aren't worth it. You can already get almost the same effect with explicit accessor functions, or with a keyword attached to declarations. People who really like being able to go in after the fact and change the semantics of an assignment in one of their classes can just always use the keyword; people who want to be able to do the same for other people's classes generally have no business doing so lest they cause all sorts of "spooky" failures when they violate the class implementation's internal dependencies.
BTW, I'm not really that hung up on performance, in the usual sense. If your application doesn't run fast enough in an interpreted (including byte-code interpreted) language, you should profile, refactor, and rewrite necessary portions in native code. However, I am concerned with performance in the sense that I hate to see billions upon billions of cycles wasted for little or no functional benefit. Machine cycles are cheap, but programmer cycles aren't. If a language runs 10% faster then that might be enough for some large number of applications, so instead of all that refactoring and rewriting I just mentioned the programmers can spend more time on adding features or making the program more robust.
What, and Python doesn't? (Score:3)
If you're going to bash a language, you really should make it a point to at least learn the language first.
Python 2.1 is free software (Score:3)
Please don't spout FUD regarding Python's license. Instead, call a spade a spade and say, "Python is free software, but is not GPL-compatible. In this sense, it's no worse than the Mozilla Public License."
Re:Perl (Score:3)
It could be said, as well,
Currently I'm using, and loving, Perl. It has a very active and helpful community, plus tons of modules that come with the system. While I do like Python, it doesn't have the support behind it that Perl does. Thats why I use Perl, and not Python.
Java seems to have the mark of the corporate beast on it: while it has its benefits and benefactors, it hasn't kept steam like Perl has. Personally, I'm liking the looks of JDK 1.4, with select(), assert(), faster Swing, and mostly-Perl-friendly regex classes built in.
Another Scripting Language, Ho Hum (Score:3)
If Ruby has some features that I need and no scripting language I already know fits the bill, I might make an effort to learn it, but I'm not going to go out of my way to pick up Yet Another Scripting Language.
Popularity of Ruby in Japan (Score:3)
I've seen Ruby used for AI/machine-learning code as well as some math applications. It turns out that one may extend the language using other code, such as C. Add in the untyped OO as others have discussed and you can easily write programs for multiple platforms/languages without giving up speed (write speed-critical code in a C extension).
Re:Python (Score:3)
For more Ruby info, check out their homepage [ruby-lang.org]
-----
"Goose... Geese... Moose... MOOSE!?!?!"
Object Oriented programming is overrated (Score:3)
I say Object Oriented Programming is highly overrated.
Many programmers I know seem to believe there are about 2 ways to program in the world: procedural (uncivilized, messy spaghetti code, no support for large programs, etc.) and object oriented (civilized). The fact is that all modern languages spanning many different paradigms support civilized programming, even if they are not "object oriented". And what I infer from this is that objects are primarily useful only when the data you are processing are neatly expressed as objects, that is, they actually *are* objects. (For instance, if they describe operations, then higher order functions are much handier.)
Every civilized language that I know of supports the features that I believe most people think of as object-oriented. Even languages which are adamantly *not* Object Oriented (such as SML, of which I am a stalwart fan) support them.
Some examples are aggregate data (in the case of SML, aggregate data is supported much more cleanly than OO languages I know of, since one can make anonymous "classes"), abstract data types, exceptions, threads, polymorphism, garbage collection, and type safety. (The advanced languages I'm implicitly referring to also support some really nice features typically not in OO languages, such as higher-order functions, static typing, parameterized modules, and generics or "parametric polymorphism".)
So what really separates Objects from regular old modern programming? I say two things: inheritance and subtyping. Essentially, if you are not making use of subtyping (using it for polymorphism doesn't count, since other modern languages support polymorphism in their own way) in your program, then you aren't using any OO-exclusive features. Do you actually write programs the way that they introduce OO in textbooks? (A motorcycle derives from wheeled_vehicle, which derives from vehicle?)
So I guess what I'm saying is, be sure you know what you mean when you say "OOP", since there is very little which is particularly special about OO languages. In my opinion, there is not much need in a scripting language for subtyping. So I say that emulating Java or C++ is not a very worthwhile goal, except inasmuch as it might engender comfortable syntax/semantics for those who have only used those kinds of languages. Let's look to some other advanced languages to get us useful features in our scripting languages, and encourage the use of them.
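For what it's worth, here is what the textbook-style inheritance/subtyping being discussed looks like in Ruby (a minimal sketch; the class names are just the usual textbook example, not anything from a real program):

```ruby
# Textbook-style subtyping: Motorcycle is-a WheeledVehicle is-a Vehicle.
class Vehicle
  def describe
    "a vehicle"
  end
end

class WheeledVehicle < Vehicle
  def describe
    "a vehicle with wheels"
  end
end

class Motorcycle < WheeledVehicle
  def describe
    "a motorcycle (" + super + ")"
  end
end

# Any code written against Vehicle accepts a Motorcycle too --
# this is the subtyping that plain polymorphism doesn't give you.
def report(v)
  v.describe
end

puts report(Motorcycle.new)  # => a motorcycle (a vehicle with wheels)
```

Whether that hierarchy buys you anything over plain functions and aggregate data is exactly the question the parent post raises.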
The Truth Ain't Purdy (Score:3)
Why not Haskell?
Why not Mozart-Oz?
Why not Prolog?
Why not Pict?
Why not Programming Language X?
The truth is, most people do not choose a programming language based on the technical merits of the language, but instead, most people choose a programming language based on a mix of the following list of reasons:
Hey, it's the reason most people have for using their natural language. I use English because it is the first language I learned, but English is not necessarily the best natural language.
For example, most people simply aren't educated in computer science, and therefore don't understand Object-Orientation, functional programming, declarative programming, etc., and are therefore turned off by languages that they simply cannot understand. Why do you think that Visual Basic is so popular?
Never underestimate the horrible effects of legacy. It comes in many forms, from having large amounts of code written in previous languages, to only having experience with writing code in previous languages. If you have legacy code, then moving to another language requires a lot of work to migrate the code, or you could end up complicating things by keeping the old code base and introducing the new latest and greatest programming language. And the other form of legacy, mindshare legacy, is even worse. A programmer should constantly be on the hunt for tools that will make him/her more productive, but the fact is that most people are lazy and really only know how to efficiently code in one programming language... even when something better comes out, people that have already become efficient in their one favorite programming language are very reluctant to change. Why do you think that C++ is so popular?
It's obvious that hype and its flip-side, FUD, heavily influence the average person's choice in programming languages. Over the past 5 years, the ultimate way to sell a programming language was to fill the description of the language with all sorts of "Object-Oriented" buzz words. However, big dollar marketing campaigns have made at least two programming languages catch on: Java and Visual Basic. Meanwhile, FUD has been used to slam alternative programming languages into the background. Whoever thought that the words "procedural programming" would become programming language profanity?
Too many already (Score:3)
Too much diversity can be a bad thing, especially in open source where people have to be able to read the code to extend it.
Just my two cents.
Somebody's looking for free advertising (Score:3)
Given the number of posts above to the effect of "What the heck is Ruby?" -- as well as the lack of any critical details in the post (such as comparisons between Ruby and the alternatives) -- one can't help but get the impression that the poster is merely looking for free advertising for his/her pet language.
Re:Answer is simple (Score:3)
I don't know how many times I've heard a fellow techhead complain "Yeah, I went to work for these guys. But they have a proprietary system that they have to teach to people. It took weeks to understand." In this world, "proprietary" means "rarely used and I've barely seen it before". That's why you see ads in the paper for jobs requiring Perl and C++ and fewer requiring Ada programmers.
Give Ruby time, a strong open source (or not) base, and people using it to create prefabricated programs requiring little reprogramming, and it will get the audience it deserves.
From Java to Ruby (Score:3)
In my case, I came to Ruby from the Java world. A friend forwarded me an email announcing the release of the Programming Ruby [rubycentral.com] book and so I decided to check out the language. Since I enjoy learning about new programming languages I wasn't against learning "yet another language." A search on Google yielded the main Ruby-lang [ruby-lang.org] web-site, and after some reading I decided it was worthwhile to take the time to really learn it. That was about 4 months ago.
Since then, I've read through the on-line version of Programming Ruby as well as the printed version, which I recommend very highly. It is one of the best computer language books I have ever read (and I have a Computer Engineering degree.) I have also gotten very good at programming Ruby after only a month and a half of serious study. In fact, I'm probably as good (or better) at programming Ruby as I am in Java (which I've been using for 3 years.) Now that is impressive. Of course I will admit I've been somewhat obsessive with Ruby and have studied it very extensively over this last month and a half, so your mileage may vary. But still: 3 years versus 1.5 months? Hmmm....
Of course I can't say the same wouldn't happen if I seriously studied Perl or Python, but I will say I don't intend to learn those languages now. They are fine and dandy for what they do, but just like all those out there who don't want to switch to Ruby since they know Perl (or Python), I don't want to switch to them because I know Ruby. So given that, I can probably respect those who decide not to learn Ruby for this reason.
But I have heard other Ruby users who have used Perl or Python say it is an improvement to them in some ways, so it may actually be worthwhile to at least take an hour or so to give Ruby a good look. I would say the same for Java programmers. If you've never touched a so called "scripting language", learning Ruby will change how you think about programming permanently. I'm sure former Java users now using Perl or Python could say the same thing. Of course Ruby is much more than a scripting language. In fact, I really wish I could just totally stop programming Java and just use Ruby (since it can solve the same problems), but I really don't think that is possible now since Ruby is so new (to the United States.) And of course Java is pretty much the corporate mantra these days.
But in the long run I could certainly foresee Ruby replacing Java in the enterprise. In fact, I think this should in some way unite Perl, Python and Ruby users, since we have a "common enemy" in Java, heh. Of course Java has its uses too I suppose. And before Java advocates flame me, consider that I hold this view after 3 years of being a Java advocate and switching to Ruby for about 1.5 months (as noted above.) That's how much better I think Ruby is compared to Java.
Now other complaints about Ruby usually revolve around its newness.
So, to conclude, at least give Ruby a chance and try not to be so fanatical about programming languages
:)
--
Ryan
Re:This has been mentioned before, but... (Score:4)
Consensus Answer: I don't need it (Score:4)
If people don't need it, it doesn't stand a chance. The helpfulness of Ruby is outweighed by the cost of learning it.
The cost is greater than the benefit, just like [insert your other underused neato technology here]. Come back when the benefits are greater than the costs; preferably when they're MUCH, MUCH greater.
We use OO-style in scripts all the time (Score:4)
If you are doing sysadmining, you may or may not want OO. Perhaps you have some conceptual objects, perhaps you don't. For the most part, I'd suggest that you don't.
I've found that for web applications, the database handles so much of your processing that many of your public display components are simply procedural displays.
However, if you are building an infrastructure, real OO style approaches can help you build up your concepts. I've found that we can often take advantage of OO design, even if not implementation.
We try to do, as you call it, text-book OO. It helps as the project scales. We have tremendous code-reuse for our client projects. This lets us stay in business despite being smaller than our competitors, because we can reuse so much of our code base. It also lets us undercut the big-boys, because we've maximized code reuse, not just taken what we've got.
Alex
Namespaces (Score:4)
I really like coding in object oriented languages. Right now I'm working on quite a major project in Python, and objectising everything is making it a lot more convenient. I'm using inheritance and polymorphism and so on, but it took me a while to figure out how that was useful in a scripting language where there isn't any strong typing.
I don't have anything against procedural languages, although I tend to write in objects more when they're available just because I'm more used to the technique. In general though, I think one of the most useful things that I get out of using objects besides all the polymorphism stuff is namespaces.
Classes are simply a really convenient way to package related things together without getting them messed up. I know this can be done without too much trouble without objects using packages or naming conventions, but classes are a much more general way to do it. Certainly it's one of the main reasons I prefer C++ over C, even for relatively simple programs, because C doesn't have any natural way of assigning namespaces in a clean way that's guaranteed not to clash.
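Ruby, incidentally, separates this concern out: modules exist precisely for namespacing (and mixins), independent of inheritance. A minimal sketch (the module and class names here are made up for illustration):

```ruby
# Two libraries can each define a Parser without clashing,
# because each class lives inside its own module namespace.
module Markup
  class Parser
    def parse(text)
      "markup: #{text}"
    end
  end
end

module Settings
  class Parser
    def parse(text)
      "settings: #{text}"
    end
  end
end

puts Markup::Parser.new.parse("hello")    # => markup: hello
puts Settings::Parser.new.parse("hello")  # => settings: hello
```

The `::` lookup plays the role that package prefixes or naming conventions play in C.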
===
Ruby Resources (Score:5)
Here's some references
...
DDJ's January Article on Ruby [ddj.com] (Thomas and Hunt)
Ruby Presentation [pragmaticprogrammer.com] (Thomas and Hunt)
Programming in Ruby Book [rubycentral.com] (Thomas and Hunt. Available from Addison Wesley, online version is under an open content license)
And some web pages
...
Ruby Home Page [ruby-lang.org]
Ruby Central [rubycentral.com]
Why not not switch? (Score:5)
I think the simple answer is that most people are quite happy with the scripting languages they already use.
Many people enjoy Perl, many enjoy Python, some enjoy /bin/tcsh. The latter population should however, needless to say, be put into work camps. Many also enjoy other languages, but I see Ruby mostly as a contender with Python and Perl.
So why should people switch to Ruby? Because they can do everything in Ruby they can in their current choice? Not likely. They can, by definition, already do that. Because Ruby has an extensive library of ready-made code? No, because it doesn't have one compared to Perl or Python. Because it's a nice language design? That's not enough reason to learn a completely new language if the one you use does what you want.
I might be prejudiced here, but I basically believe that many who like Perl do so because it's a very free-form (write-only :-) language, suitable for quick hacks. And those who like Python do so because it is a "cleaner" language, suited to write easy-to-read code. Both camps also enjoy the fauna of ready-made objects/functions/classes/modules that lets you do things easily without reimplementing the wheel.
There are probably Perlies that think Perl is a bit too loose, and Pythonettes that think Python is a bit too strict, and these people can probably find a new friend in Ruby once it gets the library support that Python and Perl have.
But for most people that are already into Perl or Python, I think that the potential gain of switching languages simply isn't even close to the effort. And most Python-lovers don't want a looser language, just like most Perl-lovers probably don't want a stricter one. They're quite happy with what they've got.
In order for a new language to be able to make it as a strong contender to Python and Perl, I think it would have to supply something that neither language has, but that all programmers want (I can't think of anything matching that criterion at the moment; if I could, I would've implemented that super language already! :-) If the language was then in between Python and Perl, both sides would have an easier time switching.
But only being between Perl and Python, which is just about everything Ruby seems to be at this point, isn't a reason to switch. It's just an advantage to ease migration if you happen to have something unique. Until Ruby actually invents something that makes it that much more valuable, I think most people will stay with the language they already use.
I do however wish the Ruby developers the best of luck. It is indeed a quite nice language, one I could definitely imagine myself switching to if it gave me a clear advantage.
Some Annoying Features of Ruby (Score:5)
1. Strings are not value objects. Ouch! So you constantly have to worry about aliasing when you're passing strings around. Java got this right. (Though both languages fail on this count when it comes to date/time objects...sigh.)
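Point 1 is easy to demonstrate in a couple of lines (a sketch; dup and freeze are the usual defenses):

```ruby
# Ruby strings are mutable reference objects, not values:
name = "matz"
alias_ref = name
alias_ref << " is nice"
puts name          # => matz is nice  -- the original changed too!

# Defensive copies avoid the surprise:
safe = name.dup
safe << "!"
puts name          # still "matz is nice"; only the copy changed

# Freezing makes mutation an error instead:
frozen = "ruby".freeze
# frozen << "!"    # would raise an error at run time
```

So any method that stores a string it was passed has to decide whether to dup it, exactly the worry raised above.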
2. I18N support is poor. Again, Java did this right (or got it much better, anyway) and made Strings sequences of characters, not bytes. This forces you to worry about your character set at the place (input) where you're actually in a position to deal with it, and then you never have to worry about it elsewhere. Ruby has some things (such as the Integer#chr method) that just make no sense from an I18N point of view. Return the character represented by the receiver's value? In what charset?
3. Float uses the native architecture's floating point. So FP programs' behaviour may differ (in very interesting ways, if you work with numbers such as the infinities and NaN) from system to system.
4. It's only related to certain styles, of course, but the semicolon-free syntax is, for me, more annoying than the semicolons. For continuation lines, I often try to split at operators (+, =, etc.) and put the operator at the beginning of the continuation line. Since a statement can't start with an operator (aside from C/Java constructs like ++, which I don't use in those situations), this makes it very natural to see the continuations, but in Ruby I have to put backslashes at the ends of all the continuation lines, now, and worse yet, make sure I edit them in and out properly when reformatting.
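Concretely, the complaint in point 4 looks like this (a small sketch): the trailing-operator style parses without help, while the leading-operator style needs backslashes:

```ruby
# Trailing operator: Ruby sees the line is unfinished, no backslash needed.
total = 1 +
        2 +
        3

# Leading operator: each unfinished line must end in a backslash,
# or Ruby treats "total2 = 1" as a complete statement.
total2 = 1 \
         + 2 \
         + 3

puts total   # => 6
puts total2  # => 6
```

Editing those backslashes in and out while reformatting is the annoyance being described.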
5. The Time object includes time zone information. This is confusing. Most stuff (such as comparing two times) seems to operate on the UTC value of the time, regardless of the time zone. But does #hour return the UTC hour or the hour in that time zone? If the latter, we can have two time objects that compare equal but where a.hour != b.hour.
Time zones are complex things. UTC and GMT are not the same thing (as Ruby seems to claim). Time zones do not have standardised unique three-character abbreviations (which is what Ruby seems to use for them). The time zone support, besides being fundamentally broken in this way, is also implemented poorly; there's no easy way even to figure out the offset of the time zone of a given Time object.
And all this even before we start to get into date processing. Ruby doesn't seem to acknowledge the existence of different calendars. (Yes, even today different calendars are in use in a fairly major way. Take a look at a Japanese driver's licence if you don't believe me.)
I'm sure I could find more. And there's a bunch of stuff in Ruby that I like, too. But just from this glance, the language seems to have enough annoyances of its own that I can't see any reason for it to take over from Perl, Python, Java, or whatever.
cjs
Advantages over Perl (Score:5)
1) Ruby has much nicer OO syntax than Perl - the advantage is that when you go back to read the code after a month you can tell what's going on.
2) Perl's alarm doesn't work on the windoze platforms (sometimes in the corporate environment they make you use windoze); Ruby's timeout does.
3) Threads - With perl you actually have to compile a special threaded version. Ruby threads work - even on windoze.
4) Ruby has dRuby a distributed object system that is very easy to use (compared to SOAP, and other XML based approaches).
5) Hashes, arrays, strings have many more builtin functions (methods since they are objects) than Perl's Hashes, arrays and strings.
6) ease of writing extensions in C for Ruby(thoug
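To illustrate point 5, a few of the methods that come built into Ruby's collections (a sketch, not an exhaustive list):

```ruby
# Everything is an object, so collections carry their methods with them.
words = %w[ruby perl python]
puts words.sort.join(", ")               # => perl, python, ruby
puts words.map { |w| w.upcase }.inspect  # => ["RUBY", "PERL", "PYTHON"]

# Hashes with a default value make counting a two-liner:
counts = Hash.new(0)
"abracadabra".each_char { |c| counts[c] += 1 }
puts counts["a"]                         # => 5
```

In Perl, several of these (default-valued hashes, chained transforms) take idioms or helper functions rather than single method calls.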
http://slashdot.org/developers/01/07/08/1955209.shtml
Quickstart guide
Install this with npm:
npm install gatsby-theme-shopify-manager
Or with yarn:
yarn add gatsby-theme-shopify-manager
Set up your gatsby-config.js:
{
  resolve: `gatsby-theme-shopify-manager`,
  options: {
    shopName: `your-shop-name`,
    accessToken: `your-storefront-api-access-token`,
  },
},
Import a hook:
import {useCart} from 'gatsby-theme-shopify-manager';
Start coding. 🚀
Full documentation
The full docs are found at.
Contributing
To contribute to this repo, pull the repo and ask for the appropriate .env values for the /docs site. Then to start the project, simply run yarn start at the project root.
To add a new version, take the following steps:
- Increment the /docs version of gatsby-theme-shopify-manager to whatever it will be.
- Stage any changes you want to be part of the commit.
- Run yarn version within the gatsby-theme-shopify-manager directory.
- Change the version number to the appropriate release number (major, minor, patch).
- Run git push --tags and git push.
- Run npm publish.
https://www.gatsbyjs.com/plugins/zolon-gatsby-theme-shopify-manager/
Hello, I have a homework assignment due for my Intro to Programming class. I have no idea what I am doing and wanted some help. I completed it in RAPTOR and generated the C++ code from there, but my professor doesn't want us to do it like that; he wants us to write it ourselves. Can anyone help me translate this from the RAPTOR format? I would really appreciate it.
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string password;
    bool validPassword = false;

    while (!validPassword) {
        cout << "Please enter your password:" << endl;
        cin >> password;

        if (password.length() < 6) {
            cout << "Your password should be at least 6 characters long" << endl;
            continue;
        }
        // Scan for at least one digit.
        for (size_t i = 0; i < password.length(); ++i) {
            if (password[i] >= '0' && password[i] <= '9') {
                validPassword = true;
                break;
            }
        }
        if (validPassword) {
            cout << "Thank you, that is a valid Password!" << endl;
        } else {
            cout << "Your password must include at least one digit (1-9)" << endl;
        }
    }
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/442547/c-newbie
- fraserspeirs
I'm trying to write a script for the 1.6 app extension that will:
- Take some text passed in from the share sheet
- Format the text in a specific way
- Place the formatted text on the pasteboard
- Launch Pythonista and run another script that will take the text on the pasteboard and do something with it.
Here's the script:
import appex
import clipboard
import webbrowser

initial_text = appex.get_text()
# text processing stuff
clipboard.set(processed_text)
webbrowser.open('pythonista://NewFromClipboard.py')
Is it possible to launch a pythonista:// URL from the app extension? If not, is it possible to do something with objc_util?
Worst case, I can make it a two-step process but it would be great to have it in one place.
- fraserspeirs
Thanks. It's not a huge big deal to make it a two-step process. Just glad to know I shouldn't spend any more time on this.
https://forum.omz-software.com/user/fraserspeirs
Your ASP.NET MVC application needs reports. What do you do? In this article, I will demonstrate how simple it is to weave SQL Server Reporting Services (SSRS) into your ASP.NET MVC Applications.
Just about every application deals with data in one form or another. It also seems that no application is complete unless it contains at least one report. In ASP.NET MVC, the most basic option is to “roll your own” reporting solution. That is certainly feasible but it is not as preferable as using a tool that is dedicated to a specific task, reporting in this case. In other words, why reinvent the wheel?
Instead of going into extensive detail on the specifics of ASP.NET MVC and SSRS, this article will focus on integrating SSRS into ASP.NET MVC. Even if you are not familiar with these technologies, this article will give you a clear understanding of the power of each of these applications in their own right and when combined. For a more extensive discussion on the details of SSRS and ASP.NET MVC, please consult the CODE Magazine archives.
In order to apply the concepts in this article, you will need SQL Server 2008 with Reporting Services installed. In addition, you will need either Visual Studio 2008 or 2010 with either ASP.NET MVC 1.0 or 2.0 installed. The code examples illustrated herein were created with Visual Studio 2008 and ASP.NET MVC 2.0. The example report displays table and column metadata from the Northwind Traders sample database. If you don’t have the Northwind database, don’t worry. Any database will work because the same metadata structures apply to all SQL Server databases. It is also assumed that you will implement these examples locally under a login that has administrator privileges. This will avoid security issues that would normally be encountered in production where the solution is distributed across multiple servers and hardware. Don’t worry though. In Part 2, I will tackle those issues!
The roadmap for this example is very simple. First, each component will be developed separately. Second, the SSRS and ASP.NET MVC components will be brought together to form a consolidated solution. In Part 2, I will expand upon the concepts presented in Part 1 and I will discuss how to pass parameter values from the ASP.NET MVC environment to SSRS. In addition, I will review important issues that may arise when deploying from a development/test environment to a production IIS Server.
The SSRS Component
The example report will use one dataset based on the SQL Query in Listing 1.
Listing 1 shows a very simple query that uses INFORMATION_SCHEMA to return a list of tables, columns and associated data types for the current database. Figure 1 and Figure 2, respectively, illustrate the data source and data set required to drive the report. Figure 3 illustrates the design session for the sample report and Figure 4 illustrates the report in preview mode. As previously mentioned, if you don’t have the Northwind Traders database available, you can use any database you have installed. The SQL query will work with any database. Just set the data source properties appropriately so that the correct database is selected.
ASP.NET MVC views are Web Forms - minus the support for view state.
The only remaining task is to deploy the report. Since you have SQL Server installed locally, go ahead and use the local report server. Before you can deploy the report, you need to tell Reporting Services which server to deploy the report to. To do this, you must maintain the TargetServerURL property in the Project Properties dialog box illustrated in Figure 5. Once you have specified the Server URL, you can deploy the report and data source. To do that, simply right-click the main project node in the Solution Explorer and then select the Deploy menu option. Figure 6 illustrates how the newly deployed report appears in the browser.
As far as the SSRS project is concerned, that’s it. The report is deployed and is ready to be used. The next step is to create the base ASP.NET MVC component.
The ASP.NET MVC Component
The initial version of the ASP.NET MVC component is based on the default project template, sans the testing project. There are, however, a few additions required:
- Controller method to launch report
- ASP.NET Web Form to host the Report Viewer control
That’s right! In order to pull this solution off, you need to incorporate ASP.NET Web Forms. The good news is that ASP.NET MVC is based on Web Forms. Look at any ASP.NET MVC solution and you will find the System.Web namespace. Therefore, in a very real sense, you aren’t adding anything new to an ASP.NET MVC solution.
With respect to the controller method used to launch the report, technically speaking, even that is not required. However, if at some point you wish to pass information to the SSRS context from the ASP.NET MVC context, then the controller method becomes required. On the other hand, if no such information passing requirement exists, then you are free to just call the report URL itself. Going one step further, the ASP.NET Web Form to host the controller isn’t 100% required. If all you want to do is launch the report, you can simply invoke the URL to launch the report in the browser as I’ve illustrated in Figure 6. However, if you do need to pass information from one context to another, then just as you need the controller method to send the data, you will need the ASP.NET Web Form to receive the data. In order to set the stage for the second part of this article, I will go ahead and incorporate the ASP.NET Web Form.
This brings up an interesting question, if ASP.NET MVC Views are based on Web Forms, then why do you need to include a Web Form? The answer has to do with view state. If you look at the source of an ASP.NET Web Form, you will find a lot of code that is dedicated to maintaining state in a manner similar to that of a Windows Forms application. That, after all, was a main design goal of ASP.NET; to open the world of web development to Windows developers.
With view state comes a potpourri of ASP.NET controls, which, as you might guess, require view state to work. The SSRS Report Viewer is one such control. ASP.NET MVC Views, their ASP.NET Web Form lineage notwithstanding, have no such view state. Consequently, ASP.NET MVC Views cannot use ASP.NET controls that require view state. To some, ASP.NET MVC provides a purer and cleaner web experience like Ruby on Rails. Still, some of those ASP.NET controls are awfully slick! As is often the case, the best solutions come from taking the best of different worlds and mashing them together for a consolidated solution. As you will see, apart from a few differences, ASP.NET MVC Views and ASP.NET Web Forms are not from different worlds and are not all that different.
With the SSRS report in place, let’s turn our attention to integrating the report into the ASP.NET MVC application.
Adding a Web Form to the ASP.NET MVC Application
Starting with the default ASP.NET MVC project, Figure 7 illustrates the newly added ASP.NET Web Form into the Reports folder. So far, it’s just an empty form. From the toolbox, you can easily add the MicrosoftReportViewer control. Figure 8 illustrates the newly added control to the TableListing.aspx page. As you can see, the ReportViewer control looks just like the browser view in Figure 6. Best of all, you get full navigation and export functionality for free!
With the Report Viewer control in place, you only need to perform a few more steps. First, you need to tell the control the location and name of the report. Figure 9 illustrates the necessary entries in the ReportViewer1 Properties dialog box. For the ServerReport object property, the following properties have been set:
As is often the case, the best solutions come from taking the best of different worlds and mashing them together for a consolidated solution.
- DisplayName: tablelisting
- ReportPath: /ASPMVCReports/tablelisting
- ReportServerUrl:
For the second step, you need to specify a controller method for the Home Controller. For this example, the method will be called TableListingReport(). Figure 10 illustrates how simple this new controller method is. For now, the method contains one line of code that redirects the browser to the report viewer page. Just to close the loop, I'll add an action link to the home view that points to the new controller method. Figure 11 illustrates how the home controller's index view appears. Clicking the Table Listing Report link launches the report as is illustrated in Figure 12.
Conclusion
If you have been wondering when you would ever combine Web Forms in an ASP.NET MVC application, hopefully, this article has helped answer that question. The example illustrated in this article is very simple. Nevertheless, it provides a strong foundation to build upon. Because a controller method was created and because the Report Viewer controller was hosted in a Web Form, the possibility exists to pass data from the ASP.NET MVC context to the SSRS report context. In Part 2 of this article I will demonstrate that concept in detail. In addition, I’ll also cover the deployment issues from the development/test to a production environment. Until the next issue - Happy Coding!!
https://www.codemag.com/article/1009061
Name | Synopsis | Description | Parameters | Errors | Examples | Environment Variables | Attributes | See Also
#include <slp.h>

SLPError SLPReg(SLPHandle hSLP, const char *pcSrvURL,
    unsigned short usLifetime, const char *pcSrvType,
    const char *pcAttrs, SLPBoolean fresh,
    SLPRegReport callback, void *pvCookie);
The SLPReg() function registers the URL in pcSrvURL having the lifetime usLifetime with the attribute list in pcAttrs. The pcAttrs list is a comma-separated list of attribute assignments in on-the-wire format (including escaping of reserved characters), as defined in RFC 2608. Registrations and updates take place in the language locale of the hSLP handle.
The API library is required to perform the operation in all scopes obtained through configuration.
hSLP
The language-specific SLPHandle on which to register the advertisement. hSLP cannot be NULL.

pcSrvURL
The URL to register. The value of pcSrvURL cannot be NULL or the empty string.

usLifetime
An unsigned short giving the lifetime of the service advertisement, in seconds. The value must be an unsigned integer less than or equal to SLP_LIFETIME_MAXIMUM.

pcSrvType
The service type. If pcSrvURL is a service: URL, then this parameter is ignored. pcSrvType cannot be NULL.

pcAttrs
A comma-separated list of attribute assignment expressions for the attributes of the advertisement. pcAttrs cannot be NULL. Use the empty string, "", to indicate no attributes.

fresh
An SLPBoolean that is SLP_TRUE if the registration is new or SLP_FALSE if it is a reregistration.

callback
A callback to report the operation completion status. callback cannot be NULL.

pvCookie
Memory passed to the callback code from the client. pvCookie can be NULL.
This function or its callback may return any SLP error code. See the ERRORS section in slp_api(3SLP).
The following example shows an initial registration for the “service:video://bldg15” camera service for three hours:
SLPError err;
SLPHandle hSLP;
SLPRegReport regreport;

err = SLPReg(hSLP, "service:video://bldg15", 10800,
    "", "(location=B15-corridor), (scan-rate=100)",
    SLP_TRUE, regreport, NULL);
SLP_CONF_FILE
When set, use this file for configuration.
See attributes(5) for descriptions of the following attributes:
slpd(1M), slp_api(3SLP)
https://docs.oracle.com/cd/E19253-01/816-5170/6mbb5et3o/index.html
A new feature implemented in the Snapd 2.27 release is Android boot support, which should bring the Ubuntu Snappy technologies, implementing support for transactional updates, to a wide range of devices powered by Google's Linux-based Android mobile operating system. Another interesting feature introduced in the Snapd 2.27 release is the snap-update-ns tool, which has been in development for a very long time. The tool promises to allow changes to be performed dynamically in the file system inside the Snap mount namespace, which wasn't possible until now.
https://www.linuxtoday.com/upload/canonical-wants-to-bring-its-ubuntu-snappy-technologies-to-android-devices-170910102026.html
Creates a trail that specifies the settings for delivery of log data to an Amazon S3 bucket.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
create-trail --name <value> --s3-bucket-name <value> [--tags-list <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--name (string)
Specifies the name of the trail. The name must meet CloudTrail's naming requirements; for example, names with adjacent dashes, such as my--namespace, are not valid. Among other requirements, the name must:

- Not be in IP address format (for example, 192.168.5.4)

--is-multi-region-trail | --no-is-multi-region-trail (boolean)

Specifies whether the trail is created in the current region or in all regions. The default is false, which creates a trail only in the region where you are signed in. As a best practice, consider creating trails that log events in all regions.
--enable-log-file-validation | --no-enable-log-file-validation (boolean)
Specifies whether log file integrity validation is enabled. The default is false.
Note
When you disable log file integrity validation, the chain of digest files is broken after one hour. CloudTrail does not create digest files for log files that were delivered during a period in which log file integrity validation was disabled. For example, if you enable log file integrity validation at noon on January 1, disable it at noon on January 2, and re-enable it at noon on January 10, digest files will not be created for the log files delivered from noon on January 2 to noon on January 10. The same applies whenever you stop CloudTrail logging or delete a trail.
--cloud-watch-logs-log-group-arn (string)
Specifies a log group name using an Amazon Resource Name (ARN), a unique identifier that represents the log group to which CloudTrail logs will be delivered.

--is-organization-trail | --no-is-organization-trail (boolean)

Specifies whether the trail is created for all accounts in an organization in Organizations, or only for the current Amazon Web Services account. The default is false, and cannot be true unless the call is made on behalf of an Amazon Web Services account that is the management account for an organization in Organizations.
--tags-list (list)
A list of tags.
(structure)
A custom key-value pair associated with a resource such as a CloudTrail trail.
Key -> (string)The key in a key-value pair. The key must be no longer than 128 Unicode characters. The key must be unique for the resource to which it applies.
Value -> (string)The value in a key-value pair of a tag. The value must be no longer than 256 Unicode characters.

The following create-trail command creates a multi-region trail named Trail1 and specifies an S3 bucket:

aws cloudtrail create-trail --name Trail1 --s3-bucket-name my-bucket --is-multi-region-trail

The output describes the trail that was created, including fields such as:
IsOrganizationTrail -> (boolean)
Specifies whether the trail is an organization trail.
https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/create-trail.html
Phillip J. Eby wrote: > At 04:23 PM 11/10/2005 -0600, Ian Bicking wrote: > >> The sandbox in this case is when setuptools runs setup.py with fake >> file routines, to see if the setup.py file writes things to weird >> locations. Now that I think about it, this isn't for zip-safe >> testing, but to test if setuptools can properly wrap this. >> >> I can understand why to do this when running a distutils setup.py >> file, but could this be surpressed for a setup.py file which imports >> from setuptools? I think it should be presumed that it is safe in >> that case. > > > This is a red herring. Just because a package uses setuptools, doesn't > mean it's safe. The author might have simply taken an older script and > changed it to import setuptools. That doesn't fix any issues like > custom data installation commands, or code in the body of setup.py does > any installation. Yes, it is a red herring. After putting some print statements in my setup.py file, I realized that the problem is namespace packages. When I "import paste" it is importing another namespace package (PasteWebKit, but I don't know why that one specifically). I'm guessing the module is loaded because it is an egg, and provides an entry point, and entry points are being scanned. Or maybe just because it is a namespace package, and I don't understand how they work. Well, I do understand that they cause me constant problems, and it seems like namespace packages that aren't installed multi-version are highly problematic. I'm not sure exactly how to do this, except maybe to put paste/util on the path itself. I suppose that would work well enough. -- Ian Bicking / ianb at colorstudy.com /
https://mail.python.org/pipermail/distutils-sig/2005-November/005339.html
Recursive browsing to get a list of all the tags is really slow because for each new folder/level the client has to send a new request to the server. I have had my recursive tag browse method take actual minutes to complete.
Is there any way to get a flat list of tag paths from the gateway?
Here is my Browse method for reference
def browseTags(path, filt):
    returnResult = []
    try:
        results = system.tag.browse(path, filt)
        for result in results.getResults():
            # If the item has children, call the function to repeat the process but starting at the child.
            if result['hasChildren'] == True:
                try:
                    returnResult.extend(browseTags(result['fullPath'], filt))
                except:
                    print("Failed on Path: " + str(result['fullPath']))
            else:
                print("Tag Found: " + str(result))
                returnResult.append(result)
    except:
        print("Failed on Path: " + path)
    return returnResult
https://forum.inductiveautomation.com/t/get-a-flat-list-of-all-tags-paths-on-the-gateway/49975
In his recent post 100 most read R posts for 2012 (stats from R-bloggers) – big data, visualization, data manipulation, and other languages Tal Galili – the guy behind R-Bloggers – presents his wishlist for 2013. Among other things he states
“The next step will be to have a “publish to your WordPress/blogger” button right from the RStudio console – allowing for the smoothest R blogging experience one could dream of.
I hope we’ll see this as early as possible in 2013.”
Given that I had some troubles myself recently with finding a convenient way of creating posts for this blog (a blogspot site), and that I didn't want to wait until RStudio might integrate such functionality, I decided to come up with my own workaround for the time being.
But let's start from the beginning:
What I want:
- I want to blog
- I want to blog using RStudio's R markdown framework (as it is very convenient)
- I want to have code highlighting in my posts
What I don't want:
- I don't want to manually fiddle around in the html code for each of the posts I create with RStudio
- I don't want write blog posts using the compose section on blogger
- Basically, I don't want to spend a lot of time on formatting (I rather spend the time on coding)
Why I want this:
- For starters, because I'm lazy
- Secondly, I want to integrate blogging into my teaching in the near future and for that I need a convenient and straight forward solution on how to create nice posts (after all I want to teach the students how to use R, not html)
The solution (as of now) is as follows (at least for me):
Yihui Xie has produced a SyntaxHighlighter Brush for the R Language which can be used for highlighting R code in blog posts.
In order to get the SyntaxHighlighter working on your blog, simply follow this tutorial at thebiobucket*.
However, the SyntaxHighlighter by Alex Gorbatchev, and subsequently also Yihui's brush for R, works a little different from the knitr code highlighting implementation. Basically, a chunk of code created with knitr is prepended by this little bit of html code:
whereas SyntaxHighlighter requires the following format for it to work:
In order to automate the process of adjusting the brush definition needed for code highlighting using the SyntaxHighlighter brush I created the follwing function:
rmd2blogger <- function(rmdfile) { stopifnot(require(knitr)) ## knit rmdfile to - only html knit2html(rmdfile, fragment.only = TRUE) ## read knitted html file htm.name <- gsub(".Rmd", ".html", rmdfile) tmp <- readLines(htm.name) ## substitue important parts tmp <- gsub("", tmp) ## write with .blg file ending writeLines(tmp, paste(htm.name, ".blg", sep ="")) }", "", "
", "", tmp) tmp <- gsub("
This function has two important components to it:
- it uses knitr's knit2html with fragment.only set to TRUE, which means you're only creating the body part of the html.
- it produces a .html.blg file in the path where the .Rmd is located where all syntax highlighting brushes are formatted to work with Yihui's R brush
To use it, simply provide the path to the .Rmd file you want to publish on your blogspot.
rmd2blogger("someRmdFile.Rmd")
Now, you can simply open the .html.blg file in a text editor and copy the complete contents into the html editor for a new post on your blogger site and the code R should look like the one above.
In case you find any bugs, please let me know.
sessionInfo()
## R version 2.15.2 (2012-10-26) ## grid stats graphics grDevices utils datasets ## [8] methods base ## ## other attached packages: ## [1] rgdal_0.7-25 raster_1.9-92 gridBase_0.4-6 ## [4] latticeExtra_0.6-19 lattice_0.20-10 RColorBrewer_1.0-5 ## [7] sp_0.9-99 knitr_0.9 ## ## loaded via a namespace (and not attached): ## [1] digest_0.6.0 evaluate_0.4.3 formatR_0.7 markdown_0.5.3 ## [5] plyr_1.7.1 stringr_0.6 tools_2.15...
https://www.r-bloggers.com/semi-automating-the-r-markdown-to-blogger-workflow/
We have to design and create a stack that performs its operations in constant time. The problem here is: how do we create a mergable stack? We perform the operations below to merge two stacks.
- push(element): Insert the element in the stack.
- pop(): Remove the top element in the stack.
- merge (stack 1, stack 2): Merge or join 2 stacks.
Example
Here is some example of creating a mergable stack please have a look.
Input
push(1, s1);
push(2, s1);
push(3, s1);
push(4, s2);
push(5, s2);
push(6, s2);
merge(s1, s2);
display(s1);
Output
Merged Stack : 3 2 1 6 5 4
Input
push(1, s1);
push(5, s2);
push(6, s2);
merge(s1, s2);
display(s1);
Output
Merged Stack : 1 6 5
Algorithm
Here we first create two stacks and then try making mergable stack.
- Create two stacks s1 and s2.
- Create a function push that accepts integer value and stack as parameter. Initialize a node in it. update data of the new node as given integer and link it to the head of the stack.
- If the head of the stack is null update tail of stack as a new node. Else update the head of the stack as a new node.
- Create a function pop that accepts stack as parameter. Check if the head of the stack is null print “stack underflow” else store the head of the stack in a new node, update head as next of head. Return the data of the new node and delete the node.
- Create a function merge which accepts two stacks as a parameter. Check if the head of stack 1 is null update it’s head as a head of stack 2 and tail as the tail of stack 2 and returns.
- Else update next of the tail of stack 1 as head of stack 2 and tail of stack 1 as the tail of stack 2.
C++ Program to create mergable stack
#include <iostream>
using namespace std;

class node {
public:
    int data;
    node* next;
};

class newStack {
public:
    node* head;
    node* tail;
    newStack() {
        head = NULL;
        tail = NULL;
    }
};

newStack* create() {
    newStack* ns = new newStack();
    return ns;
}

void push(int data, newStack* ns) {
    node* temp = new node();
    temp->data = data;
    temp->next = ns->head;
    if (ns->head == NULL)
        ns->tail = temp;
    ns->head = temp;
}

int pop(newStack* ns) {
    if (ns->head == NULL) {
        cout << "stack underflow" << endl;
        return 0;
    } else {
        node* temp = ns->head;
        ns->head = ns->head->next;
        int popped = temp->data;
        delete temp;
        return popped;
    }
}

void merge(newStack* ns1, newStack* ns2) {
    if (ns1->head == NULL) {
        ns1->head = ns2->head;
        ns1->tail = ns2->tail;
        return;
    }
    ns1->tail->next = ns2->head;
    ns1->tail = ns2->tail;
}

void display(newStack* ns) {
    node* temp = ns->head;
    cout << "Merged Stack : ";
    while (temp != NULL) {
        cout << temp->data << " ";
        temp = temp->next;
    }
}

int main() {
    newStack* s1 = create();
    newStack* s2 = create();
    push(1, s1);
    push(2, s1);
    push(3, s1);
    push(4, s2);
    push(5, s2);
    push(6, s2);
    merge(s1, s2);
    display(s1);
    return 0;
}
Merged Stack : 3 2 1 6 5 4
Complexity Analysis to create mergable stack
Time Complexity: O(1) as all the operations are using constant time i.e. O(1).
Auxiliary Space: O(1) because we are using no extra space.
https://www.tutorialcup.com/interview/stack/how-to-create-mergable-stack.htm
The class stores information about one class/rule of a vector layer renderer in a unified way that can be used by legend model for rendering of legend. More...
#include <qgslegendsymbolitem.h>
The class stores information about one class/rule of a vector layer renderer in a unified way that can be used by legend model for rendering of legend.
Definition at line 36 of file qgslegendsymbolitem.h.
Constructor for QgsLegendSymbolItem.
Construct item.
Does not take ownership of symbol (makes internal clone)
Definition at line 21 of file qgslegendsymbolitem.cpp.
Definition at line 39 of file qgslegendsymbolitem.cpp.
Definition at line 34 of file qgslegendsymbolitem.cpp.
Returns extra information for data-defined size legend rendering.
Normally it returns null.
Definition at line 92 of file qgslegendsymbolitem.cpp.
Returns whether the item is user-checkable - whether renderer supports enabling/disabling it.
Definition at line 62 of file qgslegendsymbolitem.h.
Determine whether the given scale is within the scale range. Returns true if the scale or scale range is invalid (value <= 0).
Definition at line 66 of file qgslegendsymbolitem.cpp.
Returns text label.
Definition at line 58 of file qgslegendsymbolitem.h.
Used for older code that identifies legend entries from symbol pointer within renderer.
Definition at line 65 of file qgslegendsymbolitem.h.
Indentation level that tells how deep the item is in a hierarchy of items. For flat lists level is 0.
Definition at line 83 of file qgslegendsymbolitem.h.
Definition at line 45 of file qgslegendsymbolitem.cpp.
Key of the parent legend node.
For legends with tree hierarchy
Definition at line 89 of file qgslegendsymbolitem.h.
Returns unique identifier of the rule for identification of the item within renderer.
Definition at line 60 of file qgslegendsymbolitem.h.
Max scale denominator of the scale range.
For range 1:1000 to 1:2000 this will return 2000. Value <= 0 means the range is unbounded on this side
Definition at line 80 of file qgslegendsymbolitem.h.
Min scale denominator of the scale range.
For range 1:1000 to 1:2000 this will return 1000. Value <= 0 means the range is unbounded on this side
Definition at line 74 of file qgslegendsymbolitem.h.
Sets extra information about data-defined size.
If set, this item should be converted to QgsDataDefinedSizeLegendNode rather than QgsSymbolLegendNode instance as usual. Passing null removes any data-defined size legend settings.
Takes ownership of the settings object.
Definition at line 86 of file qgslegendsymbolitem.cpp.
Sets the symbol of the item.
Does not take ownership of symbol – an internal clone is made of the symbol.
Definition at line 79 of file qgslegendsymbolitem.cpp.
Returns associated symbol. May be null.
Definition at line 56 of file qgslegendsymbolitem.h.
https://qgis.org/api/classQgsLegendSymbolItem.html
HEPnOS client API: DataStore and DataSet
Let's dive into the HEPnOS client-side API. This API is in C++ and provides a number of classes, among which DataStore and DataSet, which are discussed in this section.
The HEPnOS API is located in hepnos.hpp, which is a small file including other files containing class definitions. In the following, we assume that #include <hepnos.hpp> is present at the top of your file, and will not explicitly write it.

All the classes and functions of the HEPnOS API are in the hepnos namespace. To simplify codes, we assume that using namespace hepnos; is present, and we will not be using the hepnos:: prefix.
DataStore
The DataStore class is the main class to instantiate to point to a running HEPnOS service.
Instantiation
DataStore can be instantiated in two ways:
// using the path to the configuration file generated by the service
DataStore datastore("/path/to/config.yml");

// using the HEPNOS_CONFIG_FILE environment variable,
// which should point to the file
DataStore datastore;
Important note: the created DataStore instance must outlive any instances of other HEPnOS classes (DataSet, Run, SubRun, Event, etc.), since instances of these classes hold a pointer to the DataStore object to which they belong.
Creating DataSets
New DataSets can be created at the root of the datastore using the following method:
DataSet ds = datastore.createDataSet("mydataset");
This function creates a new dataset at the root of the datastore and returns a handle to it. If a dataset with the same name already exists, this function does nothing but returns a handle to the existing dataset.
Important note: the name of the created DataSet should not contain the /, %, or # characters, since those are reserved by HEPnOS.
Accessing DataSets
The DataStore class presents an interface similar to that of an std::map<std::string, DataSet>. It includes iterator and const_iterator classes that can be dereferenced into DataSet& and const DataSet& respectively, as well as the following functions.

- begin(), end(): return an iterator (or a const_iterator if the DataStore is const) pointing to the beginning and the end of the list of datasets contained by the root of the datastore, sorted alphabetically.
- cbegin(), cend(): return const_iterators to the beginning and the end of the DataStore.
- find(name): returns an iterator (or a const_iterator if the DataStore is const) pointing to the DataSet matching the provided name (or end() if there is no such DataSet).
- lower_bound(name): returns an iterator (or a const_iterator if the DataStore is const) pointing to the first DataSet in the DataStore whose name is greater than or equal to the provided name (when compared alphabetically), or end() if no such DataSet exists.
- upper_bound(name): returns an iterator (or a const_iterator if the DataStore is const) pointing to the first DataSet in the DataStore whose name is strictly greater than the provided name (when compared alphabetically), or end() if no such DataSet exists.
- operator[](name): returns a DataSet object whose name matches the provided name. Contrary to the bracket operator from std::map, this operator does not create the DataSet if it does not exist in the underlying service. Instead, it returns a DataSet instance ds such that ds.valid() is false.
Note: the DataStore class also provides a shutdown() method that is used to shut down all nodes running the HEPnOS service. This function can be useful for a user-deployed HEPnOS service that should be terminated with the application that used it. In a facility-scale deployment of HEPnOS, this method would not be enabled for users.
https://xgitlab.cels.anl.gov/sds/hep/HEPnOS/-/wikis/datastore-datasets?version_id=33ce3ca4d64ce32dc1cdb0cf02184b670a83ee19
A video input stream using libdc1394. More...
#include "vidl_istream.h"
#include "vidl_iidc1394_params.h"
#include <vcl_string.h>
Go to the source code of this file.
A video input stream using libdc1394.
WARNING: this stream requires version 2 of the libdc1394 API. If you are also using the ffmpeg streams, make sure your libavcodec and libavformat libraries are compiled without dc1394 support. ffmpeg supplies limited 1394 support through the libdc1394 version 1 API. Linking to both versions of libdc1394 will result in linking errors.

Currently this code works with libdc1394 version 2.0.0-rc9. The authors note that the libdc1394 API is subject to change in prerelease versions. As a result, the vidl_dc1394_istream is subject to change with it.
Definition in file vidl_dc1394_istream.h.
http://public.kitware.com/vxl/doc/release/core/vidl/html/vidl__dc1394__istream_8h.html
A better way to write your layouts
Author: Mislav Javor.
Looking for contributors
This project is currently a one-man operation. I’ve dedicated a large chunk of my free time to this, and would be immensely grateful for any help.
Thank you for contributing ❤️
Why Kandinsky
- Storyboards are hard to maintain, obfuscate functionality, are hard to reuse, and almost impossible to merge.
- Writing in code is extremely verbose, time consuming, and lacks a high-level overview of the layout.
Kandinsky combines the expressiveness of storyboards with the power of the Swift language.
- Swift Powered DSL
- Easy to write and read
- Modular (build reusable components)
- Reactive (RxSwift/ReactiveCocoa) friendly
- Interactive prototyping using playgrounds
- Easy to merge
- Turing complete (for loops, if statements, protocols, inheritance, etc…) layout code
Example
If we write this:
import UIKit
import Kandinsky

func makeLayout(buttonTitle: String) -> Kandinsky<UIView> {
    return UIView.set {
        $0.id = "root"
        $0.view.backgroundColor = .lightGray
    }.add { r in
        UIButton.set {
            $0.id = "myButton"
            $0.view.setTitle(buttonTitle, for: .normal)
            $0.view.setTitleColor(.black, for: .normal)
            $0.centerInParent()
        }/r
    }
}

class ViewController: UIViewController, Controller {
    let layout = makeLayout(buttonTitle: "Push me!")

    override func loadView() {
        super.loadView()
        setContentView(with: layout)
    }

    func didRender(views: ViewHolder, root: AnyCanvas) {
        let button = views["myButton"] as? UIButton
        // handleButton is a method which presents an alert
        button?.addTarget(self, action: #selector(handleButton), for: .touchUpInside)
    }
}
We get this:
Easily add new elements to your layout. Use playgrounds for live preview
Apply behaviour to your layout while you’re writing it
Quickly iterate over multiple versions of your layout. Use playground to visualise
Requirements
- iOS 9.0+
- Xcode 8.0+
Getting Started
CocoaPods
CocoaPods is a dependency manager for Cocoa projects.
To install Kandinsky, simply add the following line to your Podfile:
pod 'Kandinsky', '~> 1.0.1'
Carthage
Carthage support coming soon
Basic layout
First make sure to import Kandinsky.

The syntax for the DSL is really simple and intuitive. After you’ve imported Kandinsky, any class inheriting from UIView (e.g. UIButton, UILabel) will have a set method exposed like so:
UIView.set { (param: Kandinsky<UIView>) -> Void in
    /* The ID of the view, which can later be used for styling and querying */
    param.id = "<id>"

    /* The `view` property is the instance of the view which is being created
       (e.g. if you're creating a UIButton the `view` property would be a UIButton) */
    param.view.backgroundColor = .red
}
This takes care of view creation and customization. The next step is to add subviews. We can do this by calling the add method (note: unlike set, add is not static and must be called after the set block):
UIView.set {
    $0.id = "root"
    $0.view.backgroundColor = .black
}.add { r in  // calling the `add` method, `r` is placeholder for `root`
    UIButton.set {
        $0.id = "demoButton"
        $0.view.setTitle("Push me!", for: .normal)
        /* The framework exposes many methods for adding constraints like
           `centerInParent`, `alignParentLeading`, `under(id:)`, `toLeftOf(:)`.
           You can also easily create your own constraint helper methods */
        $0.centerInParent()
    }/r
    // The `/` operator adds the left side item to the right side item.
    // In this case it adds the Kandinsky<UIButton> to the `r` variable which
    // we declared above and which is an instance of Kandinsky<UIView>.
    // The `/` operator can add any two elements of type `Canvas` to one another.
}
Here is a simple example:
let layout = UIView.set {
    $0.view.backgroundColor = .white
}.add { r in
    UILabel.set {
        $0.id = "titleLabel"
        $0.view.text = "Hello world"
        $0.fontSize = 30 // fontSize is a helper function
        $0.centerInParent()
    }/r

    UIButton.set {
        $0.view.setTitle("Push me!", for: .normal)
        $0.view.setTitleColor(.blue, for: .normal)
        $0.under("titleLabel")
        $0.centerHorizontallyInParent()
    }/r
}
This produces a view that looks like this:
Implementing the layout
In order to use your layout, simply make your UIViewController implement the Controller protocol. This means adding the didRender(ViewHolder:root:) method to your UIViewController.

Then, in the loadView method of your UIViewController, call the setContentView function and pass the instance of your layout.

The didRender method will be called after all of the views have been added and constraints set. You can use it to extract views from the ViewHolder:

let myView = views["<view_id>"] as? UIButton // cast to your specific view
class DemoVC: UIViewController, Controller {
    var views: ViewHolder = [:]

    override func loadView() {
        super.loadView()
        setContentView(with: layout)
    }

    func didRender(views: ViewHolder, root: AnyCanvas) {
        self.views = views
        let button = views["pressMeButton"] as? UIButton
        button?.addTarget(self, action: #selector(didTouchButton), for: .touchUpInside)
    }

    func didTouchButton() {
        let title = views["titleLabel"] as? UILabel
        title?.text = "Pressed the button"
        PlaygroundHelper.alert(over: self, message: "Pressed the button")
    }
}
Note – setContentView only sets the view property of the UIViewController and calls the didRender method. You can call it at any time, but it’s recommended to call it in the loadView method.
Getting the view
If you don’t want to inherit the Controller and just want the view from your canvas, you can do it like this:

let view = CanvasRenderer.render(demoLayout)
Extending the framework
This framework is built by following the latest and greatest in the protocol-oriented world of Swift. If you wish to add additional functionality, you only need to extend the Canvas protocol:
extension Canvas {
    func alignParentLeadingAndTrailing(offset: Int) {
        // If you're working with constraints - you must append your code
        // to the `deferAfterRender` array. Otherwise your app will fail
        deferToAfterRender.append({ views in
            self.view.snp.makeConstraints { make in
                make.leading.equalToSuperview().offset(offset)
                make.trailing.equalToSuperview().offset(-offset)
            }
        })
    }
}
And after you’ve done that you can call it:
...
UIButton.set {
    ...
    $0.alignParentLeadingAndTrailing(offset: 20)
    ...
}
...
You can also be more specific:
extension Canvas where UIKitRepresentation == UITableView {
    func setDelegateAndDataSource<T>(item: T) where T: UITableViewDelegate, T: UITableViewDataSource {
        self.view.delegate = item
        self.view.dataSource = item
    }
}

extension Canvas where UIKitRepresentation: UILabel {
    func setTextToLoremIpsum() {
        self.view.text = "Lorem ipsum dolor sit..."
        // ...
    }
}
And then those properties will only appear on those types of Canvases:
UITableView.set {
    $0.setDelegateAndDataSource(item: delegate)
}

UILabel.set {
    $0.setTextToLoremIpsum()
}
Getting involved
- If you want to contribute please feel free to submit pull requests.
- If you have a feature request please open an issue.
- If you found a bug or need help please check older issues, the FAQ, and threads on StackOverflow (Tag 'Kandinsky') before submitting an issue.

Before contributing, check the CONTRIBUTING file for more info.
Examples
Follow these 3 steps to run Example project:
- Clone Kandinsky repository
- Open Kandinsky.xcworkspace
- Run the Example project

OR

- Open the Example/Playground and play around with live-preview
Author
Change Log
This can be found in the CHANGELOG.md file.
Latest podspec
{ "name": "Kandinsky", "version": "1.0.1", "summary": "Swift powered, modular, UIKit compatible storyboard replacement DSL", "homepage": "", "license": { "type": "MIT", "file": "LICENSE" }, "authors": { "Mislav Javor": "[email protected]" }, "source": { "git": "", "tag": "1.0.1" }, "social_media_url": "", "platforms": { "ios": "8.0" }, "requires_arc": true, "ios": { "source_files": "Sources/*.{swift}", "frameworks": [ "UIKit", "Foundation" ] }, "dependencies": { "SnapKit": [ "~> 3.0" ] }, "pushed_with_swift_version": "3.0" }
https://tryexcept.com/articles/cocoapod/kandinsky
BufferWriter

Synopsis

#include <ts/BufferWriterForward.h> // Custom formatter support only.
#include <ts/BufferWriter.h>       // General usage.

Description
BufferWriter is intended to increase code reliability and reduce complexity in the common circumstance of generating formatted output strings in fixed buffers. Current usage is a mixture of snprintf and memcpy, which provides a large scope for errors and verbose code to check for buffer overruns. The goal is to provide a wrapper over buffer size tracking to make such code simpler and less vulnerable to implementation error.
BufferWriter itself is an abstract class to describe the base interface to wrappers for various types of output buffers. As a common example, FixedBufferWriter is a subclass designed to wrap a fixed size buffer. FixedBufferWriter is constructed by passing it a buffer and a size, which it then tracks as data is written. Writing past the end of the buffer is clipped to prevent overruns.
Consider current code that looks like this.
char buff[1024];
char *ptr = buff;
size_t len = sizeof(buff);
//...
if (len > 0) {
    auto n = std::min(len, thing1_len);
    memcpy(ptr, thing1, n);
    len -= n;
    ptr += n;
}
if (len > 0) {
    auto n = std::min(len, thing2_len);
    memcpy(ptr, thing2, n);
    len -= n;
    ptr += n;
}
if (len > 0) {
    auto n = std::min(len, thing3_len);
    memcpy(ptr, thing3, n);
    len -= n;
    ptr += n;
}
This is changed to
char buff[1024];
ts::FixedBufferWriter bw(buff, sizeof(buff));
//...
bw.write(thing1, thing1_len);
bw.write(thing2, thing2_len);
bw.write(thing3, thing3_len);
The remaining length is updated every time and checked every time. A series of checks, calls to
memcpy, and size updates become a simple series of calls to
BufferWriter::write().
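The clipping behavior described above can be sketched in standard C++. This is an illustrative miniature, not the actual ts::FixedBufferWriter implementation; the class name MiniWriter and its members are invented for the example:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <string>

// Simplified sketch of the clipping idea: writes past the end of the buffer
// are clipped rather than overrunning it, while the attempted size is still
// tracked so the caller can detect the error afterward.
class MiniWriter {
  char *_buf;               // client-provided buffer
  std::size_t _cap;         // total capacity
  std::size_t _attempt = 0; // bytes the caller tried to write
public:
  MiniWriter(char *buf, std::size_t cap) : _buf(buf), _cap(cap) {}

  MiniWriter &write(const void *data, std::size_t n) {
    std::size_t used = std::min(_attempt, _cap);
    std::size_t room = _cap - used;
    std::memcpy(_buf + used, data, std::min(n, room)); // clipped copy
    _attempt += n; // attempted size grows even when clipped
    return *this;
  }

  std::size_t size() const { return std::min(_attempt, _cap); } // bytes stored
  std::size_t extent() const { return _attempt; }               // bytes attempted
  bool error() const { return _attempt > _cap; }                // was output clipped?
  std::string view() const { return std::string(_buf, size()); }
};
```

The point of the sketch is that the length checks live in one place (write) instead of being repeated at every call site.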
For other types of interaction,
FixedBufferWriter provides access to the unused buffer via
BufferWriter::auxBuffer() and
BufferWriter::remaining(). This makes it possible to easily
use
snprintf, given that
snprintf returns the number of bytes written (or, if the output was truncated, the number that would have been written).
BufferWriter::fill() is used to indicate how much of the unused buffer was used. Therefore
something like (riffing off the previous example):
if (len > 0) { len -= snprintf(ptr, len, "format string", args...); }
becomes:
bw.fill(snprintf(bw.auxBuffer(), bw.remaining(), "format string", args...));
By hiding the length tracking and checking, the result is a simple linear sequence of output chunks, making the logic much easier to follow.
Usage¶
The header files are divided into two variants. include/tscore/BufferWriter.h provides the basic
capabilities of buffer output control. include/tscore/BufferWriterFormat.h provides the basic
formatted output mechanisms, primarily the implementation and ancillary
classes for
BWFSpec which is used to build formatters.
BufferWriter is an abstract base class, in the style of
std::ostream. There are
several subclasses for various use cases. When passing around this is the common type.
FixedBufferWriter writes to an externally provided buffer of a fixed length. The buffer must
be provided to the constructor. This will generally be used in a function where the target buffer is
external to the function or already exists.
LocalBufferWriter is a templated class whose template argument is the size of an internal
buffer. This is useful when the buffer is local to a function and the results will be transferred
from the buffer to other storage after the output is assembled. Rather than having code like:
char buff[1024]; ts::FixedBufferWriter bw(buff, sizeof(buff));
it can be written more compactly as:
ts::LocalBufferWriter<1024> bw;
In many cases, when using
LocalBufferWriter this is the only place the size of the buffer
needs to be specified, so it can simply be a constant, with no separately defined size to keep
in sync. The choice between
LocalBufferWriter and
FixedBufferWriter
comes down to the owner of the buffer - the former has its own buffer while the latter operates on
a buffer owned by some other object. Therefore if the buffer is declared locally, use
LocalBufferWriter and if the buffer is received from an external source (such as via a
function parameter) use
FixedBufferWriter.
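The internal-buffer variant can be sketched the same way. Again this is a hypothetical miniature (MiniLocalWriter is not a real ts:: class), just to show why the size appears only once:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <string>

// Sketch of the LocalBufferWriter idea: the buffer lives inside the writer,
// so its size is stated exactly once, as the template argument.
template <std::size_t N> class MiniLocalWriter {
  char _buf[N];
  std::size_t _used = 0;
public:
  MiniLocalWriter &write(const char *s) {
    std::size_t n = std::strlen(s);
    std::size_t room = N - _used;
    std::size_t take = n < room ? n : room; // clip at the internal capacity
    std::memcpy(_buf + _used, s, take);
    _used += take;
    return *this;
  }
  std::string view() const { return std::string(_buf, _used); }
};
```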
Writing¶
The basic mechanism for writing to a
BufferWriter is
BufferWriter::write().
This is an overloaded method for a character (
char), a buffer (
void *, size_t)
and a string view (
std::string_view). Because there is a constructor for
std::string_view
that takes a
const char* as a C string, passing a literal string works as expected.
There are also stream operators in the style of C++ stream I/O. The basic template is
template < typename T > ts::BufferWriter& operator << (ts::BufferWriter& w, T const& t);
The stream operators are provided as a convenience; the primary mechanism for formatted output is
via overloading the
bwformat() function. Except for a limited set of cases the stream operators
are implemented by calling
bwformat() with the Buffer Writer, the argument, and a default
format specification.
Reading¶
Data in the buffer can be extracted using
BufferWriter::data(). This and
BufferWriter::size() return a pointer to the start of the buffer and the amount of data
written to the buffer. This is effectively the same as
BufferWriter::view() which returns a
std::string_view which covers the output data. Calling
BufferWriter::error() will indicate
if more data than space available was written (i.e. the buffer would have been overrun).
BufferWriter::extent() returns the amount of data written to the
BufferWriter. This
can be used in a two pass style with a null / size 0 buffer to determine the buffer size required
for the full output.
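The same two-pass idea is available in standard C++ with snprintf, which measures when handed a null buffer. This sketch shows the shape of the pattern that extent() enables (render is a hypothetical helper):

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <vector>

// Two-pass output sizing: pass one measures (the analog of running a
// BufferWriter over a null / size 0 buffer and reading extent()), pass two
// renders into a buffer of exactly the required size.
std::string render(const char *name, int count) {
  int need = std::snprintf(nullptr, 0, "%s: %d", name, count); // measure only
  std::vector<char> buf(need + 1);                             // +1 for the NUL
  std::snprintf(buf.data(), buf.size(), "%s: %d", name, count);
  return std::string(buf.data(), need);
}
```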
Advanced¶
The
BufferWriter::clip() and
BufferWriter::extend() methods can be used to reserve space
in the buffer. A common use case for this is to guarantee matching delimiters in output if buffer
space is exhausted.
BufferWriter::clip() can be used to temporarily reduce the buffer size by
an amount large enough to hold the terminal delimiter. After writing the contained output,
BufferWriter::extend() can be used to restore the capacity and then output the terminal
delimiter.
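The reservation pattern can be sketched in standard C++; bracketed() is a hypothetical helper showing why the closing delimiter always survives clipping:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <string>

// Sketch of the clip()/extend() pattern: hold back one byte of capacity so
// the closing delimiter always fits, even when the body must be clipped.
std::string bracketed(const char *body, std::size_t cap) {
  std::string out;
  out += '[';
  std::size_t reserved = 1; // "clip": reserve room for ']'
  std::size_t room = cap > out.size() + reserved ? cap - out.size() - reserved : 0;
  std::size_t n = std::strlen(body);
  out.append(body, n < room ? n : room); // body output, clipped to fit
  out += ']';                            // "extend": spend the reserved byte
  return out;
}
```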
Warning
Never call
BufferWriter::extend() without previously calling
BufferWriter::clip() and always pass the same argument value.
BufferWriter::remaining() returns the amount of buffer space not yet consumed.
BufferWriter::auxBuffer() returns a pointer to the first byte of the buffer not yet used. This
is useful to do speculative output, or do bounded output in a manner similar to using
BufferWriter::clip() and
BufferWriter::extend(). A new
BufferWriter instance
can be constructed with
ts::FixedBufferWriter subw(w.auxBuffer(), w.remaining());
or as a convenience
ts::FixedBufferWriter subw{w.auxWriter()};
Output can be written to subw. If successful, then
w.fill(subw.size()) will add that
output to the main buffer. Depending on the purpose,
w.fill(subw.extent()) can be used -
this will track the attempted output if sizing is important. Note that space for any terminal
markers can be reserved by bumping down the size from
BufferWriter::remaining(). Be careful of
underrun as the argument is an unsigned type.
If there is an error then subw can be ignored and some suitable error output written to w instead. A common use case is to verify there is sufficient space in the buffer and create a “not enough space” message if not. E.g.
ts::FixedBufferWriter subw{w.auxWriter()};
this->write_some_output(subw);
if (!subw.error())
  w.fill(subw.size());
else
  w.write("Insufficient space"sv);
Examples¶
For example, error prone code that looks like
char new_via_string[1024]; // 512-bytes for hostname+via string, 512-bytes for the debug info
char *via_string = new_via_string;
char *via_limit  = via_string + sizeof(new_via_string);
// ...
*via_string++ = ' ';
*via_string++ = '[';
// incoming_via can be max MAX_VIA_INDICES+1 long (i.e. around 25 or so)
if (s->txn_conf->insert_request_via_string > 2) { // Highest verbosity
  via_string += nstrcpy(via_string, incoming_via);
} else {
  memcpy(via_string, incoming_via + VIA_CLIENT, VIA_SERVER - VIA_CLIENT);
  via_string += VIA_SERVER - VIA_CLIENT;
}
*via_string++ = ']';
becomes
ts::LocalBufferWriter<1024> w; // 1K internal buffer.
// ...
w.write(" ["sv);
if (s->txn_conf->insert_request_via_string > 2) { // Highest verbosity
  w.write(incoming_via);
} else {
  w.write(std::string_view{incoming_via + VIA_CLIENT, VIA_SERVER - VIA_CLIENT});
}
w.write(']');
There will be no overrun on the memory buffer in w, in strong contrast to the original code. This can be done better, as
if (w.remaining() >= 3) {
  w.clip(1).write(" ["sv);
  if (s->txn_conf->insert_request_via_string > 2) { // Highest verbosity
    w.write(incoming_via);
  } else {
    w.write(std::string_view{incoming_via + VIA_CLIENT, VIA_SERVER - VIA_CLIENT});
  }
  w.extend(1).write(']');
}
This has the result that the terminal bracket will always be present which is very much appreciated by code that parses the resulting log file.
Formatted Output¶
The base
BufferWriter was made to provide memory safety for formatted output. Support for
formatted output was added to provide type safety. The implementation deduces the types of the
arguments to be formatted and handles them in a type specific and safe way.
The formatting style is of the “prefix” or “printf” style: the format is specified first, followed by all the arguments. This contrasts with the “infix” or “streaming” style where formatting, literals, and arguments are intermixed in the order of output. There are arguments for both styles, but conversations within the Traffic Server community indicated a clear preference for the prefix style. Therefore formatted output consists of a format string, containing formats, which are replaced during output with the values of arguments to the print function.
The primary use case for formatting is formatted output to fixed buffers. This is by far the
dominant style of output in Traffic Server and during the design phase I was told any performance loss must be
minimal. While work has and will be done to extend
BufferWriter to operate on non-fixed
buffers, such use is secondary to operating directly on memory.
Important
The overriding design goal is to provide the type specific formatting and flexibility of C++
stream operators with the performance of
snprintf and
memcpy.
This will preserve the general style of output in Traffic Server while still reaping the benefits of type safe formatting with little to no performance cost.
Type safe formatting has two major benefits -
- No mismatch between the format specifier and the argument. Although some modern compilers do better at catching this at compile time, there is still risk (especially with non-constant format strings) and divergence between operating systems such that there is no universally correct choice. In addition the number of arguments can be verified to be correct, which is often useful.
- Formatting can be customized per type or even per partial type (e.g.
T* for generic
T). This enables embedding common formatting work in the format system once, rather than duplicating it in many places (e.g. converting enum values to names). This makes it easier for developers to make useful error messages. See the Enum Example below for more detail.
As a result of these benefits there has been other work on similar projects to replace
printf with a better mechanism. Unfortunately most of these are rather project specific and don’t
suit the use case in Traffic Server. The two best options, Boost.Format and fmt,
while good, are also not quite close enough to outweigh the benefits of a version specifically
tuned for Traffic Server.
Boost.Format is not acceptable because of the Boost footprint.
fmt has the
problem of depending on C++ stream operators and therefore not having the required level of
performance or memory characteristics. Its main benefit, of reusing stream operators, doesn’t apply
to Traffic Server because of the nigh non-existence of such operators. The possibility of using C++ stream
operators was investigated but changing those to use pre-existing buffers not allocated internally
was very difficult, judged worse than building a relatively simple implementation from scratch. The
actual core implementation of formatted output for
BufferWriter is not very large - most of
the overall work will be writing formatters, work which would need to be done in any case but in
contrast to current practice, only done once.
BufferWriter supports formatting output in a style similar to Python formatting via
BufferWriter::print(). Looking at the other versions of work in this area, almost all of them
have gone with this style. Boost.Format also takes basically this same approach, just using
different paired delimiters. Traffic Server contains increasing amounts of native Python code which means many
Traffic Server developers will already be familiar (or should become familiar) with this style of formatting.
While not exactly the same as the Python version, BWF (
BufferWriter Formatting) tries to
be as similar as language and internal needs allow.
As noted previously and in the Python and even
printf way, a format string consists of
literal text in which formats are embedded. Each format marks a place where formatted data of
an argument will be placed, along with argument specific formatting. The format is divided into
three parts, separated by colons.
While this seems a bit complex, all of it is optional. If default output is acceptable, then BWF
will work with just the format
{}. In a sense,
{} serves the same function for output as
auto does for programming - the compiler knows the type, it should be able to do something
reasonable without the programmer needing to be explicit.
format    ::= "{" [name] [":" [specifier] [":" extension]] "}"
name      ::= index | ICHAR+
index     ::= non-negative integer
extension ::= ICHAR*
ICHAR     ::= a printable ASCII character except for '{', '}', ':'
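As an illustration of the grammar, a minimal splitter of one format element into its three colon-separated parts might look like this (a sketch, not the library's actual parser; split_format is hypothetical):

```cpp
#include <cassert>
#include <string>
#include <string_view>
#include <tuple>

// Split a single BWF format element "{name:specifier:extension}" into its
// three parts, all of which are optional per the grammar above.
std::tuple<std::string, std::string, std::string>
split_format(std::string_view fmt) {
  fmt.remove_prefix(1); // drop '{'
  fmt.remove_suffix(1); // drop '}'
  std::string part[3];
  int idx = 0;
  for (char c : fmt) {
    if (c == ':' && idx < 2) {
      ++idx; // advance from name to specifier, then to extension
    } else {
      part[idx] += c;
    }
  }
  return {part[0], part[1], part[2]};
}
```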
name
The name of the argument to use. This can be a non-negative integer, in which case it is the zero based index of the argument to the method call. E.g.
{0} means the first argument and
{2} is the third argument after the format.
bw.print("{0} {1}", 'a', 'b') => a b
bw.print("{1} {0}", 'a', 'b') => b a
The name can be omitted, in which case it is treated as an index in parallel to the position in the format string. Only the position in the format string matters, not what names other format elements may have used.
bw.print("{0} {2} {}", 'a', 'b', 'c') => a c c
bw.print("{0} {2} {2}", 'a', 'b', 'c') => a c c
Note that an argument can be printed more than once if the name is used more than once.
bw.print("{0} {} {0}", 'a', 'b') => a b a
bw.print("{0} {1} {0}", 'a', 'b') => a b a
Alphanumeric names refer to values in a global table. These will be described in more detail someday. Such names, however, do not count in terms of default argument indexing.
specifier
Basic formatting control.
specifier ::= [[fill]align][sign]["#"]["0"][[min][.precision][,max][type]]
fill      ::= fill-char | URI-char
URI-char  ::= "%" hex-digit hex-digit
fill-char ::= printable character except "{", "}", ":", "%"
align     ::= "<" | ">" | "=" | "^"
sign      ::= "+" | "-" | " "
min       ::= non-negative integer
precision ::= positive integer
max       ::= non-negative integer
type      ::= "g" | "s" | "S" | "x" | "X" | "d" | "o" | "b" | "B" | "p" | "P"
hex-digit ::= "0" .. "9" | "a" .. "f" | "A" .. "F"
The output is placed in a field that is at least min wide and no more than max wide. If the output is less than min then:
The fill character is used for the extra space required. This can be an explicit character or a URI encoded one (to allow otherwise reserved characters).
The output is shifted according to the align.
- <
Align to the left, fill to the right.
- >
Align to the right, fill to the left.
- ^
Align in the middle, fill to left and right.
- =
Numerically align, putting the fill between the sign character and the value.
The output is clipped at max width characters and by the end of the buffer.
precision is used by floating point values to specify the number of places of precision.
type is used to indicate type specific formatting. For integers it indicates the output radix, and if # is present the radix prefix is generated (one of 0b, 0, 0x). Format types of the same letter are equivalent, varying only in the character case used for output. Most commonly ‘x’ prints values in lower case hexadecimal (0x1337beef) while ‘X’ prints in upper case hexadecimal (0X1337BEEF). Note there is no upper case decimal or octal type because case is irrelevant for those.
For several specializations the hexadecimal format is taken to indicate printing the value as if it were a hexadecimal value, in effect providing a hex dump of the value. This is the case for std::string_view, and therefore a hex dump of an object can be done by creating a std::string_view covering the data and then printing it with {:x}.
The string type (‘s’ or ‘S’) is generally used to cause alphanumeric output for a value that would normally use numeric output. For instance, a bool is normally 0 or 1; using the type ‘s’ yields true or false. The upper case form, ‘S’, applies only in those cases where the formatter generates the text; it does not apply to normally text based values unless specifically noted.
extension
- Text (excluding braces) that is passed to the type specific formatter function. This can be used to provide extensions for specific argument types (e.g., IP addresses). The base logic ignores it but passes it on to the formatting function, which can then behave differently based on the extension.
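The fill and align rules above can be illustrated with a small standard-C++ helper (align_field is hypothetical; the real alignment logic lives inside the BWF engine):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Place text in a field of width 'min' using fill character 'f' and the
// alignment characters from the specifier grammar: '<', '>', or '^'.
std::string align_field(const std::string &text, std::size_t min, char align, char f = ' ') {
  if (text.size() >= min) return text; // already wide enough, no fill needed
  std::size_t pad = min - text.size();
  switch (align) {
  case '<': return text + std::string(pad, f);                     // fill right
  case '>': return std::string(pad, f) + text;                     // fill left
  case '^': return std::string(pad / 2, f) + text + std::string(pad - pad / 2, f);
  }
  return text;
}
```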
Usage Examples¶
Some examples, comparing
snprintf and
BufferWriter::print().
if (len > 0) {
  auto n = snprintf(buff, len, "count %d", count);
  len -= n;
  buff += n;
}
bw.print("count {}", count);
// --
if (len > 0) {
  auto n = snprintf(buff, len, "Size %" PRId64 " bytes", sizeof(thing));
  len -= n;
  buff += n;
}
bw.print("Size {} bytes", sizeof(thing));
// --
if (len > 0) {
  auto n = snprintf(buff, len, "Number of items %ld", thing->count());
  len -= n;
  buff += n;
}
bw.print("Number of items {}", thing->count());
Enumerations become easier. Note in this case argument indices are used in order to print both a
name and a value for the enumeration. A key benefit here is the lack of need for a developer to know
the specific free function or method needed to do the name lookup. In this case,
HttpDebugNames::get_server_state_name. Rather than every developer having to memorize the
association between the type and the name lookup function, or grub through the code hoping for an
example, the compiler is told once and henceforth does the lookup. The internal implementation of
this is shown in the Enum Example section below.
if (len > 0) {
  auto n = snprintf(buff, len, "Unexpected event %d in state %s[%d] for %.*s",
                    event,
                    HttpDebugNames::get_server_state_name(t_state.current.state),
                    t_state.current.state,
                    static_cast<int>(host_len), host);
  buff += n;
  len -= n;
}

bw.print("Unexpected event {0} in state {1}[{1:d}] for {2}",
         event, t_state.current.state, std::string_view{host, host_len});
Using
std::string, which illustrates the advantage of a formatter overload knowing how to
get the size from the object and not having to deal with restrictions on the numeric type (e.g.,
that
%.*s requires an
int, not a
size_t).
if (len > 0) {
  len -= snprintf(buff, len, "%.*s", static_cast<int>(s.size()), s.data());
}

bw.print("{}", s);
IP addresses are much easier. There are two big advantages here. One is not having to know the
conversion function name. The other is the lack of having to declare local variables and having to
remember what the appropriate size is. Beyond that, this code is also faster because the output
is rendered directly in the output buffer, not rendered to a temporary and then copied over. This
lack of local variables can be particularly nice in the context of a
switch statement where
local variables for a
case mean having to add extra braces, or declare the temporaries at an
outer scope.
char ip_buff1[INET6_ADDRPORTSTRLEN];
char ip_buff2[INET6_ADDRPORTSTRLEN];
ats_ip_nptop(ip_buff1, sizeof(ip_buff1), addr1);
ats_ip_nptop(ip_buff2, sizeof(ip_buff2), addr2);
if (len > 0) {
  snprintf(buff, len, "Connecting to %s from %s", ip_buff1, ip_buff2);
}

bw.print("Connecting to {} from {}", addr1, addr2);
User Defined Formatting¶
To get the full benefit of type safe formatting it is necessary to provide type specific formatting
functions which are called when a value of that type is formatted. This is how type specific
knowledge such as the names of enumeration values are encoded in a single location. Additional type
specific formatting can be provided via the
extension field. Without this, special formatting
requires extra functions and additional work at the call site, rather than a single consolidated
formatting function.
To provide a formatter for a type
V the function
bwformat is overloaded. The signature
would look like this:
BufferWriter& ts::bwformat(BufferWriter& w, BWFSpec const& spec, V const& v)
w is the output and spec the parsed specifier, including the extension (if any). The calling framework will handle basic alignment per spec; therefore the overload does not need to, unless the alignment requirements are more detailed (e.g. integer alignment operations) or performance is critical. In the latter case the formatter should make sure to use at least the minimum width in order to disable any additional alignment operation.
It is important to note that a formatter can call another formatter. For example, the formatter for pointers looks like:
// Pointers that are not specialized.
inline BufferWriter &
bwformat(BufferWriter &w, BWFSpec const &spec, const void *ptr)
{
  BWFSpec ptr_spec{spec};
  ptr_spec._radix_lead_p = true;
  if (ptr_spec._type == BWFSpec::DEFAULT_TYPE || ptr_spec._type == 'p') {
    ptr_spec._type = 'x'; // if default or specifically 'p', switch to lower case hex.
  } else if (ptr_spec._type == 'P') {
    ptr_spec._type = 'X'; // Incoming 'P' means upper case hex.
  }
  return bw_fmt::Format_Integer(w, ptr_spec, reinterpret_cast<intptr_t>(ptr), false);
}
The code checks if the type
p or
P was used in order to select the appropriate case, then
delegates the actual rendering to the integer formatter with a type of
x or
X as
appropriate. In turn other formatters, if given the type
p or
P can cast the value to
const void* and call
bwformat on that to output the value as a pointer.
To help reduce duplication, the output stream operator
operator<< is defined to call this
function with a default constructed
BWFSpec instance so that absent a specific overload
a BWF formatter will also provide a C++ stream output operator.
Enum Example¶
For a specific example of using BufferWriter formatting to make debug messages easier, consider the
case of
HttpDebugNames. This is a class that serves as a namespace to provide various
methods that convert state machine related data into descriptive strings. Currently this is
undocumented (and even uncommented) and is therefore used infrequently, as that requires either
blind cut and paste, or tracing through header files to understand the code. This can be greatly
simplified by adding formatters to proxy/http/HttpDebugNames.h
inline ts::BufferWriter &
bwformat(ts::BufferWriter &w, ts::BWFSpec const &spec, HttpTransact::ServerState_t state)
{
  if (spec.has_numeric_type()) {
    // allow the user to force numeric output with '{:d}' or other numeric type.
    return bwformat(w, spec, static_cast<uintmax_t>(state));
  } else {
    return bwformat(w, spec, HttpDebugNames::get_server_state_name(state));
  }
}
With this in place, anyone wanting to print the name of the server state enumeration can do
bw.print("state {}", t_state.current_state);
There is no need to remember names like
HttpDebugNames nor which method in it does the
conversion. The developer making the
HttpDebugNames class or equivalent can take care of
that in the same header file that provides the type.
Note
In actual practice, due to this method being so obscure it’s not actually used as far as I can determine.
Argument Forwarding¶
It will frequently be useful for other libraries to allow local formatting (such as
Errata).
For such cases the class methods will need to take variable arguments and then forward them on to
the formatter.
BufferWriter provides the
BufferWriter::printv() overload for this
purpose. Instead of taking variable arguments, these overloads take a
std::tuple of
arguments. Such a tuple is easily created with std::forward_as_tuple. A standard implementation that
uses the
std::string overload for
bwprint() would look like
template <typename... Args>
std::string message(string_view fmt, Args &&...args)
{
  std::string zret;
  return ts::bwprint(zret, fmt, std::forward_as_tuple(args...));
}
This gathers the arguments (generally references to them) into a single tuple, which is then passed by reference to avoid restacking the arguments for every nested function call. In essence the arguments are put on the stack (inside the tuple) once and a reference to that stack is passed to nested functions.
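A standard-C++ sketch of this pattern, using std::forward_as_tuple plus std::apply with snprintf standing in for the BWF formatter (format_tuple and message are hypothetical names):

```cpp
#include <cstdio>
#include <string>
#include <tuple>
#include <utility>
#include <vector>

// The arguments are gathered into a tuple once; the tuple travels down the
// call chain by reference and is unpacked only at the bottom with std::apply.
template <typename... Args>
std::string format_tuple(const char *fmt, std::tuple<Args...> const &args) {
  return std::apply(
      [fmt](auto const &...a) {
        int need = std::snprintf(nullptr, 0, fmt, a...); // measure
        std::vector<char> buf(need + 1);
        std::snprintf(buf.data(), buf.size(), fmt, a...); // render
        return std::string(buf.data(), need);
      },
      args);
}

template <typename... Args> std::string message(const char *fmt, Args &&...args) {
  // Arguments are stacked once, inside the tuple.
  return format_tuple(fmt, std::forward_as_tuple(args...));
}
```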
Specialized Types¶
These are types for which there exists a type specific BWF formatter.
std::string_view
Generally the contents of the view.
- ‘x’ or ‘X’
- A hexadecimal dump of the contents of the view in lower (‘x’) or upper (‘X’) case.
- ‘p’ or ‘P’
- The pointer and length value of the view in lower (‘p’) or upper (‘P’) case.
The precision is interpreted specially for this type to mean “skip precision initial characters”. When combined with max this allows a mechanism for printing substrings of the std::string_view. For instance, to print the 10th through 20th characters the format {:.10,20} would suffice. Given that the method substr for std::string_view is cheap, it’s unclear how useful this is.
sockaddr const*
The IP address is printed. Fill is used to fill in address segments if provided, not to the minimum width if specified.
IpEndpoint and IpAddr are supported with the same formatting. The formatting support in this case is extensive because of the commonality and importance of IP address data.
Type overrides
- ‘p’ or ‘P’
- The pointer address is printed as hexadecimal lower (‘p’) or upper (‘P’) case.
The extension can be used to control which parts of the address are printed. These can be in any order, the output is always address, port, family. The default is the equivalent of “ap”. In addition, the character ‘=’ (“numeric align”) can be used to internally right justify the elements.
- ‘a’
- The address.
- ‘p’
- The port (host order).
- ‘f’
- The IP address family.
- ‘=’
- Internally justify the numeric values. This must be the first or second character. If it is the second the first character is treated as the internal fill character. If omitted ‘0’ (zero) is used.
E.g.
void func(sockaddr const *addr)
{
  bw.print("To {}", addr);                    // -> "To 172.19.3.105:4951"
  bw.print("To {0::a} on port {0::p}", addr); // -> "To 172.19.3.105 on port 4951"
  bw.print("To {::=}", addr);                 // -> "To 172.019.003.105:04951"
  bw.print("Using address family {::f}", addr);
  bw.print("{::a}", addr);                    // -> "172.19.3.105"
  bw.print("{::=a}", addr);                   // -> "172.019.003.105"
  bw.print("{::0=a}", addr);                  // -> "172.019.003.105"
  bw.print("{:: =a}", addr);                  // -> "172. 19. 3.105"
  bw.print("{:>20:a}", addr);                 // -> "        172.19.3.105"
  bw.print("{:>20:=a}", addr);                // -> "     172.019.003.105"
  bw.print("{:>20: =a}", addr);               // -> "      172. 19. 3.105"
}
Format Classes¶
Although the extension for a format can be overloaded to provide additional features, this can become
too confusing and complex to use if it is used for fundamentally different semantics on the same
base type. In that case it is better to provide a format wrapper class that holds the base type
but can be overloaded to produce different (wrapper class based) output. The classic example is
errno which is an integral type but frequently should be formatted with additional information
such as the descriptive string for the value. To do this the format wrapper class
ts::bwf::Errno
is provided. Using it is simple:
w.print("File not open - {}", ts::bwf::Errno(errno));
which will produce output that looks like
“File not open - EACCES: Permission denied [13]”
For
errno this is handy in another way as
ts::bwf::Errno will preserve the value of
errno across other calls that might change it. E.g.:
ts::bwf::Errno last_err(errno);
// some other code generating diagnostics that might tweak errno.
w.print("File not open - {}", last_err);
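The capture-at-construction idea is easy to sketch in standard C++ (ErrnoSnapshot is a hypothetical stand-in for ts::bwf::Errno):

```cpp
#include <cerrno>
#include <cstring>
#include <string>

// Capture the errno value at construction time so later calls cannot clobber
// it before the message is formatted.
class ErrnoSnapshot {
  int _e;
public:
  explicit ErrnoSnapshot(int e = errno) : _e(e) {}
  int value() const { return _e; }
  std::string text() const {
    // Short description plus the numeric value, e.g. "Permission denied [13]".
    return std::string(std::strerror(_e)) + " [" + std::to_string(_e) + "]";
  }
};
```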
This can also be useful for user defined data types. For instance, in the HostDB the type of the entry is printed in multiple places and each time this code is repeated
"%s%s %s", r->round_robin ? "Round-Robin" : "", r->reverse_dns ? "Reverse DNS" : "", r->is_srv ? "SRV" : "DNS"
This could be wrapped in a class,
HostDBType such as
struct HostDBType { HostDBInfo* _r { nullptr }; HostDBType(r) : _r(r) {} };
Then define a formatter for the wrapper
BufferWriter& bwformat(BufferWriter& w, BWFSpec const& spec, HostDBType const& wrap) { return w.print("{}{} {}", wrap._r->round_robin ? "Round-Robin" : "", r->reverse_dns ? "Reverse DNS" : "", r->is_srv ? "SRV" : "DNS"); }
Now this can be output elsewhere with just
w.print(“{}”, HostDBType(r));
If this is used multiple places, this is cleaner and more robust as it can be updated everywhere with a change in a single code location.
These are the existing format classes in header file
bfw_std_format.h. All are in the
ts::bwf namespace.
- class
Errno¶
Formatting for
errno. Generically the formatted output is the short name, the description, and the numeric value. A format type of
d will generate just the numeric value, while a format type of
s will generate just the short name and description.
- template<typename ...
Args>
FirstOf(Args&&... args)¶
Print the first non-empty string in an argument list. All arguments must be convertible to
std::string_view.
By far the most common case is the two argument case used to print a special string if the base string is null or empty. For instance, something like this:
w.print("{}", name != nullptr ? name : "<void>")
This could also be done like:
w.print("{}", ts::bwf::FirstOf(name, "<void>"));
In addition, if the first argument is a local variable that exists only to do the empty check, that variable can be eliminated entirely. E.g.:
const char * name = thing.get_name(); w.print("{}", name != nullptr ? name : "<void>")
can be simplified to

w.print("{}", ts::bwf::FirstOf(thing.get_name(), "<void>"));
In general avoiding ternary operators in the print argument list makes the code cleaner and easier to understand.
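The selection rule FirstOf implements can be sketched as a small standard-C++ helper (first_of is hypothetical and eager, while the real class formats lazily):

```cpp
#include <string_view>

// Return the first argument that is neither null nor empty; an empty view if
// nothing matches.
inline std::string_view first_of() { return {}; }

template <typename... Rest>
std::string_view first_of(const char *s, Rest... rest) {
  if (s != nullptr && *s != '\0') return s; // first non-empty wins
  return first_of(rest...);                 // otherwise keep looking
}
```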
- class
Date¶
Date formatting in the
strftimestyle.
Date(time_t epoch, std::string_view fmt = "%Y %b %d %H:%M:%S")¶
epoch is the time to print. fmt is the format for printing which is identical to that of strftime. The default format looks like “2018 Jun 08 13:55:37”.
Date(std::string_view fmt = "%Y %b %d %H:%M:%S")¶
As previous except the epoch is the current epoch at the time the constructor is invoked. Therefore if the current time is to be printed the default constructor can be used.
When used, the format specification can take an extension of “local” which formats the time as local time. Otherwise it is GMT.
w.print("{}", Date("%H:%M")); will print the hour and minute as GMT values.
w.print("{::local}", Date("%H:%M")); will print the hour and minute in the local time zone.
w.print("{::gmt}", ...); will output in GMT if additional explicitness is desired.
- class
OptionalAffix¶
Affix support for printing optional strings. This enables printing a string such that the affixes are printed only if the string is not empty. An empty string (or
nullptr) yields no output. A common situation in which this is useful is code like
printf("%s%s", data ? data : "", data ? " " : "");
or something like
if (data) { printf("%s ", data); }
Instead,
OptionalAffix can be used inline, which is easier if there are multiple items. E.g.

w.print("{}", ts::bwf::OptionalAffix(data)); // because default is single trailing space suffix.
OptionalAffix(const char *text, std::string_view suffix = " ", std::string_view prefix = "")¶
Create a format wrapper with suffix and prefix. If text is
nullptr or is empty, generate no output. Otherwise print the prefix, text, suffix.
OptionalAffix(std::string_view text, std::string_view suffix = " ", std::string_view prefix = "")¶
Create a format wrapper with suffix and prefix. If text is empty (or its data is nullptr), generate no output. Otherwise print the prefix, text, suffix. Note that passing
std::string as the first argument will work for this overload.
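The affix rule can be sketched as a plain function (optional_affix is hypothetical; the real class is a format wrapper rather than an eager string builder):

```cpp
#include <string>
#include <string_view>

// Empty or null text yields no output at all; otherwise the prefix, text,
// and suffix are emitted together. Defaults mirror the single-trailing-space
// convention described above.
std::string optional_affix(const char *text, std::string_view suffix = " ",
                           std::string_view prefix = "") {
  if (text == nullptr || *text == '\0') return "";
  std::string out;
  out.append(prefix).append(text).append(suffix);
  return out;
}
```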
Global Names¶
As a convenience, there are a few predefined global names that can be used to generate output. These
do not take any arguments to
BufferWriter::print(); the data needed for output is either
process or thread global and is retrieved directly. They also are not counted for automatic indexing.
- now
- The epoch time in seconds.
- tick
- The high resolution clock tick.
- timestamp
- UTC time in the format “Year Month Date Hour:Minute:Second”, e.g. “2018 Apr 17 14:23:47”.
- thread-id
- The id of the current thread.
- thread-name
- The name of the current thread.
- ts-thread
- A pointer to the Traffic Server Thread object for the current thread. This is useful for comparisons.
- ts-ethread
- A pointer to the Traffic Server EThread object for the current thread. This is useful for comparisons or to indicate if the thread is an EThread (if not, the value will be nullptr).
For example, to have the same output as the normal diagnostic messages with a timestamp and the current thread:
bw.print("{timestamp} {ts-thread} Counter is {}", counter);
Note that even though no argument is provided, the global names do not count as part of the argument indexing; therefore the preceding example could be written as:
bw.print("{timestamp} {ts-thread} Counter is {0}", counter);
Working with standard I/O¶
BufferWriter can be used with some of the basic I/O functionality of a C++ environment. At the lowest
level the output stream operator can be used with a file descriptor or a
std::ostream. For these
examples assume
bw is an instance of
BufferWriter with data in it.
int fd = open("some_file", O_RDWR);
bw >> fd;        // Write to file.
bw >> std::cout; // Write to standard out.
For convenience a stream operator for std::ostream is provided to make the use more natural.

std::cout << bw;
std::cout << bw.view(); // identical effect to the previous line.
Using a BufferWriter with printf is straightforward by use of the sized string format code.

ts::LocalBufferWriter<256> bw;
bw.print("Failed to connect to {}", addr1);
printf("%.*s\n", static_cast<int>(bw.size()), bw.data());
Alternatively the output can be null terminated in the formatting to avoid having to pass the size.
ts::LocalBufferWriter<256> bw;
printf("%s\n", bw.print("Failed to connect to {}\0", addr1).data());
When using C++ stream I/O, writing to a stream can be done without any local variables at all.
std::cout << ts::LocalBufferWriter<256>().print("Failed to connect to {}\n", addr1);
This is handy for temporary debugging messages, as it avoids having to clean up local variable declarations later, particularly when the types involved themselves require additional local declarations (such as in this example, an IP address, which would normally require a local text buffer for conversion before printing). As noted previously, this is particularly useful inside a case statement, where local variables are more annoying to set up.
Reference

- class BufferWriter
  BufferWriter is the abstract base class which defines the basic client interface. This is intended to be the reference type used when passing concrete instances, rather than having to support the distinct types.
  - BufferWriter & write(void *data, size_t length)
    Write to the buffer starting at data for at most length bytes. If there is not enough room to fit all the data, none is written.
  - BufferWriter & write(std::string_view str)
    Write the string str to the buffer. If there is not enough room to write the string, no data is written.
  - BufferWriter & write(char c)
    Write the character c to the buffer. If there is no space in the buffer, the character is not written.
  - BufferWriter & fill(size_t n)
    Increase the output size by n without changing the buffer contents. This is used in conjunction with BufferWriter::auxBuffer() after writing output to the buffer returned by that method. If this method is not called, such output will not be counted by BufferWriter::size() and will be overwritten by subsequent output.
  - size_t remaining() const
    Return the number of available remaining bytes that could be written to the buffer.
  - BufferWriter & clip(size_t n)
    Reduce the available space by n bytes.
  - BufferWriter & extend(size_t n)
    Increase the available space by n bytes. Extreme care must be used with this method, as BufferWriter will trust the argument, having no way to verify it. In general this should only be used after calling BufferWriter::clip() with the same value. Together these allow the buffer to be temporarily reduced to reserve space for the trailing element of a required pair of output strings, e.g. making sure a closing quote can be written even if part of the string is not.
  - size_t extent() const
    Return the total number of bytes in all attempted writes to this buffer. This value allows a successful retry in case of overflow, presuming the output data doesn't change. This works well with the standard "try before you buy" approach of attempting to write output, counting the characters needed, then allocating a sufficiently sized buffer and actually writing.
  - template<typename ... Args> BufferWriter & print(TextView fmt, Args&&... args)
    Print the arguments according to the format fmt. See bw-formatting.
  - template<typename ... Args> BufferWriter & printv(TextView fmt, std::tuple<Args...> &&args)
    Print the arguments in the tuple args according to the format. See bw-formatting.
  - std::ostream & operator>>(std::ostream &stream) const
    Write the contents of the buffer to stream and return stream.
- class FixedBufferWriter : public BufferWriter
  This is a class that implements BufferWriter on a fixed buffer, passed in to the constructor.
  - FixedBufferWriter(void *buffer, size_t length)
    Construct an instance that will write to buffer at most length bytes. If more data is written, all data past the maximum size is discarded.
  - FixedBufferWriter & reduce(size_t n)
    Roll back the output to have n valid (used) bytes.
  - FixedBufferWriter & reset()
    Equivalent to reduce(0), provided for convenience.
  - FixedBufferWriter auxWriter(size_t reserve = 0)
    Create a new instance of FixedBufferWriter for the remaining output buffer. If reserve is non-zero then, if possible, the capacity of the returned instance is reduced by reserve bytes, in effect reserving that amount of space at the end. Note the space will not be reserved if reserve is larger than the remaining output space.
- template<size_t N> class LocalBufferWriter : public FixedBufferWriter
  This is a convenience class which is a subclass of FixedBufferWriter. It creates the buffer as a member rather than having an external buffer passed to the instance. The buffer is N bytes long. This differs from its superclass only in the constructor, which is only a default constructor.
  - LocalBufferWriter::LocalBufferWriter()
    Construct an instance with a capacity of N.
- class BWFSpec
  This holds a format specifier. It has the parsing logic for a specifier: if the constructor is passed a std::string_view of a specifier, it will parse it and load the results into the class members. This is useful for specialized implementations of bwformat().
- template<typename V> BufferWriter & bwformat(BufferWriter &w, BWFSpec const &spec, V const &v)
  A family of overloads that perform formatted output on a BufferWriter. The set of types supported can be extended by defining an overload of this function for the desired types.
- template<typename ... Args> std::string & bwprint(std::string &s, std::string_view format, Args&&... args)
  Generate formatted output in s based on the format and arguments args. The string s is adjusted in size to be exactly the length required by the output. If the string already had enough capacity it is not re-allocated; otherwise the resizing will cause a re-allocation.
- template<typename ... Args> std::string & bwprintv(std::string &s, std::string_view format, std::tuple<Args...> args)
  Generate formatted output in s based on the format and args, which must be a tuple of the arguments to use for the format. The string s is adjusted in size to be exactly the length required by the output. If the string already had enough capacity it is not re-allocated; otherwise the resizing will cause a re-allocation.
  This overload is used primarily as a back end to another function which takes the arguments for the formatting independently.
https://docs.trafficserver.apache.org/en/latest/developer-guide/internal-libraries/buffer-writer.en.html
Create survival curves using kaplanmeier, compute the log-rank test, and make plots.

Project description

kaplanmeier
- kaplanmeier is a Python package to compute Kaplan-Meier survival curves and the log-rank test, and to make the plot instantly. This work is built on the lifelines package.
Contents
Installation
- Install kaplanmeier from PyPI (recommended). kaplanmeier is compatible with Python 3.6+ and runs on Linux, MacOS X and Windows.
- Distributed under the MIT license.
Requirements
- It is advisable to create a new environment. Pgmpy requires an older version of networkx and matplotlib.
conda create -n env_KM python=3.6
conda activate env_KM
pip install matplotlib numpy pandas seaborn lifelines
Quick Start
pip install kaplanmeier
- Alternatively, install kaplanmeier from the GitHub source:
git clone
cd kaplanmeier
python setup.py install
Import kaplanmeier package
import kaplanmeier as km
Example:
df = km.example_data()
time_event = df['time']
censoring = df['Died']
labx = df['group']

# Compute survival
out = km.fit(time_event, censoring, labx)
Make figure with cii_alpha=0.05 (default)
km.plot(out)
km.plot(out, cmap='Set1', cii_lines=None, cii_alpha=0.05)
km.plot(out, cmap='Set1', cii_lines='line', cii_alpha=0.05)
km.plot(out, cmap=[(1, 0, 1),(0, 1, 1)])
km.plot(out, cmap='Set2')
km.plot(out, cmap='Set2', methodtype='custom')
- df looks like this:
     time  Died  group
0     485     0      1
1     526     1      2
2     588     1      2
3     997     0      1
4     426     1      1
..    ...   ...    ...
175   183     0      1
176  3196     0      1
177   457     1      2
178  2100     1      1
179   376     0      1

[180 rows x 3 columns]
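Under the hood, km.fit computes the Kaplan-Meier product-limit estimate: at each event time the running survival probability is multiplied by (1 - deaths / number at risk), while censored observations only shrink the risk set. A package-independent sketch of that calculation (the function name kaplan_meier below is ours, not part of the package):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times:  observation times
    events: 1 if the event occurred (e.g. 'Died'), 0 if censored
    Returns a list of (event_time, survival_probability) pairs.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        n = at_risk  # number at risk just before time t
        # Consume all observations tied at time t (events and censorings).
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            at_risk -= 1
            i += 1
        if deaths:  # the survival curve only steps down at event times
            survival *= 1 - deaths / n
            curve.append((t, survival))
    return curve

# Three subjects: events at t=1 and t=2, one censored at t=3.
# S(1) = 1 * (1 - 1/3) = 2/3;  S(2) = 2/3 * (1 - 1/2) = 1/3.
print(kaplan_meier([1, 2, 3], [1, 1, 0]))
```

The censored subject at t=3 produces no step in the curve, which is exactly why the `Died` column matters in the df above.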
Citation
Please cite kaplanmeier in your publications if this is useful for your research. Here is an example BibTeX entry:
@misc{erdogant2019kaplanmeier,
  title={kaplanmeier},
  author={Erdogan Taskesen},
  year={2019},
  howpublished={\url{}},
}
References
Maintainers
Contribute
- All kinds of contributions are welcome!
https://pypi.org/project/kaplanmeier/
How to print instances of a class using print()?
class Test:
    def __repr__(self):
        return "Test()"
    def __str__(self):
        return "member of Test"

>>> t = Test()
>>> t
Test()
>>> print(t)
member of Test
The __str__ method is what gets called when you print the object, and the __repr__ method is what gets called when you use the repr() function (or when you look at the object at the interactive prompt).
If no
__str__ method is given, Python will print the result of
__repr__ instead. If you define
__str__ but not
__repr__, Python will use what you see above as the
__repr__, but still use
__str__ for printing.
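These fallback rules can be checked directly; the class names OnlyRepr and OnlyStr below are ours, chosen just for illustration:

```python
class OnlyRepr:
    def __repr__(self):
        return "OnlyRepr()"

class OnlyStr:
    def __str__(self):
        return "only str"

# No __str__ defined: str() and print() fall back to __repr__.
assert str(OnlyRepr()) == "OnlyRepr()"

# No __repr__ defined: repr() uses the default "<... object at 0x...>" form,
# while str() and print() still use __str__.
assert str(OnlyStr()) == "only str"
assert "OnlyStr object at" in repr(OnlyStr())
```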
As Chris Lutz mentioned, this is defined by the
__repr__ method in your class.
From the documentation of repr():
Given the following class Test:
class Test:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __repr__(self):
        return "<Test a:%s b:%s>" % (self.a, self.b)

    def __str__(self):
        return "From str method of Test: a is %s, b is %s" % (self.a, self.b)
..it will act the following way in the Python shell:

>>> t = Test(123, 456)
>>> t
<Test a:123 b:456>
>>> print(repr(t))
<Test a:123 b:456>
>>> print(t)
From str method of Test: a is 123, b is 456
>>> print(str(t))
From str method of Test: a is 123, b is 456
If no
__str__ method is defined,
print(t) (or
print(str(t))) will use the result of
__repr__ instead
If no
__repr__ method is defined then the default is used, which is pretty much equivalent to..
def __repr__(self):
    return "<%s instance at %s>" % (self.__class__.__name__, id(self))
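That approximation can be spot-checked against CPython's actual default, which formats id() in hex (the class name Bare here is ours):

```python
class Bare:
    pass

b = Bare()
# CPython's default repr looks like "<__main__.Bare object at 0x7f...>".
assert "Bare object at 0x" in repr(b)
assert repr(b).endswith(">")
```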
A generic way that can be applied to any class without specific formatting could be done as follows:
class Element:
    def __init__(self, name, symbol, number):
        self.name = name
        self.symbol = symbol
        self.number = number

    def __str__(self):
        return str(self.__class__) + ": " + str(self.__dict__)
And then,
elem = Element('my_name', 'some_symbol', 3)
print(elem)
produces (in Python 3, where str(self.__class__) includes the class wrapper):

<class '__main__.Element'>: {'name': 'my_name', 'symbol': 'some_symbol', 'number': 3}
https://codehunter.cc/a/python/how-to-print-instances-of-a-class-using-print
Hello everyone! My name is Danielle M. Villegas, and I’m the “resident blogger” at Adobe DITA World 2017. In this blog post, I will sum up the presentations of Day 1 of the conference.
There was a lot of information on this first day of Adobe DITA World 2017, but hopefully, I’ll be able to give you some of the highlights of each talk.
After Adobe TechComm Evangelist Stefan Gentz and Adobe Solutions Consulting Manager Dustin Vaughn opened up the virtual conference room, things started quickly. We were told that last year, +1,400 attendees signed up for the event. This year, Adobe DITA World got +2,500 registrations worldwide. That’s a lot of people attending!
The conference started off with a short welcome note from Adobe President and CEO, Shantanu Narayen. His main message was that our devices enable us to do so much more and in a personalized way, and we are the creators! He emphasized that this week, we’ll be hearing from experts who will help us to create, manage, and deliver world-class experiences for the best customer experiences. Adobe provides all the tools to make this happen!
In this post:
- [Keynote] Scott Abel: “The Cognitive Era and the Future of Content”
- Juhee Garg: “Technical content as part of your Marketing Strategy”
- Philipp Baur: “The Triple C of Good DITA”
- Ulrike Parson: “Bringing together what belongs together: DITA as the glue between content silos”
- Tom Aldous: “Using DITAMAP / FrameMaker for non-DITA content”
- Sarah O’Keefe: “Content – Is it really a business asset?”
- Robert Anderson: “What Is DITA Open Toolkit, and What Should FrameMaker Authors Know About It?”
Keynote from Scott Abel: “The Cognitive Era and the Future of Content”
Scott Abel is the CEO of “The Content Wrangler” company, which is the official media partner of Adobe DITA World 2017. Scott is always a dynamic speaker!
The main focus of Scott’s talk was centered around how the future of technical communications will be about creating content that does things FOR our customers by producing on machine-ready content, as content is a business asset!
Scott started his talk by talking about obesity and provided some stats about it. As someone who is watching his own health, he used the business of his nutritionist, Manuel, as an example to explain how Manuel needed to create better capabilities in his content. Manuel hired Scott after Manuel helped Scott reach one of his health goals (a satisfied customer!). Manuel needed to publish his content to multiple channels, but lacked some capabilities, like personalized content. His content was created to be read by humans, but not by computers. As a result, this prevented the automatic interchange between systems. This problem could be fixed through single-source publishing, adopting a unified content strategy, creating intelligent content, or even adopting DITA for topic-based content. However, it might not be enough to beat the competition. A differentiator was needed, but right now, Manuel's not able to be scalable. Patients want exceptional experiences; we make them search for what they need. As content creators, we need to focus on how we deliver those exceptional experiences. Customers don't want to learn your jargon or search for things; they don't want to do the work that should already have been done for them to get to what they want.
This is where, Scott explained, cognitive computing comes into play. Cognitive computing involves self-learning systems that learn at scale, reason with purpose from the data, and interact with humans naturally through natural language processing. It's a collection of different applications. Manuel could use cognitive computing to collect various preferences and habits, as well as family and other health history data, and combine it with customer personal data and public data to create a personalized content experience for his customers.
What if he could connect his services to others offering similar services? Scott presented the idea that personal service managed using content management can yield an exceptional customer experience.
What if you could do the same thing? Scott suggested that it takes at least five steps to go in this direction:
You must have a willingness to explore, not always have ROI in mind,
You will need a disruptive mindset,
You will need intrapreneurial thinking – be a risk taker,
You will need top-level leadership support, and
You will need to have the resources, time and budget.
While cognitive content is the future, it’s not as close as we’d like to think. Depending on whom you ask, artificial intelligence (AI) is estimated to be used in full practice somewhere in the next 28-75 years from now! Cognitive content relies on AI, which was originally derived from science fiction ideas.
There are three main types of AI, as Scott explained:
Strong AI – This is AI like in the movie “Her,” where the AI had god-like intelligence
Full AI – This would be more generalized AI, set to perform intellectual tasks, like HAL in the movie, “2001: A Space Odyssey” performing a Turing Test.
Narrow AI – This is what we have now, also known as Weak AI. Example of weak AI would include digital assistants like Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, or Google Home. These all require machine ready content, as they are mostly chatbots. We provide commands, and the chatbots provide answers within their programmed scope.
We’re stuck in the assistive past utilizing assistive solutions. We need to move towards acting on behalf of our users to help them achieve their goal, which means we need an agentive solution that works like personal valets. We are starting to move in that direction, but we’re not quite there yet. For example, a narrow AI agent would be Roomba or a Nest Thermostat, in which the AI in each of these devices learns your behavior. Information awareness plus machines doing the work equals an agentive action like Google alerts.
How do you decide between assistive versus agentive solutions?
Agentive solutions are delegated, measurable, have user focus, and have user input. Otherwise, it’s for assistance or automatic. They are vigilant and don’t need reminders or suffer from decreasing expertise. They are attentive to details, don’t look for workarounds, and are built for speed. Assistive solutions don’t employ these features.
Scott warned that the perceived dangers using AI are
“AI Washing,” which is basically marketing mumbo-jumbo,
AI will create autonomous weapons used to kill us, and
Robots will replace us.
Scott concluded that there are many types of niche content professionals that will be needed moving forward. Technical communicators are important in the content equation! He recommended that we can learn more with a book he recommends, Designing Agentive Technology by Christopher Noessel. He also invited us all to attend the conference he runs, Information Development World, which is set to be a great conference about preparing for chatbots and other cognitive computing, which takes place on November 28th – 30th, 2017.
Juhee Garg: “Technical content as part of your Marketing Strategy”
Juhee Garg works for Adobe as the Senior Product Manager for the XML Documentation Add-on for Adobe Experience Manager, Adobe’s enterprise-class DITA Component Content Management System (CCMS).
Juhee started out by talking about the digital evolution, whereby the behavior of buyers is changing because they can learn a lot of information with just the click of a button. Buyers are forming opinions based on digital searches now. Business buyers don't contact suppliers until 57% of the purchase process is complete.
A typical buyer research process might be something where the user starts at the product website, then proceeds to investigate white papers, product manuals, how-to videos, user guides, and case studies, then looking at a competitive comparison before finally looking at an admin guide. Buyers are now looking between marketing content and technical content, as it is all product information. Boundaries are blurring between these kinds of content, yet the technical content is not usually part of marketing strategy because it’s considered a cost center and lacks IT support. A better alignment of these kinds of content is needed. However, it’s hard to do when ecosystems are creating different content. System integration is an IT nightmare. It can be hard to coordinate tech content with web CMS/Marketing content, difficult to keep templates in sync, keep content integrity, push updates; shared content can get duplicated, and it’s difficult to maintain multiple systems.
How do we break down the silos? We can bring the appropriate tools, and bring the two content creation groups together on a common platform and content model that could go out to the users. The advantages of this approach would be a unified content strategy, a consistent user experience, shared and reused content, resulting in effective content and communication.
The XML Documentation Add-on for Adobe Experience Manager provides that link between authoring and collaboration. Authoring and collaboration using DITA content directly on AEM can be done, providing end-to-end content management capabilities and multi-channel publishing.
The benefits are that it produces blended publishing, and it allows you to inject technical communication content based on DITA directly into AEM websites. This way, Marketing and Technical Communication content can be mixed on one website.
Juhee gave us a demo of how this works in AEM directly. The add-on tool provides a WYSYWIG-friendly editor that allows someone who is not familiar with writing in DITA to write and edit in AEM in a DITA-friendly way. There is still a source view as well, so you can see all the XML tags and tweak as needed if you are a tech writer. All DITA features are supported by this editor. The publishing model is also very user-friendly, and easy to move elements around in the structure to change taxonomy as needed. DITA can be published as an AEM site. You can reuse templates from the marketing site if needed. It’s easy to publish, as you can publish content as a website, a PDF, HTML5, an EPub, and other advanced features. Pagewise PDF is a special output feature to create a PDF of each AEM page in the site.
Much of the editing of a website in AEM is “drag and drop” of components/widgets, which looked very easy to do! Through the demo, Juhee was able to show how marketing and tech comm can align easily using these tools, and how it worked when it was published. The add-on can be specified on whichever version of DITA you are using, as well as DITA specialization.
Adobe Experience Manager is part of the Adobe Marketing Cloud. Therefore, other components like Adobe Campaign, Adobe Target, Adobe Primetime and Adobe Social. The new 2.5 release is expected next week showing these features and new ones as well! Check back on this blog for the announcement.
Philipp Baur: “The Triple C of Good DITA”
Philipp Baur is from Congree Language Technologies, a 30-year-old company based in Germany which focuses on software and services for author assistance, serving about 90 customers. The Congree Authoring Server checks spelling, grammar, style according to company standards, terminology according to the term database, and abbreviation use; it also looks up similar sentences and terminology information, and stores new content for everyone to use, automatically and in real time as you write. It is directly integrated into the editor you are using and can be used company-wide for consistency.
Philipp started his talk reviewing talking about topic-oriented documentation and DITA. He started with a definition of a topic, which he defined as
Independent information carrier
Contains enough information to be viable by itself
Answers a specific & unique question
Can be combined freely with other topics
Not created for specific documents but for the entire company
Why would we write this way?
Topics make content more manageable
Several authors write on the same document
Makes proofreading and translation more flexible
Saves money by reusing content
Modern devices require optimal space management
Single point of truth
Easy to apply thanks to standards like DITA
DITA offers a predefined structure for topics, and with the help of metadata, topics can serve different target groups, products, and purposes.
The Triple Cs of good DITA was defined as cohesion, consistency, and coherence.
Cohesion:
It’s the glue between two sentences.
It’s Necessary for the reader to link two sentences.
Examples would be words like and, so, yet, etc. or pronouns like this, some, or it.
Unnecessary or wrong use of cohesion undermines the purpose of topics.
Example: I like my cat. The cat would kill me if she could.
Change to: I like my cat. But she would kill me if she could.
Coherence:
Ensures that content has some sort of inner connection.
Avoids contradictions.
Avoids confusion.
Incomplete topics increase the risk of confusion.
Example: My cat is not for sale. Contact me if you want to buy my cat.
Consistency:
The invisible thread accompanying the reader through your documentation.
It can be split into language consistency, style consistency, and content consistency.
Language consistency – British vs. American English and spelling, etc.
Style consistency – how the user is addressed; tone of voice; use of passive voice; level of politeness; sentence complexity; use of modal verbs
Content consistency – identical sentences for identical ideas; using the same word for the same concept; violations are problematic for translation and for the reader
Hard to achieve
Inconsistencies throw off the reader, interrupt concentration and can lead to misunderstandings
How Congree can help
Users can use Congree in conjunction with FrameMaker.
Philipp gave a demo, which showed that Congree can display violations of what needs correcting in a FrameMaker document so that it can be fixed for consistency. You can click on each violation to make changes as needed, and it provides the style guide integrated into Congree
You can learn more about Congree on their website.
Ulrike Parson: “Bringing together what belongs together: DITA as the glue between content silos”
Ulrike Parson is also from Germany and owns Parson AG. She presented a case study based on work she did with a semiconductor company that showed how she and her colleagues broke down the content silos of her client using DITA as the glue.
The challenges they faced:
They had to look at customer facing, certification, and developer technical documentation.
Documents were created by different groups.
Information was created in different life cycle phases.
There was a diversity of tools for authoring, content management, and publication.
Reuse across lifecycle phases and systems were done mostly by copy and paste.
There was a high effort for changing information and keeping it consistent.
It was impossible to estimate the consequences of change.
All of this content would feed into the middleware, which would include:
A semantic model for intelligent information
A graph database software (i-views) as semantic middleware. Comparable products: neo4j, IBM Watson
Import-Export interfaces (connectors)
Used established standards, DITA, RDF, ReqIF
Use established interfaces like REST
No interface? Use a standard exchange format. Start with one direction only
Reusable DITA modules
Use of DITA
Use centralized framework, templates and subject scheme maps
Use intelligent referencing mechanisms and configurable data as keys
If no CMS, find a way to trace use of modules
Consider how much of the semantics to transport to DITA
As a result of the project, the documentation is now formed by combining generated and authored text. This enables single-sourcing of documents and the single-sourcing of variants for documents for publication on the website, internal use, and certifications. Documents could automatically be published via a build server. The semantic middleware monitors generated DITA modules for changes in the original system. All authors use a company-wide DITA framework.
They had less success building a dashboard in the web portal. They found that it was not as successful as hoped. Despite creating a central access point to all information for development projects, it was hard for workers to migrate as they were used to their old ways. Features of the dashboard:
Visualization of the traces between the objects from the different domains (requirement linked to test case linked to component linked to documentation, for example).
Coverage analysis and metrics from relations such as between requirements and test cases.
Lessons learned from this experience were the following:
Reuse of information must be based on a solid and scalable metadata model.
Use of standards makes your solution future proof.
DITA provides a good basis for intelligent content.
Creating integrated information for reuse requires a corporate effort.
Integrated information requires new processes.
Migration can be a huge effort.
“More authors” means “more training.”
Ulrike stated that this project has not yet reached the stage of Information 4.0, but it was the required step towards it, towards intelligent information.
Intelligent content is more than reusing information.
Intelligent content is modular, machine-readable content, enriched and delivered with metadata for enhanced usage.
DITA is the perfect basis for intelligent content as it supports modularization and metadata
Standards for metadata for technical communication are emerging (like iiRDS).
Technical communicators will become content and metadata curators.
Tom Aldous: “Using DITAMAP / FrameMaker for non-DITA content”
Thomas Aldous has been in the technical communications industry for 30 years, including stints at InTech, Adobe, Acrolinx, and now consulting as The Content Era.
The goal of this session was to provide solutions for those who had non-DITA XML content in a non-FrameMaker application, but would like to change authoring and publishing environments, those who were currently authoring non-DITA XML Structured content but would like to slowly migrate from current structured or unstructured content to DITA, and those would like to manage all content in DITA XML structure and publish to output like a complete website, HTML5, PDF, mobile app, or help.
Tom was going a little fast for me to keep up with him, but this is what I was able to glean:
A DITAMAP can let you organize topics that you want to publish. You can also generate navigation files based on the map structure and generate links that get added to the topics.
A map file references one or more of any XML file using <topicref> elements. The <topicref> elements can be nested to reflect the desired hierarchical relationship of the topics.
Why does it matter? FrameMaker supports DITA, including DITAMAPs, even if the content is structured in a non-DITA structure, and can be configured for most structures.
Tom called FrameMaker the “monkey wrench” of structured publishing, as it can handle just about anything related to DITA.
XML content comes in several “flavors”:
1 long file
1 small file map with pointers to files in the order they should be published. Entity references do not normally have DTD callouts
1 small file map with a pointer to a file in the order they should be published. With Arbortext and others, use DTD callouts
DITA-DITAMAP and BOOK MAP have pointers to topic files in the order of publishing
If starting with one long XML file (the example he used had over 6,000 lines in it), the long XML File could be converted into a DITAMAP, whereby it was cut up into chunks of content using some scripting, then mapped.
Tom noted that there are lots of examples of custom XML structures and other standards, and that you don't have to move completely to DITA; you can also create an XSL stylesheet to transform your current XML into the DITA structure.
Tom proceeded with a demo, which he started by opening the long XML file, which showed that you could bring in the DTD, name your application, create a template, read/write rules, namespace, and define doctypes, and also support entity locations.
By using an ExtendScript utility that The Content Era created that can chunk the files, he was able to create the DITAMAP as well. The XML view configures content in any way you want, showing that the ExtendScript will merge all the chunks seamlessly.
The way he did this within FrameMaker was to access from the top navigation Structure > Structured Application Designer. You would load up an existing application, then add all the details in the pop-up screen. Tom warned that rules are the most difficult and powerful, but it’s easily editable now in FrameMaker, as you can add the template, add doctypes, etc.
His advice was that you should understand your own domain content – make it intuitive, and create solutions for your content.
Tom likes complex challenges, so contact him if you are really stuck! He reminded us that XML is 16 years old now, so it’s a strong standard.
Sarah O’Keefe: “Content – Is it really a business asset?”
Sarah O’Keefe from Scriptorium Publishing contends that content is a business asset, especially if it’s good content. It means that people don’t return products or call customer service. Quoting Tim O’Reilly, “Technical information changes the world by spreading the knowledge of innovators.”
How is content an asset?
Meets regulatory requirements
Enables customer to use a product successfully
Provides reference information to prospective buyer doing research
Support brand message
When assets go wrong, it can be due to a number of reasons. They can include:
Product is recalled because of incorrect content
Frustrated customers return products – 25% of returns are due to bad instructions
Prospective buyers don’t find what they need
It contradicts the branding
Information is out of date
Bad execution
The content is not appropriate for audience
How do you determine if your content is an asset or a liability? It needs to meet a hierarchy of content needs:
The minimum viable content meets the Available, Accurate, and Appropriate levels. If these aren’t met, the content is a liability. Content that is also Connected and Intelligent is an asset.
The customer journey now has to be looked at holistically. Content types are converging – we used to have a marketing funnel, but now we have a circular process. In the marketing funnel, you matter until you buy; then you don’t matter. It’s the battle between pre-sale documents versus post-sale documents, and persuasive information versus product information. In the customer journey, we care about you at every step – you matter through the whole process. Convergence happens when using all the different documentation.
Sarah gave an example by telling a story about the disconnection between the website and the instructions included in the box of a product she bought. She emphasized that, unfortunately, you can’t control content use in the customer journey.
The Internet of Things (IoT) and the connected enterprise pull in many of these concepts, and here content is a huge asset. In the connected home, you can communicate with devices in your smart home, getting information and having the devices perform actions. The connected enterprise is the connected factory – Industry 4.0, robotics, and automation – with concerns related to security.
IoT devices require intelligent content that is location-aware, time-aware, context-aware, and system-context-aware, and that provides context-sensitive help. This can be achieved by improving search: searchability (information is exposed to a search engine), findability (information shows up when people search for it, performs well with certain keywords, etc.), and discoverability (other people create links to your content, others recommend your content, reputation matters). Your reputation affects content distribution!
Digital business transformation occurs through good data hygiene. The ways to achieve this include:
No more back-formation of data
Single source of truth
Content is derived from data
Content is not data storage
For example, a product gets made, then technical publications capture information. Then product specifications change, but corrections aren’t being made at the source. The document is now the source of truth, which is not an appropriate role for tech pubs. Content Management 1.0 needed mainly traceability (where did content come from), content usable in various forms, distribution, and localization workflows (reduce, reuse, recycle). Localization is very important in this process.
Sarah concluded by saying that good content is an asset if you are following content trends by going beyond technical accuracies.
Sarah has written a white paper on the topic called “The Age of Accountability: Unifying Marketing and Technical Content with Adobe Experience Manager,” which you can access for more information. Technical documentation is all about scalability. Sarah concluded that content needs to be useful and consistent to the customer at an affordable rate.
Robert Anderson: “What Is DITA Open Toolkit, and What Should FrameMaker Authors Know About It?”
Robert D. Anderson from IBM has been working on DITA-OT almost since its inception.
What is DITA Open Toolkit?
Open Source software
It’s a program (technically a collection of programs) intended to read DITA and produce something else
It’s not part of DITA, but it’s there to make your DITA do something
DITA-OT is software that turns your stuff into something else (that’s not usually DITA)
It’s an implementation of DITA
Originally a developer works project at IBM
DITA-OT became open source when DITA became an open standard
Without tools, who would use DITA? If it’s not a shared standard, who would want DITA-OT?
DITA-OT was created to help get all DITA users off the ground more easily, including authors and vendors trying to support DITA.
DITA-OT core features:
Key resolution
Content references
Link and metadata management
Filtering
Branch filtering, and more
It also includes pre-processing steps like merging DITAVAL conditions, merging maps, retrieving link text, evaluating @copyto, adding ditaval flags, and more
How do I pre-process? You don’t – usually, it’s for those who want to super-customize things
From core to publish:
Ships formats out of the box: HTML5, PDF, XHTML, Eclipse Help, CHM, Troff, and a few others (RTF, ODT, Java Help). Some are add-ons that are no longer maintained
Plugins available for other formats
Styles are generic and meant for customization. Check out Jarno’s PDF generator to create a custom PDF.
More exciting stuff that DITA-OT can do:
Add preprocessing steps
Add or modify generated text
Custom HTML5 navigation
Switch or extend CSS
Use XSLT to override styles
Create entirely new output formats
Extensions usually stored in a plugin as with PDF plugin generator
FrameMaker does not use DITA-OT, as it can publish to PDF on its own. The more complicated your output needs get, the more likely you are to use the toolkit.
Should you care about toolkit updates?
If you’ve decided to use an open standard, if you or your tools or any partners use DITA-OT, or if you want the benefit of common, shared open source – then yes, update!
When working with business partners who use a custom HTML5 framework, use an elaborate PDF style with custom plugins, need to publish as Morse code, or need XML input into an automated system, then you need DITA-OT
Updates include things like:
Common preprocess fixes
Changes to how final rendered content is generated for all
Who governs DITA-OT?
Active participants – anybody can participate, the more you participate, the more influence you have
Most are from language, communication, and computer science backgrounds
With great open source comes great responsibility.
Most are volunteers or report to their own managers
If anyone CAN fix a bug or add a feature … then sometimes you have to add it on your own
Useful skills to have to use DITA-OT:
The best way to suggest changes?
GitHub pull request
GitHub issue tracker
Attend contributor calls
Ask your DITA vendor
Day 1 Conclusions
The day concluded with Stefan and Dustin thanking today’s presenters, and inviting everyone to return for tomorrow’s presentations.
See you tomorrow on Day 2 of Adobe DITA World 2017!
0.6.4-release
From OpenSim
r8959 | chi11ken | 2009-04-01 07:50:18 -0700 (Wed, 01 Apr 2009) | 1 line
Update svn properties.
r8958 | melanie | 2009-04-01 05:28:46 -0700 (Wed, 01 Apr 2009) | 4 lines
Add a "user" config option to the IRC module config. Like all other IRC config options, this has NO default, if you use the IRC module, you MUST add this setting to your ini file.
r8957 | melanie | 2009-04-01 05:13:42 -0700 (Wed, 01 Apr 2009) | 2 lines
Add a PIDFile in [Startup], which the PID will be written to
r8956 | afrisby | 2009-04-01 04:03:42 -0700 (Wed, 01 Apr 2009) | 4 lines
- MRM Adjustments
- Renamed 'Material' to PhysicsMaterial (Wood, Glass, Metal, etc.). May want to place in subclass with other physics specific properties. (We however need to support these features in ODE/etc first.)
- Renamed Faces to Materials. IObjectFace to IObjectMaterial - this is for clarity for those coming from a 3D Programming background (it also makes more sense if/when we support Meshes in core). Properties and members remain identical.
- Added XMLDoc comments to IObject to assist people writing MRMs in XMLDoc aware editors.
r8955 | afrisby | 2009-04-01 02:31:40 -0700 (Wed, 01 Apr 2009) | 5 lines
- MRM Adjustments
- Changes World.Objects from Array IObject[] to IObjectAccessor.
- Syntactically identical in most behaviour, however the indexer is now ranges not from 0..Count, but any valid internal LocalID. Additional indexers have been added for UUID.
- Example: for(int i=0;i<World.Objects.Count;i++) will not work any more, however foreach(World.Objects) will remain functional.
- This prevents us needing to create a list for each access to World.Objects which should [in theory] present a dramatic speed improvement to MRM scripts frequently accessing World.Objects.
r8954 | afrisby | 2009-03-31 23:55:39 -0700 (Tue, 31 Mar 2009) | 2 lines
- Adds World.Avatars[] to MRM Scripting. Contains an enumerable array containing IAvatar instances for each avatar in the region.
- Adds Test/TestModule.cs which demonstrates a very quick and simple MRM Test.
r8953 | lbsa71 | 2009-03-31 23:11:51 -0700 (Tue, 31 Mar 2009) | 2 lines
- Added NUnit tested utility function GetHashGuid() for future use.
- Did some aligning refactoring of the MD5 and SHA-1 functions.
r8952 | afrisby | 2009-03-31 22:58:07 -0700 (Tue, 31 Mar 2009) | 2 lines
- Removes some hard-coded magic numbers relating to RegionSize. We now use Constants.RegionSize as expected. (Working towards enlarged or smaller region sizes that aren't multiples of 256m)
- Adds minor functionality to MRM Scripting.
r8951 | melanie | 2009-03-31 18:41:40 -0700 (Tue, 31 Mar 2009) | 6 lines
Finally clean up the Scene.Permissions and permissions module. Permissions now use proper events and not delegate lists, which makes for much easier reading and much less work adding new methods. I finally found a way to raise events with return values without it becoming late bound.
r8950 | diva | 2009-03-31 18:18:21 -0700 (Tue, 31 Mar 2009) | 1 line
Added AllowLoginWithoutInventory to LoginService, to be overwritten in subclasses. Default is false. HGLoginAuthService sets it true. Better error handling dealing with inventory service faults.
r8949 | diva | 2009-03-31 15:28:56 -0700 (Tue, 31 Mar 2009) | 1 line
Replacing OpenMetaverse.StructuredData.dll again with one compiled under Windows. Apparently there's something wrong with that dll when it is compiled under mono.
r8948 | melanie | 2009-03-31 14:34:29 -0700 (Tue, 31 Mar 2009) | 2 lines
Adding the Length override to the KillPacket
r8947 | melanie | 2009-03-31 14:16:14 -0700 (Tue, 31 Mar 2009) | 2 lines
Committing LibOMV 0.6.1.1 (r2568) binaries. Sources are in -libs
r8946 | diva | 2009-03-31 11:50:40 -0700 (Tue, 31 Mar 2009) | 1 line
Replacing OpenMetaverse.StructuredData.dll with another one that jhurliman gave me. Hopefully this will ease the teleport and login problems reported today (Mantis #3366 #3373)
r8945 | diva | 2009-03-31 09:17:13 -0700 (Tue, 31 Mar 2009) | 1 line
Turning the wind module off by default.
r8944 | drscofield | 2009-03-31 05:45:34 -0700 (Tue, 31 Mar 2009) | 5 lines
From: Alan M Webb <alan_webb@us.ibm.com>
Add sanity check to fly-height calculation so that it does not attempt to retrieve information from non-existent regions.
r8943 | melanie | 2009-03-31 04:32:30 -0700 (Tue, 31 Mar 2009) | 4 lines
Thank you, StrawberryFride, for a patch that adds offline inventory functionality to the MSSQL module. Fixes Mantis #3370
r8942 | lbsa71 | 2009-03-30 22:51:28 -0700 (Mon, 30 Mar 2009) | 2 lines
- Refactored out and de-duplicated Base64ToString(string)
- Fixed minor typo
r8941 | chi11ken | 2009-03-30 22:47:53 -0700 (Mon, 30 Mar 2009) | 1 line
Thanks rtomita for a patch to add a handler for the RemoveInventoryObjects packet. (bug #3304)
r8940 | ckrinke | 2009-03-30 19:33:19 -0700 (Mon, 30 Mar 2009) | 9 lines
Thank you kindly, MCortez for a patch that: With some support from HomerH, this patch adds support for Wind Model plugins via the mono.Addin framework.
- Adds console & OSSL access to Wind Parameters
- Adds plug-in support for custom wind models
- Provides two example Wind Model plug-ins
Documentation for the wind module is temporarily located at [^] -- will move this documentation to [^] after the patch has been committed.
r8939 | chi11ken | 2009-03-30 19:00:33 -0700 (Mon, 30 Mar 2009) | 1 line
Update svn properties, add copyright header, formatting cleanup.
r8938 | melanie | 2009-03-30 14:57:18 -0700 (Mon, 30 Mar 2009) | 3 lines
Committing the changed binaries to get jhurliman's patch into our repo Fixes Mantis #3362
r8937 | diva | 2009-03-30 12:35:55 -0700 (Mon, 30 Mar 2009) | 1 line
Adds support at the inventory server for direct inventory manipulation from authorized clients using capabilities. Provided keys are verified with the designated authority. The added code is only executed for clients following HGLoginAuth procedure or similar. It does not remove any existing behavior.
r8936 | diva | 2009-03-30 12:26:25 -0700 (Mon, 30 Mar 2009) | 1 line
HGInventoryService now uses the actual authority portion of the user's key to verify the key.
r8935 | justincc | 2009-03-30 12:09:57 -0700 (Mon, 30 Mar 2009) | 2 lines
- Fix test breakage by always inserting a gods module when testing
r8934 | sdague | 2009-03-30 11:49:01 -0700 (Mon, 30 Mar 2009) | 2 lines
set MONO_THREADS_PER_CPU for the test runs, see if this makes the breaks happen less randomly.
r8933 | justincc | 2009-03-30 11:34:43 -0700 (Mon, 30 Mar 2009) | 2 lines
- minor: remove mono compiler warnings
r8932 | justincc | 2009-03-30 11:20:41 -0700 (Mon, 30 Mar 2009) | 2 lines
- refactor: Move god related methods in Scene out to a module
r8931 | diva | 2009-03-30 10:34:36 -0700 (Mon, 30 Mar 2009) | 1 line
Sigh. Manual data typing grief.
r8930 | teravus | 2009-03-30 07:13:56 -0700 (Mon, 30 Mar 2009) | 1 line
- Remove a debug line of localIDs
r8929 | teravus | 2009-03-30 07:10:24 -0700 (Mon, 30 Mar 2009) | 2 lines
- Fixing thread safety of avatar adding and removing from the Physics Scene in the ODEPlugin
- This may help one of the symptoms or mantis 3363 , however it probably won't solve the occasional NonFinite Avatar Position detected.. issues that some people see. That is probably an entirely different issue(NaN).
r8928 | melanie | 2009-03-30 04:51:34 -0700 (Mon, 30 Mar 2009) | 3 lines
Add PickInfoReply packet. Fixes Mantis #3324
r8927 | dahlia | 2009-03-29 16:59:14 -0700 (Sun, 29 Mar 2009) | 1 line
Thank you Flyte Xevious for Mantis #3361 - Implementation of llEdgeOfWorld
r8926 | diva | 2009-03-29 16:39:00 -0700 (Sun, 29 Mar 2009) | 1 line
Added Authorization client code that interfaces with HGLoginAuthService. Improved error handling in HGLoginAuthService. Instrumented HGInventoryService so that it can interface both with local and remote user and asset services.
r8925 | diva | 2009-03-29 15:04:45 -0700 (Sun, 29 Mar 2009) | 1 line
Another bit of refactoring to try to make sense of OpenSim.Framework.Communications. Everything that looks like a service, with service handlers, moved to .Services -- i.e. LoginService and Response, and GridInfoService. The rest of the changes were to adapt to the new locations of those files.
r8924 | diva | 2009-03-29 13:29:13 -0700 (Sun, 29 Mar 2009) | 1 line
Moved some files around, so that it's easier to share code between standalone and the grid services. Should not affect any functionality.
r8923 | melanie | 2009-03-29 08:24:50 -0700 (Sun, 29 Mar 2009) | 3 lines
Don't let a missing configuration cause a NRE Fixes Mantis #3355
r8922 | melanie | 2009-03-29 04:18:45 -0700 (Sun, 29 Mar 2009) | 3 lines
Add AcceptNotices member to GroupMembershipData and an overload to IGroupsModule interface
r8921 | melanie | 2009-03-28 23:14:54 -0700 (Sat, 28 Mar 2009) | 2 lines
Module interface change
r8920 | melanie | 2009-03-28 22:42:27 -0700 (Sat, 28 Mar 2009) | 4 lines
Change the client API to use GridInstantMessage for the "last mile" of IM sending. With this change, all methods that handle IM now use GridInstantMessage rather than individual parameters.
r8919 | melanie | 2009-03-28 17:48:34 -0700 (Sat, 28 Mar 2009) | 4 lines
Finish the offline IM module (still needs a server). Add rudimentary support for the mute list (no functionality yet, but allows the RetrieveInstantMessages event to fire now).
r8918 | diva | 2009-03-28 16:50:37 -0700 (Sat, 28 Mar 2009) | 1 line
Minor bug fix in UpdateItem (meta data).
r8917 | teravus | 2009-03-28 13:50:08 -0700 (Sat, 28 Mar 2009) | 3 lines
- Adding some heuristic error correction to the j2k decoder module to combat some of the situations that we see in mantis 3049 .
- This may help people on certain 64 bit systems where the end byte position of each layer data packet is incorrect but the start positions are correct.
- The console will still be extremely chatty with 'Inconsistent packet data in JPEG2000 stream:' messages, however.. if OpenSimulator was able to recover the data, it will say HURISTICS SUCCEEDED
r8916 | melanie | 2009-03-27 21:21:44 -0700 (Fri, 27 Mar 2009) | 2 lines
Add mute list request event and dummy response
r8915 | melanie | 2009-03-27 21:02:30 -0700 (Fri, 27 Mar 2009) | 3 lines
Fix the plumbing in the offline message module. No functionality yet.
r8914 | melanie | 2009-03-27 19:58:12 -0700 (Fri, 27 Mar 2009) | 2 lines
Add a module skeleton for offline IM storage. No functionality yet.
r8913 | teravus | 2009-03-27 19:41:51 -0700 (Fri, 27 Mar 2009) | 1 line
- Remove redundancies in ScenePresence
r8912 | teravus | 2009-03-27 18:40:33 -0700 (Fri, 27 Mar 2009) | 2 lines
- Adds AgentUUIDs into the CoarseLocationUpdate to improve compatibility with LibOMV based clients.
- Modifies the IClientAPI! So client stacks will need to be modified!
r8911 | diva | 2009-03-27 17:08:13 -0700 (Fri, 27 Mar 2009) | 1 line
Small bugs fixed related to ownership and permissions.
r8910 | melanie | 2009-03-27 15:47:41 -0700 (Fri, 27 Mar 2009) | 3 lines
Add the events needed for profiles. Fixes Mantis #3324
r8909 | teravus | 2009-03-27 15:24:51 -0700 (Fri, 27 Mar 2009) | 1 line
- Adding a few more requirements for *nix
r8908 | teravus | 2009-03-27 15:16:40 -0700 (Fri, 27 Mar 2009) | 2 lines
- Thanks arthursv for a patch in mantis 3336 that updated several portions of code to use the new libOMV.
- fixes mantis 3336
r8907 | teravus | 2009-03-27 15:13:09 -0700 (Fri, 27 Mar 2009) | 3 lines
- This updates LibOMV to the current release 0.6.0 on March 19 2009
- Important: HttpServer.dll was changed to HttpServer_OpenSim.dll so that the HttpServer references do not conflict if you've copied the OpenMetaverse.Http.dll and requirements to the OpenSimulator bin folder.
This means that if you reference HttpServer.dll in any projects, you will need to change the reference to HttpServer_OpenSim.dll. It still uses the Same HttpServer namespace though.
r8906 | justincc | 2009-03-27 13:41:35 -0700 (Fri, 27 Mar 2009) | 2 lines
- refactor: call some EventManager triggers directly rather than through scene
r8905 | diva | 2009-03-27 13:18:55 -0700 (Fri, 27 Mar 2009) | 1 line
Moved a method GetDefaultVisualParameters from Scene to AvatarAppearance, where it belongs. Better error handling in ScenePresence.CopyFrom.
r8904 | justincc | 2009-03-27 13:03:20 -0700 (Fri, 27 Mar 2009) | 2 lines
- minor: remove one mono compiler warning
r8903 | justincc | 2009-03-27 12:45:07 -0700 (Fri, 27 Mar 2009) | 3 lines
- Implement * wildcard in save iar requests
- not yet ready for use
r8902 | justincc | 2009-03-27 11:53:11 -0700 (Fri, 27 Mar 2009) | 3 lines
- Fix single item iar saving
- Not yet ready for use
r8901 | melanie | 2009-03-27 11:51:45 -0700 (Fri, 27 Mar 2009) | 2 lines
Remove a hardcoded flow/dependency on the money module from LLCLientView
r8900 | justincc | 2009-03-27 11:13:34 -0700 (Fri, 27 Mar 2009) | 2 lines
- minor: move RegionSettingsSerializer into OpenSim.Framework.Serialization
r8899 | justincc | 2009-03-27 10:19:58 -0700 (Fri, 27 Mar 2009) | 2 lines
- Also temporarily disable T032_CrossAttachments() since this relies on the execution of T021_TestCrossToNewRegion()
r8898 | justincc | 2009-03-27 10:17:12 -0700 (Fri, 27 Mar 2009) | 4 lines
- Apply
- This puts in example [Wind] settings into OpenSim.ini.example to match the patch which introduced those settings from last week
- Thanks maimedleech
r8897 | justincc | 2009-03-27 10:01:07 -0700 (Fri, 27 Mar 2009) | 3 lines
- Temporarily disable ScenePresenceTests.T021_TestCrossToNewRegion() as this has both WaitOnes() which don't time out and tight loops
- Going to see if this stops the freeze failures where (though there may also be a separate occasional failure in the save oar test)
r8896 | justincc | 2009-03-27 09:33:15 -0700 (Fri, 27 Mar 2009) | 4 lines
- For each test in OpenSim.Region.Framework.Scenes.Tests, tell the console when the test starts
- This is to help identify which test is freezing, since all the tests in the previous dll (coremodules) succeed
- Unfortunately they are not executed in the same order in which the results are listed in Bamboo
r8895 | diva | 2009-03-27 09:23:52 -0700 (Fri, 27 Mar 2009) | 1 line
Added the hg login procedure to the user server.
r8894 | diva | 2009-03-27 09:13:25 -0700 (Fri, 27 Mar 2009) | 1 line
svn:eol-style property set.
r8893 | diva | 2009-03-27 08:11:21 -0700 (Fri, 27 Mar 2009) | 1 line
svn:eol-style property set.
r8892 | drscofield | 2009-03-27 05:49:27 -0700 (Fri, 27 Mar 2009) | 6 lines
From: Alan Webb <alan_webb@us.ibm.com>
Fixed problem with REST services caused by changes to the OpenSimulator core code base - the comms manager had been 'modularized'.
Also added additional debugging to RemoteAdmin interface.
r8891 | diva | 2009-03-26 15:21:39 -0700 (Thu, 26 Mar 2009) | 1 line
Forgot to comment an unnecessary log message on my last commit.
r8890 | diva | 2009-03-26 15:17:57 -0700 (Thu, 26 Mar 2009) | 1 line
Notecard updates bypassing the regions. (HGStandalone only)
r8889 | justincc | 2009-03-26 13:34:02 -0700 (Thu, 26 Mar 2009) | 3 lines
- correct iar root folder location for saving of individual items
- however, rest of the path components are still currently wrong so this is broke
r8888 | justincc | 2009-03-26 13:15:36 -0700 (Thu, 26 Mar 2009) | 2 lines
- Fix build break - went a const or two too far
r8887 | justincc | 2009-03-26 13:09:12 -0700 (Thu, 26 Mar 2009) | 3 lines
- minor: change some static readonlys to consts
- adjust user profile iar saving path
r8886 | dahlia | 2009-03-26 11:12:10 -0700 (Thu, 26 Mar 2009) | 1 line
add x-axis mirror capability to sculpted prim mesh - addresses Mantis #3342
r8885 | justincc | 2009-03-26 11:04:35 -0700 (Thu, 26 Mar 2009) | 2 lines
- Ooops, wasn't that - it was the lack of a Types reference instead
r8884 | justincc | 2009-03-26 11:02:50 -0700 (Thu, 26 Mar 2009) | 2 lines
- Add missing '.dll' to Serialization OpenMetaverse use to fix windows build break
r8883 | justincc | 2009-03-26 10:43:05 -0700 (Thu, 26 Mar 2009) | 4 lines
- Apply
- Removes long unused -useexecutepath switch
- Thanks coyled
r8882 | justincc | 2009-03-26 10:42:02 -0700 (Thu, 26 Mar 2009) | 4 lines
- Apply
- Reimplements "terrain rescale <min> <max>" command which rescales current terrain to be inbetween min and max
- Thanks jonc
r8881 | justincc | 2009-03-26 10:30:43 -0700 (Thu, 26 Mar 2009) | 2 lines
- Fix build break from last commit
r8880 | justincc | 2009-03-26 10:25:12 -0700 (Thu, 26 Mar 2009) | 2 lines
- iars: Serialize information about item creators to archive
r8879 | diva | 2009-03-26 09:05:00 -0700 (Thu, 26 Mar 2009) | 1 line
Small refactoring in Caps, no functional changes.
r8878 | melanie | 2009-03-26 08:06:20 -0700 (Thu, 26 Mar 2009) | 3 lines
Read the .map files in on sim startup. Also clean them up when an assembly is deleted.
r8877 | melanie | 2009-03-26 07:49:39 -0700 (Thu, 26 Mar 2009) | 5 lines
Avoid preprocessing scripts on region restart just to generate the line number map. Instead, write the map to a file for later use. That is not yet used, so currently runtime errors after a sim restart will have wrong line numbers
r8876 | melanie | 2009-03-26 07:28:00 -0700 (Thu, 26 Mar 2009) | 4 lines
Avoid writing script state to the filesystem if the state has not changed. Remove the unnecessary double check that was only used to provide a meaningless warning message for a corner case.
r8875 | drscofield | 2009-03-26 05:08:18 -0700 (Thu, 26 Mar 2009) | 5 lines
- adding osGetAgents() which returns a list of all avatars in the region in which the script is running.
- found a bag of space characters under my desk, thought i'd donate them to the JSON OSSL function (aka clean up)
r8874 | lbsa71 | 2009-03-25 23:56:10 -0700 (Wed, 25 Mar 2009) | 1 line
- Minor fixes, inverted an if for readability and introduced a virtual pre-process step on the asset cache
r8873 | diva | 2009-03-25 21:14:33 -0700 (Wed, 25 Mar 2009) | 1 line
One more -- CopyItem.
r8872 | diva | 2009-03-25 20:45:49 -0700 (Wed, 25 Mar 2009) | 1 line
Half-way through supporting inventory access from outside the regions -- HG standalones only, for now.
r8871 | dahlia | 2009-03-25 20:10:30 -0700 (Wed, 25 Mar 2009) | 1 line
make some arrays static to prevent excessive re-initialization - suggested by jhurliman
r8870 | melanie | 2009-03-25 18:02:19 -0700 (Wed, 25 Mar 2009) | 4 lines
Make the error messages passed to RegionReady more descriptive Patch by antont, thank you. Fixes Mantis #3338
r8869 | sdague | 2009-03-25 13:15:46 -0700 (Wed, 25 Mar 2009) | 7 lines
- Appearance patches suite: These patches are applied to allow libomv bots to wear outfits in the future.
This functionality will be upstreamed later.
- Fixed call of new AvatarAppearance without arguments, which caused bots look like clouds of gas
- Added a SendAvatarData in ScenePresence.SetAppearance, which is expected after SetAppearance is run
- Fixed AssetXferUploader: CallbackID wasn't being passed on on multiple packets asset uploads
- Set VisualParams in AvatarAppearance to stop the alien looking bot from spawning and now looks a little better.
- TODO: Set better VisualParams value then 150 to everything
r8868 | justincc | 2009-03-25 12:54:07 -0700 (Wed, 25 Mar 2009) | 2 lines
iar: centralize user uuid gathering
r8867 | lbsa71 | 2009-03-25 12:30:36 -0700 (Wed, 25 Mar 2009) | 1 line
- Changed a recursive BeginRobustReceive loop to a flat while loop to avoid lethal stack overflows.
r8866 | justincc | 2009-03-25 12:21:28 -0700 (Wed, 25 Mar 2009) | 3 lines
- minor: Adjust exception catching on load/save xml[2]/oar.
- Allow non FileNotFoundExceptions to propagate rather than post a misleading error message
r8865 | justincc | 2009-03-25 12:14:36 -0700 (Wed, 25 Mar 2009) | 3 lines
- minor: spit out creator name on save iar
- not yet ready for use
r8864 | drscofield | 2009-03-25 11:48:30 -0700 (Wed, 25 Mar 2009) | 1 line
adding presence.ControllingClient.Kick(msg) to the brew.
r8863 | drscofield | 2009-03-25 11:04:33 -0700 (Wed, 25 Mar 2009) | 2 lines
enhances the console command "kick user" with an optional alert message which will be dialog-ed to the user just before being kicked.
r8862 | melanie | 2009-03-25 04:05:01 -0700 (Wed, 25 Mar 2009) | 3 lines
Thank you, dslake, for a patch that fixes passing the start param to scripts Fixes Mantis #3330
r8861 | drscofield | 2009-03-25 00:36:56 -0700 (Wed, 25 Mar 2009) | 1 line
cleanup
r8860 | diva | 2009-03-24 22:21:47 -0700 (Tue, 24 Mar 2009) | 1 line
HGStandaloneInventoryService now serves inventory assets. No need for clients to have direct access to the asset service.
r8859 | melanie | 2009-03-24 15:12:48 -0700 (Tue, 24 Mar 2009) | 3 lines
Change llGetOwnerKey to use another overload of GetSceneObject. Fixes Mantis #3326
r8858 | justincc | 2009-03-24 14:05:20 -0700 (Tue, 24 Mar 2009) | 4 lines
- minor: remove load oar logging I accidentally left in a few commits ago
- reduce noisiness of uuid gatherer
- stop bothering to pointless complain about directory tar entries when loading an oar
r8857 | justincc | 2009-03-24 13:57:02 -0700 (Tue, 24 Mar 2009) | 2 lines
- minor: remove a couple more compiler warnings
r8856 | justincc | 2009-03-24 13:48:27 -0700 (Tue, 24 Mar 2009) | 4 lines
- Use memory more efficiently when loading oars
- This change starts the script immediately after an object is loaded, rather than waiting till they are all loaded
- This should be okay, but please report any new errors
r8855 | justincc | 2009-03-24 13:36:32 -0700 (Tue, 24 Mar 2009) | 2 lines
- minor: remove mono compiler warnings
r8854 | justincc | 2009-03-24 12:04:28 -0700 (Tue, 24 Mar 2009) | 2 lines
- Fix edit scale command - was looking for one too few arguments
r8853 | diva | 2009-03-24 11:56:32 -0700 (Tue, 24 Mar 2009) | 1 line
Added the login region's http to the login response.
r8852 | melanie | 2009-03-24 05:18:31 -0700 (Tue, 24 Mar 2009) | 4 lines
Thank you, dslake, for a patch that speeds up the Delete Old Files option in the compiler. Committed with changes. Fixes Mantis #3325
r8851 | drscofield | 2009-03-24 01:21:50 -0700 (Tue, 24 Mar 2009) | 5 lines
From: Alan Webb <alan_webb@us.ibm.com>
Changes to AssetCache and DynamicTextureModule to eliminate opportunities for lost texture updates.
r8850 | diva | 2009-03-23 19:28:17 -0700 (Mon, 23 Mar 2009) | 1 line
Preparing the loginauth service for gridmode logins.
r8849 | diva | 2009-03-22 19:37:19 -0700 (Sun, 22 Mar 2009) | 1 line
Root agent retrieval via http/REST. This is a pull, the caller gets the agent. This is not used by the regions yet, but it may be a better alternative to transfer agents even when that is done by the regions. The data is still trivial; soon it will have attachments, scripts and script state. Also, authorization tokens still to come. Serialization using OSD/json, as the other methods.
r8848 | melanie | 2009-03-22 19:02:12 -0700 (Sun, 22 Mar 2009) | 4 lines
Finish folder gives. With this commit, single item and folder gives now work across regions and also to offline avatars. Scripted gives are not yet tested and may not work.
r8847 | melanie | 2009-03-22 17:11:34 -0700 (Sun, 22 Mar 2009) | 2 lines
Committing partial work on passing folders across instances. This may crash.
r8846 | melanie | 2009-03-22 13:05:11 -0700 (Sun, 22 Mar 2009) | 3 lines
Send proper creation date on item gives, so objects will appear at the top of "Objects", not at the bottom
r8845 | melanie | 2009-03-22 11:35:16 -0700 (Sun, 22 Mar 2009) | 2 lines
Make offline gives work in SQLite standalones
r8844 | melanie | 2009-03-22 11:25:04 -0700 (Sun, 22 Mar 2009) | 2 lines
Make single item inventory gives work across regions
r8843 | melanie | 2009-03-22 09:12:48 -0700 (Sun, 22 Mar 2009) | 3 lines
MYSQL Only: Make items given while offline appear in inventory without the need to clear cache.
r8842 | melanie | 2009-03-22 08:42:22 -0700 (Sun, 22 Mar 2009) | 3 lines
Add QueryItem method to secure inventory and HG inventory, change method sig to provide additional information the HG needs.
r8841 | melanie | 2009-03-22 08:19:43 -0700 (Sun, 22 Mar 2009) | 2 lines
Fix a null ref in the inventory give module
r8840 | melanie | 2009-03-22 07:32:15 -0700 (Sun, 22 Mar 2009) | 3 lines
Cause the inventory give module to be more selective and not attempt to deliver other modules' IM types
r8839 | melanie | 2009-03-22 04:57:00 -0700 (Sun, 22 Mar 2009) | 3 lines
Thank you, dslake, for a patch that fixes XEngine linemap handling. Fixes Mantis #3321
r8838 | diva | 2009-03-21 23:31:32 -0700 (Sat, 21 Mar 2009) | 1 line
Explicit tests for local regions.
r8837 | diva | 2009-03-21 21:39:16 -0700 (Sat, 21 Mar 2009) | 1 line
Moving the LoginAuth service up, so that it can be shared among standalones and the User Server.
r8836 | diva | 2009-03-21 13:16:35 -0700 (Sat, 21 Mar 2009) | 3 lines
Initial support for authentication/authorization keys in UserManagerBase, and use of it in HGStandaloneLoginService (producer of initial key for user, and of subsequent keys) and HGStandaloneInventoryService (consumer of a key). Keys are of the form http://<authority>/<random uuid> and they are sent over http header "authorization".
r8835 | diva | 2009-03-21 12:37:35 -0700 (Sat, 21 Mar 2009) | 1 line
Minor changes in names inside.
r8834 | melanie | 2009-03-21 11:14:06 -0700 (Sat, 21 Mar 2009) | 3 lines
Add code to the inventory transfer module to use the new DB functionality. Not tested!
r8833 | diva | 2009-03-21 11:03:44 -0700 (Sat, 21 Mar 2009) | 1 line
Moving HGStandaloneAssetService to a new place, and giving it a more generic name. MXP is going to use it too.
r8832 | melanie | 2009-03-21 10:46:58 -0700 (Sat, 21 Mar 2009) | 4 lines
Add a QueryItem method to the inventory subsystem. Currently implemented for MySQL only, stubs for the others. This allows updating the cache with a single item from the database.
r8831 | idb | 2009-03-21 04:42:31 -0700 (Sat, 21 Mar 2009) | 2 lines
Move a check for null PhysActor in applyImpulse so that attachments can move avatars. Fixes Mantis #3160
r8830 | teravus | 2009-03-20 16:15:16 -0700 (Fri, 20 Mar 2009) | 1 line
- Finishing up the last commit by adding ISunModule
r8829 | melanie | 2009-03-20 15:42:21 -0700 (Fri, 20 Mar 2009) | 3 lines
Thank you, mcortez, for patch to add functionality to the sun module. Fixes Mantis #3313
r8828 | lbsa71 | 2009-03-20 12:58:00 -0700 (Fri, 20 Mar 2009) | 1 line
- Ignored some gens
r8827 | lbsa71 | 2009-03-20 10:25:12 -0700 (Fri, 20 Mar 2009) | 1 line
- Normalized and pulled GetInventorySkeleton up.
r8826 | idb | 2009-03-20 08:59:11 -0700 (Fri, 20 Mar 2009) | 2 lines
Ensure the remembered velocity is zero when physical is turned off on a prim. Without this the velocity gets sent to the client and the prim appears to move. Fixes Mantis #3303
r8825 | melanie | 2009-03-20 06:57:22 -0700 (Fri, 20 Mar 2009) | 2 lines
Change DropObject to public. Fixes Mantis #3314
r8824 | lbsa71 | 2009-03-19 23:49:12 -0700 (Thu, 19 Mar 2009) | 3 lines
- De-duplicated login service some more
* Normalized m_inventoryService
* Pulled AddActiveGestures up
r8823 | diva | 2009-03-19 14:43:35 -0700 (Thu, 19 Mar 2009) | 1 line
Moving GetInventoryItem up to InventoryServiceBase, since this seems like a pretty fundamental function.
r8822 | justincc | 2009-03-19 14:16:02 -0700 (Thu, 19 Mar 2009) | 4 lines
- Remove compiler warnings
- These have actually been removed from HGHyperLink.TryUnlinkRegion, because some parameters were parsed but never used.
- This might be a situation where the warnings have shown up an oversight
r8821 | justincc | 2009-03-19 12:21:17 -0700 (Thu, 19 Mar 2009) | 2 lines
- Group OpenSim.Framework.Servers interfaces together
r8820 | justincc | 2009-03-19 11:11:44 -0700 (Thu, 19 Mar 2009) | 2 lines
- refactor: Create IHttpServer interface instead of accessing BaseHttpServer via CommunicationsManager directly
r8819 | justincc | 2009-03-19 10:07:00 -0700 (Thu, 19 Mar 2009) | 3 lines
- Lock http handlers dictionary in other places as well to avoid race conditions
- No adverse effects on a quick multi-machine grid test
r8818 | justincc | 2009-03-19 09:51:21 -0700 (Thu, 19 Mar 2009) | 2 lines
- Add necessary locking to BaseHttpServer.RemoveHTTPHandler()
r8817 | justincc | 2009-03-19 09:41:23 -0700 (Thu, 19 Mar 2009) | 2 lines
- Add documentation to BaseHttpServer.AddHTTPHandler()
r8816 | drscofield | 2009-03-19 01:47:05 -0700 (Thu, 19 Mar 2009) | 2 lines
reformatting README (just noticed that line was a bit on the long side).
r8815 | drscofield | 2009-03-19 01:42:59 -0700 (Thu, 19 Mar 2009) | 1 line
adding missing ChatSessionRequest voice capability for direct AV-AV calls.
r8814 | dahlia | 2009-03-19 00:06:30 -0700 (Thu, 19 Mar 2009) | 1 line
Thanks to mpallari for Mantis #3310: Make EventQueueGetModule more inheritable
r8813 | diva | 2009-03-18 20:33:20 -0700 (Wed, 18 Mar 2009) | 1 line
Making a couple of methods public.
r8812 | justincc | 2009-03-18 13:24:53 -0700 (Wed, 18 Mar 2009) | 4 lines
- Apply
- Store script timers in a dictionary rather than a list to make unset much more efficient
- Thanks dslake
r8811 | diva | 2009-03-18 09:37:26 -0700 (Wed, 18 Mar 2009) | 1 line
Adds support for unlink-region command in hypergrid.
r8810 | melanie | 2009-03-17 16:52:30 -0700 (Tue, 17 Mar 2009) | 2 lines
Add an event to process undelivered IMs
r8809 | justincc | 2009-03-17 14:20:58 -0700 (Tue, 17 Mar 2009) | 2 lines
- minor: remove compiler warning
r8807 | mw | 2009-03-17 11:18:24 -0700 (Tue, 17 Mar 2009) | 1 line
Fixed the looping on llParcelMediaCommandList; now PARCEL_MEDIA_COMMAND_PLAY will make the media play only once like it's meant to, and PARCEL_MEDIA_COMMAND_LOOP can be used to make it loop.
r8806 | justincc | 2009-03-17 11:02:11 -0700 (Tue, 17 Mar 2009) | 2 lines
- Remove config preview 2.
r8804 | drscofield | 2009-03-17 00:03:53 -0700 (Tue, 17 Mar 2009) | 14 lines
From: Christopher Yeoh <yeohc@au1.ibm.com>
Attached is a patch which enables, through an OpenSim.ini option, the ability to read long notecard lines. Currently, although the data is read from the notecard, it is truncated at 255 characters (same as for the LL servers). This patch allows the setting of that limit to a different value.
; Maximum length of notecard line read
; Increasing this to large values potentially opens
; up the system to malicious scripters
; NotecardLineReadCharsMax = 255
this allows for save/restore functionality using notecards without having to worry about very short line length limits.
r8803 | homerh | 2009-03-16 14:41:51 -0700 (Mon, 16 Mar 2009) | 2 lines
Mantis#3306: Thanks tlaukkan for a patch that adds primitive hierarchies support to MXP and improves client disconnect handling.
r8802 | mikem | 2009-03-15 17:43:26 -0700 (Sun, 15 Mar 2009) | 1 line
Remove OpenSim/Framework/Archive folder
r8801 | mikem | 2009-03-15 17:12:25 -0700 (Sun, 15 Mar 2009) | 3 lines
Rename OpenSim.Framework.Archive to OpenSim.Framework.Serialization
Update using statements and prebuild.xml. Also trim trailing whitespace.
r8800 | homerh | 2009-03-15 14:34:28 -0700 (Sun, 15 Mar 2009) | 4 lines
This patch improves MXP connect and disconnect functionality.
- Avatars are now properly on top of terrain.
- ScenePresence is now removed from Scene only once.
Fixes Mantis #3302. Thanks tlaukkan.
r8799 | homerh | 2009-03-15 14:01:04 -0700 (Sun, 15 Mar 2009) | 3 lines
regionInfo isn't defined here yet, which leads to a NRE. Grid-server provided us with the data, so let's use it for now. Hopefully fixes Mantis #3297.
r8798 | ckrinke | 2009-03-15 13:22:07 -0700 (Sun, 15 Mar 2009) | 4 lines
Fixes Mantis #3301. Thank you kindly, MaimedLeech, for a patch that allows wind to be enabled/disabled, and wind strength set, from the ini file
r8797 | ckrinke | 2009-03-15 12:45:42 -0700 (Sun, 15 Mar 2009) | 6 lines
Fixes Mantis #3294. Thank you kindly, Godfrey, for a patch that: Attached is a patch which provides osAvatarPlayAnimation() the ability to also trigger animations contained within the same prim as the script, as llStartAnimation() does. (It also modifies osAvatarStopAnimation(), otherwise the script wouldn't be able to stop animations it had started.)
r8796 | ckrinke | 2009-03-15 12:39:43 -0700 (Sun, 15 Mar 2009) | 3 lines
Fixes Mantis #3289. Thank you kindly, Ewe Loon, for a patch that fixes sporadic errors in "Dictionary<InstanceData, DetectParams[]>" causing total script failure
r8795 | diva | 2009-03-15 12:21:43 -0700 (Sun, 15 Mar 2009) | 1 line
Changing a few methods to public. This is the collection of methods that will be moved to a library somewhere else.
r8794 | dahlia | 2009-03-15 09:17:01 -0700 (Sun, 15 Mar 2009) | 2 lines
Thanks Tommil for a patch which added support for creating user accounts automatically in local sandbox if accounts authenticate is set off and connecting with MXP protocol. Mantis #3300
r8793 | dahlia | 2009-03-15 02:05:35 -0700 (Sun, 15 Mar 2009) | 1 line
fixed propagation of normalized sculpt mesh vertex normals
r8792 | chi11ken | 2009-03-14 18:22:42 -0700 (Sat, 14 Mar 2009) | 1 line
Update svn properties.
r8791 | chi11ken | 2009-03-14 15:55:17 -0700 (Sat, 14 Mar 2009) | 1 line
Thanks rtomita for a patch to add handlers for prim scale updates from libomv-based clients. (#3291)
r8790 | melanie | 2009-03-13 16:45:02 -0700 (Fri, 13 Mar 2009) | 4 lines
Thank you, mcortez, for a patch that fixes a number of long standing issues with the sun module. Fixes Mantis #3295
r8789 | justincc | 2009-03-13 13:46:53 -0700 (Fri, 13 Mar 2009) | 2 lines
- Support loading empty folders in an iar
r8788 | justincc | 2009-03-13 11:36:24 -0700 (Fri, 13 Mar 2009) | 3 lines
- Remove asset cache size and texture stat reports from ASSET STATS since these are now inaccurate
- Correct count of assets in cache
r8787 | justincc | 2009-03-13 10:34:11 -0700 (Fri, 13 Mar 2009) | 2 lines
- Config preview round 2
r8786 | mikem | 2009-03-12 22:58:32 -0700 (Thu, 12 Mar 2009) | 1 line
Mark AssetBase.Metadata with [XmlIgnore]
r8785 | justincc | 2009-03-12 13:38:28 -0700 (Thu, 12 Mar 2009) | 2 lines
- Don't fail the client login if there are no OnClientConnect listeners
r8784 | justincc | 2009-03-12 13:37:15 -0700 (Thu, 12 Mar 2009) | 2 lines
- minor: Label the heartbeat thread with the region it's beating for
r8783 | drscofield | 2009-03-12 11:51:28 -0700 (Thu, 12 Mar 2009) | 5 lines
From: Christopher Yeoh <yeohc@au1.ibm.com>
Patch to RegionReady which adds a field which adds to the message whether the region is ready due to a server startup or due to an oar file loading.
r8782 | justincc | 2009-03-12 11:13:51 -0700 (Thu, 12 Mar 2009) | 3 lines
- Move SceneObject tests into their proper namespace
- Add some more debug code to narrow down where the tests are freezing
r8781 | drscofield | 2009-03-12 11:00:18 -0700 (Thu, 12 Mar 2009) | 3 lines
- renaming OpenSim.ini.example to OpenSim.ini.example.preview as the config reorg is still under discussion
- re-installing complete OpenSim.ini.example for the time being
r8780 | drscofield | 2009-03-12 09:50:44 -0700 (Thu, 12 Mar 2009) | 4 lines
merging XmlRpcCreateUserMethod and XmlRpcCreateUserMethodEmail, adding optional about_virtual_world and about_real_world parameters to XmlRpcUpdateUserAccountMethod to allow setting of "About" and "First Life" tab in avatar profile.
r8779 | chi11ken | 2009-03-12 08:34:25 -0700 (Thu, 12 Mar 2009) | 1 line
Add rtomita to Contributors.txt.
r8778 | chi11ken | 2009-03-12 08:28:30 -0700 (Thu, 12 Mar 2009) | 1 line
Update svn properties, formatting cleanup.
r8777 | lbsa71 | 2009-03-12 04:06:41 -0700 (Thu, 12 Mar 2009) | 1 line
- Ignored some gens
r8776 | lbsa71 | 2009-03-12 03:50:59 -0700 (Thu, 12 Mar 2009) | 5 lines
- Another stab at refactoring up the CustomiseResponse function. Two fixes:
* Sometimes, null is a valid return value to indicate 'none found'. doh.
* Sometimes, the Grid server does not send simURI - this you need to reconstruct yourself. Euw.
(I believe) this solves mantis issue #3287
r8775 | mikem | 2009-03-11 23:04:17 -0700 (Wed, 11 Mar 2009) | 8 lines
Move ArchiveConstants to OpenSim.Framework.Archive
- move a couple constants from InventoryArchiveConstants to ArchiveConstants, now only one of these is needed
- change InventoryArchiveConstants references to ArchiveConstants
- remove InventoryArchive AssetInventoryServer plugin dependency on OpenSim.Region.CoreModules
- trim trailing whitespace
r8774 | mikem | 2009-03-11 23:03:59 -0700 (Wed, 11 Mar 2009) | 5 lines
Moving TarArchive to OpenSim.Framework.Archive
We now build OpenSim.Framework.Archive.dll which aims to contain code used for archiving various things in OpenSimulator. Also remove trailing whitespace.
r8773 | diva | 2009-03-11 18:43:22 -0700 (Wed, 11 Mar 2009) | 1 line
Minor bug fix. Thanks daTwitch.
r8772 | chi11ken | 2009-03-11 18:14:54 -0700 (Wed, 11 Mar 2009) | 1 line
Update svn properties, minor formatting cleanup.
r8771 | justincc | 2009-03-11 14:30:30 -0700 (Wed, 11 Mar 2009) | 2 lines
- Preliminary preview of a split of OpenSim.ini.example into separate .ini.example files in a config/ directory
r8770 | lbsa71 | 2009-03-11 12:19:48 -0700 (Wed, 11 Mar 2009) | 1 line
- Reverted r8750 to do another round of debugging on mantis #3287
r8769 | chi11ken | 2009-03-11 11:46:52 -0700 (Wed, 11 Mar 2009) | 1 line
Thanks rtomita for a patch to fix inventory listings for clients using libomv. (#3285)
r8768 | justincc | 2009-03-11 11:21:47 -0700 (Wed, 11 Mar 2009) | 2 lines
- fix build break
r8767 | justincc | 2009-03-11 11:02:22 -0700 (Wed, 11 Mar 2009) | 4 lines
- Make all coded defaults match settings in OpenSim.ini.example
- In most cases, the setting in OpenSim.ini.example is taken as the canonical one since this is the file virtually everyone ends up using
- OpenSimulator will start up with a blank OpenSim.ini, in which case sqlite is the default database (as before)
r8766 | teravus | 2009-03-11 06:38:36 -0700 (Wed, 11 Mar 2009) | 1 line
- Fix silly windows prebuild borkage. To use System.Xml, the project must have it as a reference in prebuild.xml
r8764 | dahlia | 2009-03-11 02:31:02 -0700 (Wed, 11 Mar 2009) | 1 line
update some ini defaults in code - all defaults from beginning of OpenSim.ini.example thru DefaultScriptEngine = "XEngine"
r8763 | drscofield | 2009-03-11 02:07:50 -0700 (Wed, 11 Mar 2009) | 5 lines
From: Alan M Webb <alan_webb@us.ibm.com>
This fixes *another* sync error in a list/dictionary iterator. This time in WorldComm. I'm beginning to think something is going on...
r8762 | mikem | 2009-03-11 00:38:35 -0700 (Wed, 11 Mar 2009) | 12 lines
Adding AssetInventory InventoryArchive plugin
This plugin exposes an HTTP handler on the AssetInventoryServer which serves a gzipped tar file containing the contents of a user's inventory. The assets referenced by the inventory are not yet archived. At the moment only export functionality is implemented, restore functionality is missing.
prebuild.xml had to be shuffled around a bit in order for the plugin to build, as it has a dependency on OpenSim.Region.CoreModules.
Also, close a MemoryStream in a few places.
r8761 | dahlia | 2009-03-10 21:13:35 -0700 (Tue, 10 Mar 2009) | 1 line
add a taint to SOP.UpdateShape() - addresses Mantis #3277
r8760 | mikem | 2009-03-10 17:33:34 -0700 (Tue, 10 Mar 2009) | 8 lines
Remove chained tests in BasicGridTest.cs.
It's good practice to isolate unit tests so their outcome (pass/fail) does not depend on whether another test has been run/passed/failed. A method is used to populate the DB independently for each test, and a TearDown method cleans up the database after each test.
Also adding extra comment in C-style comment test.
r8759 | lbsa71 | 2009-03-10 13:42:44 -0700 (Tue, 10 Mar 2009) | 1 line
- Cleanup and CCC (Code Convention Conformance)
r8757 | lbsa71 | 2009-03-10 13:27:41 -0700 (Tue, 10 Mar 2009) | 1 line
- Cleanup and CCC (Code Convention Conformance)
r8756 | lbsa71 | 2009-03-10 13:06:25 -0700 (Tue, 10 Mar 2009) | 1 line
- Cleanup and CCC (Code Convention Conformance)
r8755 | lbsa71 | 2009-03-10 12:55:59 -0700 (Tue, 10 Mar 2009) | 1 line
- Cleanup and CCC (Code Convention Conformance)
r8754 | justincc | 2009-03-10 11:22:46 -0700 (Tue, 10 Mar 2009) | 2 lines
- minor: reduce some code duplication in BaseHttpServer
r8753 | justincc | 2009-03-10 10:57:04 -0700 (Tue, 10 Mar 2009) | 2 lines
- Enable test logging for TestSaveOarV0p2 to capture more information the next time this hiccups
r8752 | drscofield | 2009-03-10 08:54:00 -0700 (Tue, 10 Mar 2009) | 3 lines
From: Alan M Webb <alan_webb@us.ibm.com>
Fix a null reference loophole in ScenePresence.
r8751 | drscofield | 2009-03-10 08:51:17 -0700 (Tue, 10 Mar 2009) | 2 lines
fixing out-of-sync error in BaseHttpServer
r8750 | lbsa71 | 2009-03-10 05:11:19 -0700 (Tue, 10 Mar 2009) | 6 lines
*** POTENTIAL BREAKAGE ***
- Finally got to the point where I could pull up the CustomiseResponse function. Major de-duplication.
- Introduced FromRegionInfo on RegionProfileData
- This revision needs both grid and standalone testing galore.
Work in progress!
r8749 | lbsa71 | 2009-03-10 04:47:34 -0700 (Tue, 10 Mar 2009) | 1 line
- Re-aligned CustomiseResponse function for imminent up-pulling
r8748 | lbsa71 | 2009-03-10 02:20:27 -0700 (Tue, 10 Mar 2009) | 1 line
- Removed unused and uncommented file
r8747 | lbsa71 | 2009-03-10 02:05:06 -0700 (Tue, 10 Mar 2009) | 3 lines
- Refactored out Create() methods to ensure proper transformation between RegionProfileData and RegionInfo
- Created ToRegionInfo method, still not using it pending peer review.
- This is a preparatory commit for a subsequent login service refactoring.
r8746 | drscofield | 2009-03-09 23:14:29 -0700 (Mon, 09 Mar 2009) | 4 lines
From: Alan Webb <alan_webb@us.ibm.com>
This commit fixes the attachment position problem described in OpenSimulator Mantis 2841 (and a couple of duplicate tickets).
r8745 | drscofield | 2009-03-09 23:04:51 -0700 (Mon, 09 Mar 2009) | 5 lines
From: Alan Webb <alan_webb@us.ibm.com>
Fix a rather significant error in the UpdateUserAccountMethod. The request was failing to set user location and orientation correctly.
r8744 | chi11ken | 2009-03-09 17:03:26 -0700 (Mon, 09 Mar 2009) | 1 line
Update svn properties, minor formatting cleanup.
r8743 | chi11ken | 2009-03-09 16:31:10 -0700 (Mon, 09 Mar 2009) | 2 lines
Thanks M1sha for a patch to reinstate the original functionality of the TreePopulatorModule. Note that the planting command 'tree' has been changed to 'tree plant'. (#3264)
r8742 | justincc | 2009-03-09 12:58:39 -0700 (Mon, 09 Mar 2009) | 2 lines
- minor: remove some mono compiler warnings
r8741 | justincc | 2009-03-09 12:40:32 -0700 (Mon, 09 Mar 2009) | 2 lines
- Add basic asset cache get test
r8740 | justincc | 2009-03-09 11:35:26 -0700 (Mon, 09 Mar 2009) | 3 lines
- Move method documentation from AssetCache up to IAssetCache
- No functional changes
r8739 | justincc | 2009-03-09 11:04:23 -0700 (Mon, 09 Mar 2009) | 4 lines
- Apply
- Some small syntax and refactoring tweaks for asset and inventory MSSQL
- This means the MSSQL db plugin now requires SQL Server 2005
r8738 | justincc | 2009-03-09 10:55:08 -0700 (Mon, 09 Mar 2009) | 6 lines
- Address
- A saved archive now immediately expires the data in the asset cache that it used, rather than retaining all the assets (esp textures) in the cache.
- This is an imperfect solution. Ideally we would only expire the assets newly requested for the archive (not existing ones). But doing that would require a lot more restructuring.
- I don't believe there are any locking issues due to the locking performed by the underlying memory cache, but please report any issues.
r8737 | sdague | 2009-03-09 08:20:36 -0700 (Mon, 09 Mar 2009) | 3 lines
- Added TXXX in front of unit tests to make sure they are running in the correct order. Although it might not make a difference here, this pattern should be followed to avoid further issues.
From: Arthur Rodrigo S Valadares <arthursv@linux.vnet.ibm.com>
r8736 | mikem | 2009-03-09 01:07:12 -0700 (Mon, 09 Mar 2009) | 5 lines
Refactor login test class.
There were multiple tests in one test method (T011_Auth_Login). This test has been refactored into multiple tests. Common setup code was placed into a SetUp method executed before each test.
r8735 | mikem | 2009-03-09 00:29:53 -0700 (Mon, 09 Mar 2009) | 7 lines
Fix tests broken in r8732.
Recent changes in the code handling login_to_simulator XMLRPC method calls caused two tests to fail because not enough parameters were being supplied with the method call. The parameters added in this patch work, but I'm not sure whether they are actually correct or even relevant. Diva, please look over this.
r8734 | mikem | 2009-03-09 00:29:34 -0700 (Mon, 09 Mar 2009) | 11 lines
Implemented FetchAssetMetadataSet in DB backends.
This method fetches metadata for a subset of the entries in the assets database. This functionality is used in the ForEach calls in the asset storage providers in AssetInventoryServer. With this implemented, frontends such as the BrowseFrontend should now work.
- MySQL: implemented, sanity tested
- SQLite: implemented, sanity tested
- MSSQL: implemented, not tested
- NHibernate: not implemented
r8733 | teravus | 2009-03-08 21:33:53 -0700 (Sun, 08 Mar 2009) | 4 lines
- Tweak llMoveToTarget per mantis 3265
- Add some comments to the Wind Module
- Add the BinBVH decoder/encoder as a scene object (to encode/decode animations programmatically).
- Add m_sitState for upcoming code to improve sit results.
r8732 | diva | 2009-03-08 16:17:49 -0700 (Sun, 08 Mar 2009) | 6 lines
Making the web_login_key code work, even if the LL Viewer doesn't support it. Other clients can launch the LL Viewer with something like this, for example:
Process.Start("C:\\Program Files\\SecondLife\\SecondLife.exe", "-loginuri " + loginuri + "?web_login_key=" + web_login_key + " -login " + firstName + " " + lastName + " -multiple");
This requires a prior step for actually getting the key, which can be done like this:
r8731 | lbsa71 | 2009-03-08 12:33:19 -0700 (Sun, 08 Mar 2009) | 10 lines
Thank you tlaukkan for a patch that: Upgraded to MXP 0.4 version and cleaned up field naming.
- Updated code to compile against MXP 0.4 version.
- Cleaned up field naming conventions.
- Added support for logging in with region name.
- Filled in new fields of JoinResponseMessage.
- Added support for SynchronizationBeginEvent and SynchronizationEndEvent.
- Commented out periodic debug log.
- Added networking startup log messages.
This closes mantis #3277
r8730 | adjohn | 2009-03-07 09:27:07 -0700 (Sat, 07 Mar 2009) | 1 line
Moving Windows Installer to forge.
r8729 | chi11ken | 2009-03-07 09:16:00 -0700 (Sat, 07 Mar 2009) | 1 line
Minor formatting cleanup.
r8728 | idb | 2009-03-07 07:39:42 -0700 (Sat, 07 Mar 2009) | 2 lines
Correct a typo, purely cosmetic. Fixes Mantis #3263
r8727 | idb | 2009-03-07 07:16:26 -0700 (Sat, 07 Mar 2009) | 3 lines
Limit the message length from llInstantMessage to 1024 characters. Also truncate messages that may exceed the limit set by the packet size. The limit in OpenMetaverse is 1100 bytes including a zero byte terminator. Fixes Mantis #3244
r8726 | idb | 2009-03-07 05:58:00 -0700 (Sat, 07 Mar 2009) | 2 lines
Added the ability to set User-Agent in llHTTPRequest. No new default value has been set, since having no User-Agent seems to work well, but the facility is now available to set this if required. Using something based on the pattern of SL's User-Agent may well cause problems; not all web servers respond well to it. See the notes in the SL Wiki. Fixes Mantis #3143
r8725 | idb | 2009-03-07 03:37:15 -0700 (Sat, 07 Mar 2009) | 2 lines
Correct casts so that the target id in the at_target event matches the original target id. Fixes Mantis #2861
r8724 | teravus | 2009-03-07 00:17:43 -0700 (Sat, 07 Mar 2009) | 2 lines
- Making the minimum ground offset for flying a configurable offset in the OpenSim.ini. This is the code that causes you to rise off the ground when you press the fly button and attempts to keep you above ground automatically when flying in a simulator.
- minimum_ground_flight_offset, by default, is 3 meters, as per Kitto Flora. See OpenSim.ini.example for an example.
r8723 | teravus | 2009-03-06 23:51:27 -0700 (Fri, 06 Mar 2009) | 2 lines
- fixes mantis 3259
- I'm concerned however that the 'minimum fly height' should really be implemented in ScenePresence and not in the specific physics plugin so that all of the physics plugins can take advantage of it and if desired, a person could swap out the 'minimum fly height' functionality with other functionality.
r8722 | teravus | 2009-03-06 23:14:31 -0700 (Fri, 06 Mar 2009) | 1 line
- Adding application/x-oar to the list of content types to which the HTTP Server will return the response as if it was a binary file pending discussion on the [opensim-dev] mailing list to be initiated by dmiles.
r8721 | chi11ken | 2009-03-06 19:39:27 -0700 (Fri, 06 Mar 2009) | 1 line
Update svn properties, minor formatting cleanup.
r8720 | chi11ken | 2009-03-06 19:11:50 -0700 (Fri, 06 Mar 2009) | 1 line
Add copyright headers.
r8719 | chi11ken | 2009-03-06 19:00:18 -0700 (Fri, 06 Mar 2009) | 1 line
Update svn properties.
r8718 | teravus | 2009-03-06 18:18:59 -0700 (Fri, 06 Mar 2009) | 3 lines
- Fixes mantis: #3241
- Uses 'mouselook' or left mouse button down, to determine when to use the camera's UP axis to determine the direction of movement.
- We crouch-slide no more.
r8717 | teravus | 2009-03-06 17:27:56 -0700 (Fri, 06 Mar 2009) | 1 line
- Added some limits to the maximum force applied per second by llMoveToTarget. Currently, it's 350 times the mass in newtons applied per second, maximum.
r8716 | ckrinke | 2009-03-06 16:01:35 -0700 (Fri, 06 Mar 2009) | 7 lines
Fixes Mantis #3260. Thank you kindly, MCortez, for a patch that: llSetHoverHeight() should not clamp the x/y position of an object the way MoveTo does, and it should recalculate the absolute height to hover at as an object moves to reflect the current ground/water height under it. Correctly implementing this required adjusting the Physics interfaces and implementing at the physics plug-in level. The attached is a patch that correctly implements llSetHoverHeight(), including updates to the ODE physics plug-in.
r8715 | sdague | 2009-03-06 14:14:50 -0700 (Fri, 06 Mar 2009) | 5 lines
add back .config files for all tests in an attempt to debug why these things crash so much.
This will generate a lot more log messages on make test, even some scary looking exceptions. Don't worry, that's normal.
r8714 | justincc | 2009-03-06 14:00:15 -0700 (Fri, 06 Mar 2009) | 2 lines
- minor: remove some mono compiler warnings
r8713 | justincc | 2009-03-06 13:44:31 -0700 (Fri, 06 Mar 2009) | 3 lines
- refactor: Remove GetLandOwner function from Scene
- Simplify since the land is never null
r8712 | justincc | 2009-03-06 13:12:08 -0700 (Fri, 06 Mar 2009) | 5 lines
- Improve memory usage when writing OARs
- This should make saving large OARs a somewhat better experience
- However, the problem where saving an archive pulls large numbers of assets into the asset cache isn't yet resolved
- This patch also removes lots of archive writing spam that crept in
r8711 | sdague | 2009-03-06 12:25:33 -0700 (Fri, 06 Mar 2009) | 1 line
- Protects RestClient from crashing with dictionary exception, which leads to the client thread crashing if uncaught.
r8710 | mw | 2009-03-06 02:57:31 -0700 (Fri, 06 Mar 2009) | 1 line
Added an output message to CreateCommsManagerPlugin for when a user tries to run with both -hypergrid=true and -background=true command line arguments, as these two don't work together: they initialise different root OpenSimulator classes. I was going to change it back to the old behaviour, where in that usecase it would just start up in the background but without hypergrid enabled, but think it's better to give an error about this and then exit, so the user knows to change their settings rather than later wondering why hypergrid isn't working.
r8709 | mikem | 2009-03-05 17:54:39 -0700 (Thu, 05 Mar 2009) | 1 line
Add missing parameter to m_log.DebugFormat().
r8708 | teravus | 2009-03-05 14:59:27 -0700 (Thu, 05 Mar 2009) | 1 line
- Fixing a few mass calculation errors suggested by jhurliman
r8707 | justincc | 2009-03-05 14:36:48 -0700 (Thu, 05 Mar 2009) | 3 lines
- Add more status information when an oar is being saved
- Among other messages, a log entry is posted for every 50 assets added to the archive
r8706 | melanie | 2009-03-05 14:20:57 -0700 (Thu, 05 Mar 2009) | 4 lines
Prevent ICommander-generated subcommand trees from generating an exception when the tree root command is executes without another verb following it. Fixes Mantis #3258
r8705 | justincc | 2009-03-05 14:10:39 -0700 (Thu, 05 Mar 2009) | 2 lines
- Replace Scene.GetLandHeight() with a straight query to Scene.Heightmap (which is used in other contexts)
r8704 | justincc | 2009-03-05 13:53:23 -0700 (Thu, 05 Mar 2009) | 2 lines
- refactor: move media and music url setting from scene into LandObject
r8703 | justincc | 2009-03-05 13:32:35 -0700 (Thu, 05 Mar 2009) | 2 lines
- simplify media and music url setting since we never get back a null land object
r8702 | justincc | 2009-03-05 12:32:27 -0700 (Thu, 05 Mar 2009) | 2 lines
- Replace some string to byte conversions for object/item name/description fields with the LLUtil function that prevents the max string size from being breached
r8701 | justincc | 2009-03-05 11:36:37 -0700 (Thu, 05 Mar 2009) | 2 lines
- remove now unused serialization code
r8700 | mikem | 2009-03-05 05:57:27 -0700 (Thu, 05 Mar 2009) | 1 line
Make DeserializeUUID explicitly private.
r8699 | mw | 2009-03-05 04:23:31 -0700 (Thu, 05 Mar 2009) | 1 line
Made the OpenSimInventoryFrontendPlugin.DeserializeUUID(Stream stream) method static to get past the build errors. Mikem really needs to check this change over to see if it's the right approach for what he wanted.
r8698 | mikem | 2009-03-05 01:30:23 -0700 (Thu, 05 Mar 2009) | 5 lines
Fix moving folders.
Casting from base class to inherited class is a no-no, and we must preserve the folder type when moving folders, otherwise it gets set to a Texture folder (type 0).
r8697 | mikem | 2009-03-05 01:30:15 -0700 (Thu, 05 Mar 2009) | 4 lines
Fix creating inventory items and folders.
The order of deserialization needed to be changed. Also corrected a bug that caused no inventory items to be returned on login.
r8696 | mikem | 2009-03-05 01:30:08 -0700 (Thu, 05 Mar 2009) | 4 lines
Implemented all Inventory frontend handlers.
This doesn't mean they all work as expected, though. More changes to come as testing unveils bugs.
r8695 | mikem | 2009-03-05 01:30:00 -0700 (Thu, 05 Mar 2009) | 1 line
Implementing more inventory storage methods.
r8694 | mikem | 2009-03-05 01:29:52 -0700 (Thu, 05 Mar 2009) | 3 lines
Use Inventory{Item,Folder}Base in AssetInventoryServer.
Also the first inventory storage methods are implemented.
r8693 | mikem | 2009-03-05 01:29:42 -0700 (Thu, 05 Mar 2009) | 1 line
A couple cosmetic changes in inventory storage plugin.
r8692 | ckrinke | 2009-03-04 21:24:22 -0700 (Wed, 04 Mar 2009) | 7 lines
Fixes Mantis #3255. Thank you kindly, MCortez, for a patch that: Changes to IWindModule interface: Change from assuming a single array of 256 Vector2 values to a lookup function that takes region x, y, z and returns a Vector3
- Changed llWind() to use new lookup method of IWindModule
- Moved logic for determining the wind at a given point in the data array from llWind() to the Wind Module itself.
r8691 | ckrinke | 2009-03-04 20:20:28 -0700 (Wed, 04 Mar 2009) | 3 lines
Fixes Mantis #3194. Thank you kindly, Godfrey for a patch that: fixes llSetLinkPrimitiveParams() - PRIM_ROTATION rotates the prim containing the script, rather than the specified child prim
r8690 | ckrinke | 2009-03-04 20:15:30 -0700 (Wed, 04 Mar 2009) | 3 lines
Fixes Mantis #3253. Thank you kindly, Godfrey, for a patch that: Corrects the incomplete implementation of llXorBase64StringsCorrect() so that it returns the proper reversible result.
r8689 | afrisby | 2009-03-04 17:52:59 -0700 (Wed, 04 Mar 2009) | 3 lines
MRM Scripting Changes
- Renames MiniRegionModule to MRMModule to make it more distinct from the actual Mini Region Module[s] executed in Scene.
- Renames MiniRegionModuleBase to MRMBase for convenience. MRM's need to be adjusted to inherit from MRMBase.
r8688 | afrisby | 2009-03-04 17:16:06 -0700 (Wed, 04 Mar 2009) | 2 lines
- Implements a number of members on SOGObject for use with the MRM Script Engine API.
- It's lag-tacular! :D
r8687 | afrisby | 2009-03-04 15:14:40 -0700 (Wed, 04 Mar 2009) | 2 lines
- Fleshed out the MRM Module a little.
- Please don't use this yet, it represents a very heavy security risk if you enable it.
r8686 | justincc | 2009-03-04 13:36:09 -0700 (Wed, 04 Mar 2009) | 2 lines
- For now, restore file extension for default oar name I accidentally removed on the last commit
r8685 | justincc | 2009-03-04 13:31:03 -0700 (Wed, 04 Mar 2009) | 2 lines
- Add the ability to load and save iar item nodes where folders have identical names
r8684 | afrisby | 2009-03-04 13:29:50 -0700 (Wed, 04 Mar 2009) | 1 line
- Whoops. Left MiniModule enabled to anyone. (potential security risk). Disabled - edit code to load.
r8683 | afrisby | 2009-03-04 13:28:11 -0700 (Wed, 04 Mar 2009) | 1 line
- More work on MiniRegionModule module.
r8682 | justincc | 2009-03-04 11:33:05 -0700 (Wed, 04 Mar 2009) | 3 lines
- Add gnu tar format long file name support to tar reading and writing.
- Not actually tested yet, though existing code which doesn't require long file names looks fine
r8681 | mikem | 2009-03-03 20:58:11 -0700 (Tue, 03 Mar 2009) | 1 line
IObjectFace needs to be public to compile.
r8680 | afrisby | 2009-03-03 19:29:51 -0700 (Tue, 03 Mar 2009) | 1 line
- More work on MiniRegionModule module.
r8679 | afrisby | 2009-03-03 18:38:22 -0700 (Tue, 03 Mar 2009) | 1 line
- Implementing some interfaces for aforementioned script engine. Ignore this.
r8678 | afrisby | 2009-03-03 16:25:16 -0700 (Tue, 03 Mar 2009) | 3 lines
CONTRIBUTORS.txt cleanup
- Reverting CONTRIBUTORS.txt change in r1370, restoring to original 'semi-order-of-appearance' format.
- Added some missing contributors. May change this in future to simply cut and paste from the Wiki contributors page.
r8677 | chi11ken | 2009-03-03 10:39:57 -0700 (Tue, 03 Mar 2009) | 1 line
Avoid NRE if client sends unrecognized packet type.
r8676 | chi11ken | 2009-03-03 10:23:11 -0700 (Tue, 03 Mar 2009) | 1 line
Update svn properties.
r8675 | mw | 2009-03-03 09:36:21 -0700 (Tue, 03 Mar 2009) | 1 line
Renamed ILoginRegionsConnector to ILoginServiceToRegionsConnector and moved it from OpenSim.Client.Linden to OpenSim.Framework.
r8674 | mw | 2009-03-03 08:45:52 -0700 (Tue, 03 Mar 2009) | 1 line
forgotten files
r8673 | mw | 2009-03-03 08:41:21 -0700 (Tue, 03 Mar 2009) | 3 lines
Moved Linden protocol login handling to modules in OpenSim.Client.Linden. There are two region modules in there: LLStandaloneLoginModule (for standalone mode) and LLProxyLoginModule (for grid mode, which just handles incoming expect_user and logoff_user messages from the remote login server). Changed OpenSim.Framework.Communications.Tests.LoginServiceTests to use the LLStandaloneLoginService (from the LLStandaloneLoginModule) rather than LocalLoginService. Really these login tests should most likely be somewhere else, as they are testing specific implementations of login services. Commented out the old LocalLoginService as it's no longer used, but want to check there are no problems before it gets deleted.
r8672 | mw | 2009-03-03 05:51:54 -0700 (Tue, 03 Mar 2009) | 2 lines
Refactoring of CreateCommsManagerPlugin. Plus some general cleanup of a few other files (deleting excess blank lines etc)
r8671 | mw | 2009-03-02 11:04:00 -0700 (Mon, 02 Mar 2009) | 1 line
Renamed OpenSimBase m_autoCreateLindenStack to m_autoCreateClientStack
r8670 | mw | 2009-03-02 10:47:42 -0700 (Mon, 02 Mar 2009) | 1 line
Added more error info to CreateCommsManagerPlugin.
r8669 | mw | 2009-03-02 10:29:21 -0700 (Mon, 02 Mar 2009) | 1 line
Added some debug output to CreateCommsManagerPlugin
r8668 | mw | 2009-03-02 10:18:24 -0700 (Mon, 02 Mar 2009) | 1 line
Added OpenSim.Client.Linden which is a (non shared) region module that creates and initialises the LindenClientStack (or actually whatever client stack was set in opensim.ini) for that region. Currently this module is still at an early stage and just for testing, so it's hardcoded to be disabled. To enable, first turn off auto creation of the client stack in OpenSimBase (see last revision) and then in OpenSim.Client.Linden.LLClientStackModule change bool m_createClientStack = false; to true.
r8667 | mw | 2009-03-02 09:33:11 -0700 (Mon, 02 Mar 2009) | 4 lines
Moved the SetupScene methods from RegionApplicationBase to OpenSimBase [Do we really still need RegionApplicationBase?] Added a flag (bool m_autoCreateLindenStack = true) which says if the ClientStack will be autocreated and initialised when creating regions. This helps with moving ClientStacks to Region modules. Currently this flag is hardcoded to true, as it is only for testing at the moment, so you need to change the value in the code if you want to turn off auto creating.
r8666 | mw | 2009-03-02 07:42:01 -0700 (Mon, 02 Mar 2009) | 1 line
Changed IClientNetworkServer.AddScene method from void AddScene(Scene x) to void AddScene(IScene x), as there should be no need for the client view to have a reference to Scene; IScene should be all it needs.
r8665 | mw | 2009-03-02 04:21:18 -0700 (Mon, 02 Mar 2009) | 1 line
Removed the commented out InitialiseStandaloneServices and InitialiseGridServices methods (which are now performed in CreateCommsManagerPlugin) from OpenSimBase and HGOpenSimNode. If we decide to swap back to the old methods we can always re-add them, rather than leave them commented out.
r8664 | mw | 2009-03-02 04:03:11 -0700 (Mon, 02 Mar 2009) | 1 line
After another heroic and bloody battle, OpenSimulator Dino Expedition 1, killed off OsSetParcelMediaTime, which was only ever added for testing. And all the logic code of it has been commented out for a long time.
r8663 | mw | 2009-03-02 03:52:27 -0700 (Mon, 02 Mar 2009) | 1 line
As part of a dinosaur hunting expedition, IScenePresenceBody.cs was terminated. The expedition leader, MW, believes it never led a meaningful life, and is sure it hasn't contributed anything in the last 500,000 years (or 2 years).
r8662 | ckrinke | 2009-03-01 12:33:12 -0700 (Sun, 01 Mar 2009) | 3 lines
Mantis#3249. Thank you kindly, Tlaukkan (Tommil) for a patch that:
- Removed compiler warnings
- Updated protobuf-net and MXP license files.
r8661 | dahlia | 2009-03-01 11:31:27 -0700 (Sun, 01 Mar 2009) | 1 line
Thanks tommil for mantis #3248 - a patch that adds support for avatar movement to MXP module.
r8660 | chi11ken | 2009-03-01 02:15:31 -0700 (Sun, 01 Mar 2009) | 1 line
Update svn properties, add copyright headers, minor formatting cleanup.
r8659 | mw | 2009-02-28 09:42:13 -0700 (Sat, 28 Feb 2009) | 1 line
Added check so Util.ReadSettingsFromIniFile doesn't try to set static fields.
r8657 | mw | 2009-02-28 09:13:20 -0700 (Sat, 28 Feb 2009) | 1 line
Copied the Util.ReadSettingsFromIniFile method from the branch to trunk.
r8655 | mw | 2009-02-28 08:16:12 -0700 (Sat, 28 Feb 2009) | 1 line
Changed it so only .ini file types are loaded from the (optional) config directory rather than all file types in that folder.
r8652 | mw | 2009-02-28 07:04:02 -0700 (Sat, 28 Feb 2009) | 1 line
Applied Patch from mantis #3245. Thanks tlaukkan/Tommil
r8647 | mw | 2009-02-27 14:19:32 -0700 (Fri, 27 Feb 2009) | 1 line
updating svn ignore properties
r8646 | mw | 2009-02-27 10:03:27 -0700 (Fri, 27 Feb 2009) | 2 lines
Changed the CreateCommsManagerPlugin so it requests a IRegionCreator and subscribes to the OnNewRegionCreated event on that interface rather than requesting the LoadRegionsPlugin directly. Removed the reference to OpenSim.ApplicationPlugins.LoadRegions from the CreateCommsManagerPlugin project.
r8644 | mw | 2009-02-27 09:07:11 -0700 (Fri, 27 Feb 2009) | 1 line
Changed the order of the OpenSim.Grid.GridServer and OpenSim.Grid.GridServer.Modules projects in prebuild.xml. Hopefully this will fix the mono build problem.
r8643 | mw | 2009-02-27 08:57:09 -0700 (Fri, 27 Feb 2009) | 4 lines
Added GridServerPlugin class (which implements IGridPlugin) to OpenSim.Grid.GridServer.Modules. This class handles all the initialising of the grid server. And made GridServer into basically a generic server that just loads plugins. So this is a step towards having a generic server that loads service modules.
r8642 | mw | 2009-02-27 07:50:49 -0700 (Fri, 27 Feb 2009) | 1 line
Applied patch from Mantis# 3240, thanks tlaukkan/Tommil
r8641 | mw | 2009-02-27 07:17:57 -0700 (Fri, 27 Feb 2009) | 3 lines
Added support for reading ini files from an (optional) config folder. This allows the splitting up of opensim.ini into multiple ini files. The ini files in this folder are loaded after the master ini file (if that is set) and before opensim.ini. The default folder it looks for and searches is "bin\config", but that can be set by using the command arg "-inidirectory=<path>" (path is local to bin\) when starting up opensim.exe.
r8640 | sdague | 2009-02-26 15:54:50 -0700 (Thu, 26 Feb 2009) | 2 lines
svn attribute fixes so that we can play nice between windows and linux
r8639 | mw | 2009-02-26 15:51:52 -0700 (Thu, 26 Feb 2009) | 2 lines
Added IRegionCreator interface that all ApplicationPlugins that are creators of Scenes should implement and register with the ApplicationRegistry.StackModuleInterface<>(); So that other plugins can attach to their OnNewRegionCreated event. Made some changes to IRegistryCore and RegistryCore so they support "Stacked" interfaces.
r8638 | sdague | 2009-02-26 15:37:02 -0700 (Thu, 26 Feb 2009) | 4 lines
This adds a new osGetAgentIP function with threat level set to High. It isn't tested, but it doesn't break anything else. The reason for this function is to let in-world tools be used to coordinate out-of-world network services that need access to client ip addresses.
r8637 | mw | 2009-02-26 15:14:24 -0700 (Thu, 26 Feb 2009) | 2 lines
Another change to how the CreateCommsManagerPlugin checks if it should be creating HG or normal CommunicationsManager.
r8636 | mw | 2009-02-26 15:03:53 -0700 (Thu, 26 Feb 2009) | 2 lines
Changed CreateCommsManagerPlugin so it handles external subclasses of OpenSimBase. This process of checking if it should be creating HG or normal CommunicationsManager needs to change. So look out for a revert of this whole plugin soon.
r8635 | mw | 2009-02-26 14:30:12 -0700 (Thu, 26 Feb 2009) | 3 lines
Moved the Initialisation of the CommunicationsManager to a ApplicationPlugin. Also in that plugin it registers the IUserService with all the Scenes (as they are created). So now we can start changing over all uses of IUserService, that currently access it from the CommunicationsManager to accessing it from the Scene.RequestModuleInterface call. Once that is done we can move the UserService creation out to its own plugin and remove all references to it from the CommunicationsManager. Then we can take the next CommunicationsManager interface and repeat.
r8634 | sdague | 2009-02-26 14:29:25 -0700 (Thu, 26 Feb 2009) | 3 lines
- This patch reduces the excessive number of threads opened by the Timer event. Also simplifies the walking around method.
From: Arthur Rodrigo S Valadares <arthursv@linux.vnet.ibm.com>
r8633 | sdague | 2009-02-26 14:29:16 -0700 (Thu, 26 Feb 2009) | 4 lines
- Update ScenePresenceTests to reflect current REST communication workflow.
- Fixed an issue with AssetCache where it would break unit tests randomly.
From: Arthur Rodrigo S Valadares <arthursv@linux.vnet.ibm.com>
r8632 | justincc | 2009-02-26 14:00:33 -0700 (Thu, 26 Feb 2009) | 4 lines
- Apply
- Make load/save oar and load/save xml2 behave a little better when there is an io problem
- Thanks dslake
r8631 | mw | 2009-02-26 13:18:29 -0700 (Thu, 26 Feb 2009) | 1 line
Oops, forgot to commit a changed file.
r8630 | melanie | 2009-02-26 13:11:55 -0700 (Thu, 26 Feb 2009) | 3 lines
Plumb in the RetrieveInstantMessages event that is sent by the viewer when it is ready to receive offline IM
r8629 | mw | 2009-02-26 13:11:13 -0700 (Thu, 26 Feb 2009) | 1 line
Changed the type of the ApplicationRegistry member from RegistryCore to IRegistryCore
r8628 | mw | 2009-02-26 13:01:20 -0700 (Thu, 26 Feb 2009) | 5 lines
Added IRegistryCore and RegistryCore to OpenSim.Framework. Added an ApplicationRegistry to OpenSimBase. Changed LoadRegionsPlugin so it registers itself to that application registry. Added an event to LoadRegionsPlugin that is triggered when it creates a new scene, although maybe this event should actually be in OpenSimBase in case other plugins are creating regions (like the RemoteAdminPlugin).
r8627 | sdague | 2009-02-26 10:06:06 -0700 (Thu, 26 Feb 2009) | 1 line
Attempt to fix the "region starts but doesn't load anything" issue
r8626 | mw | 2009-02-26 08:21:06 -0700 (Thu, 26 Feb 2009) | 2 lines
Added a PostInitialise method to IApplicationPlugin, this allows us to do work in there knowing that all other ApplicationPlugins have been initialised by that time. Moved the loadRegions code in LoadRegionsPlugin to the PostInitialise method.
r8625 | mw | 2009-02-26 08:06:27 -0700 (Thu, 26 Feb 2009) | 1 line
Add check in SceneManager to stop opensim.exe crashing if no regions/scenes were loaded.
r8624 | lbsa71 | 2009-02-26 04:50:49 -0700 (Thu, 26 Feb 2009) | 1 line
- Got rid of concrete GridDBService references
r8623 | lbsa71 | 2009-02-26 04:44:16 -0700 (Thu, 26 Feb 2009) | 5 lines
- renamed IRegionProfileService to IRegionProfileRouter to better reflect use (naming is a work in progress...)
- introduced new IRegionProfileService that is going to be _one_ profileService
- Had GridDBService inherit the IRegionProfileService
(preparing for re-wiring things and de-duplicating eventually)
r8622 | mw | 2009-02-25 14:00:58 -0700 (Wed, 25 Feb 2009) | 2 lines
Renamed IMessageUserServerService to IInterServiceUserService.cs. Renamed MessageUserServerModule to InterMessageUserServerModule.
r8621 | justincc | 2009-02-25 13:53:02 -0700 (Wed, 25 Feb 2009) | 2 lines
- minor: Remove most mono compiler warnings
r8620 | justincc | 2009-02-25 13:07:25 -0700 (Wed, 25 Feb 2009) | 4 lines
- Properly load items into correct folders when an iar is loaded
- At the moment, any existing folders with the same name are reused - will need an option to always create new folders
- not yet ready for general use
r8619 | mw | 2009-02-25 12:39:56 -0700 (Wed, 25 Feb 2009) | 1 line
Renamed IUGAIMCore to IGridServiceCore, still not really happy with this name as it could be confused with the Grid Server namespace or with the IGridService in the region servers.
r8618 | mw | 2009-02-25 11:47:19 -0700 (Wed, 25 Feb 2009) | 1 line
Added IGridServiceModule to be the base interface for the Service Modules for the Grid, User and Messaging servers.
r8617 | mw | 2009-02-25 11:33:15 -0700 (Wed, 25 Feb 2009) | 1 line
More refactoring of the Grid, User and Messaging servers.
r8616 | justincc | 2009-02-25 11:32:39 -0700 (Wed, 25 Feb 2009) | 3 lines
- Fix my own unit test
- Disable folder iar creation code for now (though this wasn't actually causing the test failure)
r8615 | justincc | 2009-02-25 11:07:32 -0700 (Wed, 25 Feb 2009) | 4 lines
- Add InventoryArchiveConstants that I missed from last commit
- This commit also does a first pass at creating folders for an inventory archive (previously everything was dumped in the same destination folder).
- This code might not work yet and nobody else should be using it yet anyway :)
r8614 | justincc | 2009-02-25 10:30:15 -0700 (Wed, 25 Feb 2009) | 2 lines
- Store inventory data in an 'inventory' directory rather than in the root of an iar
r8613 | lbsa71 | 2009-02-25 09:31:09 -0700 (Wed, 25 Feb 2009) | 1 line
- ignored some gens
r8612 | lbsa71 | 2009-02-25 09:29:43 -0700 (Wed, 25 Feb 2009) | 8 lines
- Applied a patch that: Added prim parameters support to MXP client
* Updated MXP to contain extension fragment with prims and updated MXPClientView to fill in the parameters.
* Added google protobuffers dll.
* Updated MXP dll.
* Updated MXPClientView to send prim parameters as Perception event extension.
* Started OpenSimulator and connected with IdealistViewer via MXP and ensured from log that parameters are being sent.
* Ensured that nant test target runs successfully.
This closes mantis #3229. Thanks, tlaukkan!
r8611 | sdague | 2009-02-25 07:19:15 -0700 (Wed, 25 Feb 2009) | 5 lines
From: Alan Webb <awebb@linux.vnet.ibm.com>
The mono addin filter for the AssetCache is incorrect, this fixes it. The problem only shows up when you have more than one AssetCache to choose from.
r8610 | lbsa71 | 2009-02-25 06:00:32 -0700 (Wed, 25 Feb 2009) | 1 line
- Ignored gens
r8609 | lbsa71 | 2009-02-25 05:26:00 -0700 (Wed, 25 Feb 2009) | 1 line
- Experimental softening of SOG waiting for update on link - changing from abort to forced update.
r8608 | lbsa71 | 2009-02-25 04:01:38 -0700 (Wed, 25 Feb 2009) | 2 lines
- Refactored SOP.FolderID weirdness by removing calls to empty setter. Yes, I do realize the setter has to be there for legacy reasons, but since the calls will never actually DO anything, I'm removing them.
- So, SOP.FolderID is actually a cruft field that should be removed.
r8607 | mikem | 2009-02-24 22:37:57 -0700 (Tue, 24 Feb 2009) | 4 lines
Allow /* C-style comments */ in LSL scripts.
This fixes Mantis #3199. opensim-libs SVN r87 contains the corresponding changes.
r8606 | ckrinke | 2009-02-24 21:38:06 -0700 (Tue, 24 Feb 2009) | 4 lines
Fixes Mantis #3220. Thank you kindly, MPallari, for a patch that: This patch changes InformClientOfNeighbour, CrossRegion and SendRegionTeleport methods to virtual.
r8605 | mikem | 2009-02-24 21:37:33 -0700 (Tue, 24 Feb 2009) | 3 lines
Comment out HttpProxy and HttpProxyExceptions in OpenSim.ini.example.
Fixes Mantis #3221. Thanks cmickeyb for the patch.
r8604 | ckrinke | 2009-02-24 21:29:02 -0700 (Tue, 24 Feb 2009) | 2 lines
Fixes Mantis #3187. Thank you kindly, DoranZemlja for a patch that: Deals with the multiple warning side effect introduced earlier.
r8603 | mikem | 2009-02-24 19:14:19 -0700 (Tue, 24 Feb 2009) | 3 lines
Distinguish 404 errors in RestClient.Request().
Mantis #3225.
r8602 | mikem | 2009-02-24 17:32:26 -0700 (Tue, 24 Feb 2009) | 7 lines
A few updates necessary for load balancer.
- Handle GetUser request for nonexistent user gracefully
- Include throttle levels in ClientInfo
- Code to save/restore throttles in client stack
- Only update/send updates to active clients
- Make animation classes serializable
r8601 | mikem | 2009-02-24 16:40:08 -0700 (Tue, 24 Feb 2009) | 1 line
Setting svn:eol-style=native on new files.
r8600 | diva | 2009-02-24 16:06:15 -0700 (Tue, 24 Feb 2009) | 1 line
Close-to-final tweaking with appearance. This time sending *everything*. Addresses mantis #3223.
r8599 | mw | 2009-02-24 12:00:36 -0700 (Tue, 24 Feb 2009) | 1 line
More work on modulising the User Server.
r8598 | mw | 2009-02-24 11:06:06 -0700 (Tue, 24 Feb 2009) | 1 line
Removed the additions from the last revision for the "ShowHelp" delegate handling, as it seems that system isn't in use anymore.
r8597 | mw | 2009-02-24 10:57:26 -0700 (Tue, 24 Feb 2009) | 1 line
More refactoring of the Grid/user/messaging servers.
r8596 | mw | 2009-02-24 09:13:16 -0700 (Tue, 24 Feb 2009) | 1 line
Same treatment for the MessagingServer... added OpenSim.Grid.MessagingServer.Modules for the modules/components of it.
r8595 | mw | 2009-02-24 08:57:25 -0700 (Tue, 24 Feb 2009) | 1 line
Added OpenSim.Grid.GridServer.Modules, for the GridServer modules/components.
r8594 | mw | 2009-02-24 08:37:03 -0700 (Tue, 24 Feb 2009) | 2 lines
First step in separating out the Userserver console command handling to a "module". Added OpenSim.Grid.UserServer.Modules project/dll which now contains the components of the userserver. With the OpenSim.Grid.UserServer being the setup and initiate exe.
r8593 | mw | 2009-02-24 07:14:34 -0700 (Tue, 24 Feb 2009) | 1 line
Deleted the files from Messagingserver that are now in OpenSim.Grid.Framework
r8592 | mw | 2009-02-24 07:12:25 -0700 (Tue, 24 Feb 2009) | 1 line
Updated MessagingServer to use OpenSim.Grid.Framework
r8591 | mw | 2009-02-24 07:00:29 -0700 (Tue, 24 Feb 2009) | 1 line
Some cleaning up in the MesssagingServer and GridServer.
r8590 | mw | 2009-02-24 06:53:38 -0700 (Tue, 24 Feb 2009) | 2 lines
Added OpenSim.Grid.Framework project. Changed the Gridserver so it uses/references OpenSim.Grid.Framework
r8589 | mw | 2009-02-24 06:33:57 -0700 (Tue, 24 Feb 2009) | 1 line
More refactoring of the UserServer.
r8588 | dahlia | 2009-02-23 23:23:28 -0700 (Mon, 23 Feb 2009) | 1 line
update version number for bamboo zip file output
r8587 | dahlia | 2009-02-23 23:02:44 -0700 (Mon, 23 Feb 2009) | 2 lines
remove log4net dependency from PrimMesher.cs and sync PrimMesher.cs with PrimMesher.dll version 29 on forge
r8586 | diva | 2009-02-23 21:00:54 -0700 (Mon, 23 Feb 2009) | 1 line
Minor guard protecting against hackers like me who manipulate region UUIDs directly.
r8585 | ckrinke | 2009-02-23 16:14:04 -0700 (Mon, 23 Feb 2009) | 3 lines
Thank you kindly, TLaukkan (Tommil) for a patch that solves: If -background=true is specified on the command line, a null pointer exception crashes the server in OpenSim/Region/Application/OpenSimBase.cs in method StartupSpecific. It's trying to dereference m_console, which is null, presumably because we're in background mode.
r8584 | mw | 2009-02-23 13:01:03 -0700 (Mon, 23 Feb 2009) | 2 lines
Renamed IGridMessagingModule to IGridMessagingMapper. Plus some general cleanup of the GridMessagingModule.
r8583 | mw | 2009-02-23 12:38:36 -0700 (Mon, 23 Feb 2009) | 1 line
more refactoring of the Grid server, to separate them into modules
r8582 | sdague | 2009-02-23 05:52:32 -0700 (Mon, 23 Feb 2009) | 7 lines
From: Christopher Yeoh <yeohc@au1.ibm.com>
This patch fixes a bug where if a script in a child prim has taken control of an avatar when they sit, although permission for camera control is revoked when they stand, free camera control is not restored. Currently it is only restored if the script is in the root prim (though it's not clear to me where this happens!).
r8581 | lbsa71 | 2009-02-23 03:38:25 -0700 (Mon, 23 Feb 2009) | 1 line
- This should fix the 'Solution Folder' annoyance on express versions.
r8580 | chi11ken | 2009-02-23 03:36:16 -0700 (Mon, 23 Feb 2009) | 1 line
Update svn properties, add copyright headers, minor formatting cleanup.
r8579 | afrisby | 2009-02-23 00:57:54 -0700 (Mon, 23 Feb 2009) | 2 lines
- Commenting out threaded Scene update for the moment.
- It works, but makes certain building tasks slow to update.
r8578 | afrisby | 2009-02-23 00:31:13 -0700 (Mon, 23 Feb 2009) | 1 line
- Fix for recent thread patch - IsAlive apparently is not as reliable as ThreadState.
r8577 | afrisby | 2009-02-22 23:55:42 -0700 (Sun, 22 Feb 2009) | 4 lines
- Performance Changes:
- Moves Entity Updates into a separate thread, allowing OpenSimulator to utilize a computer's CPU more effectively in return for potentially greater user and prim capacity.
- Removes an expensive Sqrt call performed during Update on each object. This should lower CPU requirements for high-prim regions with physics enabled.
- MXP Changes: Centers the region around 0,0 for primitives instead of 128,128. Prim display should now look more correct for MXP viewers.
r8576 | mikem | 2009-02-22 21:39:08 -0700 (Sun, 22 Feb 2009) | 1 line
Load default assets when AssetInventory starts.
r8575 | mikem | 2009-02-22 21:07:46 -0700 (Sun, 22 Feb 2009) | 7 lines
Prevent avatar from walking along z-axis
Thanks mirceakitsune for a patch that prevents the avatar from trying to walk along the Z-axis in mouselook mode (or left-click the avatar and walk) while looking up or down.
Fixes Mantis #946.
r8574 | ckrinke | 2009-02-22 19:43:51 -0700 (Sun, 22 Feb 2009) | 4 lines
Mantis#3187. Thank you kindly, DoranZemlja for a patch that: Adds a warning for an LSL construct that exploits a popular list memory saving hack.
r8573 | diva | 2009-02-22 17:51:31 -0700 (Sun, 22 Feb 2009) | 3 lines
A little bit more tweaking with appearance. Now passing both the wearables and the textures referred to in the Texture faces of AvatarAppearance. The textures are still not being acted upon on the other side, but they will. Note: will make avies coming from older sims casper or grey. Upgrade! Related to mantis #3204.
r8572 | ckrinke | 2009-02-22 13:52:55 -0700 (Sun, 22 Feb 2009) | 6 lines
Mantis#3218. Thank you kindly, TLaukkan (Tommil) for a patch that:
- Added log4net dependency to physxplugin in prebuild.xml.
- Added missing m_log fields to classes.
- Replaced Console.WriteLine with appropriate m_log.Xxxx
- Tested that nant test target runs successfully.
- Tested that local opensim sandbox starts up without errors.
r8571 | melanie | 2009-02-22 13:17:12 -0700 (Sun, 22 Feb 2009) | 2 lines
Allow delivery of object messages gridwide
r8570 | mw | 2009-02-22 12:19:24 -0700 (Sun, 22 Feb 2009) | 1 line
First step in giving the messaging server the modular refactoring treatment. As with the other two servers, this is very much a work in progress.
r8569 | afrisby | 2009-02-22 05:45:23 -0700 (Sun, 22 Feb 2009) | 2 lines
- MXP Clients are now treated as full root agents - including being given a default avatar.
- MXP Clients now are capable of displaying primitives and objects within the Scene.
r8568 | afrisby | 2009-02-22 05:39:46 -0700 (Sun, 22 Feb 2009) | 1 line
- Fixes an assumption whereby Scene assumes that each client is capable of producing a circuit. This affects non-Linden derived viewers who do not utilize circuits.
r8567 | mw | 2009-02-22 04:01:26 -0700 (Sun, 22 Feb 2009) | 1 line
Part 1 of refactoring the userserver. Changed it so that, instead of subclassing the user database access class (UserManagerBase) and then adding the http handlers to that, there is now a UserDataBaseService that is passed to the other classes so they can access the db. This should make it easier to have multiple "modules" that can register http handlers and access the db.
r8566 | afrisby | 2009-02-22 03:21:41 -0700 (Sun, 22 Feb 2009) | 1 line
- And a little more
r8565 | afrisby | 2009-02-22 03:20:53 -0700 (Sun, 22 Feb 2009) | 1 line
- Removing some C#3.0 that snuck in.
r8564 | afrisby | 2009-02-22 03:18:42 -0700 (Sun, 22 Feb 2009) | 1 line
- Restoring
r8563 | afrisby | 2009-02-22 03:18:21 -0700 (Sun, 22 Feb 2009) | 1 line
- Fixing bad SVN commit.
r8562 | afrisby | 2009-02-22 02:31:24 -0700 (Sun, 22 Feb 2009) | 2 lines
- Updates MXP.dll to latest version.
- MXP: Corrects an issue whereby session requests were never correctly acknowledged.
r8561 | chi11ken | 2009-02-22 02:02:27 -0700 (Sun, 22 Feb 2009) | 1 line
Update svn properties.
r8560 | afrisby | 2009-02-22 01:53:56 -0700 (Sun, 22 Feb 2009) | 1 line
- There's always something. Fixes MXP Server so that when it starts up, it actually starts up.
r8559 | afrisby | 2009-02-22 01:48:55 -0700 (Sun, 22 Feb 2009) | 7 lines
- Adds initial support for the MXP Virtual Worlds protocol ()
- Handled via the MXPModule.cs located in OpenSim.Client.MXP namespace.
- Also implements MXPClientView and MXPPacketServer for IClientAPI compatibility.
- No changes were required to Core to implement this - the thing is self contained in OpenSim.Client.MXP.dll.
- Includes reference implementation of MXP as MXP.dll - this is under the Apache 2.0 license.
- Requires OpenSim.ini setting to enable. "[MXP] \n Enabled=true \n Port=1253"
- May break. Highly untested.
r8558 | chi11ken | 2009-02-21 18:26:18 -0700 (Sat, 21 Feb 2009) | 1 line
Refactor log4net logger handling in script engine. (#3148)
r8557 | diva | 2009-02-21 18:26:11 -0700 (Sat, 21 Feb 2009) | 2 lines
Addresses some issues with appearance after TPs. Appearance.Owner was not being set, and that's what's being used in SendAppearanceToOtherAgent. Mantis #3204.
r8556 | chi11ken | 2009-02-21 18:18:49 -0700 (Sat, 21 Feb 2009) | 1 line
Update svn properties, add copyright headers, minor formatting cleanup.
r8555 | mw | 2009-02-21 14:03:20 -0700 (Sat, 21 Feb 2009) | 1 line
Applied patch from mantis #3217, which allows Dynamic Images of type RGB (so with no alpha value). Thanks BlueWall.
r8554 | mw | 2009-02-21 11:41:28 -0700 (Sat, 21 Feb 2009) | 1 line
More Grid server refactoring
r8553 | ckrinke | 2009-02-21 10:50:46 -0700 (Sat, 21 Feb 2009) | 3 lines
Thank you kindly, DoranZemlja for a patch that: Solves the Object-Key problem when using llHTTPRequest()
r8552 | diva | 2009-02-21 10:44:33 -0700 (Sat, 21 Feb 2009) | 2 lines
A small improvement in the UserLoginService, hence the User Server: users are now being given a default appearance if there is none in the user database. This issue affected newly created accounts, which aren't given an appearance at time of creation. May address some of the issues reported in mantis #3204 (but the incompatibility with pre-8447 is unaffected and continues to exist).
r8551 | mw | 2009-02-21 08:15:54 -0700 (Sat, 21 Feb 2009) | 1 line
Some more refactoring of GridServer.
r8550 | mw | 2009-02-21 07:45:10 -0700 (Sat, 21 Feb 2009) | 1 line
Applied patch from mantis #3213. Which adds a check to create region command, to make sure the .xml is passed in the command arguments. Thanks BlueWall
r8549 | mw | 2009-02-21 07:36:29 -0700 (Sat, 21 Feb 2009) | 1 line
Added missing header to a file (before chi11ken does it)
r8548 | mw | 2009-02-21 07:30:17 -0700 (Sat, 21 Feb 2009) | 1 line
Added a check to LLClientView.RegisterInterface<T>(T iface), so that it can't try to add duplicate interfaces and cause an exception.
r8547 | mw | 2009-02-21 07:24:25 -0700 (Sat, 21 Feb 2009) | 1 line
Added a check to GridServerBase.RegisterInterface<T>(T iface), so that it can't try to add duplicate interfaces and cause an exception.
r8546 | mw | 2009-02-21 07:19:40 -0700 (Sat, 21 Feb 2009) | 1 line
A bit more refactoring of the GridServer. To make the "modules" share a common Initialise method.
r8544 | lbsa71 | 2009-02-21 07:12:06 -0700 (Sat, 21 Feb 2009) | 1 line
- Upping to interface version 3 - let's see how this goes.
r8543 | mw | 2009-02-21 06:44:03 -0700 (Sat, 21 Feb 2009) | 2 lines
Refactored the GridServer into a GridDBService and a set of "modules". Currently they aren't plugin modules as the support for dynamically loading them isn't complete.
r8541 | melanie | 2009-02-21 04:48:50 -0700 (Sat, 21 Feb 2009) | 3 lines
Allow entry of '?' in http URIs. If the field being typed begins with "http", the ? is just an ordinary character in that field.
r8540 | lbsa71 | 2009-02-21 02:39:33 -0700 (Sat, 21 Feb 2009) | 9 lines
- Applied a patch that: Added estate ban table to migration scripts and nhibernate mapping. Refactored property getters and setters for estate ban object to support NHibernate.
- Added estate ban table to migration scripts of all supported databases.
- Added nhibernate mapping for EstateBans property of EstateSettings
- Refactored property accessors for EstateBan object.
- Added comments for EstateBan properties.
- Ensured that NHibernate tests pass with NUnitGUI.
- Ensured that nant test target passes.
This fixes mantis #3210. Thank you, tlaukkan!
r8539 | chi11ken | 2009-02-20 20:32:25 -0700 (Fri, 20 Feb 2009) | 1 line
Add copyright headers. Minor formatting cleanup.
r8538 | chi11ken | 2009-02-20 20:00:17 -0700 (Fri, 20 Feb 2009) | 1 line
Update svn properties.
r8537 | melanie | 2009-02-20 17:14:47 -0700 (Fri, 20 Feb 2009) | 5 lines
Thank you, robsmart, for a patch that allows the shard to be set. The built-in default is OpenSim, unless a user server url is given, in which case that is used, unless "shard" is also given, in which case shard takes precedence. The default in OpenSim.ini is "OpenSim" for compatibility.
r8536 | idb | 2009-02-20 15:56:40 -0700 (Fri, 20 Feb 2009) | 3 lines
- Apply
- Fixes NHibernate problem where prim contents show as textures
- Thanks Tommil!
r8535 | drscofield | 2009-02-20 12:15:39 -0700 (Fri, 20 Feb 2009) | 3 lines
From: Arthur Rodrigo S Valadares <arthursv@br.ibm.com>
Re-fixing remote admin XmlRpc handler registration.
r8534 | lbsa71 | 2009-02-20 10:18:07 -0700 (Fri, 20 Feb 2009) | 2 lines
- Renamed and encapsulated m_sceneGraph as SceneGraph for ccc
r8533 | lbsa71 | 2009-02-20 09:47:31 -0700 (Fri, 20 Feb 2009) | 1 line
- Upped VersionInfo to 0.6.3 and in the process, changed assemblyinfo to 0.6.3.* to better track down dll ref and overwrite problems.
r8532 | justincc | 2009-02-20 07:36:53 -0700 (Fri, 20 Feb 2009) | 4 lines
- Apply
- Access NHibernate Manager as read-only property rather than public field
- Thanks Tommil
r8531 | justincc | 2009-02-20 07:04:29 -0700 (Fri, 20 Feb 2009) | 6 lines
- Consistently lock part.TaskInventory as pointed out in
- Not locking causes enumeration exceptions as described in this mantis
- part.TaskInventory needs to be locked for every access as it's a dictionary
- Extra locking will hopefully not cause any major issues - in places where the enumeration of the dictionary performs other lock or long running operations, the dictionary is
cloned instead
r8530 | melanie | 2009-02-20 05:48:46 -0700 (Fri, 20 Feb 2009) | 2 lines
Revert previous commit
r8529 | melanie | 2009-02-20 05:15:40 -0700 (Fri, 20 Feb 2009) | 2 lines
Committing interface and stubs for IM interception
r8528 | mikem | 2009-02-20 00:40:36 -0700 (Fri, 20 Feb 2009) | 2 lines
Thanks DoranZemlja for a patch implementing non-shortcircuiting in logical and and logical or in LSL. Fixes Mantis #3174.
r8527 | mikem | 2009-02-19 21:55:09 -0700 (Thu, 19 Feb 2009) | 1 line
Update TESTING.txt. Mantis #3174.
r8526 | diva | 2009-02-19 21:15:10 -0700 (Thu, 19 Feb 2009) | 2 lines
Safe to remove remoting_listener_port out of OpenSim.ini.
r8525 | diva | 2009-02-19 20:39:50 -0700 (Thu, 19 Feb 2009) | 2 lines
THE BIG ANTI-REMOTING SCHLEP -- StartRemoting is no more. Sims in older versions will have a hard time communicating with sims on this release and later, especially if they haven't transitioned to RESTComms at all. There's still some cleanup to do on assorted data structures, but the main functional change here is that sims no longer listen on remoting ports.
r8524 | chi11ken | 2009-02-19 19:33:54 -0700 (Thu, 19 Feb 2009) | 1 line
Update svn properties, add copyright headers, minor formatting cleanup.
r8523 | lbsa71 | 2009-02-19 19:26:27 -0700 (Thu, 19 Feb 2009) | 1 line
- Another stab at removing AssetServer.exe dependencies
r8522 | diva | 2009-02-19 17:18:18 -0700 (Thu, 19 Feb 2009) | 1 line
This moves the 2 friends-related interregion messages out of OGS1 and into the FriendsModule. No functional changes. Those messages were sent over XMLRPC, and that's how it continues to be for now. Just moving this couple of interregion messages out of OGS1, in preparation for the big shlep ahead.
r8521 | lbsa71 | 2009-02-19 13:03:17 -0700 (Thu, 19 Feb 2009) | 4 lines
- Fixed erroneously reverted xmlns
- sigh*
r8520 | lbsa71 | 2009-02-19 12:32:53 -0700 (Thu, 19 Feb 2009) | 1 line
- Reverted the AssetServer fix, apparently something was dependent on IAssetDataPlugin being in OpenSim.Data
r8519 | lbsa71 | 2009-02-19 12:04:51 -0700 (Thu, 19 Feb 2009) | 5 lines
- Moved the AssetStreamHandlers to OpenSim.Framework.Servers
- And there, all refs to OpenSim.Grid.AssetServer.exe gone.
/me takes a bow.
r8518 | lbsa71 | 2009-02-19 11:57:59 -0700 (Thu, 19 Feb 2009) | 1 line
- moved the Get/PostAssetStreamHandler to the Servers namespace... slowly getting there...
r8517 | lbsa71 | 2009-02-19 11:53:43 -0700 (Thu, 19 Feb 2009) | 2 lines
- Split RestService.cs into GetAssetStreamHandler.cs and PostAssetStreamHandler.cs - then killed off original (misnomed) file.
- Really, who wrote this jurassic shit code all with totally wrong file names? Ah yeah, that'd be me. Sorry.
r8516 | lbsa71 | 2009-02-19 11:48:46 -0700 (Thu, 19 Feb 2009) | 2 lines
- Changed Prebuild.xml back to specifying xmlns
- Consequentially, dropped Prebuild solution from Prebuild.xml as the 1.7 schema does not allow for more than one solution per xml file. (*rolls eyes*)
r8515 | lbsa71 | 2009-02-19 11:40:32 -0700 (Thu, 19 Feb 2009) | 2 lines
- Extracted IAssetData and moved it to OpenSim.Framework to prepare to get rid of ugly CoreModules dependency on AssetServer.exe
- And yes, the IAssetDataPlugin is misnomed, which became apparent on extracting it.
r8514 | justincc | 2009-02-19 11:31:45 -0700 (Thu, 19 Feb 2009) | 4 lines
- Apply
- Fixes NHibernate overflow exception when saving some objects (under at least PostgreSQL 8.3)
- Thanks Tommil!
r8513 | lbsa71 | 2009-02-19 11:27:19 -0700 (Thu, 19 Feb 2009) | 2 lines
- I think it actually works now. Only that AssetService weirdness left to fix.
- Ignored some gens
r8512 | justincc | 2009-02-19 11:09:10 -0700 (Thu, 19 Feb 2009) | 6 lines
- Apply
- Changes varchar(36) columns to UUID type in MSSQL - this will be much more efficient
- ===As always, please, please backup your database before applying this patch===
- Thanks Ruud Lathrop (for the patch) and StrawberryFride (for the review)
r8511 | lbsa71 | 2009-02-19 11:01:33 -0700 (Thu, 19 Feb 2009) | 8 lines
- Okay, so finally got my head around this. Problem is that upstream Prebuild copied dlls promiscuously, and this led to the references being all mixed up (/bin dlls overwritten by different versions on every csc)
- Something that thus needs fixing is the fact that ProjectReferences has to be marked
<ProjectReference> <Private>False</Private> </ProjectReference>
but that is not configurable in the upstream Xml Schema. I've hardcoded it in our repo for now.
r8510 | justincc | 2009-02-19 10:57:40 -0700 (Thu, 19 Feb 2009) | 3 lines
- Fix
- Make it possible once again to set a console log level threshold in OpenSim.exe.config
r8509 | justincc | 2009-02-19 10:19:08 -0700 (Thu, 19 Feb 2009) | 2 lines
- refactor: Rename new class AssetGatherer to UuidGatherer to reflect what it actually does
r8508 | justincc | 2009-02-19 10:08:00 -0700 (Thu, 19 Feb 2009) | 4 lines
- Do deep inspection when saving inventory items in order to capture all the necessary assets (textures, objects within objects, textures referenced in scripts contained in
objects contained in another object, etc.)
- Not yet ready for general use
http://opensimulator.org/wiki/0.6.4-release
Welcome to "The Debugging Book"! Software has bugs, and finding bugs can involve lots of effort. This book addresses this problem by automating software debugging, specifically by locating errors and their causes automatically. Recent years have seen the development of novel techniques that lead to dramatic improvements in automated software debugging. They now are mature enough to be assembled in a book – even with executable code.
from bookutils import YouTubeVideo
YouTubeVideo("-nOxI6Ev_I4")
You can use this book in four ways:
You can read chapters in your browser. Check out the list of chapters in the menu above, or start right away with the introduction to debugging or the chapter on how debuggers work (interactive debuggers).
You can use the code in your own projects. You can download the code as Python programs; simply select "Resources → Download Code" for one chapter or "Resources → All Code" for all chapters. These code files can be executed, yielding (hopefully) the same results as the notebooks. Once the book is out of beta, you can also install the Python package.
You can present chapters as slides. This allows for presenting the material in lectures. Just select "Resources → View slides" at the top of each chapter. Try viewing the slides for how debuggers work.
This work is designed as a textbook for a course in software debugging; as supplementary material in a software testing or software engineering course; and as a resource for software developers. We cover fault localization, program slicing, input reduction, automated repair, and much more, illustrating all techniques with code examples that you can try out yourself.
This book is work in progress, with new chapters being released every week. To get notified on updates, follow us on Twitter.
News from @Debugging_Book
This book is written by Andreas Zeller, a long-standing expert in automated debugging, software analysis, and software testing. Andreas is happy to share his expertise and to make it accessible to the public.
mybinder.org imposes a limit of 100 concurrent users for a repository. Also, as listed on the mybinder.org status and reliability page, service interruptions can occur.
There are alternatives to mybinder.org; see below.
If mybinder.org does not work or match your needs, you have a number of alternatives:
For details, see our article on Using Debuggingbook Code in your own Programs. Enjoy!
Technically, yes; but this would cost money and effort, which we'd rather spend on the book at this point. If you'd like to host a JupyterHub or BinderHub instance for the public, please do so and let us know.
We use Python to implement our tools and techniques because we can get things done quickly. Building an interactive debugger in Python is less than 100 lines of code and took us 2-3 days; doing the same for C is tens of thousands of lines and a year-long project. Instrumenting code, say for dynamic slicing, gets us savings of similar magnitude. Also, Python code allows us (and you) to focus on the main concepts, rather than implementation details that are out of place in a textbook.
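The "interactive debugger in less than 100 lines" claim is plausible once you know Python's built-in tracing hook, `sys.settrace`, on which such debuggers are built. Here is a minimal sketch under that assumption — not the book's implementation; the HTML-stripping sample function is just an illustration:

```python
import sys

log = []  # records (function name, line number) for every line executed

def trace_lines(frame, event, arg):
    """A trace function: the hook on which interactive debuggers are built.
    Python calls it for 'call', 'line', 'return', and 'exception' events."""
    if event == "line":
        log.append((frame.f_code.co_name, frame.f_lineno))
    return trace_lines  # returning the function keeps tracing this frame

def remove_html_markup(s):
    """Small sample function to observe."""
    out = ""
    tag = False
    for c in s:
        if c == "<":
            tag = True
        elif c == ">":
            tag = False
        elif not tag:
            out += c
    return out

sys.settrace(trace_lines)              # install the hook...
result = remove_html_markup("<b>abc</b>")
sys.settrace(None)                     # ...and remove it again

# log now contains one entry per executed line of remove_html_markup
```

A full debugger adds stepping, breakpoints, and variable inspection on top of exactly this hook, by pausing inside the trace function and asking the user what to do next.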
Having said this, many of the techniques in this book can also be applied to C and other code. This is notably true for black-box techniques such as reducing inputs or changes or generalizers; these are all language-agnostic. Tools related to the debugging process such as bug tracking or mining repositories are language-agnostic as well. Finally, in all chapters, we provide pointers to implementations in and for other languages, for instance for assertions or program repair.
For changes to individual chapters, see the "Last change" link at the end of a chapter. For the
debuggingbook Python package, see the release notes for details. In a course setting, we meet (in person or in Zoom) to discuss experiences with past notebooks and discuss future notebooks.
We have the students work on exercises from the book or work on larger projects (automated debugging).
We will compile a number of tours through the book for various audiences. Our Sitemap lists the dependencies between the individual chapters.
Download the Jupyter Notebooks (using the menu at the top) and adapt the notebooks at your leisure (see above), including "Slide Type" settings. Then,
You can tweet to @debugging_book on Twitter, allowing the community of readers to chime in. For bugs you'd like to get fixed, report an issue on the development page.
We prioritize issues as follows:
We're glad you ask that. The development page has all sources and some supplementary material. Pull requests that fix issues are very welcome.
Again, we're glad you're here! We are happy to accept
See our Guide for Authors for instructions on coding and writing.
https://nbviewer.org/github/uds-se/debuggingbook/blob/master/docs/notebooks/index.ipynb
User talk:Modusoperandi
From Uncyclopedia, the content-free encyclopedia
Welcome to my talkpage. New stuff at the bottom, please.
PLS queries
so the PLS starts the 5th now? are we all set on judges? would you care to render some sort of official quote for the unsignpost as this year's poo-master? 02:06, October 1, 2009 (UTC)
- The what is on the when? This is the first I've heard about it. Sir Modusoperandi Boinc! 02:42, October 1, 2009 (UTC)
- a nice man on the street told me you were organizing it. he also tried to show me what he had under his trench coat but i was in such a hurry to get here that i told him i'd stop back by on the way home. 03:00, October 1, 2009 (UTC)
- The sordid details are here. I can mark you down as an emergency judge if you'd like (and if you wouldn't I can mark you down as two emergency judges).
- And you want a quote:
"This Poo Lit will be the Greatest PLS ever. Anyone who says otherwise is as much of a liar as they are dumb, and they are plenty dumb. Ergo, they are also plenty liar. That made more sense in my head."Sir Modusoperandi Boinc! 03:11, October 1, 2009 (UTC)
- well, i judged last go-around, and as most of the site's accomplished writers are already judging, i'm looking forward to expecting a clean sweep, and also the subsequent crushing disappointment associated with losing in every category. anyway, thanks for the quote, which will be mangled and mis-attributed as usual. 04:06, October 1, 2009 (UTC)
- I've already picked the winners. Sorry. Sir Modusoperandi Boinc! 04:18, October 1, 2009 (UTC)
- i see. well, it seems i've gone and purchased this rather large pheasant as a bribe to no avail after all. well, since i don't need it, i'll just go ahead and leave it right here on your talk page... 04:23, October 1, 2009 (UTC)
- *munch munch munch* I'll see what I can do. Sir Modusoperandi Boinc! 05:03, October 1, 2009 (UTC)
Po.jpg
There you:38, October 1, 2009 (UTC)
- Hurrah! Sir Modusoperandi Boinc! 13:01, October 1, 2009 (UTC)
Project Doggystyle with the Devil Revisited
I've just came back from a series of important events in my usually unimportant, uneventful life. But at last, the Lord has finally shown his grace by not coughing up his goddamn hairballs every four hours. Now that I'm ready to complete the project, please unhuff the articles.
Sincerely,
markchung
- Okay. Which articles? Sir Modusoperandi Boinc! 21:08, October 1, 2009 (UTC)
- I'm assuming District 9 (human) and related. --Mn-z 04:32, October 2, 2009 (UTC)
- Sure. Those. I undeleted two. If there were more, I have no knowledge of them. I'm like The Absentminded Professor, except without the doctorate. Sir Modusoperandi Boinc! 05:07, October 2, 2009 (UTC)
- I also believe that Category:MNU Approved and Category:MNU Banned should be unhuffed. --Mn-z 05:41, October 2, 2009 (UTC)
- Done. Keep in mind, markchung, that it needs to be more betterer, or it'll go away again. If you need lots of time, you can move it under your userpage. Also, don't forget to sign talkpages with four tildes (~~~~). Sir Modusoperandi Boinc! 05:46, October 2, 2009 (UTC)
Why?
I would like to inquire why you would huff the category People being hunted by ninja assault kittens Insight11 16:13, October 2, 2009 (UTC)
- Because no people are hunted by ninja assault kittens, I assume. Sir Modusoperandi Boinc! 17:17, October 2, 2009 (UTC)
Modus, why do people do bad things? --Pleb SYNDROME CUN medicate (butt poop!!!!) 17:30, October 2, 2009 (UTC)
- Because people are bad. Bad people! Off the furniture! Bad! Bad! Maybe that was pets. Sir Modusoperandi Boinc! 21:03, October 2, 2009 (UTC)
- Because Adam ate the forbidden fruit in the garden of Eden. --Mn-z 04:49, October 3, 2009 (UTC)
- No. It's because, not knowing good & evil (that is to say, the positive or negative consequences of actions) in the first place, they disobeyed God (by eating the fruit that would give them the knowledge that they shouldn't eat the fruit). The bad wasn't the eating, it was the disobedience (from two people made perfect by a perfect God who also made the devil that rebelled). If I could condense the Bible into one word (particularly the OT and especially the Pentauch), it would be "Obey". Sir Modusoperandi Boinc! 04:58, October 3, 2009 (UTC)
- Isn't that what I said? --Mn-z 05:37, October 3, 2009 (UTC)
- If I could condense this talk page into one word, it would be "Hamburgers." Not that there's anything about hamburgers on this page, but I'm kinda hungry and they're on my:55, Oct 3
- I just finished a Whopper. I'm just sayin'. Sir Modusoperandi Boinc! 06:21, October 3, 2009 (UTC)
- Why? do I love the title of this section so much? WHY???PuppyOnTheRadio 16:29, October 17, 2009 (UTC)
Trouble
Look, I've run out of ideas for Synesthesia, and this is an article that can be featured, surely. But I need help. Hey, I even included John Stamos for you.-Almost Sir Random Crap
- I'll take a look at it when I'm in a better mood. *Grumble* Sir Modusoperandi Boinc! 23:07, October 2, 2009 (UTC)
Relax
First, I think I'm pretty relaxed. Compulsive maybe, but probably relaxed. Second, WFP is important, at least to the status of Uncyclopedia. Third, I get aggravated when established and "valuable" editors and users of Uncyclopedia violate what is arguably its most important rule, leading me to wonder why I didn't leave before the average user became considered effectively worthless due to the high volume of new unfunny users. But I otherwise consider myself pretty dispassionate.
Thanks for writing anyways. Care to comment? I invite you wholeheartedly, Rbpolsen ♦ Come Rant · Come Look at all My Crap 01:25, October 8, 2009 (UTC)
- Hey, you do realize that you're getting mad on the internet? That's like crying over rainwater lost down a gutter, or falling in love with a telephone pole. It looks silly and it never goes anywhere. --Pleb SYNDROME CUN medicate (butt poop!!!!) 02:10, October 8, 2009 (UTC)
- But seriously folks, speaking of looking silly and never going anywhere, Syndrome's here! But seriously folks, I'm here all week. But seriously folks, try the steak. (Drummer plays Modusoperandi out) Sir Modusoperandi Boinc! 04:07, October 8, 2009 (UTC)
- Things on VFP either work or they don't, and there's nuffin' you can do about it. If you have to explain the joke, you've already lost. Have you considered going Goth? That's what I did. Consider you going Goth, I mean. Don't. You look terrible in velvet. I, on the other hand, look Gothtastic, and magnificently, depressingly so. It's too bad that the only vampires around these days are vegetarian abstinence vampires. This century sucks. Sir Modusoperandi Boinc! 04:12, October 8, 2009 (UTC)
Don't do it when you want to go to it.
Seeing as how this'll be my first experience judging PLS, could you point me in the general direction of some judging guidelines? Is there a standard procedure I'm supposed to follow? Also, your talkpage seems a little blue. Cheer up Modus' talkpage, everything's gonna be okay. :24, 8 Oct
- ficksed • FreddThe Redd • 20:34 October 8 '09
- Ummm.... didn't I fix it right before you fixed it in a slightly different way? How do we keep overwriting each other? What the fuck is going on?
pillow talk 20:38, October 8, 2009 (UTC)
- No kidding? • FreddThe Redd • 20:40 October 8 '09
- I'm against standardizing how people judge, as comedy won't fit in a box, man.
- Have you done Pee Reviews? Judging like that seems to work for most. I, being a simple man, use a simpler system. I read each one (doing my best to not know who wrote it...going so far as to tape off the top part of the monitor), then go get drunk or punch a cop downtown or something. When I recover I read them again, and rate them from one to ten. If at the end I'm stuck with tied pages, I read them again and again, until the tie resolves itself in my head (like foxy boxing but with fat people and words).
- And my talkpage isn't blue. It's grumpy. Sir Modusoperandi Boinc! 20:38, October 8, 2009 (UTC)
- Anyplace in particular where I'd put my completely ignorant and arbitrary decisions? Or do we just all shout them out on the count of three? Also, your talkpage should stop being so grumpy. At least it's not a ginger. :46, 8 Oct
- When the time for judging is here, I'll put the link to the right place on your talkpage.
- And I'm grumpy because I forgot to put a page of mine on Pee Review so that I could VFH it in time for the anniversary of the thing that the page was about. Double-grumpy because the exact same thing happened this time last year. *Grumble* Plus I have a sore shoulder and can't even complain about it out loud since I'm pretty sure that it's because, rather than fighting Nazis or something equally cool, I slept on my arm. Sir Modusoperandi Boinc! 21:00, October 8, 2009 (UTC)
- Slept on your arm because you were exhausted from fighting Nazis, right? Just arbitrarily feature your article on the appropriate day. It's not like anybody reads Uncyclopedia anyways. :11, 8 Oct
- I does • FreddThe Redd • 21:13 October 8 '09
- Naw. I already featured me. Any chicanery from there would be a step down. Sir Modusoperandi Boinc! 21:16, October 8, 2009 (UTC)
- chicanery? wuzzat? • FreddThe Redd • 21:18 October 8 '09
Warn him
Listen, I do not know if you are an admin or not, but please warn or ban Killer 3.14. He keeps screwing and fucking up the forums, he hasn't made one article, he's already been banned 3 times, and he only cares about his retarded game. Finally, he acts like a 10-year old with ADHD. It's your choice, just please make him do something productive to our amount of 25,000 articles.-Almost Sir Random Crap
- Done. Sir Modusoperandi Boinc! 21:32, October 8, 2009 (UTC)
Question:
Are you planning on writing any articles for PLS? I know you're "running" it and all, but the rules don't limit you on that one (I don't think so, anyway...). I'll take you on in best alt. namespace. Bring it. • • • Necropaxx (T) {~} Friday, 07:10, Oct 9
- No. That would be wrong. Besides, I just bought a new mirror and can't stop gazing at my reflection in it. Full length mirror. I'm really quite breathtaking. Sir Modusoperandi Boinc! 07:58, October 9, 2009 (UTC)
Can has I me PLS entry proofreading?
being funny foreigner who speaking English for his 3rd language, and verbally retarded, so you can makes exception —Mahmoosh (Talk•stalk•Boobs•Anus•Poop) 10:54 October 9 '09
- I would say no. The idea is that you do it on your own. Having other people do stuff for your page is the opposite of that. Sir Modusoperandi Boinc! 13:06, October 9, 2009 (UTC)
- <begging>Oh, please. Writing humor in a language that is not your first is hard enough, believe me (and I'm not joking) please, don't make it harder for me, I just want somebody to correct the spelling grammar and mistakes</begging> you can make the proofreading thing an exception for foreigners —Mahmoosh (Talk•stalk•Boobs•Anus•Poop) 13:26 October 9 '09
- (Use Word's spellcheck/grammar thingy or get a friend or parent to check it. That's what I would do if I didn't know english so goodly. Sir Modusoperandi Boinc! 14:04, October 9, 2009 (UTC))
- I'm actually mostly on my mobile phone (no Word spealtshek) My parents' English is not by any means better than mine (they're foreigners too, remember?) and I only have 2 friends whose English is better than mine, and both of them are currently out of country. Dude, it's a total disaster, I must have my article spellchecked —Mahmoosh (Talk•stalk•Boobs•Anus•Poop) 14:30 October 9 '09
- I hate to sound like a jerk but...And so? Sir Modusoperandi Boinc! 14:48, October 9, 2009 (UTC)
- Do you have access to a computer somewhere? Most typing programs have some sort of spell check. Even the firefox browser has spellcheck. --Mn-z 15:04, October 9, 2009 (UTC)
We haz invalidz.
For PLS, Hydronium Ion has submitted an article for Best Alt. Namespace that was both created before PLS started, and plagiarized. What should the next course of action be? Where the Wild Colins Are - LET THE WILD RUMPUS START! 22:21, October 9, 2009 (UTC)
- Taken care:31, Oct 9
- What Leddy said. Sir Modusoperandi Boinc! 23:26, October 9, 2009 (UTC)
- What about this entry having been created before the competition and having been pee reviewed? MegaPleb • Dexter111344 • Complain here 23:29, October 9, 2009 (UTC)
- Yeah. Um. That. Sir Modusoperandi Boinc! 23:38, October 9, 2009 (UTC)
- That one article? With the words that were previously written somewhere else? I think That Balloon fellow did something wrong with that thing:53, 10 Oct
- I told him what to do on his talk page. MegaPleb • Dexter111344 • Complain here 15:00, October 10, 2009 (UTC)
- What's he supposed to do on his talkpage? Sir Modusoperandi Boinc! 15:11, October 10, 2009 (UTC)
- Fix that thing he did and then erase that other thing that wasn't supposed to be there:20, 10 Oct
- With his cock? MegaPleb • Dexter111344 • Complain here 15:23, October 10, 2009 (UTC)
- If by cock you mean penis, then no. Don't be ridiculous, this is a wiki. I'm sure he'll just use voodoo. :28, 10 Oct
Nice Block
You can haz cookie. ~Joey~ {Talk to meh} 06:38, October 10, 2009 (UTC)
- Give yourself a cookie too. Sir Modusoperandi Boinc! 06:40, October 10, 2009 (UTC)
Whoops. Sorry
I sincerely apologize for entering the article that I created prior to PLS. I didn't read the rules properly. It won't happen again. --BlueSpiritGuy 14:29, October 10, 2009 (UTC)
- We do original stuff here. Remember that. Sir Modusoperandi Boinc! 14:49, October 10, 2009 (UTC)
- Wrong thing to say, Modus. This is the guy who wrote an original article, but he did so before PLS. Plus he had it Pee Reviewed. The other guy c/p an article. MegaPleb • Dexter111344 • Complain here 14:53, October 10, 2009 (UTC)
- Wups. I was just coming back to change my comment to, simply, "Noob", too. Sir Modusoperandi Boinc! 15:10, October 10, 2009 (UTC)
- Oh so I'm also not allowed to have it Pee Reviewed? Mmmm, think I'll have to start reading rules and stuff thoroughly. Anyway, sorry. And yes, I deserve to be called n00b.--BlueSpiritGuy 15:27, October 10, 2009 (UTC)
- Don't worry, we were all noobs once. Except for Modus. He was always here, just waiting for Uncyclopedia to arrive. Oct
And the verdict is.. Guildy
Wow, the only contender for the best article category is Guildy. Dude, something is seriously wrong. I kid you not.. And it's not the only category with one entry.. Something is wrong, I tell you • FreddThe Redd • 16:28 October 10 '09
- Someone else is bound to enter the category. If need be, I will. MegaPleb • Dexter111344 • Complain here 16:29, October 10, 2009 (UTC)
- If you would like to panic, go right ahead. I, on the other hand, am planning on running the worst Poo Lit ever. It'll be my Waterloo, but with Napoleon instead of ABBA. Sir Modusoperandi Boinc! 20:11, October 10, 2009 (UTC)
- Whats a "napoleon"? • FreddThe Redd • 20:22 October 10 '09
- It's a kind of brandy. Sir Modusoperandi Boinc! 20:26, October 10, 2009 (UTC)
- I thought it was a special type of condoms • FreddThe Redd • 21:07 October 10 '09
Yo, I'll enter. When's the deadline again? February? —Syndrome (Penis•Penis•Penis•Penis•Penis) 20:46, October 10, 2009 (UTC)
- No, the Xth of Farch • FreddThe Redd • 21:07 October 10 '09
I might hurl up a literary fur ball for this. No promises. Congrats on O'Reilly Sir, I was slightly confused when it first went up then it grew on me.--10:55, October 11, 2009 (UTC)
And just to be clear on the issue
I delete any shock picture I encounter, as I did lately with Islam related articles, I try to keep a minimal respect for all defiled dead people in general and not just dead Jews. That picture had no place being here ~
18:17, October 11, 2009 (UTC)
- 0_0 that's very nice of you, thanks • FreddThe Redd • 19:07 October 11 '09
- I have been noticing some double standards regarding the Jews. For example, User:Roman Dog Bird was requested to remove a swastika from his sig, but no-one had a problem with him using the words "nigger" and "faggot". And, UnNews talk:Dictators hail Jewish peace plan was called "a piece of political slander" when attacks 10 times worse against the USA and Bush (and everybody else) are permitted.
- As for the dead people images, it seems we don't have many, the only ones I found were this crudely drawn lynching and Saddam Hussein right before being executed, so I wouldn't say that deletion was bad per se. --Mn-z 19:59, October 11, 2009 (UTC)
- Gosh, it's almost like Mordillo's jewish or something. If Orian or Fag had a problem with the use of the word faggot, or if a black uncyclopedian took offense to the word nigger, by all means they can deal with that, but those are words. The differentiation here seems to be between words and pictures. Words are one thing; pictures are:14, Oct 11
- But Leddy, a picture is a merely one thousand words. I did the math. Sir Modusoperandi Boinc! 21:15, October 11, 2009 (UTC)
- What if I were to crop together two images? Wouldn't the resulting picture be worth 2000 words? Clearly, each pixel must be worth a given number of words. If we assume the 1000 word picture is the default size of an MS Paint bitmap, ie. 640x384, then each 245.76 pixels equals one word. So, a large 1024x768 image would be worth 3,200 words, while a 100x100 animated gif is worth 40.7 words per frame. --Mn-z 21:31, October 11, 2009 (UTC)
- also, mnbz, the use of swastikas is federally outlawed in germany and some other countries. swastikas were used by the nazi reich and are now used by neonazis, and that gives the swastika a political dimension. on the other hand the words nigger and faggot are being used inside black and gay communities to refer to each others and are only considered taboos if used by outsiders.. yes, the three subjects are equally demeaning and disrespectful, but only the swastika has the "political charge" • FreddThe Redd • 20:22 October 11 '09
- The fact that "nigger" and "faggot" are used by black people and homosexuals respectively does not make them any less racist. The swastika was used in non-political contexts up until the 1930's and 1940's, and has more non-racist usage than the word "nigger", which was always derogatory, if less so in the past. --Mn-z 21:11, October 11, 2009 (UTC)
- Also, my issue isn't with the pic per se, it's with the scared cows and double standards. --Mn-z 21:17, October 11, 2009 (UTC)
- yes, i totally agree with you. i've been called a sandnigger on several occasions, and it hurts.. double standards suck of course, but that's how it is • FreddThe Redd • 21:27 October 11 '09
- Scared what now? Sir Modusoperandi Boinc! 22:02, October 11, 2009 (UTC)
A sacred cow
- Sacred cows are golden statues which the Jews worship at high places in Dan and Bethel. I'm also told that Hindus and some sub-Saharan African tribes also worship sacred cows. --Mn-z 01:02, October 12, 2009 (UTC)
- You had "scared" before. I was poking fun at that. (Also, look at my edit comment) Sir Modusoperandi Boinc! 01:07, October 12, 2009 (UTC)
- Yes, but I still think we have too much cow worship on this wiki. --Mn-z 01:16, October 12, 2009 (UTC)
- What did you expect? This is the Chubbychaserswiki, right? Oh. Crap. I've been wasting my time on the wrong wiki. Sir Modusoperandi Boinc! 01:28, October 12, 2009 (UTC)
This section of my talkpage isn't funny at all. You people suck. (and by "you people" I mean "you people". And by the second "you people" I mean "Uncyclopedians") Sir Modusoperandi Boinc! 21:15, October 11, 2009 (UTC)
- On a more serious note, I'm also offended by Orian57's constant degrading of my sexual <s>perversion</s> orientation. --Mn-z 21:21, October 11, 2009 (UTC)
- That's because you're a monster. Sir Modusoperandi Boinc! 22:02, October 11, 2009 (UTC)
There's a picture of Muhammad about to suck cock on User:An Ape that Only Exists on Thursdays' userpage and talkpage. Will that be deleted cause its a shock image? --121.214.60.1 22:42, October 11, 2009 (UTC)
- I'm too busy being stunned by the fact that Ape has a userpage to be shocked by whateverelseitwas that you said. Sir Modusoperandi Boinc! 23:10, October 11, 2009 (UTC)
- Also, the image survived VFD so it's unlikely that an admin would ignore that and delete the image on a whim. —Syndrome (Penis•Penis•Penis•Penis•Penis) 23:16, October 11, 2009 (UTC)
- can I say that Ape's shock image is annoying as hell? If I'm ever delivering signpost again, unless there's nobody around I'm skipping his page! -- Soldat Teh PWNerator (pwnt!) 02:11, Oct 12
- No, you cant. Pup t 02:17, 12/10/2009
- I don't plan on just skipping it and not telling everyone. i'll say I'm done except for 1 person, whose page is not safe for work-- Soldat Teh PWNerator (pwnt!) 02:20, Oct 12
Moar Issues
Looking back on the history of events, it appears the questionable image was uploaded before Mordillo corrected Clemens177 in the vote dramaz. It appears that Mordillo recently resurrected Clemens177, so I assume he realized that. And the ban time was reasonable for VFH vote drama.
Although, this does raise the issue of overly nationalist (or otherwise group-identified) admins interpreting any overly edgy humor against a certain group as a personal insult. I think that could be minimized if we discouraged broadcasting one's nationality/sexual orientation/whatever-group-affiliation in sigs.
And while I'm complaining about policy, several users have been insulting me by mocking and degrading my sexual
perversion orientation. I think it is because I voted against their articles on VFH one time, or nomed their articles on VFD, or put their IP address on ban patrol, or something else they are having passive-aggressive drama over. It clearly can't be them actually finding pregnancy "humor" unfunny/disturbing. --Mn-z 17:47, October 12, 2009 (UTC)
- So you're saying that we shouldn't advertise our backgrounds because it would let people know that we're involved in some sort of worldwide conspiracy? Sir Modusoperandi Boinc! 18:08, October 12, 2009 (UTC)
You're an admin, right?
If so, please tell the mortals that they won't be banned for improving an article. I agree that you shouldn't significantly change the content of something while it's being voted on, but minor fixings won't hurt anyone. And tell POTR that I don't have to let him play with my fire truck. It's mine! I brought it! --Pleb SYNDROME CUN medicate (butt poop!!!!) 04:59, October 12, 2009 (UTC)
- SHARE! My fire truck Pup t 05:06, 12/10/2009
- Nooooo! I'm telling! --Pleb SYNDROME CUN medicate (butt poop!!!!) 05:10, October 12, 2009 (UTC)
Cookie
-- 18:55, October 12, 2009 (UTC)
- Hurrah! Also, you're doing it the hard way. You don't need to post all the code, just its name and modifiers, like so: {{cookie|Name=Killer 3.14}}. Sir Modusoperandi Boinc! 23:34, October 12, 2009 (UTC)
- Or maybe {{cookie|Killer 3.14|Thank you for your help in general and don't play in traffic}}. I think that would give the alternate message. WHY???PuppyOnTheRadio 03:10, October 19, 2009 (UTC)
- Apparently, yes. Sir Modusoperandi Boinc! 03:16, October 19, 2009 (UTC)
This!
Scared the living bejesus out of me. I looked at it, went away, and then came back to look at it and somebody had apparently huffed it! Pup t 23:38, 12/10/2009
- I know, right?! I was all, "Hey that's not supposed to be there" and "For Poo Lit, as it mentioned on the Poo Lit page for Poo Lit, pages for Poo Lit are supposed to be in userspace". Then I was all, like, "Wuppah!" and I moved it. Sir Modusoperandi Boinc! 23:42, October 12, 2009 (UTC)
- It's spelled "Huttah." pillow talk 23:51, October 12, 2009 (UTC)
- Yeah, and someone explained it to me and I thought to myself, "I should fix that, but nobody's going to change it for a day or so. Especially someone who hasn't been around for a while." And then I looked at it this morning and realised that it had been changed, and so I went away for a moment to get my breakfast, came back and it had gone, but in an extremely polite way. Pup t 23:58, 12/10/2009
- If you'd been a noob I would've put that instead. But you're not. Moron. Sir Modusoperandi Boinc! 00:01, October 13, 2009 (UTC)
- Funny you should say that, because there is a school of thought that I am a n00b which given that RCMurphy and I are on par makes sense. Pup t 00:11, 13/10/2009 14:16, October 13, 2009 (UTC)
- If I were Modus, I would rule that you may write the article as long as you didn't use the words "arse" or "iPhone." But I'm not Modus, so basically I'm just wasting everyone's time here. pillow talk 14:21, October 13, 2009 (UTC)
- I came up with what I thought was a good solution, what do you think to it Modus/anyone who happens to be passing? —The preceding unsigned comment was added by ChiefjusticeDS (talk • contribs)
- I think you forgot to sign your post and your personal hygiene leaves something to be desired. pillow talk 16:33, October 13, 2009 (UTC)
- Well, anything apart from that? --ChiefjusticeDS 19:05, October 13, 2009 (UTC)
- That page isn't blank. It's even got a joke. An unmolested page or an unedited spork, maybe. This, no. Sir Modusoperandi Boinc! 19:08, October 13, 2009 (UTC)
- That's just throwaway guff I put there to avoid being a total waste of someone's clicking time (look at the edit history if you can be bothered). It's not a page; it's not 'XTZ is gay'-as-content, it's just not an article. Yet. Well, I'll post it eventually anyway (as per the Chief's advice, blame him if it isn't funny - I'll leave out that iphone gag, which would probably do better as a surprise somewhere else, or forgotten entirely), and that shall be my revenge. Hope PLS turns over some better rocks than mine. Myocardialinfarction 00:26, October 14, 2009 (UTC)
Award from UN:REQ
- Thanks, Madmax. I'd forgotten that I'd done that thing that I didn't know I'd done in the first place. Sir Modusoperandi Boinc! 19:38, October 13, 2009 (UTC)
Cookie and username thing
That guy pointing at you with the username template doesn't work and I had to do the cookie the hard way so it says "Thank you for your help in general.". Just thought you'd like to know.-- 20:29, October 13, 2009 (UTC)
- Oh. Have you considered being lazier? Sir Modusoperandi Boinc! 20:56, October 13, 2009 (UTC)
- No. I just copied and pasted and added text.-- 21:18, October 13, 2009 (UTC)
- Don't try to dazzle me with your Big City learnin'! Sir Modusoperandi Boinc! 22:01, October 13, 2009 (UTC)
You're probably not aware of this
But you're pretty awesome. Just thought you should know.:10, 13 Oct
- I'm glad I'm not the only one who noticed. I tried soap, Lava heavy-duty hand cleaner, even acetone, but the awesome won't wipe off. Worse, it leaches through my shoes, so behind me is a trail of awesome. It's like a curse, but awesome. Sir Modusoperandi Boinc! 23:53, October 13, 2009 (UTC)
- What's with the compliment? You think Modus is in charge of PLS or something? Oh yeah. Oh My God, MODUSOPERANDI IS THE TOTALLY COMPLETELY MOST AWESOMENESS! (Seriously, I know Optimuschris is a judge so can't win anyway). WHY???PuppyOnTheRadio 21:12, October 16, 2009 (UTC)
Whatever
That's Ok, I've given up on it. I'm focused on Stormtrooper 147-B now. Also, protect or destroy Mudkip fires his lazer!-Almost Sir Random Crap
- Okay. I'm glad I could, um, do nothing at all. Sir Modusoperandi Boinc! 23:53, October 13, 2009 (UTC)
PLS and my sad little entry
It has been bounced back and forth into and out of my userspace a fair bit, and the sound file from Zim is funny and it does improve it, especially with the two of them in there. Of course that would invalidate it from being PLS candidate if it stayed there with the amendments. So, given that you are God when it comes to PLS, I propose the following:
- This get moved back to here and this is my entry into PLS.
- This stays where it is and keep Rev Zim happy, but stays unrelated to PLS. That way Zim is happy, you're happy, I'm happy, and there is harmony in our family again, and I can get back to looking for jail bait cybersex. Pup t 23:08, 13/10/2009
- I didn't invalidate it. Since Zim's edit was the only not-you edit, and his edit was only adding the audio, I "hid" his contribution and let your page in PLS. Then he undid all that. You can have it one way or the other. Either it's in mainspace with the audio (and not in userspace and not entered in PLS), or it's in your userspace without the audio (and not in mainspace, but entered in PLS). Have a chat with Zim if you decide on the latter. Sir Modusoperandi Boinc! 23:53, October 13, 2009 (UTC)
Sorry to have caused all this trouble. Feel free to chastise me publicly and repudiate me:43, October 14, 2009 (UTC)
- Non-issue - you made my life awkward because I did something stupid with something good and you made it better. I'm not a prize whore so I'm happy to take it out of the running and keep the better version of it in place if I have to choose between the two. Pup t 03:31, 15/10/2009
- Puppy why don't you keep it in user space without Zim's work, then after voting's over move it to mainspace with Zim's audio? DAP Dame Pleb Com. Miley Spears (talk) 03:35, October 15, 2009 (UTC)
- Because that would make sense. We're through the looking glass here people. Sir Modusoperandi Boinc! 04:49, October 15, 2009 (UTC)
- Non-issue - It's off the main UnNews page, so it can (and should IMOP) be in the running. It's a fine article by a fine Aussie.:45, October 15, 2009 (UTC)
- Okay, I will re-establish it back in my userspace, hide zim's audio (for the moment) and re-validate my invalidated entry. Which means, thankfully, I can stop working on writing something else. I was afraid I'd have to come up with another original idea then too. Pup t 23:46, 15/10/2009
- Ah, protection... Modus, you know that thing you did with this once before? Can you do it again? Pup t 00:13, 16/10/2009
- Done. Sir Modusoperandi Boinc! 03:50, October 16, 2009 (UTC)
PLS question
Just thought you'd want to know this. I started an article after the PLS began, and an anonymous user edited it: [1]. It was two spelling corrections on a really rough draft that's full of misspellings, grammatical errors and the like (I don't bother worrying about such things in early drafts.) I reverted the edits [2] and later added {{PLS-WIP}} to help prevent it from happening again. I hope this isn't a problem. WHY???PuppyOnTheRadio 00:47, October 16, 2009 (UTC)
- It's okay. Sir Modusoperandi Boinc! 03:45, October 16, 2009 (UTC)
You blocked Electrified Mocha Chinchilla for rickrolling
Was that about what happened at the China forum? • FreddThe Redd • 10:45, October 16, 2009 (UTC)
- No. That one doesn't seem to be the annoying "can't close the window" kind of rickroll. Forum:Of Abe Vigoda and Death was the one I banned him for. Sir Modusoperandi Boinc! 12:47, October 16, 2009 (UTC)
- Checking the history, it is quite annoying. It looks like you need to ctrl-alt-delete and close the web browser to stop it. --Mn-z 16:29, October 16, 2009 (UTC)
- It falls under the "don't be a dick" rule. Hence the ban. Sir Modusoperandi Boinc! 18:03, October 16, 2009 (UTC)
- I push the limits, man. I push 'em real hard. --Hotadmin4u69 [TALK] 07:41 Oct 21 2009
Speaking of problems
When I tried clicking on a link on this talk page right now, I couldn't. When I'd move the cursor over what I wanted to click on, it would jump away before I could click. When I moved the cursor away just a tiny bit, it jumped back. I've run into this before, but was able to figure out it was someone's signature and they fixed it. I'm not sure what it is here, but it's happening again. WHY???PuppyOnTheRadio 17:59, October 16, 2009 (UTC)
- I'm pretty sure it's something you did. Sir Modusoperandi Boinc! 18:05, October 16, 2009 (UTC)
- Any idea what? On the previous time, I finally realized it only happened when I was trying to edit a talk page that had comments from a particular user. I've never had this problem on any page that doesn't have user signatures. WHY???PuppyOnTheRadio 18:09, October 16, 2009 (UTC)
Followup: I just tried clicking on edit for this section (Speaking of problems) and couldn't do it--same problem. I could only edit by clicking the edit tab at top. WHY???PuppyOnTheRadio 18:07, October 16, 2009 (UTC)
- I'm still laying the blame on you. Computer problems are like farts that way. Sir Modusoperandi Boinc! 19:30, October 16, 2009 (UTC)
Penis
Penis penis penis penis penis penis penis penis. --Penis 18:13, October 16, 2009 (UTC)
- So, it turns out that you can say "Penis" on the wiki, but you cannot say "Penis" in your edit summary. I bet you're glad I chose to use your talk page to test that. pillow talk 18:15, October 16, 2009 (UTC)
- That might be why HOTCAT can't add Category:Penis right. --Mn-z 18:19, October 16, 2009 (UTC)
- Thanks. You did this because I clogged your toilet again, right? Sir Modusoperandi Boinc! 19:32, October 16, 2009 (UTC)
Further problems to speak of
There's currently a shortage of awesome. I'm pretty sure it's your fault. :40, 16 Oct
- I doubt that very much. I'm like Ivory soap; almost pure. But pure awesome, instead of pure soap. Okay, a little soap. Sir Modusoperandi Boinc! 20:12, October 16, 2009 (UTC)
- Ah, I love Ivory soap. 99.44% pure soap, and 0.56% badly contaminated soap. pillow talk 20:36, October 16, 2009 (UTC)
- That .56% is mostly rat feces. Sir Modusoperandi Boinc! 20:59, October 16, 2009 (UTC)
- That's the point really. You're hogging all the awesome. Why don't you leave a little for the rest of us? Or rather them, as I'm not qualified to handle it.:21, 16 Oct
- Awesome is like The Force. It's in all living creatures. Especially the funny looking ones. So...you've got a bunch of awesome. Sir Modusoperandi Boinc! 03:52, October 17, 2009 (UTC)
A little cynical...
Forgive me for being a little cynical, but I get the feeling that Padimir Padoffski is a sockpuppet generated to win PLS. It seems odd that only a couple of days after the competition starts, a n00b is created who churns out a number of articles in extremely short succession, and edit #10 is an entry in PLS. I'm happy to be proven wrong, but I'd like to think that n00b prizes are given to n00bs to encourage them to become regular editors, not to sockpuppets for the vainglorious. Completely your call. Pup t 01:17, 18/10/2009
- I have no way of checking (only Sannse seems to, and she only uses it to increase her power by crushing political rivals). To save time, I assume that all noobs are really banned users, returning under a new name. In a couple of months, the ego will get them banned again. They'll lay low for a while, then pop up as a new noob a few months later. Sir Modusoperandi Boinc! 03:32, October 18, 2009 (UTC)
- Well, either way it's a moot point now. Two edits since I wrote this and then offski. Pup
- There's nothing sadder than a point that's moot. Unless it's got tuberculosis. That's a little bit sadder. Sir Modusoperandi Boinc! 05:28, November 10, 2009 (UTC)
Rules of baseball Ѧ 03:56 18-Oct-09
- "A recent arrival of two months (me) should not totally discard the work that came before in order to win a prize." Stop being noble. That's my schtick. Really, I don't think there's a problem doing that on your own user space, which is where your article would go anyway. WHY???PuppyOnTheRadio 04:00, October 18, 2009 (UTC)
Rollback
Who do I talk to about it? I want rollback rights • FreddThe Redd • 10:57, October 18, 2009 (UTC)
- I don't know. Probably Sannes. Do you really thing you can handle them? Do you? Sir Modusoperandi Boinc! 17:19, October 18, 2009 (UTC)
- She's Sense, not Sannes :Þ. And yes, I thing can handle them, and I will. I've already reverted enough vandals to feed a starving African family (my family) forever. Also, thanks, I'll talk to Sannse. Also also, you made two typos in one post, which makes me worried about your health. Please get enough sleep. Seriously. • FreddThe Redd • 18:29, October 18, 2009 (UTC)
- I had just woken up, thang yu vary much! Friggin' word police. *Grumble* Sir Modusoperandi Boinc! 21:07, October 18, 2009 (UTC)
Judging.
So basically, just list the articles in the order you like them? And there's only four entries this time, so am I just supposed to leave the last spot blank? 14:02,18October,2009
- Yes, but don't start yet. It's not the 19th yet. Instead, start soon. Later, finish. Sir Modusoperandi Boinc! 17:20, October 18, 2009 (UTC)
- Okay. Just to let you know, I'm going to be gone October 20-25. So looks like I'm gonna have some work on the 19th. 17:40,18October,2009
- Just five days in jail? You got off easy. Sir Modusoperandi Boinc! 17:42, October 18, 2009 (UTC)
- That's actually 6 days. 20, 21, 22, 23, 24, & 25. 18:12,18October,2009
- Back off, math geek. Sir Modusoperandi Boinc! 21:04, October 18, 2009 (UTC)
hey
Can you add Category:Uncyclopedia to Uncyclopedia:Forest Fire Week? --Docile hippopotamus 22:48, October 18, 2009 (UTC)
- No. Sir Modusoperandi Boinc! 22:56, October 18, 2009 (UTC)
Protecting the PLS entries
They aren't yet. Are you planning on doing that? If you're not, I'll just be leaving to vandalize my competition. :D • • • Necropaxx (T) {~} Monday, 01:03, Oct 19
- I'll take you on any day of the week! Pup t 01:38, 19/10/2009
- Actually, you're the only other entry that I'm really worried about. (No offense to other, less funny writers.) But if you want a challenge, puppy, bah-ring it!!!! • • • Necropaxx (T) {~} Monday, 01:58, Oct 19
- I thought I had. Pup t 02:00, 19/10/2009
- Tuh! Details! • • • Necropaxx (T) {~} Monday, 02:09, Oct 19
- So Necropaxx, you worried that Puppy might have given a higher bribe than you? less funny writer--in your opinion but you ain't a judge so it don't mean doo doo 02:26, October 19, 2009 (UTC)
- But you were expecting that Pup t 02:41, 19/10/2009
- Can I help it if I'm so awesome that the competition pales in comparison? Also, the main reason I'm afraid of Puppy's entry is because I don't really get it. That scares me. Orian's late entry scares me too, but that's just because it's about something Scottish. • • • Necropaxx (T) {~} Monday, 02:58, Oct 19
- Och, aye dinnae. Pup t 03:56, 19/10/2009
- Heh heh, heh. "Entry". Sir Modusoperandi Boinc! 04:04, October 19, 2009 (UTC)
- Gimme a break, man. I'm at a place (not work!) where they keep distracting me with work. Sir Modusoperandi Boinc! 01:54, October 19, 2009 (UTC)
- But this is your job! Pup t 01:59, 19/10/2009
- No. If it was a job I'd be doing it half-assed. <thoughful pause> Maybe it is a job. Sir Modusoperandi Boinc! 02:36, October 19, 2009 (UTC)
Is it a problem if...
...I make HowTo:Be Homeless in America a redirect to my user page with the article? I know that probably seems like a silly request as articles in PLS will be moved to mainspace in a few days anyway. But there's an article or two that may be linked to it and I want to avoid creating future dreaded double redirects (shudder). Also that way links to my user space won't have to be changed again in a few days. WHY???PuppyOnTheRadio 02:24, October 20, 2009 (UTC)
- Does it really matter? Besides, I'm against having mainspace pages redirect to user subpages. User subpages are for works-in-progress and vanity that wouldn't survive outside the warm furry moistness of a user's flabby manbosom. Sir Modusoperandi Boinc! 02:42, October 20, 2009 (UTC)
- Point taken. I'll simply link to my user space, then change the links when it goes mainspace. Thanks for the Ruling from on High advice! WHY???PuppyOnTheRadio 02:46, October 20, 2009 (UTC)
- It's less a "Ruling from on High" and more "pulling facts out of my ass". I'm practicing to be a political pundit. Death Panels! Sir Modusoperandi Boinc! 02:52, October 20, 2009 (UTC)
- No, never say that. Every word that flows from your mouth is the purest honey. By the way, how are my PLS entries going? Pup t 02:58, 20/10/2009
- No, Puppy, you should say it's the purest virgin Sun Bee honey. Also Modusoperandi is like a stream of bat's piss, by which I mean he shines out like a shaft of gold when all around it is dark. And that comment has nothing to do with PLS. Really. Would I lie? WHY???PuppyOnTheRadio 03:07, October 20, 2009 (UTC)
Hey Mode... If I had to ask you to create me a pic (Book cover) for My article would ye be willing to? Would I have to degrade myself in any way? I am willing to if needed.
Please let me know kind sir. Sir ACROLO KUN • FPW • AOTM • FA •(SPAM) 12:19, October 20, 2009 (UTC)
- What do you want on it? Sir Modusoperandi Boinc! 17:53, October 20, 2009 (UTC)
- Well as you can see the current one is sucky... Anything that is mafia movie and winnie the pooh combined with the title "Winnie The Shit" on it will do...Your creativeness is not limited to anything I say, except for the characters... use Winnie as the main focus and you may include any or all of the following characters: Rabbit, Tigger, Piglet or Eeyore. Would that be doable?
I don't know if this is better, but it's something. Sir Modusoperandi Boinc! 04:27, October 28, 2009 (UTC)
Thanks so much
Sir ACROLO KUN • FPW • AOTM • FA •(SPAM) 18:36, October 20, 2009 (UTC)
- It could take me a while. I work nutty hours. Sir Modusoperandi Boinc! 21:57, October 21, 2009 (UTC)
I lost where I first mentioned this, but I'd like to nominate the most current version of this pic for VFP. But when I go to the source File:Winnnietheshit.jpg, I see the first version (arm not yellow) and not the later version. But on the UnBooks:Winnie The Shit article, I see the most current version (arm yellow). I've refreshed, cleared the cache, etc., and still get the same thing. WHY???PuppyOnTheRadio 04:01, October 28, 2009 (UTC)
- The only version is the newest version. Sometimes the internet is so fast that it takes time to catch up with itself. Sir Modusoperandi Boinc! 04:08, October 28, 2009 (UTC)
- I have no idea what that means. But as I tend to assume that people who make statements like that are either clever and thus probably right, or crazy and thus dangerous if you disagree, I'll agree with you. WHY???PuppyOnTheRadio 04:11, October 28, 2009 (UTC)
- If you click on the pic, then scroll down, the only version is the newest one. I deleted the original. So, as you can see, when you see the old one it's just your fevered mind playing tricks on you.
- Oh, and if you want to nom it on VFP, let me change "Winnie the Shit" (which works better in context with the page its on, rather than standalone), to "The Poohfather" or something. Sir Modusoperandi Boinc! 04:14, October 28, 2009 (UTC)
- Oh, sorry, I just put it up for VFP. If you want, I can withdraw it and renom it later--or can I do that? I guess you can if you want. WHY???PuppyOnTheRadio 04:27, October 28, 2009 (UTC)
Questions about PLS for Signpost
Is there still going to be prize money? When's the next one going to be held exactly? --Hotadmin4u69 [TALK] 07:42 Oct 21 2009
- Yes. When the next guy remembers it's late. Sir Modusoperandi Boinc! 07:44, October 21, 2009 (UTC)
- Where's the money coming from? --Hotadmin4u69 [TALK] 07:48 Oct 21 2009
- Me. I'm, like, all magnanimous 'n' shit. Sir Modusoperandi Boinc! 08:11, October 21, 2009 (UTC)
Lockdown
I just noticed that my PLS entry isn't locked. Of course, back in the day, no one locked their articles. Everyone just trusted everyone else to pass by their articles without breaking in to steal their words or vandalise it with Chuck Norris memes. Society's gone to shit. Anyway, could you lock it please? -- 15Mickey20 (talk to Mickey) 21:26, October 21, 2009 (UTC)
- Wups. They tell me that cream rises to the top. They then inform me that I'm not cream. Luckily, blood is thicker than water. Unfortunately it also results in a terrible mixed drink, even if it's in a salted glass with a slice of pineapple on the rim. Sir Modusoperandi Boinc! 21:56, October 21, 2009 (UTC)
You Owe Me
Remember when you couldn't help my synesthesia article? Well, you owe me. I have a news article that means a lot to me and based on its topic, should NOT be left in this shitty state. Please fix UnNews:2012 doomsday rescheduled due to inclimate weather, other doomsdays waiting to happen. Also, you owe me one russian prostitute and a Dean Martin CD.-Almost Sir Random Crap
- I'm going to tell you that I'll look at it, but what I'll actually do is stand up your page and go out with a prettier one. You'll eventually catch us together in a classy restaurant and go apeshit. Sir Modusoperandi Boinc! 05:02, October 22, 2009 (UTC)
PLS judging
When do I need to have it:49, 21 Oct
- I'm not in that category, so take your time. Also this isn't my talk page. Pup t 23:56, 21/10/2009
- But you're in mine, so it would be nice to know. Where the Wild Colins Are - LET THE WILD RUMPUS START! 00:05, October 22, 2009 (UTC)
- Ahem...
From 19rd ― 25st October, entries will be locked and judged.
- I go by Zulu time, because that's the most awesome time, even if it includes a bunch of foreign countries with odd accents and weird food. Sir Modusoperandi Boinc! 00:20, October 22, 2009 (UTC)
- I have my watch set on Zulu time. That's why I'm always several hours late. WHY???PuppyOnTheRadio 01:41, October 22, 2009 (UTC)
- I wasn't informed there'd be reading involved with this. *grumble grumble* , 22 Oct
- I didn't enter PLS, but I think I should win something just because I'm a girl. :D Really though why don't you have it in summer and winter when ppl are out of school? DAP Dame Pleb Com. Miley Spears (talk) 00:43, October 24, 2009 (UTC)
- We do. Or, rather, we're supposed to. Sir Modusoperandi Boinc! 14:02, October 24, 2009 (UTC)
You do realize...
Order your official Modusoperandi collector plate today!
that you're 2 features away from hitting 40? You'll be over the hill in no time... • • • Necropaxx (T) {~} Thursday, 01:26, Oct 22 2009
- Ssh... I'm trying to catch up here. Pup t 01:28, 22/10/2009
- My removal of the closing carrot/karat/carat was unintentional. I had started to make a comment that I thought might make me look foolish. So I removed the comment along with your carrot/karat/carat so I would definitely look foolish. Gosh, what a plan! WHY???PuppyOnTheRadio 02:32, October 22, 2009 (UTC)
- I also had a full head of hair when I started here and I didn't groan when I stood up. Sir Modusoperandi Boinc! 03:50, October 22, 2009 (UTC)
Would this be stupid
Would it violate some sort of rule if I posted a message on a judge's talk page about that judge's comments about an article I entered in PLS? Or would that just be stupid? And no, I would not be trying to change someone's opinion for two reasons: 1) It wouldn't work, and 2) if it did work, it would get everyone else who entered pissed at me. Should I wait until the contest is completely over, and then make a fool of myself? Or is it OK to do it now? I know I could just wait for three days, but by then I might forget what I wanted to say. WHY???PuppyOnTheRadio 22:41, October 23, 2009 (UTC)
- That you're worried about it is probably a good indicator that you shouldn't. Sir Modusoperandi Boinc! 22:50, October 23, 2009 (UTC)
- I was afraid you'd say something sensible like that. WHY???PuppyOnTheRadio 23:00, October 23, 2009 (UTC)
- I know, right?! Sir Modusoperandi Boinc! 13:06, October 24, 2009 (UTC)
FORTRAN
PLS thingy
I have until the end of tomorrow to give my results, yes? ~ 13:23, October 24, 2009 (UTC)
- What, are Jews not allowed to judge others on Saturday:30, 24 Oct
- Yes. Sir Modusoperandi Boinc! 14:01, October 24, 2009 (UTC)
- According to the Uncyclopedia Mishnah, Orthodox editors may not edit between sundown Friday and sundown Saturday, unless they're on Daylight Savings Time, in which case they get an extra hour of editing. WHY???PuppyOnTheRadio 18:04, October 24, 2009 (UTC)
- Wait, that means Jews can't be doing the "caturday" thing on 4chan. Is that a curse or a blessing to them?-Almost Sir Random Crap
- Thank god I'm not religious then. ~ 18:08, October 24, 2009 (UTC)
- I thought you were a Messianic? Don't they do any crazy shit? If not, they really should invent some. It's the nutty stuff that really makes a belief stand out. Have they considered silly hats? Sir Modusoperandi Boinc! 18:12, October 24, 2009 (UTC)
- What about a religious stunt, such as a priest falling off the Empire State Building, like in those trippy action movies?-Almost Sir Random Crap
- It's moot anyway. On second glance, Rataube's the Messianic, assuming he's the IP there. Mordillo's a liberal Protestant. True story. Sir Modusoperandi Boinc! 18:17, October 24, 2009 (UTC)
- Wow, who let the religious dogs out? Also, FU Modus for edit conflicting me!-Almost Sir Random Crap
- It's my talkpage. I can edit conflict whomever I please. Moo ha-ha! Sir Modusoperandi Boinc! 18:27, October 24, 2009 (UTC)
- *gasp*! Anyway...you still owe me some help on one of my articles, as well as a Dean Martin CD, remember?-Almost Sir Random Crap
- I owe you nothing. All I can do is say I'll contribute if the Muse pushes me in that direction. Sir Modusoperandi Boinc! 18:39, October 24, 2009 (UTC)
What have you been huffing?
The PLS results are totally fucked up.. • FreddThe Redd • 19:07, October 25, 2009 (UTC)
- I'm trying to do this while doing something else (not work!). Sir Modusoperandi Boinc! 19:44, October 25, 2009 (UTC)
So when are we going to move the entries to mainspace?
Yeah. • • • Necropaxx (T) {~} Monday, 00:20, Oct 26 2009
- Nobody's stopping you. Unless the pages are still protected. There's always the possibility of that. It's like russian roulette, but without the bullet. Sir Modusoperandi Boinc! 02:47, October 26, 2009 (UTC)
- Yeah, they're still protected. Which makes moving pages rather difficult for a non-admin like me. • • • Necropaxx (T) {~} Monday, 02:53, Oct 26 2009
- Oh, wait! Some kind and extremely prolific writer-admin unprotected them! Hooray for Uncyclopedia! • • • Necropaxx (T) {~} Monday, 03:03, Oct 26 2009
- I set them to automatically unprotectifiy right after the judging...except the one, which was set to go off later, as I missed it the first time around and was too lazy to write in an actual date (rather, choosing "expire in 7 days"). Sir Modusoperandi Boinc! 03:06, October 26, 2009 (UTC)
- I moved mine just one minute after Necropaxx posted that they were still protected and only an admin could move them. Does that mean I'm secretly an admin? ARRRRRRGGGGGG! WHY???PuppyOnTheRadio 03:19, October 26, 2009 (UTC)
- Regarding mine, there is already FORTRAN in mainspace as I made an exact copy in my userspace to submit the work to PLS. Recall that, on Rules of baseball, Why? suggested I excise material written by others to make a pure submission. I have copied this article back to mainspace, with the award tag, but merged back in the best stuff that I had removed for the competition. Spıke Ѧ 05:13 26-Oct-09
- For FORTRAN, just copy the changes over (like the category I added). Sir Modusoperandi Boinc! 05:31, October 26, 2009 (UTC)
- Say, somebody followed my advice and it actually worked? That has got to be a first. WHY???PuppyOnTheRadio 16:33, October 26, 2009 (UTC)
- You had advice? What was it like? Sir Modusoperandi Boinc! 17:03, October 26, 2009 (UTC)
- It was like that thing that people do together. You know. That thing with all the squelching and sweat. Pup 05:12, 27/10/2009
- Aerobics? WHY???PuppyOnTheRadio 05:15, October 27, 2009 (UTC)
- Jazzercise - to the music of Barry White Pup 05:19, 27/10/2009
IP Vandal report
Special:Contributions/71.87.159.15 (Isn't there some special place we're supposed to put these? I'm not sure where that is.) WHY???PuppyOnTheRadio 01:30, October 27, 2009 (UTC)
- He's been got. Also, Ban Patrol. Sir Modusoperandi Boinc! 04:51, October 27, 2009 (UTC)
- I found it. Thanks! WHY???PuppyOnTheRadio 05:08, October 27, 2009 (UTC)
UnNews:Kids still fucking
A few days ago, I stumbled on UnNews:Kids still fucking and had one of my "damn, I wish I wrote that" moments. I really liked it, but I saw it when PLS was happening. I didn't want to post a note on your talk page then because, you know, any compliments made to a judge when judging is happening are pretty suspect. In my opinion, this is professional quality, and I like it. WHY???PuppyOnTheRadio 07:02, October 27, 2009 (UTC)
- Thanks. It's one of my favourites. And, to point out the obvious, you know I wasn't a judge, right? Sir Modusoperandi Boinc! 07:06, October 27, 2009 (UTC)
- Yeah, but you were setting up the competition. I know how these things work when there's millions in prize money at stake. ;) (Seriously, I knew, and you knew, but I didn't know if you'd know I knew. So that's why I waited). WHY???PuppyOnTheRadio 07:20, October 27, 2009 (UTC)
You're just a copycat!
# 20:26, October 28, 2009 Modusoperandi (Talk | contribs) Added or changed avatar
# 20:24, October 28, 2009 Andorin Kato (Talk | contribs) Added or changed avatar
That's all you are. Hey, everyone! This guy's a great big phony! --Andorin Kato 03:28, October 29, 2009 (UTC)
- First of all, yes. Second, what's it do? Sir Modusoperandi Boinc! 04:51, October 29, 2009 (UTC)
- We aren't sure yet. --Andorin Kato 04:52, October 29, 2009 (UTC)
- And another thing...it couldn't have been me, as I'm playing Borderlands online as we speak. As. We. Speak. So there! Sir Modusoperandi Boinc! 04:58, October 29, 2009 (UTC)
Gonzo journalism needs a Benadryl
Thanks for proofreading my proofreading. As I recall, I actually noticed Benadryl wasn't capitalized, but for some reason I forgot to correct that. Glad you caught it. WHY???PuppyOnTheRadio 03:36, November 1, 2009 (UTC)
- I didn't catch it, Word did. Sir Modusoperandi Boinc! 03:52, November 1, 2009 (UTC)
- Word? As in, you have a program that looks at articles for you or something? WHY???PuppyOnTheRadio 03:56, November 1, 2009 (UTC)
- Yes. And it does a terrible job! It's like me, but automated. Sir Modusoperandi Boinc! 04:05, November 1, 2009 (UTC)
- Hang on, you mean you're not a bot? Pup
- You know what I use for a spellchecker? A dictionary. Also could you archive this page? Going here slows up my computer. WHY???PuppyOnTheRadio 04:42, November 1, 2009 (UTC)
- No. Get a faster computer. Sir Modusoperandi Boinc! 04:48, November 1, 2009 (UTC)
- You saying 3.60 GHz isn't fast enough? With that, it takes me forever to edit this page--the last time I typed here, I typed several words, then had to sit and wait for them to appear on my screen. I don't know why. WHY???PuppyOnTheRadio 04:59, November 1, 2009 (UTC)
- Don't blame me. My computer is over a year old (I know, right!), and I don't have a problem. Sir Modusoperandi Boinc! 05:14, November 1, 2009 (UTC)
- I still suspect it's some code on this page--when I try to edit a section, the page still tends to jump around a lot. Although it's not as bad--it used to be I couldn't edit a section here at all, but had to edit the whole page, but when I do that my computer slows up. I suspect it's the code in someone's signature that's doing it. They probably changed their sig, so it's not as bad. WHY???PuppyOnTheRadio 05:18, November 1, 2009 (UTC)
- Are you having problems with the sigexpand thing again? I'll change my sig again.
- And on a semi related note, is this link cheating? Pup
- Since you changed your sig a while back, yours hasn't been causing me any problems. I don't know whose is, or even if it's a sig problem. It's weird, though. So's the link. WHY???PuppyOnTheRadio 05:26, November 1, 2009 (UTC)
- And yet it doesn't affect me. Hmmm. Have you tried being me? It's quite a trip, let me tell you. Sir Modusoperandi Boinc! 07:24, November 1, 2009 (UTC)
I finally realized what happened. I put the proofread tag on this, and had noticed benadryl, but didn't do any proofreading--I just marked it so someone could do it before it got featured. In fact I hadn't even read the article, just glanced long enough to notice the misspelling. I'm telling you, being a time and dimension traveler can get very confusing. WHY???PuppyOnTheRadio 19:49, November 4, 2009 (UTC)
- Wow. The inner labyrinth of your mind is as frightening as it is uninteresting. I kid. Seriously, I do. It's not frightening. Sir Modusoperandi Boinc! 22:10, November 4, 2009 (UTC)
- I'm incredibly fascinating in Dimension Benadryl IX. Also there I have the face of Mariah Carey and the body of Drew Carey. I keep those in the freezer. WHY???PuppyOnTheRadio 02:42, November 5, 2009 (UTC)
I hate partial archivists!
- I do it just to bother you...even before you joined. Sir Modusoperandi Boinc! 05:15, November 1, 2009 (UTC)
Attn: Regarding an article hosted on your website
It's a bit late, but not as late as a late parrot. Thanks!
Sir MacMania GUN—[14:02 3 Nov 2009]
- PS: This.
- PS: It didn't appear there because it wasn't featured normally. There. I said it. That page was featured freakily. It's a freak. Sir Modusoperandi Boinc! 19:15, November 3, 2009 (UTC)
You online?
We need someone to ban this guy right now. There are no admins in IRC. You and Mordillo recently edited so I'm forlornly posting in the thin hope that someone is paying some sort of attention to the wiki. --Andorin Kato 10:50, November 8, 2009 (UTC)
- ...paying attention to what now? Sir Modusoperandi Boinc! 10:53, November 8, 2009 (UTC)
- Ugh, thank you for finally getting him. RC is a mess; he wouldn't let up. I think more admins need to idle in IRC (not pointing fingers here, my good man) so we can whistle up the cavalry a bit quicker. --Andorin Kato 10:55, November 8, 2009 (UTC)
- When I'm at the place where I normally pause to visit Uncyclopedia I can't be on IRC. And when I'm not there, I don't want to be on IRC. Ignore that last part. Sir Modusoperandi Boinc! 11:32, November 8, 2009 (UTC)
VFD
Can I get an explanation? Spıke Ѧ 18:19 9-Nov-09
- When a user finds a page that they consider to be of poor quality, they put it on VFD. While there, other users can vote to keep or delete that page. After the votes are tallied, the page is either kept or deleted. On some occasions, it's redirected to another page or moved under a user's space so that they can work on it. Sir Modusoperandi Boinc! 18:29, November 9, 2009 (UTC)
- /me has fond recollection of Airplane. ~ 18:34, November 9, 2009 (UTC)
- /me John, big tree! Sir Modusoperandi Boinc! 18:59, November 9, 2009 (UTC)
Sorry, not an explanation of VFD, an explanation why you reverted my recent contribution. Spıke Ѧ 18:42 9-Nov-09
- I accidentally opened it as a new window, then closed it. That's all I remember. Excuse my french, but the disparity between what I remember happening and what actually happened is pretty fucking awesome. Sir Modusoperandi Boinc! 18:59, November 9, 2009 (UTC)
- There is the slight problem of that article still having a WIP tag on. My only real ban was ZB banning me for doing the same thing. ~ 19:05, November 9, 2009 (UTC)
- I would be willing to retract my VFD nomination for this reason. However, the WIP tag has been there since June and the kid has faithfully made token changes at least once a week, which renew its tenuous lease on life. A more fundamental reason is in my nomination: The kid is using Uncyclopedia to develop a fictional universe, and we have deleted pieces of it on the grounds that they don't relate to anything, they're not funny, and they're poorly written. I don't really care whether HZ Corps. is deleted; what I do care about is that y'all take a consistent position for or against his project. Spıke Ѧ 19:20 9-Nov-09
- Humor is in the eye of the beholder. With the tag abuse, tell an admin next time if we miss it and it will be dealt with. Also, this is not my talk page. ~ 19:23, November 9, 2009 (UTC)
- ...a consistent what?! Sir Modusoperandi Boinc! 19:25, November 9, 2009 (UTC)
I didn't know of his "tag abuse" when I nominated it; studied it after Mordillo's comment. Thank you for locking the nomination before anyone voted. A consistent position--If there is a problem with this kid documenting in unfunny detail his alternate reality, do more than just shoot down the odd pages that someone stumbles across. If not, I will go do something useful. Spıke Ѧ 19:28 9-Nov-09
- I told him. I told him good. Sir Modusoperandi Boinc! 19:56, November 9, 2009 (UTC)
- OK. Although the previous section of his talk page suggests his only dealings with people here are to rudely ask for his articles back after they are deleted piecemeal, I assume you will watch the situation and I won't renominate him on VFD. Spıke Ѧ 20:03 9-Nov-09
- You'll assume I'll something something something? Is that the best plan? Sir Modusoperandi Boinc! 20:28, November 9, 2009 (UTC)
- You may have a point there. Perhaps better than politely advising him of the rules of WIP would be to explain to him exactly why his articles are disappearing, and probably will continue to do so. But even that is above my pay grade. Spıke Ѧ 20:53 9-Nov-09
- But if I tell one person that they suck, I have to tell everybody that. I simply don't have that kind of time. Oh, and while I've got you here, you suck. Sir Modusoperandi Boinc! 21:05, November 9, 2009 (UTC)
- What if you just change the Uncyclopedia potato image to a text image saying You suck. That way everybody is happy. Pup
- Naw. Too on-the-nose. Also, since you stopped by, you suck. Sir Modusoperandi Boinc! 21:24, November 9, 2009 (UTC)
- Well, that's two you've taken care of...Three, counting yourself. Modus, your latest post to his talk page is more like it. Only, he won't see it soon, because he's working this afternoon as 98.64.57.106. Spıke Ѧ 21:30 9-Nov-09
- I can't do anything about that. IPs are my kryptonite. Sir Modusoperandi Boinc! 21:58, November 9, 2009 (UTC)
- I'm actually really impressed that he managed to keep the WIP tag alive for 5 months and still going. Also, I suck. --Pleb SYNDROME CUN medicate (butt poop!!!!) 23:24, November 9, 2009 (UTC)
Pilgrim Fathers
Ironlung reviewed my article The Pilgrim Fathers and suggested that I ask an admin to get the page/pages Pilgrims to redirect to it. I'm told people of the American persuasion don't use the term pilgrim fathers but then they can't spell either.
There is an article already listed on Pilgrim, however. I'm no judge of its worth but Ironlung says it's crap and I'd tend to agree. Anyway, I'm assuming you're an admin so I guess you get to choose whether it's a good idea / whether you can be bothered to do the suggested redirect(s). I'm not going to beg. Much. Sog1970 16:54, November 10, 2009 (UTC)
- If you feel strongly about it, put Pilgrim up on VFD. If it fails and is deleted, redirect it to your page. If it doesn't, just put your page in a "see also" section at the bottom of the pilgrim page. Sir Modusoperandi Boinc! 17:54, November 10, 2009 (UTC)
Fair enough. --Sog1970 21:05, November 10, 2009 (UTC)
Hey
:07, 12 Nov
- Can't. Long story. Work. Okay, it's not so long of a story. Sir Modusoperandi Boinc! 23:22, November 12, 2009 (UTC)
- LOL. Fair 'nuff. Another time:25, 12 Nov
- I can in.../me looks at clock...eight hours or so. Sir Modusoperandi Boinc! 23:43, November 12, 2009 (UTC)
- Jesus Christ, I thought you lived in Canada? Working an overnight:46, 12 Nov
- Shiftwork. I only do it because they pay me. Sir Modusoperandi Boinc! 23:48, November 12, 2009 (UTC)
- Funny, I feel the same way about my porn:57, 12 Nov
- I remember you in Two Inches of Terror 2. I was one of the lighting crew. Sir Modusoperandi Boinc! 00:09, November 13, 2009 (UTC)
- Oh YEAH! I remember you. You were the one who was naked for no reason. Also, that title was very misleading. But I was flattered nonetheless. :15, 13 Nov
- I mostly just remember that the lighting crew had to bring their own flashlights. Hence the nudity. While my doctor says that it shouldn't glow like that, the Catholics keep making a shrine around it. Sir Modusoperandi Boinc! 00:17, November 13, 2009 (UTC)
>:(
TEN POINTS FROM GRYFFINDOR. Where the Wild Colins Are - LET THE WILD RUMPUS START! 05:31, November 13, 2009 (UTC)
- That's my favourite Findor! Sir Modusoperandi Boinc! 06:14, November 13, 2009 (UTC)
The Feature Q
It's empty, there's no Featured article for today. • FreddThe Redd • 07:22, November 13, 2009 (UTC)
- That's an outrage! An outrage! I'm so outraged that I just can't be here right now! Outrage! Sir Modusoperandi Boinc! 08:11, November 13, 2009 (UTC)!
You've defied my will!
Why did you unban user:Luvvy? She's really divisive, setting users against one another. I see you reverted the change to infinite ban by Roman Bird Dog. Had you noticed I'd upped her from a week to three months before that? I'd have thought you'd revert to my original three months and maybe ask me about it before returning her rights unilaterally.
I'm giving her a three month ban again because I feel strongly, along with others, that she deserves it, and she needs to have an enforced cooling off period. She's been warned and banned time and again. Please get back to me on my user talk page about this in any case. I want to know how you weigh in. Thanks,:15, November 13, 2009 (UTC)
Spanish Inquisition
I'm doing some stuff with the religion portal, and I got confused about a couple of articles. Spanish Inquisition and Spanish Inquisition (TV show) (which has the featured template on it, which, to me, doesn't seem to belong there) seem to be awry somehow, and it seems like you're the bloke to ask about it. What's up with:14, November 13, 2009 (UTC)
- The first is a one-joke gag and the second appears to be a feature from 2005. Our standards were, um, different then. Neither of them is particularly good. If Sophia is kind, the first one will be rewritten. Sir Modusoperandi Boinc! 18:26, November 13, 2009 (UTC)
- Ok, I'll leave things as they are, except to put them both as category:Religion... if this is ok with you. Let me know if you object, so I'll know how to treat them for the religion portal; I myself like the idea of that feature... to me, it still works under current :33, November 13, 2009 (UTC)
- The idea for the TV show one is alright, it just doesn't look finished. It's all steak, no sizzle, but with not much steak, either. But until somebody does them better, just leave 'em. Sir Modusoperandi Boinc! 18:43, November 13, 2009 (UTC)
- If only there were some week that was set aside for people to rewrite things. Wait, a week might not be enough. Make it two weeks. And we could do it twice a year! That's like, four weeks of rewriting right there! I'm excited. --Pleb SYNDROME CUN medicate (butt poop!!!!) 21:11, November 13, 2009 (UTC)
- You can't just go and "schedule" rewrites, man. Rewrites gotta run free, like the noble mountain goat or frisky electric eel. Sir Modusoperandi Boinc! 23:22, November 13, 2009 (UTC)
Ok.:46, November 13, 2009 (UTC)
Awesome
You are. That thing you did? Not so much. Meh. I'm not even supposed to be:52, 13 Nov
- I doubt very much that the thing I did wasn't awesome. For one thing, it assumes that I did something. I'm sure you'll agree how crazy that sounds. Sir Modusoperandi Boinc! 23:23, November 13, 2009 (UTC)
- Well I think it was awesome. I would go into more detail about exactly what I'm talking about here, but I'm kinda going for a mysterious tone of voice on this post and I think that if I did that it would totally spoil the whole, Nov 13
Awesome part the second
They discovered water on the moon. That's one step closer to my magical moon palace. One day soon I'll be there, and then you bastards will miss me! Bastards. Also, you're still pretty:03, 14 Nov
- So now the moon is made of wet green cheese? Ick. That must stink. And, yes, I do continue to be awesome. I find that half of being awesome is not having a smelly moon palace. Anyway, good luck with your moon fort or whatever. Sir Modusoperandi Boinc! 17:38, November 14, 2009 (UTC)
Please move text from user page to Forum
I need an admin! My Forum:CSS background-image is closed/archivable, but PuppyOnTheRadio continued a discussion of technical workarounds he had tried, which is now Section 8 of my user page. This section should be moved to the end of the Forum article so that people can find it. POTR promised to do it but his signature now says he is on "hiatus." Thanks for your help. Spıke Ѧ 12:44 15-Nov-09
- Done. If I didn't do it right you have only yourself to blame. Sir Modusoperandi Boinc! 19:27, November 15, 2009 (UTC)
lolwut
Two snippets of code from the welcome you left that poor bastard:
If you need help, ask me on my talk page
(...)
{{User:Modusoperandi/sig}}
Think he might get a little confused? Also, where are my pants? --Andorin Kato 09:31, November 16, 2009 (UTC)
- It was all a part of my plan: appear to help him and, if he needs to ask a question, pass him off on somebody else. I'm really quite a genius. Sir Modusoperandi Boinc! 09:39, November 16, 2009 (UTC)
- Noted. Also, Modusoperandi and TheLedBalloon have the same number of letters, apparently. It's like you two were made for each other. --Andorin Kato 09:41, November 16, 2009 (UTC)
- It gets weirder. One is an anagram of the other. Sir Modusoperandi Boinc! 10:44, November 16, 2009 (UTC)
VFD, bis
Nothing has happened in the last week except that TKDKidXism has no contributions, and HZ Corps. has none except via IP--presumably so that TKDKidXism is free to claim he didn't see your warnings. Unless you object, I'm inclined to renominate HZ Corps. for VFD. Spıke Ѧ 16:14 16-Nov-09
- Whatever. Don't come cryin' to me when he knocks you up. Sir Modusoperandi Boinc! 16:23, November 16, 2009 (UTC)
OK, I'm having a problem with vandalism by some cock-sucking IP users who are editing my user page and talk page. This has happened to me twice. Normally I don't care about people editing my stuff, but I don't approve of people changing stuff in my userspace to make me say things like "I love boners" or "ass rape"--just look at the history. Is there a way for me to block editing on my userpage/talkpage so that a bunch of noobs won't ruin it again, or at least have these people's IPs banned to teach them a lesson? YOU 333Talk IF YOU DARE 23:04, November 16, 2009 (UTC)
- Got him. And I banned a guy who was already banned. I'm on the ball. Sir Modusoperandi Boinc! 23:39, November 16, 2009 (UTC)
- You mean you don't ass rape and love boners? I am so disappointed. (Seriously, glad it got stopped). WHY???PuppyOnTheRadio 15:56, November 17, 2009 (UTC)
Invincibleflamegruemaster
So yeah. I banned him. For a week. Fun times, that. --Flammable 05:14, November 17, 2009 (UTC)
- You're a tough man to please. Sir Modusoperandi Boinc! 05:17, November 17, 2009 (UTC)
The Weird O'Reilly Factor
Congrats on The O'Reilly Factor, Tuesday May 13, 1865 getting named the third best article for October. But isn't it weird that it took the article a month to get enough votes to get featured, and then it got a score of 11.5? I guess some things just grow on you. I think we should call this phenomenon The O'Reilly Factor. Congrats! WHY???PuppyOnTheRadio 15:54, November 17, 2009 (UTC)
- My stuff is generally a slow burn. It's fairly rare that a page of mine doesn't take forever to fail or feature. It's tradition, really. Sir Modusoperandi Boinc! 23:32, November 17, 2009 (UTC)
- In that case, I have another suggestion. We can call it The Modusoperandi Factor. WHY???PuppyOnTheRadio 07:09, November 18, 2009 (UTC)
- Actually, a body part of mine already has that title. I've said too much. Sir Modusoperandi Boinc! 08:25, November 18, 2009 (UTC)
- Psssst. It's his ankles. Don't tell him I told:16, 18 Nov
- Then it would have to be The Modusoperandi Factors. Dummy. Sir Modusoperandi Boinc! 16:47, November 18, 2009 (UTC)
- Unless you're referring to them as one collective nuisance. Which is what I always do. "Yeah, I was thinking about going out with him tonight, but you know...The Modusoperandi Factor." "Oh right, right, damn those ankles. Thank God for that ass." -- Also, penis. 23:49, November 18, 2009 (UTC)
- For those who are seeing this in the future after RAHB changed his signature, I feel it my sanctified duty to point out that this originally read as '"Thank God for that ass." -- Also, penis.' Father WHY??? (confessions) 23:57, November 18, 2009 (UTC)
- That boy just ain't quite right in the head, I tell ya. Sir Modusoperandi Boinc! 17:21, November 19, 2009 (UTC)
- Not just the head. • FreddThe Redd • 17:33, November 19, 2009 (UTC)
HEY YOU
Are you online? HELP! --Andorin Kato 08:47, November 21, 2009 (UTC)
- In case you get this later and find nothing out of the ordinary and wonder what the hell I was thinking when I posted this, there's a couple IPs creating spam pages and no admins on IRC. Again. --Andorin Kato 08:52, November 21, 2009 (UTC)
- Oh. I got one, and deleted his crap. Later, annoyed at how much crap he'd made, I banned him more. Sir Modusoperandi Boinc! 09:22, November 21, 2009 (UTC)
- I saw! G'boy! Man, we need more like you. --Andorin Kato 09:24, November 21, 2009 (UTC)
- More? There can be only one! Sir Modusoperandi Boinc! 09:31, November 21, 2009 (UTC)
A simple chop request
At least, I think it should be simple. Can you add the tagline "At least you can see it on TV" underneath "The World Is Just Awesome" with a matching font to this? I apologize for my horrible grammer, but I may or may not be drunk right:26, 21 Nov
- Okay. My program doesn't do text very well, which is doubly troubling, as I never learned how to read. Sir Modusoperandi Boinc! 18:57, November 21, 2009 (UTC)
- Holy shit that was fast. Thanks, it looks:01, 21 Nov
- Don't expect me to repeat that feat. I'm all out of haste potions. Sir Modusoperandi Boinc! 19:13, November 21, 2009 (UTC)
Are VFD nominations of Userpages Invalid?
According to MrN9000, vfd noms of userpages are valid. I wanted to get another admin's opinion before I started to VFD userpages that I don't like. --Mn-z 07:11, November 24, 2009 (UTC)
- I say that userpages are safe. As a general rule, this works pretty well. Sir Modusoperandi Boinc! 08:08, November 24, 2009 (UTC)
- Are you sure we can't change that rule, because I was hoping to delete the sigs and user pages of some users that I don't like. --Mn-z 17:48, November 24, 2009 (UTC)
- I'm trying to figure out if you're serious or not. ~ 18:13, November 24, 2009 (UTC)
- I'm trying to figure out if you're serious or not about you trying to figure out if I'm serious or not. --Mn-z 05:33, November 25, 2009 (UTC)
- I'm having scrambled eggs. Nice and runny. I should've probably cooked them. Sir Modusoperandi Boinc! 06:33, November 25, 2009 (UTC)
- Runny scrambled eggs always remind me of gobbling down semen. pillow talk 17:41, November 25, 2009 (UTC)
- Welp. Now I'm never eating scrambled eggs again. I hope you're:49, 25 Nov
- Well, if by "happy" you mean "aroused"... pillow talk 17:59, November 25, 2009 (UTC)
- Get out of my kitchen. Sir Modusoperandi Boinc! 18:23, November 25, 2009 (UTC)
SHOCK PORN
Are you sufficiently shocked? It's like a poor man's defibrillator right here. --Pleb SYNDROME CUN medicate (butt poop!!!!) 15:16, November 24, 2009 (UTC)
- Um. Yeah. Shocked. Sir Modusoperandi Boinc! 16:13, November 24, 2009 (UTC)
- That image is used on 4,426 pages. --Mn-z 17:42, November 24, 2009 (UTC)
- Pah! Don't go bringing your "math" here. I just had this page cleaned. Sir Modusoperandi Boinc! 23:35, November 24, 2009 (UTC)
Wanna do us a flavor?
Semi-protect this. Look at its history- the spambots love it for some reason. --Andorin Kato 22:16, November 27, 2009 (UTC)
- Too slow, my good man. Mordillo beat you to it. Even though I posted on your talk page and not his. Isn't that a little eerie? --Andorin Kato 22:25, November 27, 2009 (UTC)
- We are the cabal. Resistance is futile. You will be assimilated. ~ 22:30, November 27, 2009 (UTC)
- I'll make sure to put that on my schedule. How's Tuesday after lunch? Sir Modusoperandi Boinc! 23:08, November 27, 2009 (UTC)
- I thought that is when you were imposing a new world order? Join Us 19:03, November 29, 2009 (UTC)
- We aren't even done with the old world order yet. I'm not going to toss out some perfectly good world order just because some new world order has arrived. What, do you think I'm made of money? Sir Modusoperandi Boinc! 21:42, November 29, 2009 (UTC)
- You have a point. Now the question is has it been imposed enough. Join Us 00:56, December 1, 2009 (UTC)
- That's not a question. It's got a period at the end. Sir Modusoperandi Boinc! 03:33, December 1, 2009 (UTC)
just FYI
In case your ears were burning, that is because we were about to talk shit about you on my talk page, but we ended up solving everything by blaming it on Canada and the Jews, as usual. Also, hope you enjoy the Festering Yeast Infection. Be careful who you let on your talk page. --S0.S0S.0S.0S0 01:07, December 4, 2009 (UTC)
- What in the heck is "Canada"? You foreigners and your made up places. You'll be blaming Narnia next. Sir Modusoperandi Boinc! 03:59, December 4, 2009 (UTC)
Unhuffing request
Can you unhuff BUTT POOP/VFD, and if you can't re-main-space, can I have it in userspace? --Mn-z 05:36, December 4, 2009 (UTC)
- While you are at it, can you unhuff BUTT POOP? (which I believe you said couldn't be brought to VFD until Dec 24th.) --Mn-z 05:59, December 4, 2009 (UTC)
- and everything else spang deleted on that subject if you can. --Mn-z 06:04, December 4, 2009 (UTC)
- Done and talk to Spang. Tell him about that thing I said, of which I have no recollection. None! I'm like a blank canvas. Sir Modusoperandi Boinc! 06:06, December 4, 2009 (UTC)
- Thanks, but there are still about 3 articles (and their talk pages) that need unhuff-ified. --Mn-z 06:11, December 4, 2009 (UTC)
- Speaking as a user with a differing opinion, I don't think they need to be recreated. --Andorin Kato 06:14, Dec. 4, 2009 (UTC)
- Can you name them? I'm not psychic. Also, talk to Spang, if you haven't already. Which you have. Sir Modusoperandi Boinc! 06:23, December 4, 2009 (UTC)
- BUTT POOP That time I nearly ate apples with Mike Tyson durning my sojourn in BUTT POOP!!!! and A BUTT POOP!!!! wizard did it. --Mn-z 06:44, December 4, 2009 (UTC)
- Done, done and done. Sir Modusoperandi Boinc! 06:54, December 4, 2009 (UTC)
- I'd really like to see these back in mainspace. The joke works a lot better in mainspace. pillow talk 20:41, December 8, 2009 (UTC)
OK I am freaked out
So I'm surfin' the net, clicking links, all that jazz. Then I notice a link on Yahoo!'s main page about Conservatives rewriting the Bible. Apparently they have been going at it for some time now. I click the link and it takes me to Conservapedia. Surprise, surprise. As I read the overview, I'm getting more and more alarmed. These people are serious. They are actually rewriting the Bible. I can understand where some of the doctrinal changes they are making came from, but removing entire stories like Jesus forgiving the adulteress and "Father, forgive them for they know not what they do"? And that's not the worst part! They are changing words to support a conservative ideology instead of the original intent! Quite frankly, I am horrified. This is cut-and-dried wrong. Why on earth do these people think they have the authority to tamper with the Bible? How can you go so off course?
You're probably wondering why I'm putting this here. Well, I'd just like to know what you think about it. No doubt you already know about it. Really though, I just want to know if I'm really seeing what everybody else is or if this is some massive prank of some sort.
I have never been more ashamed of referring to myself as a conservative, if this is what conservatives are doing. • • • Necropaxx (T) {~} Friday, 08:19, Dec 4 2009
- Where have you been? They've been at it for awhile, and they certainly aren't the first to do it. Conservapedia is just more open about it, that's all. The adulteress story is widely considered to be a later addition, so they're probably right about that (which only proves that a broken clock is right twice a day). My favourite edits are the ones where they replace "Pharisees" with "liberals" or "people who read books and know stuff". The "redact the Bible" crowd of the Right is a tiny, tiny minority. Most of the Christians that agree with the Conservapedia-style stance on everything else are "KJV only" so, for once, Conservapedia is getting it from both sides. It won't go anywhere. It is funny though (considering that they're making liberals Pharisees when, if they paused briefly to look at themselves, they'd discover that they are Pharisees), and a little bit scary (the confidence of people who know nothing and, indeed, take pride in that fact, always catches me off guard). Of course, I'm still getting over the fact that the mother of the creator of Conservapedia, Phyllis Schlafly, spent the better part of her life going around America telling people that women should stay in the kitchen. The Right, as a group, seems to have no sense of irony. Sir Modusoperandi Boinc! 08:44, December 4, 2009 (UTC)
- This is one of the reasons why I'm not religious. Also, wow, not a single pun, joke or non-sequitur in that entire block of text. You feeling alright, Modus? --Andorin Kato 08:50, December 4, 2009 (UTC)
- It's hard to top a group that is already a parody of themselves. Sir Modusoperandi Boinc! 08:55, December 4, 2009 (UTC)
- I agree. I fell out of my chair when I heard of this. "Conservapedia rewrites the Bible to fit ideology" sounds like an UnNews article, and all their changes sound like the jokes in said article. Further proof that reality is stranger than fiction. -- Kip > Talk • Works • • 09:12, Dec. 4, 2009
- I agree, although I'm more on the side of "that's scary" than "that's hilarious!", mostly because they are deadly serious. I wrote a rant at them last night and put it on the project's talk page. If you'll remember, I'm the smexy sig with the giant block of text on it. • • • Necropaxx (T) {~} Friday, 14:47, Dec 4 2009
- My condolences on your ban, which hasn't happened yet, but will. Sir Modusoperandi Boinc! 14:54, December 4, 2009 (UTC)
- If I didn't want to get banned, I wouldn't have posted anything there. I'm also eagerly awaiting the ban reasoning. • • • Necropaxx (T) {~} Friday, 15:01, Dec 4 2009
- Your name is Jonathan? Also, why give a damn? Idiots rewriting the Bible will not erase the Bible that you know. Just let it be, you know, it's not the first time something of that sort happens. • FreddThe Redd • 15:13, December 4, 2009 (UTC)
- This just goes to show that they're no better than the commies or the nazis, all rewriting:17, 4 December 2009
- Pah! Those groups didn't have the Inerrant Guidance of the Holy Spirit® on their side...even the Nazis didn't, no matter how much they thought they did. This time, you see, these guys are right. I've mentioned this elsewhere, but theology is the only science where you're never wrong. Sir Modusoperandi Boinc! 15:25, December 4, 2009 (UTC)
- Not a True Conservative™. You questioned authority. Tsk, tsk. You should probably download and read The Authoritarians, so that you'll better understand a disturbing minority of people (and the amoral jackasses they tend to elect). Once we understand them, we can work on integrating them into civilized society. Of course, I'm a latte sipping, Volvo driving liberal, so what the hell do I know? Sir Modusoperandi Boinc! 15:25, December 4, 2009 (UTC)
- Hello, sir. I'm the representative of MAHMOOSHAMOBILE, a leading manufacturer of ecofriendly automobiles. You sir created the first truly ecofriendly car, how about you sell us the design for, say, an onion and two tomatoes? This is the best offer in the market, so please consider it, before you punch me in the cock. • FreddThe Redd • 16:10, December 4, 2009 (UTC)
- Heck no! I want money. Money money money money! I'm a liberal, not an idiot. Sir Modusoperandi Boinc! 16:14, December 4, 2009 (UTC)
- I've also read that the adulteress story is likely a later addition, but it's still one of my favorite Jesus stories. I just assume that it's accurate, and it took generations for someone to write it down (or for the Holy Spirit to inspire someone to write it down). And people have been redoing the Bible to fit their agenda for hundreds of years (the NIV is one of the few that doesn't censor a lot of the dirty parts, such as some parts of Ezekiel). Did you know that, under United States Federal Communications Commission (FCC) guidelines up through the late 20th century, there were parts of the King James Bible that, if read on air, would automatically fall under obscenity laws? A friend and I have had a laugh or two thinking about someone getting arrested for reading the Bible in America. As for conservative/liberal, here's an equation for you: conservative icon = dead liberal. Father WHY??? (confessions) 02:09, December 8, 2009 (UTC)
249 cents, final
In cash, so you can buy onions, tomatoes and potatoes as much as you like. That's my final offer, non-negotiable. • FreddThe Redd • 16:35, December 4, 2009 (UTC)
- Naw. I'm waiting for Texaco to call. Sir Modusoperandi Boinc! 16:57, December 4, 2009 (UTC)
BUTT POOP
Judging from the comments on Spang's talk page and other sources, it appears that a significant number of users like that page. Could you possibly work out some sort of compromise on that issue, before I'm forced to resort to such juvenile tactics as annoyifying my sig and the like? --Mn-z 05:54, December 6, 2009 (UTC)
- ...speaking of deja vu... Sir Modusoperandi Boinc! 06:05, December 6, 2009 (UTC)
- Yes, but now I claim consensus. --Mn-z 06:06, December 6, 2009 (UTC)
- So you want Spang and myself to wrestle in a nerdy grudgematch for the ages? Sir Modusoperandi Boinc! 06:07, December 6, 2009 (UTC)
- More or less. But what you did was helpful. --Mn-z 19:53, December 6, 2009 (UTC)
- So they're not going to have this gay wrestling match, where the winner gets to fuck the loser with a steel dildo? For shame. • FreddThe Redd • 19:59, December 6, 2009 (UTC)
- I think that your comment reflects more on your psyche than it does mine. Or anybody else's. Sir Modusoperandi Boinc! 22:03, December 6, 2009 (UTC)
- What's a phyche? And how does stuff reflect on it? • FreddThe Redd • 04:02, December 7, 2009 (UTC)
- My point exactly. Sir Modusoperandi Boinc! 04:22, December 7, 2009 (UTC)
- . Is this your point? Sorry, I didn't know it was yours,,, Can I still use it please? It dont wont to end my sentences with a comma, • FreddThe Redd • 04:28, December 7, 2009 (UTC)
- Are you off your meds again? Sir Modusoperandi Boinc! 04:32, December 7, 2009 (UTC)
- Naw, They expired yesterday, • FreddThe Redd • 06:20, December 7, 2009 (UTC)
- I'd like to reiterate that I want to see this article back in mainspace. The whole joke is that it's an article that wouldn't be deleted on VFD because VFD voters often vote on stupid rationales. If it's in userspace, then... it won't be deleted on VFD because userspace isn't taken to VFD. Sort of ruins the joke, I think. pillow talk 20:43, December 8, 2009 (UTC)
- So you want Spang and myself to wrestle in a nerdy grudgematch for the ages? Sir Modusoperandi Boinc! 06:07, December 6, 2009 (UTC)
- That's not the optimal solution, but, you know, here we've got at least four regular editors who want the series in mainspace; we've got you, who would at least tolerate the series in mainspace; and then we've got Spang saying "Fuck all y'all." Something isn't right here. pillow talk 21:24, December 8, 2009 (UTC)
- Spang, as before, is the one you have to convince. I'm not getting into an edit war with another admin unless it's damn well necessary, and by "necessary" I mean "to save that thing I wrote that one time, which was awesome". Sir Modusoperandi Boinc! 21:29, December 8, 2009 (UTC)
typical badgering
- Can you try working out some sort of compromise with Spang, like a vote or something on whether or not to mainspace the articles in question? I think that will work better than me and Hype badgering everybody until it gets re-created. And, he's actually edit warring because that thing you said that one time about not bringing that one article to VFD for a month. --Mn-z 06:22, December 9, 2009 (UTC)
- *Sigh* Spang is the only person here you have to convince. Sir Modusoperandi Boinc! 06:57, December 9, 2009 (UTC)
Deletes and Reverts
Your welcome message says "Never recreate a deleted article. Never redo a reverted edit. Never." Yet I got in trouble after someone else redid my reverted edit, and a certain user here recently got two previously huffed articles featured. Could you clarify this for a person who has a habit of asking annoying questions? WHY???PuppyOnTheRadio 02:15, December 8, 2009 (UTC)
- Yes. I didn't write "my" welcome template. I just stole it from someone else. I hope that clears things up. (Alternately, "Never recreate a deleted article" applied to noobs, who have a habit of making a bad page, coming back a day later to find it was deleted, then remaking the exact same page the same way, with the same lack of quality. "Never redo a reverted edit" may refer to the fact that it's better to discuss the edit with the editor, rather than engaging in a revert war) Sir Modusoperandi Boinc! 06:19, December 8, 2009 (UTC)
- All right, thanks. But maybe the "Never recreate a deleted article" could be softened a bit. As it happens, the huffed article that was later featured was recreated by a noob who was told "you're doing it wrong" and fortunately ignored the advice. And there's already the warning when someone's trying to write a previously huffed article, basically saying are you sure you want to do this? I don't see that it's any worse to write an article on, say, Archery just because it was huffed than it is to start an article on Archery when it wasn't huffed. On the other hand, you may get some anally-retentive type like me who will be told "never" and who will still be following that policy 20 years later. WHY???PuppyOnTheRadio 06:07, December 9, 2009 (UTC)
- By re-create a deleted article, it is probably meant recreating deleted content, not creating a new article with the same page-name as a deleted one. --Mn-z 06:11, December 9, 2009 (UTC)
- While I agree with Why? that the message is misleading, I don't think it's a serious problem. If a new user sees a generic message telling him not to create a certain page but does it anyway, he's either an idiot and his ban was inevitable, or he's able to think critically and interpret the message like Modusoperandi and Mnbvcxz did, in which case he just might be a good writer. --Pleb SYNDROME CUN medicate (butt poop!!!!) 06:15, December 9, 2009 (UTC)
- I'll assume you weren't intending to insult me, but I did say had I gotten that message I might still be following what I thought it meant 20 years later. WHY???PuppyOnTheRadio 06:34, December 9, 2009 (UTC)
- I wasn't trying to insult you. What you would have done is ask someone if you can recreate a page 20 seconds later from when you decided that you wanted to create it. That's a valid approach too. To each his own. --Pleb SYNDROME CUN medicate (butt poop!!!!) 06:40, December 9, 2009 (UTC)
- Ok cool. In that case, I think it would clear it up if the wording were changed, maybe to "Never recreate a deleted article--you can make a new one under the same name, though." But then I doubt most new users would know how to recreate an article that had been huffed anyway. WHY???PuppyOnTheRadio 06:31, December 9, 2009 (UTC)
- I think I stole it from TheLedBalloon. Talk to him. Sir Modusoperandi Boinc! 06:56, December 9, 2009 (UTC)
Possible dramaz
At the last section of User talk:Meganew --Mn-z 20:09, December 8, 2009 (UTC)
- I dealt with it. I think. To be honest, there's a blank in my memory and now my hands are covered in blood. No, wait...ketchup. Whew, that was close. Sir Modusoperandi Boinc! 20:36, December 8, 2009 (UTC)
Things
- I apologize if it looked like I was trying to accuse you of some sort of Zionist conspiracy. I just wasn't careful with my wording.
- Assuming my adoptee gets off his lazy arse and actually finishes it, I'm having him do a reskin for Christmas. How plausible is this for the main page? Any feature planned for Christmas?
- Penis penis Penis penis penis penis Penis penis. --Pleb SYNDROME CUN medicate (butt poop!!!!) 07:10, December 9, 2009 (UTC)
So how do you define static constants in a generic class in Swift? In a non-generic class it's straightforward:

class C {
    static let k = 1
}
let a = C.k // 1

But move the same declaration into a generic class:

class C<T> {
    static let k = 1
}

and the compiler rejects it with the error "Static stored properties not yet supported in generic types".
One workaround is to hang the constant off a separate, non-generic struct:

struct CConstant {
    static let K = 1
}
Alternatively, you can define a global constant with fileprivate or private access level in the same .swift file where your generic class is defined. It will not be visible outside of that file and will not pollute the global (module) namespace.
If you need to access this constant from outside of the current file, then declare it as internal (the default access level) or public, and name it like ClassConstant so it will be obvious that it relates to Class.
Read more about access levels in Swift 3.
.NET Core is a general purpose development platform maintained by Microsoft and the .NET community on GitHub. It is cross-platform, supporting Windows, macOS and Linux, and can be used in device, cloud, and embedded/IoT scenarios.
When you think of .NET Core, the following should come to mind: flexible deployment, cross-platform support, command-line tools, and open source.
Another great thing is that even though it is open source, Microsoft actively supports it.
By itself, .NET Core includes a single application model -- console apps -- which is useful for tools, local services and text-based games. Additional application models, such as ASP.NET Core, have been built on top of .NET Core to extend its functionality.
Also, .NET Core implements the .NET Standard Library, and therefore supports .NET Standard libraries.
public class Program
{
    public static void Main(string[] args)
    {
        Console.WriteLine("\nWhat is your name? ");
        var name = Console.ReadLine();
        var date = DateTime.Now;
        Console.WriteLine("\nHello, {0}, on {1:d} at {1:t}", name, date);
        Console.Write("\nPress any key to exit...");
        Console.ReadKey(true);
    }
}
runx helps to automate common tasks while doing research:
Install with pip:
pip install runx
Install from source:

git clone <runx repository URL>
cd runx
python setup.py install
Suppose you have an existing project that you call as follows:
> python train.py --lr 0.01 --solver sgd
To run a hyperparameter sweep, you'd normally have to code up a one-off script to generate the sweep. But with runx, you would simply define a yaml that defines lists of hyperparams that you'd like to use.
Start by creating a yaml file called sweep.yml:
CMD: 'python train.py'
HPARAMS:
  lr: [0.01, 0.02]
  solver: ['sgd', 'adam']
Now you can run the sweep with runx:
> python -m runx.runx sweep.yml -i

python train.py --lr 0.01 --solver sgd
python train.py --lr 0.01 --solver adam
python train.py --lr 0.02 --solver sgd
python train.py --lr 0.02 --solver adam
You can see that runx automatically computes the cross product of all hyperparameters, which in this case results in 4 runs. It then builds commandlines by concatenating the hyperparameters with the training command.
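The cross-product expansion described above can be sketched in a few lines of Python. This is only an illustration of the behavior, not runx's actual code, and the function name `expand` is made up:

```python
# Sketch (not runx's actual code): expand a dict of hyperparameter
# lists into the cross product of command lines, as runx does.
import itertools

def expand(cmd, hparams):
    # Normalize scalar values to one-element lists so product() works uniformly.
    keys = list(hparams)
    values = [v if isinstance(v, list) else [v] for v in hparams.values()]
    lines = []
    for combo in itertools.product(*values):
        args = " ".join(f"--{k} {v}" for k, v in zip(keys, combo))
        lines.append(f"{cmd} {args}")
    return lines

cmds = expand("python train.py", {"lr": [0.01, 0.02], "solver": ["sgd", "adam"]})
for c in cmds:
    print(c)
```

Running it prints the same four command lines as the runx invocation above.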
A few useful runx options:
-n  don't run, just print the command
-i  interactive mode (as opposed to submitting jobs to a farm)
Farm support is simple. First create a .runx file that configures the farm:
LOGROOT: /home/logs
FARM: bigfarm

bigfarm:
  SUBMIT_CMD: 'submit_job'
  RESOURCES:
    gpu: 2
    cpu: 16
    mem: 128
LOGROOT: this is where the output of runs should go
FARM: you can define multiple farm targets. This selects which one to use
SUBMIT_CMD: the script you use to launch jobs to a farm
RESOURCES: the arguments to present to SUBMIT_CMD
Now when you run runx, it will generate commands that will attempt to launch jobs to a farm using your SUBMIT_CMD. Notice that we left out the -i cmdline arg because now we want to submit jobs and not run them interactively.
> python -m runx.runx sweep.yml

submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.01 --solver sgd"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.01 --solver adam"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.02 --solver sgd"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.02 --solver adam"
We want the results for each training run to go into a unique output/log directory. We don't want things like tensorboard files or logfiles to write over each other.
runx solves this problem by automatically generating a unique output directory per run.
You have access to this unique directory name within your experiment yaml via the special variable LOGDIR. Your training script may use this path and write its output there.
CMD: 'python train.py'
HPARAMS:
  lr: [0.01, 0.02]
  solver: ['sgd', 'adam']
  logdir: LOGDIR
In the above experiment yaml, we have passed LOGDIR as an argument to your training script. When we launch the jobs, runx automatically generates unique output directories and passes the paths to your training script:
> python -m runx.runx sweep.yml

submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.01 --solver sgd --logdir /home/logs/athletic-wallaby_2020.02.06_14.19"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.01 --solver adam --logdir /home/logs/industrious-chicken_2020.02.06_14.19"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.02 --solver sgd --logdir /home/logs/arrogant-buffalo_2020.02.06_14.19"
submit_job --gpu 2 --cpu 16 --mem 128 -c "python train.py --lr 0.02 --solver adam --logdir /home/logs/vengeful-jaguar_2020.02.06_14.19"
After you've run your experiment, you will likely want to summarize the results. You might want to know:
You summarize your runs on the commandline with sumx. All you need to do is tell sumx which experiment you want summarized. sumx knows what your LOGROOT is (it'll get that from the .runx file), and so it looks within that directory for your experiment directory.
In the following example, we ask sumx to summarize the sweep experiment.
> python -m runx.sumx sweep --sortwith acc

        lr    solver  acc   epoch  epoch_time
------  ----  ------  ----  -----  ----------
run4    0.02  adam    99.1  10     5:11
run3    0.02  sgd     99.0  10     5:05
run1    0.01  sgd     98.2  10     5:15
run2    0.01  adam    98.1  10     5:12
sumx is part of the runx suite, and is able to summarize the different hyperparameters used as well as the metrics/results of your runs. Notice that we used the --sortwith feature of sumx, which sorts your results so you can easily locate your best runs.
This is the basic idea. The following sections will go into more details about all the various features.
runx consists of three main modules: runx, logx, and sumx.
These modules are intended to be used jointly, but if you just want to use runx, that's fine. However using sumx requires that you've used logx to record metrics.
In order to use runx, you need to create a configuration file in the directory where you'll call the runx CLI.
The .runx file defines a number of critical fields:
LOGROOT - the root directory where you want your logs placed. This is a path that any farm job can write to.
FARM - if defined, jobs should be submitted to this farm, else run interactively
SUBMIT_CMD - the farm submission command
RESOURCES - hyperparameters passed to the SUBMIT_CMD. You can list any number of these items; the ones shown below are just examples.
CODE_IGNORE_PATTERNS - ignore these file patterns when copying code to the output directory
Here's an example of such a file:
LOGROOT: /home/logs
CODE_IGNORE_PATTERNS: '.git,*.pyc,docs*,test*'
FARM: bigfarm

# Farm resource needs
bigfarm:
  SUBMIT_CMD: 'submit_job'
  RESOURCES:
    image: mydocker-image-big:1.0
    gpu: 8
    cpu: 64
    mem: 450

smallfarm:
  SUBMIT_CMD: 'submit_small'
  RESOURCES:
    image: mydocker-image-small:1.2
    gpu: 4
    cpu: 32
    mem: 256
runx has two levels of experiment hierarchy: experiments and runs. An experiment corresponds to a single yaml file, which may contain many runs. runx creates both a parent experiment directory and a unique subdirectory for each run. The name of the experiment directory is LOGROOT/<experiment name>; in the example of sweep.yml, the experiment name is sweep, derived from the yaml filename.
For example, this might be the directory structure for the sweep study:
/home/logs
  sweep/
    curious-rattlesnake_2020.02.06_14.19/
    ambitious-lobster_2020.02.06_14.19/
    ...
The individual run directories are named with a combination of coolname and date. The use of coolname makes it much easier to refer to a given run than referring to a date code.
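That naming scheme can be sketched as follows. The word lists here are invented stand-ins; runx itself draws names from the much larger coolname vocabulary:

```python
# Sketch of runx-style run naming: a coolname-like "adjective-animal"
# pair plus a datestamp, giving each run a unique, memorable directory.
import random
from datetime import datetime

ADJECTIVES = ["curious", "ambitious", "arrogant", "vengeful"]  # stand-ins
ANIMALS = ["rattlesnake", "lobster", "buffalo", "jaguar"]      # stand-ins

def unique_run_dir(logroot, experiment):
    name = f"{random.choice(ADJECTIVES)}-{random.choice(ANIMALS)}"
    stamp = datetime.now().strftime("%Y.%m.%d_%H.%M")
    return f"{logroot}/{experiment}/{name}_{stamp}"

print(unique_run_dir("/home/logs", "sweep"))
# e.g. /home/logs/sweep/curious-rattlesnake_2020.02.06_14.19
```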
If you include the RUNX.TAG field in your experiment yaml, or if you supply the --tag argument to the runx CLI, the names will include that tag.
runx actually makes a copy of your code within each run's log directory. This is done for a number of reasons:
CMD - Your base training command. You typically don't include any args here.
HPARAMS - All hyperparameters. This is a data structure that may either be a simple dict of params or may be a list of dicts. Furthermore, each hyperparameter may be a scalar, a list, or a boolean.
PYTHONPATH - This field is optional, for the purpose of altering the default PYTHONPATH (which is simply LOGDIR/code). It can be a colon-separated list of paths and may include the LOGDIR special variable.
CMD: "python train.py"
HPARAMS:
  logdir: LOGDIR
  adam: true
  arch: alexnet
  lr: [0.01, 0.02]
  epochs: 10
  RUNX.TAG: 'alexnet'
Here, 2 runs will be created, one per learning rate.
If you want to specify that a boolean flag should be on or off, this is done using the true and false keywords:

some_flag: [true, false]

This would result in one run with --some_flag and another run without that flag.
If instead you want to pass an actual string, you could instead do the following:

some_arg: ['True', 'False']

This would result in one run with --some_arg True and another run with --some_arg False.
If you'd like an argument to not be passed into your script at all, you can set it to None:

some_arg: None
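Taken together, the rules above amount to something like the following sketch. This is an illustration of the documented behavior, not runx's source, and `to_args` is a made-up name:

```python
# Sketch: turn one hyperparameter combination into CLI arguments,
# honoring the true/false/'True'/None rules described above.
def to_args(params):
    parts = []
    for key, value in params.items():
        if value is True:
            parts.append(f"--{key}")          # bare flag, no value
        elif value is False or value is None or value == "None":
            continue                           # argument omitted entirely
        else:
            parts.append(f"--{key} {value}")   # ordinary key/value pair
    return " ".join(parts)

print(to_args({"some_flag": True, "lr": 0.01}))  # --some_flag --lr 0.01
print(to_args({"some_flag": False}))             # (nothing)
print(to_args({"some_arg": "True"}))             # --some_arg True
```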
Oftentimes, you might want to define separate lists of hyperparameters in your experiment -- for example, a different set of learning rates for each architecture. You can do this with hparams defined as follows:
PYTHONPATH: LOGDIR/code:LOGDIR/code/lib
CMD: "python train.py"
HPARAMS: [
  {
    logdir: LOGDIR,
    adam: true,
    arch: alexnet,
    lr: [0.01, 0.02],
    epochs: 10,
    RUNX.TAG: 'alexnet',
  },
  {
    arch: resnet50,
    lr: [0.002, 0.005],
    RUNX.TAG: 'resnet50',
  },
  {
    RUNX.SKIP: true,
    arch: resnet50,
    lr: [0.002, 0.005],
    RUNX.TAG: 'resnet50',
  }
]
You might observe that hparams is now a list of dicts. The nice thing is that runx assumes inheritance from the first item in the list to all remaining dicts, so that you don't have to re-type all the redundant hyperparams.
When you pass this yaml to runx, you'll get the following out:
submit_job ... --name alexnet_2020.02.06_6.32 -c "python train.py --logdir ... --lr 0.01 --adam --arch alexnet --epochs 10"
submit_job ... --name alexnet_2020.02.06_6.40 -c "python train.py --logdir ... --lr 0.02 --adam --arch alexnet --epochs 10"
submit_job ... --name resnet50_2020.02.06_6.45 -c "python train.py --logdir ... --lr 0.002 --adam --arch resnet50 --epochs 10"
submit_job ... --name resnet50_2020.02.06_6.50 -c "python train.py --logdir ... --lr 0.005 --adam --arch resnet50 --epochs 10"
Because of inheritance, adam, arch, and epochs params are set identically in each run.
This is also showing the use of the magic variable RUNX.TAG, which allows you to add a tag to a subset of your experiment. This is the same as if you'd used the --tag option to runx.py, except that here you can specify the tag within the hparams data structure. The value of RUNX.TAG is not passed to your training script.
A very useful feature of RUNX.TAG is that you can reference other hyperparameters, for example:
arch: resnet50
RUNX.TAG: '{arch}-lrstudy'
This results in the tag becoming resnet50-lrstudy.
runx performs simple string matching and substitution when it finds curly braces.
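That substitution can be sketched as follows (illustrative only; `expand_tag` is a made-up name):

```python
# Sketch of RUNX.TAG expansion: each {name} in the tag is replaced
# with the corresponding hyperparameter's value for the current run.
def expand_tag(tag, hparams):
    for key, value in hparams.items():
        tag = tag.replace("{" + key + "}", str(value))
    return tag

print(expand_tag("{arch}-lrstudy", {"arch": "resnet50", "lr": 0.002}))
# resnet50-lrstudy
```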
In order to use sumx, you need to export metrics with logx. logx helps to write metrics in a canonical way, so that sumx can summarize the results.
logx also makes it easy for you to output log information to a file (and stdout). logx can also manage saving of checkpoints automatically, with the benefit that logx will keep around only the latest and best checkpoints, saving much disk space.
The basic way you use logx is to modify your training code in the following ways:
At the top of your training script (or any module that calls logx functions):
from runx.logx import logx
Before using logx, you must initialize it as follows:
logx.initialize(logdir=args.logdir, coolname=True, tensorboard=True)
Make sure that you're only calling logx from rank=0, in the event that you're using distributed data parallel.
Then, substitute the following logx calls into your code:
Finally, in order for sumx to be able to read the results of your run, you have to push your metrics to logx. You should definitely push the 'val' metrics, but can push 'train' metrics if you like (sumx doesn't consume them at the moment).
# define which metrics to record
metrics = {'loss': test_loss, 'accuracy': accuracy}
# push the metrics to the logfile
logx.metric(phase='val', metrics=metrics, epoch=epoch)
Some important points about logx.metric(): the phase argument describes whether the metric is a train or validation metric.
Here's a final feature of logx: saving of the model. This feature helps save not only the latest but also the best model.
save_dict = {'epoch': epoch + 1,
             'arch': args.arch,
             'state_dict': model.state_dict(),
             'best_acc1': best_acc1,
             'optimizer': optimizer.state_dict()}
logx.save_model(save_dict, metric=accuracy, epoch=epoch, higher_better=True)
You do have to tell save_model whether the metric is better when it's higher or lower.
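The bookkeeping behind keeping only the latest and best checkpoints can be sketched like this (an assumption about the general approach, not logx's actual implementation):

```python
# Sketch (not logx's implementation): track which checkpoint is "latest"
# and which is "best" under a metric, so only those two need to be kept.
class CheckpointKeeper:
    def __init__(self, higher_better=True):
        self.higher_better = higher_better
        self.best_metric = None
        self.best_epoch = None
        self.latest_epoch = None

    def save(self, metric, epoch):
        self.latest_epoch = epoch  # the latest checkpoint always replaces the previous one
        if self.best_metric is None:
            improved = True
        elif self.higher_better:
            improved = metric > self.best_metric
        else:
            improved = metric < self.best_metric
        if improved:
            self.best_metric, self.best_epoch = metric, epoch
        return improved  # caller would copy this checkpoint to "best" on True

keeper = CheckpointKeeper(higher_better=True)
for epoch, acc in enumerate([98.1, 99.0, 98.7]):
    keeper.save(acc, epoch)
print(keeper.best_epoch, keeper.latest_epoch)  # 1 2
```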
sumx summarizes the results of your runs. It requires that you've logged your metrics with logx.metric(). We chose this behavior instead of reading Tensorboard files directly because that would be much slower.
> python -m runx.sumx sweep

        lr    solver  acc   epoch  epoch_time
run4    0.02  adam    99.1  10     5:21
run3    0.02  sgd     99.0  10     5:02
run1    0.01  sgd     98.2  10     5:40
run2    0.01  adam    98.1  10     5:25
A few features worth knowing about:
--sortwith: sort the output by a particular field (like accuracy) that you care about most
--ignore: limit which fields sumx prints out
NGC support is now standard. Your .runx file should look like the following:

LOGROOT: /path/to/logroot
FARM: ngc

ngc:
  NGC_LOGROOT: /path/to/ngc_logroot
  WORKSPACE: <your ngc workspace>
  SUBMIT_CMD: 'ngc batch run'
  RESOURCES:
    image: nvidian/pytorch:19.10-py3
    instance: dgx1v.16g.1.norm
    ace: nv-us-west-2
    result: /result
Necessary steps:
- Create an NGC workspace and fill in WORKSPACE with it.
- Choose a log path within that workspace and fill in NGC_LOGROOT with this path. When the job is launched, this is also the path used to mount the workspace on the running instance.
- Fill in the RESOURCES. Recall that these parameters are passed on to the SUBMIT_CMD, which must be ngc batch run.
You should be able to launch jobs to NGC using this configuration. When jobs write their results, you should also be able to see the results in the mounted workspace, and then you should be able to run runx.sumx in order to summarize the results of those runs.
In today’s Programming Praxis exercise, we have to write functions to determine if one string is a subset of another as if we were in an interview. Let’s get started, shall we?
First, some imports.
import Data.List
import qualified Data.Map as M
import qualified Data.IntMap as I
My first attempt doesn’t actually work, since the intersect function only checks whether an element is a member of the second list. It doesn’t keep track of duplicates.
subsetOf1 :: Eq a => [a] -> [a] -> Bool
subsetOf1 xs ys = intersect xs ys == xs
Since a call to a library function won’t suffice, we’ll have to whip up something ourselves. The obvious way is to get a count of all the characters in both strings and check if the second string has an equal or higher count for all the characters in the first string. Of course this method is O(n * m), so it’s not very efficient.
subsetOf2 :: Ord a => [a] -> [a] -> Bool
subsetOf2 xs ys = all (\(c, n) -> maybe False (n <=) . lookup c $ count ys) $ count xs
    where count = map (\x -> (head x, length x)) . group . sort
To improve the big O complexity, we’re going to switch to a data type that has a faster lookup. Like the previous version, counting the frequency of each letter is O(n log n), but using Maps the comparison can now be done in O(m + n), meaning the overall complexity remains at O(n log n).
subsetOf3 :: Ord a => [a] -> [a] -> Bool
subsetOf3 xs ys = M.null $ M.differenceWith
    (\x y -> if x <= y then Nothing else Just x) (f xs) (f ys)
    where f = M.fromListWith (+) . map (flip (,) 1)
Since the keys of the map are characters, there’s a further optimization we can make. By converting the characters to integers we can use an IntMap instead of a plain Map. An IntMap has a lookup of O(min(n,W)), with W being the amount of bits in an Int. For any non-trivial n, this results in O(1) lookup. Counting all the letters can now be done in O(n). Since the comparison still takes O(m + n), the resulting complexity is O(m + n). This is the minimum we can achieve, as we need to fully evaluate both strings to produce an answer, which is an O(m + n) operation.
subsetOf4 :: Enum a => [a] -> [a] -> Bool
subsetOf4 xs ys = I.null $ I.differenceWith
    (\x y -> if x <= y then Nothing else Just x) (f xs) (f ys)
    where f = I.fromListWith (+) . map (flip (,) 1 . fromEnum)
A quick test shows that the first function indeed fails, but the other ones succeed.
main :: IO ()
main = do
    let test f = print (f "da" "abcd", not $ f "dad" "abcd")
    test subsetOf1
    test subsetOf2
    test subsetOf3
    test subsetOf4
I must say I hope that I have internet access during the interview, though. If not, I would have had to come up with an alternative for differenceWith and I might not have remembered the existence of IntMap. In that case I’d probably have gone with something along the lines of the array-based solution from number 4 of the Scheme solutions.
Tags: bonsai, code, Haskell, interview, kata, praxis, programming, string, subsets
November 23, 2010 at 12:25 pm |
Thanks for your functional approach.
I did it in an imperative style (it seemed so simple) and I did not know where to begin in a functional style. I'll look deeper into your code and try to understand the pattern (at first look, it does not seem that hard).
November 23, 2010 at 3:33 pm |
I know Haskell has a multi-set library. Does the library have an isSubset predicate? If so, just convert both strings to multi-sets of characters and check if the second is a subset of the first.
November 23, 2010 at 4:27 pm |
Hm. So it does. With that, the algorithm can be reduced to
I’ll have to remember that library, it might come in handy some day. Thanks.
November 23, 2010 at 8:16 pm |
This problem reminds me an article in reddit I’ve read 3-4 days ago:
November 23, 2010 at 8:19 pm |
Not surprising, since that same link is mentioned in the original exercise 🙂
Durable messages getting stuck in Queue — Gurvinderpal Narula, Jul 10, 2012 4:36 PM
Hello all,
We are running 2.2.5.Final (HQ_2_2_5_FINAL_AS7, 121) integrated with JBoss AS 5.1.0.Final. This configuration has been running for about 3-4 weeks now. However, we've run into a problem where we're now seeing several ( > 2500 ) messages getting stuck in the queue. It seems as though some messages are going through while others remain stuck.
Is there a way to determine / debug what's going on and why these messages are 'stuck'? There are other queues that have different consumers, and those messages are being consumed. It's just one of the queues that doesn't seem to be passing messages to its consumer. The consumer is an MDB deployed in the same instance of HornetQ!
Messages typically take a few milliseconds to process (consume). At this point the queue's MessageCount and ScheduledMessageCount are both fixed at 2602 messages. The 'MessagesAdded' count, however, keeps incrementing slowly. And the logs do show that new messages are being consumed and processed.
Is there a way to 'inspect' the queue's state to see what's causing the messages to stay in the queue? When I try to inspect the queue using Hermes, it shows that the queue is empty.
Any help will be appreciated.
Thanks
Groove
1. Re: Durable messages getting stuck in Queue — Gurvinderpal Narula, Jul 10, 2012 11:45 PM (in response to Gurvinderpal Narula)
On invoking 'listScheduledMessagesAsJSON', the first few messages are listed as follows:
{,"_HQ_SCHED_DELIVERY":1341949019203,,"_HQ_SCHED_DELIVERY":1341949019186,"pricing_upload_listener_value":1,"durable":true,"type":5},
{"timestamp":1341949014159,"userID":"ID:9a7612d9-cac6-11e1-8ddc-005056a500c1","messageID":32215946663,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019159,"pricing_upload_listener_value":1,"durable":true,"type":5},
{"timestamp":1341949014140,"userID":"ID:9a732ca5-cac6-11e1-8ddc-005056a500c1","messageID":32215946658,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019140,"pricing_upload_listener_value":1,"durable":true,"type":5},
{"timestamp":1341949014107,"userID":"ID:9a6e2391-cac6-11e1-8ddc-005056a500c1","messageID":32215946653,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019107,"pricing_upload_listener_value":1,"durable":true,"type":5},
{"timestamp":1341949014067,"userID":"ID:9a68090d-cac6-11e1-8ddc-005056a500c1","messageID":32215946648,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019067,"pricing_upload_listener_value":1,"durable":true,"type":5},
{"timestamp":1341949014049,"userID":"ID:9a6549e9-cac6-11e1-8ddc-005056a500c1","messageID":32215946643,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019049,"pricing_upload_listener_value":1,"durable":true,"type":5},
{"timestamp":1341949014031,"userID":"ID:9a628ac5-cac6-11e1-8ddc-005056a500c1","messageID":32215946638,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019031,"pricing_upload_listener_value":1,"durable":true,"type":5},
{"timestamp":1341949014013,"userID":"ID:9a5fcba1-cac6-11e1-8ddc-005056a500c1","messageID":32215946633,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019013,"pricing_upload_listener_value":1,"durable":true,"type":5}
When I try to remove/expire any messages using the corresponding messageID, the response I get is 'false' and the messages stay in the queue. Even changing message priority does not help; the messages stay stuck in the queue and their priority does not change.
Is there any other steps that can be taken to try and release or remove these messages ?
Any assistance/insight/help will truly be appreciated.
Thanks in advance,
Gurvinder
2. Re: Durable messages getting stuck in Queue — Andy Taylor, Jul 11, 2012 4:01 AM (in response to Gurvinderpal Narula)
The messages you have shown look like they aren't in the queue yet and are scheduled for delivery. However, they should be removable using the ID. Could you provide a test so we can take a look?
3. Re: Durable messages getting stuck in Queue — Gurvinderpal Narula, Jul 11, 2012 4:58 AM (in response to Andy Taylor)
Andy.
Thank you for a response.
Here's an update - we tried restarting the server earlier this morning. The 'state' of the messages seems to have changed:
{,"pricing_upload_listener_value":1,"durable":true,"type":5}
I no longer see the '_HQ_SCHED_DELIVERY' property in the message headers. Also, after we restarted the server, we noticed about 11 messages getting processed (I can't tell if these are new messages that got processed or if these were existing messages that were stuck earlier that went through).
I'm not sure how to provide a test! When I said that I tried to remove/expire these messages, I tried doing that through the application server's (jboss-5.1.0 + HornetQ) jmx-console. If you think that's not the right way to administer these messages and I should try something else, then please let me know. Or if you think zipping up the jboss folder and uploading it here so that you can take a look at what's going on would help, then I can do that as well.
4. Re: Durable messages getting stuck in Queue. Andy Taylor, Jul 11, 2012 5:09 AM (in response to Gurvinderpal Narula)
That implies that the scheduled messages were put on the queue and consumed; these will be new messages. Why do you think that they are stuck? Are the MDBs still active? (You can check to see if the queue has any consumers in the console.)
5. Re: Durable messages getting stuck in Queue. Gurvinderpal Narula, Jul 11, 2012 5:52 AM (in response to Andy Taylor)
We have 1 MDB (the backend system that processes our messages is only capable of processing one dataset at a time via a web service) configured and it's active :
This is the output from invoking listConsumersAsJSON :
[{"sessionID":"3e51326c-cb26-11e1-9bbd-005056a50108","connectionID":"3e4e734b-cb26-11e1-9bbd-005056a50108","creationTime":1341990091300,"browseOnly":false,"consumerID":0}]
The reason why I believe they're stuck is:
1. Our messages don't take more than about a second to process. Right now there is little to no activity, yet the message counts (MessageCount / ScheduledMessageCount) have not dropped at all in the last hour or so. The count has stayed fixed at 2465 since the server was restarted earlier today.
2. Even though the messages are there in the queue, there is no activity (incoming requests) being registered in the backend system.
When I do not see the message count or scheduled message count reducing, I'm assuming that they're 'stuck'. I can't tell why they're not processed at this point. Our MDB logs a lot of status information and we see these updates in the logs when messages are consumed. At this point I see very few messages coming from this consumer; I should see a lot more activity in the logs.
If the messages were consumed, why is the MessageCount still showing 2465? Our MessageCount never exceeds 5-7 at any given point in time. Yes, our ScheduledMessageCount does rise when our backend service goes down, but when it comes back up we normally see that drop as well (in 2-4 hours typically). It's now been 2 days and we have not seen these numbers drop.
6. Re: Durable messages getting stuck in Queue. Andy Taylor, Jul 11, 2012 6:34 AM (in response to Gurvinderpal Narula)
What is your MDB pool size? The reason I ask is that the default pool size is 15, and you should see 15 consumers.
Also, there may be a bug in the message count when a server is restarted.
Also, are you using transactions for the MDB? Check prepared tx's to make sure there are no pending tx's, i.e. messages that have been consumed but not committed.
7. Re: Durable messages getting stuck in Queue. Gurvinderpal Narula, Jul 11, 2012 7:23 AM (in response to Andy Taylor)
We have set our pool size to 1 and session size to 1.
Here are the annotations we have defined for the MDB.
@MessageDriven(
activationConfig = {
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/RetailPriceRequestQueue"),
@ActivationConfigProperty(propertyName = "maxSession",propertyValue = "1" )
})
@Pool(value=PoolDefaults.POOL_IMPLEMENTATION_STRICTMAX, maxSize=1)
public class ForwardResponse implements MessageListener {
Can you please let me know how to check for pending tx's?
I don't think the messages have been consumed, since even our backend service has not registered the data in these messages. In our system, we keep a log of the messages that are sent by the producer and we also log this data in our backend system. So what we're seeing is that several of the messages that have been sent by our producer have never made it to our backend system. I will still check the pending tx's once I've figured out how to. If you send me some pointers on how to do that, it would be great.
Andy, I would like to thank you for your effort in helping me out. Truly appreciate it.
Thanks again.
8. Re: Durable messages getting stuck in Queue. Clebert Suconic, Jul 11, 2012 3:26 PM (in response to Gurvinderpal Narula). 1 of 1 people found this helpful
You are using 2.2.5; there have been a few fixes since then.
One of the fixes was around PriorityLinkedList. The queue would lose messages (until you restarted the system), and there was another occurrence where this could happen after a redelivery.
9. Re: Durable messages getting stuck in Queue. Clebert Suconic, Jul 11, 2012 3:27 PM (in response to Clebert Suconic)
BTW: I'm not saying you're hitting the bug. Just that if you move to a later version, maybe the issue will go away.
10. Re: Durable messages getting stuck in Queue. Gurvinderpal Narula, Jul 11, 2012 4:49 PM (in response to Clebert Suconic)
Thanks for the update, Clebert. We'll work on upgrading to the later release. In the meantime, is there any way to clear out the existing queue? The reason I ask is because when we tried to resubmit these messages for processing, the resubmitted messages simply piled up in the queue again. From our logs we can make out that there are ~750 requests that have not been processed. Yesterday the queue had about 1500 messages that were in the 'stuck' state. When we resubmitted our requests for processing, the 750 requests simply got added to the queue and did not process. So now our queue is sitting at ~2250 unprocessed messages. We need to get these 750 requests processed ASAP. So is there a way we can 'reset' the PriorityLinkedList so that we can resubmit the ~750 requests?
Again, thank you and Andy for your help; I would really appreciate any additional assistance you can provide to resolve this. Unfortunately, upgrading to a new release is going to mean quite a bit of testing and we can't wait that long to process the 750 pending requests. So if there's a way to clear the current queue (like renaming the existing queue and creating a new one with the same name, etc.), it will be of tremendous value to us.
11. Re: Durable messages getting stuck in Queue. Andy Taylor, Jul 12, 2012 8:31 AM (in response to Gurvinderpal Narula)
If you use the console and delete using the ID, that would work. If it doesn't, then without some sort of test it's hard to really help. I've never seen an issue like this before, though.
12. Re: Durable messages getting stuck in Queue. Gurvinderpal Narula, Jul 16, 2012 10:22 AM (in response to Andy Taylor)
This is getting even worse: we're now seeing the same behaviour on a completely different server. We submitted about 700 messages to a test server on Friday (7/13). We normally see these messages being processed in about 45 minutes, but the queue has processed only about 200 messages so far. I'm going to try to delete these messages.
I'm prepared to provide a test, I'm just not sure how to go about this. Can you provide me guidance on all the artifacts needed for the test?
13. Re: Durable messages getting stuck in Queue. Clebert Suconic, Jul 16, 2012 11:00 AM (in response to Gurvinderpal Narula)
With all the indications so far, it seems that you are hitting a bug fixed after 2.2.5. It will be hard to fix a bug that was already fixed...
If you replicate it on the latest, then we can fix it.
14. Re: Durable messages getting stuck in Queue. Gurvinderpal Narula, Jul 16, 2012 1:08 PM (in response to Clebert Suconic)
Thanks, Clebert.
Is it possible to use 2.2.14 with JBoss 5.1.0.GA? I've tried to deploy 2.2.14 into a clean install of 5.1.0 and have run into this issue :
How do I resolve this ScopeKey issue?
I can't move forward to 7.1.1 until the entire application is migrated. We do have a separate initiative going towards that, but that's going to take several weeks and we can't really wait that long to resolve this issue.
https://developer.jboss.org/thread/202426
Truthfully, most users aren't very interested in finding the largest and smallest Python source files in their home directory, but doing so does provide an exercise in walking the file tree and using tools from the os module. The program in this post is a modified example taken from Programming Python: Powerful Object-Oriented Programming, where the user's home directory is scanned for all Python source files. The console outputs the two smallest files (in bytes) and the two largest files.
Code
import os
import pprint
from pathlib import Path

trace = False

# Get the user's home directory in a platform neutral fashion
dirname = str(Path.home())

# Store the results of all python files found
# in home directory
allsizes = []

# Walk the file tree
for (current_folder, sub_folders, files) in os.walk(dirname):
    if trace:
        print(current_folder)

    # Loop through all files in current_folder
    for filename in files:

        # Test if it's a python source file
        if filename.endswith('.py'):
            if trace:
                print('...', filename)

            # Assemble the full file path using os.path.join
            fullname = os.path.join(current_folder, filename)

            # Get the size of the file on disk
            fullsize = os.path.getsize(fullname)

            # Store the result
            allsizes.append((fullsize, fullname))

# Sort the files by size
allsizes.sort()

# Print the 2 smallest files
pprint.pprint(allsizes[:2])

# Print the 2 largest files
pprint.pprint(allsizes[-2:])
Sample Output
[(0, '/Users/stonesoup/.local/share/heroku/client/node_modules/node-gyp/gyp/pylib/gyp/generator/__init__.py'),
 (0, '/Users/stonesoup/.p2/pool/plugins/org.python.pydev.jython_5.4.0.201611281236/Lib/email/mime/__init__.py')]
[(219552, '/Users/stonesoup/.p2/pool/plugins/org.python.pydev.jython_5.4.0.201611281236/Lib/decimal.py'),
 (349239, '/Users/stonesoup/Library/Caches/PyCharmCE2017.1/python_stubs/348993582/numpy/random/mtrand.py')]
Explanation
The program starts with a trace flag that's set to False. When set to True, the program will print detailed information about what is happening. On line 8, we grab the user's home directory using Path.home(). This is a platform-neutral way of finding a user's home directory. Notice that we do have to cast this value to a string for our purposes. Finally, we create an empty allsizes list that holds our results.
Starting on line 15, we use the os.walk function and pass in the user's home directory. It's a common pattern to combine os.walk with a for loop so that we can traverse an entire directory tree. On each iteration, os.walk returns a tuple that contains the current folder, its sub-folders, and the files in the current folder. We are interested in the files.
Starting on line 20, the program enters a nested for loop that examines each file individually. On line 23, we test if the file name ends with '.py' to see if it's a Python source file. Should the test return True, we continue by using os.path.join to assemble the full path to the file. The os.path.join function takes into account the underlying operating system's path separator, so on Unix-like systems we get / while Windows systems get \ as a path separator. The file's size is computed on line 31 using os.path.getsize. Once we have the size and the file path, we add the result to allsizes for later use.
The program has finished scanning the user’s home folder once the program reaches line 37. At this point, we can sort our results from smallest to largest by using the sort() method on allsizes. Line 40 prints the two smallest files (using pretty print for better formatting) and line 43 prints the two largest files.
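If you only need a handful of extremes, sorting the whole list is more work than necessary. A small variation (a sketch, not from the book) uses heapq.nsmallest and heapq.nlargest, which keep only a k-element heap while scanning and so run in O(n log k) instead of O(n log n):

```python
import heapq

# Hypothetical stand-in for the allsizes list built by the walk above
allsizes = [(120, 'a.py'), (0, 'b.py'), (5000, 'c.py'), (42, 'd.py')]

# nsmallest/nlargest avoid sorting the entire list when
# only a few extremes are needed
print(heapq.nsmallest(2, allsizes))
print(heapq.nlargest(2, allsizes))
```

For a home directory with tens of thousands of files and k = 2, this noticeably reduces the post-scan work.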
References
Lutz, Mark. Programming Python. Beijing: O'Reilly, 2013.
https://stonesoupprogramming.com/tag/os-path-getsize/
Internal rate of return
From Wikipedia, the free encyclopedia
The internal rate of return (IRR) is a rate of return used in capital budgeting to measure and compare the profitability of investments. It is also called the discounted cash flow rate of return (DCFROR) or simply the rate of return (ROR).[1] In the context of savings and loans the IRR is also called the effective interest rate. The term internal refers to the fact that its calculation does not incorporate environmental factors (e.g. the interest rate).
Definition
The internal rate of return on an investment or potential investment is the annualized effective compounded return rate that can be earned on the invested capital.
In more familiar terms, the IRR of an investment is the discount rate at which the net present value of the costs (negative cash flows) of the investment equals the net present value of the benefits (positive cash flows) of the investment.
Uses
Because the internal rate of return is a rate quantity, it is an indicator of the efficiency or yield of an investment. A project is generally considered a good investment if its IRR is greater than the rate of return that could be earned by alternative investments (that is, if the project is profitable relative to the cost of capital).
Calculation

Given a collection of pairs (time, cash flow) involved in a project, the internal rate of return is the rate for which the net present value of those cash flows is zero. If an investment is given by the sequence of cash flows C_0, C_1, C_2, \ldots, C_N, then the IRR r is given by the solution of

    \mathrm{NPV} = \sum_{n=0}^{N} \frac{C_n}{(1+r)^n} = 0

In this case, the answer is 29%.
The IRR can also be found numerically, for example with the secant method. The secant formula initially requires two unique pairs of estimations of the IRR and NPV, (r_0, \mathrm{NPV}_0) and (r_1, \mathrm{NPV}_1), and produces a sequence

    r_{n+1} = r_n - \mathrm{NPV}_n \cdot \frac{r_n - r_{n-1}}{\mathrm{NPV}_n - \mathrm{NPV}_{n-1}}, \qquad n = 1, 2, 3, \ldots

that may converge to the IRR r as n \to \infty. If the sequence converges, then iterations of the formula can continue indefinitely so that r can be found to an arbitrary degree of accuracy.

Having r_1 > r_0 when \mathrm{NPV}_0 > 0, or r_1 < r_0 when \mathrm{NPV}_0 < 0, may speed up the convergence of r_n to r.
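The secant iteration is short to implement. The following sketch (illustrative, not part of the original article; the cash flow figures are invented) finds the IRR of a cash flow sequence and can be checked by confirming that the NPV at the returned rate is approximately zero:

```python
def npv(rate, cashflows):
    """Net present value of cash flows C_0..C_N at the given discount rate."""
    return sum(c / (1 + rate) ** n for n, c in enumerate(cashflows))

def irr(cashflows, r0=0.05, r1=0.10, tol=1e-12, max_iter=100):
    """Find the IRR with the secant iteration described above."""
    npv0, npv1 = npv(r0, cashflows), npv(r1, cashflows)
    for _ in range(max_iter):
        if npv1 == npv0:            # flat secant: cannot improve further
            break
        r2 = r1 - npv1 * (r1 - r0) / (npv1 - npv0)
        if abs(r2 - r1) < tol:      # converged to the requested accuracy
            return r2
        r0, npv0 = r1, npv1
        r1, npv1 = r2, npv(r2, cashflows)
    return r1

# Illustrative cash flows: an outlay of 100 followed by two receipts of 60
rate = irr([-100, 60, 60])
print(rate)
```

The self-check is simply that npv(rate, cashflows) is (numerically) zero, which is the defining equation of the IRR.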
Solution by Financial Calculator
Numerical iteration can easily become cumbersome and inefficient; using a financial calculator simplifies the process. Below we set up the variables for the BA II Plus financial calculator:
Press the "CF" key to bring up the Cash Flow screen, and enter the cash flow values. For this example, the frequency for each cash flow should be 1.
Press "IRR" to bring up a screen for the internal rate of return, and then press "CPT" to compute the solution.
Problems with using internal rate of return (IRR)
As an investment decision tool, the calculated IRR should not be used to rate mutually exclusive projects, but only to decide whether a single project is worth investing in. The calculation also implicitly assumes reinvestment, rather than consumption, of positive cash flows during the project. If positive cash flows can be reinvested back into the project, then a suitable reinvestment rate is required in order to calculate the reinvestment cash flow and hence the IRR with cash flows reinvested.
When the calculated IRR is different from the true reinvestment rate for interim cash flows, the measure will not accurately reflect the annual equivalent return from the project. The company may have additional projects, with equally attractive prospects, in which to invest the interim cash flows.[2] Accordingly, MIRR is used, which has an assumed reinvestment rate, usually equal to the project's cost of capital.
Despite a strong academic preference for NPV, surveys indicate that executives prefer IRR over NPV[citation needed].
- Extended Internal Rate of Return: The internal rate of return calculates the rate at which the investment made will generate cash flows. This method is convenient if the project has a short duration, but for projects spanning many years with cash flows at irregular dates it is less practical. To take the actual timing of each cash flow into account, the Extended Internal Rate of Return was introduced: each future cash flow is discounted according to the exact time at which it occurs, and the rate that makes the discounted sum zero is the result. This method of calculation is called the Extended Internal Rate of Return, or XIRR.
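To make the XIRR idea concrete, here is an illustrative sketch (dates and amounts invented): each cash flow is discounted by the actual number of days elapsed since the first flow, and the same secant iteration used for the ordinary IRR finds the root.

```python
from datetime import date

def xnpv(rate, cashflows):
    """NPV of (date, amount) pairs, discounting by actual days elapsed."""
    t0 = cashflows[0][0]
    return sum(cf / (1 + rate) ** ((d - t0).days / 365.0)
               for d, cf in cashflows)

def xirr(cashflows, r0=0.05, r1=0.10, tol=1e-12, max_iter=100):
    """Secant-method root of xnpv; analogous to spreadsheet XIRR."""
    f0, f1 = xnpv(r0, cashflows), xnpv(r1, cashflows)
    for _ in range(max_iter):
        if f1 == f0:
            break
        r2 = r1 - f1 * (r1 - r0) / (f1 - f0)
        if abs(r2 - r1) < tol:
            return r2
        r0, f0, r1, f1 = r1, f1, r2, xnpv(r2, cashflows)
    return r1

# Illustrative irregularly dated cash flows
flows = [(date(2011, 1, 1), -1000), (date(2011, 7, 1), 300),
         (date(2012, 1, 1), 800)]
r = xirr(flows)
print(r)
```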
See also
- Modified Internal Rate of Return
- Accounting
- Capital budgeting
- Cost of capital
- Finance
- Net present value
- Discounted cash flow
References
- ^ Project Economics and Decision Analysis, Volume I: Deterministic Models, M.A. Mian, page 269
- ^ a b Internal Rate of Return: A Cautionary Tale
Further reading
- Bruce J. Feibel. Investment Performance Measurement. New York: Wiley, 2003. ISBN 0471268496
http://ornacle.com/wiki/Internal_rate_of_return
Managed Data Access Inside SQL Server with ADO.NET and SQLCLR
Pablo Castro
Microsoft Corporation
April 2005
Applies to:
Microsoft SQL Server 2005
Microsoft .NET Framework 2.0
ADO.NET
Summary: Using the new SQLCLR feature, managed code can use ADO.NET when running inside SQL Server 2005. Learn about SQLCLR via basic scenarios of in-process data access, SQLCLR constructs, and their interactions. (26 printed pages)
Contents
Introduction
Part I: The Basics
Why Do Data Access Inside SQLCLR?
Getting Started with ADO.NET Inside SQLCLR
The Context Connection
Using ADO.NET in Different SQLCLR Objects
When Not to Use SQLCLR + ADO.NET
Part II: Advanced Topics
More on Connections
Transactions
Conclusion
Acknowledgements
Introduction
This white paper discusses how managed code can use ADO.NET when running inside SQL Server 2005 using the new SQLCLR feature.
In Part I, I describe the basic scenarios in which in-process data access might be required, and the difference between local and remote connections. Most of the different SQLCLR constructs are covered, such as stored procedures and functions, as well as interesting aspects of the interaction between the data access infrastructure and those constructs.
Part II further details in-process connections and restrictions that apply to ADO.NET when running inside SQLCLR. Finally, there is a detailed discussion on the transactions semantics of data access code inside SQLCLR, and how can it interact implicitly and explicitly with the transactions API.
Part I: The Basics
Why Do Data Access Inside SQLCLR?
SQL Server 2005 is highly integrated with the .NET Framework, enabling the creation of stored procedures, functions, user-defined types, and user-defined aggregates using your favorite .NET programming language. All of these constructs can take advantage of large portions of the .NET Framework infrastructure, the base class library, and third-party managed libraries.
In many cases the functionality of these pieces of managed code will be very computation-oriented. Things such as string parsing or scientific math are quite common in the applications that our early adopters are creating using SQLCLR.
However, you can do only so much using only number crunching and string manipulation algorithms in isolation. At some point you'll have to obtain your input or return results. If that information is relatively small and granular you can use input and output parameters or return values, but if you're handling a large volume of information, then in-memory structures won't be an appropriate representation/transfer mechanism; a database might be a better fit in those scenarios. If you choose to store the information in a database, then SQLCLR and a data-access infrastructure are the tools you'll need.
There are a number of scenarios where you'll need database access when running inside SQLCLR. One is the scenario I just mentioned, where you have to perform some computation over a potentially large set of data. The other is integration across systems, where you need to talk to different servers to obtain an intermediate answer in order to proceed with a database-related operation.
Note For an introduction and more details about SQLCLR in general, see Using CLR Integration in SQL Server 2005.
Now, if you're writing managed code and you want to do data access, what you need is ADO.NET.
Getting Started with ADO.NET Inside SQLCLR
The good news is that ADO.NET "just works" inside SQLCLR, so in order to get started you can leverage all your existing knowledge of ADO.NET.
To illustrate this take a look at the code snippet below. It would work fine in a client-side application, Web application, or a middle-tier component; it turns out that it will also work just fine inside SQLCLR.
C#

using (SqlConnection conn = new SqlConnection(
    "server=MyServer; user id=MyUser; password=MyPassword"))
{
    conn.Open();
    // the query and connection string are illustrative
    SqlCommand cmd = new SqlCommand("SELECT Name FROM Sales.Store", conn);
    using (SqlDataReader r = cmd.ExecuteReader())
    {
        while (r.Read())
        {
            // do some computation with it
        }
    }
}

Visual Basic .NET

Dim cmd As SqlCommand
Dim r As SqlDataReader

' the query and connection string are illustrative
Using conn As New SqlConnection( _
        "server=MyServer; user id=MyUser; password=MyPassword")
    conn.Open()
    cmd = New SqlCommand("SELECT Name FROM Sales.Store", conn)
    r = cmd.ExecuteReader()
    Do While r.Read()
        ' do some computation with it
    Loop
End Using
This sample uses the System.Data.SqlClient provider to connect to SQL Server. Note that if this code runs inside SQLCLR, it would be connecting from the SQL Server that hosts it to another SQL Server. You can also connect to different data sources. For example, you can use the System.Data.OracleClient provider to connect to an Oracle server directly from inside SQL Server.
For the most part, there are no major differences using ADO.NET from within SQLCLR. However, there is one scenario that needs a little bit more attention: what if you want to connect to the same server your code is running in to retrieve/alter data? See The Context Connection section to see how ADO.NET addresses that.
Before delving into further detail, I'd like to go through the basic steps to run code inside SQLCLR. If you already have experience creating SQLCLR stored procedures, you'll probably want to skip this section.
Creating a managed stored procedure that uses ADO.NET from Visual Studio
Visual Studio 2005 includes great integration with SQL Server 2005 and makes it really easy to create and deploy SQLCLR projects. Let's create a new managed stored procedure that uses ADO.NET using Visual Studio 2005.
- Create a SQLCLR project. The starting point for SQLCLR in Visual Studio is a database project. You need to create a new project by selecting the database project category under your favorite language. Next, select the project type called SQL Server Project, give it a name, and let Visual Studio create the project.
- Set permissions to EXTERNAL_ACCESS. Go to the properties of the project (right-click on the project node, choose Properties), choose the Database tab, and then from the Permission Level combo-box, choose External.
- Add a stored-procedure to the project. Once the project is created you can right-click on the project node and select Add -> New Item. On the pop-up dialog you'll see all the different SQLCLR objects that you can create. Select Stored Procedure, give it a name, and let Visual Studio create it. Visual Studio will create a template stored procedure for you.
- Customize the Visual Studio template. Visual Studio will generate a template for a managed stored procedure for you. Since you want to use SqlClient (the .NET data access provider for SQL Server) to connect to another SQL Server in this sample, you'll need to add at least one more using (C#) or imports (Visual Basic) statement for System.Data.SqlClient.
- Code the stored procedure body. Now you need the code for the stored procedure. Let's say you want to connect to another SQL Server (remember, your code will be running inside SQL Server), obtain some information based on input data, and process the results. Note that Visual Studio generated a SqlProcedure attribute for the stored procedure method; it is used by the Visual Studio deployment infrastructure, so leave it in place. If you take a look at the following code you'll notice that there is nothing different from old fashioned ADO.NET code that would run in the client or middle-tier. We love that part :)
C#

using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure()]
    public static void SampleSP()
    {
        // connect to another SQL Server, obtain data based on the
        // input, and process the results here
    }
}

Visual Basic .NET

Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.SqlServer.Server

Partial Public Class StoredProcedures
    <Microsoft.SqlServer.Server.SqlProcedure()> _
    Public Shared Sub SampleSP()
        ' connect to another SQL Server, obtain data based on the
        ' input, and process the results here
    End Sub
End Class
- Deploy your assembly. Now you need to deploy your stored procedure in SQL Server. Visual Studio makes it trivial to deploy the assembly to SQL Server and take the appropriate steps to register each of the objects in the assembly with the server. After building the project, on the Build menu, choose Deploy Solution. Visual Studio will connect to SQL Server, drop previous versions of the assembly if needed, send the new assembly to the server and register it, and then register the stored procedure that you added to the assembly.
- Try it out. You can even customize the "test.sql" file that's generated under the "Test Scripts" project folder to exercise the stored procedure you're working on so Visual Studio will execute it when you press Ctrl+F5, or just press F5. (Yes, F5 will start the debugger, and you can debug code inside SQLCLR—both T-SQL and CLR code—isn't that cool?)
Creating a managed stored procedure that uses ADO.NET using the SDK only
If you don't have Visual Studio 2005 handy, or you'd like to see how things work the first time before letting Visual Studio do it for you, here is how to create a SQLCLR stored procedure by hand.
First, you need the code for the stored procedure. Let's say you want to do the same as in the Visual Studio example: connect to another SQL Server, obtain some information based on input data, and process the results.
C#

using System.Data;
using System.Data.SqlClient;

public class SP
{
    public static void SampleSP()
    {
        // connect to another SQL Server, obtain data based on the
        // input, and process the results here
    }
}

Visual Basic .NET

Imports System.Data
Imports System.Data.SqlClient

Public Class SP
    Public Shared Sub SampleSP()
        ' connect to another SQL Server, obtain data based on the
        ' input, and process the results here
    End Sub
End Class
Again, nothing different from old fashioned ADO.NET :)
Now you need to compile your code to produce a DLL assembly containing the stored procedure. The following command will do it (assuming that you called your file myprocs.cs/myprocs.vb and the .NET Framework 2.0 is in your path):
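A typical invocation for the C# and Visual Basic compilers respectively (the exact flags here are an assumption; adjust paths and file names to your setup) looks like this:

```shell
csc /target:library /out:myprocs.dll myprocs.cs
vbc /target:library /out:myprocs.dll myprocs.vb
```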
This will compile your code and produce a new DLL called myprocs.dll. You need to register it with the server. Let's say you put myprocs.dll in c:\temp, here are the SQL statements required to install the stored procedure in SQL Server from that path. You can run this either from SQL Server Management Studio or from the sqlcmd command-line utility:
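Registration statements of this general shape would do it (the procedure name and the external-name mapping are assumptions based on the sample's SP class; adjust them to your method's actual name and signature):

```sql
CREATE ASSEMBLY myprocs
FROM 'c:\temp\myprocs.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS
GO

-- maps the T-SQL name to the static method on the SP class
CREATE PROCEDURE SampleSP
AS EXTERNAL NAME myprocs.SP.SampleSP
GO
```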
The EXTERNAL_ACCESS permission set is required because the code is accessing an external resource, in this case another SQL Server. The default permission set (SAFE) does not allow external access.
If you make changes to your stored procedure later on, you can refresh the assembly in SQL Server without dropping and recreating everything, assuming that you didn't change the public interface (e.g., changed the type/number of parameters). In the scenario presented here, after recompiling the DLL, you can simply execute:
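An ALTER ASSEMBLY statement of roughly this shape refreshes the code in place (path as in the earlier example):

```sql
ALTER ASSEMBLY myprocs
FROM 'c:\temp\myprocs.dll'
```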
The Context Connection
One data-access scenario that you can expect to be relatively common is that you'll want to access the same server where your CLR stored procedure or function is executing.
One option for that is to create a regular connection using SqlClient, specify a connection string that points to the local server, and open it.
Now you have a connection. However, this is a separate connection; this implies that you'll have to specify credentials for logging in—it will be a different database session, it may have different SET options, it will be in a separate transaction, it won't see your temporary tables, etc.
In the end, if your stored procedure or function code is running inside SQLCLR, it is because someone connected to this SQL Server and executed some SQL statement to invoke it. You'll probably want that connection, along with its transaction, SET options, and so on. It turns out that you can get to it; it is called the context connection.
The context connection lets you execute SQL statements in the same context that your code was invoked in the first place. In order to obtain the context connection you simply need to use the new context connection connection string keyword, as in the example below:
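A minimal sketch of opening the context connection (the query itself is illustrative):

```csharp
using (SqlConnection conn = new SqlConnection("context connection=true"))
{
    conn.Open();

    // this command runs in the caller's session and transaction
    SqlCommand cmd = new SqlCommand(
        "SELECT COUNT(*) FROM Purchasing.Vendor", conn);
    int count = Convert.ToInt32(cmd.ExecuteScalar());
}
```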
In order to see whether your code is actually running in the same connection as the caller, you can do the following experiment: use a SqlConnection object and compare the SPID (the SQL Server session identifier) as seen from the caller and from within the connection. The code for the procedure looks like this:
C#

using System;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure()]
    public static void SampleSP(string connstring, out int spid)
    {
        using (SqlConnection conn = new SqlConnection(connstring))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT @@SPID", conn);
            // @@SPID is a smallint, so convert rather than unbox-cast to int
            spid = Convert.ToInt32(cmd.ExecuteScalar());
        }
    }
}

Visual Basic .NET

Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.SqlServer.Server

Partial Public Class StoredProcedures
    <Microsoft.SqlServer.Server.SqlProcedure()> _
    Public Shared Sub SampleSP(ByVal connstring As String, ByRef spid As Integer)
        Using conn As New SqlConnection(connstring)
            conn.Open()
            Dim cmd As New SqlCommand("SELECT @@SPID", conn)
            spid = CType(cmd.ExecuteScalar(), Integer)
        End Using
    End Sub
End Class
After compiling and deploying this in SQL Server, you can try it out:
-- Print the SPID as seen by this connection
PRINT @@SPID

-- Now call the stored proc and see what SPID we get for a regular connection
DECLARE @id INT
EXEC SampleSP 'server=.;user id=MyUser; password=MyPassword', @id OUTPUT
PRINT @id

-- Call the stored proc again, but now use the context connection
EXEC SampleSP 'context connection=true', @id OUTPUT
PRINT @id
You'll see that the first and last SPID will match, because they're effectively the same connection. The second SPID is different because a second connection (which is a completely new connection) to the server was established.
Using ADO.NET in Different SQLCLR Objects
The "context" object
As you'll see in the following sections that cover different SQLCLR objects, each one of them will execute in a given server "context." The context represents the environment where the SQLCLR code was activated, and allows code running inside SQLCLR to access appropriate run-time information based on what kind of SQLCLR object it is.
The top-level object that surfaces the context is the SqlContext class that's defined in the Microsoft.SqlServer.Server namespace.
Another object that's available most of the time is the pipe object, which represents the connection to the client. For example, in T-SQL you can use the PRINT statement to send a message back to the client (if the client is SqlClient, it will show up as a SqlConnection.InfoMessage event). You can do the same in SQLCLR by using the SqlPipe object:
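A minimal sketch (the message text is illustrative):

```csharp
// equivalent to the T-SQL statement: PRINT 'Hello from SQLCLR'
SqlContext.Pipe.Send("Hello from SQLCLR");
```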
Stored procedures
All of the samples I used above were based on stored procedures. Stored procedures can be used to obtain and change data both on the local server and in remote data sources.
Stored procedures can also send results to the client, just like T-SQL stored procedures do. For example, in T-SQL you can have a stored procedure that does this:
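A T-SQL procedure of this shape returns a result set to the caller (this listing is a sketch modeled on the managed rewrite shown later in this section; the exact original is assumed):

```sql
CREATE PROCEDURE SampleSP @rating INT
AS
    SELECT VendorID, AccountNumber, Name
    FROM Purchasing.Vendor
    WHERE CreditRating <= @rating
GO
```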
A client running this stored procedure will see result sets coming back to the client (i.e., you would use ExecuteReader and get a SqlDataReader back if you were using ADO.NET in the client as well).
Managed stored procedures can return result sets, too. For stored procedures that are dominated by set-oriented statements such as the example above, using T-SQL is always a better choice. However, if you have a stored procedure that does a lot of computation-intensive work or uses a managed library and then returns some results, it may make sense to use SQLCLR. Here is the same procedure rewritten in SQLCLR:
C#

using System.Data;
using System.Data.SqlClient;
using System.Transactions;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure()]
    public static void SampleSP(int rating)
    {
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT VendorID, AccountNumber, Name FROM Purchasing.Vendor " +
                "WHERE CreditRating <= @rating", conn);
            cmd.Parameters.AddWithValue("@rating", rating);

            // execute the command and send the results directly to the client
            SqlContext.Pipe.ExecuteAndSend(cmd);
        }
    }
}

Visual Basic .NET

Imports System.Data
Imports System.Data.SqlClient
Imports System.Transactions
Imports Microsoft.SqlServer.Server

Partial Public Class StoredProcedures
    <Microsoft.SqlServer.Server.SqlProcedure()> _
    Public Shared Sub SampleSP(ByVal rating As Integer)
        Dim cmd As SqlCommand

        ' connect to the context connection
        Using conn As New SqlConnection("context connection=true")
            conn.Open()
            cmd = New SqlCommand( _
                "SELECT VendorID, AccountNumber, Name FROM Purchasing.Vendor " & _
                "WHERE CreditRating <= @rating", conn)
            cmd.Parameters.AddWithValue("@rating", rating)

            ' execute the command and send the results directly to the client
            SqlContext.Pipe.ExecuteAndSend(cmd)
        End Using
    End Sub
End Class
The example above shows how to send the results from a SQL query back to the client. However, it's very likely that you'll also have stored procedures that produce their own data (e.g., by performing some computation locally or by invoking a Web service) and you'll want to return that data to the client as a result set. That's also possible using SQLCLR. Here is a trivial example:
C#

using System.Data;
using System.Data.SqlClient;
using System.Transactions;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure()]
    public static void SampleSP()
    {
        // simply produce a 10-row result-set with 2 columns, an int and a string

        // first, create the record and specify the metadata for the results
        SqlDataRecord rec = new SqlDataRecord(
            new SqlMetaData("col1", SqlDbType.NVarChar, 100),
            new SqlMetaData("col2", SqlDbType.Int));

        // start a new result-set
        SqlContext.Pipe.SendResultsStart(rec);

        // send rows
        for (int i = 0; i < 10; i++)
        {
            // set values for each column for this row
            // This data would presumably come from a more "interesting" computation
            rec.SetString(0, "row " + i.ToString());
            rec.SetInt32(1, i);
            SqlContext.Pipe.SendResultsRow(rec);
        }

        // complete the result-set
        SqlContext.Pipe.SendResultsEnd();
    }
}

Visual Basic .NET

Imports System.Data
Imports System.Data.SqlClient
Imports System.Transactions
Imports Microsoft.SqlServer.Server

Partial Public Class StoredProcedures
    <Microsoft.SqlServer.Server.SqlProcedure()> _
    Public Shared Sub SampleSP()
        ' simply produce a 10-row result-set with 2 columns, an int and a string

        ' first, create the record and specify the metadata for the results
        Dim rec As New SqlDataRecord( _
            New SqlMetaData("col1", SqlDbType.NVarChar, 100), _
            New SqlMetaData("col2", SqlDbType.Int))

        ' start a new result-set
        SqlContext.Pipe.SendResultsStart(rec)

        ' send rows
        Dim i As Integer
        For i = 0 To 9
            ' set values for each column for this row
            ' This data would presumably come from a more "interesting" computation
            rec.SetString(0, "row " & i.ToString())
            rec.SetInt32(1, i)
            SqlContext.Pipe.SendResultsRow(rec)
        Next

        ' complete the result-set
        SqlContext.Pipe.SendResultsEnd()
    End Sub
End Class
User-defined functions
User-defined scalar functions already existed in previous versions of SQL Server. In SQL Server 2005 scalar functions can also be created using managed code, in addition to the already existing option of using T-SQL. In both cases the function is expected to return a single scalar value.
SQL Server assumes that functions do not cause side effects. That is, functions should not change the state of the database (no data or metadata changes). For T-SQL functions, this is actually enforced by the server, and a run-time error would be generated if a side-effecting operation (e.g. executing an UPDATE statement) were attempted.
The same restriction (no side effects) applies to managed functions. However, it is enforced less strictly. If you use the context connection and try to execute a side-effecting T-SQL statement through it (e.g., an UPDATE statement), you'll get a SqlException from ADO.NET. However, we cannot detect side-effecting operations performed through regular (non-context) connections. In general, it is better to play it safe and avoid side-effecting operations in functions unless you have a very clear understanding of the implications.
Also, functions cannot return result sets to the client as stored procedures do.
Table-valued user-defined functions
T-SQL table-valued functions (or TVFs) existed in previous versions of SQL Server. In SQL Server 2005 we support creating TVFs using managed code. We call table-valued functions created using managed code "streaming table-valued functions," or streaming TVFs for short.
- They are "table-valued" because they return a relation (a result set) instead of a scalar. That means that they can, for example, be used in the FROM part of a SELECT statement.
- They are "streaming" because after an initialization step, the server will call into your object to obtain rows, so you can produce them based on server demand, instead of having to create all the result in memory first and then return the whole thing to the database.
Here is a very simple example of a function that takes a single string with a comma-separated list of words and returns a single-column result-set with a row for each word.
C#

using System.Collections;
using System.Data;
using System.Data.SqlClient;
using System.Transactions;
using Microsoft.SqlServer.Server;

public partial class Functions
{
    // if you're using VS then add the following property setter to
    // the attribute below: TableDefinition="s NVARCHAR(4000)"
    [Microsoft.SqlServer.Server.SqlFunction(FillRowMethodName="FillRow")]
    public static IEnumerable ParseString(string str)
    {
        // Split() returns an array, which in turn
        // implements IEnumerable, so we're done :)
        return str.Split(',');
    }

    public static void FillRow(object row, out string str)
    {
        // "crack" the row into its parts. this case is trivial
        // because the row is only made of a single string
        str = (string)row;
    }
}

Visual Basic .NET

Imports System.Collections
Imports System.Data
Imports System.Data.SqlClient
Imports System.Transactions
Imports System.Runtime.InteropServices
Imports Microsoft.SqlServer.Server

Partial Public Class Functions

    ' if you're using VS then add the following property setter to
    ' the attribute below: TableDefinition:="s NVARCHAR(4000)"
    <Microsoft.SqlServer.Server.SqlFunction(FillRowMethodName:="FillRow")> _
    Public Shared Function ParseString(ByVal str As String) As IEnumerable
        ' Split() returns an array, which in turn
        ' implements IEnumerable, so we're done :)
        Return Split(str, ",")
    End Function

    Public Shared Sub FillRow(ByVal row As Object, <Out()> ByRef str As String)
        ' "crack" the row into its parts. this case is trivial
        ' because the row is only made of a single string
        str = CType(row, String)
    End Sub
End Class
If you're using Visual Studio, simply deploy the assembly with the TVF. If you're doing this by hand, execute the following to register the TVF (assuming you already registered the assembly):
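The registration statement itself did not survive in this copy of the article. A hedged reconstruction, assuming the assembly was registered under the name MyAssembly (adjust the name to match your own CREATE ASSEMBLY statement):

```sql
-- hypothetical assembly name; the table definition matches the
-- TableDefinition noted in the sample above
CREATE FUNCTION ParseString(@str NVARCHAR(4000))
RETURNS TABLE (s NVARCHAR(4000))
AS EXTERNAL NAME MyAssembly.Functions.ParseString;
```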
Once registered, you can give it a try by executing this T-SQL statement:
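The original statement is likewise missing here; something along these lines would exercise the TVF (the input string is, of course, arbitrary):

```sql
SELECT s FROM dbo.ParseString('one,two,three');
```

This should return a three-row, single-column result set with the values one, two, and three.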
Now, what does this have to do with data access? Well, it turns out there are a couple of restrictions to keep in mind when using ADO.NET from a TVF:
- TVFs are still functions, so the side-effect restrictions also apply to them.
- You can use the context connection in the initialization method (e.g., ParseString in the example above), but not in the method that fills rows (the method pointed to by the FillRowMethodName attribute property).
- You can use ADO.NET with regular (non-context) connections in both initialization and fill-row methods. Note that performing queries or other long-running operations in the fill-row method can seriously impact the performance of the SELECT statement that uses the TVF.
Triggers
Creating triggers is in many aspects very similar to creating stored procedures. You can use ADO.NET to do data access from a trigger just like you would from a stored procedure.
For the triggers case, however, you'll typically have a couple of extra requirements:
- You'll want to "see" the changes that caused the trigger to fire. In a T-SQL trigger you'd typically do this by using the INSERTED and DELETED tables. For a managed trigger the same still applies: as long as you use the context connection, you can reference the INSERTED and DELETED tables from your SQL statements that you execute in the trigger using a SqlCommand object.
- You'll want to be able to tell which columns changed. You can use the IsUpdatedColumn() method of the SqlTriggerContext class to check whether a given column has changed. An instance of SqlTriggerContext is available off of the SqlContext class when the code is running inside a trigger; you can access it using the SqlContext.TriggerContext property.
Another common practice is to use a trigger to validate the input data, and if it doesn't pass the validation criteria, then abort the operation. You can also do this from managed code by simply using this statement:
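The statement itself was lost in this copy. Given the System.Transactions integration described in the Transactions section, it was almost certainly this one-liner (a reconstruction, not verbatim from the original):

```csharp
// Aborts the entire surrounding transaction, including the statement
// that fired the trigger (requires a reference to System.Transactions).
System.Transactions.Transaction.Current.Rollback();
```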
Wow, what happened there? It is simple thanks to the tight integration of SQLCLR with the .NET Framework. See the Part II: Advanced Topics section, Transactions, for more information.
When Not to Use SQLCLR + ADO.NET
Don't just wrap SQL
If you have a stored procedure that only executes a query, then it's always better to write it in T-SQL. Writing it in SQLCLR will take more development time (you have to write T-SQL code for the query and managed code for the procedure) and it will be slower at run-time.
Whenever you use SQLCLR to simply wrap a relatively straightforward piece of T-SQL code, you'll get worse performance and extra maintenance cost. SQLCLR is better when there is actual work other than set-oriented operations to be done in the stored procedure or function.
Note The samples in this article are always parts of stored procedures or functions, and are never complete real-world components of production-level databases. I only include the minimum code necessary to exercise the ADO.NET API I am describing. That's why many of the samples contain only data-access code. In practice, if your stored procedure or function only has data-access code, then you should double-check and see if you could write it in T-SQL.
Avoid procedural row processing if set-oriented operations can do it
Set-oriented operations can be very powerful. Sometimes it can be tricky to get them right, but once they are there, the database engine has a lot of opportunities to understand what you want to do based on the SQL statement that you provide, and it can perform deep, sophisticated optimizations on your behalf.
So in general it's a good thing to process rows using set-oriented statements such as UPDATE, INSERT and DELETE.
Good examples of this are:
- Avoid row-by-row scans and updates. If at all possible, it's much better to try to write a more sophisticated UPDATE statement.
- Avoid custom aggregation of values by explicitly opening a SqlDataReader and iterating over the values. Either use the built-in aggregation functions (SUM, AVG, MIN, MAX, etc.) or create user-defined aggregates.
There are of course some scenarios where row-by-row processing using procedural logic makes sense. It's mostly a matter of making sure that you don't end up doing row-by-row processing for something that could be expressed in a single SQL statement.
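As a sketch of the first bullet (table and column names are made up for illustration), compare the row-by-row pattern with its set-oriented equivalent:

```sql
-- row-by-row (avoid): open a cursor over Product, fetch each row,
-- compute the new price in code, and issue one UPDATE per row.

-- set-oriented (prefer): a single statement the engine can optimize
UPDATE Product
SET ListPrice = ListPrice * 1.10
WHERE Category = 'Books';
```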
Part II: Advanced Topics
More on Connections
Choosing between regular and context connections
If you're connecting to a remote server, you'll always be using regular connections. On the other hand, if you need to connect to the same server you're running a function or stored procedure on, in most cases you'll want to use the context connection. As I mentioned above, there are several reasons for this, such as running in the same transaction space, and not having to reauthenticate.
Additionally, using the context connection will typically result in better performance and less resource utilization. The context connection is an in-process–only connection, so it can talk to the server "directly", meaning that it doesn't need to go through the network protocol and transport layer to send SQL statements and receive results. It doesn't need to go through the authentication process either.
There are some cases where you may need to open a separate regular connection to the same server. For example, there are certain restrictions in using the context connection described in the Restrictions for the context connection section.
What do you mean by connect "directly" to the server?
I mentioned before that the context connection could connect "directly" to the server and bypass the network protocol and transport layers. Figure 1 represents the primary components of the SqlClient managed provider, as well as how the different components interact with each other when using a regular connection, and when using the context connection.
Figure 1. Connection processes
As you can see, the context connection follows a shorter code path and involves fewer components. Because of that, you can expect the context connection to get to the server and back faster than a regular connection. Query execution time will be the same, of course, because that work needs to be done regardless of how the SQL statement reaches the server.
Restrictions for the context connection
Here are the restrictions that apply to the context connection that you'll need to take into account when using it in your application:
- You can have only one context connection opened at a given time for a given connection.
- Of course, if you have multiple statements running concurrently in separate connections, each one of them can get its own context connection. The restriction doesn't affect concurrent requests from different connections; it only affects a given request on a given connection.
- MARS (Multiple Active Result Sets) is not supported in the context connection.
- The SqlBulkCopy class will not operate on a context connection.
- We do not support update batching in the context connection.
- SqlNotificationRequest cannot be used with commands that will execute against the context connection.
- We don't support canceling commands that are running against the context connection. SqlCommand.Cancel() will silently ignore the request.
- No other connection string keywords can be used when you use context connection=true.
Some of these restrictions are by design and are the result of the semantics of the context connection. Others are actually implementation decisions that we made for this release and we may decide to relax those restrictions in a future release based on customer feedback.
Restrictions on regular connections inside SQLCLR
For those cases where you decide to use regular connections instead of the context connection, there are a few limitations to keep in mind.
Pretty much all the functionality of ADO.NET is available inside SQLCLR; however, there are a few specific features that either do not apply, or that we decided not to support, in this release. Specifically, asynchronous command execution and the SqlDependency object and related infrastructure are not supported.
Credentials for connections
You probably noticed that all the samples I've used so far use SQL authentication (user id and password) instead of integrated authentication. You may be wondering why I do that if we always strongly suggest using integrated authentication.
It turns out that it's not that straightforward to use inside SQLCLR. There are a couple of considerations that need to be kept in mind before using integrated authentication.
First of all, no client impersonation happens by default. This means that when SQL Server invokes your CLR code, it will be running under the account of the SQL Server service. If you use integrated authentication, the "identity" that your connections will have will be that of the service, not the one from the connecting client. In some scenarios this is actually intended and it will work fine. In many other scenarios this won't work. For example, if your SQL Server runs as "local system", then you won't be able to login to remote servers using integrated authentication.
Note Skip these next two paragraphs if you want to avoid a headache :)
In some cases you may want to impersonate the caller by using the SqlContext.WindowsIdentity property instead of running as the service account. For those cases we expose a WindowsIdentity instance that represents the identity of the client that invoked the calling code. This is only available when the client used integrated authentication in the first place (otherwise we don't know the Windows identity of the client). Once you have obtained the WindowsIdentity instance, you can call Impersonate to change the security token of the thread, and then open ADO.NET connections on behalf of the client.
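A minimal sketch of that impersonation pattern in C# (error handling omitted; note that SqlContext.WindowsIdentity returns null when the caller used SQL authentication):

```csharp
using System.Security.Principal;
using Microsoft.SqlServer.Server;

// inside the body of a SQLCLR procedure:
WindowsIdentity caller = SqlContext.WindowsIdentity;
if (caller != null)
{
    // switch the thread's security token to the calling client
    using (WindowsImpersonationContext ctx = caller.Impersonate())
    {
        // open ADO.NET connections on behalf of the client here
    }   // Dispose() reverts the thread to the service account
}
```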
It gets more complicated. Even if you obtained the instance, by default you cannot propagate that instance to another computer; Windows security infrastructure restricts that by default. There is a mechanism called "delegation" that enables propagation of Windows identities across multiple trusted computers. You can learn more about delegation in the TechNet article, Kerberos Protocol Transition and Constrained Delegation.
Transactions
Let's say you have a managed stored procedure called SampleSP that has the following code:
C#

// as usual, connection strings shouldn't be hardcoded for production code
using (SqlConnection conn = new SqlConnection(
    "server=MyServer; database=AdventureWorks; user id=MyUser; password=MyPassword"))
{
    conn.Open();

    // insert a hardcoded row for this sample
    SqlCommand cmd = new SqlCommand(
        "INSERT INTO HumanResources.Department " +
        "(Name, GroupName) VALUES ('Databases', 'IT'); SELECT SCOPE_IDENTITY()",
        conn);

    outputId = (int)cmd.ExecuteScalar();
}

Visual Basic .NET

Dim cmd As SqlCommand

' as usual, connection strings shouldn't be hardcoded for production code
Using conn As New SqlConnection( _
    "server=MyServer; database=AdventureWorks; user id=MyUser; password=MyPassword")

    conn.Open()

    ' insert a hardcoded row for this sample
    cmd = New SqlCommand("INSERT INTO HumanResources.Department " _
        & "(Name, GroupName) VALUES ('Databases', 'IT'); SELECT SCOPE_IDENTITY()", conn)

    outputId = CType(cmd.ExecuteScalar(), Integer)
End Using
What happens if you do this in T-SQL?
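The T-SQL block did not survive extraction; based on the paragraph that follows, it was along these lines (the UPDATE target is hypothetical, and the procedure's parameters are elided):

```sql
BEGIN TRAN

-- some local change (hypothetical table and values)
UPDATE HumanResources.Department
SET GroupName = 'Information Technology'
WHERE Name = 'Engineering';

-- invoke the managed procedure, which inserts a row on a remote server
EXEC SampleSP;

-- undoes the local UPDATE *and* the remote insert made by SampleSP
ROLLBACK;
```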
Since you did a BEGIN TRAN first, it's clear that the ROLLBACK statement will undo the work done by the UPDATE from T-SQL. But the stored procedure created a new ADO.NET connection to another server and made a change there, what about that change? Nothing to worry about—we'll detect that the code established ADO.NET connections to remote servers, and by default we'll transparently take any existing transaction with the connection and have all servers your code connects to participate in a distributed transaction. This even works for non-SQL Server connections!
How do we do this? We have GREAT integration with System.Transactions.
System.Transactions + ADO.NET + SQLCLR
System.Transactions is a new namespace that's part of the 2.0 release of the .NET Framework. It contains a new transactions framework that will greatly extend and simplify the use of local and distributed transactions in managed applications.
For an introduction to System.Transactions and ADO.NET, see the MSDN Magazine article, Data Points: ADO.NET and System.Transactions, and the MSDN TV episode, Introducing System.Transactions in .NET Framework 2.0.
ADO.NET and SQLCLR are tightly integrated with System.Transactions to provide a unified transactions API across the .NET Framework.
Transaction promotion
After reading about all the magic around distributed transactions for procedures, you may be thinking about the huge overhead this implies. It turns out that it's not bad at all.
When you invoke managed stored procedures within a database transaction, we flow the transaction context down into the CLR code.
As I mentioned before, the context connection is literally the same connection, so the same transaction applies and no extra overhead is involved.
On the other hand, if you're opening a connection to a remote server, that's clearly not the same connection. When you open an ADO.NET connection, we automatically detect that there is a database transaction that came with the context and "promote" the database transaction into a distributed transaction; then we enlist the connection to the remote server into that distributed transaction so everything is now coordinated. And this extra cost is only paid if you use it; otherwise it's only the cost of a regular database transaction. Cool stuff, huh?
Note A similar transaction promotion feature is also available with ADO.NET and System.Transactions when used from the client and middle-tier scenarios. Consult the documentation on MSDN for further details.
Accessing the current transaction
At this point you may be wondering, "How do they know that there is a transaction active in the SQLCLR code to automatically enlist ADO.NET connections?" It turns out that integration goes deeper.
Outside of SQL Server, the System.Transactions framework exposes the concept of a "current transaction," which is available through System.Transactions.Transaction.Current. We did essentially the same thing inside the server.
If a transaction was active at the point where SQLCLR code is entered, then the transaction will be surfaced to the SQLCLR API through the System.Transactions.Transaction class. Specifically, Transaction.Current will be non-null.
In most cases you don't need to access the transaction explicitly. For database connections, ADO.NET will check Transaction.Current automatically during connection Open() and it will enlist the connection in that transaction transparently (unless you add enlist=false to the connection string).
There are a few scenarios where you might want to use the transaction object directly:
- If you want to abort the external transaction from within your stored procedure or function. In this case, you can simply call Transaction.Current.Rollback().
- If you want to enlist a resource that doesn't do automatic enlistment, or for some reason wasn't enlisted during initialization.
- You may want to enlist yourself in the transaction, perhaps to be involved in the voting process or just to be notified when voting happens.
Note that although I used a very explicit example where I do a BEGIN TRAN, there are other scenarios where your SQLCLR code can be invoked inside a transaction and Transaction.Current will be non-null. For example, if you invoke a managed user-defined function within an UPDATE statement, it will run within a transaction even if one wasn't explicitly started.
Using System.Transactions explicitly
If you have a block of code that needs to execute within a transaction even if the caller didn't start one, you can use the System.Transactions API. This is, again, the same code you'd use in the client or middle-tier to manage a transaction. For example:
C#

using System.Data;
using System.Data.SqlClient;
using System.Transactions;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure()]
    public static void SampleSP()
    {
        // start a transaction block
        using (TransactionScope tx = new TransactionScope())
        {
            // connect to the context connection
            using (SqlConnection conn = new SqlConnection("context connection=true"))
            {
                conn.Open();
                // do some changes to the local database
            }

            // connect to the remote database
            using (SqlConnection conn = new SqlConnection(
                "server=MyServer; database=AdventureWorks;" +
                "user id=MyUser; password=MyPassword"))
            {
                conn.Open();
                // do some changes to the remote database
            }

            // mark the transaction as complete
            tx.Complete();
        }
    }
}

Visual Basic .NET

Imports System.Data
Imports System.Data.SqlClient
Imports System.Transactions
Imports Microsoft.SqlServer.Server

Partial Public Class StoredProcedures
    <Microsoft.SqlServer.Server.SqlProcedure()> _
    Public Shared Sub SampleSP()
        ' start a transaction block
        Using tx As New TransactionScope()
            ' connect to the context connection
            Using conn As New SqlConnection("context connection=true")
                conn.Open()
                ' do some changes to the local database
            End Using

            ' connect to a remote server (don't hardcode the conn string in real code)
            Using conn As New SqlConnection("server=MyServer; database=AdventureWorks;" & _
                "user id=MyUser; password=MyPassword")
                conn.Open()
                ' do some changes to the remote database
            End Using

            ' mark the transaction as completed
            tx.Complete()
        End Using
    End Sub
End Class
The sample above shows the simplest way of using System.Transactions. Simply put a transaction scope around the code that needs to be transacted. Note that towards the end of the block there is a call to the Complete method on the scope indicating that this piece of code executed its part successfully and it's OK with committing this transaction. If you want to abort the transaction, simply don't call Complete.
The TransactionScope object will do the "right thing" by default. That is, if there was already a transaction active, then the scope will happen within that transaction; otherwise, it will start a new transaction. There are other overloads that let you customize this behavior.
The pattern is fairly simple: the transaction scope will either pick up an already active transaction or start a new one. In either case, since it's in a "using" block, the compiler will introduce a call to Dispose at the end of the block. If the scope saw a call to Complete before reaching the end of the block, then it will vote commit for the transaction; on the other hand, if it didn't see a call to Complete (e.g., an exception was thrown somewhere in the middle of the block), then it will roll back the transaction automatically.
Note For the SQL Server 2005 release, the TransactionScope object will always use distributed transactions when running inside SQLCLR. This means that if there wasn't a distributed transaction already, the scope will cause the transaction to promote. This will cause overhead if you only connect to the local server; in that case, SQL transactions will be lighter weight. On the other hand, in scenarios where you use several resources (e.g., connections to remote databases), the transaction will have to be promoted anyway, so there is no additional overhead.
I recommend not using TransactionScope if you're going to connect only using the context connection.
Using SQL transactions in your SQLCLR code
Alternatively, you can still use regular SQL transactions, although those will handle local transactions only.
Using the existing SQL transactions API is identical to how SQL transactions work in the client/middle tier. You can either use SQL statements (e.g., BEGIN TRAN) or call the BeginTransaction method on the connection object. That returns a transaction object (e.g., SqlTransaction) that you can then use to commit or roll back the transaction.
These transactions can be nested, in the sense that your stored procedure or function might be called within a transaction, and it would still be perfectly legal for you to call BeginTransaction. (Note that this does not mean you get "true" nested transactions; you'll get the exact same behavior that you'd get when nesting BEGIN TRAN statements in T-SQL.)
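A sketch of the BeginTransaction pattern over the context connection (the UPDATE statement is hypothetical; error handling is kept minimal):

```csharp
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection("context connection=true"))
{
    conn.Open();
    SqlTransaction tran = conn.BeginTransaction();
    try
    {
        // hypothetical statement; runs inside the transaction
        SqlCommand cmd = new SqlCommand(
            "UPDATE HumanResources.Department SET GroupName = 'IT' " +
            "WHERE Name = 'Databases'",
            conn, tran);
        cmd.ExecuteNonQuery();
        tran.Commit();
    }
    catch
    {
        tran.Rollback();
        throw;
    }
}
```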
Transaction lifetime
There is a difference between transactions started in T-SQL stored procedures and the ones started in SQLCLR code (using any of the methods discussed above): SQLCLR code cannot unbalance the transaction state on entry/exit of a SQLCLR invocation. This has a couple of implications:
- You cannot start a transaction inside a SQLCLR frame and not commit it or roll it back; SQL Server will generate an error during frame exit.
- Similarly, you cannot commit or rollback an outer transaction inside SQLCLR code.
- Any attempt to commit a transaction that you didn't start in the same procedure will cause a run-time error.
- Any attempt to rollback a transaction that you didn't start in the same procedure will doom the transaction (preventing any other side-effecting operation from happening), but the transaction won't disappear until the SQLCLR code unwinds. Note that this case is actually legal, and it's useful when you detect an error inside your procedure and want to make sure the whole transaction is aborted.
Conclusion
SQLCLR is a great technology and it will enable lots of new scenarios. Using ADO.NET inside SQLCLR is a powerful mix that will allow you to combine heavy processing with data access to both local and remote servers, all while maintaining transactional correctness.
As with any other technology, this one has a specific application domain. Not every procedure needs to be rewritten in SQLCLR and use ADO.NET to access the database; quite the contrary, in most cases T-SQL will do a great job. However, for those cases where sophisticated logic or rich libraries are required inside SQL Server, SQLCLR and ADO.NET are there to do the job.
Acknowledgements
Thanks to Acey Bunch, Alazel Acheson, Alyssa Henry, Angel Saenz-Badillos, Chris Lee, Jian Zeng, Mary Chipman and Steve Starck for taking the time to review this document and provide helpful feedback.
The other day, my Yarn for Windows 092 started acting up. It handled
e-mail just fine, but if I tried to read any newsgroups, it would take
about a minute to display the first post, and about 30 seconds to go
from one post to the other. (This is on an NT box, and until this
time, it didn't give me any trouble.) After much frustration, I
finally thought to do a rebuild. That fixed it.
However, for no apparent reason, import quit working. Well, let me
clarify. When I was "upgraded" to NT, I found I had to do e-mail
imports with "import", but I had to import newsgroup posts with
"import95". Don't ask me why...I have no idea. But, that's what
worked...and I don't recall the error messages I was getting, or the
exact problems I was having. For some reason, import95 worked fine on
importing newsgroup stuff, but neither it nor import would handle my
e-mail, at all. Both would churn for a while, and then come back with
0 articles imported, and no messages about filtering anything to
INBOX, or to any other folder.
Nothing I tried helped. Finally, I turned to the mailing list
archives, and found a post from Michael Raiteri, from way back in May
of last year, who had a similar problem. He was apparently a beta
tester for Windows 2000 (the replacement for NT), and couldn't get
import to work for him. His message said that he replaced his copy of
import with the DOS version, and it worked just fine. So, I copied
over the import from yarn_092.zip that I just happened to have sitting
elsewhere on my HD, and lo and behold, it worked.
Thanks, Michael!
--
+------------------------------------------------------------------------+
| Dirk A. Loedding                <*>                   judge@america.net |
+------------------------------------------------------------------------+
aeiou, consider the following input. Your program returns 34.62 while the answer should be 27.85. By the way, if you want an alternative way to do it, you can consider writing each a in terms of a[1], a[0] and a constant. For instance, you can write a[2] as 2*(a[1]+c[1]) - a[0] and so forth until you can formulate a[n+1] in terms of a[1], a[0] and a constant. Then just solve for a[1].
1
2
50.50
43.45
10.15
10.15
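The telescoping approach described above can be sketched in Python (assuming the recurrence a[i] = (a[i-1] + a[i+1]) / 2 - c[i], i.e. a[i+1] = 2*(a[i] + c[i]) - a[i-1], and the input values shown):

```python
# Write every a[i] as p[i]*a1 + q[i]; p and q follow the same recurrence
# as a, so after n steps a[n+1] == p*a1 + q, and we can solve for a1.

def solve(n, a0, a_last, c):
    """Return a1 given a0, a[n+1] and the list c[1..n]."""
    p_prev, q_prev = 0.0, a0   # a[0] = 0*a1 + a0
    p_cur, q_cur = 1.0, 0.0    # a[1] = 1*a1 + 0
    for i in range(1, n + 1):
        p_next = 2 * p_cur - p_prev
        q_next = 2 * (q_cur + c[i - 1]) - q_prev
        p_prev, q_prev = p_cur, q_cur
        p_cur, q_cur = p_next, q_next
    # now a[n+1] == p_cur*a1 + q_cur
    return (a_last - q_cur) / p_cur

print("%.2f" % solve(2, 50.50, 43.45, [10.15, 10.15]))  # → 27.85
```

For the input above this prints 27.85, matching the expected answer rather than the 34.62 the posted program produces.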
10014 - Simple calculations
10014 got time limit error
Here is my code. It gets a time limit error. How can I solve this problem using this logic?
Code: Select all
#include <stdio.h>

int rec(long n)
{
    if (n % 10 > 0)
        return n % 10;
    else if (n == 0)
        return 0;
    else
        rec(n / 10);   /* note: as posted, this branch is missing a "return" */
}

int main()
{
    long p, q;
    long sum = 0;
    long i;

    while (scanf("%ld%ld", &p, &q) != 0)
    {
        if (p < 0 && q < 0)
            break;
        else
        {
            for (i = p; i <= q; i++)
                sum += rec(i);
        }
        printf("%ld\n", sum);
        sum = 0;
    }
    return 0;
}
Re: 10014 got time limit error
It looks like the wrong problem number. Try to think of a faster method. For example, if you were asked to sum the numbers from 1 to 1000000 would you iterate through them all?
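The moderator's hint made concrete: a sum over a contiguous range has a closed form, so there is no need to iterate. For 1..N, a sketch in Python:

```python
def gauss_sum(n):
    """Sum of 1..n in O(1) via n*(n+1)/2, instead of an O(n) loop."""
    return n * (n + 1) // 2

print(gauss_sum(1000000))  # 500000500000
```

The same idea (precompute or derive a formula, then answer each query in constant time) is what turns a time-limit-exceeded range-sum solution into an accepted one.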
10014 - Simple calculations
I can't understand why I got WA? Can anyone please help me. Here is my code:
N.B: Code will be removed after AC.
Code: Select all
#include <iostream>
#include <cstdio>
using namespace std;

int main()
{
    int n, test, i;
    double sum;
    double a1, a0, a2;
    double ci[3002];

    while (scanf("%d", &n) == 1)
    {
        cout << endl;
        sum = 0;
        scanf("%d", &n);
        scanf("%lf", &a0);
        scanf("%lf", &a2);
        for (i = 1; i <= n; i++)
        {
            scanf("%lf", &ci[i]);
        }
        for (i = 1; i <= n; i++)
        {
            sum += ci[i];
        }
        a1 = ((a0 + a2) / 2) - sum;
        printf("%0.2lf\n", a1);
    }
    return 0;
}
Re: 10014 - Simple calculations
I looked at the help and in the forums but couldn't find anything.
If there is an hscript expression that would work too.
Thanks
Posted 23 June 2012 - 08:32 PM
Posted 23 June 2012 - 11:34 PM
Basically something that would return the primitives that use a particular point.
Posted 23 June 2012 - 11:38 PM
Posted 23 June 2012 - 11:55 PM
Thanks rdg. How do you build a tree for the bounding boxes of primitives? Actually I also don't know how to get the bounding box of a primitive
Is there an expression for that? My geometry is a single connected polygon mesh, not sure if that matters.
Posted 24 June 2012 - 12:00 AM
Posted 24 June 2012 - 12:35 AM
Interestingly, there is a way to get the points of a primitive in Python, but not the other way around.
# This code is called when instances of this SOP cook.
node = hou.pwd()
geo = node.geometry()

# Add code to modify the contents of geo.
pointnumber = node.evalParm('val')

def GetAllPoints():
    """Map each primitive number to the point numbers it uses."""
    prim_points = {}
    for prim in geo.prims():
        points = []
        for vertex in prim.vertices():
            points.append(vertex.point().number())
        prim_points[prim.number()] = points
    return prim_points

def GetPointPrimitives(prim_points, pointnumber):
    """Return the primitives that use this point."""
    prims = []
    for k, v in prim_points.items():
        if pointnumber in v:
            prims.append(k)
    return prims

# MAIN()
print(GetPointPrimitives(GetAllPoints(), pointnumber))
magic happens here... sometimes
Vimeo
Twitter
Orbolt
"If it's not real-time, it's a piece of shit not a state of the art technology" - me
Posted 24 June 2012 - 01:46 AM
Posted 24 June 2012 - 01:53 AM
Thanks mantragora, that's the method I was talking about. But looking up the prims from the points would be slow. You could build another dictionary from yours where point numbers are the keys, but that would be even slower to construct.
These are the kinds of solutions I don't like implementing because they are not scalable. If I had 10 points in a mesh with 100 points, and it takes 1 ms to cook my SOP, having the same 10 points in a mesh with 1 million points would be 10,000 times slower, which would be 10 seconds (just an example), but it shouldn't be. I shouldn't pay that price because I am not modifying the whole mesh.
Reminds me of the Edit Poly limitations in Max (not Editable Poly), where even setting the position of a vertex/point would be an epic undertaking.
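The inverted mapping doesn't actually have to be slower to build: it can be constructed in the same single pass over the vertices. A minimal pure-Python sketch (the function name and the plain dict-of-point-numbers input are invented for illustration, since hou isn't available outside Houdini):

```python
def build_point_to_prims(prim_points):
    """Invert a prim -> points mapping into point -> prims in one pass.

    prim_points: dict mapping prim number -> list of point numbers.
    Cost is O(total vertex count), the same as building the forward
    mapping, and independent of how many points you later look up.
    """
    point_map = {}
    for prim, points in prim_points.items():
        for pt in points:
            point_map.setdefault(pt, []).append(prim)
    return point_map

# Two triangles sharing an edge (points 1 and 2).
prims = {0: [0, 1, 2], 1: [1, 2, 3]}
point_map = build_point_to_prims(prims)
print(point_map[1])  # [0, 1] -- both prims use point 1
```

Each subsequent lookup is then a constant-time dictionary access instead of a scan over every primitive.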
Posted 24 June 2012 - 02:00 AM
Use InlineCPP.
Posted 24 June 2012 - 07:02 AM
def buildPointPrimRefMap(geo):
    """ Build a dictionary whose keys are hou.Point objects and values are
    a list of hou.Primitive objects that reference the point. """
    point_map = {}
    for prim in geo.prims():
        for vert in prim.vertices():
            pt = vert.point()
            if not pt in point_map:
                point_map[pt] = []
            point_map[pt].append(prim)
    return point_map

This results in a dictionary where I can use a hou.Point to get any prims that reference it.
cpp_geo_methods = inlinecpp.createLibrary("cpp_geo_methods",
    includes="""#include <GU/GU_Detail.h>""",
    structs=[("IntArray", "*i"),],
    function_sources=[
"""
IntArray connectedPrims(const GU_Detail *gdp, int idx)
{
    std::vector<int> ids;
    GA_Offset ptOff, primOff;
    GA_OffsetArray prims;
    GA_OffsetArray::const_iterator prims_it;

    ptOff = gdp->pointOffset(idx);
    gdp->getPrimitivesReferencingPoint(prims, ptOff);
    for (prims_it = prims.begin(); !prims_it.atEnd(); ++prims_it)
    {
        ids.push_back(gdp->primitiveIndex(*prims_it));
    }
    return ids;
}
""",])

def connectedPrims(point):
    """ Returns a tuple of primitives connected to the point. """
    geo = point.geometry()
    result = cpp_geo_methods.connectedPrims(geo, point.number())
    return geo.globPrims(' '.join([str(i) for i in result]))
Edited by graham, 24 June 2012 - 07:05 AM.
When the documents get created during import, you get a primary key that's
created for you called '_id' which has an ObjectId type field.
As it turns out the first four bytes of the ObjectId are the timestamp of
its creation. So you can sort by _id as a proxy for sort by insert time,
in addition various MongoDB drivers provide methods to extract the
timestamp from the ObjectId - for example in MongoDB shell:
> var o=new ObjectId()
> o
ObjectId("51ae926b77bf7c394dfe0cc8")
> o.getTimestamp()
ISODate("2013-06-05T01:20:43Z")
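The same extraction can be done without any driver at all, since the timestamp is just the first four bytes of the hex string. A sketch (the helper name is invented for illustration):

```python
import datetime

def objectid_timestamp(oid_hex):
    """Extract the creation time from a 24-character hex ObjectId string.

    The first 4 bytes (8 hex characters) are a big-endian Unix
    timestamp in seconds.
    """
    seconds = int(oid_hex[:8], 16)
    return datetime.datetime.utcfromtimestamp(seconds)

print(objectid_timestamp("51ae926b77bf7c394dfe0cc8"))
# matches the shell's ISODate("2013-06-05T01:20:43Z") above
```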
Since the majority of time is likely spent serializing JSON objects into
BSON (native MongoDB format) you will likely get faster import if you can
split up your file and have several parallel jobs each running mongoimport
with a separate file.
The stopOnError option explicitly skips the duplicate key error, as can be seen from the source. I have filed a request to have the documentation specify this.
You're correct, map-reduce in mongo assumes a key with a single value
document - which is why you end up with the documents with just two
objects, a key and a (complex, nested) value object.
You could write some code to translate your .csv files into .json.
Alternatively, here is a bit of a hack. I exported the data (after using
the map-reduce process) to json using mongoexport, used sed to unpack the
value object, and then imported it back in using mongoimport. If you use
pipes, you don't need to save the whole file at any time (sed is a
streaming editor). For the example in the other post you referenced, you
use this:
mongoexport -c users_comments | sed 's/"value" : {//' | sed s/}$// |
mongoimport -c user_comments2
This one-liner will duplicate the users_comments collection, storing the unpacked documents in user_comments2.
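The same unpacking can be done in Python rather than sed, which is less fragile if the value object doesn't match the regular expressions. A sketch assuming one JSON document per line, as mongoexport emits (the function name and sample line are invented for illustration):

```python
import json

def unpack_value(line):
    """Merge a map-reduce result's nested "value" object into the
    top-level document, keeping the original key under _id."""
    doc = json.loads(line)
    value = doc.pop("value", {})
    doc.update(value)
    return doc

line = '{"_id": "bob", "value": {"comments": 3, "likes": 7}}'
print(unpack_value(line))
```

Piping mongoexport through a script that applies this per line, and then into mongoimport, reproduces the sed pipeline without the brittle brace-matching.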
If I understand correctly, you want a document for each unique MISDIN in
the first column, with each document having a subdocument for each MISDIN
in the second column with which the first MISDIN has incoming/outgoing
calls. So, for the data you provided, a document in the collection would
look like this:
{ _id: ObjectId("5237258211f41a0c647c47b1"),
MISDIN_mine: 7259555112,
call_records: [
{ MISDIN_theirs: 774561213,
incoming_count: 3,
outgoing_count: 4,
total_count: 7,
is_EE: 1
},
{ MISDIN_theirs: 774561214,
incoming_count: 4,
outgoing_count: 5,
total_count: 9,
is_EE: 1
}
... ]
}
Admittedly, I'm not sure what is_EE is supposed to represent, but let's get
the rest into place.
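Building those documents from the two-column call data is a straightforward grouping pass. A minimal sketch (field names follow the example above; the function name and input rows are invented for illustration):

```python
import csv
import io
from collections import defaultdict

def group_call_records(rows):
    """Group (mine, theirs, incoming, outgoing) rows into one document
    per distinct first-column MISDIN, with one subdocument per peer."""
    docs = defaultdict(list)
    for mine, theirs, inc, out in rows:
        docs[mine].append({
            "MISDIN_theirs": int(theirs),
            "incoming_count": int(inc),
            "outgoing_count": int(out),
            "total_count": int(inc) + int(out),
        })
    return [{"MISDIN_mine": int(k), "call_records": v}
            for k, v in docs.items()]

data = "7259555112,774561213,3,4\n7259555112,774561214,4,5\n"
docs = group_call_records(csv.reader(io.StringIO(data)))
print(docs[0]["call_records"][0]["total_count"])  # 7
```

The resulting list of dicts can then be handed to a driver's insert call, or re-serialized as JSON for mongoimport.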
In order to import the data in the format you want, first add
You might find this question/answer helpful. The OP there was attempting to
do the same thing.
The mongoimport command doesn't give you the option of skipping fields
during the import process, so that would require a full import followed by
$unset operations on the fields you intended to omit. Ultimately, that
would leave your collection with fragmentation.
You would be better served using fgetcsv or str_getcsv in PHP to parse the file. This would also give you the chance to validate/sanitize the input as necessary. Finally, MongoCollection::batchInsert() would efficiently insert multiple documents into MongoDB at once.
Your JSON seems to have only a single object. Format like {..},{..} is
expected.
So, use --jsonArray option:
mongoimport -d mydb -c mycollection --jsonArray < glossary.json
Well, I suppose I would first switch from using mongoexport/mongoimport to
using mongodump/mongorestore. Mongodump is faster, and also will preserve
all rich BSON data types, unlike mongoexport.
Also, the command db.bar.remove() will go through your collection document
by document and drop each one. Since you really want to just get rid of
everything, you can do this much more quickly by dropping the entire
collection wholesale with db.bar.drop(). This is much faster. However,
dropping the collection will also drop any indexes you have built for it,
so you will need to recreate those afterwards.
It shouldn't be necessary for you to run db.repairDatabase() after each migration, because MongoDB will reclaim the freed space from dropping your collection. What you can do is use compact.
MongoDB comes with mongoimport which will import json data.
However, this won't convert the date types. A manual import script could be used to convert "instance_start_time":"1371104307474652" into a datetime, so you can then query against it as a datetime rather than as a string.
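Such a conversion step can be sketched in Python: the value above is a microseconds-since-epoch string. (The helper name is invented for illustration.)

```python
import datetime

def parse_micro_epoch(value):
    """Convert a string like "1371104307474652" (microseconds since the
    Unix epoch) into a datetime, so it can be stored and queried as a
    real date rather than a string."""
    micros = int(value)
    return datetime.datetime.utcfromtimestamp(micros / 1_000_000)

dt = parse_micro_epoch("1371104307474652")
print(dt.year, dt.month)  # 2013 6
```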
Did you try the new solr-mongodb connector?
You will need to write a script in your favourite language that reads each
file, JSON-decodes it and then inserts them one by one into MongoDB. In
PHP, such a script would be akin to:
<?php
$f = glob("*.json");
$m = new MongoClient;
$c = $m->myDb->myCollection;
foreach ( $f as $fileName )
{
$contents = json_decode( file_get_contents( $fileName ) );
$c->insert( $contents );
}
?>
You don't need to store intermediate files: you can pipe the contents of the S3 file to stdout, and mongoimport can take its input from stdin.
Your full command would look something like:
s3cmd get s3://<yourFilename> - | mongoimport -d <dbName> -c
<collectionName>
Note the "-", which tells s3cmd to send the file to stdout rather than to a filename.
You forgot the --file option
This way (with no file given), mongoimport waits for data fed via stdin.
This is why you had to use Ctrl+C to interrupt it.
Add --file c:/review.csv to the mongoimport command line.
Metadata interfaces must not have any properties with setters. You should
modify the IPlugInMetadata interface so its properties won't have any
setters, otherwise the composition will fail:
interface IPlugInMetadata
{
string Name { get; }
}
Also, you should consider making your PlugInMetadataAttribute class inherit
from ExportAttribute rather than Attribute. That will allow using this
attribute as an export attribute and you won't have to use a
RegistrationBuilder.
EDIT: I think I found your problem
When trying to use ImportMany in the constructor, you must specify so
explicitly, so your constructor should look like this:
[ImportingConstructor]
public Engine([ImportMany] IEnumerable<Lazy<IPlugIn,
IPlugInMetadata>> plugins)
{
PlugIns = plugins;
}
Alternatively,
at the top of test.py add
import sys
sys.path.append("..")
base is not a folder on the path...once you change this it should work
or put test.py in the same folder as base. or move base to somewhere that
is on your path
I checked the above tutorial and it works for me. To make it work, I installed these packages:
python-setuptools python-pygame python-opengl
python-gst0.10 python-enchant gstreamer0.10-plugins-good python-dev
build-essential libgl1-mesa-dev libgles2-mesa-dev cython
and then install or update Kivy.
Once you have installed Kivy, just check it in the Python shell:
Python 2.7.4 (default, Apr 19 2013, 18:28:01)
[GCC 4.7.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import kivy
[INFO ] Kivy v1.7.1
If it shows the same result, then change this line in the tutorial code:
import kivy
kivy.require('1.7.1')  # change the Kivy version to match what the import above shows
That's the way mongoimport works. There's an existing new feature request
for merge imports, but for now, you'll have to write your own import to
provide merge behavior.
I think you forgot the 'new' keyword:
var userSchema = new Schema({ // <-- new
userName: String,
fullName: String,
});
var articleSchema = new Schema({ // <-- new
name: String,
content_date: Date,
content: String,
author: String
});
Looks like pip install evernote doesn't do what it is supposed to do:
cat /etc/SuSE-release
openSUSE 12.2 (x86_64)
VERSION = 12.2
CODENAME = Mantis
pip install evernote
Downloading/unpacking evernote
Downloading evernote-1.24.0.macosx-10.8-x86_64.tar.gz (326kB): 326kB
downloaded
Running setup.py egg_info for package evernote
Traceback (most recent call last):
File "<string>", line 16, in <module>
IOError: [Errno 2] No such file or directory:
'/tmp/pip-build-root/evernote/setup.py'
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 16, in <module>
IOError: [Errno 2] No such file or directory:
'/tmp/pip-build-root/evernote/setup.py'
----------------------------------------
Command python setup.py egg_info failed with error code 1 in
Reinstalling pyobjc seemed to fix it too
pip uninstall pyobjc
pip install pyobjc
Thanks @eryksun. But it still did not work. I instead used a Python wrapper for Hunspell called Pyhunspell. The actual link on PyPI does not work for Python 2.7, but it has been improved and upgraded here.
Apache is not pointing to your virtualenv. Check the very first line of your error trace:
/usr/lib/python2.7/dist-packages/django/core/handlers/base.py
It's pointing to the default Python location, not your virtualenv.
Try this to include the virtualenv path:
sys.path.append('/var/www/projects/openstack-horizon/.venv/local/lib/python2.7/site-packages')
Regards
Ansh Jain
You should not be using mongoexport and import if you are moving data
between two mongoDB instances. Those tools are intended for exchange of
data with external systems.
Use mongodump and mongorestore instead - they use bson which is MongoDB's
native format and they preserve all types.
You have specified MongoDBDialect in the configuration file, but in the console log you are getting NoopDialect on HHH000400.
In the next line, the connection was null.
And the last line says it was unable to create the requested service.
First, don't forget the Z. That is the ISO8601 indicator for "zulu time",
which is another way of saying "UTC" or "GMT". Without it, you are just
representing an "unspecified" time, or in other words - a calendar position
that is not a distinct moment in time because you didn't indicate how it
relates to UTC. (I am not sure if Mongo will allow those or not.)
Secondly, it looks like you are trying to do an exact equality match. The
value in the database has decimals, but the value you're querying does not,
so they will never match because they are not the same value.
You might instead want to do a range query. For example, you might do
this:
db.post.find({"dateCreated": {"$gte": new ISODate("2013-08-09T06:29:07Z"),
"$lt": new ISODate("2013-08-09T06:29:08Z")}})
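The same one-second window can be computed programmatically before handing the bounds to a driver. A pure-Python sketch (the helper name is invented for illustration):

```python
import datetime

def second_window(dt):
    """Return [start, end) bounds matching any timestamp that falls
    within the same whole second as dt, regardless of its fractional
    part -- the range-query trick described above."""
    start = dt.replace(microsecond=0)
    return start, start + datetime.timedelta(seconds=1)

start, end = second_window(datetime.datetime(2013, 8, 9, 6, 29, 7, 123456))
print(start, end)
```

The two values then become the $gte and $lt operands of the find call.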
MongoLab creates databases that require an authenticated user to access.
When you connect with the Shell, you will need to provide the UserName and
Password to the shell command. Docs are here.
mongo --username Mark --password something
You will need that Username/Password combination to be configured within
mongoose as well. The Mongoose docs have details on the possible ways to do
this.
Note that you are using a very old shell. 1.8.3 is about 4 versions back
from the current 2.4.* line. This is not directly related to your problem,
but it's definitely something you should rectify going forward.
mongoimport will always use the default double representation.
It can't be used to differentiate between double and long.
See Reference > MongoDB Package Components > mongoimport
Note Do not use mongoimport and mongoexport for full instance, production
backups because they will not reliably capture data type information. Use
mongodump and mongorestore as described in Backup Strategies for MongoDB
Systems for this kind of functionality.
I had the same Problem. The solution was to edit the /etc/hosts file and
add at the first line:
127.0.0.1 localhost localhost
See also:
Unable to import Maven project into IntelliJ IDEA
MongoDB stores data in a totally different format, called BSON, which is
going to take up more disk space. Not only do the values need to be stored
for each field, it also will have to store the column names again in each
document (row). If you have large column names, this can definitely
increase the size in MongoDB to be 8 to 10 times of your CSV file. If
possible, you can look at shortening your field names if this is too much
for you.
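A rough feel for the field-name overhead can be computed from the keys alone. A back-of-the-envelope sketch (BSON also adds type bytes, length prefixes, and padding, which are ignored here; the function name and numbers are invented for illustration):

```python
def keyname_overhead(field_names, doc_count):
    """Bytes spent storing field names alone across a collection:
    each key is stored per document as a C string (name + NUL byte)."""
    per_doc = sum(len(name) + 1 for name in field_names)
    return per_doc * doc_count

# 10 fields named like "transaction_price", repeated over a million rows:
print(keyname_overhead(["transaction_price"] * 10, 1_000_000))  # 180000000
```

Roughly 180 MB just for key names in that example, which is why shortening field names can shrink a MongoDB collection dramatically.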
MongoDB also preallocates data files for you. For example, the moment it starts adding data to taq.2, it will create taq.3, and similarly when it starts writing into taq.4 it creates taq.5. So in your case, say your 230MB file would create 1.9GB of data, MongoDB has already allocated the 2.0GB-sized taq.5. This behaviour can be turned off by specifying --noprealloc.
Had the same problem (used Web Deploy 2.0).
I changed to Web Deploy 3.5 and it's working now.
(I uninstalled Web Deploy 2.0 via Control Panel first and used the Web Platform Installer in IIS to install the newer version on Server 2012; simply search for Web Deploy there.)
The following works perfectly fine for me, and it's modified because some
of your code seemed a little incomplete.
inner.py
#!/usr/bin/python
import os
import datetime
print os.getcwd()
main.py
#!/usr/bin/python
import os
import datetime
import subprocess
import sys
# <---- Some Code--->
subprocess.call([sys.executable, "inner.py"])
You likely will want to add a route specifically for the "news":
app.get('/news', function(req, res, next){
News.
find().
exec(function(err, nws){
if(err) { res.writeHead(500, err.message) }
else {
res.send(nws);
}
});
});
Right now, you've got it returning the basics of a web page
(res.render('index', { title: 'static Express' });) and then the response
from the find call. Angular won't accept that as a result, and it explains
why you're seeing the DOCTYPE in the response.
And then call it from your Angular code:
var News = $resource('/news');
I have found the issue finally:
The important thing is to create a CLEAN GeoTIFF file in Matlab (RGB and
alpha layer for transparency). Here some Matlab guidance, the resulting
GeoTIFF can directly be imported into WorldWind:
%%% read intensity values Z (2D matrix) - with values of 0 and above
%%% (we want 0 to be completely transparent in the final geotiff) -
%%% together with spatialref.GeoRasterReference ss
[Z, ss] = geotiffread('./flddph_1976-01-01.tif');
info_3 = geotiffinfo('./flddph_1976-01-01.tif');
%%% generate indexed image with 0 to 255 (255 equals max. intensity)
indexedimage = gray2ind(Z);
indexedimage = double(indexedimage);
%%% normalize so that everything between 0 and 1
normalizedimg = (indexedimage) / 255;
%%% scaling data and applying colormap
imgscaled = uint8(25
I work with doctrine this way:
Controller:
$products =
$this->getDoctrine()->getRepository("AcmeStoreBundle:Product")->findProducts();
ProductRepository:
class GalleryRepository extends EntityRepository
{
public function findProducts(){
return
$this->getEntityManager()->getRepository("TLWEntitiesBundle:Product")->findAll();
}
}
The rest of the code seems ok.
So, what you're seeing here is their distribution model. Usually a module
will have one root import that everything stems from, but that's not
necessarily the case. They're providing a package with (what I assume) is
many modules that don't interact with each other; or they can all stand
alone.
instead of importing each package individually, you could use the 'from'
keyword:
from ROOTFOL.PACKAGE import *
which will grab everything inside that sub-module. You could e-mail the
developer and ask why they deployed it this way...or you could add your own
__init__.py to the root folder and,
from ROOTFOL import *
which will walk the tree. Good luck!
Works for me:
danielallan@MacBook:~$mkdir myproject
danielallan@MacBook:~$cd myproject/
danielallan@MacBook:myproject$mkdir lib
danielallan@MacBook:myproject$cd lib
danielallan@MacBook:lib$touch __init__.py
danielallan@MacBook:lib$touch view.py
danielallan@MacBook:lib$touch common_lib.py
danielallan@MacBook:lib$cd ..
In [1]: from lib import view
In [2]: view
Out[2]: <module 'lib.view' from 'lib/view.pyc'>
What happens when you try that on your machine? Are you sitting in the
wrong directory, or is your path not configured to find these files?
Slashdot
Reiser On ReiserFS's Future And More
Steven Haryanto writes: "This one's from Indonesia. InfoLinux did an email interview with Hans Reiser, in which he explained about the ReiserFS project plan and the new Namesys business model. Mr. Reiser told me that Namesys recently received $600K funding from DARPA to include encryption in ReiserFS v4.0." Dig this quote: "We are going to add plugins in our next major version, and we hope that plugins will do for filesystems what they did for Photoshop." Mmmm -- encrypted, compressed, journaling, extensible filesystems. Reiser also touches on issues of international software development and how programmers can achieve fame.
Mmmm... CPU cycles (Score:5)
Perhaps I'm just not up on the latest compression techniques (most likely), but those questions just popped in my head.
Either way, this is just further down the road of increasing CPU requirements just to drive the friggin disk. Ick. I miss SCSI.
--
Modular Plugins != plugin modules. (Score:3)
Just because something is modular in the kernel doesn't mean it can only be a module. The only case that this exists, AFAIK, is the protocol-specific masquerading modules.
Maybe against the current recommendations, anything that I don't have to load as a module (my AWE32 and Masq mods) gets compiled into the kernel. Why? Because it's not like I won't need the features - that's why I selected them for compile in the first place.
If you encrypt all of your main filesystems, then you'll just have a
/boot partition with vmlinuz on it, and the encrypted filesystem mods already loaded. Load the kernel, find the encrypted root, and *Bam* there's your newly-readable filesystem. This isn't rocket science.
Ooops... (Score:5)
Re:encrypted, compressed, journaling.... (Score:4)
I remember when the Linux kernel introduced modules and in the race to out-module one another, a lot of newbies rebuilt their kernels with every single filesystem as a module. Ahh, those were the days...
Re:great news, xfs (Score:3)
I'm glad to see ReiserFS aggressively pushing the technology envelope, but I have nothing but good things to say about XFS, and would recommend it to anyone using a recent kernel who wants a robust journaling filesystem.
As others have said I think there is room for both filesystems going forward.
DARPA (Score:3)
Nice to see that DARPA [darpa.mil] (the Defense Advanced Research Projects Agency) is still funding useful things like this. Remember that they funded the internet when it first started. They're usually up to something interesting.
SQL crippling? (Score:3)
"SQL has been crippling the database industry for decades"
I'll admit it's not the best syntax to manipulate a database, but before SQL there was no uniformity in database access. At least you don't have to learn a new "language" for each database you maintain/access. I don't see how it's crippling. I am curious about any solution he would propose.
LinuxTag (Score:3)
Hans Reiser is also going to speak [linuxtag.org] at LinuxTag 2001 [linuxtag.de] in Stuttgart, Germany. From the LinuxTag website:
Is it even legal for DARPA to fund GPLed code? (Score:3)
GPLing the code is also bad policy because people should be able to use technology that's developed with their tax dollars for any purpose. The GPL prevents enterprising programmers from using the code in their own products and making money from those products. It therefore seems to me that DARPA should either NOT fund such a project or insist that the code that is generated be placed in the public domain -- or at least licensed under the BSD License or the MIT license. After all, it's our tax dollars.... None of us should be denied the use of the code.
--Brett Glass
Plugins do the same as for photoshop? (Score:4)
Get ready for the RDF? (Score:3)
What, make them run 55% faster on a Mac than on an equivalent PIII?
Re:DARPA (Score:3)
Re:Mmmm... CPU cycles (Score:3)
CPUs cycles are getting cheaper than I/O and memory bandwidth, but this kind of thing makes it hard to do DMA.
Random access to files is not particularly helped by using the "latest compression techniques", it's more a matter of how you design the filesystem.
You pretty much have to do compression if you're going to do encryption, since uncompressed data will have lots of cribs and repeated series of known data which will make cryptanalysis easier.
Most files are not accessed randomly, and of those that are, most of the file is eventually accessed even if it is done non-sequentially.
Most random accesses occur on pagesize boundaries. Even if you are accessing one single byte in the middle of the page, modern VM-integrated buffer caches will fetch the whole page to do it. So what you need is not the ability to seek to any random point in the middle of the file, but just to any page boundary. Much simpler. You can store tables of these things.
If you write to the middle of the file and blow the compressibility of the data, then you punch out the page and relocate it on the disk. Sequential contiguity suffers, but heck, you weren't accessing this file sequentially anyway, so why are you complaining?
Has the world gone totally mad? (Score:4)
Now the US Defense Department is paying a bunch of Russians (oh, the irony, the irony) to do exactly this!
And the Bush administration is paying Osama bin Laden 40 mil for his valiant efforts on behalf of the War on (some) Drugs?
What's next? An invitation for Fidel Castro to spend the night at the White House and drop E with President Junior? Saturday Night Live quits doing White House satires because "we just can't keep up"?
You got the url wrong (Score:4)
Re:great news, xfs (Score:5)
Reiser is planning on selling their modules in the future, make a new feature to be sold and change the previously sold module to be free. Their entire business model depends on them having newer and newer features, which is great for people who are wanting/needing feature over stability.
XFS is leaning more towards the datacenter type of situation, it may not have the latest and greatest, but it will work reliably, constantly, and with great performance. XFS is looking towards Linux as their OS platform, they have to give the same quality of filesystem they had on Irix to their customers who demand that quality. (when buying a multi-million dollar 512proc numa system they tend to require lots of stability).
On the competing filesystems, Steve Lord from the XFS mailing list probably said it best:
"..."
Reiser will be used for the things it's good at (squid, mail spool, new features) and XFS will be used for the things it's good at (larger files, NFS server, stability). They compete only that they are filesystems, but what they are designed to be good at are two different things.
NFS plugin? (Score:3)
I won't be rushing again to stick ReiserFS on an NFS system any time soon....
RTWP (Score:3)
Actually he feels that it's relational databases that are crippling. In his whitepaper [reiserfs.com] advocating a unified namespace, he proposes a hierarchical model that fits to data rather than fitting the data to the model.
Photoshop plugins? (Score:5)
Fame (Score:4)
That should be pretty easy - create a kick-ass piece of software that everybody uses & name it after yourself (like he did
:).
Re:Mmmm... CPU cycles (Score:3)
Novell takes the scheduled approach -- files are always written uncompressed. Later, at a specified time, the system looks for "compressible" files and compresses them. In this way, only decompression is performed on the fly.
-------
-- russ
"You want people to think logically? ACK! Turn in your UID, you traitor!"
First things first... (Score:3)
Before going into implementing new features, I'd prefer if they made ReiserFS rock solid first.
At the moment, and after hearing recent scary stories about problems with ReiserFS (but I personnally haven't ever had any big trouble), I switched to XFS and enjoy it just as much.
I also find SGI much more quick at getting things stable and finished, which is fairly important IMHO.
Matthias
Re:stability?? (Score:3)
- ReiserFS itself
- The provider of the plug-in.
and that ReiserFS itself is stable, you do have to be careful with your choice of plug-in.
If you care about your files be as careful about your plug-ins as you are about the manufacturer of your brake-disks, or your gas oven, say.
No-one will _force_ you to use any particular plug-in, you simply have to look for advocacy stories, and make sure you're not on the bleeding edge.
THL.
--
Re:Photoshop plugins? (Score:3)
You forgot
* Invert - learn to read documents backwards...
* Fade - You didn't need _all_ of the data did you?
* Rotate - Put data on the next sector to the one the FS headers say it's on.
* Merge - Great if you're short of space.
THL.
--
Re:Mmmm... CPU cycles (Score:4)
Compressed FSs have always had this problem. The best solution that people have come up with is one that we have already implemented - caches.
In particular, the hairy writing-to-disk stage (where the compression and on-the-fly construction of the dictionary etc. take place) only needs to be done on file closure.
For some files, such as those that are frequently opened for writing, it's perfectly possible to have backgrounded compression. Basically you don't compress the file until a later point in time; you store it uncompressed on the hard disk and mark it as such. It's a separate house-keeping job to actually compress the files (when they are 'stable' for some heuristically defined stability function, e.g. closed >10 seconds ago, or yesterday, or whatever).
Another helpful technique is to chunk the files, so that you only ever have to seek from the nearest chunk boundary. This simply shrinks the amount of data that suffers from the seek problem, and often (though _not_ always) reduces the compression ratio. (e.g. a large executable could possibly be made smaller, as the code and initialised data could end up in different chunks, and the compression model could only need to adapt to one type of data.)
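The chunking idea can be sketched with zlib: compress fixed-size chunks independently and keep an index, so a read only decompresses from the nearest chunk boundary. (Chunk size, names, and the in-memory list standing in for an on-disk seek table are all invented for illustration.)

```python
import zlib

CHUNK = 4096  # fixed uncompressed chunk size

def compress_chunks(data):
    """Compress each fixed-size chunk independently; the list index
    doubles as the seek table."""
    return [zlib.compress(data[i:i + CHUNK])
            for i in range(0, len(data), CHUNK)]

def read_at(chunks, offset, size):
    """Random read: decompress only the chunks covering the range,
    never the whole file."""
    first, last = offset // CHUNK, (offset + size - 1) // CHUNK
    buf = b"".join(zlib.decompress(chunks[i])
                   for i in range(first, last + 1))
    start = offset - first * CHUNK
    return buf[start:start + size]

data = b"highly compressible log line\n" * 2000
chunks = compress_chunks(data)
assert read_at(chunks, 10000, 29) == data[10000:10029]
```

The trade-off is exactly the one described: smaller chunks mean cheaper seeks but a worse compression ratio, since each chunk's dictionary starts from scratch.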
Note, however, that we are not talking about _a_ compression plug-in. We are talking about compression _plug-ins_ (i.e. plural). You can choose your plug-in depending on the requirements.
e.g. an FS for infrequent writes and frequent whole-file reads, such as a document management server, could use a 'slow' compression, fast decompression, no-seek algorithm.
From purely personal opinion, I believe that with current HD transfer rates, giving the CPU the decompression task is better than reading bigger files. However, I'd not swear to that until I've played around with it and tested it thoroughly, with all my favourite file types. (I tend to have 500MB highly-compressible log files from what I do, so this really pushes my buttons!)
I do go on sometimes...
THL.
--
In a new version of Evolution there is an option for an 'IMAP+' server type. What is IMAP+?
In short, it lets you access your mail on a remote server. It does not download the mail to the local client so that any changes made on any client are reflected on the server and therefore globally accessible. This is useful for people who check mail on multiple devices. See IMAP on Wikipedia for more details.
IMAP+, or IMAPX, is just Evolution's improved implementation of the IMAP protocol in its client. You can read here and here for information about what is different.
The items which I am working on are:
- Store operations (folder delete/create etc.)
- Preference options
- Connection manager to allow concurrent folder access (configurable one)
- Smart background message caching
- Multiple namespace support
18 July 2012 17:07 [Source: ICIS news]
LONDON (ICIS)--The European polyethylene terephthalate (PET) market is short and prices are still firming, sources said on Wednesday.
"Everyone seems to be sold out because demand is up … I'm looking for a European cargo and it's difficult," a reseller said.
Offers are reported around €1,200/tonne ($1,481/tonne) FD (free delivered) and higher, which for a few customers is an increase of €100/tonne compared with early July business, according to data from ICIS.
A second reseller reported having sold four prompt trucks just over €1,260/tonne. "That's all we could get for him," he said.
At the end of June/beginning of July, prices plummeted from highs of €1,210/tonne to lows of €1,100/tonne in a week, as sellers panicked in what was an unusually quiet market. Up until now, there have been no signs of the traditional peak PET bottling season kicking in.
Once news of spikes in the value of upstream Asian paraxylene (PX) reached the European market and those sellers that dropped their PET prices had realised they overreacted, material was no longer freely available. There was suddenly more buying interest as customers who have survived hand-to-mouth for months, tried to pre-buy ahead of their holidays and likely price increases.
"Everyone seems to have sold all their material," a buyer said, adding that general market sentiment points to price increases in August.
European industry sources are now questioning how sustainable higher PET prices are, and whether or not demand will be sufficient to support the higher prices currently on offer. Discussions continue.
($1 = €0.81)
Follow Caroline on Twitter for tweets on the
http://www.icis.com/Articles/2012/07/18/9579333/europe-pet-market-tightens-forcing-prices-up.html
This documentation is archived and is not being maintained.
Constructor Usage Guidelines
.NET Framework 1.1
The following rules outline the usage guidelines for constructors:
- Provide a default private constructor if there are only static methods and properties on a class. In the following example, the private constructor prevents the class from being created.
NotInheritable Public Class Environment
   ' Private constructor prevents the class from being created.
   Private Sub New()
      ' Code for the constructor goes here.
   End Sub
End Class

[C#]
public sealed class Environment
{
   // Private constructor prevents the class from being created.
   private Environment()
   {
      // Code for the constructor goes here.
   }
}
- Minimize the amount of work done in the constructor. Constructors should not do more than capture the constructor parameter or parameters. This delays the cost of performing further operations until the user uses a specific feature of the instance.
- Provide a constructor for every class. If a type is not meant to be created, use a private constructor. If you do not specify a constructor, many programming languages (such as C#) implicitly add a default public constructor. If the class is abstract, the compiler adds a protected constructor instead.
Be aware that if you add a nondefault constructor to a class in a later version release, the implicit default constructor will be removed, which can break client code. Therefore, the best practice is to always explicitly specify the constructor, even if it is a public default constructor.
- Provide a protected (Protected in Visual Basic) constructor that can be used by types in a derived class.
- Do not provide a constructor without parameters for a value type (struct). Note that many compilers do not allow a struct to have a constructor without parameters. If you do not supply a constructor, the runtime initializes all the fields of the struct to zero. This makes array and static field creation faster.
- Use parameters in constructors as shortcuts for setting properties. There should be no difference in semantics between using an empty constructor followed by property set accessors, and using a constructor with multiple arguments. The following three code examples are equivalent:
' Example #1.
Dim SampleClass As New Class()
SampleClass.A = "a"
SampleClass.B = "b"

' Example #2.
Dim SampleClass As New Class("a")
SampleClass.B = "b"

' Example #3.
Dim SampleClass As New Class("a", "b")

[C#]
// Example #1.
Class SampleClass = new Class();
SampleClass.A = "a";
SampleClass.B = "b";

// Example #2.
Class SampleClass = new Class("a");
SampleClass.B = "b";

// Example #3.
Class SampleClass = new Class("a", "b");
- Use a consistent ordering and naming pattern for constructor parameters. A common pattern for constructor parameters is to provide an increasing number of parameters to allow the developer to specify a desired level of information. The more parameters that you specify, the more detail the developer can specify. In the following code example, there is a consistent order and naming of the parameters for all the SampleClass constructors.
Public Class SampleClass
   Private a, b, c As String
   Private Const defaultForA As String = "default value for a"
   Private Const defaultForB As String = "default value for b"
   Private Const defaultForC As String = "default value for c"

   Public Sub New()
      MyClass.New(defaultForA, defaultForB, defaultForC)
      Console.WriteLine("New()")
   End Sub

   Public Sub New(a As String)
      MyClass.New(a, defaultForB, defaultForC)
   End Sub

   Public Sub New(a As String, b As String)
      MyClass.New(a, b, defaultForC)
   End Sub

   Public Sub New(a As String, b As String, c As String)
      Me.a = a
      Me.b = b
      Me.c = c
   End Sub
End Class

[C#]
public class SampleClass
{
   private string a, b, c;
   private const string defaultForA = "default value for a";
   private const string defaultForB = "default value for b";
   private const string defaultForC = "default value for c";

   public SampleClass() : this(defaultForA, defaultForB, defaultForC) {}

   public SampleClass(string a) : this(a, defaultForB, defaultForC) {}

   public SampleClass(string a, string b) : this(a, b, defaultForC) {}

   public SampleClass(string a, string b, string c)
   {
      this.a = a;
      this.b = b;
      this.c = c;
   }
}
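For comparison only (this sits outside the .NET guidelines themselves), the same consistent-defaults idea can be sketched in Python, where default parameter values stand in for the telescoping constructors; the class and values below are hypothetical:

```python
class SampleClass:
    # Hypothetical defaults, mirroring the .NET example above.
    DEFAULT_A = "default value for a"
    DEFAULT_B = "default value for b"
    DEFAULT_C = "default value for c"

    def __init__(self, a=DEFAULT_A, b=DEFAULT_B, c=DEFAULT_C):
        # Consistent ordering and naming of parameters: a, b, c.
        self.a = a
        self.b = b
        self.c = c
```

Callers supply as many leading parameters as they need; the rest fall back to the defaults, exactly as in the chained-constructor pattern.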
See Also
Design Guidelines for Class Library Developers | Class Member Usage Guidelines
https://msdn.microsoft.com/en-us/library/3f80506d
On the Destination host, start a listener:
nc -l myhost.acme.com 3872
and make sure you are actually listening:
netstat -an | grep 3872
tcp 0 0 10.33.80.121:3872 0.0.0.0:* LISTEN
On the Source host:
echo ciao | nc myhost.acme.com 3872
and the "ciao" should appear on Destination and the nc should exit.
If you don't have nc installed, there are alternatives to nc:
wlst or python:
import socket
HOST = 'myhost.acme.com'
PORT = 3872
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('Hello, world')
data = s.recv(1024)
s.close()
or simply run
telnet myhost.acme.com 3872
To receive data, run Java or python:
from java.net import ServerSocket
ss = ServerSocket(3872)
ss.accept()
The great advantage of nc is that you can bind to any IP on the source host:
nc -s "your_ip_here"
To check if nc could actually connect, do:
echo ciao | nc....
echo $?
1 means "unable to connect", 0 means "connected"
echo a | nc -s "10.26.20.116" -w 1 10.51.87.24 1722 ; echo $?
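If you'd rather script the check without nc at all, the same exit-code convention can be sketched with the Python standard library (check_port is a hypothetical helper; the host and port in the comment are placeholders):

```python
import socket

def check_port(host, port, timeout=2, source_ip=None):
    """Return 0 if a TCP connection succeeds, 1 otherwise (mirrors nc's exit code)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        if source_ip:
            s.bind((source_ip, 0))   # like nc -s: choose the source address
        s.connect((host, port))
        return 0                     # connected
    except socket.error:
        return 1                     # unable to connect (refused or timed out)
    finally:
        s.close()

# e.g. check_port("10.51.87.24", 1722) -> 0 when the firewall rule is open
```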
A script to check firewall could very well be:
#!/bin/bash
# This script is to check that a firewall rule is operational
# Author name : Pierluigi Vernetto

function checkFirewall {
    sourceIPsArray=$(echo $sourceIPs | tr "," "\n")
    destinationIPsArray=$(echo $destinationIPs | tr "," "\n")
    for sourceIP in $sourceIPsArray
    do
        for destinationIP in $destinationIPsArray
        do
            echo a | nc -s "$sourceIP" -w 2 $destinationIP $port
            if [[ $? -eq 0 ]]
            then
                echo $sourceIP $destinationIP $port success
            else
                echo $sourceIP $destinationIP $port failure
            fi
        done
    done
}

sourceIPs=10.56.218.91,10.56.218.93,10.56.218.90,10.56.218.94,10.56.218.92
destinationIPs=10.56.128.10,10.56.128.8,10.56.128.9
port=1522
checkFirewall
http://www.javamonamour.org/2013/05/poor-mans-firewall-test.html
VisM.ocx Control Details
This chapter provides reference details for the Caché Direct ActiveX control (VisM.ocx). It discusses the following topics:
Extended connection string syntax
VisM methods (including a comparison of the SetServer() and Connect() methods)
This control is a wrapper for the C++ classes listed in a later chapter.
VisM Extended Connection String Syntax
The Server property, Connect() method, and SetServer() method can all use a connection string, which is a pieced string that uses a colon for the delimiter. Usually it has the following form (as described in Connection Strings and Connection Tags):
"CN_IPTCP:server[port]"
The first piece of this argument, CN_IPTCP, is the connection method, which is always TCP. The second piece is the server name or IP address and port where the Caché superserver is running. For example, you could use the following syntax to set the connection of a VisM named VisM1:
VisM1.Server = "CN_IPTCP:127.0.0.1[57772]"
Runtime Form of the Connection String
For historical reasons, the connection string can have a slightly different form at runtime. Specifically, if you are connected, the connection string has an odd number of pieces, because Caché Direct inserts a third piece to this property, after the superserver information, as follows:
"CN_IPTCP:server[port]:slaveserver[port]"
This new third piece indicates the slave server to which you are connected. It has the same form as the master server piece. If you are not currently connected, this property is empty.
Usernames and passwords cannot contain characters that are used as delimiters in the connection string. These include the colon (":", the $Piece delimiter), and square brackets ("[" and "]", used to separate the port number).
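Because the connection string is simply a colon-delimited pieced string, its structure is easy to illustrate. The helpers below are purely illustrative (they are not part of Caché Direct):

```python
def split_pieces(conn):
    """Split a connection string into its $Piece-style pieces (colon-delimited)."""
    return conn.split(":")

def parse_server_piece(piece):
    """Split a 'server[port]' piece into (server, port)."""
    host, _, rest = piece.partition("[")
    return host, int(rest.rstrip("]"))

pieces = split_pieces("CN_IPTCP:127.0.0.1[57772]")
# pieces[0] is the connection method; pieces[1] is the superserver address.
print(parse_server_piece(pieces[1]))  # ('127.0.0.1', 57772)
```

At runtime, a connected VisM would report an extra third piece (the slave server), which the same helpers would split out as pieces[2].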
Other Forms of the Connection String
The connection string can include the username and password; these are used only if you have enabled the Caché Direct login option. This login option has been rendered obsolete by Caché security and is thus not documented apart from this mention.
Use of this form is discouraged. If you include a password in the connection string, your Caché installation is vulnerable to even simple attacks.
The connection string can include the username and encrypted password as follows:
"CN_IPTCP:server[port]:username:password"
In this case, if you are connected, the connection string would have the following form at runtime:
"CN_IPTCP:server[port]:slaveserver[port]:username:password"
Alternatively, the connection string can include the username and unencrypted password. If you are connecting to a 5.2 or later server, note that Caché requires the password in unencrypted format.
"CN_IPTCP:server[port]:username:@password"
In this case, if you are connected, the connection string would have the following form at runtime:
"CN_IPTCP:server[port]:slaveserver[port]:username:@password"
If your client is 5.2 or later, then the client automatically uses your Windows authorization information (username and password) for Kerberos validation if needed (see Connecting with an Indirect Reference for details on connections that use Kerberos authorization). This supplements the Caché Direct login (rather than replacing it). If Caché security is not turned on, then Caché Direct bypasses the Kerberos checking, but still passes the username/password if they are given. The server then does whatever is switched on at that end.
VisM Properties
The VisM control has a set of properties that are mirrored on the server, as well as other properties.
Mirrored VisM Properties
Caché Direct mirrors the values of certain VisM properties between the client and server, as described in the section Mirrored Properties in the chapter Basics of the VisM Control.
If you are using these properties for one-way communication – and especially if they are large – clear them before returning values from the server. Otherwise, your application will waste communication resources. See the chapter Best Practices. As with all other properties that the client sends to the server, the values must have only text characters; see the section Unicode and Locale Issues in that chapter.
P0 through P9
These properties are mirrored on the client and server. On the client, they appear as properties of the VisM control; on the server, they appear as local variables, having the same values as the corresponding properties on the client.
PLIST
This property is mirrored on the client and server in a different manner. Because the client and Caché have different representations of arrays, Caché Direct uses this property to pass array-like values between the server and client. The property has a different form on the client and server. For complete details, see the section Using PLIST in the chapter Basics of the VisM Control.
VALUE
This property is mirrored on the client and server in the same way as P0, P1, and so on, with one addition: If the value of the Code property begins with a dollar sign or an equal sign, the server prepends “Set VALUE” or “Set VALUE =” to the start of the Code property. This means that the result is returned in the VALUE property in such a case.
Other VisM Properties
This section lists the other VisM properties (the properties that are not mirrored). Note that some of these properties are sent to the server.
Code
Contains the line of ObjectScript code that is sent to the server for execution. As with all other properties that the client sends to the server, this string must have only text characters; see the section Unicode and Locale Issues.
ConnectionState
This property always indicates the state of the connection. It is used in conjunction with the keep alive feature and tells an application whether the client has had a communication failure and, if so, when the connection was broken.
If ConnectionState is zero, the connection is OK or a successful disconnect has occurred. If the property is nonzero, then it indicates the time of day (in seconds since midnight) when the server was lost. (This is the same as the second piece of $Horolog. The day is not indicated; it is presumed to be recent.) This property is a long integer.
Runtime only. Indicates the tag of the CDConnect to which this VisM is connected. If you change this property, you change the tag of the associated CDConnect itself, rather than changing the connection. This property is mainly useful for informational purposes. Note that if this property is an empty string, either there is no connection or there is a connection but no tag is associated with it.
Indicates how long it took Caché Direct to process the last message; this is the time from when the client sent the message to the time when the client received a reply. This property is read-only at runtime.
Error
Contains an error number. If it is zero, no error has occurred. This property is read-only at runtime. See the description of the ErrorName property, next.
ErrorName
A string describing an error that has occurred. If it is empty, no error has occurred. This property is read-only at runtime.
The Error and ErrorName properties are set after every server call. If the call is successful, Error is set to 0 and ErrorName is cleared. If an error is reported from the server, the error number is set into Error and a short description is set into ErrorName. While these are not always fully distinctive or descriptive, they still allow the client portion of the application to inform the user that something has gone wrong and to take some action.
Note that errors reported at this level are errors noticed by the server, usually programming errors such as <SYNTAX> or <UNDEFINED>. Logical errors, inconsistencies, and others noticed by the application code should be reported by the application in its results. There are features in the server that allow an application to return error conditions through the Error and ErrorName properties.
ErrorTrap
Controls the handling of communication errors. There are two classes of errors that may occur in a Caché Direct application: errors in the communications process itself, and errors that occur in the application and are reported back to the client.
Application errors are always reported through the Error and ErrorName properties and the OnError event.
Communication errors are reported differently, depending on the value of ErrorTrap. If ErrorTrap is False, communications errors are handled with a message box, warning the user of a problem. If ErrorTrap is True, communications errors are reported through the Error and ErrorName properties and the OnError event. The application can then handle them any way you choose.
For historical reasons, the default value for ErrorTrap is False. You should usually set ErrorTrap to True before trying to connect to Caché from the VisM.
This matters only if the application is running without a user or if you want the application to handle such errors automatically.
ExecFlag
A switch that controls when the line of code in the Code property is executed. Its default value is 0, indicating that the client is idle and not sending messages to the server. Possible values:
As soon as you set ExecFlag to 1, the server executes the line of code in the Code property once (in the context of the P0-P9, VALUE, and PLIST properties). After the server returns, it resets ExecFlag to 0. (You might find the Execute() method more convenient than this setting.)
If ExecFlag is set to 2, this means “execute on reference.” That is, any reference to the VALUE property is preceded by an automatic call to the server to execute the code in the Code property. This is useful if the Code property is an expression that represents the current state of something on the server and that you would like to execute again every time you need it. For example, if the Code property were "=$$GetNext^mydata", then the following Visual Basic code could be used to retrieve an array of data from the server:
For i = 1 To 1000
    array(i) = VisM1.VALUE
Next i
If ExecFlag is set to 3, this means “execute on interval timer.” In this case, a timer (with the interval given by the Interval property), causes the Code to be executed each time the timer goes off.
A common use of the timer option is to do something periodically and use the Executed event to respond after each execution. To use the timer option, use the following overall flow:
Set ExecFlag to 0.
Set values for all the relevant mirrored properties and for the Interval property.
Then set ExecFlag to 3 to switch on the timer.
Remember that all communication with the server is synchronous. The client must receive the reply to the current message before sending the next message. Using timers can occasionally cause the client to try to send a nonsynchronous message. For example, a user might perform an operation that generates a message while a timer-generated message is in progress. In this case, the client will receive a “nonsynchronous message” error, and the message will not be sent.
Interval
The number of milliseconds between automatic execution of the Code property. The default value is 1000 milliseconds (= 1 second). See the ExecFlag property for value 3.
Specifies the interval between automatic keep alive messages from the client. It is an integer number of seconds. These messages are sent whenever the client is otherwise idle and the interval has expired.
Specifies how long the client waits for a reply from the server, after sending a keep alive message to the server.
Used for debugging on the client side. This is a 32-bit integer property, with each bit assigned as a flag for a particular type of logging. If logging is on at any time during the run of a process, a text log file will be created in the same directory from which the executable is run. Its name will be CDxxx.log, where xxx is the next available sequential number, starting at 000. (So, the first time a log is created, it will be CD000.log.) The log is closed when the process exits.
To enable client-side logging, turn all the bits on by setting the value of this property to 2,147,483,647 (2^31 – 1). In Visual Basic, you can use &H7FFFFFFF, which is the hexadecimal representation of the same number. To turn off logging, set the value to 0.
The contents of the log are best interpreted by InterSystems personnel, but they include a trace of most of what happened to the VisM and a full dump of all the messages sent to and received from the server. It tends to err on the side of too much information rather than too little. If it is needed, it can be very helpful as a real-time record of what actually happened.
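A quick sanity check that the decimal, power-of-two, and hexadecimal forms of the all-bits-on value agree:

```python
# 2,147,483,647 is 2**31 - 1, i.e. all 31 low bits of a signed 32-bit integer set.
assert 2147483647 == 2**31 - 1 == 0x7FFFFFFF
print(hex(2**31 - 1))
```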
Has the same purpose as the Server property and can be set to any of the same values as that property. This property is provided only for backward compatibility and should not be used in new applications. See the appendix Notes for Users of the Previous Versions.
MsgText
A four-piece string that specifies the message to display when asking the user whether to cancel (see the PromptInterval property). This property must be a string of the form:
"prompt_message|title_text|OK_button_text|Cancel_button_text"
The message dialog box that is displayed has a window title (as given by title_text) and longer message (as given by prompt_message). The dialog box also has two buttons that have text labels.
If the user clicks the button that is labeled with OK_button_text, the dialog box is closed and the query is not interrupted.
If the user clicks the button that is labeled with Cancel_button_text, the dialog box is closed and the query is interrupted.
The default value of this property is as follows:
"This may take a while. Do you wish to wait?| Communications|Wait for Reply|Cancel Wait"
When you set this property, you can omit any piece. The client uses the default for any string piece that you omit or that you set to a zero length string.
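The fallback behavior for omitted pieces can be mimicked with a small helper (illustrative only; build_msgtext is not part of the VisM API):

```python
# Default pieces, as given above (note the leading space in the title piece).
DEFAULT_PIECES = ["This may take a while. Do you wish to wait?",
                  " Communications", "Wait for Reply", "Cancel Wait"]

def build_msgtext(prompt="", title="", ok="", cancel=""):
    """Join the four pieces with '|'; empty pieces fall back to the defaults,
    mimicking what the client does with omitted or zero-length pieces."""
    supplied = [prompt, title, ok, cancel]
    return "|".join(p if p else d for p, d in zip(supplied, DEFAULT_PIECES))

print(build_msgtext(cancel="Abort"))
</imports>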
NameSpace
Establishes the namespace context of the routines and globals referenced by the application code. The default value is the empty string. In that case, the routines and globals are referenced in the namespace in which the server is running. When the execution message arrives at the server, if the NameSpace property is not empty and is different from the current namespace, the namespace is changed as indicated. This occurs before the code is executed.
Remember that there is a significant cost to changing the namespace; see the chapter Best Practices.
The delimiter string used with the PLIST property; see the section Mirrored VisM Properties. It is read/write at runtime, which means it cannot be set at design time. For historical reasons, the default value is the string $C(13,10). If it is set to the empty string, there is effectively no delimiter and PLIST is taken as a single element. Note that this property is not sent to the server.
PromptInterval
Specifies how long to wait, in seconds (not milliseconds), before asking whether the user wants to keep waiting or cancel the activity (which is typically a long query). This prompt is displayed only if the server has not yet replied. This prompt would give the user the option of waiting longer or of canceling the activity. The MsgText property specifies the text of the message to display in this case. If the property is zero or negative, the user is never prompted; the default is zero.
Server
This property serves two purposes.
You set the property to connect to a particular server or to disconnect from the currently connected server.
At runtime, you can get the property to see what server the client is connected to; in this case, the property value has a slightly different form.
You can set this property to a connection string, a connection tag, a quoted question mark, or an empty string. For details on connection strings, see the section VisM Extended Connection String Syntax, earlier in this chapter.
Setting the Server property has the same effect as calling the SetServer() method. See the section Comparison of Connection Methods, later in this chapter.
Not used by Caché Direct. This property exists for compatibility with Microsoft Visual Basic conventions. You may use it any way you wish.
Timeout
The integer number of milliseconds that the client will wait for a reply from the server. The time is measured from just after the message is sent through TCP to when Windows reports that data has been received. When the timer goes off, meaning that no reply has been received within the allowable time, the connection is broken. This normally causes a TCP error on the server, causing it to shut down. The client then regains control with an error condition that can be handled. If the application wishes to proceed, it should create a new CDConnect, which will create a new slave server job with access to the globals (naturally), but none of the local state of the old server job.
If this property is negative or zero, the client will wait forever for a return message. The default value is 60000 (60 seconds).
VisM Methods
The VisM control provides the following methods:
Connect()
Connects this VisM to the specified Caché server, creating a new CDConnect if needed. Use any of the following syntaxes:
Connect(connection_string, tag)
Connect(connection_string)
Connect("?", tag)
Connect("?")
Connect(tag)
Connect("")
For details on the behavior, see the subsection Comparison of Connection Methods.
Disconnect()
Disconnects from and destroys the CDConnect connected to this VisM, and shuts down the server job.
Execute()
This method is a shortcut way of setting the Code property and calling the server. It is exactly equivalent to saving the Code property, setting the Code property to the argument to the Execute method, setting the ExecFlag property to 1 to cause execution, and then restoring the Code property to what it was before the call. All error trapping and execution of the OnError and Executed events occurs in the same way.
Obsolete. Do not use.
Obsolete. Do not use.
Has the same purpose as the SetServer() method. This method is provided only for backward compatibility and should not be used in new applications. See the appendix Notes for Users of the Previous Versions.
SetServer()
Closes the existing connection for this VisM and creates a new connection, as specified. In contrast to the Connect() method, the SetServer() method can change the channel of an existing CDConnect. Use any of the following syntaxes:
SetServer(connection_string, tag)
SetServer(connection_string)
SetServer("?", tag)
SetServer("?")
SetServer(tag)
SetServer("")
For details on the behavior, see the subsection Comparison of Connection Methods, next.
Comparison of Connection Methods
The following table describes the actions of the SetServer() and Connect() methods.
Setting the Server property has the same effect as calling the SetServer() method.
Note that multiple VisMs sharing a single CDConnect should not try to communicate simultaneously. If they do, you will get a "nonsynchronous communication error" message.
VisM Events
Executed
This event is fired after every attempt to execute code on the server, whether successful or not. If an error occurred, the OnError event is fired before the Executed event.
OnError
This event is fired any time the server reports an error to the client (that is, whenever the Error and ErrorName properties are set to non-empty values). If the error occurred while trying to execute some code, OnError is fired before the Executed event.
This event is fired if any server message times out or if the server indicates it is in the process of shutting down. Your application can use this to inform the user, perform a graceful shut down, or attempt a reconnection. The integer argument to this event is the value of the ConnectionState property, the time when the server was lost.
Once a connection has been lost, further attempts to send a message result in a <ServerLost> error.
https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=GBCD_vism
CARM User's Guide
Discontinued
#include <stdlib.h>
void init_mempool (
void *p, /* start of memory pool */
unsigned int size); /* length of memory pool */
The init_mempool function initializes the memory management routines and provides the starting address and size of the memory pool. The p argument points to a memory area in xdata which is managed using the calloc, free, malloc, and realloc library functions. The size argument specifies the number of bytes to use for the memory pool.
Note
None.
calloc, free, malloc, realloc
#include <stdlib.h>
unsigned int malloc_mempool [0x400]; // 4KB memory pool
void tst_init_mempool (void) {
int i;
void *p;
init_mempool (&malloc_mempool, sizeof(malloc_mempool));
p = malloc (100);
for (i = 0; i < 100; i++)
((char *) p)[i] = i;
free (p);
}
http://www.keil.com/support/man/docs/ca/ca_init_mempool.htm
Slack was founded in 2014 and is being touted as the fastest-growing business application in history. It currently has over 50,000 paying companies using its product—including my current place of employment.
Slack has distinguished itself from predecessors that were more focused on being a messaging system with some voice and video capabilities. Slack has pushed the envelope and is working diligently on building a very powerful App Directory. The App Directory contains hundreds of integrations that can provide an incredible amount of flexibility to improve your company's efficiency.
The goal of this article is to demonstrate how you can build your own unique integration with Slack, leveraging Python for the logic.
To keep the focus on the mechanics of a Slack Bot's basics, I will build a custom bot.
Why Build a Slack Bot?
Even though Slack has a world-class App Directory, each business has unique business processes. This leaves many companies in a situation where they simply cannot find the perfect application for them.
This is where building your own Slack Bot comes into play.
A Slack Bot's job is to receive and handle events generated by your team's Slack interaction. Slack provides at least 50 different event types, such as:
message: A message was sent to a channel.
team_join: A new member has joined the team.
member_joined_channel: A user joined a public or private channel.
And of course, there are many more event types that your bot can optionally handle and react to. For example, the team_join event is a fantastic event that can begin an entire onboarding process.

This article will demonstrate how a Slack Bot interacts with the message event to perform specific actions when team members interact with the bot.
Picking the Right Slack Bot Type
In this article, I will create a Python application and a Slack Bot that can be added to your team project to respond to the message event.
To begin, I need to create a Bot on Slack. Two types of bots can be created:
- a custom bot
- creating an application and adding a bot user
This article will create a custom bot because an application bot user would be more appropriate if I were planning to write and publish an application on Slack. Given that I wish this bot to be private to my team, a custom bot will suffice.
Setting Up Your Python Application
According to Slack's official documentation for the Slack Developer Kit for Python, it currently supports version 2.7. It does make mention that version 3 will be supported in the near future.
I've already got version 2.7 installed, so I will stick with that version of Python for now. If your Python installation is already set up and ready to go, you can move on to the next step; otherwise, please visit the Python Download page and download the appropriate version for your operating system.
To install the Slack Developer Kit, I will use PyPI to simplify the installation of the Slack Client. On some operating systems, the pip command can be run directly from a command prompt. If not, you would need to reference the entire path to the pip program.
For example, on my Windows system, I ran the following command to install the Slack Client from a command prompt (referencing the full path to the pip.exe file):
\Python27\Scripts\pip install slackclient
Next, you will want to choose a location to house your application. I enjoy using GitHub, so I created a main python folder that contains all of my different Python applications. Inside this folder, I made a sub-folder appropriately called slackbot.

Once I've chosen where my application will be hosted, I'm going to store the core of my application in a file called slackbot.py.
Your First Slack API Call
It's time to lay fingers to the keyboard and connect to Slack and make our first API call. Let's get right to the code, and I'll explain what's happening after:
from slackclient import SlackClient

slack_client = SlackClient("xoxb-*******************")

api_call = slack_client.api_call("users.list")
if api_call.get('ok'):
    users = api_call.get('members')
    for user in users:
        print user.get('name')
The code begins with importing the Slack Client library, followed by instantiating the SlackClient class with your Slack Bot's API Token that you saved earlier. Be sure to replace the example token in this example with your token.

The SlackClient object is stored in a local variable called slack_client that will be used to interact further with Slack.
Using the slack_client, an API call is made to retrieve a list of your team's users. If the API call succeeded, the list of team members is stored in the users variable. The users variable is a list; using a for loop, the code prints each team member's name to the console.
Slack supports several different types of interactions with the system. The first, which we just completed, made an API call. Slack offers several other APIs: the Web API, Events API, Conversations API, Real Time Messaging API, and SCIM API. The call we made to retrieve the list of users leveraged the Web API.
In the next example, I will demonstrate how to use the Real Time Messaging System. Once we begin building the final bot, the Conversations API will be used to send messages in response to the commands our bot will respond to.
Connecting to the Real Time Messaging System
The RTM system provides a lot of power because Slack sends events that your application can handle and respond to immediately. Of course, there are so many events that your bot may not need to handle every event. To demonstrate the many different events that occur simply upon connection, the following example will output each event that is received.
Let's immediately look at the code to connect and begin receiving Slack events:
from slackclient import SlackClient
import time

slack_client = SlackClient("xoxb-****************")

if slack_client.rtm_connect(with_team_state=False):
    print "Successfully connected, listening for events"
    while True:
        print slack_client.rtm_read()
        time.sleep(1)
else:
    print "Connection Failed"
Just like the previous example, this code begins by importing the Slack Client library and instantiating the SlackClient class with the same API Token as before. This example also imports the Time library, which is used later in the code.

With the SlackClient successfully created, the next line of code calls the rtm_connect method inside an if statement. If the connection fails for some reason, an error message is output to the console. When it succeeds, a success message is printed to let us know that we are connected and ready to begin interacting with Slack events.

An endless while loop is then started. Inside this loop, I call the rtm_read method of the Slack Client library.
The results of this call are logged to the console. After this occurs, the application sleeps for 1 second before reading the next potential event from Slack. Below is an example of what it looks like reading events upon first connection:
Successfully connected, listening for events
[]
[{u'type': u'hello'}]
[{u'url': u'wss://lbmulti-yeo0.lb.slack-msgs.com/websocket/Rm8R-Q0PLxK_8UQmBo0Apru-AtL7qnICzeNazVUDQGUCnIY8N51kO07ZUw37jZc4KvXJlu4c1EWDNwTtrXkLzkwn0GBmak_RATHLSFVCCCcht0YLqlgZAS0-6cb1marGhznvmnQStgdW6rd3yub0CpCzmJdgIkRPgIOIB2JurYA=', u'type': u'reconnect_url'}]
[{u'type': u'presence_change', u'user': u'U6RM1S17T', u'presence': u'active'}]
[]
[]
When the bot is connected, three events are sent by Slack as seen above. Because this is in a while loop, when there is no event, it receives an empty array as seen above with the empty brackets [].
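To make the polling behavior concrete, here is a hedged sketch that replays canned batches like the ones shown above; a real loop would get each batch from rtm_read() instead of a list.

```python
# Canned event batches mimicking successive rtm_read() polls: most polls
# return an empty list, and connection events arrive in the first few.
batches = [
    [],
    [{'type': 'hello'}],
    [{'type': 'presence_change', 'user': 'U6RM1S17T', 'presence': 'active'}],
    [],
]

seen_types = []
for batch in batches:
    if not batch:
        continue  # empty poll: nothing happened this second
    for event in batch:
        seen_types.append(event['type'])

print(seen_types)
```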
Now that we have a basic understanding of making an API call and connecting to Slack's Real Time Messaging system, it's time to build out a fully functional Slack Bot.
My Slack Bot will listen to events using the RTM system. When it receives a message event that is directed to my bot, my application will respond back to the user with a response to the command that was received.
Building the Slack Bot
Building a full bot requires quite a bit of code. To help organize and simplify the final code, I am going to split the functionality into three classes: Bot, Event, and Command. These classes are designed to be extended by your own application to improve the functionality of your own bot. Let's explore the purpose of each of the three classes:
- The Bot class will be responsible for connecting to Slack and will begin the while loop to listen for events.
- The Event class is responsible for reading the events received from Slack, parsing them out to only deal with message events that are aimed directly to our bot. When a message is received, it will call the Command class and send an API call with the response from the Command class.
- The Command class will receive the text from the event and provide a customized message based on the command received. This message is then returned to the Event class to be sent to the originating channel of the message.
Initializing the Slack Bot
I previously mentioned that my Python application entry point is placed in the slackbot.py file. This file contains the bare minimum to get the application rolling, which is to instantiate the Bot class that will handle the rest of the process:
import bot

bot.Bot()
Creating the Bot Class
The Bot class contains the heart of the bot's configuration and setup. Let's look at the entire Bot class that I've placed inside a bot.py file:
import time
import event
from slackclient import SlackClient

class Bot(object):
    def __init__(self):
        self.slack_client = SlackClient("xoxb-*****************")
        self.bot_name = "jamietest"
        self.bot_id = self.get_bot_id()

        if self.bot_id is None:
            exit("Error, could not find " + self.bot_name)

        self.event = event.Event(self)
        self.listen()

    def get_bot_id(self):
        api_call = self.slack_client.api_call("users.list")
        if api_call.get('ok'):
            # retrieve all users so we can find our bot
            users = api_call.get('members')
            for user in users:
                if 'name' in user and user.get('name') == self.bot_name:
                    return "<@" + user.get('id') + ">"
        return None

    def listen(self):
        if self.slack_client.rtm_connect(with_team_state=False):
            print "Successfully connected, listening for commands"
            while True:
                self.event.wait_for_event()
                time.sleep(1)
        else:
            exit("Error, Connection Failed")
The file begins by importing the necessary libraries: time, event, and SlackClient. The event library will be created next.
With the libraries imported, the Bot class is created. The class's constructor, inside the __init__ function, sets up a few variables that will be used throughout the remainder of the code: the slack_client, the bot_name, and the bot_id.
The name of the bot is used to find the ID of the bot. The ID will be used later to parse out events that are aimed directly to the bot. If the application cannot find the bot, the application is exited with an error as it cannot proceed without the ID.
The Event class is then instantiated to be used a bit later. The final thing the constructor does is call the listen function, which connects to the RTM system and begins the endless loop waiting for events that the bot will handle.
The next function, get_bot_id, is quite similar to the first example that loops through the users, this time finding our bot's ID by matching its name in the list of users and returning the ID. In the event that the bot cannot be found, None is returned, which causes the constructor to exit because it was unable to find the bot.
The final function in the Bot class is the aforementioned listen function. It looks very similar to the second example, where we first connected to Slack's RTM system. The key difference is that it calls the wait_for_event function, which will be explored next in the Event class.
This completes the Bot class, making it responsible for creating the SlackClient and starting the endless loop waiting for events. It, however, does not do anything with those events, leaving that responsibility to the Event class.
The Event Class
The purpose of the Event class is to read any events returned from Slack's RTM system. Each event received will be examined for a message containing a reference to the Bot's ID. The following is the Event class that I've placed inside an event.py file:
import command

class Event:
    def __init__(self, bot):
        self.bot = bot
        self.command = command.Command()

    def wait_for_event(self):
        events = self.bot.slack_client.rtm_read()

        if events and len(events) > 0:
            for event in events:
                #print event
                self.parse_event(event)

    def parse_event(self, event):
        if event and 'text' in event and self.bot.bot_id in event['text']:
            self.handle_event(event['user'], event['text'].split(self.bot.bot_id)[1].strip().lower(), event['channel'])

    def handle_event(self, user, command, channel):
        if command and channel:
            print "Received command: " + command + " in channel: " + channel + " from user: " + user

            response = self.command.handle_command(user, command)
            self.bot.slack_client.api_call("chat.postMessage", channel=channel, text=response, as_user=True)
This class begins by importing the final class that will be explored, the Command class. The constructor of the Event class receives a single parameter: a reference to the Bot object. This is stored in a variable that can be accessed by the other functions in this class. Inside the __init__ function, another variable is created that instantiates the previously imported Command class.
The next function, wait_for_event, is the function that was called by the Bot class's listen function. It reads any events that have been received from Slack's RTM system. The rtm_read() function returns an array of events. The wait_for_event function checks whether the array contains any events; if it does, it loops through them and calls the Event class's internal parse_event function.
The parse_event function receives the event as input. It checks for a property in the event called text. If this property exists, it then checks that the text property contains a reference to our bot's ID. When this condition is true, the function calls the final function in this class, handle_event.
Before calling the handle_event function, the text property is split with Python's split function, using the bot's ID as the separator. This converts the text property into an array. The first element is the text that appeared before the bot's ID; the second element contains the remainder of the message. This second element is passed to the handle_event function as the command.
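The split described above can be demonstrated in isolation; the bot ID below is a made-up placeholder, since a real one comes from the users.list call.

```python
BOT_ID = "<@U999BOT>"  # hypothetical bot ID for illustration

def extract_command(text, bot_id=BOT_ID):
    """Return the lower-cased command that follows a mention of the bot."""
    if bot_id not in text:
        return None
    # Element [0] is the text before the mention; [1] is the remainder.
    return text.split(bot_id)[1].strip().lower()

print(extract_command("<@U999BOT>   Jump"))  # -> jump
```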
The final function, handle_event, accepts three properties: the user who sent the message, the command that was sent, and the channel it was sent in. The handle_event function ensures the command and channel contain valid values. When they do, a friendly debug message is output to the console indicating what command was received, the channel it was sent in, and which user sent it.
After the debug message, the handle_event function calls the main function of the Command class, then makes an API call that posts the Command class's response to the channel that initiated the event.
Let's now look at the Command class to see how it generates a custom response based on the command received from the user.
The Command Class
To complete our bot, it's time to create the final class, Command, in an aptly named command.py file:
class Command(object):
    def __init__(self):
        self.commands = {
            "jump" : self.jump,
            "help" : self.help
        }

    def handle_command(self, user, command):
        response = "<@" + user + ">: "

        if command in self.commands:
            response += self.commands[command]()
        else:
            response += "Sorry I don't understand the command: " + command + ". " + self.help()

        return response

    def jump(self):
        return "Kris Kross will make you jump jump"

    def help(self):
        response = "Currently I support the following commands:\r\n"

        for command in self.commands:
            response += command + "\r\n"

        return response
I really like how this class turned out because it provides a solid foundation that is easily extendable to handle many more commands than I outlined above.
The constructor of the Command class creates a dictionary whose keys are command names and whose values are the functions executed when the corresponding command is received from the Event class. In this abbreviated example, the commands dictionary contains two commands: jump and help. The dictionary can be extended to include any other commands you wish your bot to handle.
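Because dispatch is a plain dictionary lookup, adding a command needs only a new entry pointing at a callable. The trimmed-down class and the "flip" command below are illustrative sketches, not part of the article's full code.

```python
class Command(object):
    def __init__(self):
        # Command name -> handler function.
        self.commands = {"jump": self.jump}

    def handle_command(self, user, command):
        if command in self.commands:
            return "<@" + user + ">: " + self.commands[command]()
        return "<@" + user + ">: unknown command"

    def jump(self):
        return "Kris Kross will make you jump jump"

cmd = Command()
# Wiring in a brand-new command at runtime is a one-liner:
cmd.commands["flip"] = lambda: "table flipped!"

print(cmd.handle_command("U123", "flip"))  # -> <@U123>: table flipped!
```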
The next function, handle_command, is called when the Event class's handle_event function receives a message directed at our bot.

The handle_command function accepts two parameters: the user who sent the message and the command. It begins by building a response string that addresses the user who sent the command, then checks that the received command is a valid key in the dictionary of commands defined in the constructor.
When the command is valid, the associated function is called and its return value is appended to the response variable created earlier. If the command does not exist, a message is appended indicating that the command is not valid, along with the output of the help command to show the user which commands this bot supports.
The remaining functions, jump and help, generate the custom response that is sent back to the user who initiated the command.

As mentioned when describing the constructor, the dictionary of commands can be extended with new commands. Each new entry needs an accompanying function, which handle_command then calls automatically.
Testing the Slack Bot
Now that all of the coding is complete, it's time to test our new bot. Start by running the main Python script from a command prompt:

python slackbot.py
This will launch our bot and connect to Slack's Real Time Messaging system. Upon success, our debug message should be printed to the console indicating our Slack bot is ready to receive commands.
To execute a command, our bot needs to be invited into a public or private channel. Once the bot is in the channel, a user can tell the bot to jump or ask it for help. In my case I would say: @jamietest jump. The bot would aptly respond: @endyourif: Kris Kross will make you jump jump.
This bot is not limited to a single channel. Because it parses out the channel from the event message, it can handle commands from many different channels.
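The channel routing can be sketched with a stand-in client that records what a real chat.postMessage API call would send; FakeSlackClient and the channel IDs below are invented for the demonstration.

```python
class FakeSlackClient(object):
    """Records api_call arguments instead of talking to Slack."""
    def __init__(self):
        self.sent = []

    def api_call(self, method, **kwargs):
        self.sent.append((method, kwargs))

def respond(client, channel, text):
    # Mirrors the call handle_event makes: reply to the originating channel.
    client.api_call("chat.postMessage", channel=channel, text=text, as_user=True)

client = FakeSlackClient()
respond(client, "C_GENERAL", "reply for #general")
respond(client, "C_RANDOM", "reply for #random")

print([kwargs['channel'] for _, kwargs in client.sent])  # -> ['C_GENERAL', 'C_RANDOM']
```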
Now it's your turn to give your bot a go and see what you can make it do!
Conclusion
My bot is now complete. I hope I have shown you the power of creating a Slack bot. With the multiple classes (Bot, Event, and Command) each handling a single concern, the Command class can easily be extended to handle many more commands.
To see the full source code, I've created a GitHub Repository.
The sky is truly the limit in how this bot can be extended. Below is a short list of ideas for extending the initial set of classes:
- To add a new command, you would create a new function following the pattern of the jump and help functions inside the Command class. Once the function is created, it needs to be added to the dictionary of available commands.
- Another great way to enhance your bot further is to extend the parse_event function in the Event class. Currently, this function looks only for a message event that contains our bot's ID in the text. It could be extended to look for other events, such as team_join. That event could call a new command (in the Command class) that provides the new team member with your company's onboarding documents and policies.
- Finally, if you are interested in creating a custom application or wish to create your own Slack Commands, you can explore creating a custom application and adding a bot user to the application. Many of the code examples work with either bot type.
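The team_join idea from the list above could be sketched like this; the routing function, BOT_ID, and event shapes are simplified assumptions rather than the article's actual code.

```python
BOT_ID = "<@U999BOT>"  # hypothetical bot ID

def parse_event(event, handled):
    """Route message mentions to commands and team_join to onboarding."""
    if event.get('type') == 'team_join':
        handled.append(('welcome', event['user']['id']))
    elif 'text' in event and BOT_ID in event['text']:
        handled.append(('command', event['text'].split(BOT_ID)[1].strip().lower()))

handled = []
parse_event({'type': 'team_join', 'user': {'id': 'U_NEW'}}, handled)
parse_event({'type': 'message', 'text': BOT_ID + ' help'}, handled)

print(handled)  # -> [('welcome', 'U_NEW'), ('command', 'help')]
```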
I hope you've enjoyed this article on creating a Slack Bot using Python. Use the comments form below to let your fellow readers know how you have extended the examples above to create an extremely robust Slack Bot!
I think the issue is originating from the SBS server. It has a Small Business SMTP connector, forwarding the mail to the other Exchange server, to send ALL outgoing mail out (not just forwarding the shared namespace). No MX records point to the site where the SBS server is. I could delete the Small Business SMTP connector, but I believe it's set up as a normal SMTP connector should be.
You don't need to smart host, unless you're dealing with a dynamic Internet connection for the SBS server.
The default SMTP server should be fine. It's just named that. Nothing magical.
Set an MX record to the SBS server and ensure that reverse DNS is configured as well.

I believe I follow you, and I think I know the answer, but if I don't ask this question, I'm sure I'll set it up wrong.
For the mx records, are you referring to internal DNS, or do I need to get my MX records modified completely with My ISP (like mail03.domain.com is my SBS 2003 server). (I'm assuming its with my ISP I should change, due to the reverse DNS )
I was just hoping not to have to make changes with my ISP, as my connection at the SBS server location is slow and problematic, and I didn't want my mail to be delivered there if my mail01 and mail02 went down. As a side note, I may have to switch who is hosting my MX records to do this, but it can be done to resolve the problem.
Will routing by DNS result in NDR's being created when neither site has the requested e-mail address?
1. The MX record for BOTH domains is going to the site with the Exchange server, correct?
2. The SBS server uses the POP connector to connect to the Exchange server and download the email.
3. The SBS server also smart hosts in sending out email to the Exchange server
My question is, how is the main Exchange server holding the email for the SBS server? Is the SBS server using the POP connector to connect to another email server? Can you confirm this aspect of the setup please?
Do you mind posting the domain names so I can run some DNS tools to get a better handle of your network setup?
You won't have to change ISPs to get an MX record setup, and you shouldn't have to change ISPs to get a simple DNS change. If it's that inflexible, then you should get your domain name registrar to be your DNS host as well (Godaddy, Network Solutions, etc), as they have decent enough control panels that make it easy to make updates. With a reverse DNS, you will need to call the ISP, as they are responsible for setting it up.
If you have a flaky Internet connection to the Exchange server, then you might want to look into a 3rd-party solution that can hold your email for you then push it to your Exchange server (hint: it's a service we offer :-D). However, email should not time out until about 2 or 3 days, depending on the senders email server configuration. So intermittent email outages should not result in dropped mail.
Just to clarify, MX record updates should be done on the authoritative DNS server - that could be your server (but not likely), but probably hosted externally.
As of right now, user accounts are set up on the main exchange server (and the pop3 connector downloads it to the SBS server every 15 minute to basically identical accounts, (with site2.local and the main domain as primary.)) The smart connector from the SBS are set so all mail gets sent to the main exchange server, and sent out (so no reverse DNS issues, but probably the reason for the lack of NDR's when I created the smart connector).
I do know this is not the best way... hence the start of this.
DNS is hosted via ATT (or whatever incarnation they are as of now) who controls our connection to the internet, and yep, I can switch to network solutions for the DNS hosting(just have to explain to the owner but that's somewhat what I meant when I said I could do it).
I believe your solution with the DNS is the best way to go, and I should go that way.... I just want to make sure before I switch my mx records that I'm still going to get the NDR's (that I didn't get when I tried to use smart connectors to solve this) by having the mail go to both locations.
Hi Simon,

On Thu, 11 Mar 2004, Simon Waters wrote:
> Care to name the platform? (I'm just curious).

This was on NetBSD 1.6.

> Better yet can we spot non-preemptive thread library at config time?

I don't know (but it should be possible to create an ugly configure test...)

The context in which I saw the problem is the NetBSD pkgsrc, a framework for packaging and building software. Pkgsrc is formally a NetBSD project, but it works on nearly all Unix-like operating systems. Pkgsrc sets up a unified environment when building packages. This environment includes the pthread library from the operating system, or the GNU PTH thread library for operating systems that do not have native threads. So testing for _POSIX_THREAD_IS_GNU_PTH solves my problem. (I don't think there are many other non-preemptive thread libraries out there...)

Hmm. I'm not sure which end conditions you mean, but the patch below is definitely more responsive...

--- src/search.c.orig	Tue Mar 16 00:04:30 2004
+++ src/search.c	Tue Mar 16 00:07:48 2004
@@ -546,6 +546,10 @@
           SET (flags, TIMEOUT);
         }
     }
+#ifdef _POSIX_THREAD_IS_GNU_PTH
+  else if (NodeCnt & TIMECHECK)
+    sched_yield();
+#endif
   /* The following line should be explained as I occasionally forget too :) */
   /* This code means that if at this ply, a mating move has been found, */

> We still have a known bug in the handling of games being reset whilst
> the machine is thinking, does this cause a problem when you have no
> pre-emptive threads?

I have not seen that problem, but I have OTOH not tried it more than a couple of moves... :)

/Krister
Bill de hOra tries to help answer the question from (the other) Bill by outlining the seven important aspects of Atom for him:
- atom:id
- atom:updated
- atom:link
- the extension rules (mustIgnore, foreign markup)
- the date construct rules
- the content encoding rules
- unordered elements
He then concludes with how these principles are actually much more broadly applicable than Atom. And one of the earliest articles on Atom mentions that the Atom API was designed with several guiding principles in mind:

- Well-defined data model -- with schemas and everything!
- Doc-literal style web services, not RPC
- Take full advantage of XML and namespaces
- Take full advantage of HTTP
- Secure, so no passwords in the clear

Certainly a different start to SOAP. With more and more people embracing Atom for various reasons, it certainly seems like it is the favored child of REST at the moment.
Frankly, the browser makers saw this coming and created the notion of a plugin precisely to allow web technologies to flourish without their having to do all the work. Really, it's analogous in that respect to open source.
I think the W3C has somewhat lost sight of these facts recently. There appears to be some fear that browser maker lag in implementing some of the recommendations equates with the W3C not "leading the web to its full potential." I respectfully disagree. The situation is not nearly so black and white. The web is supposed to work this way. The web cannot afford to be constrained to the limitations of a browser maker hegemony, especially when they don't even really want the job and have given us the tools to catch our own fish, so to speak.. Moreover,..
A few posts ago I promised to get back around to the discussion of the namespace and versioning of XForms 1.1.
In the early days of XForms 1.1 (late 2004), we expected to upgrade the namespace URI as the means of indicating which processor should be used to interpret the XForms vocabulary within a document. There were, after all, not just new elements but also new attributes for existing elements and in some cases slightly adjusted behaviors for the same elements.
Well, a lot can happen in 18 months! For one thing, we originally intended to publish XForms 1.1 as the upgrade to XForms 1.0, but since then we decided to focus more heavily on the excellence of XForms 1.0, resulting in the publication of a significant body of errata. Those have since appeared in the new W3C Recommendation for XForms 1.0 in March of 2006. We have since published some further errata, and I expect we will publish an updated errata list in the next few weeks. This has allowed XForms 1.1 to become much more about the new features and less about the behaviors of existing features.
The second thing that change is that we decided that an important aspect of increasing XForms adoption was to make it easier for web content authors to write. For this reason, the HTML working group communicated the need to import the XForms module directly into the XHTML2 namespace rather than leaving it in the native XForms namespace. To accommodate this requirement, we created the notion of a chameleon namespace in the schema for XForms 1.1. This concept allows us to define XForms in the XForms namespace, but it allows a host language like XHTML to more easily import the XForms vocabulary by substituting its own namespace instead of the default XForms namespace.
Of course, any host language that attempts such an import has to deal with possible name conflicts, and we did encounter some, like the src attribute. We had to do some jiggling of both XHTML2 and XForms to smooth the integration. This is interesting to note because you can't expect XForms to change for every host language. We did this more as a one-off for XHTML to help increase adoption. Generally, XForms remaining in the XForms namespace is preferrable because it is easier for a wider array of XML-based tools (like XSLT) to find the XForms content if it stays in its own namespace. However, there is a special heritage for web content, so adding the chameleon namespace to XForms and making other technical adjustments was part of our effort to cater to backward compatibility needs and ease of authoring needs.
So, to be clear, XForms content should remain in the XForms namespace whenever possible, such as when used with host languages other than XHTML, because there are significant technical benefits to doing so.
Now, how does this relate to the versioning of XForms or the fact that XForms 1.1 will use the same namespaces as XForms 1.0? Well, along separate lines, the XForms working group had received a number of communiques indicating that the change of namespace made it harder to write software that processed XForms content in ways that were not necessarily affected by the vocabulary changes. Basically, the change of namespace was viewed as costly to the community. While mulling over this feedback several months ago, I realized that people who were dissatisfied with our change of namespace URI between XForms 1.0 and XForms 1.1 could simply use the chameleon namespace feature to put the XForms 1.1 markup into the XForms 1.0 namespace. They didn't have to use the feature to put the markup in the namespace of a host language.
This thought was very good news for solving the problem of those who wanted us not to change the namespace of XForms in 1.1. Since the chameleon namespace feature could defeat the intent to use the namespace URI to control the language version, clearly we had to change to versioning 'within' the XForms vocabulary. Hence, you can see in the latest working draft of XForms 1.1 that we now have reverted to the one and only XForms namespace URI originally allocated in 2002, and we have added an internal version attribute on the XForms model, defaulting to 1.0, to indicate the version of the language...
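As a minimal sketch of the result described above, the markup below declares the version on the model while staying in the single 2002 XForms namespace; the instance content is an arbitrary example, not taken from the specification.

```xml
<xforms:model xmlns:xforms="http://www.w3.org/2002/xforms" version="1.1">
  <!-- version defaults to "1.0" when the attribute is omitted -->
  <xforms:instance>
    <data xmlns=""/>
  </xforms:instance>
</xforms:model>
```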
What is the value proposition of an open standard?
XML is fairly pervasive, so we rarely ask this question of XML anymore, but once upon a time the question came up a lot as business managers tried to figure out why the technical people were insisting on spending money to move to XML. And the truth is that the impact is difficult to measure precisely, so open standards are sometimes a bit of an uphill battle. Nevertheless, the software engineering benefits are tangible and increase in magnitude over time.
One benefit is, of course, the human resource factor. Given a schema or DTD for a pile of pointy brackets, human beings can learn a lot about your document format quickly, which means they can become proficient more quickly and be more efficient overall at moving information into and out of the document.
This has an impact on the development of software systems. The software engineering benefits of increased interoperability/looser coupling of system modules have a significant positive effect on the time and cost efficiency of software development. Really, it's the same benefits as a service oriented architecture, which is why SOA and XML documents are such a good match.
But XML standardization has a deeper impact as it also places a value on the document format. In other words, the document format becomes a product in and of itself. A software system based on an XML document format is more valuable than one that is not because it is easier for enterprises to migrate to or from the document format. The benefit to a vendor of enterprises being able to migrate to the vendor's format is immediately obvious, but the ability of the enterprise to migrate from the vendor's format is also surprisingly valuable to the vendor. This is true not just for the obvious reason that being trapped in a document format is inherently costly to an enterprise. So, the enterprise can more readily adopt a vendor's solution when it does not imply vendor lock-in, but frankly it is the capability to more easily migrate away from the vendor's solution that becomes a selling point. A vendor can say, "We know you have a choice, so we're going to be responsive to your needs and deliver quality software so you keep choosing us."
It is with all these benefits in mind that we moved the predecessor of Workplace Forms to an XML syntax called XFDL. The XFDL language is an XML vocabulary that simplifies the design, development and deployment of high precision, secure forms applications that provide a rich user experience.
Of course, the first thing we did with our new XML syntax was to report it to the W3C in a document which became a W3C Note. The purpose of a W3C Note is to bring to the attention of the W3C something that contains aspects worth of consideration for standardization. The W3C does not and never will standardize a vendor's submission. But it does take note of its own notes! A positively reviewed note is likely to result in some movement in the standardization world. In the case of XFDL, that movement has occurred all over the place, including the likes of XPath, XML Schema, XML Signatures and Canonicalization, and XForms.
Of course, XFDL now incorporates XForms to express all aspects of XFDL that it can. And like a good standard ought to do, XForms itself incorporates other W3C technologies where appropriate, like XPath and XML Schema. But XForms depends on a host language to deliver the actual user experience, and there are aspects of a precision presentation and rich user experience that properly belong at the host language level. And XFDL even encodes these bits with the most pervasive standard of all -- XML! But I would be remiss in not giving you at least a taste, especially since the message is so closely aligned to the value proposition of XForms and, indeed, XFDL+XForms.
In other words, XML documents are the lifeblood of a service oriented architecture, and XML technologies are valuable because they help us overcome the limitations of rigid, monolithic systems.
With Workplace Forms, we combine XFDL and XForms to achieve a somewhat elaborated version of this view in which the forms themselves are the documents that make their way through a service oriented architecture, interacting according to their own rules of engagement to achieve validity of the contained XML data document and more efficiently achieve the intent of a business process.
In other words, the SOA is the infrastructure, the XFDL form is the medium, and the XML data is the message. With this analogy, it is easy to see that the powerful words of Marshall McLuhan are applicable: The medium is the message. The more powerful the medium, the more powerful the message. An XForms layer around XML data trumps a system in which only the XML data is standardized. An XFDL layer around the XForm... transaction auditability, digital signature security, comprehensive accessibility, rich text, globalization, and on and on. All the things we get to talk about in future installments of this blog.
UPMC iCub project/XDE-simulator-dev
If you want to develop robot controllers in XDE, you may need to prepare your machine by installing two further components: XDE-core, a set of XDE development files for developers, and ORC. Like XDE itself, XDE-core is not open source.
- For any problems with the installation ask Serena Ivaldi, Joseph Salini, Sovannara Hak.
- For any problems with running it ask Sovannara Hak.
- For the license ask Vincent Padois.
Once you have XDE-core and ORC, you can install the modules developed at ISIR for building controllers in XDE. In this case the code can be downloaded from GitHub.
Preparation
Linux
We hereby assume you have Ubuntu 12.04.
Dependencies:
- System dependencies
The basics for XDE-core
sudo apt-get install libeigen3-dev g++ python-dev cmake git
These are necessary for XDE-ISIR modules
sudo apt-get install swig python-matplotlib
- XDE: installation
- ORC: installation
- XDE-core
To retrieve the latest developer packages, ask Sovannara Hak.
- XDE-ISIRController (C++ version)
To retrieve the latest ISIR Controller for XDE based on C++ ask Joseph Salini.
- XDE-ISIR modules
You will need to retrieve some modules from the GitHub repository. Create a folder, for example xde-isir, where you can put all the modules. Each will be compiled separately.
cd /home/icub/software/src/xde-isir
git clone <repository URL>    (repeat for each of the modules listed below)
Note that XDE-ISIRController in this case has the same name as the one downloaded in the previous step, but it is different. The previous one is in C++ and makes the controller available in C++ in XDE, whereas the one you get from git is the python version, which calls the C++ version to accelerate some parts of the simulation.
Installation
Linux
- XDE-Core
Go in the folder where you have put xde-core, for example
cd /home/icub/software/src/xde-core
and install the package:
dpkg -i xdecore_3.99.5.0_amd64.deb
Check that the LGSM library is included; this command:
pkg-config --cflags eigen3
should return
-I/usr/include/eigen3 -I/usr/include/eigen3/unsupported
If this is not the case, then you need to copy or modify the eigen3.pc file in your $PKG_CONFIG_PATH. Then install the developer package:
sudo dpkg -i --force-all xdecore-dev_3.99.5.0_amd64.deb
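If you automate your setup, the LGSM check above can be scripted by parsing pkg-config's cflags output. This is a hedged sketch; the helper name and the idea of feeding it captured output are illustrative, not part of the original guide:

```python
# Sketch: decide whether eigen3's pkg-config cflags expose the
# "unsupported" include directory, which is where the LGSM headers live.
# Feed it the output of: pkg-config --cflags eigen3
def has_lgsm_support(cflags):
    """Return True if any -I flag ends in eigen3/unsupported."""
    return any(flag.endswith("eigen3/unsupported") for flag in cflags.split())

# Example with the output this guide expects:
print(has_lgsm_support("-I/usr/include/eigen3 -I/usr/include/eigen3/unsupported"))  # True
print(has_lgsm_support("-I/usr/include/eigen3"))                                    # False
```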
Now you have to create a file xdecore.pc and put it in your $PKG_CONFIG_PATH. In my case, $PKG_CONFIG_PATH=/home/icub/software/lib/pkgconfig.
cd $PKG_CONFIG_PATH
touch xdecore.pc
Copy-paste this into the file:
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include/xdecore
Name: xdecore
Description: XDE core library
Version: 3.99.5.0
Requires: eigen3
Libs: -L${libdir} -lXDECore
Cflags: -I${includedir}
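If you script this step instead of editing the file by hand, the same xdecore.pc can be written from Python. A minimal sketch; the helper function and the temporary-directory fallback are illustrative, not part of the original instructions:

```python
import os
import tempfile

# The xdecore.pc content shown above; version and prefix are the values
# used in this guide -- adjust for your setup.
PC_TEMPLATE = """\
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include/xdecore

Name: xdecore
Description: XDE core library
Version: 3.99.5.0
Requires: eigen3
Libs: -L${libdir} -lXDECore
Cflags: -I${includedir}
"""

def write_pc_file(pkgconfig_dir):
    """Create xdecore.pc inside pkgconfig_dir and return its path."""
    path = os.path.join(pkgconfig_dir, "xdecore.pc")
    with open(path, "w") as fh:
        fh.write(PC_TEMPLATE)
    return path

if __name__ == "__main__":
    # Demo: write into a scratch directory; in practice use a directory
    # from your $PKG_CONFIG_PATH.
    print(write_pc_file(tempfile.mkdtemp()))
```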
- XDE-ISIRController (C++ version)
Go into the folder where you put XDE-ISIRController, for example
cd /home/icub/software/src/XDE-ISIRController
Create the usual build folder, enter it and compile through cmake:
mkdir build
cd build
ccmake ..
make
make install
This will install libXDE-ISIRModel-gnulinux.so (in my case, it is located in: /home/icub/software/lib/libXDE-ISIRModel-gnulinux.so).
- XDE-ISIR modules
Now these modules have to be compiled one by one, and in some cases in a different way.
- XDE-WorldManager
cd xde-isir/XDE-WorldManager
python setup.py develop
You may also want to build its documentation:
runxde.sh setup.py build_doc
which builds the html doc in xde-isir/XDE-WorldManager/build/sphinx/html
- XDE-RobotLoader
cd xde-isir/XDE-RobotLoader
python setup.py develop
- XDE-Resources
cd xde-isir/XDE-Resources
mkdir build
Before running the standard ccmake/make chain, you must check the content of the cmake file CMakeLists.txt, in particular the execute_process command, and verify that it installs into dist-packages.
If not, change the code as below.
EXECUTE_PROCESS(
  COMMAND "${PYTHON_EXECUTABLE}" "-c"
    "import sys, os; print os.sep.join(['lib', 'python' + sys.version[:3], 'dist-packages'])"
  OUTPUT_VARIABLE PYTHON_SITELIB
  OUTPUT_STRIP_TRAILING_WHITESPACE
  ERROR_QUIET)
This avoids a known issue with ipython; it is going to be fixed, so you probably won't run into it. Then you can finally do the ccmake/make sequence. Please do not change the install location in this case! I tried installing into my usual /home/icub/software location and it doesn't work, so stick to the default installation path.
cd build
ccmake ..    # DO NOT CHANGE THE INSTALL PATH HERE!
make
sudo make install
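For reference, the python one-liner inside that EXECUTE_PROCESS computes a relative lib/pythonX.Y/dist-packages path from the interpreter version. A Python 3 sketch of the same computation; note that the original's sys.version[:3] trick truncates incorrectly on Python 3.10 and later, so this version derives X.Y from version_info instead:

```python
import os
import sys

def dist_packages_relpath():
    """Relative dist-packages path, as the CMake snippet computes it.

    The CMake one-liner uses sys.version[:3], which yields "3.1" on
    Python >= 3.10; building X.Y from sys.version_info is safer.
    """
    ver = "%d.%d" % sys.version_info[:2]
    return os.sep.join(["lib", "python" + ver, "dist-packages"])

print(dist_packages_relpath())
```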
You may need to check that XDE-Resources can be correctly found by python. So, in a terminal, type:
ipython
import xde_resources
If it works, the installation is OK. If it doesn't, check which python packages for xde you have installed so far:
import xde_ +TAB
If you only see
xde_robot_loader xde_world_manager
It means you encountered a little bug, or you probably configured ccmake to install in a location other than the default one. In that case you need to redo the ccmake step, then make and make install into the default location. To verify, the install log should tell you something like:
...
-- Installing: /usr/local/share/xde-resources/urdf/icub_simple.dae
-- Installing: /usr/local/share/xde-resources/urdf/icub_simple_collision.dae
-- Installing: /usr/local/lib/python2.7/dist-packages/xde_resources/core.py
-- Installing: /usr/local/lib/python2.7/dist-packages/xde_resources/__init__.py
Note that the python files go to dist-packages.
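The interactive import checks above can also be done non-interactively. This is a small sketch; the helper and the module list are mine, based on the packages discussed in this guide:

```python
import importlib.util

# Modules this guide installs; extend the list as you add packages.
EXPECTED = ["xde_robot_loader", "xde_world_manager", "xde_resources"]

def missing_modules(names):
    """Return the subset of names that the import machinery cannot find."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_modules(EXPECTED)
    if missing:
        print("Not found:", ", ".join(missing))
    else:
        print("All XDE modules found")
```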
- XDE-ZMPy
cd xde-isir/XDE-ZMPy
python setup.py build
python setup.py install --user
The script will install things locally; in my case they end up in /home/icub/.local/lib/python2.7/site-packages/xde_zmpy. Note: since 12-2013 this module has been integrated into the new XDE-ISIRController, so it is not used anymore.
- XDE-ISIRController
cd xde-isir/XDE-ISIRController
python setup.py develop
Note that to install this module you must first have installed the C++ version (a few steps earlier), otherwise you will get the error:
XDE-ISIRController not found by pkg-config
- XDE-DocExamples
Some documentation can be built by:
cd xde-isir/XDE-DocExamples
make html
which will create a browsable html documentation in build/html. In source/examples you can find some examples with scripts.
- XDE-SwigISIRController
cd XDE-SwigISIRController
python setup.py build
python setup.py install --user
The script will install things locally; in my case they end up in /home/icub/.local/lib/python2.7/site-packages/swig_isir_controller.
Usage
Linux
The general way to run scripts in XDE is
runxde.sh my_script.py
If you installed XDE correctly, runxde.sh should be in your PATH (so you can use autocompletion in your terminal).
Some scripts for using the ISIR controllers with XDE and some robots (KUKA LWR, iCub) are provided in the examples of XDE-ISIRController (the python version)
cd xde-isir/XDE-ISIRController/examples
As a first test, try running
runxde.sh 01_full_joint_control.py
Or you can try iCub for simple things:
cd xde-isir/XDE-ISIRController/examples/icub_control
runxde.sh 02_squatting.py
Do you have any questions with regard to my packages, SlackBuild scripts, other scripts, or my documentation? Or do you have a request to make? Please use this space to write down your ideas and I will try to answer. Other readers are of course also allowed to voice their thoughts about what you write.
Keep your posts on topic please. No flamewars, trolling, or other nastiness allowed. This is not meant to be a replacement for LinuxQuestions.org…
If the blog refuses to accept your post, then perhaps you are affected by a bug in the SQLite plugin. Check if your post contains a string of text which is enclosed by the characters ( ). Is there a pipe symbol or a comma inside those round brackets? Try to remove those and re-post your comment.
Continue to my blog articles if you want.
Eric
Thanks for creating this space, and for being such a force for the Slackware community.
Okay, so the good news is that Handbrake 0.9.8 works perfectly. The bad news is it doesn’t contain any new features, only bug fixes. And according to the devs, the current trunk won’t be released as the official build until next year. Is there any way I could get you to build the 4883svn nightly?
hi, just noticed the handbrake package in “restricted_slackbuilds” is still version 0.9.5 in the 32/64 versions.
Do I get all the features if I get the regular pkg from slackware.com server?
best regards
O… I forgot that I had to place the packages in restricted_slackbuilds due to the lame and faac encoders… I will set that right tonight and update the repositories.
Actually, it is easy for you to build your own package from a SVN trunk checkout. I will stick to the official releases but you can grab the SlackBuild and edit it so that it has these two lines:
VERSION=${VERSION:-r4883}
RELREV=${RELREV:-""}
Which causes the script to checkout revision 4883 from trunk and build a package for that.
Eric
Thanks Bob. Unfortunately it’s not going to be that simple, because sourceforge.net only has the official releases. It looks like the only way to grab a nightly for a build is to use a subversion repository call, which is failing badly for me when I try it.
Hm… I reread your post, and understand my confusion now. The current .build file contains no references to SVN, so it defaulted to sourceforge.
Well, after a lengthy process of finding all the dependencies necessary for a compile in unRAID, I have a working executable. I’m not 100% sure of the best way to make it into a package, though. I ran “make install” in the build directory and that has me going for now, but is there a simple way to make the .build file?
Hi Eric, slackpkg won’t blacklist last multilib update
Mon Jul 30 20:12:24 UTC 2012
current/gcc-4.7.1_multilib-x86_64-1fix1_alien.txz: Rebuilt.
current/gcc-g++-4.7.1_multilib-x86_64-1fix1_alien.txz: Rebuilt.
current/gcc-gfortran-4.7.1_multilib-x86_64-1fix1_alien.txz: Rebuilt. Fixed the
64-bit libgfortranbegin.a library which got overwritten by the 32-bit
version. Thanks to hiptobecubic for reporting this.
current/gcc-gnat-4.7.1_multilib-x86_64-1fix1_alien.txz: Rebuilt.
current/gcc-go-4.7.1_multilib-x86_64-1fix1_alien.txz: Rebuilt.
current/gcc-java-4.7.1_multilib-x86_64-1fix1_alien.txz: Rebuilt.
current/gcc-objc-4.7.1_multilib-x86_64-1fix1_alien.txz: Rebuilt.
I had to add:
“[0-9]+fix1_alien” to /etc/slackpkg/blacklist so they don’t get replaced by Slackware’s native packages
best regards
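slackpkg treats blacklist entries as regular expressions matched against package names. A rough Python illustration of how that pattern separates the fix1 rebuilds from ordinary packages; this is not slackpkg's actual implementation, which is shell/grep based, and the glibc name below is a made-up counter-example:

```python
import re

# The blacklist entry suggested above, as a regular expression.
PATTERN = re.compile(r"[0-9]+fix1_alien")

def is_blacklisted(package_name):
    """True if the package name matches the blacklist pattern."""
    return bool(PATTERN.search(package_name))

packages = [
    "gcc-4.7.1_multilib-x86_64-1fix1_alien",
    "gcc-g++-4.7.1_multilib-x86_64-1fix1_alien",
    "glibc-2.15_multilib-x86_64-7alien",   # hypothetical non-fix1 package
]
print([p for p in packages if is_blacklisted(p)])   # only the gcc packages match
```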
Hi cesarion76
I think I may have to rename the packages so that they end on “fix1_1alien” instead of “1fix1_alien” because having to add yet another line to the blacklist file of slackpkg is not an elegant solution. Thanks for mentioning it.
Cheers, Eric
aiden, I am a bit confused as to what you are trying. A “.build” file? Could you not just use the hints I gave 6 comments back and run the edited handbrake.SlackBuild with changed variable definitions? That will checkout a trunk snapshot and build that for you.
Eric
Bob, yes I tried that first, of course. But I got an “invalid scheme” error. So instead I went down the road of manually checking it out and compiling it locally based on Handbrake’s wiki instructions. What I would like to do is make a package out of that binary so I can post it on the unRAID forums for other users to download. I appreciate your patience, because I’m clearly a novice at these things.
I never saw an “invalid scheme” error. Perhaps if you can post a full log of your failed compilation on then I could have a look at it.
Eric
Hi. Are there any difficulties in building the recent qemu-kvm-1.1.1 for Slack 13.1, or are the 0.14 packages just left over from an old SlackBuild version?
Thank you
Thanks Eric for the kde 4.9.1, it has helped me out a lot.
Hello Erik!
Could you please create a package for virt-manager? Or a howto on how we can install it? I found some howtos on the net for Slackware and virt-manager, but I was unable to make it work.
Thank you.
Krisz
Hi Krisz
I have been looking at virt-manager, especially for the VNC viewer features, and it is possible that I will create packages and/or write an article about it, when I get some free time.
Eric
Alien, everyone would like to know what your perfect Slackware installation looks like, from beginning to end: partitioning, filesystem, packages, tweaks, and also a screenshot of your computer. This could be an upcoming article.
Hi willian
My own computers are not all that interesting to talk about. They are functional but not shiny. I add only a few packages to a full Slackware install, depending on the needs (my laptop for work has some other stuff than the desktop I share with the family).
Even the background on this laptop is the standard KDE background…
Eric
Hi Eric
First of all I wish to thank you for the amazing packages you place and all the contribution you do for slackware.
In your post about LibreOffice, you mentioned somewhere that packages under 13.37 might work for 14.0.
I wonder if I should try it for lame and others, in the restricted packages, or wine, in the regular ones, and even for packages for older versions, like xawtv for 12.0 (I have multilib installed).
I am using slackware 14.0.
cheers!
Hi Duodecimo.
The golden rule for binary packages is, if they are not available for the Slackware release you are currently using, try a package for an older release. Often that will just work (but not always). If I find a package for Slackware 13.37 which fails on Slackware 14 then I will specifically compile a new package for Slackware 14.
So, yes, you can use the “older” packages for lame and wine. If you encounter any issues, like library linking errors, let me know so that I can compile a new package!
With sources it is different. You will find that often, the source for an older version of software will no longer compile on a newer Linux distribution. That is typically caused by updates to the gcc and glibc packages which introduce new library calls and interfaces which were not available at the time the older software was written. For successful compilation you would have to find patches or even use a newer version of the source. At the same time, a binary which was compiled from that old source, on an older Slackware release, will usually still work without issues on the new Slackware.
Eric
Hello Eric.
Just to report that qemu-kvm 1.2.0 does not work on 14.0 (missing libgnutls.so.26). I had to recompile it in order to make it work.
Hi MartinOwaR
Thanks for mentioning this. I have just finished uploading some fresh qemu-kvm and vde packages, built on Slackware 14.
Cheers, Eric
Hi Eric,
Just a note to let you know about the release of IcedTea 2.3.3, see
And, once again, many thanks for all the work you do on Slackware!!
Cheers, Jean-Francois
Hi Jean-Francois
The new openjdk packages have been built already, and I will upload them soon. A blog article will have to wait until tonight.
Cheers, Eric
Hi Eric,
Thanks for the openjdk upgrade. I installed the new package and it works great!
Cheers, Jean-Francois
Just saying thanks for all of the work you have done. I use Slackware for my own network and for work related things. And the work you and the others do is appreciated.
thanx.
Thanks for the packages and updates. They work flawlessly ! Amazing stuff.
Hi Eric, the *.info for your tool is out of date for SBo. The new .info file is updated to add a REQUIRES line, and to remove APPROVED per
Hi JJ,
You are right, I fixed the .info generator. Thanks for notifying me.
Cheers, Eric
just wanted to say thank you for all your work.
Hello!
I want to write a howto page on docs.slackware.com. I registered, logged in, tried to “add page”, and got:
You are here: start » howtos » misc » sb_live_5.1
Permission Denied
Sorry, you don’t have enough rights to continue. Perhaps you forgot to login?
What can we do?
alienbob – Got one heck of a question for you. I’ve been tinkering, building a new system and in the name of keeping my wife happy, decided to take a stab at getting handbrake-gtk installed. So far, your build script is the only one I’ve found for the gui. Getting handbrake installed went without a hitch using a build script from elsewhere. But running your build script is causing my system to reboot.
What the heck? Reboot while building software? There’s something really strange going on. Any ideas? Is your build dependent on multi-lib or something that I’m not providing?
Any insight would be more than welcomed.
BTW, loved the calibre build. Nice work.
Hi OlPhart
My handbrake.SlackBuild does not need multilib or anything else. I run that script on a “virgin” Slackware in order to compile a package.
If your computer reboots during compilation then that might indicate an overheating issue. Software does not make your computer reboot just like that.
Is there anything in the message log right before the reboot occurs? Can you try to monitor the case- and CPU-core temperatures during the compilation?
Cheers, Eric
Thanks Eric ,
Temps are good. Don’t know what is going on but it’s certainly not temps that are doing it. Very abrupt reboot, no warning. Seems to occur at some point during ffmpeg build.
At the time I was getting the reboots I had been exec’ing the build script rather than sourcing.
When I try to source the script I get a rather cryptic kernel crash that freezes the system. Appears to be some sort of cpu related memory segmentation issue.
That said, the thought occured to me that a “virgin” build may be the answer. When I get the chance I’ll try a clean build and see how it goes.
When I get the chance to try a “clean” build I’ll get back to you and let you know how things go.
In the meantime, I’m off to shovel 6″ of fresh heavy wet snow.
Thanks,
Drew.
I brought this up in #slackbuilds a while back but thought I’d post it here because I wasn’t able to catch you online at the same time I was.
I wanted to suggest that --enable-libvorbis be added to your restricted ffmpeg slackbuild. While ffmpeg does use libvorbis internally, certain applications which rely on ffmpeg to transcode (Amarok’s transcoding dialog that it gives you when you put music on a portable player being the first one that comes to mind) appear to require --enable-libvorbis to have been passed during compilation in order to transcode to ogg.
I manually add it myself when I compile ffmpeg and it does not appear to cause any problems, and does not require any additional work for the user since libvorbis is included in a stock Slackware installation.
Anyway, just my suggestion. Thanks!
Cultist,
I will use “--enable-libvorbis” in future builds of my ffmpeg package. No problem at all.
Eric
Pingback: Steam - Titans Attack - Slackware 14 64bit - no sound
Upgrading KDE to 4.10.x removed some important plasmoid SDK tools, not packaged into kde-workspace anymore (plasmaengineexplorer, plasmoidviewer etc.). Now they are included into Plasmate (). Could you try to build Plasmate slackware package?
Thank you for all your work
Tom
thank you.
Mr. slack_dragon
no requests. thank you for all!!
or maybe…..more tutorials to know the same as you know
Just one question: you have to replace mysql with mariadb in the compat32 stuff, haven’t you?
Best regards
Hi Michelino
Yes, I should have done that earlier.
Fixed and uploaded.
Cheers, Eric
Just found your modified inet1 files for bridging. Once again AB does the hard work so I don’t have to. Thank you, you’re a star.
Hi mickski
Those bridge modifications for rc.inet1 were merged into Slackware 14. Are you running an older version still?
Eric
Hey Bob
yeah still on 13.37, running kde 4.10.1 + lo 4 thanks to some slackbuilds I found somewhere :-). It really would be easier to just upgrade.
Couldn’t agree more about the awful weather.
Cheers 🙂
Hi Eric, I wonder if you have managed to compile wvdial and wvstreams on your ARM port, because now I’m having problems on my Raspberry Pi trying to use wvdial.
An error about:
getcontext(&get_stack_return) == 0.
The error does not occur at compile time, but at execution time. I didn’t have any problems compiling from the sources provided by slackbuilds.
Searching on Google points out that this is an old problem, dating from 2009 and so on, but I have not read anything about someone who fixed it.
Hi P431i7o
I have not yet compiled any slackbuilds.org package… just sticking to the official Slackware packages for now.
Perhaps this is your solution:
Eric
Eric
Hi, Is there any easy way to make VLC link to libva installed in the system?
What do I need to change in the slackbuild?
If you want to link to the system libva, then do not let the vlc.SlackBuild script enter the routine that builds an internal static copy of libva.
Eric
Hi Eric,
It appears there’s something weird going on with your Slackware mirror (taper). Whenever I try to check for updates in the patches directory with slackpkg, it reports no packages to be upgraded. However, switching to the slackbuilds.org mirror fixed the problem.
Also see:
Hi JKWood
There was a small omission in my rsync_slackware_patches.sh script, some additional files were not being synced (FILELIST.TXT and CHECKSUMS.md5* in the main Slackware directory are updated whenever there are new patches).
I uploaded a fixed script and also ran that script on taper, so that the Slackware repositories there should work with slackpkg again.
Thanks for mentioning,
Eric
I just wanted to send you a small note to let you know how much I appreciate everything you do for the slackware community, and your continual efforts to make Slackware the most amazing distro available.
Your work is always impeccable, and I hope to someday be able to give back to the community as much as you have generously given over and over again.
Hi Bob.
I need help. I want to install Cairo Dock in Slackware 12.0 but cannot locate binary packages. Do you know where I can find them?
Thank you.
Hi raymundo
I do not think you will find many binary 3rd party packages for Slackware 12.0 on the Internet. Most people only release for Slackware 14.0.
A SlackBuild for cairo-dock was added to SlackBuilds.org for Slackware version 13.1. I have no idea if cairo-dock will work on an old Slackware like 12.0.
Eric
Thank you Bob.
Hi, Eric.
Appreciate all your hard work on Slackware and your repositories for that. Not sure if you use SSD disks. But for those who use SSD disks and run an encrypted LVM setup, there is a need to add the option --allow-discards to the cryptsetup luksOpen command in the init file of the initrd.
Would appreciate if you update that.
Thank you in advance.
Hi dolphin77
Using “--allow-discards” has a potential negative security impact (see the cryptsetup man page). I guess that if you want to use this parameter because you installed Slackware on an encrypted SSD, you will have to add that parameter yourself to the init script inside the initrd. Slackware’s mkinitrd command won’t overwrite /boot/initrd-tree/init unless you specify “-c” to (re)create the initrd from scratch.
Or you can submit a patch to introduce a variable which can be used to specify additional non-default cryptsetup parameters.
Eric
Eric, thank you for prompt reply.
You are right, I didn’t think of possible security impact. Probably it is better to leave as is system wide. Anyway on-line trimming (mounting with discard option) is not a good choice. Thus it is better to boot up from external flash disk from time to time and manually mount encrypted partitions and to run fstrim manually.
Thanks.
Hello, Eric
both the 32-bit and 64-bit packages of the kajongg-4.10.3 game are broken: there are no executables in them.
greetz, Alek
Hi Alek
You are right. I checked the build logfile and it appears that Kajongg requires python-twisted (a networking library) which in turn depends on zope-interface.
I do not think that zope will ever be integrated into Slackware, so I left a note for Pat to decide what to do about this.
Basically there are two options:
1) remove the kajongg package entirely from Slackware’s KDE package set
2) Force the installation of the kajongg binaries (after all they are only python scripts) and leave it to the Slackware user to install python-twisted and zope-interface from SlackBuilds.org (they are both present there already).
Eric
Thanks Eric, I’ll follow you forever!
Hi Eric,
I know that you’re concentrating on Samsung Chromebook. But, would your Arm port of Slackware run on BeagleBone Black ?
–William
Hi William
Unfortunately I can only test on hardware that I actually own. However, my ARM packages will work on any armv7 hardware. It’s usually just the kernel which you have to create for a new piece of hardware, and that will at least give you a bootable Slackware system. After that, there will usually be tweaks to get X.Org and sound fully functional but that will straighten out itself in time.
You could try what I initially did for my ChromeBook: find a bootable SD card image for another distro, write that to an empty SD card, and then wipe the other distro’s filesystem. Then copy my mini root filesystem into the empty partition and see if that will boot your BeagleBone…
A mini rootfs is here: . That rootfs does not contain a kernel or kernel modules, try copying those from a working BeagleBone distro image.
Eric
hi alienbob!
any chance to see an update of your handbrake package to v0.9.9?
alex
Hi, Eric.
Want to report that something is wrong with your slackware64-current mirror. Looks like new files were added during sync, the old ones were not deleted.
See for example, where both kde-4.10.3 and kde-4.10.4 files are present.
Hi dolphin77
Yeah, I guess that Pat uploaded his KDE packages around the time that I run my mirror script. Usually those two events are far apart.
I re-ran my mirror script and all is well again.
Cheers, Eric
Hi Alexandre Jobin
Handbrake has been upgraded last week. It took some time to create a patch to make the compilation succeed.
Eric
Hi Eric, I just installed calibre on Slack 14.0 x86_64 multilib with all dependencies and get this error.
$ calibre
Traceback (most recent call last):
  File "/usr/bin/calibre", line 20, in <module>
    sys.exit(main())
  File "/usr/lib64/calibre/calibre/gui2/main.py", line 415, in main
    app, opts, args, actions = init_qt(args)
  File "/usr/lib64/calibre/calibre/gui2/main.py", line 85, in init_qt
    from calibre.gui2.ui import Main
  File "/usr/lib64/calibre/calibre/gui2/ui.py", line 31, in <module>
    from calibre.gui2.widgets import ProgressIndicator
  File "/usr/lib64/calibre/calibre/gui2/widgets.py", line 21, in <module>
    from calibre.gui2.progress_indicator import ProgressIndicator as _ProgressIndicator
  File "/usr/lib64/calibre/calibre/gui2/progress_indicator/__init__.py", line 15, in <module>
    pi_error)
RuntimeError: Failed to load the Progress Indicator plugin: the sip module implements API v9.0 to v9.1 but the progress_indicator module requires API v8.1
Do I need to recompile in 14.0 or is something missing?
Thanx
Hi César
I think you are not running Slackware 14. It seems that you added some newer software on top which replaced original Slackware packages. Are you running my KDE 4.10 for Slackware 14? Part of that is an upgrade to the Slackware ‘sip’ package which is incompatible with calibre’s Slackware 14 package.
You have to recompile calibre to fix the sip error.
Eric
Eric, yes I’m using your KDE 4.10.4. I’ll recompile calibre and try to make it work. Gracias
Hi Eric,
I’ve tried your alienarm miniroot fs on a device with NAND flash. I’ve created an ubifs image for rootfs and booted it. It always stops at the fsck check. ubifs does not support fsck at all, so I have patched rc.S to skip fsck for certain file systems:
Thorsten
Hey Bob
Did you forget to update the libreoffice slackbuild on or do I just need some patience.
Thanks 🙂
mick, you looked at the “14.0” directories, right? I am not updating LibreOffice for Slackware 13.37, only for 14.0.
Eric
Hi Thenktor, interesting patch. In case your pastebin entry does not have eternal life, I copied its content to .
Was this a Nokia device you tried the mini rootfs on? What were your findings?
Cheers, Eric
Hey Bob
I mean the build script – it’s dated 18-may-2013.
last entry in the changes comments is :-4.0.3-1: 09/may/2013 by Eric Hameleers
# * New 4.0.3 release built for Slackware 14 and newer
The sources dir is up to date 30/6/13.
Once again many thanks 🙂
Lightening speed on the update Bob.
Thanks 🙂
ps
Don’t know where you find the time, but very much appreciated.
Hi Bob,
it was an Atmel devkit, based on a Cortex A5:
Except for the fsck thing I had no other problems. I’m using a kernel compiled from the Atmel source tree:
Here is a picture:
I love Slackware because I saw the birth of computer science. I’m sixty years old and appreciate your work. It makes me very happy. I am Brazilian. Sorry for my English. Congratulations!
Frederico
Hey Eric! I couldn’t help but notice this. Are you considering creating packages for Slackware 13.37, for old time’s sake? Either way, thank you for your continuous efforts to provide timely updates for all the extra software packages you offer to us fellow Slackers! 🙂
Jaycee, good idea. I will find some time for new LibreOffice 3.6.7 packages.
Eric
Hi Eric!.
I recently installed slackware64 (for the first time) on a Dell inspiron 14z, the thing is that it came with a hibryd video system, ati radeon plus intel graphics.
I proceeded to install its driver in the way that is described in many tutorials (download the driver, run sh amd-ati-installer.sh --buildpkg, which generated a tgz file). The thing is that it didn’t work. I tried with the beta release and the stable one, but no success. After that I looked into the Xorg logs, because the message was ‘no screens found’ and something related to libgl not found.
So I started looking for that library and the result was that it was installed on /var/lib64 instead of /var/lib where it was looked for.
So the thing is: to whom should I report this? To Patrick or to the people at ATI?
I solved the problem just creating a link with ln -s of the folder fgl between the /var/lib and /var/lib64
Hi p431i7o
That sounds strange, the ATI driver looking for a file in /usr/lib on a 64-bit system which uses the lib64 convention for its library locations.
I would guess that this is an issue you should report to ATI.
Also, if you feel that this issue should be mentioned on then you could use that article’s “Talk” page to document your findings and hope that a WIki editor picks it up.
Cheers, Eric
This isn’t a post to inform, complain, or anything like that.
It’s fairly simple.
Thanks, alienbob for all you do in the Slackware community. You are very appreciated by this guy —> Skinnable
Although the only Slackware installation currently running in my household is on a server, I would have been completely lost without all the posts provided by you on the many pages floating around the internet.
So, thanks so much for your help – and, I appreciate all you do.
Skinnable
Hey Eric! Another week, another LibreOffice release. I was just wondering if you plan to create packages for Slackware 14.0, or want to wait for the release of Slackware 14.1. Considering that interoperability was a large focus of the release, it’d be handy for 14.0 users to have LO 4.1 packages available. Either way is fine though, I trust your judgement. 🙂
Hi Eric!
Here you can find a patch for building libtorrent-rasterbar 0.16.10 with boost 1.54 (as in current):
With respect to your desire to have a tablet with Slackware on it, have you heard of the PengPod? I would love to have a tablet with Linux on it, but as I am not up to date on the politics of computer freedom, I find myself bouncing back and forth between the PengPod site and some of your articles on a port to the ARM architecture. Just curious what your thoughts may be on the topic. Thanks.
I’ve followed your work for quite a while, and as a last resort, so as not to waste your time, I’ve come to you. Somewhere down the line slackpkg has been corrupt on a newly installed Slack 14 system. I’ve reinstalled the system as well as slackpkg to no avail. I’m more than willing to open an SSH tunnel for you if you would be willing to assist in resolving this matter; I’m more than willing to follow any and all direction.
Thanks
Hi Austin
This blog is no replacement for the user forum at linuxquestions.org … problems like the one you describe are hard to diagnose without more than “slackpkg has been corrupt”.
What I suggest you do, is open a thread on and describe your problem there. Please try to give more information than what you wrote here, because I can not even start diagnosing it. Error messages? Weird behaviour? How do you experience this corruptness? Did stuff stop working? Etc…
I do not perform one-on-one support, so the ssh tunnel offer is nice but I am not going to follow up on it. I hope you understand that.
Cheers, Eric
Hi Charles
I looked at the PengPod but I think its hardware is a generation too old for the price tag. Nice initiative though and I wish there were more like this. Tablet computers are not going away, but the closed hardware / closed driver approach of most vendors _really_ annoys me.
Cheers, Eric
Hi,
Eric, condolences for the loss of your dear friend ‘Sox’. I too lost a friend a few weeks back who would be at my side when working on the farm. ‘Mo’ will be remembered and loved for her unconditional love. Best dog!
-Gary
Thanks for your response. Yes, I just wish there was a tablet that had a basic Linux OS like Slackware so I could practice writing scripts or programs. In some ways this closed hardware fight reminds me of the early 90’s, but from a different standpoint. The drivers didn’t exist then, at least not for the interconnectivity of machines. I still have my Macintosh Quadra 605 with 4 MB of RAM and an 80 MB HD, that I programmed in assembly, Pascal and C, but that was a long while back. I used a modem to connect to the university and did my work on a Unix mainframe. Those were the days. Appreciate greatly your response. Cheers.
Hi Eric.
Would it be possible to add support for OpenConnect VPN to the networkmanagement package in KDE?
I successfully built the latest openconnect (5.01) and NetworkManager-openconnect (0.9.8.4) using the SlackBuilds from slackbuilds.org (just had to bump version numbers). Unfortunately OpenConnect did not appear in the VPN list in the NetworkManager configuration. It seems the networkmanagement package depends on openconnect being installed in order to build support for it. After installing openconnect I manually built the networkmanagement package using the KDE.SlackBuild and sources from the slackware64-current repo (4.10.5). Now it’s working like a charm. I guess this would add OpenConnect as a dependency to the KDE build.
I need openconnect because my college uses Cisco AnyConnect VPN and it failed to install via the browser. Earlier I got OpenVPN working after installing NetworkManager-openvpn using Robby Workman’s SlackBuild, so I was led to believe the same would apply to NetworkManager-openconnect. Googling around only confused me, but in the end I found the answer in the sources (networkmanagement-0.9.0.8/vpnplugins/openconnect/CMakeLists.txt).
Cheers, Marius.
Hi Marius
I think the best course of action for you would be to write to Pat Volkerding (volkerdi at slackware dot com) asking for the inclusion of openconnect and NetworkManager-openconnect into Slackware.
I am carefully aiming my KDE stuff at slackware-current and therefore not introducing new packages unless absolutely required for KDE. Openconnect is a bit of a grey area.
The build-time dependency of networkmanagement on the presence of openconnect is unfortunate.
Eric
Thanks for your reply.
I followed your advice and asked Pat Volkerdi to include openconnect into Slackware. I also wrote to Robby Workman because he is the maintainer of the openconnect and NetworkManager-openconnect packages at slackbuilds.org and there is no mention of this issue there.
Cheers, Marius.
Hi Eric,
I was wondering what happened to the URL
I get an error 404. Did it move or get abandoned?
– dr –
Hi Eric,
Sorry that I had to repost this as I realised all my comments within quotes were somehow truncated.
I did this:
#ln -s /usr/src/linux-3.2.29-custom/ linux
with an additional “/”
hence, my symlink linux -> /usr/src/linux-3.2.29-custom//
notice the double “//” at the end.
Upon boot up, I get a whole series of version magic error:
version magic ‘3.2.29-custom SMP mod_unload PENTIUMIII ‘ should be ‘3.2.29-custom SMP mod_unload 486 ‘. Could this be related to my symlink linux -> /usr/src/linux-3.2.29-custom//?
Appreciate if you can provide some insights to this puzzle.
Thanks in advance for your generosity!
ck
Hi donrich39 –
Samba4 was eventually added to Slackware-current and therefore my own samba4 packages were no longer needed. But I keep a copy at
Eric
Hi ck
Apparently, the kernel you boot and the modules you are trying to load are not built from the same sources.
The error you posted tells you that the modules which are being loaded were compiled for the “PENTIUMIII” CPU architecture, while the kernel has been compiled for “486”.
The extra slash in the symlink should not have any effect on this.
You need to recompile your kernel plus modules and install them all. Perhaps you forgot to install the new kernel which you built for PENTIUMIII ?
Eric
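Eric’s remark that the extra slash is harmless is easy to verify. This is a hedged sketch using a throwaway directory (the paths are illustrative stand-ins, not the real /usr/src):

```shell
# Show that a trailing double slash in a symlink target is harmless:
# readlink -f canonicalizes the path and collapses repeated slashes.
tmp=$(mktemp -d)
mkdir "$tmp/linux-3.2.29-custom"
ln -s "$tmp/linux-3.2.29-custom//" "$tmp/linux"
resolved=$(readlink -f "$tmp/linux")
echo "$resolved"
rm -rf "$tmp"
```

The canonicalized path comes out without any doubled slashes, which is why the symlink was never the cause of the version magic errors.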
Hi Eric,
Thanks for your advice.
I double checked to confirm that the kernel was built for Pentium III under the processor config. Still same error.
So I rm and cp new source for the build using config seed from kernel-seeds.org.
It works now. Thanks again!
cheers
ck
hi eric,
just had a question since I use your build of ffmpeg: does it include libavcodec as well? (I’m not sure, but I would say it was in your ffmpeg, since after the upgrade to 12-Jun-2013 I get the error that it is missing.)
thank you,
cheers,
inman.
Hi inman, what program is giving you issues? It’s most likely not FFmpeg but a program which uses FFmpeg.
Check the exact error. It will probably say something like “libavcodec.so.53: not found”.
My latest FFmpeg package contains “libavcodec.so.54”. Any program which you have compiled against an older version of FFmpeg will stop working because the SONAME of that library has changed. FFmpeg does that a lot.
This is the reason why I bundle an internal copy of FFmpeg with the VLC package for instance… to avoid this kind of breakage.
Eric
Hi Eric,
Thanks for referring me to slack-current and the samba 4 pkg. Upgraded to current, installed samba 4, needed kerberos5 so got the slackbuild from slackbuilds.org and the build failed. The reason is the tcl package has been upgraded to v.8.6 in slack-current
and some test code in the kerberos5 package uses (Tcl_Interp *) interp->result, which is deprecated in Tcl 8.6
and only defined when USE_INTERP_RESULT is defined.
The fix is to breakout the offending file (tcl_kadmin5.c) and add the line:
#define USE_INTERP_RESULT
before the #include declaration. I.E.:
/* -*- mode: c; c-basic-offset: 4; indent-tabs-mode: nil -*- */
#include "autoconf.h"
#include
#include
#if HAVE_TCL_H
#define USE_INTERP_RESULT
#include <tcl.h>
#elif HAVE_TCL_TCL_H
#include <tcl/tcl.h>
#endif
#define USE_KADM5_API_VERSION 2
#include
#include
#include
#include
#include
#include "tcl_kadm5.h"
then replace the file in the archives and rerun the slackbuild.
I can give the commands if anyone else has the problem and is interested.
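For anyone who prefers to script the patch rather than edit the file by hand, here is a hedged sketch. It is demonstrated on a stand-in file, not the real krb5 tarball, and assumes GNU sed:

```shell
# Insert "#define USE_INTERP_RESULT" right after the "#if HAVE_TCL_H" guard,
# so the macro is defined before tcl.h gets included.
tmp=$(mktemp -d)
printf '#if HAVE_TCL_H\n#include <tcl.h>\n#endif\n' > "$tmp/tcl_kadm5.c"
sed -i 's|^#if HAVE_TCL_H$|#if HAVE_TCL_H\n#define USE_INTERP_RESULT|' "$tmp/tcl_kadm5.c"
patched=$(cat "$tmp/tcl_kadm5.c")
echo "$patched"
rm -rf "$tmp"
```

Run the same sed command against kadmin/testing/util/tcl_kadm5.c inside the extracted source before re-running the SlackBuild.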
Thanks, – dr –
For some reason in my above post, all the include statements had the greater-than “filename” less-than info removed.
the #define statement had to be before the #include tcl.h statement.
Mr. Hameleers, I wanted to thank you for all the documentation and support you provide for Slackware. I am writing today to see if I might be of some assistance. I discovered something neat while trying to install Slack on Apple products that might help others who encounter similar issues.
A while back, I was offered a chance to purchase an Apple Mac Pro for a crazy-reasonable price. I have never been a fan of over-priced, last year’s hardware; however, this machine had really great specs and a very attractive price. I have quite a few systems in my home, for various purposes; however, I have always wanted a true dual-CPU machine. So, I went ahead and bought the Mac. As expected, I wasn’t too enthralled with Mac OS. It had some really nice features; but overall, it seemed lacking in some features that now seem basic. I began to experiment with the Mac and noticed that I couldn’t boot most install disks on it. After working with, and learning, the ups and downs of the “Mac” EFI implementation, I discovered something.
Out of all the distros I tried, the Ubuntu disk was the only one that started up and ran normally. It ran well, even in live mode. I did a little digging and started looking at the version of Grub on the Ubuntu boot disk, and it was unique. I tried many versions of Linux looking for one that gave me choice and also had a simple interface at boot. I found that Slackware was the best choice. Unfortunately, I couldn’t get the system to boot with the Slackware64-14 disk. I believe I read somewhere that the Linux kernel 3.2.29 in that version wasn’t yet compatible with EFI. So I cloned Slack64 current from a mirror, built an ISO and tried to boot. No joy. So then I started playing around with the images. After much trial and error, I was able to extract both images to a folder and make a working installer. I found that the grub.cfg for the Ubuntu version loaded additional add-ons that the Slackware version did not. Some are related specifically to EFI and Apple products. So I took the Slackware grub.cfg and the Ubuntu grub.cfg and made a hybrid. The only difference is that the new one includes the additional add-ons that the Ubuntu installer used. I replaced the version of grub (from the Slack disk) with the one included with Ubuntu and it worked. I was able to install from DVD as well as USB. I have pretty much perfected this method. I know that many would ask why someone would want to run Slackware on a Mac. Well, because it’s awesome, and because you can. Also, older Macs get no love or support from Apple. Ultimately, my main goal was to compare features and the usage of system resources. I also wanted to test what compiling with dual quad-core Xeons with 16GB DDR3 ECC would be like. I have to say, very nice. Both CPUs support hyper-threading. If I specify 16 threads, I can compile a kernel in about 3 minutes. Anyways, sorry for the long email. I just wanted to get this information to you.
If you think this method could be helpful, please let me know and I will make a tut showing what I did so others can do the same. Also, I noticed this method worked with my other UEFI systems as well. Might help some other folks having issues.
I was looking for an experience with Linux that I was unable to find in other places. I want to learn the details of the OS and what makes it tick. Slackware forced me to learn in order to get what I wanted, and ultimately, that is exactly what I needed.
Thanks,
Austin
Hi Austin
I do not think that this information you are able to supply will find its way into Slackware 14.1 (the doors are pretty much closed on that) but why don’t you request an account at and write a nice tutorial about what you had to do to Slackware’s grub in order to boot a Mac Pro?
The Slackware Documentation Project would be the ideal location for hosting such information.
Cheers, Eric
I would kindly like to ask for a Slackware Mini 14.1 build. Thank you!
Hi Mr.X
Uploading mini ISO images now.
Eric
I believe there is a general problem with vlc 2.1 tearing with x264 videos. Do you still have a 2.0 build somewhere until they fix it?
Hi Boris,
I keep a copy of old VLC 2.0.8 packages here:
Eric
Hello. I would like to install the latest transmission 2.82 on my current 64 box. Somehow it fails to compile when it comes to the Qt part. If I comment out the Qt part in the SlackBuild script it goes well, but I would like to have Qt as well. It fits nicely into KDE. Thanks.
Hi Chu_Set
I do not like Transmission myself, I use qbittorrent which is also Qt-based and is a much cleaner program: .
But if you want to compile transmission, give this SlackBuild script a try, it is checked on Slackware 14.1:
Eric
Are you going to rename current/ to 14.1/ at?
hey, how much do I have to click on the ads? cheers..
I saw that you packed a screen videocapture program called xvidcap a while ago. Is there any reason why xvidcap-1.1.7-i486-1alien.tgz would not work on a 14.0 installation:
Sorry, I really love that package. I used to work with it before. It just made the job. I work on porteus 2.1.
Thanks.
Hi dmitri,
Next KDE is going to go into a “14.1” directory instead of current.
Eric
Hi Ferdi
Don’t stress yourself 🙂
Just click from time to time. Google’s adwords program recognizes abnormal click behaviour and discards those click-through actions…
Eric
Hi francois.e,
Perhaps I should just build a new xvidcap package for Slackware 14.0 / 14.1.
Eric
Just FYI, dconf-0.18.0-x86_64-1.txz is included in -current.
🙂
Hi cwizardone
Yeah you’re right. I have removed the redundant package from the repository.
I also patched gecko-mediaplayer so that it works with the Chromium & Chrome browsers.
Eric
Eric,
I have used your multilib packages for some time. THANK YOU! However I recently installed MesaLib-9.2.3 on slackware 14.1 (graphics issues) and I am now ‘in between’ the 64 bit mesa library for 9.2.3 and the 32 bit library for the compat32-9.1.7 library. Is there a way that I can get both these libraries in sync?
Thanks, jwc
Hi john,
What you can do is compile a 32-bit package for that MesaLib-9.2.3 (should be doable on your multilib computer or else ask someone with a 32-bit Slackware installation) and then use ‘convertpkg-compat32’ on the 32-bit package.
Eric
Eric, what command do I use to get a 32-bit compile? The plain vanilla configure and make commands give me the 64-bit version. Incidentally, the resulting installation has gone into /usr/local/lib instead of /usr/lib. I don’t know if that is a problem or not, but is there a simple way to change it?
Thanks, John
Hi John
The best way forward would be to remove the Mesa stuff in /usr/local (you can probably just run “make uninstall” in the source directory) after creating a proper new 64-bit mesa package.
How to do it:
* Use the command “upgradepkg --install-new /tmp/mesa-9.2.3-*.t?z” to install your new mesa package (upgrading anything that remains from the original Slackware mesa package)
* Then proceed building a 32-bit package for mesa. Follow the guidelines here to accomplish that:
* Finally, convert the 32-bit package to a “compat32” package and upgrade your computer with the result. Something like:
# convertpkg-compat32 -i /tmp//mesa-9.2.3-i486-1.txz
# upgradepkg --install-new /tmp//mesa-compat32-9.2.3-x86_64-1compat32.txz
And do not forget to re-install your binary (Nvidia or Ati) graphics driver every time you upgrade the mesa package!!
Eric
The mesa package that I get when I do that has all the lib files in /usr/local/lib and the header files in /usr/local/include. I don’t know how to change the SlackBuild file so the files go to the right place.
Do you know if anyone has written a more recent slackbuild script for rng-tools? The only one I find is from Slackware 12
Hi q5sys
This is a question about the rng-tools entry on slackbuilds.org , right? You should really be asking that on the slackbuilds-users mailing list or visit the #slackbuilds IRC channel on Freenode to get an answer.
Nobody seemed to care about maintaining the SlackBuild script, why don’t you volunteer?
Eric
Hi Eric,
I was introduced to Slackware and Linux in general when I built my unRAID server, and I firstly want to say a huge thank you for all your packages and scripts I’ve used over the last couple of years!!
I am a complete noob however when it comes to writing a SlackBuild… I usually rely on botching together someone’s old script and that’s what I have done for MakeMKV(con)… until now! There’s a new dependency (libavcodec) required from ffmpeg, and I’m at a complete loss how to get that to compile for Slackware 13.1 (unRAID) so the latest MakeMKV will build with my (botched) SlackBuild script. Any pointers would be massively appreciated!!
Alex
Alex, libavcodec is part of ffmpeg. Have you tried compiling my ffmpeg.SlackBuild on your Slackware 13.1 system?
Btw, any reason why you are sticking with a relatively old Slackware version? With every new release, the chances grow that older Slackware versions no longer support compilation of newer software.
When the ffmpeg package is installed, other software should be able to pick up libavcodec.
Eric
Thanks for your reply Eric!
unRAID is built on Slackware 13.1 ( or possibly 13.37) so unfortunately I’m stuck with it!
I tried your ffmpeg.SlackBuild tonight and it failed unfortunately :/. My laptop is busy running chkdsk so I can’t tell you exactly what it failed on, other than I remember it failed at line 1557!
I’ll check what the actual failure was in the morning! If you ever fancy making a MakeMKV package in the meantime, I’d never complain! Haha. In all honesty tho I wouldn’t mind getting my head around it all a bit better!
Alex
Eric,
Is there some reason why is not being updated? I’ve had my mirror-slackware-current.sh pointed there for a while, but since the upgrade, it hasn’t shown any movement.
Regards & thanks for all the effort,
Bill
Oops!
Looks like after the crash on 5 November, a lock file was not removed and therefore the mirror process was stuck. I have deleted the lock file and manually started the mirror script. It should be OK in a short while.
Thanks for reporting this.
Eric
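The failure mode here is the classic lock-file pattern. A minimal sketch of how such a guard typically works in a mirror script (this is a generic illustration, not Eric’s actual script):

```shell
# A mirror run refuses to start while the lock exists; a crash before the
# cleanup line leaves a stale lock that blocks every later run.
tmp=$(mktemp -d)
LOCK="$tmp/mirror.lock"
run_mirror() {
    if [ -e "$LOCK" ]; then
        echo "lock exists, aborting"
        return 1
    fi
    touch "$LOCK"
    echo "mirroring..."
    rm -f "$LOCK"   # a crash before this line leaves a stale lock behind
}
first=$(run_mirror)
touch "$LOCK"        # simulate the stale lock left behind by a crash
second=$(run_mirror) || true
echo "$first / $second"
rm -rf "$tmp"
```

The fix is exactly what Eric did: delete the stale lock file by hand and start the script again.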
Hi Eric,
Sorry for the late reply! Still having issues and any help would be greatly appreciated!
I’ve installed your latest FFmpeg pkg (13.37 version) and I’m running the SlackBuild script from here: (version number changed).
I keep getting the error:
“checking LIBAVCODEC_VERSION_MAJOR… failed
configure: error: in `/tmp/SBo/makemkv-oss-1.8.7':
configure: error: LIBAVCODEC_VERSION_MAJOR is not known at compile time in libavcodec.h
See `config.log' for more details”
There is a libavcodec.pc file in /usr/lib/pkgconfig so I am unsure what the problem is?! For info, the build instructions for MakeMKV are here:
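One way to sanity-check what a build system can see via pkg-config. A synthetic libavcodec.pc is created here so the example is self-contained (the real file lives in /usr/lib/pkgconfig, and MakeMKV’s configure may additionally do its own header check, which a .pc file alone cannot satisfy):

```shell
# Create a throwaway .pc file and query it the way configure scripts do.
tmp=$(mktemp -d)
cat > "$tmp/libavcodec.pc" <<'EOF'
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: libavcodec
Description: FFmpeg codec library (synthetic example)
Version: 54.92.100
Libs: -L${libdir} -lavcodec
Cflags: -I${includedir}
EOF
ver=$(PKG_CONFIG_PATH="$tmp" pkg-config --modversion libavcodec)
echo "$ver"
rm -rf "$tmp"
```

Running `pkg-config --modversion libavcodec` without the PKG_CONFIG_PATH override shows what the installed FFmpeg package actually advertises.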
“convertpkg-compat32” converts the “lesstif-0.95.2-i486-1.txz” package incorrectly. “installpkg” throws the message:
“install/doinst.sh: line 23: syntax error: unexpected end of file”.
Keyword “fi” is missing (line 5) in doinst.sh script…
Hi Rysio
Yes I am aware of that, it has always been there since the first multilib packages.
But I did not care to write a shell script parser just for the converting… and the error is harmless.
Eric
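The symptom Rysio reports is easy to reproduce: an `if` without its closing `fi` makes the shell hit end of file mid-parse. A self-contained demo (the script content is made up):

```shell
# A doinst.sh-style script whose closing "fi" is missing:
tmp=$(mktemp -d)
printf 'if [ -d /tmp ]; then\n  echo ok\n' > "$tmp/doinst.sh"
status=0
sh "$tmp/doinst.sh" 2>&1 || status=$?
echo "shell parse exit status: $status"
rm -rf "$tmp"
```

The parse error aborts only that script, which is consistent with Eric’s point that the resulting error from installpkg is harmless.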
Hi Eric. It has been a long time since I visited your site. I feel I should spend more time here and with you. How do you join the blog? I can’t find a register button anywhere. I’m getting old (62 tomorrow).
Thank you.
Hi moonstroller.
There is no need to register to this site, posting comments is always allowed.
If you want to be kept informed of new posts you can use an RSS feed reader (Thunderbird will do the job) and subscribe to the RSS feeds for my posts and/or comments (the links are at the bottom of every page).
Happy birthday!
Cheers, Eric
Hello, Eric!
First off, THANK YOU for all of your hard work. I’ve been using Slackware Linux for over 12 years now, and what you’ve done for Slack makes the overall Slackware experience that much quicker, easier, and enjoyable.
THANK YOU.
So, I’m writing to let you know that a problem seems to exist in your new LibreOffice 4.1 packages when attempting to start/play a slide show in Impress (Pressing F5 to start the show to present the slides in full screen mode). The screen freezes and will not present the slides. Pressing ESC will get you out of the freeze, but you still cannot present.
Happy Holidays.
lbs, that is actually not a new problem.
See here, google will show more hits:
Basically:
– Go to “Tools > Options > LibreOffice (View)”
– Untick “Use Hardware Acceleration”
– Restart LibreOffice Impress
Eric
Hi Alien Bob, I just wanted to take some time out to say a big thank you. I have used Slackware now since about 1998/9 and built lots of my own packages over the years; then children arrived and my time diminished. Your packages and fixes have saved me so many hours of work, time I have used well. I just found your cure for Dropbox, so now I can make real use of it from my Linux systems. I run your firewall too, jolly nice and secure. BT, my ISP, said they can’t see anything I have connected to their system. That’s good, I said, Alien Bob’s firewall is working a treat. Gobsmacked they were, lol. Anyway, have a great 2014. When the kids are a bit bigger I will be back in the circles, time I gave back to Slackware. Sam
Hi Sam
Good to see a Slacker who is into audio equipment servicing & repairs. Perhaps the Studioware guys should check you out (and vice versa)…
Cheers, Eric
How about infinality fonts patch for Slackware 14.1?
I am not interested in those Infinality font patches since they are mostly targeting MS Windows font usage. If you use the default Slackware open source fonts you should not have a need for these patches IMHO.
Eric
Actually you can target Linux too, or even Mac OS X fonts (bash /etc/fonts/infinality/infctl.sh setstyle). Anyway, keep up the good work.
Hi,
Are you using Slackware in virtualised environment (most precisely in Citrix XenServer)?
I’m trying to install XenServer Tools in Slackware 14.1, but unfortunately Slackware is not officially supported.
If you have any experiences with this, or if you know anybody running Slackware on XenServer, please let me know. (I already tried to ask for help on linuxquestions.org, but I got no response.)
Thanks,
kukukk
Hi kukukk
I only use Slackware in VirtualBox and in QEMU. I have never tried Citrix Xenserver.
I found your post: but I think there was no response because of the lack of detail. Without a copy/paste of the commands you used and the responses/errors you got, there is nothing sensible to answer.
Eric
Hi, and thanks for the quick reply.
I was almost sure that when you are creating/testing packages for different Slackware versions, you use them in a virtualised environment, and I hoped that it’s XenServer.
I did not add any other details because I don’t have any :}. Converting and installing the Red Hat package went without any error message, but after restarting the guest, XenServer still complained about missing XenServer Tools. Probably it’s because of the different internal structures of the systems (something is not started, is not in the right folder, etc). I don’t know, unfortunately I’m not a Linux expert. I hoped that somebody had already been through this and knows the steps required for installing XenServer Tools.
Anyway, Slackware is working fine as guest in Citrix XenServer, I just don’t have some information in the Management Console, like memory and cpu usage.
Alternatively, try installing that RPM directly in Slackware, not converting it to a Slackware package first. That way, the RPM pre- and post-install scripts will be executed.
Use the command “rpm --nodeps” to avoid getting a ton of missing dependency errors.
Eric
Hi Eric,
I hope you can help me to find out what is wrong with my installation of pipelight.
I have followed the instructions from.
when I run Pipelight diagnostic page, I get:
Pipelight diagnostic:
Please select the Plugin you want to test:
User agent (Javascript)
Checking for Windows user agent …okay
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0
Please note: not every user agent works on every site, try multiple ones if something doesn’t work!
Silverlight (as seen by a website)
Checking for Silverlight …failed
Pipelight
Checking for Pipelight …okay
Configuration of Pipelight
Checking if config exists …okay
/usr/share/pipelight/pipelight-silverlight5.1
Checking if pluginLoaderPath is set and exists …okay
/usr/share/pipelight/pluginloader.exe
Checking if winePath is set and exists …okay
/usr/libexec/wine-pipelight/bin/wine
Checking if wine exists …okay
/usr/libexec/wine-pipelight/bin/wine
Checking if winePrefix is set and exists …okay
/home/duo/.wine-pipelight/
(dllPath = c:\Program Files\Silverlight\5.1.20913.0\)
(dllName = npctrl.dll)
Checking if dependencyInstaller is set and exists …okay
/usr/share/pipelight/install-dependency
Checking if dependencies are defined …okay
Distribution
Welcome to \s \r (\l)
Content of file: /usr/share/pipelight/pipelight-silverlight5.1
#
# Enables a diagnostic mode which might be helpful to find an
# error in the configuration or installation.
# To get the error messages go to:
#
# Information for advanced users: The diagnostic page embeds the plugin
# of type “application/x-pipelight-error” to trigger the output of some
# debug information. The plugin cannot be triggered if everything is
# working, so this only affects users with a broken installation.
# The debug output will include paths on the local filesystem and the
# linux distribution used. If you don’t want to leak this information
# accidentially in case of a broken installation please either uninstall
# Pipelight or disable the diagnosticMode. [default: false]
#
diagnosticMode = true
#
# Path to the wine directory or the wine executable. When you
# specify a directory it should contain /bin/wine.
#
winePath = /usr/libexec/wine-pipelight
#
# Path to the wine prefix containing Silverlight
#
winePrefix = $HOME/.wine-pipelight/
#
# The wine architecture for the wine prefix containing Silverlight
#
wineArch = win32
#
# DLLs to overwrite in Wine
# (prevents Wine from asking for Gecko, Mono or winegstreamer)
#
wineDLLOverrides = mscoree,mshtml,winegstreamer,winemenubuilder.exe=
#
# Path to the plugin loader executable
# (Should be set correctly by the make script)
#
pluginLoaderPath = /usr/share/pipelight/pluginloader.exe
#
# Path to the runtime DLLs (libgcc_s_sjlj-1.dll, libspp-0.dll,
# libstdc++-6.dll). Only necessary when these DLLs are not in the same
# directory as the pluginloader executable.
#
gccRuntimeDlls =
#
# Path and name to the Silverlight directory
# You should prefer using regKey to make it easier to switch between
# different versions.
#
dllPath = c:\Program Files\Silverlight\5.1.20913.0\
dllName = npctrl.dll
#
# Name of the registry key at HKCU\Software\MozillaPlugins\ or
# HKLM\Software\MozillaPlugins\ where to search for the plugin path.
#
# You should use this option instead of dllPath/dllName in most cases
# since you do not need to alter dllPath on a program update.
#
# regKey = @Microsoft.com/NpCtrl,version=1.0
#
# fakeVersion allows to fake the version string of Silverlight
# Allows to get around some version checks done by some websites
# when using an old version of Silverlight.
#
# fakeVersion = 5.1.20913.0
#
# overwriteArg allows to overwrite/add initialization arguments
# passed by websites to Silverlight applications. You can
# use this option as often as you want to overwrite multiple
# parameters. The GPU acceleration state of Silverlight can be controlled
# by setting:
#
# enableGPUAcceleration=false # disable GPU acceleration
# comment out # let the application decide (default)
# enableGPUAcceleration=true # force GPU acceleration
#
# You may need to overwrite the minimum runtime version if
# you use an old Silverlight version as some websites set
# an artificial limit for the version number although it
# would work with older versions.
#
# overwriteArg = minRuntimeVersion=5.0.61118.0
# overwriteArg = enableGPUAcceleration=false
# overwriteArg = enableGPUAcceleration=true
#
#
# windowlessmode refers to a term of the Netscape Plugin API and
# defines a different mode of drawing and handling events.
# On some desktop enviroments you may have problems using the
# keyboard in windowless mode, on the other hand the drawing is
# more efficient when this mode is enabled. Just choose what works
# best for you. [default: false]
#
windowlessMode = false
#
# embed defines whether the Silverlight plugin should be shown
# inside the browser (true) or an external window (false).
# [default: true]
#
embed = true
#
# Path to the dependency installer script provided by the compholio
# package. (optional)
#
dependencyInstaller = /usr/share/pipelight/install-dependency
#
# Dependencies which should be installed for this plugin via the
# dependencyInstaller, can be used multiple times. (optional)
#
# Useful values for Silverlight are:
#
# -> Silverlight versions (you need to adjust dllPath):
# wine-silverlight5.1-installer
# wine-silverlight5.0-installer
# wine-silverlight4-installer
#
# -> optional depependencies (required by some streaming sites)
# wine-mpg2splt-installer
#
dependency = wine-silverlight5.1-installer
dependency = wine-mpg2splt-installer
dependency = wine-wininet-installer
#
# Doesn’t show any dialogs which require manual confirmation during
# the installation process, like EULA or DRM dialogs.
# [default: true]
#
quietInstallation = true
#
# In order to support browsers without NPAPI timer support
# (like Midori) we’ve implemented a fallback to
# NPN_PluginThreadAsyncCall. In the default configuration
# a timer based approach is preferred over async calls and the
# plugin decides by itself which method to use depending on the
# browser capabilities. Setting the following option to true
# forces the plugin to use async calls. This might be mainly
# useful for testing the difference between both event handling
# approaches. [default: false]
#
# eventAsyncCall = true
#
# The opera browser claims to provide timer functions, but they
# don’t seem to work properly. When the opera detection is
# enabled Pipelight will switch to eventAsyncCall automatically
# based on the user agent string. [default: true]
#
operaDetection = true
#
# Minimal JavaScript user agent switcher. If your page doesn’t check
# the user agent before loading a Silverlight instance, you can use
# this trick to overwrite the useragent or execute any other Java-
# Script you want. You can use this command multiple times.
# Uncomment the following 4 lines for FF15 spoofing.
#
# executejavascript = var __originalNavigator = navigator;
# executejavascript = navigator = new Object();
# executejavascript = navigator.__proto__ = __originalNavigator;
# executejavascript = navigator.__defineGetter__(‘userAgent’, function () { return ‘Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120427 Firefox/15.0a1’; });
#
# We are currently implementing hardware acceleration support, which
# can cause problems on some systems as especially specific open source
# drivers render only garbage when hardware acceleration is enabled.
# To prevent breaking any working solutions, we are currently
# implementing a whitelist system, which will enable hardware
# acceleration by default if the specified shell scripts returns 0.
# Otherwise we will disable it. You can still use
#
# overwriteArg = enableGPUAcceleration=false/true
#
# to overwrite the check results. If you really want to skip this test
# you can use: silverlightGraphicDriverCheck = /bin/true
#
silverlightGraphicDriverCheck = /usr/share/pipelight/hw-accel-default
#————————- EXPERIMENTAL ————————-
# Watch out: The following section contains highly experimental
# stuff! These functions are likely not working properly yet and
# might be removed at any time.
#
# Silverlight uses a lot of timer stuff do to the window redrawing
# . In order to speed this up a bit the following switch enables
# some API hooks to do most of timer stuff in user mode (without
# having to call wine-server each time). It is still unclear
# if this option has any significant effect on the performance.
# [default: false]
#
# experimental-userModeTimer = true
#
# In order to make it possible to let a window stay open in fullscreen, even
# if the user clicks somewhere else, it is necessary to install a window class
# hook. With some plugins this could lead to other problems! [default: false]
experimental-windowClassHook = true
#
# A sandbox is a method to isolate an untrusted program from the rest of
# the system to prevent damage in case of a virus, program errors or
# similar issues. We’ve been developing the ability to use a (self-created)
# sandbox, but this feature still has to be considered experimental.
# The feature will only be used when the sandbox path exists.
#
sandboxPath = /usr/share/pipelight/sandbox
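The whitelist check referenced by silverlightGraphicDriverCheck in the config above is just a shell script whose exit status decides whether GPU acceleration is enabled. The following is a hypothetical minimal sketch of such a check, not the real /usr/share/pipelight/hw-accel-default script (which does more elaborate driver detection); the glxinfo-based test is an assumption for illustration:

```shell
#!/bin/sh
# Hypothetical whitelist check: exit 0 to enable GPU acceleration,
# non-zero to disable it (mirrors the pipelight config convention).
check_accel() {
    # Succeeds if the glxinfo output passed in reports direct rendering.
    printf '%s\n' "$1" | grep -qi "direct rendering: yes"
}

# glxinfo may be absent or fail on headless systems; that counts as "disable".
if check_accel "$(glxinfo 2>/dev/null)"; then
    echo "enableGPUAcceleration=true"
else
    echo "enableGPUAcceleration=false"
fi
```

As the config notes, overwriteArg = enableGPUAcceleration=false/true can still override whatever such a script decides.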
On the terminal window running firefox I get:
[PIPELIGHT:LIN:unknown] attached to process.
[PIPELIGHT:LIN:unknown] checking environment variable PIPELIGHT_SILVERLIGHT5_1_CONFIG.
[PIPELIGHT:LIN:unknown] searching for config file pipelight-silverlight5.1.
[PIPELIGHT:LIN:unknown] trying to load config file from '/home/duo/.config/pipelight-silverlight5.1'.
[PIPELIGHT:LIN:unknown] trying to load config file from '/etc/pipelight-silverlight5.1'.
[PIPELIGHT:LIN:unknown] trying to load config file from '/usr/share/pipelight/pipelight-silverlight5.1'.
[PIPELIGHT:LIN:silverlight5.1] GPU driver check - Your driver is supported, hardware acceleration enabled.
[PIPELIGHT:LIN:silverlight5.1] using wine prefix directory /home/duo/.wine-pipelight/.
[PIPELIGHT:LIN:silverlight5.1] checking plugin installation – this might take some time.
[PIPELIGHT:LIN:silverlight5.1] basicplugin.c:373:checkPluginInstallation(): error in execvp command - probably dependencyInstaller/sandbox not found or missing execute permission.
[PIPELIGHT:LIN:silverlight5.1] basicplugin.c:383:checkPluginInstallation(): Plugin installer did not run correctly (exitcode = 1).
[PIPELIGHT:LIN:silverlight5.1] basicplugin.c:142:attach(): plugin not correctly installed - aborting
Same problem happens when I change silverlight plugin version.
Cheers, Duodecimo.
Hi Eric,
Oops, I forgot to mention on the previous message that I use Slackware 14.0 64 bits multilib.
Cheers, Duodecimo.
Hi Duodecimo
Did you execute the command (under your own user account):
pipelight-plugin --enable silverlight
Also, you should check the enabled plugins by running:
pipelight-plugin --list-enabled
Do you see Silverlight mentioned there?
The very first time that the pipelight plugin loads you should see a Wine dialog mentioning that Silverlight is being downloaded and installed.
Eric
Hi Eric,
yes, the pipelight-plugin command runs ok, and I checked in my user home that .mozilla/plugin gets the correct link to the silverlight library.
As for the wine dialog mentioning that silverlight is being downloaded: well, I remember it showing up the first time I installed pipelight. As I had problems, I tried reinstalling all over again, since I found out I had two versions of the Alsa package: the converted one that I installed following your pipelight page instructions, and a previous one (an alien version, maybe from slackpkg+ with multilib settings). In fact I thought about cleaning out stuff in order to have wine download silverlight again to see if that fixes my problem, but I didn't know what I should erase.
thanks, Duodecimo.
Hi Eric,
Are you aware that your avidemux 2.6.x package creates a binary named 'avidemux3' but the avidemux.desktop entry still calls 'avidemux2'? It's not a big thing, but just in case you didn't realize it.
And about your RSS: it's great, I've subscribed and I use lots of your packages. Have you thought about adding a link to the specific package in each RSS entry instead of linking to the whole repository? Just an idea.
Anyway, thanks for your work and happy New Year!
Hi, yes I am aware of that bug in avidemux which I did not spot at first (someone else pointed it out in another comment). I have a fixed SlackBuild but that won’t be used until there is a new version of avidemux probably.
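Until a fixed package appears, the workaround for the mismatch mentioned above is a one-line sed over the desktop entry. The sketch below demonstrates the substitution on a sample copy rather than the installed file (which would normally live under /usr/share/applications; the sample path and Exec line here are illustrative):

```shell
# Demonstrate fixing the Exec= line on a sample copy of the desktop entry.
# On a real system you would run the sed command against
# /usr/share/applications/avidemux.desktop (as root) instead.
sample=/tmp/avidemux.desktop.sample
printf 'Exec=avidemux2 %%F\n' > "$sample"
sed -i 's/^Exec=avidemux2/Exec=avidemux3/' "$sample"
cat "$sample"   # prints: Exec=avidemux3 %F
```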
I’ll give your RSS request some consideration.
Eric
Eric,
I have been spending the past couple of days recovering from something (that I probably did!) that erased the partition table from sda on my desktop machine. (At least it didn't get my data drive.) Anyway, when reloading calibre & all of its deps, I noticed that you have a "python-imaging" package & Slackware-current already has "pil". Both seem to package the same libraries. I did not install your package, just to see what would happen. Calibre starts & performs simple tasks properly. Will I miss out on something more obscure by not having your package? If I do install your package, should I upgrade Slackware's package or put them in side by side?
Regards,
Bill
Yeah Slackware’s pil and my python-imaging packages are the same. I should remove it from the external dependency list.
Eric
Hi Eric,
i have installed the current slacky 14.1 and build a handbrake package with your script.
With the "Video Encoder" = x264 I get the error message "Segmentation fault". The other encoders work.
I read something about "Miscompilation with gcc 4.8".
Do you have an idea for a patch/workaround?
Thanks, Björn
What if you use my package for handbrake instead of compiling it yourself? I can create H.264 videos on Slackware64-current without any segfault.
And what is “the current slacky”? The name of the distro is Slackware.
Eric
I compiled it myself with your build files and got the error.
Now I found your package and have no problems with H.264. Thanks, Björn
Are you going to upload the 64 bit version of fbreader for -current? The folder is empty on both slackware.com & taper.alienbase.nl.
Regards,
Bill
Hi Bill
The package did not get uploaded for unknown reasons, but I will set that right tonight. Thanks for the notification.
Eric
Hi,
Just a small update regarding XenServer Tools on Slackware. Finally I managed to get it working. I had to make the following changes:
– recompile the kernel with Xen guest support enabled in Linux guest support (it was a bit tricky to add it to lilo, because my device changed from /dev/sda to /dev/xvda)
– modify the distribution detection script in XenServer Tools to support detecting Slackware
– modify the guest parameters updater script, because “ifconfig” seems to have different output format on Slackware.
A feature request for Slackware: Xen guest support enabled by default :}
Best regards,
kukukk
Hi Eric,
I have been a Brazilian user of Slackware for a long time.
These days I am trying to figure out a way to start a diskless Slackware 14.1 client from PXE + DHCP + NFS, without success. I can do this using a specific old 2.6 kernel, but in Slackware 14.1 I can't figure out a way.
My PXE config sounds like:
DEFAULT pxelinux.cfg/vmlinuz-2.6 ip=dhcp root=/dev/nfs nfsroot=192.168.0.80:/tftpboot/nova-distro vga=791 init=/etc/rc.d/rc.S quiet rw
So if I change the old specific kernel (vmlinuz-2.6) to the new Slackware 14.1 kernel (vmlinuz-huge-3.10.17) on the PXE server, clients can't start due to a kernel panic.
Will I need an initrd in Slackware 14.1?
Using the old "vmlinuz-2.6" I don't need any extra ramdisk, just the kernel.
Can you help me solve this?
Thanks for your attention and sorry for my bad English.
Hey Eric,
Looks like the slackware-current and slackware64-current directories on taper aren't being updated.
Hi JKWood
Yes, a lock file was left (perhaps after a crashed mirror action) and it prevented further mirror actions.
I deleted the lockfile and the mirrors are syncing again.
Thanks for spotting and reporting!
Eric
Hello Eric,
is there a reason why your KDE packages don’t come with Kolab libraries anymore? I’ve tried and created build-scripts based on SlackBuilds-templates and your last build-scripts from 4.8 (or was it 4.9?) and got Kolab support for Kontact after compiling your KDE 4.12.3 packages. Additional dependencies are xerces-c (available on slackbuilds.org) and xsd (just created my own slackbuilds-script).
If you are interested in adding Kolab support again I could help if needed, otherwise I would simply submit my scripts to SlackBuilds. Personally I think it would be better if KDE would come with all needed dependencies, though.
Mike
Hi Mike
I wrote about how to re-add Kolab support here: which was well before Slackware adopted KDE 4.10. At that time, I discussed it with Patrick and we decided not to add the Kolab support packages to the distro.
You can still find the sources for libkolab and libkolabxml here: . A package for xerces-c is in my mail repository:
Unless new dependencies have been added in the meantime, these three packages should allow you to add Kolab support by recompiling kdepim-runtime.
Eric
Hello! In multilib, I noticed that two packages have to be updated! openssl-1.0.1f and openssl-solibs-1.0.1f are vulnerable to Heartbleed. Many thanks for your excellent work.
Best regards,raffaele
Hi raffaele
Yes that was on the agenda for after work. By now, the updates should be in the multilib repository.
Eric
I would like Slackware to have the requirements for Inkscape by default.
They are too many to install 🙂
I know it ain’t your fault, but there is an error in LibreOffice Impress that prevents the slides from showing unless hardware acceleration is turned off. It was quite alarming; I installed Slackware on another machine and found the same problem with the latest update.
Appreciate all your work.
I am going to build a new machine and upgrade to the 64 bit Slackware. I suppose I should go with 14.1?
Ah, I live in Beijing so Chinese language support is useful for me. I got SCIM running about halfway- not very acceptable.
I fiddled with ibus for a while some time back and finally couldn't make it work. I got fcitx going for a while but it was very unstable.
Any further thoughts? I am not going to try another Linux version. I am committed to being a slacker.
Hi Regnad
Yes, the issue with Impress has been known for some time ( for instance), but nobody has fixed it yet unfortunately. It takes you by surprise.
Eric
Regnad, what are your issues with SCIM? I did not play around with ibus yet, so I can not talk about how (un)stable it is.
Eric
scim doesn't seem to work with some software like firefox. haven't tried it with chromium though.
When I used scim it only seemed to be able to input Chinese pinyin for qt4 applications. I then tried fcitx but it wasn't much better because it was erratic. I finally did a clean install to start over.
I am going to tear into the chinese thing full bore when I get time. Will probably go to Slackware 64 bit first though because I am wanting to build a i7 machine in a small portable itx size box for a project I am working on.
Any advice on Chinese input would be appreciated.
I installed the new KDE update, but now find that my audio does not work. Slackware finds the card and tests are ok, but audio applications don't find the audio device.
Hi Eric. First of all, thanks for your *great* jobs for Slackware.
I'm running -current (64bit) with your multilib packages and the latest kde-4.13.0. In KDE I've found that "baloo" (the new semantic-indexing tool) eats lots of resources and lots of disk space (for example, before I killed it, 7.9G of my homedir were filled by its stuff).
I removepkg'ed it, but then no Dolphin, so I reinstalled the baloo package and "chmod -x"-ed its binaries.
Is there a chance to have Dolphin not linked to the baloo libraries?
Ah: my machine has an AMD 8-core, 32G of RAM and 5 hard disks (my /home is on a RAID-1 of two 1G disks).
Thankyou again for your works…
HP
Hi Hubert,
Baloo libraries will remain linked to KDE binaries, but you can disable the indexing process in two ways: by removing the binaries "baloo_file_extractor" and "baloo_file_cleaner".
OR, add your homedirectory to the list of excluded directories for indexing. If Baloo finds that you have excluded your $HOME, then it will disable itself. In “System Settings -> Desktop Search” add your home to the blacklist.
If you do not want the index-based search menu in dolphin, you can do the following:
$ cp /usr/share/autostart/baloo_file.desktop ~/.kde4/share/autostart/
$ echo "Hidden=True" >> ~/.kde4/share/autostart/baloo_file.desktop
There was a heated discussion on the KDE packagers mailing list about the bright idea of the developer, *not* to add a “disable indexing” button in the System Settings. I guess such a button will appear at some point in the future.
Eric
Hi Regnad
The updates to KDE should not have an influence on your audio… is there something else you have changed or updated recently?
Eric
Regnad, I would have to check using SCIM with pinyin input methods, to see how it affects Firefox. It used to work in the past, I have not paid attention to it recently.
Eric
TY alienbob…
Putting $HOME in the blacklist didn't have any effect on my machine, I don't know why.
And there is a bug (as the developer writes in his blog) regarding Maildir directories (I have such a dir, of course…) that can lead to an indexing loop.
Anyway, "chmod -x" on the binaries is good enough for me, and I hope the developers will add that damn button to stop their crappy software ASAP, and that they will stop thinking like Gnome's developers do.
Have a funny WE
😉
HP
Hi Hubert
I very much agree with your observation "stop thinking like Gnome's developers do". The lack of a "disable indexing" button, because the developer wanted to decide for the user that indexing is a "good thing" and therefore must always be enabled, strikes me as a worrisome example of how KDE software should _not_ work.
While the GNOME paradigm is "dumb down the interface, let the developer / UI designer make the decisions for the user", I have always thought that the KDE philosophy was "if it can be tweaked, show a tweak button in the UI and leave the choice to the user".
Eric
For anyone else experiencing failures to log in to non-KDE sessions with kde-4.12.5, there's a patch here. It works for me.
Big thanks to alienbob for the latest kde releases amongst many other things.
Hi Mickski56,
Well, the patch I used is but it is the same issue that it fixes (the KDE patch is better than the Gentoo patch).
I was already compiling a new kde-workspace package after this patch was announced on the kde-packagers mailing list, and I am taking the opportunity to fulfill a request from Willy Sudiarto Raharjo to move the KDE session files to /usr/share/xsessions/, where XFCE installs its own session definition file already, so that other login managers can see these desktop session choices too.
Thanks for being alert.
Eric
hi – i am a longtime slack user. i was running 14.0 and using the ftdi_sio to talk to a printer and it was working fine, then i upgraded to 14.1 and the ftdi_sio now goes into an infinite connect/disconnect loop on the device. did 14.1 add something that is competing with ftdi_sio for the device? i can’t figure it out. the ftdi_sio source code hasn’t changed…
Hi eric spiering,
I have no idea – I do not use printers myself, perhaps it is wiser to open a thread on and get a wider audience for your driver issue…
Eric
hello again, i have had this posted in LQ for several days now, but nobody has replied to it. i am thinking there is some configuration change in 14.1 that is now fighting for the ttyUSB0 connection, but i don’t know where to look to find it. do you see anything in these logs that might identify the culprit? This device connects fine to putty, so the device is good, but i need to use my minicom!
Hi eric
If this is a Slackware-specific issue, then you should request that the moderators move your post to the Slackware forum () where there will be more knowledgeable people than in the generic forums.
The first line I read in your post, “I have slack 14.1, kernel 3.14.3” means that you are not running a stock Slackware 14.1 (which contains kernel 3.10.17 instead).
Eric
Hi Eric,
I just install your new chromium package and I have issues with the keyboard.
More details: I'm using a QWERTY keyboard with an IT layout and it's impossible to type accented letters:
à is ‘
è is [
ì is =
ò is ;
ù is nothing.
It's a chromium issue, because typing works fine everywhere else in the system; I tested this on two 64bit Slackware machines, one with kernel 3.10.17 and one with 3.14.1.
With the previous version of chromium all was good.
What would you suggest I do?
Hi Alberto
Are you perhaps using SCIM or Ibus as text input methods? There is a bug in Chromium 35 apparently with XIM, as found here:
Eric
Nope.
But I found a similar issue (probably the same, but explained better than mine), and setting the LANG variable before starting chromium seems to help:
$ LANG=it_IT.UTF-8 chromium
In other words, the browser keyboard layout is "locked" to en_US.
Sorry:
Upgraded just now: no problems at all with "italian" letters (or keyboard).
I'm on 64-current with the entire Alien's stuff (multilib, kde, etc. etc.).
I have:
export LOCALE=it_IT.UTF-8
export LANG=it_IT.UTF-8
in /etc/profile.d/lang.sh
italian keyboard in Kde and:
Option "XkbLayout" "it"
in the InputDevice section of my /etc/X11/xorg.conf
HP
It doesn’t work for me.
Alberto: what family of fonts are set in chromium?
I guess standard ones. Anyway, after rebooting everything works fine. Thanks for your suggestion =)
Sadly there’s another problem: after updating, chromium doesn’t see java.
Worst release ever =(
P.S. In firefox everything works fine.
Hi Eric
Tried updating weekly version of calibre, but according to slackpkg+ there are no updates. Checked around various mirror sites and it seems that only the master has been updated.
Hi Alberto
Starting with chrome and chromium 35, Google has removed support for NPAPI plugins (Netscape Plugin API i.e. the mozilla browser compatible plugins). This means that things like the icedtea-web Java plugin stopped being supported, but also the Pipelight plugin and several others that do not “talk” Google’s own PPAPI (Pepper Plugin API).
It sucks.
Eric
Hi Phil
If everybody mirrors from my "taper" mirror then nobody will have gotten the new packages. It looks like the mirror cron job had become stuck.
It’s been repaired now.
Eric
Hi Eric,
thanks for your reply.
I don’t use openJDK, but Oracle’s official jdk (repackaged with the slackbuild included in Slackware).
Does it make any difference?
Cheers Eric
My mirror now up to date 🙂
Hi Alberto
The closed-source Oracle Java browser plugin uses the same NPAPI protocol as the opensource icedtea-web browser plugin. Both are incompatible with Chrome and Chromium 35 and higher.
Eric
I see =/
The worst part is that also google talk plugin is blocked now.
Alberto, the newest version of the HangoutPlugin should be a PPAPI version and work in Chrome/Chromium 35.
Eric
I tried with the last one (version 5.4.1.0 – according to what the SBo slackbuild extracted) and it is not visible among plugins =(
If you succeed in making it work, let me know =)
Thanks again!
Hi Alberto
The google-talkplugin 5.4.1.0 installs both the NPAPI and the PPAPI libraries. It takes a small change to make the PPAPI library available in chromium if you are using the SBo script to create a package.
Below is the patch I applied to the google-talkplugin.SlackBuild :
Eric
# -----8<----------------------------------
26c26
< VERSION=${VERSION:-4.9.1.0}
---
> VERSION=${VERSION:-5.4.1.0}
37a38
>
42a44
> # Just in case:
65,66c67,75
< chmod 0755 $PKG    # Put this back.
< rm -rf etc/        # The cron job is debian/ubuntu only.
---
> # Put this back.
> chmod 0755 $PKG
>
> # change /usr/lib/chromium-browser to /usr/lib/chromium
> mv ${PKG}/usr/lib/chromium-browser ${PKG}/usr/lib/chromium
>
> # Remove cron update script:
> rm -rf ${PKG}/opt/google/talkplugin/cron
> rm -rf ${PKG}/etc
# -----8<----------------------------------
It works!
Thanks a lot, Eric.
Hi Eric,
since Chromium 35 doesn’t support NPAPI plugins any more, could you make available again your last Chromium 34 package?
Thanks a lot!!
I get an error when I try to open a book on the command line with fbreader 0.99.4.
Gentoo had the following patch that works for me:
--- fbreader-0.99.4-pristine/zlibrary/ui/src/qt4/filesystem/ZLQtFSManager.cpp
+++ fbreader-0.99.4/zlibrary/ui/src/qt4/filesystem/ZLQtFSManager.cpp
@@ -49,3 +49,4 @@
 path = replacement + path.substr(1);
- }
+ }
+ ZLUnixFSManager::normalizeRealPath(path);
Hi Brad
Ah, weird! I see the same crash, and indeed it only happens when you try to open a book on the command line.
I am going to look at that patch, thanks.
Eric
Hi JesusM
Chromium 35 is a security release, so it is not advisable to keep running 34.
If you need Java or Netflix, then Firefox will be your only option.
If you want to know how hard it is to switch pipelight to a PPAPI plugin because of the sandboxing of Chrome plugins, read
Eric
Hi Eric! I’d like to thank you for your build of Chromium 35. It feels much lighter and faster than the previous versions.
However, there is a problem and I’d like to troubleshoot it in order to know whether the issue lies in the build, or the software itself, or my own configuration.
My keyboard layout is US International, which has dead keys. Now, starting from Chromium 35 I cannot produce some characters with dead keys, but I can produce others. Among the characters that I can write are: á é ñ, and among the characters that I cannot produce are: « » ¿ ç. Some of them (like ¿) are essential to write proper Spanish.
Thus: Is this due to some ./configure parameter? Or is it because of the software itself (I understand they're using a new toolkit)?
My own configuration may also be the culprit, but I doubt it because I had no problems until Chromium 34.
Thanks again,
Eduardo
Some comments:
1. I got SCIM to work in chinese and it works very well indeed and I have a fully functional bilingual (for me) system.
2. The latest version of Libre Office is quite good and I am transitioning completely away from microsoft office, except for shared edits with stored changes (just because I don't feel completely comfortable with this feature outside microsoft office).
I still have to disable hardware acceleration manually, otherwise Libre Impress hangs on slide shows.
3. I recently had a horrid experience adding a 3rd hard drive to my machine, which had to go into a SATA slot number smaller than the boot drive. Finally I got it going using UUID and mkinitrd and lilo.conf editing. This really was a pain and took a lot of time to figure out, and I almost despaired of adding the 3rd hdd. The documentation on this was not so clear and I had to dig. Others might not be so determined as me.
Just to notify you of the new release of mirall
;-)))
Thanks in advance!!!
Hi Eric
Please, I need chromium-33.0.1750.152-i486.txz. Do you have this package? Please post a link!
Hi Eric, I noticed you upgraded chromium to 36. I use Slack64-current multilib. I'm thinking about switching to firefox, because since the latest X.org upgrade in current, chromium in Slack64-current began to be really sluggish when typing text. This became apparent in chromium 35 (after the X.org upgrade) and continued in chromium 36. To this, add the known issue with dead keys and foreign languages.
I'll continue to upgrade chromium as you provide releases, but for now it will become my second browser, not the default. The typing issue makes it very uncomfortable to use.
Thanks again for all your effort. I hope this gets solved soon.
Regards,
Eduardo
Hi Eric,
as always great work with the new chromium release.
I’d like to report a bug that wasn’t there in previous releases: it is not possible to set it as default browser.
I don’t know if it is due to the build process or to chromium itself.
Hi Alberto,
Works here with Chromium 36.0.1985.125 and running Slackware (almost-)current and KDE 4.13.1.
Note that setting it as the default browser changes the xdg-open behaviour.
How current does -current need to be? I am running multilib -current but haven't updated to the most recent -current. I notice that you are also not-quite-current. The latest KDE and chromium are up and working ok. I find some problems with KDE occasionally (kwin blows up and gives an error message, but I can keep working) and LibreCalc does weird stuff sometimes, but it's all tolerable.
Actually I am a bit leery of the latest -current with the display problems.
Hi Eric, I'm happy to say that Chromium behaved itself again after the latest X.org update (I use Intel chips). The issue with international characters continues, though. Thanks for everything!
Hi,
Thank you for providing this site and your slackware packages.
My apologies if this should be posted to the LibreOffice dev group. I searched for a ticket on this but couldn’t find one.
I have been using LibreOffice from your slackbuilds (slackware64-14.1) and have had no issues until the 4.3.0 release. Just recently, I tried saving in MS *.xls and *.xlsx formats and received an error from LibreOffice stating that saving failed. The export works with OpenOffice 4 and previous versions of LibreOffice.
Just wondered if you or anyone else has had this issue.
Cheers,
Fred
i regularly save in those formats (.xls, .xlsx) from libreoffice calc and don't have any trouble.
i am running slackware64-current (linux 3.14.16) and the latest libre office. i had some issues with calc blowing up after font changes a couple of versions back, but it works fine now.
Hi Eric,
I just installed the last version of your pepperflash package (15.0.0.152) and I noticed a weird thing.
In the chrome://plugins page I see the following flash version (but the path is the Pepper one):
I’m using chromium 37.0.2062.94.
Is this normal?
There is nothing wrong with the way the plugin reports its version to web sites. Just check
The problem is in the /etc/default/chromium file, where I try to determine the flash version. The string in the binary changed, so it's reporting nothing now, and in chrome://plugins that translates to this weird 11..2.999.999 version.
Make sure the version determination string in /etc/default/chromium is changed to:
flashversion=$(strings $flashso|grep “LNX “|sed -e “s/.*LNX //”|sed -e “s/,/./g”)
It’s really only cosmetics, but I refreshed my package anyway to fix this.
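The grep/sed pipeline from /etc/default/chromium can be tried out on a sample string. In this sketch the input is simulated with a hard-coded value (on a real system it comes from running `strings` on the Pepper Flash .so); the sample version string is an assumption for demonstration:

```shell
# Simulate the version marker as it appears inside the Pepper Flash binary,
# then run the same grep/sed extraction used in /etc/default/chromium.
sample='LNX 15,0,0,152'
flashversion=$(printf '%s\n' "$sample" | grep "LNX " | sed -e "s/.*LNX //" | sed -e "s/,/./g")
echo "$flashversion"   # prints: 15.0.0.152
```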
Hi Eric, just wanted to ask if you could update your scripts regarding SimpleScreenRecorder. The changes are quite important imo.
Building from git works pretty well; the "simple-build-and-install" + "postinstall" scripts do all the magic (I just needed to disable pulseaudio).
Sorry I couldn't send you proper patches for such an easy task to save you the minutes; I lack knowledge on the topic.
Hi Jerónimo – what scripts should I update and why? And with what?
Eric
The slackbuild from here:
Because it does everything flawlessly but I get no executable file (notice that it downloads the latest git master when it checks the current date). I can't find the program itself anywhere.
I guess with the "simple-build-and-install" script (diff).
Sorry if it's me and I'm doing something wrong.
hi Eric, three (four?) years ago I downloaded a package of yours — Shisen — built without all the (KDE) cruft. I used to love playing it and, having a new 2nd-hand computer, looked to download the game again. alas, cannot find it onsite, search returned no useful results. can you help? thank you.
Hi jr
I never had a “shisen” package separately in my repository. Shisen is part of KDE, can not be shipped separately.
Perhaps you are thinking about another Mahjongg game? In my repository I have xmahjongg (quite old, just a 32-bit Slackware 12.1 package).
hi Eric, thanks. I played kShisen a few days ago (love these tile matching games) and must have got mixed up. I'd appreciate a URL for the mahjongg game, it's the one with the flowery(?) background image, right? thank you in advance.
regards, jr.
Hi Eric,
a few weeks ago you talked about the 2.x release of Calibre and the doubts you had about embedding the Qt5 libraries or keeping them separate. In the meanwhile your Calibre package would remain un-updated. Have you made a choice?
Thanks!
Hi JesusM
I am too busy with work (the kind that earns money) to spend a lot of time on Slackware at this moment. I have decided not to try and build a calibre 2 package for the moment because it needs research time which I do not have right now.
Hi Eric,
Thanks for your packages.
I’ve tried your Chromium package on my Slackware-14.0 (release: 37.0.2062.94).
But I noticed the omnibox doesn't work as usual: if I type there, for example, "hello world"… well, nothing happens!
I expected to be redirected to:
“w w w . g o o g l e . c o m/search?q=hello world”.
I’ve tried to change search engine and noticed that the matter is in “%s” variabile not expanded regularly.
For example, if I create a new search engine with the following string:
myengineaddress/search?q=%s
Then if I put something in the omnibox and press enter, the address that appears is:
"myengineaddress/search?q="
…without "%s" expanded.
Now, the default search engine string (related to the google engine) is made up of just variables:
“{google:baseURL}search?q=%s{google:originalQueryForSuggestion}{google:assistedQueryStats}{google:searchFieldtrialParameter}{google:bookmarkBarPinned}{google:searchClient}{google:sourceId}{google:instantExtendedEnabledParameter}{google:omniboxStartMarginParameter}ie={inputEncoding}”
So that's the answer to why nothing happened when I tried to search for something using the omnibox.
Can you confirm this behavior in this release?
Or could it perhaps be a local issue on my system?
Are you going to update the chromium package for slackware 14.0 too?
Thanks in advance! 🙂
Since I recently bought a new laptop I had to upgrade to Slackware current just to get it to boot. Since I did that I thought I’d try your kde upgrade to 4.14.2. Worked fine except for Okular which could no longer read mobipocket format. I had to back out to Slack 14.1 to get it working.
Just to not only complain here… I really really really find your multi-libraries handy. Thank you for that hard work and saving me from the aggravation. 🙂
I looked at your remark about Okular and I have found the culprit.
There is an issue with the order in which some packages are built.
I have fixed that order and the next KDE 4.14 release will have an Okular with mobipocket support.
In the meantime, you could install calibre or FBreader which both support .mobi.
Hi,
I installed the wireshark package on my Slackware, but after installation when I try to run it the following error occurs:
bash-4.2# wireshark
wireshark: error while loading shared libraries: libgnutls.so.26: cannot open shared object file: No such file or directory
Can you help me?
Thank you
You probably installed my wireshark package, which I built for Slackware 13.37, on Slackware 14.1. The gnutls package was updated since Slackware 13.37 and the package needs a recompilation.
You can download the wireshark "build" directory and run the wireshark.SlackBuild script. The package should be available in /tmp when the compilation finishes.
A request. In a future update of ffmpeg package would it be possible to add libfdk_aac to the encoders? And thank you very much for your hard work. Obrigado!!
Hi fabio, I can do that.
hi eric,
I've wanted to ask you this question for quite a long time. I have a Lenovo T400 and I recall you used to have the same one! … I have always run Slackware-current on it, and everything was quite fine; minor problems, but they could be easily fixed … until Pat updated "x/*" on July 15, which messed up almost everything. Despite all my efforts I couldn't get the graphics issue fixed … OpenGL has massive problems, the LCD is partially colored blue or black until I refresh the opened windows, VSync causes freezing, etc. … I just want to know if you have also had this problem on your T400? Any solution to it? (Of course, only if you still have your T400!) … And do you know any way to take Slackware-current back to the version before July 15? (Just to mention: I don't have any of those problems on Slackware 14.1.)
thank you in advance,
bests
Hi Inman
I still have that T400 and use it daily – it is my main workstation for work and home.
I am running Slackware64-current on it and have never experienced the problems you are describing. Could it be that you missed some new packages that you forgot to install, or else have not deleted some package that was removed in Slackware-current? A properly configured slackpkg will offer to install all missing packages when you run “slackpkg update ; slackpkg install-new”.
By the way, “slackware-current” does not have backup versions. You either have what’s available today, or you have to go back to Slackware 14.1 which is the most recent stable release available.
Thanks for the chromium src scripts. Just a small thing to say. I don’t use printing and so cups is not installed. The chromium build fails when it tries to do the gyp stuff and after a lot of digging I found that I needed to add -Duse_cups=0 and then all worked. I don’t enable NaCL (sodium chloride I use) and I don’t even know what advantage there is using NaCL (other than food tastes better with it, but might be cooks fault and not food’s fault).
When you use one of my scripts to compile a package, I only guarantee that it works on a full installation of Slackware, since that is how I build them myself.
NaCl is Google’s Native Client. See so you can decide for yourself if this is useful for you.
I understand the full-installation aspect of Slackware and it is a key thing for anyone using Slackware to understand, especially newcomers. I mentioned the cups config in case someone else might stumble into it. Thanks again.
(FWIW I don’t need the NaCL)
Just noticed the update on ffmpeg. Thank for the new decoders/encoders. I know you have a real life, and taking the time to include requests was very kind of you.
Hi Eric,
RE; ip-address proxying
This was posted from the Mistress of the house’s computer
and if this got through, your explanation of my ISP proxying traffic from my computer through somewhere else, is verified.
Will expand further off-line from your feedback-of-blog.
Regards,
mike.
Thanks so much for the great works that you have done. I have been redirected countless times from searches to your works and every time they work!
Thanks again!
Ed
Hi Eric,
First of all I have to say that you’re one of the main forces that help me to stay and develop using Slackware. I’m running current-64, multilib and a lot of alien packages absolutely updated. I start using qemu and vde networking from your first packages. My present issue is on vde. For some reason rc.vdenetwork (nat option) could not start dnsmasq and serve my qemu VMs. The error message is: “dnsmasq: failed to create listening socket for 10.111.111.254: address already in use”. Could you give me some hint ?
Cheers
Gilcio
Hi Gilcio
It sounds as if you already have a DHCP/DNS server running on your computer.
You’re right. I did a ps ax | grep dhcp and results: “6476 ? S 0:00 /sbin/dhcpcd -B -K -L -G -c /usr/libexec/nm-dhcp-client.action -h orion eth1” but I have no idea of what program could have started it?. I will look at it in more detail. As always thanks for the fast answer in a sunday afternoon.
How do I make these stop? I get emails but there’s no info on how to make them stop. If I subscribed I made a mistake.
Gilcio – the dhcpcd is the “DHCP client daemon” which manages your computer’s IP address. It is not a DHCP server.
If there is a DHCP server, it will be listening on port 67 and you should find its process with the command (as root): “netstat -panel | grep :67”
R.H. – all the emails you receive from this blog are because you checked a subscription checkbox below one or more articles. I do not send them myself.
The emails you are getting should contain a link that allows you to manage your subscriptions.
I figured out what’s happening. Some emails from your blog show the unsubscribe and some do not.
emails with subject [New comment] do not show the unsubscribe and those with [New post] do.
I can see the unsubscribe link always:
Eric, sorry for a late acknowledge of your answer. I did a fresh reinstall of slack64 current and all my problems disappear. Thanks for your time
Eric,
When are you planning to upload patched (CVE-2015-0235) glibc-multilib?
Will the updates on glibc (both stable and current) trigger an update in the multilib set? yes, it is a request. thanks Eric.
Yes, please update multilib glibc as soon as you can. It’s a pretty dangerous vulnerability. More details on this and other up-to-date computer vulnerability news:
Thanks for making multilib! I couldn’t use more than half of what’s out there, if there weren’t this easy multilib setup! 🙂
You guys don’t seem to realize that I do all this Slackware stuff in my spare time. If you start demanding that I must do things I will start demanding money in return.
The updates will arrive as soon as I have time at home to build them.
in my defense: 1 – You created the space for requests and 2 i asked politely, not demanded. 3 if i had money you wouldn’t have to demand. It would be my pleasure due to the great work you do for us all. sorry to bother man, but you are always so fast that we grew spoiled hehe.
Requests for new stuff are one thing. Requests to work faster are not acceptable to me. Yes, you were polite about it, fabio, and it was on the planning list anyway. My comment was more directed at Alex and MajorLunaC – I am well aware of glibc updates. This GHOST vulnerability is hard to exploit, actually there are no real-life exploits yet (apart from one for Exim which Slackware does not ship). You should try not to be so paranoid. And if you are, you know where the source code is.
Wow. So the words “please”, “as soon as you can”, and “Thanks” now mean “Work faster slave! Right now, damit!” in modern internet lingo? Let me interpret for those who think everyone is demanding instant results for a completely free repository made in spare time, and that everyone is ready to take a bite out of you. Maybe we use different dictionaries:
“please” = “I beg of you”. “I ask of you kindly”. Even “I seek your help”.
“as soon as you can” = “Whenever you are willing and able”. “I know you work hard, and you do it in your spare luxury time in which you could be having fun, so if you happen to find the time, anytime in the distant future”. “Whenever you see fit”.
“Thanks” = “I am in your debt”. “I really appreciate it”. “I don’t think I could have done it without you”.
**As for your comments:
“I am well aware of glibc updates”: First I’ve heard you mention it.
“it was on the planning list anyway”: I would love to see that list. If I had known you knew of it and it was set to be updated, I would never have posted except a “Thank You for such a quick fix! Nice Work! You work too hard! Take it easy!”
“actually there are no real-life exploits yet”: A vulnerability is a vulnerability, no matter how small it is. In my opinion, vulnerabilities get their own level of priority far above anything else. Disclosure of the vulnerability increases the chance that someone will try to exploit it 10 fold or more, which is why vulnerabilities are usually disclosed privately, quite some time before the public disclosure, to those who can and do fix them. Once the vulnerability is publicly popularized, new ways to exploit it are often explored.
I need to come up with something to say to those who consider me paranoid, like “Wait till you see what I got off your computer …”
now that i saw the guy before me had asked the same, i fell quite stupid for posting before reading everything. As for the vulnerability itself, my primary concern was about getting the applications that rely on multilib working again, since few mortals have the solid knowledge in programming to really exploit it. Thanks for the updates and as for me, will wait quietly next time.
MajorLunaC, and a good day to you too.
Actually, no one gets to see my TODO list. And if you would go looking at historical data of the multilib ChangeLog.txt you would notice that I am usually only hours or a day behind on Slackware proper. However, these were glibc updates for 13.37, 14.0 and 14.1, combined with the KDE 5 work which cost me six weeks of multiple hours per day. At some point there is a decision to make about the balance of hobby work (which this is) and tending to family. The multilib glibc updates would come; if you were doubting that, you are quite new to Slackware and this community.
As for vulnerabilities: they are real but overrated. A vulnerability must be exploitable to be a vulnerability. And exploitable does not mean that it is exploitable _on Slackware_
Nevertheless, if it bothers you (and I respect your point of view) there is always the option to go back to Slackware’s patched glibc – temporarily – until the time that I release new multilib versions. All you lose is the capability to run 32-bit programs. You gain peace of mind by knowing your system is patched against a vulnerability.
Finally: you do not get to say what I have to do, unless you are my employer. And when you do, even if accompanied with please and thank you – you can rub me the wrong way if I just came home from a bad day at work and don’t need more people on my back.
End of statement.
Quick suggestion for when you get the time (if you feel like doing it, of course). A bump on LXQT to the 0.9 version. And thanks for the 0.7 version. Excellent for “reviving” old machines with slackware.
Hi fabio
Building LxQt 0.9 is somewhere on my TODO. Their migration to Qt 5 and (some) KDE Frameworks made the update non-trivial and I wanted a stable KDE 5 before looking closer at LxQt.
First of all a heartfelt thanks for your sterling work producing reliable slackware packages 🙂
However, your latest vlc package (2.2.0) doesn’t handle HD (720p and above) as well as the older 2.1.5 version: video stutters or freezes.
I downgraded to your 2.1.4 package (in the slackware-14.0 repo) and everything was back to normal.
I know that building vlc packages is a PITA. It could also be that my slightly outdated hardware could cause the malfunctioning. Still, if you can find the time to look into it, it would be highly appreciated.
Hi KG
Start “vlc -vv” (or with one v, or with three, depending on the amount of debug info you need) and see if there are any hardware acceleration related errors when you play a video.
I have no problems with playback of HD video here.
Hi Eric and thanks for a swift reply!
Embarrassing: after a new upgrade, the videoclip that caused trouble yesterday ran OK…
vlc -vv creates a huge amount of info, and repeated entries like this:
“[b5487e98] avcodec decoder warning: More than 4 late frames, dropping frame
[h264 @ 0xb549cf40] Frame num gap 19 17”
Maybe it should be put down to hardware. I’m on an old netbook with a Celeron processor, and when playing the HD clip in question the CPU load is as high as 80%. No wonder if there’s a bit of stuttering.
So for now I’ll stick to the latest version 🙂
diff for new version of google-chrome.SlackBuild
77c77
ar p $CWD/google-chrome-${RELEASE}_current_${DEBARCH}.deb data.tar.lzma | lzma -d | tar xv || exit 1
Hi Bryan
The recent debian packages are using xz compression instead of lzma compression. The file is now called data.tar.xz.
The google-chrome.SlackBuild in slackware-current was updated for this in early March 2015.
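For reference, the new line in the script is just the xz analogue of the lzma one in the diff above. The snippet below shows that updated line as a comment and then demonstrates the same `ar p ... | xz -d | tar xv` pipeline on a throwaway archive standing in for a real .deb (everything below the comment is purely illustrative):

```shell
# The updated extraction line in google-chrome.SlackBuild looks like:
#   ar p $CWD/google-chrome-${RELEASE}_current_${DEBARCH}.deb data.tar.xz | xz -d | tar xv || exit 1
# Demonstrate the same pipeline on a fake .deb built on the spot:
set -e
mkdir -p xz_demo
( cd xz_demo \
  && echo "payload" > doc.txt \
  && tar cJf data.tar.xz doc.txt \
  && rm doc.txt \
  && ar r fake.deb data.tar.xz \
  && ar p fake.deb data.tar.xz | xz -d | tar xv )
```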
Hi
Mirror contain “deleted” package after huge update (20150421)
eg:
contain kernel source 4.14.33 and 4.18.11
same kde 4.10.5 and 4.14.x
same type of thing in the iso.
Good Afternoon,
Early this morning, my time, I went to download your latest -current iso and noticed it was 4.3 gigs in size.
Is that correct? I had thought the size of the iso had been reduced to somewhere over 2 gigs?
Thanks..
Just to inform, Libreoffice won’t start after the last batch upgrades on -current.
symbol lookup error: /usr/lib64/libreoffice/program/libi18nutil.so: undefined symbol
as well as other libs missing. Thanks for your work on the updates!
Hi fabio, I know. I have new (and working!) 64-bit LibreOffice packages ready and the 32-bit package is still compiling (will take the rest of the day while I am at work).
about kde themes not working, if someone got the same. Just delete .kde folder in your home. worked for me. sorry about the noise early..
It’s weird indeed Eric. Thanks for your help.
@alienbob Please check here is the chromium package listed twice in
Hi websafe. I do not see any issue on my side. I updated that bug report with a comment.
Hey Eric,
I just wanted to drop you a note that I am having problems accessing your slackbuild repository at.
*Edit
Sorry I guess that is all of slackware.com.
Hi Ed
Yes, there was an issue with the interaction between the slackware.com server and the Akamai content-serving front-end. We had to reboot the server in order to fix that. Should be OK now.
Yes, perfect. Thank you!
Ed
alienbob,
thank you for all your great work….
Hi again Eric,
What is the possibility of getting you to look at building VeraCrypt for Slackware-current?
Wow, thanks a lot!
I have installed it on -current/MATE and it works great! (Using GUI)
I have never experienced support like that which you give for Slackware. I am going to be making a donation to you and PV soon!
@ Alien Bob,
Many Thanks for the Veracrypt package!!!
Greatly appreciated.
I am writing to report a bug in konqueror 4.14.6 in slackware64 current. I go to my wordpress.com admin page, then to the “freshly pressed” page, and soon after the browser crashes.
Hi Darren
You should create a bug report in the KDE bug tracker instead. I do not solve application crashes.
Hello! The multilib openssl and openssl-solibs were not upgraded to correspond 64-bit versions.
Best regards and many thanks!
@Alex,
That is one of the easier things to do yourself. Download the 486 (actually, I think they are labeled 586) packages and follow the instructions here,
The information is about 1/2 way down the page.
@cwizardone, thank you for the link. In any case, alienbob already upgraded multilib.
It was under my eyes and I didn’t see it, thanks.
Thank you, Eric. I will do it.
Hi orbea
Your question has been addressed in yesterday’s -current update:
Please read the remarks about the reasons why you should not always update your aaa_elflibs if you are running slackware-current:
Thanks for the reply and helpful link, I have read it before and have been only updating aaa_elflibs during the bigger updates when Pat updates it.
The KWalletManager5 doesn’t show any of my wallets.
Hi fabio.
Slackware releases only get critical security updates in their /patches directory. A newer version of fluxbox for Slackware 14.1 is something you’ll have to compile yourself.
Hi luca
That will indeed be easy to add, and it also allows to compare openssl with gnutls if I add openssl only to the restricted build.
Good idea.
Hi Alex,
That was a fast fix!
I will check the next release to see if this patch has been applied or else I can apply it myself.
LO 5.0.3 should be released very soon according to.
Shame on me!!! 🙁 I really mistakenly used 32bit packages. Well then, thanks for your time and effort answering such foolish mistakes 🙂
KDE 5 is running fine here, thank you for this much of work!
solve the problem..
The only relevant message shown in that bit of text is “ERROR: libass not found using pkg-config”. This means something was not well, a lot earlier in the compilation process. Try finding out what happened when libass was being compiled.
Build ffmpeg from the Github repo; it builds flawlessly, and has the advantage that it is compatible with the GH repo for vlc, too, and it was the only way I could get the latter to build successfully.
Actually Alex, I will try to come up with the patch myself. Most of the failed stuff is a simple s/IDF_/InsertDeleteFlags::/g
I have a patch now, and I am compiling.
Hi Eric! It will be nice if you will also make a persistent Slackware Live edition.
People like the idea of having a stable and powerful live operating system stored on a USB stick.
Daniel, did you not read my article about my Live edition? Probably not, since you are posting this in the generic Feedback section.
Systemd will never be a requirement to run a computer. Slackware current recently replaced its old udev with eudev, so its seems like slackware will not have systemd in the near future if ever.
Chris, honestly.
I can not look 5 years into the future of Slackware. Make up your mind about what you _want_ to use and then stick with that decision.
Hi orbea,
Sure. I added the two to my script, and I wanted to do an update before the weekend anyway. So hopefully these two new compat32 packages will be available for download by the end of the day.
Fair enough, I thought I’d mention it regardless. 🙂.
Phantom, you should read better. That article is NOT about Slackware Linux.
Hi orbea
Thanks but no… I am busy enough with just my own stuff. People will have to build their own “compat32” versions of SBo packages if they need that – it is not difficult.
Hi allend,
I will probably need that when I compile vlc 2.2.2… whenever *that* comes out.
Eric,
thanks both for your answer and your huge slack work. I’ll post in LQ if I figure it out.
Svar, I may have sounded a bit harsh, which was not my intention. Hope you get your answer on LQ.
thank you for the response.
Hi Eric, in case you don’t already know, Copy.com will be discontinued on May 1:
Bad news.
Regards,
Eduardo
That’s too bad indeed Eduardo. I had not yet seen this EOL message. is down 🙁
Anze, does not sound wrong what you did. But again, what exact version of Slackware are you running? What medium did you use to install? The official Slackware 14.1 DVD ISO image?
hi bob. i am running slack 64bit 14.1 installed from official kde dvd downloaded from slackware website.
Hello.
Your last change log is time stamped about 12 or 13 hours ago, but none of the packages have hit the mirrors I’ve looked at.
Just FYI.
You’re right. Apparently I did not add the packages to the repository. Attribute that to the spell of flu I am under.
Sorry to hear that Eric.
Hope you get better really soon.
And, thanks for all the hard work you do!
You’re right. I’m sorry for this mistake.
Hi,
Just wondering if you had my e-mail to alien at slackware dot com about a possible mirror. I haven’t any error but don’t know how often you check it. (it’s from another e-mail address)
Hi Tonus, I think I received your email, and another too with a mirror proposal. Currently busy due to real-life situation but I _will_ respond. Thanks!
I wonder if you know about I just discovered it recently. It is a secure skype replacement.
With a qt5 dependency how hard would it be to add to slackware?
@alienbob, I am looking into nix pkg manager to see if that can be used but so far it has given me a headache. And yes ring.cx is a rare thing and is why I watched the fosdem talk..
hi eric, do u noticed the last chromium-dev-pepperflash-plugin-20.0.0.248-x86_64-1alien.txz causes noisy and distorted audio on chromium flash video players?
inman, some people do not visit creepy websites with flash video.
And if I visit pages with flash-based advertisements, they usually do not come with audio so I would not notice that something is wrong with the audio…
Even if it is bbc.co.uk? i mean where can you find unbiased news with html5 player? 🙁 😀
For unbiased news I read a news paper. Real paper.
Thanks for all of your hard work on Slackware. Are we going to see LibreSSL enter the main repository, if not replace OpenSSL outright? Also, will the Lumina desktop environment ever make its way into the repositories? Thanks, Ryan.
Hi ryan; no and no…
@alienbob
THANK YOU…to you and all those that make slackware what it is !
Keep up the great work !!
Hi blizzack – thanks for the kind words. Glad you like Slackware, be sure to advocate it among your friends 🙂
Hello Alien .. you are a legend;)
I would like to ask you how can I make a non-root user login on KDE 5_16.02
Thanks Francesco
peebee – the 32-bit PepperFlash in my repository was vulnerable to attack. That is why I removed it.
Interesting thought that the older 32-bit Chrome still downloads an up-to-date PepperFlash library from somewhere.
import re
from subprocess import Popen, PIPE

# Count the "Making install in ..." lines emitted by a remote make run,
# as a crude progress indicator for the build.
lookfor = re.compile(r'Making install in')
rhost = "somehost"
r_build_dir = "/path/to/build"  # build directory on the remote host (placeholder)
cmd = "ssh " + rhost + " \"cd " + r_build_dir + " ; make -j install\""
build = Popen(cmd, stdout=PIPE, shell=True, universal_newlines=True, bufsize=1024)
count = 0
while build.poll() is None:
    line = build.stdout.readline(1024).strip()
    if lookfor.match(line):
        count = count + 1
        print "%04d" % count
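A small follow-up sketch (my own addition, not part of the original answer): if the total number of sub-directories the build will descend into is known in advance, the running count can be rendered as a textual progress bar instead of a bare counter:

```python
def progress_bar(done, total, width=40):
    # Render a fixed-width textual progress bar, e.g. "[####----] 50%".
    filled = int(width * done / float(total))
    percent = 100 * done // total
    return '[' + '#' * filled + '-' * (width - filled) + '] ' + str(percent) + '%'

# e.g. call progress_bar(count, expected_total) inside the reading loop
```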
Using a Raspberry Pi with StepperBee
(also applies to StepperBee+)
The following details show how to control a
StepperBee using a program
written in Python running under the Raspbian operating system on a
Raspberry Pi model B single board Computer.
( If you are not familiar with basic Stepper Bee functionality, more details can be found here )
The Raspberry Pi has two standard USB sockets. One of them is usually dedicated to the keyboard (or keyboard and mouse where a small USB hub has been used). It is assumed the StepperBee is connected via a standard USB lead to one of these ports or to a free USB port on a hub if one is connected.
The following simple example of Python Code is all that is required to run stepper motor 1 forward by 50 steps with an interval of 100ms between steps. It will also turn on switching output 1. Have a look at this code and then see the explanation that follows for a more detailed description of each of the lines.
import usb.core
dev = usb.core.find(idVendor=0x04d8, idProduct=0x005d)
if dev is None:
raise ValueError('Device not found')
else:
try:
dev.detach_kernel_driver(0)
except:
pass
dev.set_configuration()
timeLSB = 100
timeMSB = 1
stepsLSB = 50
stepsMSB = 1
direction = 1
outputs = 0b00000001
data=[2, direction, timeLSB, timeMSB, stepsLSB, stepsMSB, outputs]
dev.write(2,data)
This uses one of the PyUSB functions to "find" the attached StepperBee. The way it finds the StepperBee is by checking all available USB ports to find the one with a device that has the unique identifier associated with the StepperBee. This identifier is made up of two parts: the Vendor ID and the Product ID. For the StepperBee these are the two hexadecimal numbers 0x04d8 and 0x005d respectively. If the device is found, the "dev" object is created for the StepperBee and can then be used for subsequent operations on the StepperBee. It is only necessary to use this statement once in your program, but obviously, it needs to be before any other use of the "dev" functions.
if dev is None:
    raise ValueError('Device not found')
It is always good programming practice to check that the StepperBee was actually found before going any further, which is what this statement does. When the StepperBee is first plugged into the USB port, the operating system tries to be helpful and associates one of its standard drivers with the board to deal with all operations. We don't want this, but with that driver "attached" to the StepperBee it won't allow any other direct operations. The dev.detach_kernel_driver(0) line "detaches" this driver from the StepperBee, allowing us to access it directly (the try/except block simply ignores the error raised when no driver is attached). Note that this only needs to be done the first time the program is run after connecting the StepperBee. The StepperBee simply has one configuration, which is its default. However, this still needs to be "set" as the active one using the dev.set_configuration() statement shown. This only needs to be done once in your program, and before any other code that communicates with the StepperBee.
timeLSB = 100
timeMSB = 1
stepsLSB = 50
stepsMSB = 1
direction = 1
outputs = 0b00000110
data=[2, direction, timeLSB, timeMSB, stepsLSB, stepsMSB, outputs]
dev.write(2,data)
The message sent to the StepperBee consists of a message type number followed by six 8-bit numbers that correspond to the required settings for the stepper motor. The message type number for commands for stepper motor 1 is simply '2'. This indicates to the StepperBee that it should use the following 6 numbers to control stepper motor 1. Each number has been given a named variable above but it could just as easily have been entered as simple numbers into the data array. For example the above is equivalent to .....
data=[2, 1, 100, 1, 50, 1, 0b00000001]
The "timeLSB and timeMSB" variables hold the time interval between stepper motor steps. i.e. this controls the speed of rotation. The time interval, expressed in milliseconds, is made up of these two 8 bit numbers, both in the range 1 to 128. The actual resulting time interval is simply (timeMSB x 128) + timeLSB.. Similarly, the stepsLSB and stepsMSB correspond to the number of steps to do. Again this is given by two n umbers in the range 1 to 128 with the resulting number being (stepsMSB x 128) + timeLSB.
The "direction" variable is either '0' or '1' which corresponds to "forward" or "reverse" respectively.
The "outputs" variable contains the desired settings for the three switching outputs associated with motor 1. It is a simple 8 bit number with the least significant bits corrsponding to the 3 outputs. The number shown (0b00000001) would turn on output 1 (outputs 2 and 3 off).
In the example shown the time interval between steps would be 100ms, the number of steps to run would be 50, the direction would be forward and only switching output '1' would be on.
dev.write(2, data)

The line immediately following the setup of the data array actually "writes" this data to the StepperBee across the USB interface. The number '2' shown is just the USB buffer name used by the StepperBee for incoming messages and remains constant for all messages sent to the board.
To stop a motor that is currently running, you just need to send a message with the steps and interval set to '0'.... i.e.
data=[2, 0, 0, 0, 0, 0, outputs]
dev.write(2, data)
Note that this will still set the outputs to the bit pattern supplied in the "outputs" variable. This offers a convenient way of manipulating the outputs without moving the stepper motor. i.e. just keep sending stop commands with your chosen outputs set.
The above set of code applies to running stepper motor 1. To run motor 2, the code is identical except that the message number is now '4'. The equivalent for motor 2 would be ....
data=[4, 1, 100, 1, 50, 1, 0b00000001]
dev.write(2, data)
In the above description it's worth noting that the first few lines may look slightly complicated but, once your program has "found" the StepperBee, detached the kernel driver (if necessary) and set the StepperBee configuration, the control of the StepperBee motors is simply a case of using the "dev.write" function (with the appropriate data in the data array) as often as you like:
# run motor1 forward by 1000 steps with 50ms
# between steps and turn on output 2
data=[2, 0, 51, 1, 105, 8, 0b00000010]
dev.write(2,data)

# run motor2 forward by 100 steps with 500ms
# between steps and turn on output 3
data=[4, 0, 117, 4, 101, 1, 0b00000100]
dev.write(2,data)

# run motor2 in reverse by 10 steps with
# 100ms between steps and turn all outputs off
data=[4, 1, 101, 1, 11, 1, 0b00000000]
dev.write(2,data)
Note: It is probably worth emphasizing that, in the above examples, it may look odd that the number used is always one more than you would expect. i.e. in the example directly above where it is required to run for 10 steps, the stepsLSB number is 11. This is due to the number range being used as 1 -128 (not 0 - 128). It is simply a quirk of the microcontroller subsystem and should just be accepted (as odd as it may seem). This also applies to the time interval numbers.
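That off-by-one convention is easy to get wrong by hand, so here is a small helper (my own sketch, not part of the original StepperBee documentation) that turns a step count or millisecond interval into the (LSB, MSB) byte pair:

```python
def encode(value):
    # Split a step count or millisecond interval into the (LSB, MSB)
    # byte pair the StepperBee expects. Both bytes are kept in the
    # 1-128 range, with the off-by-one quirk described above, so that
    #   value = (MSB - 1) * 128 + (LSB - 1)
    lsb = value % 128 + 1
    msb = value // 128 + 1
    return lsb, msb

# encode(10)   gives (11, 1)  - matching the "10 steps -> 11" note above
# encode(1000) gives (105, 8) - matching the 1000-step example earlier
```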
--------
Other Facilities
There are two other facilities available on the stepper bee which can be used when required. These are to "SetStepMode" (i.e. fullstep, wavestep or power off) and the "GetCurrentStatus" function which returns the current running state of both motors and the current state of the 5 digital inputs. These functions still rely on the initial setup of the board as described above having been completed successfully. These two functions will now be described in more detail...
GetCurrentStatus
data=[8]
dev.write(2, data)
These two lines of code send a message to the StepperBee in exactly the same way as the previous but this time the data is simply the number '8'. This tells the StepperBee to read its current status and make that available to be read across the USB interface by a subsequent command. The command that actually reads this data from the StepperBee is next....
status = dev.read(0x81, 6)
The status data, made ready by the previous code, is now read by this statement. It is very similar to the "write" statement shown earlier. The 0x81 is just a hexadecimal number and corresponds to the internal USB buffer (or endpoint) associated with USB reads. It is always set to 0x81 for StepperBee reads. The number '6' is the number of bytes of data to read from the StepperBee. The variable which we have called "status" will then hold the returned data. i.e. "status" is a Python array with individual elements holding the returned 6 bytes of data. These bytes of data will now be described.
status[0]
8 bit byte where bit0 and bit2 correspond to the current state of the two motors. Logic '1' indicates "running" and '0' indicates "stopped".
status[1] and status[2]
These hold the number of steps remaining to be executed by motor 1 in the form of least significant and most significant bytes of a two byte number (respectively) exactly as described earlier for stepsLSB and stepsMSB. i.e. steps remaining = status[1] + (status[2]-1) x 128
status[3] and status[4]
These hold the steps remaining for motor 2 (the format is exactly the same as described above)
status[5]
8 bit byte where bits 0,1,2,6 and 7 correspond to the current state of the 5 digital inputs respectively.
For example....
# read and display the current status data
data=[8]
dev.write(2,data)
status=dev.read(0x81, 6)
print status
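To make the returned bytes easier to work with, the layout just described can be unpacked into named fields. This helper is my own illustration, not part of the original StepperBee documentation:

```python
def decode_status(status):
    # Unpack the 6-byte status array returned by dev.read(0x81, 6)
    # into named fields, following the byte layout described above.
    return {
        'motor1_running': bool(status[0] & 0b001),             # bit 0
        'motor2_running': bool(status[0] & 0b100),             # bit 2
        'motor1_steps_left': status[1] + (status[2] - 1) * 128,
        'motor2_steps_left': status[3] + (status[4] - 1) * 128,
        # the 5 digital inputs sit on bits 0, 1, 2, 6 and 7 of status[5]
        'inputs': [bool(status[5] >> b & 1) for b in (0, 1, 2, 6, 7)],
    }
```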
SetStepMode
mode = 0b00000000
data=[16, mode]
dev.write(2,data)
These three lines of code send a message to the StepperBee with a message number of '16' which tells the StepperBee to use the "mode" data supplied. The mode data is a single 8 bit byte with individual bits corresponding to the setup required as shown below...
For example: set motor 1 to full step mode and motor 2 to power off:

mode = 0b00001001
data=[16, mode]
dev.write(2, data)
Hello,
First post out here. Found out about streamlit last week, and it has been a game changer in my workflow. Thank you for making it!
I’ve run into some trouble with caching. The following is an example of a case that I don’t fully understand. When I uncomment the line, the caching works as I expect it to. However, when it is commented, both functions seem to share the hash, which results in them being called every time I press run.
import streamlit as st

MY_HASH = {str: lambda _: None}

@st.cache(max_entries=1, hash_funcs=MY_HASH, suppress_st_warning=True)
def func1(bool_arg, str_arg):
    st.write(f'Ran func 1 - {str_arg}')
    return []

@st.cache(max_entries=1, hash_funcs=MY_HASH, suppress_st_warning=True)
def func2(bool_arg, str_arg, str_arg2):
    st.write(f'Ran func 2 - {str_arg}')
    # st.write(f'Ran func 2 - {str_arg2}')  # Uncomment this line to make the code work as intended
    return []

func1(False, '1')
func2(False, '2', '3')
st.button('run')
I was able to make it work by uncommenting the line, removing the custom hash function, or changing max_entries to 2. Is this expected behavior?
I’m working with streamlit 0.70.0 on python 3.7.9
Thank you,
Saurabh Parikh
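For what it's worth, the collision can be illustrated without Streamlit at all. The snippet below is a deliberately simplified model of a cache key built from custom-hashed arguments — it is not Streamlit's actual implementation — but it shows why a hash_funcs entry that maps every string to the same token (here None) erases the distinction between different string arguments, while an identity mapping keeps them apart:

```python
def make_key(func_name, args, hash_funcs):
    # Simplified model: replace each argument with its custom-hashed
    # form before building the cache key (NOT Streamlit's real code).
    hashed = tuple(hash_funcs.get(type(a), lambda x: x)(a) for a in args)
    return (func_name, hashed)

collapse = {str: lambda _: None}  # every str hashes to the same token
identity = {str: lambda s: s}     # strings keep their identity

# Different string args become indistinguishable under the collapsing hash...
assert make_key('func2', (False, '2', '3'), collapse) == \
       make_key('func2', (False, 'x', 'y'), collapse)
# ...but stay distinct when the hash preserves each string's value.
assert make_key('func2', (False, '2', '3'), identity) != \
       make_key('func2', (False, 'x', 'y'), identity)
```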
BGE: menu buttons and opening webpages
This tutorial will look at the method I use to create menu buttons in the BGE. We’ll cover how to change a button’s text when it’s being hovered over, how to change the game scene or exit from a button and how we can use the webbrowser module to open websites from within the BGE.
Button set up
Open up blender and delete everything in the default scene. Then add a text object. Now, add a plane, and move it on the z-axis to sit just above the text object. In edit mode, adjust the plane to fit around the text. Then in the physics properties set the plane to be invisible (make sure you’re in game engine mode). Next, add a game property called “url”, set it to string and enter the web address you want the button to lead to.
In the logic panel, add 2 mouse sensors to the plane, one set to mouse over called ‘Over’, another set to left click, called ‘Click’. Connect these to a python controller set to module mode and in the module box enter “menus.goToSite”. Then select the text object and then the plane and make the plane the parent of the text object. And that’s our button set up and ready to go.
Before we get stuck in with the python, we need to add a camera. So do that. Then move it a bit above the plane and text. In the camera properties panel set it to orthographic. Now, instead of moving the camera up and down the z-axis to fit the text in you can adjust the orthographic text. Using this mode will ensure that all objects/text in the menu will appear flat.
The python
In the text editor lets create a new text file and call it ‘menus.py’ and enter the following:
import bge import webbrowser def goToSite(): own = bge.logic.getCurrentController().owner click = own.sensors['Click'] over = own.sensors['Over'] if over.positive: own.children[0].color = [1,0,0,1] if click.positive: webbrowser.open(own['url'], new=2) else: own.children[0].color = [1,1,1,1]
So lets break this down. You’ll notice that all the magic happens within the function goToSite(). Because we’re operating in module mode more-or-less all the code we need will exist will exist within a function. This includes the standard BGE set up stuff most scripts do (get the owner etc). That’s because when we’re using python in module mode only the function in the called function will be executed.
Text objects, do not use materials, but get their colour from their object colour. We can take advantage of this. So when the mouse over sensor is positive we can change our text colour (line 9). Because we parented the text object to the button plane we can access it through own.children[0]. Because there is only one child we know it’s index will always be 0. If your button plane had more than one child it would be better to call it through its object name (own.children[‘myTextObj’], or whatever you called it). The KX_GameObject.color propery takes a list of 4 floats (0.0 <-> 1.0), red, green, blue and alpha. Here we just set it to red.
Next we check the click sensor has been activated. Notice how it’s nested in after the over sensor. Without this it would just detect the click and not care if our mouse is over our button or not. Now the magic really happens. We use python’s bundled webbrowswer module to open the link stored in our button object by calling webbrowser.open(). And all we have to pass to this is the url we wish to open, which we’ve stored in our button object as a property. By doing this, instead of hard coding the url into our function, we can reuse our button function with other buttons and let each button object define what site it goes to. webbrowser.open() actually takes 3 arguments (url, new and raise), the last 2 have default values. New controls if it opens in the current browser window, a new window or a new tab (set to 0, 1, or 2 respectively, 0 is the default. Raise can be set to True or False and controls if the window is raised, ie, pops-up. By default this is set to True.
While the webbrowser module will try to open the browser using the behaviours we’ve specified, it is at the whim of the system and browser set up and that will have the final say over how things are opened. Outside of that function the webbrowser module doesn’t do much else. All it’s there for is to open links in the webbrowser. The other bits in that module you’ll probably never need.
Finally, we take advantage that when the mouse is no longer over our button object the mouse over sensor will send a negative pulse to the python module controller. This will cause the function to run one last time, this time setting our text back to its original colour.
Taking it further
This is how I set up all my menu buttons. In menu.py I define a separate function for each type of behaviour. Where buttons share behaviours but have different values (like url’s) I store these values in the button object to save on code re-writing. We can put any code after checking the click sensor. So we could create an exit button:
def exit(): own = bge.logic.getCurrentController().owner click = own.sensors['Click'] over = own.sensors['Over'] if over.positive: own.children[0].color = [1,0,0,1] if click.positive: bge.logic.endGame() else: own.children[0].color = [1,1,1,1]
Or we could go to another scene, such as going to the game from the main menu:
def playGame(): own = bge.logic.getCurrentController().owner click = own.sensors['Click'] over = own.sensors['Over'] if over.positive: own.children[0].color = [1,0,0,1] if click.positive: bge.logic.mouse.visible = False bge.logic.getCurrentScene().replace('Level1') else: own.children[0].color = [1,1,1,1]
Each time, we can just copy and paste the code from our previous button behaviour and change what happens in the inner-most if block.
Check boxes
Just to show you how extensible this method is, here’s a check box example.
Set up your button like before, but instead of using a text object, create a kind of check box shape:
And instead of adding a property called ‘url’ give it a property called ‘tickOb’ and make sure it is set to 0.
Then, in a new (hidden) layer, create a small tick/dot object that will go in your check box and name it something sensible. Finally, add the below function to menus.py and set the check boxes python module controller to call that.
def checkBox(): own = bge.logic.getCurrentController().owner click = own.sensors['Click'] over = own.sensors['Over'] if over.positive: own.children[0].color = [1,0,0,1] if click.positive: if own['tickOb']: # deselect box own['tickOb'].endObject() own['tickOb'] = None # update your game options/settings/whatever here else: # select box own['tickOb'] = own.scene.addObject('Tick', own) # update your game options/settings/whatever here else: own.children[0].color = [1,1,1,1]
To get your check box to change colour on when the cursor’s hovering over it, add a material and set it to use the object color:
Thoughts on menu design
Designing good menus it not too hard. I think the key thing is to keep it simple. Avoid overly chaotic, detailed or busy backgrounds. Block colours work better. Use fonts that are easy to read. And try not to include too many buttons on one singular menu. If you’ve got lots of buttons, can they be split across sub-menus? The purpose of each button and menu should be clear to the player without having to put in too much thought. At the end of the day, they want to spend most of their time playing the game, not navigating menus and figuring them out.
Think about the colour pallet of your game. Choose menu colours from that to help them feel tied in with the rest of the game. If you use game objects/images/elements/moving parts in the background don’t place them underneath text. Instead, use them as a framing tool to draw the player’s eye to the menu buttons. Blender Jones BGMC15 entry is a really good example of this:
In closing
I like using python modules for menu buttons. I find it leads to a really fast menu development and keeps all the menu functions tidy and in one place. If you look closely at the buttons.blend in the resources below, you might notice that each of the button objects is just copied down from the first one, all I’ve had to change is the function called. One of the beauties about the BGE using python is the amount of modules that python provides us with. We can use these modules to perform all kinds of actions outside of the BGE, such as directing them to a website. From a game development perspective you can use this to point players to updates, new content, or your blog **winky-face**
As always, if you’ve got any comments or suggestions leave them below. And feel free to leave your own menu/button design tips and share your own button functions.
|
https://whatjaysaid.wordpress.com/2015/02/25/bge-menu-buttons-and-opening-webpages/
|
CC-MAIN-2019-04
|
refinedweb
| 1,626
| 74.79
|
I was given a text file that has list of names phone numbers, calls in and out ect... Like this
Adams#Marilyn#8233331109#0#0#01012014#C
Anderson#John#5025559980#20#15#12152013#M
Baker-Brown#Angelica#9021329944#0#3#02112014#C
The # are delimiters between data items and each line has the call status as the last item. I need to know how I can display each persons information on the screen in a format such as:
Name Phone Calls Out Calls In Last Call
Marilyn Adams (823) 333-1109 0 0 01-01-2104
John Anderson (502) 555-9980 20 15 12-15-2013
Angelica Baker-Brown (859) 254-1109 11 5 02-11-2014
I have to use substring method to extract the phone number and add parentheses/dashes ect I also must have a while statement and a delimiter...
So Far my code looks like this Also I am in a beginners Java coding class....
import java.util.Scanner; import java.io.*; public class phonedata2_1 { public static void main (String[] args) throws IOException { String Phonefile, FirstName, LastName; Scanner PhoneScan, fileScan; System.out.println (" Name Phone Calls Out Calls In Last Call Status"); fileScan = new Scanner(new File("phonedata.txt")); while (fileScan.hasNext()) { Phonefile = fileScan.nextLine(); PhoneScan = new Scanner(Phonefile); PhoneScan.useDelimiter("#"); System.out.println(PhoneScan.next()+" "+ PhoneScan.next()+"\t" + PhoneScan.next()+"\t" + PhoneScan.next()+"\t" + PhoneScan.next()+"\t" + PhoneScan.next()+"\t" + PhoneScan.next()); } System.out.println ("\nTotal outgoing calls for the period: " + "\nTotal incoming calls for the period: \n"); } }
|
http://www.javaprogrammingforums.com/whats-wrong-my-code/36003-manipulating-strings-file-scanner.html
|
CC-MAIN-2014-42
|
refinedweb
| 253
| 53.61
|
Groovy Goodness: Create Our Own Script Class
Groovy Goodness: Create Our Own Script Class
Join the DZone community and get the full member experience.Join For Free
Get the Edge with a Professional Java IDE. 30-day free trial.
Groovy is a great language to write DSL implementations. The Groovy syntax allows for example to leave out parenthesis or semi colons, which results in better readable DSL (which is actually Groovy code).
To run a DSL script we can use the GroovyShell class and evaluate the script. By default the script is evaluated with an instance of groovy.lang.Script class. But we can extends this Script class and write our DSL allowed methods, which can then be used by the DSL script. We pass our own Script class to the GroovyShell with an CompilerConfiguration object. The CompilerConfiguration allows us to set a new base script class to be used.
The following sample is a simple DSL to change the state of a Car object. Notice we implicitly access the Car object that is passed to the GroovyShell via a binding. The custom CarScript class can access the car object via the binding and change it's state.
import org.codehaus.groovy.control.CompilerConfiguration // Simple Car class to save state and distance. class Car { String state Long distance = 0 } // Custom Script with methods that change the Car's state. // The Car object is passed via the binding. abstract class CarScript extends Script { def start() { this.binding.car.state = 'started' } def stop() { this.binding.car.state = 'stopped' } def drive(distance) { this.binding.car.distance += distance } } // Use custom CarScript. def compilerConfiguration = new CompilerConfiguration() compilerConfiguration.scriptBaseClass = CarScript.class.name // Define Car object here, so we can use it in assertions later on. def car = new Car() // Add to script binding (CarScript references this.binding.car). def binding = new Binding(car: car) // Configure the GroovyShell. def shell = new GroovyShell(this.class.classLoader, binding, compilerConfiguration) // Simple DSL to start, drive and stop the car. // The methods are defined in the CarScript class. def carDsl = ''' start() drive 20 stop() ''' // Run DSL script. shell.evaluate carDsl // Checks to see that Car object has changed. assert car.distance == 20 assert car.state == 'stopped' }}
|
https://dzone.com/articles/groovy-goodness-create-our-own
|
CC-MAIN-2018-51
|
refinedweb
| 365
| 68.67
|
This week, we're going to discuss the ins and outs of adding parameters to your loadable modules and, as we did last week, we're going to work off the appropriate section in the book Linux Device Drivers (3rd ed.) (LDD3), that you can find online here. And again, as we did last week, we'll be dealing with some of the content from Chapter 2. (The archive of all previous "Kernel Newbie Corner" articles can be found here.)
This is ongoing content from the Linux Foundation training program. If you want more content please, consider signing up for one of these classes.
So How Do We Start?
Let's start with a module source file p1.c that has nothing to do with parameters:
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
static int answer = 42 ;
static int __init hi(void)
{
printk(KERN_INFO "p1 module being loaded.\n");
printk(KERN_INFO "Initial answer = %d.\n", answer);
return 0;
}
static void __exit bye(void)
{
printk(KERN_INFO "p1 module being unloaded.\n");
printk(KERN_INFO "Final answer = %d.\n", answer);
}
module_init(hi);
module_exit(bye);
MODULE_AUTHOR("Robert P. J. Day");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("No parameters here.");
Only one thing is slightly different about this new version of our classic module--the definition of the static int variable answer, which is initialized to 42. And since that's the initial value that's compiled directly into the module, it comes as no surprise that, when we compile, load and unload this module, we expect to see the following in /var/log/messages:
... localhost kernel: p1 module being loaded.
... localhost kernel: Initial answer = 42.
... time passes, until we unload ...
... localhost kernel: p1 module being unloaded.
... localhost kernel: Final answer = 42.
Obviously, since we started with answer having the value 42, and didn't change it anywhere, we're not at all surprised to see that the value is still 42 upon module exit. And now, we drag parameters into it.
So What Do We Do With "Parameters?"
As the name suggests, module parameters give us the ability to pass a value of some kind to a module upon loading, and also to examine its value while the module is loaded and running, and even change it on the fly if we want. Here's the simple change we would make to the code above (let's call the new source file p2.c):
...
#include <linux/module.h>
#include <linux/moduleparam.h> // add this
...
static int answer = 42 ;
module_param(answer, int, 0644);
MODULE_PARM_DESC(answer, "Life, the universe, etc.");
...
Now recompile your module and, without even loading it, let's discuss what just happened above:
Technically, if you want to define any parameters, you should include the header file moduleparam.h. Practically, it turns out that that's not really necessary since the file module.h already does that, but it's cleaner for you to do it explicitly, just in case that weird and indefensible inclusion is ever fixed some day.
Parameter variables should be defined in your module statically and at file scope, not within the scope of any function. (Typically, all parameter variable definition takes place near the top of your module source file. It just looks cleaner that way.)
What the above does is define the variable answer as a module parameter of (not surprisingly) type int, with permission 0644 (to be discussed shortly) and a brief description of the parameter itself.
After you compile your module, you can see all of the above via the modinfo command, thusly:
$ modinfo p2.ko
filename: p2.ko
description: Trivial example of an int parameter
license: GPL
author: Robert P. J. Day
srcversion: 67F60C3C2129CCFAE9687FB
depends:
vermagic: 2.6.29.5-191.fc11.x86_64 SMP mod_unload
parm: answer:Life, the universe, etc. (int) <-- oh, look!
The old macro for defining module parameters was MODULE_PARM, as opposed to the newer macro of module_param, but you'll probably find the occasional historical cruft hanging out in the kernel source tree referring back to the original form, at least in the occasional comment or for backward compatibility.
Given that you'll be working with numerous module source files this week, you can create different subdirectories for each single source file or, for brevity, you can just stuff all the source files in the same directory and modify the Makefile thusly:
obj-m = p1.o p2.o p3.o ...
which will rebuild all of your loadable modules as needed upon each invocation of make. Your choice.
So What Do I Do With That Parameter?
This:
# insmod p2.ko answer=6
# rmmod p2
whereupon you expect to see in /var/log/messages:
... kernel: p2 module being loaded.
... kernel: Initial answer = 6.
... time passes ...
... kernel: p2 module being unloaded.
... kernel: Final answer = 6.
If you hadn't set the parameter value during the module load, the value would have been 42 throughout the life of the module as it was before. But since you set the parameter value to six at load time, that value was used instead to initialize the variable. This is the sort of thing that is used by some modules to be told, for instance, their I/O address or interrupt line upon loading, and you should appreciate just how handy that can be.
But wait ... there's so much more.
Poking Around Under /sys
We've never mentioned it before but, on most sane Linux systems, the instant you load a module, an entire subdirectory structure is created for it under /sys/module/modulename. Assuming you've reloaded your p2 module with an answer parameter value of 6, here's what you should expect to find:
$ ls -1F /sys/module/p2
holders/
initstate
notes/
parameters/
refcnt
sections/
srcversion
You can poke around the other entries related to your p2 module but we're most interested in:
$ ls -l /sys/module/p2/parameters/
total 0
-rw-r--r--. 1 root root 4096 2009-07-11 09:00 answer
and there it is--a user-readable way to examine the current value of a module parameter in real time while the module is loaded and running. Just list it:
$ cat /sys/module/p2/parameters/answer
6
Not surprisingly, if your module changes that variable while it's running, the value you see will correspond to whatever is stored there at the time of examination. But, yes, there's more.
What's With That Parameter Permission Setting?
Easy--as with regular files, those permission settings dictate who is allowed to do what with that parameter file under /sys/module. In our case, since it has a file mode of 644, the owner (root) can both read and write, and everyone else can read.
It's easy to see what it means to have read access, but what does it mean to have write access? It means that, on the fly, you can do this to change the value of that internal variable at any time while the module is loaded:
# echo 21 > /sys/module/p2/parameters/answer
# cat /sys/module/p2/parameters/answer
21
#
and when you finally unload the module, you shouldn't be surprised to note that that variable now has the value 21 (unless, of course, you changed it yet again).
So what else do you need to know about parameter permissions? There are a few things.
First, when you define parameter permissions in your code, you can use numeric values as we did above (such as 0644) or, if you include the header file <linux/stat.h>, you can use the fairly intuitive macros:
#define S_IRWXU 00700
#define S_IRUSR 00400
#define S_IWUSR 00200
#define S_IXUSR 00100
#define S_IRWXG 00070
#define S_IRGRP 00040
#define S_IWGRP 00020
#define S_IXGRP 00010
#define S_IRWXO 00007
#define S_IROTH 00004
#define S_IWOTH 00002
#define S_IXOTH 00001
or any bitwise OR'ed combination to get the same effect. It's entirely up to you.
Next, if you create a parameter with a permission setting of zero, that means that that parameter will not show up under /sys at all, so no one will have any read or write access to it whatsoever (not even root). The only use that parameter will have is that you can set it at module load time, and that's it.
Finally (and this one's kind of important), if you choose to define writable parameters and really do write to them while your module is loaded, your module is not informed that the value has changed. That is, there is no callback or notification mechanism for modified parameters; the value will quietly change in your module while your code keeps running, oblivious to the fact that there's a new value in that variable.
If you truly need write access to your module and some sort of notification mechanism, you probably don't want to use parameters. There are better ways to get that functionality.
Anything Else?
Of course. Where do we even begin?
First, module parameters can have a number of different types such as int, uint, short, ushort, long, bool and a few others, all defined in the header file <linux/moduleparam.h>. You can even define array-type parameters. All of this is covered in the aforementioned section of LDD3, and testing any of those is left as an exercise for the reader.
Next, now that you know that loading your module creates an entire directory substructure under /sys/module/modulename that lists that module's properties and attributes, it should come as no surprise that you can view that same directory structure for any loaded module thusly:
$ modinfo fuse
which will pick up most of its output from that very directory.
The next neat feature is that it's possible to define the filename that shows up under /sys as being different from the name of the actual static variable thusly:
static int answer = 42 ;
module_param_named(readable_answer, answer, int, 0444);
MODULE_PARM_DESC(readable_answer, "The readable answer");
In the above snippet, the internal variable has the conveniently brief name of answer, while the visible filename under /sys that corresponds to it will have the (perhaps) more self-descriptive name of readable_answer. In fact, nothing stops you from defining more than one renaming of the same internal variable with different permission settings, for whatever reason you can imagine.
And, finally, if the pre-defined parameter data types aren't adequate, you can in fact create parameters with entirely user-defined data types as described here, although this doesn't appear to be a very commonly-used feature so I wouldn't worry too much about that.
Exercises for the Reader
And here's where we start a brand new feature of this column--puzzles for the reader to solve.
- As we noted above, it's possible (although admittedly unlikely) that a single static variable in a module source file has been defined with multiple module_param_named() calls to show up as more than one filename under the directory /sys/module/modulename/parameters/. Can you find such an example anywhere under the kernel source drivers/ directory? Leave a solution in the comments section.
- Is there any example anywhere under drivers/ of someone creating a user-defined parameter type for their module? Where?
- Use lsmod to list the loaded modules on your system, then run modinfo on some of them to see which ones have interesting parameters.
Andrey Said:
Thanks for the article. Here is my solution for your first question. It's not perfect, but it works find drivers -iname '*.c' -exec bash -c "found=\$(grep -i -H -n 'module_param_named' {} | awk '{ print \$2 }' | sort | uniq -c | grep -v ' 1 ' ); if [ ! \"\$found\" == \"\" ]; then echo {}; echo \$found; fi;" \;
Andrey Said:
The same approach for the second question: $ find drivers -iname '*.c' -exec bash -c "found=\$(grep -i -H -n 'param_check_' {} ); if [ ! \"\$found\" == \"\" ]; then echo {}; echo \$found; fi;" \;
|
http://www.linux.com/learn/linux-career-center/28065-the-kernel-newbie-corner-everything-you-wanted-to-know-about-module-parameters
|
CC-MAIN-2015-40
|
refinedweb
| 1,970
| 61.56
|
Everyone appreciates a fast and responsive UI, and Visual Studio is no exception. Extensions that run in Visual Studio play a significant role in how responsive the IDE will be for its users. Visual Studio has been evolving over the past few cycles to not only improve performance, but also responsiveness during operations that may take a while to execute, offering cancellation or the ability to run these operations in the background while you can interact with the IDE in the meantime.
IDE responsiveness during long-running operations requires these operations to be written asynchronously or off the UI thread, which can be challenging. Although it might be easy to write and maintain async code that uses the C#/VB async keyword for responsiveness during these long running operations, doing so can cause deadlocks if that async code is ever called by a method that must synchronously block until the async work has completed. For example, code as simple as this would deadlock if run on the UI thread of any GUI app:
It can be very tempting to write code such as the above so that you can call DoSomethingAsync() most of the time to provide a responsive UI, but call DoSomething() when you have to do it synchronously. In fact completing something synchronously is quite often necessary in VS to satisfy old IVs* interfaces that were not designed with async in mind. So how do you write async code that won’t deadlock when it must synchronously block the UI thread?
In this post, we outline modern guidelines for Visual Studio 2013 for managed code developers writing VS extensions regarding the use of async and multi-threaded code that avoids pitfalls such as the one above. We’ll start with a short history lesson in COM that may help explain why the above code deadlocks. Then we prescribe the tools and coding patterns to use to avoid these pitfalls.
See also MSDN on this topic: Managing Multiple Threads in Managed Code
A small history lesson in COM thread marshaling
With Visual Studio 2010 came the introduction of significant chunks of managed code to the Visual Studio product itself. The Visual C++ project system was the first project system to be (mostly) rewritten in (ironically) managed code. This was also the version when the text editor was rewritten in managed code. Since then Solution Explorer has been rewritten in managed code and the Javascript project system was introduced as an all-managed project system.
With that managed code came subtle but important differences in how services behaved and interacted, notwithstanding backward compatibility being a firm pillar. Relevant to this post are differences in how threading rules between components were implemented.
When everything was C++ native code, COM ensured that almost everything happened on the main STA thread (i.e. the UI thread). If code running in another apartment (e.g. a background thread) called any of these COM components the background thread would block while the call was re-issued on the main thread. This protected the COM component from having to deal with concurrent execution, but left it open to reentrancy (being invoked while in an outbound call). This technique worked whether the caller was managed (automatically) or native code (via the proxy stub that COM would generate for the caller).
When those same COM components were rewritten in managed code (C# in most cases) some of these automatic thread marshaling behaviors became less certain. For instance, if native code called the rewritten managed COM component, then it would execute on the main thread since the native caller was typically already on the main thread, or since native code calls managed code through a COM proxy the call could get marshaled. But if the caller and new service are both written in managed code, the CLR removes the COM marshaling interop boundary between the two components to improve performance. This removal of the interop boundary meant that any assurance that the managed COM service might otherwise have of always executing on the UI thread was no longer guaranteed. As a result, the conscientious managed code developer writing VS components should either write thread-safe code or be sure that every public entrypoint marshals to the UI thread explicitly before invoking any internal code to help assure thread-safety.
When everything was COM written in native code, marshaling to the UI thread was done by posting a message to the windows message queue for the main thread and then blocking the calling thread until the call completes. The main thread would pick up the message in normal course of its message pump, execute the code, and then return to the message pump. In some cases the main thread was busy, and these messages would just wait until the main thread returned to its message pump. In a few cases, this work on the main thread was actually blocking the main thread from returning to its message pump waiting for some background thread to complete its work, which in turn was blocked waiting for the main thread to do some work. This deadlock would be broken by the main thread waiting using CoWaitForMultipleHandles, which ran a filtered message pump that only processed messages with a matching “COM logical thread ID” that let it know it was related work and presumably necessary to execute to avoid deadlocks.
Switching to the UI thread
When managed code needs to marshal a call to the UI thread in Visual Studio, ultimately the same approach would be taken as in native code. If you’re on a background thread, switching to the UI thread required adding a message to the message queue and (usually) waiting for it to be executed and handling the result. But at an actual coding level this tended to surface in either of two ways: SynchronizationContext.Post for asynchronous invocation, or relying on a truly native COM component to marshal the call to the UI thread and then call the managed code back from the new thread.
In fact one of the simplest ways of getting to the UI thread for a managed develop in VS has been to use ThreadHelper.Invoke. Internally this uses the method of calling a native COM service in order to get to the UI thread and then it invokes your delegate.
The problems start when the deadlock resolving code kicks in. The COM logical thread ID doesn’t automatically propagate for managed code like it does for native code. So the VS filtered message pump doesn’t know which marshaling messages to execute when the main thread is blocked in managed code in order to avoid deadlocks. So it lets them all in. Well, almost. Posted messages (in the SynchronizationContext.Post sense) don’t get in, but all the “RPC” level marshaling calls do get in regardless of their actual relevance to what the main thread is waiting on.
The one or two fundamental ways to get to the UI thread notwithstanding, there were at least a dozen ways to get to the UI thread in VS (each having slightly different behaviors, priorities, reentrancy levels, etc.) This made it very difficult for code to choose which method was appropriate, and often required that the code had complete knowledge of what scenario it was called in, which made it impossible to get right when the same code executed in multiple scenarios.
Reentrancy
Because of this nearly wide open policy for executing code from other threads, an evil we call “reentrancy” occurs when the main thread is blocked waiting for something and something unrelated from another thread jumps in and begins executing unrelated work. The main thread may have been blocked on anything (e.g. it could be a contested lock, I/O, or actually a background thread) and suddenly it’s executing something completely unrelated. When that work never calls back into your component, the problem is merely annoying and can slow down your own code because it can’t resume execution until the offending party gets off your callstack. But if that work eventually calls into the same component that it interrupted, the results can be devastating. Your component may be ‘thread safe’ in the sense that it always executes on the UI thread, but reentrancy poses another threat to your data integrity. Consider this case:
Code inspection may not suggest that this code is vulnerable to threading issues. But the call to File.Open may result in the main thread blocking. During this block an RPC call may re-enter the UI thread and call this same method. This second execution of the method will also satisfy the file open count test and start opening a file. It will assign the result to the last element in the array, increment the field, and exit. Finally, the original call (the one that was interrupted) will finish its File.Open call, and then throw when assigning the result to the array, since m_filesOpened is now out of bounds (beyond the last element of the array). This looks like a classic multi-threaded concurrency issue, yet remarkably it can be reproduced even though your method only ever ran on the UI thread, because that thread allowed reentrancy.
Reentrancy can cause product crashes and hangs when code wasn't prepared for the data corruption it introduces. And these symptoms are often detected long after the reentrancy occurred, making it very difficult, when you're analyzing the results of the devastation, to figure out how it was introduced in the first place.
Lions and Tigers and Bears, oh my!
So we have two evils: deadlocks and reentrancy. Without reentrancy to the main thread we have deadlocks when the main thread is blocked on background threads that are in turn blocked on the main thread. And with reentrancy we tend to get too much reentrancy leading to corruption, crashes and hangs. As early as Dev10, VS architects would meet to discuss the current ‘balance’ between reentrancy and deadlocks to resolve some of the ship-blocking bugs that would plague that version of VS.
Avoiding this reentrancy involves turning two knobs: what kinds of messages are allowed through the message filter and/or the priority those messages themselves come in with. Letting fewer messages in tends to create deadlocks, whereas letting in more messages tends to create more reentrancy with their own crashes and hangs. Since adjusting the overall policy in the message filter was rife with dangers on either side, each cycle we tended to fix the major bugs by some localized code change that relied heavily on the other players’ current behavior and was therefore fragile – leading to yet another meeting and code change later that cycle or in the next one.
Clearly there was a need for a systemic fix. The native COM logical thread ID was attractive but wasn’t an option for managed code as far as we could see.
Asynchronous programming in managed code
Opportunities to write async code within VS tended to be few and far between, since most of the time the code implemented some IVs* interface with synchronous method signatures, such that postponing the work until later isn't an option. In a very few cases (such as async project load) it was possible, but it required use of the VS Task Library, which was designed to work from native code rather than something that felt C#-friendly. Use of the managed-friendly TPL Task library that shipped with .NET 4.0 was possible in some cases, but often led to deadlocks because TPL Tasks lack any dependency-chain analysis that would avoid deadlocks with the main thread. While the VS Task Library could avoid the deadlocks, it demanded that the code be VS-specific instead of being rehostable inside and outside Visual Studio.
In more recent history a new pattern emerged in code that ran in the Visual Studio process: use of the C#/VB async keyword. This keyword makes writing asynchronous code simple and expressive, but it didn't work at all with the VS Task Library. And since so much code has to execute synchronously on the main thread in order to implement some IVs* interface, writing asynchronous code usually meant you'd deadlock in VS when the async Task was synchronously blocked on with Task.Wait() or Task.Result.
Introducing the JoinableTaskFactory
To solve all of these problems (deadlocks, reentrancy, and async), we are pleased to introduce the JoinableTaskFactory and related classes. “Joinable tasks” are tasks that know their dependency chain, which is calculated and updated dynamically with the natural execution of code, and can mitigate deadlocks when the UI thread blocks on their completion. They block unwanted reentrancy by turning off the message pump completely, but avoid deadlocks by knowing their own dependency chain and allowing related work in by a private channel to the UI thread. Code written with the C# async keyword also (mostly) just works unmodified, when originally invoked using the JoinableTaskFactory.
So in essence, we've finally solved the classic deadlock vs. reentrancy problem. There is now just one recommended way to get to the UI thread, and it works all the time. And you can now write natural C# async code in VS components; by leveraging async we can improve responsiveness in the IDE so developers don't see so many "please wait" dialogs. Goodness.
Let's look at some concrete examples. Please note that most of these examples require that you add a reference to Microsoft.VisualStudio.Threading.dll and add the following line to your source file:
using Microsoft.VisualStudio.Threading;
Switch to and from the UI thread in an asynchronous method
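The code for this section was lost in extraction. Below is a sketch of the kind of method being described, using SwitchToMainThreadAsync and the awaitable TaskScheduler.Default from the same library, and the method names mentioned in the following paragraphs; ReadUiState and the parameters are illustrative:

```csharp
public async Task PerformDataAnalysisAsync()
{
    // Switch to the main (UI) thread; no-ops if we are already there.
    await ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();
    string input = ReadUiState();          // safe: we are on the UI thread

    // Switch to the threadpool for the heavy lifting; likewise sticks.
    await TaskScheduler.Default;
    var results = await DoSomethingAsync(input);
    await SaveWorkToDiskAsync(results);    // resumes on the threadpool either way
}
```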
Notice how the method retains thread affinity across awaits of normal async methods. You can switch to the main thread and it sticks. Then you switch to a threadpool thread and it likewise sticks. If you’re already on the kind of thread that your code asks to switch to, the code effectively no-ops and your method continues without yielding.
The implementation of async methods you call (such as DoSomethingAsync or SaveWorkToDiskAsync) does not impact the thread of the calling method. For example suppose in the sample above, SaveWorkToDiskAsync() was implemented to switch to the UI thread for some of its work. When SaveWorkToDiskAsync() completes its work and PerformDataAnalysisAsync() resumes execution, it will be on the same type of thread it was before, which is the threadpool in our case. This is very nice for information hiding. When writing async code, you can use whatever threads you want, and your caller needn’t be aware of or impacted by it.
At this point we recommend writing async code whenever you have an opportunity to. If you’re writing a method that does I/O (whether disk or network access), take advantage of .NET’s async APIs. Obviously if the main thread is doing async I/O the system is more responsive because the message pump can be running while the I/O is in progress. But perhaps less obvious is why async I/O is advantageous even if you’re already on a threadpool thread. The threadpool is a scarce resource too. By default the CLR only provides as many threadpool threads as the user has cores. This often means 4 threads, but can be as low as 1 or 2 on netbooks (and yes, some Visual Studio customers develop on netbooks). So blocking a threadpool thread for more than a very brief time can delay the threadpool from serving other requests, sometimes for very long periods. If the main thread, which is active while you’re doing I/O on a threadpool thread, processes a message that requires use of the threadpool, you may end up being responsible for the IDE freezing up on the user because you’re blocking the threadpool. If you use await whenever you can even on threadpool threads, then those threads can return to the pool during your async operation and serve these other requests, keeping the overall application responsive and your extensions acting snappy, so customers don’t uninstall your extension because it degrades the IDE.
Call async methods from synchronous methods without deadlocking
In a pure async world, you’re home free just writing “async Task” methods. But what if one of your callers is not async and cannot be changed to be async? There are valid cases for this, such as when your caller is implementing a public interface that has already shipped. If you have ever tried to call an async method from a synchronous one, you may have tried forcing synchronous execution by calling Task.Wait() or Task.Result on the Task or Task<T> returned from the async method. And you probably found that it deadlocked. You can use the JoinableTaskFactory.Run method to avoid deadlocks in these cases:
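A minimal sketch of the pattern (SomeOperationAsync is the placeholder name used in the surrounding text):

```csharp
public void DoWorkSynchronously()
{
    // Blocks the calling thread, but keeps a private channel to the UI
    // thread open so the async work cannot deadlock against it.
    ThreadHelper.JoinableTaskFactory.Run(async delegate
    {
        await SomeOperationAsync();
    });
}
```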
The above Run method will block the calling thread until SomeOperationAsync() has completed. You can also return a value computed from the async method to your caller with Run<T>:
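For example, a sketch of the generic overload (ComputeValueAsync is a hypothetical helper):

```csharp
public string GetValue()
{
    return ThreadHelper.JoinableTaskFactory.Run(async delegate
    {
        string value = await ComputeValueAsync();
        return value;
    });
}
```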
Calling the Run method is equivalent to calling the RunAsync method and then calling JoinableTask.Join on its result. This way, you can potentially kick off work asynchronously and then later block the UI thread if you need to while it completes.
The 3 threading rules
Using the JoinableTaskFactory requires that you follow three rules in the managed code that you write:
- If a method has certain thread apartment requirements (STA or MTA) it must either:
- Have an asynchronous signature, and asynchronously marshal to the appropriate thread if it isn’t originally invoked on a compatible thread. The recommended means of switching to the main thread is:
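The snippet itself is missing here; per the summary at the end of this post, the switch is:

```csharp
await ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync();
```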
OR
- Have a synchronous signature, and throw an exception when called on the wrong thread.
In particular, no method is allowed to synchronously marshal work to another thread (blocking while that work is done). Synchronous blocks in general are to be avoided whenever possible.
See the Appendix section for tips on identifying when this is necessary.
- When an implementation of an already-shipped public API must call asynchronous code and block for its completion, it must do so by following this simple pattern:
- If ever awaiting work that was started earlier, that work must be Joined. For example, one service kicks off some asynchronous work that may later become synchronously blocking:
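The sample is missing; a sketch of the shape being described, where RunAsync starts the work and Join later blocks on it (the service and method names are illustrative):

```csharp
class SomeService
{
    private JoinableTask asyncWork;

    public void BeginOperation()
    {
        // Kick off the work without blocking the caller.
        this.asyncWork = ThreadHelper.JoinableTaskFactory.RunAsync(
            async delegate { await SomeOperationAsync(); });
    }

    public void WaitForOperation()
    {
        // Join (not Wait) so that if this runs on the UI thread, the
        // async work can still get to the UI thread and complete.
        this.asyncWork.Join();
    }
}
```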
Note however that this extra step is not necessary when awaiting is done immediately after kicking off an asynchronous operation.
In particular, no method should call .Wait() or .Result on an incomplete task.
A failure to follow any of the above rules may result in your code causing Visual Studio to deadlock. Analyzing deadlocks with the debugger when synchronous code is involved is usually pretty straightforward because there are usually two threads involved (one being the UI thread) and it’s easy to see callstacks that tell the story of how it happened. When writing asynchronous code, analyzing deadlocks requires a new set of skills, which we may document in a follow-up post on this blog if there is interest.
You can read more about these and related types on MSDN:
- Microsoft.VisualStudio.Threading namespace
- JoinableTaskFactory class
- ThreadHelper.JoinableTaskFactory property
JoinableTask Interop with the VS Task Library (IVsTask)
What is the VS Task Library?
The VS Task Library, if you’re not already familiar with it, was introduced in Visual Studio 2012 in order to provide multi-threaded task scheduling to native code running in Visual Studio. This is how ASL (asynchronous solution load) was built for all the project systems that were written in native code. The VS Task Library is itself based on TPL, and the COM interfaces look very similar to the TPL public surface area. In particular, the VS Task Library is based on TPL as it was in .NET 4.0, so strictly speaking the VS library doesn’t help you write asynchronous tasks using async methods. But it does let you schedule synchronous methods for execution on background threads with continuations on the main thread, thereby achieving an async-like effect, just with a bit more work.
While we’re on the subject, a word of caution: even though TPL and the VS Task Library look similar, they don’t always behave the same way. The VS Task Library changes the behavior of some things like cancellation and task completion in subtle but important ways. If you’re already familiar with TPL, don’t make assumptions about how the VS Task Library works.
One important difference between TPL and IVsTask is that if you block the main thread by calling Task.Wait() or Task.Result on an incomplete task, you’ll likely deadlock. But if you call IVsTask.Wait() or IVsTask.GetResult(), VS will intelligently schedule tasks that require the UI thread to avoid deadlocks in most cases. This deadlock-resolving trait is similar to the JoinableTask.Join() method.
JoinableTask and IVsTask working together
Suppose you need to implement a COM interface that requires your method to return an IVsTask. This is an opportunity for you to implement the method asynchronously to improve IDE responsiveness. But creating an IVsTask directly is tedious, especially when the work is actually asynchronous. Now with JoinableTaskFactory it’s easy using the RunAsyncAsVsTask method. If all you have is a JoinableTask instance, you can readily port it to an IVsTask using the JoinableTask.AsVsTask() extension method. But let’s look at some samples that use RunAsyncAsVsTask, as that is preferable since it supports cancellation.
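The original samples are missing; here is a hedged sketch of RunAsyncAsVsTask based on the description that follows. The priority value and delegate body are illustrative:

```csharp
IVsTask vsTask = ThreadHelper.JoinableTaskFactory.RunAsyncAsVsTask(
    VsTaskRunContext.UIThreadBackgroundPriority,
    async cancellationToken =>
    {
        // The delegate may start on any thread and can switch as needed.
        await ThreadHelper.JoinableTaskFactory.SwitchToMainThreadAsync(cancellationToken);
        DoSomethingWithUI();

        // cancellationToken is signaled if anyone calls vsTask.Cancel(),
        // letting you wrap up your work and exit early.
        cancellationToken.ThrowIfCancellationRequested();
        return 42;   // becomes the IVsTask's result
    });
```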
What we’ve done here is created an IVsTask using the JoinableTaskFactory. Any work that requires the UI thread will be scheduled using background priority (which means any user input in the message queue is processed before this code starts executing on the UI thread). Several priorities are available and are described on MSDN. The cancellation token passed to the delegate is wired up to the IVsTask.Cancel() method so that if anyone calls it, you can wrap up your work and exit early.
If your async delegate calls other methods that return IVsTask, you can safely await IVsTask itself within your async delegate naturally, as shown here:
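The sample is missing; a sketch, assuming a component method that returns an IVsTask (someComponent and StartIndexingAsync are hypothetical):

```csharp
IVsTask outer = ThreadHelper.JoinableTaskFactory.RunAsyncAsVsTask(
    VsTaskRunContext.UIThreadBackgroundPriority,
    async cancellationToken =>
    {
        // An IVsTask returned by another component can be awaited directly.
        await someComponent.StartIndexingAsync();
        return true;
    });
```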
Here is a longer, commented sample so you can understand more of the execution flow:
Which scheduling library should I use?
When you’re writing native code, the VS Task Library is your only option. If you’re writing managed code in VS, you now have three options, each with some benefits:
- System.Threading.Tasks.Task
- Pros:
- Built into .NET 4.x so code you write can run in any managed process (not just VS).
- C# 5 has built-in support for creating and awaiting them in async methods.
- Cons
- Quickly deadlocks when the UI thread blocks on a Task’s completion.
- Microsoft.VisualStudio.Shell.Interop.IVsTask
- Pros:
- Automatically resolves deadlocks.
- Compatible with C++ and COM interfaces and is therefore used for any public IVs* interface that must be accessible to native code.
- Cons:
- Relatively high scheduling overhead. Usually not significant though if your scheduled work is substantial per IVsTask.
- Hardest option for managed coders since tasks and continuations have to be manually stitched together.
- No support for async delegates.
- Microsoft.VisualStudio.Threading.JoinableTask
- Pros:
- Automatically resolves deadlocks.
- Allows authoring of C# 5 friendly async Task methods that are then adapted to JoinableTask.
- Less scheduling overhead than IVsTask.
- Bidirectional interop with IVsTask (produce and consume them naturally).
- Cons:
- More overhead than TPL Task.
- Not available to call from native code.
To sum up, if you’re writing managed code, any async or scheduled work should probably be written using C# 5 async methods, with JoinableTaskFactory.RunAsync as the root caller. This maximizes the benefit of reusable async code while mitigating deadlocks.
Summary
The JoinableTaskFactory makes asynchronous code easier to write and call for code that runs within Visual Studio. Switching to the UI thread should always be done using JoinableTaskFactory.SwitchToMainThreadAsync(). The JoinableTaskFactory should be obtained from ThreadHelper.JoinableTaskFactory.
As you use these patterns, we’d like to hear your feedback. Please add comments on this post.
Appendix
It is not always obvious whether some VS API you’re calling may require marshaling to the UI thread. Here is a list of types, members, or tips to help identify when you should explicitly switch to the UI thread yourself using SwitchToMainThreadAsync() before calling into VS:
- Package.GetGlobalService()
- Casting an object to any other type (object to IVsHierarchy for example). If the object being cast is a native COM object, the cast may incur a call to IUnknown.QueryInterface, and will most likely require the UI thread.
- Almost any call to an IVs* interface.
Comments
For VS Extension authors who currently support older versions of Visual Studio, this would be an important part to consider under "Which scheduling library should I use?" summary:
Pro: Usable in any version of Visual Studio
Con: Only usable in Visual Studio 2012 and up
Con: Only usable in Visual Studio 2013 and up
Is this accurate? Is there no way to 'rip out' the JoinableTask pattern and use it in, say, VS2010?
The Microsoft.VisualStudio.Threading library is now available for Visual Studio 2010 and 2012, by installing it with your VS extension as described and allowed by this NuGet package:
Source: https://blogs.msdn.microsoft.com/andrewarnottms/2014/05/07/asynchronous-and-multithreaded-programming-within-vs-using-the-joinabletaskfactory/
mytorch is your torch :fire:
A transparent boilerplate + bag of tricks to ease my (yours?) (our?) PyTorch dev time.
Some parts here are inspired by or copied from fast.ai. However, I've tried to keep it such that control of the model (model architecture), vocabulary, and preprocessing is always maintained outside of this library. The training loop, data samplers etc. can be used independently of anything else in here, but of course they work better together.
I'll be adding proper documentation, examples here, gradually.
Installation
pip install my-torch
(Added hyphen because someone beat me to the mytorch package name.)
Idea
Use/Ignore most parts of the library. Will not hide code from you, and you retain control over your models. If you need just one thing, no fluff, feel free to copy-paste snippets of the code from this repo to yours. I'd be delighted if you drop me a line, if you found this stuff helpful.
Features
Customizable Training Loop
- Callbacks @ epoch start and end
- Weight Decay (see this blog post )
- :scissors: Gradient Clipping
- :floppy_disk: Model Saving
- :bell: Mobile push notifications @ the end of training :ghost: (see Usage)
Sortist Sampling
Custom Learning Rate Schedules
Customisability & Flat Hierarchy
Usage
Simplest Use Case
import torch, torch.nn as nn, numpy as np

# Assuming that you have a torch model with a predict and a forward function.
# model = MyModel()
assert type(model) is nn.Module

# X, Y are input and output labels for a text classification task with four classes. 200 examples.
X_trn = np.random.randint(0, 100, (200, 4))
Y_trn = np.random.randint(0, 4, (200, 1))
X_val = np.random.randint(0, 100, (100, 4))
Y_val = np.random.randint(0, 4, (100, 1))

# Preparing data
data = {"train": {"x": X_trn, "y": Y_trn},
        "valid": {"x": X_val, "y": Y_val}}

# Specifying other hyperparameters
epochs = 10
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
loss_function = nn.functional.cross_entropy
train_function = model  # or model.forward
predict_function = model.predict

train_acc, valid_acc, train_loss = loops.simplest_loop(epochs=epochs, data=data,
                                                       opt=optimizer,
                                                       loss_fn=loss_function,
                                                       train_fn=train_function,
                                                       predict_fn=predict_function)
Slightly more complex examples
@TODO: They exist! Just need to add examples :sweat_smile:
- Custom eval
- Custom data sampler
- Custom learning rate annealing schedules
Saving the model
@TODO
Notifications
The training loop can send notifications to your phone informing you that your model's done training and report metrics alongwith. We use push.techulus.com to do so and you'll need the app on your phone. If you're not bothered, this part of the code will stay out of your way. But If you'd like this completely unnecessary gimmick, follow along:
- Get the app. Play Store | AppStore
- Making the key available. Options:
- in a file named ./push-techulus-key, in plaintext at the root dir of this folder. You could just echo 'your-api-key' >> ./push-techulus-key.
- through arguments to the training loop as a string
- Pass flag to loop, to enable notifications
- Done :balloon: You'll be notified when your model's done training.
Changelog
v0.0.2
- Added negative sampling
- [TODO] Added multiple evaluation functions
- [TODO] Logging
- [TODO] Typing all functions
v0.0.1
- Added some tests.
- Wrapping spaCy tokenizers, with some vocab management.
- Packaging :confetti:
Upcoming
- Models
- Classifiers
- Encoders
- Transformers (USE pytorch-transformers by :huggingface:)
- Using FastProgress for progress + live plotting
- W&B integration
- ?? (tell me here)
Contributions
I'm eager to implement more tricks/features in the library, while maintaining the flat structure (and ensuring backward compatibility). Open to suggestions and contributions. Thanks!
PS: Always appreciate more tests.
Acknowledgements
An important part of the code was designed and tested by:
Gaurav Maheshwari · GitHub @saist1993 · Twitter @__gauravm
Source: https://pypi.org/project/my-torch/0.0.3/
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8a3) Gecko/20040808
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8a3) Gecko/20040808
At the moment only attributes without namespace can be used as ID attributes.
To support for example X+V's xv:id, it should be possible to define the
ID attribute name and namespace. So possibly we could add
set/getIDAttributeNamespace() to nsINodeInfo
Reproducible: Always
Steps to Reproduce:
Namespace information should be able to get also from nsIContent.
*** Bug 258237 has been marked as a duplicate of this bug. ***
Or am I wrong here. If someone needs namespace ID attribute he can implement
GetID();
Hmm, how did I create that duplicate.
First off, you're correct that ID attributes in fact have to be in the null
namespace in Mozilla right now (and the code you're looking at, along with
nsIContent and all callers are what would need fixing).
That said, the concept of namespaced IDs makes no sense. Consider:
<html:div
What should happen?
The point is, the ID attribute should be determined by the language the element
is from (as specified by the element's namespace). There's no need to namespace
the ID attr at that point... So it sounds to me like we would be better off
talking to "X+V" (whatever/whoever that is; got a link?) about this than going off and implementing something that is ill-defined in general.
Won't this be solved, eventually, when 'xml:id' becomes a standard?
<html:div ?
Ok, I meant it more as a reply to comment 0. If you can use 'xml:id', there is
no need for namespaces. Shouldn't the W3C define which ID gets preference in the
'xml:id' draft?
Why can't an element have more than one ID? Am I missing something?
Ian, see (and for the XML 1.0 equivalent).
Anne, the xml:id draft doesn't specify the case I cite because it is technically
not valid XML. This is not to say that people won't write it, and we're not a
validating parser...
As you say, that's a validity constraint, and so largely irrelevant here.
I don't understand the problem with having more than one ID.
X+V adds a new ID attribute to VoiceXML fields:
The new attribute is in X+V namespace, see the DTD or field element.
Right. The question is why they're doing this instead of putting the ID
attribute in the null namespace. If people all took this route, you could end up
in a situation where you wanted two ID attributes in different namespaces on the
node, and the XML spec prohibits that...
Because the element itself might not be under your control.
XML prohibits a validating UA from processing a document with a DTD that
explicitly marks two attributes on the element as being of type ID. But since
Web UAs typically aren't validating, and namespaced documents typically don't
have a DTD, I don't see that it's very relevant. :-)
>?
> and namespaced documents typically don't have a DTD
I thought the point was that the UA assumed knowledge of the document language
(including things like IDs that would be defined in the DTD associated to that
document) from the namespace? Which rather implies to me that restrictions
placed on such documents with DTDs would also apply to DTD-less documents...
anything else doesn't really make much sense unless the intent is to confuse people.
FWIW, the DOM Level 3 Core spec defines a way to flag any attribute as an ID, see:
and friends...
(In reply to comment #14)
> >?
Well, I don't think changing nsIStyledContent would be necessary since any
namespaced ID attributes would presumably be IDs for any elements.
Actually, no. The links in comment 11 show that the xv:id attribute is only an
ID for vxml:field elements per that spec. Other vxml elements ("rule",
"cancel") use an "id" attribute in the null namespace for their ID. Still
others have no attr of type ID defined at all. The XHTML attributes being
imported into this spec naturally use "id" in the null namespace. There's also
some mumbling about an "id" attribute in the null namespace at the bottom of the
table (search for "vxml:field&" in the page at)
The basic problem here is of course that DTDs aren't "namespaced" and that
id-attributes is declared in the DTD, thus you can't specify a namespace for the
id-attribute, only a name.
In my mind defining namespaced id-attributes will always get you on thin ice.
Namespaced attributes are intended for "global" attributes that can be placed on
any element, such as ev:event, xlink:href, or xsl:use-attribute-sets. And if you
set such an attribute on an element that already has an id defined you'll break
the rule defined in comment 9.
I do realize that it's not the author that walked out on thin ice here, but
rather was pushed out there by the W3C. I vote for WONTFIX here (and in bug 275196, for that matter).
Source: https://bugzilla.mozilla.org/show_bug.cgi?id=258238
I’ve been writing Ruby code for years, and like me, you may reach out for familiar tools like Redis or Sidekiq to process background jobs and data. I’d like to show you why sometimes, pure Elixir is more than enough to get the job done.
This is the story of how I replaced Redis/Exq with pure Elixir and the Erlang :queue class.
A huge thank you to omgneering for his videos on GenServer. If you’re having trouble understanding GenServer, this will be a tremendous help.
I built Magnetissimo as a learning exercise to really understand Elixir and see what it takes to ship production ready code. And while the version on Github right now works, it’s lacking in very important areas I set out in the initial goals.
Goals:
It wasn't easy for people to run the project; even developers had questions, and I wanted zero friction.
The less steps there are in running Magnetissimo the higher the adoption rate would be.
I found my solution in Erlang’s queue class, and in Elixir’s GenServer.
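For reference, :queue from Erlang's standard library is a purely functional FIFO that Elixir can call directly. A small illustrative session (the URLs are made up):

```elixir
q = :queue.new()
q = :queue.in({:page_link, "https://example.org/browse/1"}, q)
q = :queue.in({:torrent_link, "https://example.org/torrent/42"}, q)

# :queue.out/1 returns {{:value, item}, rest} or {:empty, queue}.
{{:value, item}, q} = :queue.out(q)
# item is {:page_link, "https://example.org/browse/1"}, the oldest entry
{{:value, _next}, _rest} = :queue.out(q)
```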
This was step one towards that ultimate goal.
The first thing I did was make Crawlers, and create a worker for each of
those crawlers. All supervised by my supervisor.
children = [
# Start the Ecto repository
supervisor(Magnetissimo.Repo, []),
# Start the endpoint when the application starts
supervisor(Magnetissimo.Endpoint, []),
worker(Magnetissimo.Crawler.ThePirateBay, []),
worker(Magnetissimo.Crawler.EZTV, []),
worker(Magnetissimo.Crawler.LimeTorrents, []),
worker(Magnetissimo.Crawler.Leetx, []),
worker(Magnetissimo.Crawler.Demonoid, []),
]
Each crawler is actually a GenServer implementation. For example, here’s ThePirateBay version of the crawler.
defmodule Magnetissimo.Crawler.ThePirateBay do
use GenServer
alias Magnetissimo.Torrent
alias Magnetissimo.Crawler.Helper
def start_link do
queue = initial_queue
GenServer.start_link(__MODULE__, queue)
end
def init(queue) do
schedule_work()
{:ok, queue}
end
defp schedule_work do
Process.send_after(self(), :work, 300) # 300 milliseconds
end
# Callbacks
def handle_info(:work, queue) do
  # Rebindings inside a `case` do not leak out, so capture its result:
  queue =
    case :queue.out(queue) do
      {{:value, item}, queue_2} ->
        process(item, queue_2)
      {:empty, _} ->
        IO.puts "Queue is empty - restarting queue."
        initial_queue
    end
  schedule_work()
  {:noreply, queue}
end
def process({:page_link, url}, queue) do
IO.puts "Downloading page: #{url}"
html_body = Helper.download(url)
if html_body != nil do
torrents = torrent_links(html_body)
queue = Enum.reduce(torrents, queue, fn torrent, queue ->
:queue.in({:torrent_link, torrent}, queue)
end)
end
queue
end
def process({:torrent_link, url}, queue) do
IO.puts "Downloading torrent: #{url}"
html_body = Helper.download(url)
if html_body != nil do
torrent_struct = torrent_information(html_body)
Torrent.save_torrent(torrent_struct)
end
queue
end
# Parser functions
def initial_queue do
urls = for i <- 1..6, j <- 1..50 do
{:page_link, "{i}00/#{j}/3"}
end
:queue.from_list(urls)
end
def torrent_links(html_body) do
html_body
|> Floki.find(".detName a")
|> Floki.attribute("href")
|> Enum.map(fn(url) -> "" <> url end)
end
def torrent_information(html_body) do
name = html_body
|> Floki.find("#title")
|> Floki.text
|> String.trim
|> HtmlEntities.decode
magnet = html_body
|> Floki.find(".download a")
|> Floki.attribute("href")
|> Enum.filter(fn(url) -> String.starts_with?(url, "magnet:") end)
|> Enum.at(0)
size = html_body
|> Floki.find("#detailsframe #details .col1 dd")
|> Enum.at(2)
|> Floki.text
|> String.split(<<194, 160>>)
|> Enum.at(2)
|> String.replace("(", "")
{seeders, _} = html_body
|> Floki.find("#detailsframe #details .col2 dd")
|> Enum.at(2)
|> Floki.text
|> Integer.parse
{leechers, _} = html_body
|> Floki.find("#detailsframe #details .col2 dd")
|> Enum.at(3)
|> Floki.text
|> Integer.parse
%{
name: name,
magnet: magnet,
size: size,
website_source: "thepiratebay",
seeders: seeders,
leechers: leechers
}
end
end
initial_queue is a function that creates an Erlang :queue holding the initial URLs, which sprawl out and link to other pagination pages or individual torrent links.
Each element in my :queue is a tuple, with two parts:
{:torrent_link, “some_url”}
{:page_link, "some_url"}
Using function pattern matching in the process methods above, I can easily either parse a pagination link or parse an individual torrent page.
The schedule_work function then schedules the next item to be processed.
The end result is more cohesive code, with less indirection. It’s much easier now to add a new crawler to the project. It’s also easier to know what exactly is running. Less chances for bugs, and more predictable growth behavior. One downside with this approach is volatility. If my app shuts down, I will lose the queue to process. But I’m comfortable with that for this particular project.
One potential upgrade will be to change from handle_info, to an async handle_call.
My next step is going to be using Distillery to build a single deployable executable so end users can just run it and have Magnetissimo start service on localhost.
Source: https://hackernoon.com/background-processing-using-elixir-genserver-and-the-erlang-queue-class-8d476d4942c2
|
Optimizations
In this lesson, you'll learn about new optimizations made for Python 3.8, some of which you get without changing your code. For example, looking up a field on a namedtuple:

>>> # Python 3.7
>>> timeit("raymond.twitter", globals=globals())
0.06228263299999526

>>> # Python 3.8
>>> timeit("raymond.twitter", globals=globals())
0.03338557700000422

You can see that looking up fields on a namedtuple is at least 30% faster in Python 3.8.
By default, timeit() runs within its own namespace, where it does not have access to variables or functions you have defined. There are essentially two ways you can give it access to objects like raymond in the example:

- Using setup=... to create all necessary variables or objects
- Using globals=... to change which namespace timeit() runs within
For simple exploration, option 2 is usually easier. Option 1 gives you slightly better control, as it's easier to keep track of what's in the setup statement than to have full control of what's in a namespace. However, for examples like these, option 2 is more than adequate.
Using globals=globals() is particularly easy when exploring, as that effectively tells timeit() to run within the current global namespace.
See the documentation for more details about how timeit() works.
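The two options can be compared side by side. This sketch assumes a namedtuple instance named raymond, mirroring the example above:

```python
from collections import namedtuple
from timeit import timeit

Person = namedtuple("Person", "twitter")
raymond = Person(twitter="@raymondh")

# Option 1: setup= recreates the objects inside timeit's own namespace.
t1 = timeit(
    "raymond.twitter",
    setup=(
        "from collections import namedtuple\n"
        "Person = namedtuple('Person', 'twitter')\n"
        "raymond = Person(twitter='@raymondh')"
    ),
    number=100_000,
)

# Option 2: globals= lets timeit see the current global namespace,
# so raymond defined above is visible without any setup code.
t2 = timeit("raymond.twitter", globals=globals(), number=100_000)

print(f"setup: {t1:.4f}s  globals: {t2:.4f}s")
```

Both calls time the same attribute lookup; the only difference is where timeit() finds the raymond object.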
|
https://realpython.com/lessons/optimizations/
|
CC-MAIN-2020-40
|
refinedweb
| 289
| 68.47
|
The previous two articles in this series, Introduction to Play 2 for Java, and Developing Scalable Web Applications with Play, explored the value of the Play Framework, set up a development environment, wrote a Hello, World application, and then explored Play's support for domain-driven design and its use of Scala templates as we built a simple Widget management application. Now we turn our attention to probably the most exciting part of Play: asynchronous processing. Here we explore Play's support for sending messages to "actors," relinquishing the request processing thread while those actors run, and then assembling and returning a response when those actors complete. Furthermore, we explore integrating Play with Akka so that our play applications can send messages to actors running in a separate Akka server for processing. In short, we're going to learn how to service far more simultaneous requests than we have threads and scale our application almost infinitely.
The code for the examples provided in this article can be downloaded here.
Asynchronous Processing
The Play Framework not only allows you to think in terms of HTTP rather than Java APIs, which alone would be enough to encourage you to adopt it, but it also allows your application to relinquish its request processing thread while executing long-running operations. For example, in a standard web framework, if you need to make a dozen database calls to satisfy your request, then you would block your thread while waiting for the database to respond. If your web container had 50 threads, then at most you could support 50 simultaneous requests. Play, however, allows you to build a message, send it to an "actor," and then relinquish its thread. The actor can then make your database calls for you and, when it is finished processing, it can send your Play application a response message. Play delivers the message to your application, along with the request/response context so that you can respond back to the caller. This means that 50 threads can service far more than 50 simultaneous requests. In addition, the actors to which you send messages do not necessarily need to be collocated with your application; they can be running in an Akka server on another machine. This section demonstrates how to execute actors in the Play JVM, and the next section demonstrates how to execute actors on an external server.
Thus far our controller actions have returned a Result, but now we're going to change them to return the promise of a result: Promise<Result>. Essentially this means that we will "eventually" return a response. Play knows that when it sees a promise for a result that it can suspend processing of that request and reuse the thread for other operations. When the result does arrive, then Play can use a thread to extract the response, convert it to a Result, and return that Result back to the caller.
A full description of Akka actors is beyond the scope of this article, but I need to give you enough to be dangerous. (I have also written a separate introductory article about Akka.) Akka implements the Actor Model, which was defined in 1973 to support concurrent systems. Interest in the Actor Model has reemerged in recent years with the advent of cloud computing: in 1973 it was about distributing processing across multiple physical machines, whereas now we're distributing processing across multiple virtual machines.
Akka operates through a level of indirection: All actors live in an "ActorSystem" and your application requests a reference to an actor (ActorRef) from the ActorSystem. Your application constructs a message and sends it to the ActorRef. The ActorRef delivers the message to a MessageDispatcher that in turn delivers the message to the Actor's MessageQueue. When the actor is allotted CPU time, the actor's Mailbox checks its MessageQueue and, if there are messages available, the Mailbox removes the message from the MessageQueue and passes it to the Actor's onReceive() method. All this is summarized in Figure 1.
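To make the mailbox flow concrete, here is a toy, single-threaded illustration in plain Java. This is my own simplification, not Akka's actual implementation: tell() only enqueues the message, and it is handled later, when the "actor" is given time to drain its queue.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy mailbox: tell() never runs the handler directly; it just enqueues.
// A scheduler (here, simply the caller) later drains the queue, like a
// Mailbox feeding messages into onReceive() when the actor gets CPU time.
class ToyActor {
    private final Queue<String> mailbox = new ArrayDeque<>();
    private final StringBuilder received = new StringBuilder();

    void tell(String message) {
        mailbox.add(message);          // asynchronous hand-off
    }

    void drain() {                     // stand-in for scheduled processing
        String msg;
        while ((msg = mailbox.poll()) != null) {
            onReceive(msg);
        }
    }

    private void onReceive(String message) {
        received.append("Hello, ").append(message).append('\n');
    }

    String log() { return received.toString(); }
}

public class Main {
    public static void main(String[] args) {
        ToyActor actor = new ToyActor();
        actor.tell("Alice");
        actor.tell("Bob");
        actor.drain();
        System.out.print(actor.log());
    }
}
```

The point of the indirection is visible even in the toy: the sender returns immediately after tell(), and the backlog simply sits in the queue until processing capacity is available.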
The benefit to this approach is that we can have millions of messages passing through a JVM and the application will not crash: Under extreme load the MessageQueue may back up, but the actors will process messages using the JVM's threads as they are able. Furthermore, the indirection allows the actor's location to be decoupled from the client. (The actor may be in the same JVM or across the country running in another data center.)
In this section we want to create an actor in the local JVM and send it a message. Listing 1 shows the source code for our HelloLocalActor class.
Listing 1. HelloLocalActor.java
package actors; import akka.actor.UntypedActor; import com.geekcap.informit.akka.MyMessage; /** * Local Hello, World Actor */ public class HelloLocalActor extends UntypedActor { @Override public void onReceive( Object message ) throws Exception { if( message instanceof MyMessage ) { MyMessage myMessage = ( MyMessage )message; myMessage.setMessage( "Local Hello, " + myMessage.getMessage() ); getSender().tell( myMessage, getSelf() ); } else { unhandled( message ); } } }
Actors extend UntypedActor and override its onReceive() method, which is passed an Object. The method typically inspects the type of the message and then either handles it or passes it to unhandled(message). The message that we're passing around is of type MyMessage, which wraps a single String property named message (shown in Listing 2). If the message is of type MyMessage, then the HelloLocalActor prefixes the message with "Local Hello, " and notifies our sender by invoking getSender().tell(). getSender() returns a reference to the Actor that sent the message, and tell() is the mechanism through which we can send a response message. The tell() method accepts the message to send as well as a reference to the sender of the message, which in this case is the HelloLocalActor.
Listing 2. MyMessage.java
package com.geekcap.informit.akka; import java.io.Serializable; public class MyMessage implements Serializable { private String message; public MyMessage() { } public MyMessage( String message ) { this.message = message; } public String getMessage() { return message; } public void setMessage( String message ) { this.message = message; } }
Now that we have an actor that can process a MyMessage, let's add a controller action that can call it. Listing 3 shows the source code for the first version of our Application class, which contains a localHello() action.
Listing 3. Application.java
package controllers; import akka.actor.ActorSelection; import akka.actor.ActorSystem; import akka.actor.Props; import play.*; import play.libs.Akka; import play.libs.F.Promise; import play.libs.F.Function; import play.mvc.*; import views.html.*; import static akka.pattern.Patterns.ask; import actors.HelloLocalActor; import com.geekcap.informit.akka.MyMessage; public class Application extends Controller { static ActorSystem actorSystem = ActorSystem.create( "play" ); static { // Create our local actors actorSystem.actorOf( Props.create( HelloLocalActor.class ), "HelloLocalActor" ); } public static Result index() { return ok(index.render("Your new application is ready.")); } /** * Controller action that constructs a MyMessage and sends it to our local * Hello, World actor * * @param name The name of the person to greet * @return The promise of a Result */ public static Promise<Result> localHello( String name ) { // Look up the actor ActorSelection myActor = actorSystem.actorSelection( "user/HelloLocalActor" ); // Connstruct our message MyMessage message = new MyMessage( name ); // As the actor for a response to the message (and a 30 second timeout); // ask returns an Akka Future, so we wrap it with a Play Promise return Promise.wrap(ask(myActor, message, 30000)).map( new Function<Object, Result>() { public Result apply(Object response) { if( response instanceof MyMessage ) { MyMessage message = ( MyMessage )response; return ok( message.getMessage() ); } return notFound( "Message is not of type MyMessage" ); } } ); } }
The Application class contains a static reference to the ActorSystem and initializes it when the class is loaded. We need the ActorSystem to host our actors as well as to send and receive messages. The ActorSystem is named, which makes it addressable and makes it possible to have multiple ActorSystems in the same JVM. In our case we named our ActorSystem "play", but you could just as easily have named it "foo" or "bar". Furthermore, there is a static code block in which we create the HelloLocalActor. We create actors by invoking the actorOf() method on the ActorSystem (there are other mechanisms, but this is certainly one of the easiest), passing it a Props object with the class that implements the actor. We also pass the actorOf() method the name of the actor so that it will be easier for us to look up later.
When the localHello() action is invoked, we search for our actor, by name, using the ActorSystem's actorSelection() method. Actors are identified using an actor path, which is of the format:
akka://ActorSystemName@server:port/guardian/TopLevelActor/SubActor
In this case we are looking for an actor in the local JVM and our ActorSystem is already named, so we do not need to specify the ActorSystemName, the server, or the port. There are two guardians in Akka: system and user. System contains all of Akka's internal actors and user contains ours. The HelloLocalActor is defined directly in the ActorSystem, so it is considered a "top-level actor". If it were to create sub-actors of its own, then they would be defined as sub-actors of the HelloLocalActor. Therefore we can find our actor with the path "user/HelloLocalActor". In the next section we'll be looking up an actor that is not in our local JVM, so we'll see a full actor path.
The ActorSelection is an ActorRef, so at this point we just need to construct a message and send it to the ActorRef. We construct a simple MyMessage and then enter the scary-looking code. There's a lot going on in the next line, so let's review what it's doing step-by-step:
- Patterns.ask: This is an Akka function that asynchronously sends a message to an actor (an ActorRef) with a timeout, and eventually returns a response through a scala.concurrent.Future<Object> object. Note that the target actor needs to send the result to the sender reference provided.
- F.Promise.wrap() accepts a Future<Object> and returns a F.Promise<Object>. Akka works in terms of futures, but Play works in terms of promises, so this is just a wrapper to integrate Akka with Play.
- map() accepts a Function that maps an Object to a Result. When we receive our response from the actor, it will be in terms of an Object, but Play wants a Result.
- The Function has an apply(Object) method that accepts an Object and returns a Result. In this case we inspect the message to make sure it is a MyMessage and then return an HTTP 200 OK message containing the text of the message. We could have just as easily passed the MyMessage to a template to render the response, but I wanted to keep it simple here.
Stating this more verbosely, when we call the ask() method, Akka asynchronously sends the message to the specified actor via its ActorRef. Akka immediately returns a Future<Object> that will "eventually" have the response from the actor. Play uses Promises rather than Futures, so the Promise.wrap() method wraps the Future with a Promise that Play knows how to handle. When the actor is complete, the response is sent to the future (Scala code) that is wrapped in the promise, and we provide a mapping function that converts the Object to a Play Result. The Result is then returned to the caller as though the entire operation happened synchronously.
Next, we need to add a new route to the routes file to send a request to the localHello() method:
GET /local-hello/:name controllers.Application.localHello( name : String )
Finally, we need to add Akka support to our build file (build.sbt). Listing 4 shows the contents of our build.sbt file.
Listing 4. build.sbt
name := "SimplePlayApp" version := "1.0-SNAPSHOT" libraryDependencies ++= Seq( javaJdbc, javaEbean, cache, "com.typesafe.akka" % "akka-remote_2.10" % "2.2.3", "com.geekcap.informit.akka" % "akka-messages" % "1.0-SNAPSHOT" ) play.Project.playJavaSettings
We could import Akka's actors package, but because in the next section we're going to call an external Akka server, I opted to use akka-remote. Note that the version is not the latest: you need to pair your Play and Akka versions. (I found this out the hard way, using the latest version and seeing weird errors that did not point to the fact that I had the wrong version.) The notation is a little different from a Maven POM file, but the information is the same:
group ID % artifact ID % version
You'll notice that I have a separate project for akka-messages. We will be serializing MyMessage instances and sending them across the network to the Akka server (called a micro-kernel) so it is important that the messages are identical. Rather than copying-and-pasting the code, I decided to create another project that includes just our message(s) and import that project into both of our projects (Play and Akka).
With all this complete, start Play (execute play from the command line and invoke the run command from the Play prompt), open a browser to the /local-hello/:name route (for example, /local-hello/YourName), and you should see "Local Hello, YourName".
Integration with Akka
When I think about the true power of Play, what comes to mind is a web framework that accepts a request, dispatches work to one or more external servers, and then allows its thread to be used by other requests while the work is completed elsewhere. Play runs on top of Akka, and integrating Akka Remoting into Play is straightforward, so it makes it a natural choice. Listing 5 shows the source code for our actor, which looks remarkably similar to the HelloLocalActor created in the previous section.
Listing 5. HelloWorldActor.java
package com.geekcap.informit.akka; import akka.actor.UntypedActor; public class HelloWorldActor extends UntypedActor { @Override public void onReceive( Object message ) throws Exception { if( message instanceof MyMessage ) { MyMessage myMessage = ( MyMessage )message; System.out.println( "Received message: " + message ); myMessage.setMessage( "Hello, " + myMessage.getMessage() ); getSender().tell( myMessage, getSelf() ); } else { unhandled( message ); } } }
This actor receives a message, validates that it is an instance of MyMessage, and returns a response to the sender that is "Hello, " + the body of the supplied message. This is the same functionality as our local actor, but we're going to deploy it to Akka directly.
Deploying actors to an Akka server, which Akka calls a "micro-kernel", requires you to build a "bootable" class that manages the startup and shutdown life cycle events of your actors. Listing 6 shows the source code for our life cycle management class.
Listing 6. MyKernel.java
package com.geekcap.informit.akka; import akka.actor.ActorSystem; import akka.actor.Props; import akka.kernel.Bootable; public class MyKernel implements Bootable { final ActorSystem system = ActorSystem.create("mykernel"); public void shutdown() { // Shutdown our actor system system.shutdown(); } public void startup() { // Create our actors system.actorOf( Props.create( HelloWorldActor.class ), "HelloWorldActor" ); } }
Listing 6 creates a class called MyKernel that implements the akka.kernel.Bootable interface. This interface defines two methods: startup() and shutdown(), which are called when the kernel starts up and shuts down, respectively. We create an ActorSystem named "mykernel" as our bootable class is created and we shut it down when the shutdown() method is called. You are free to name your ActorSystem anything that you want: When Play sends a message to our ActorSystem, it will send the name as a parameter in the actor path. In the startup() method we create all our top-level actors, with their names.
To make our actor available remotely, we need to add an application.conf file to the root of our resultant JAR file. In Maven projects, we can put this file in src/main/resources. Listing 7 shows the contents of the application.conf file.
Listing 7. application.conf
akka { actor { provider = "akka.remote.RemoteActorRefProvider" } remote { enabled-transports = ["akka.remote.netty.tcp"] netty.tcp { hostname = "127.0.0.1" port = 2552 } } }
The application.conf file sets up a remote provider that listens on the local machine on port 2552, which is Akka's default port. This configuration allows external Akka clients to send messages to actors running in our Akka micro-kernel.
Listing 8 shows the contents of the Maven POM file that builds the Akka project.
Listing 8. pom.xml file for Akka Actors
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.geekcap.informit.akka</groupId> <artifactId>akka-actors</artifactId> <packaging>jar</packaging> <version>1.0-SNAPSHOT</version> <name>akka-actors</name> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> <dependency> <groupId>com.typesafe.akka</groupId> <artifactId>akka-actor_2.11.0-M3</artifactId> <version>2.2.0</version> </dependency> <dependency> <groupId>com.typesafe.akka</groupId> <artifactId>akka-kernel_2.10</artifactId> <version>2.3.2</version> </dependency> <dependency> <groupId>com.geekcap.informit.akka</groupId> <artifactId>akka-messages</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.3.2</version> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> </plugins> </build> </project>
Listing 8 builds our Akka actor and kernel and bundles the application.conf file with them into a single JAR file. The POM file includes the Akka actor and kernel dependencies, but also includes a reference to our akka-messages project. The source code attached to this article includes that project - you'll need to build it before you can build this one. And remember that we externalized our message to its own project so that we could include it in both the Akka project and the Play project.
You can build the project with the following command:
mvn clean install
Now that you have your actor and kernel in a JAR file, you need to set up an Akka environment. You can download Akka from the Akka website. I downloaded the previous version (2.2.4) to ensure that it is compatible with the version of Play we installed (2.2.3). The specific versions do not matter; just make sure that the Play and Akka versions you install match up. Download the ZIP file and decompress it to your hard drive. Next, set the AKKA_HOME environment variable to the directory to which you decompressed the Akka archive.
To deploy your actor and kernel to Akka, copy the akka-actors.jar file that you just built to Akka's deploy directory and copy the akka-messages.jar file (that contains the MyMessage class) to Akka's lib/akka directory. With these two files in place, you can launch Akka from the bin directory by executing the following command:
./akka com.geekcap.informit.akka.MyKernel
After the Akka header is displayed, you should see something like the following:
Starting up com.geekcap.informit.akka.MyKernel Successfully started Akka
Now we need to retrofit the Play application to make the remote call to Akka. We already included the Akka remoting dependency in our build.sbt file, but to get called back we're going to need to add the following to the end of our conf/application.conf file:
akka.default-dispatcher.fork-join-executor.pool-size-max = 64 akka.actor.debug.receive = on akka { actor { provider = "akka.remote.RemoteActorRefProvider" } remote { enabled-transports = ["akka.remote.netty.tcp"] netty.tcp { hostname = "127.0.0.1" port = 2555 } } }
This will configure Play to listen for callbacks from Akka on port 2555. (The port number doesn't matter; it just needs to be different from the Akka port if you're running them on the same machine.) Next, we're going to add a new route and a new controller action to our Application class. The following shows the new route (added to the conf/routes file):
GET /hello/:name controllers.Application.hello( name : String )
This maps a GET request to hello/:name to the hello() action in the Application class, which is shown in Listing 9.
Listing 9. Application Class's hello() Method
public static Promise<Result> hello( String name ) { ActorSelection myActor = actorSystem.actorSelection( "akka.tcp://mykernel@127.0.0.1:2552/user/HelloWorldActor" ); MyMessage message = new MyMessage( name ); return Promise.wrap(ask(myActor, message, 30000)).map( new Function<Object, Result>() { public Result apply(Object response) { if( response instanceof MyMessage ) { MyMessage message = ( MyMessage )response; return ok( message.getMessage() ); } return notFound( "Message is not of type MyMessage" ); } } ); }
The hello() method in Listing 9 looks almost identical to our localHello() method in Listing 3. The only difference is that we changed the actor path from "user/HelloLocalActor" to point to the HelloActor we have running in Akka:
akka.tcp://mykernel@127.0.0.1:2552/user/HelloWorldActor
This actor path can be defined as follows:
- akka: Identifies this as an actor path.
- tcp: Defines the call as using TCP (Transmission Control Protocol), which will be resolved to Netty from the application.conf file.
- mykernel: The name of the actor system, which we defined in the MyKernel class in the Akka project.
- 127.0.0.1:2552: The address and port of Akka.
- user: The user guardian, which is the guardian that manages all our top-level actors.
- HelloWorldActor: The name of the top-level actor to which to send the message.
And that's it. Save your file, start Play if it is not already running, and then open a web browser to the /hello/:name route (for example, /hello/YourName).
As a response you should see "Hello, YourName". In the Play console, you should see something like the following:
[INFO] [05/23/2014 14:34:32.395] [play-akka.actor.default-dispatcher-5] [Remoting] Starting remoting [INFO] [05/23/2014 14:34:33.490] [play-akka.actor.default-dispatcher-5] [Remoting] Remoting started; listening on addresses :[akka.tcp://play@127.0.0.1:2555]
This says that Play has started remoting and is listening for a response in the actor system "play", which is defined in Listing 3, on the local machine (127.0.0.1) on port 2555, both of which are defined in application.conf.
In the Akka console you should see something like the following:
Received message: com.geekcap.informit.akka.MyMessage@5a5a7c64
This is from the System.out.println() call that we made in the HelloWorldActor class.
Summary
The Play Framework not only provides a natural web-centric paradigm for developing web applications, but it also can be used to asynchronously process requests while not monopolizing threads that are doing nothing more but waiting for long-running operations. We explored Play's asynchronous processing model by first delegating request processing to a local actor running in the same JVM and then sending a message to an Akka micro-kernel for processing on a potentially different server. This is where the true power of Play and Akka comes from: Your Play application can receive requests, dispatch work to a cluster of Akka micro-kernels, and then, when that processing is complete, it can compose a response to send to the caller. And while it is waiting for a response from the remote actors, Play can give up the request processing thread to allow that thread to service additional requests. In short this means that if you have 50 threads in your thread pool, you can satisfy far more than 50 simultaneous requests!
|
http://www.informit.com/articles/article.aspx?p=2228804
|
CC-MAIN-2016-50
|
refinedweb
| 3,886
| 55.03
|
The provided samples, that I converted to Python tests, give a better idea of our task:
def test_provided_1(self):
    self.assertEqual(151, solution('115'))

def test_provided_2(self):
    self.assertEqual(2048, solution('842'))

def test_provided_3(self):
    self.assertEqual(80000, solution('8000'))

115 is followed by 151. All the other permutations of the three digits are higher than that.
There is no permutation of 842 that is bigger than it. So we need to add a zero, getting 2048.
Same goes for 8000; we have to add a zero, getting 80000.
I felt it was too complicated to check the number and trying to build its follower considering its digits, and I went instead for an almost brute force approach that comes naturally from watching the test case. Check the permutations, and see which one is the next one. If there is no next one, add a zero to the number in second position and return it.
I implemented this algorithm in Python3 adding just a tiny variation. First I generate the number with an added zero and then I check all the permutations. The reason for this inversion should be clear following the code.
digits = []
for c in line:  # 1
    if c != '0':
        insort(digits, c)  # 2
while len(digits) <= len(line):  # 3
    digits.insert(1, '0')
result = int(''.join(digits))

1. Loop on the digits of the passed number, stored in the string named line.
2. I store all the digits but the zeros, sorted in natural order, in the buffer list named digits. The insort function comes from the bisect package.
3. Then I push into the buffer all the required zeros, right after the first (lowest) significant digit. In this way I get the smallest possible number having one digit more than the passed one.
Now I am ready to loop on all the permutations for the passed number:
current = int(line)
for item in permutations(line):
    candidate = int(''.join(item))  # 1
    if result > candidate > current:  # 2
        result = candidate

1. I get a permutation and convert it to an integer. See the itertools library for details.
2. If the current candidate is less than the tentative solution and bigger than the number passed in input, I discard the previously calculated result and keep the current one.
At the end of the loop I have in result my solution.
I feared that this intuitive algorithm was too expensive. Fortunately, I was wrong. It has been accepted with good points. Test case and python3 script are on GitHub.
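Putting the fragments together, a self-contained version of the script described above could look like this (the function name solution matches the tests; the body is assembled from the snippets):

```python
from bisect import insort
from itertools import permutations

def solution(line):
    # Fallback: the smallest number with one digit more than the input.
    # Non-zero digits sorted ascending, with all the zeros (plus one
    # extra) inserted right after the first, lowest digit.
    digits = []
    for c in line:
        if c != '0':
            insort(digits, c)
    while len(digits) <= len(line):
        digits.insert(1, '0')
    result = int(''.join(digits))

    # Scan all permutations of the input digits, keeping the smallest
    # one that is still strictly bigger than the input number.
    current = int(line)
    for item in permutations(line):
        candidate = int(''.join(item))
        if result > candidate > current:
            result = candidate
    return result

print(solution('115'), solution('842'), solution('8000'))  # 151 2048 80000
```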
|
http://thisthread.blogspot.com/2017/03/codeeval-following-integer.html
|
CC-MAIN-2017-51
|
refinedweb
| 418
| 67.45
|
This is a C++ program to solve the 0-1 knapsack problem.
Here is source code of the C++ Program to Solve the 0-1 Knapsack Problem. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.
#include<stdio.h>
#include<iostream>
using namespace std;
// A utility function that returns maximum of two integers
int max(int a, int b)
{
return (a > b) ? a : b;
}
// Returns the maximum value that can be put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
// Base Case
if (n == 0 || W == 0)
return 0;
// If weight of the nth item is more than knapsack capacity W,
// then this item cannot be included in the optimal solution
if (wt[n - 1] > W)
return knapSack(W, wt, val, n - 1);
// Otherwise, return the maximum of two cases:
// (1) nth item included and (2) nth item not included
return max(val[n - 1] + knapSack(W - wt[n - 1], wt, val, n - 1),
knapSack(W, wt, val, n - 1));
}
// Driver program to test above function
int main()
{
cout << "Enter the number of items in a Knapsack:";
int n, W;
cin >> n;
int val[n], wt[n];
for (int i = 0; i < n; i++)
{
cout << "Enter value and weight for item " << i << ":";
cin >> val[i];
cin >> wt[i];
}
// int val[] = { 60, 100, 120 };
// int wt[] = { 10, 20, 30 };
// int W = 50;
cout << "Enter the capacity of knapsack";
cin >> W;
cout << knapSack(W, wt, val, n);
return 0;
}
Output:
$ g++ 0-1Knapsack.cpp $ a.out Enter the number of items in a Knapsack:5 Enter value and weight for item 0:11 111 Enter value and weight for item 1:22 121 Enter value and weight for item 2:33 131 Enter value and weight for item 3:44 141 Enter value and weight for item 4:55 151 Enter the capacity of knapsack 300 99
Sanfoundry Global Education & Learning Series – 1000 C++ Programs.
Here’s the list of Best Reference Books in C++ Programming, Data Structures and Algorithms.
|
http://www.sanfoundry.com/cpp-program-solve-0-1-knapsack-problem/
|
CC-MAIN-2017-26
|
refinedweb
| 287
| 55
|
I need help with setting up a transition post function to change the due date of an issue. I've added ScriptRunner, as I get the impression that this is the only path to achieving what I want.
Now not having any experience in Java/ Groovy, to say I am a noob would be putting it kindly, but I have had a first attempt at it anyway. Would this work?
def newDueDate = "31/03/2017" put("/rest/api/2/issue/${issue.key}") .header("Content-Type","application/json") .body([ fields:[ due: newDueDate ] ]) .asString() logger.info("dueDate of " & newDueDate & "set successfully")
Note :: I just want to confirm that this is the correct approach as we only have a single instance of JIRA cloud, therefore any code I load will be going into the production environment and I would rather not mess this up.
Hi @Brendan Clough,
There are only three small changes you need to make to that script.
First, the date should be in YYYY-mm-dd format.
Second, to join strings/text together you need to use the + symbol, not the &.
Third, the field is referenced as duedate, not due.
def newDueDate = "2017-03-31"
def response = put("/rest/api/2/issue/${issue.key}")
    .header("Content-Type", "application/json")
    .body([ fields: [ duedate: newDueDate ] ])
    .asString()
// If the request fails, this will print out a nice message in the logs
// showing you what the error response was.
// The 204 number here is the HTTP status code for No Content, which is
// the documented response from this REST API.
assert response.status == 204
logger.info("dueDate of " + newDueDate + " set successfully")
Feel free to ask if you have any more questions, or raise a support ticket - there is a link to our support portal from the Diagnostics & Settings page provided by the addon.
Thanks,
Jon
Awesome thanks mate! When an issue transitions, is it possible to determine what status it was in before the transition? Or what transition it is moving to?
Yes! You get a variable, transitionInput, in the script's context which contains this data:
When adding a Post Function, if you click on the ? button underneath the code editor you'll get a popup dialog with a list of the variables that are already available in your script.
|
https://community.atlassian.com/t5/Jira-Core-questions/Need-help-setting-an-absolute-dueDate-in-Scriptrunner-on-a/qaq-p/185687
|
CC-MAIN-2019-13
|
refinedweb
| 402
| 63.8
|
Hi,
I’m trying to execute a .rjs, but instead of executing the javascript
code, the javascript code is shown on the html. Does any one know why?
thanks you very much
Sayoyo
example code:
on main.rhtml, I have:
<%= link_to_remote('add', :url=>{:action=>'openAdd'},
:update=>'addZone') -%>
on MainController.rb I have
def openAdd
end
on openAdd.rjs, I have
page.replace_html("addZone", :partial=>"openAdd")
on _openAdd.rhtml, I have
"add opened"
after clicking on "add" on the main.rhtml I got:
try { Element.update("addControlZone", "add opened\n"); } catch (e) {
alert('RJS error:\n\n' + e.toString());
alert('Element.update("addControlZone", "toto\n");'); throw e }
|
https://www.ruby-forum.com/t/rjs-not-executing/83252
|
CC-MAIN-2021-43
|
refinedweb
| 106
| 64.57
|