Bye Manual SQL, Hello Speedment!
Learn how you can write database applications rapidly using code generation and leverage Java 8's stream library.
There are pros and cons with both of these approaches, but both tend to involve writing a lot of boilerplate code that looks more or less the same across every codebase. In this article I will showcase another approach to easy database communication using an open source project called Speedment.
What is Speedment?
Speedment is a developer tool that generates Java classes from your SQL metadata. The generated code handles everything from setting up a connection to data retrieval and persistence. The system is designed to integrate seamlessly with the Java 8 Stream API so that you can query your database using lambdas without a single line of SQL. The created streams are optimized in the background to reduce the network load.
Setting Up a Project
In this article I will write a small application that asks for the user's name and age and persists them in a MySQL database. First off, we will define the database schema. Open up your MySQL console and enter the following:
CREATE DATABASE hellospeedment;
USE hellospeedment;

CREATE TABLE IF NOT EXISTS `user` (
    `id` bigint(20) NOT NULL AUTO_INCREMENT,
    `name` varchar(32) NOT NULL,
    `age` int(5) NOT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
Next we will create our Java project. Fire up your favorite IDE and create a new Maven project from an archetype. Archetypes are template projects that can be used to quickly define new Maven projects. Exactly how they are used differs between IDEs. The following information will have to be entered:
Similar archetypes are available for PostgreSQL and MariaDB as well.
On NetBeans, the archetype is usually found among the default ones indexed from the Maven Central Repository. When the project is created you should have something like this:
Launching the Speedment UI
Now that the project has been created it is time to start up the Speedment user interface. This is done by executing the speedment:gui Maven goal. In NetBeans and IntelliJ IDEA, a list of available Maven goals can be found from within the IDE. In NetBeans this is found in the Navigator window (often located in the bottom-left of the screen); the project root node must be selected for the goals to appear. In IntelliJ, the goals can be found under the "Maven Projects" tab on the far right of the screen. You might need to expand the "Project Name", "Plugins" and "speedment-maven-plugin" nodes to find it. In Eclipse, as far as I know, there is no list of goals; instead you will have to define the goal manually. There is a tutorial for doing this on the Speedment GitHub wiki.
When the user interface starts the first time it will ask for your email address. After that you can connect to your database.
The connection dialog will only allow you to choose between databases that you can connect to using the loaded JDBC drivers. If, for example, you want to use a PostgreSQL database, you should add the PostgreSQL driver to the <dependencies> tag of the speedment-maven-plugin section in the pom.xml file and then re-run the UI.
Once you have connected to the database, the main window opens. On the left side you can see a tree-view of the database. In the middle is the workspace where things like database connection, code generation and entity naming can be configured. You can select what part of the project to configure by selecting other nodes in the tree.
In this case, we will simply press the "Generate"-button in the toolbar to generate a project using the default settings. We can now close the UI and return to our IDE.
Write the Application
Now that Speedment has generated all the boilerplate code required to communicate with the HelloSpeedment database, we can focus on writing the actual application. Let's open the Main.java file created by the Maven archetype and modify the main() method.
public class Main {
    public static void main(String... params) {
        Speedment speedment = new HellospeedmentApplication()
            .withPassword("secret")
            .build();

        Manager<User> users = speedment.managerOf(User.class);
    }
}
In Speedment, an application is defined using a builder pattern. Runtime configuration can be done using the various withXXX() methods, and the platform is finalized when the build() method is called. In this case, we use this to set the MySQL password. Speedment never stores sensitive information like database passwords in its configuration files, so you will either have to have an unprotected database or set the password at runtime.
The next thing we want to do is to listen for user input. When a user starts the program, we should greet them and then ask for their name and age. We should then persist the user information in the database.
final Scanner scn = new Scanner(System.in);

System.out.print("What is your name? ");
final String name = scn.nextLine();

System.out.print("What is your age? ");
final int age = scn.nextInt();

try {
    users.newEmptyEntity()
        .setName(name)
        .setAge(age)
        .persist();
} catch (SpeedmentException ex) {
    System.out.println("That name was already taken.");
}
If the persistence fails, a SpeedmentException is thrown. This could for example happen if a user with that name already exists, since the name column in the schema is set to UNIQUE.
Reading the Persisted Data
Remember how I started out by telling you that Speedment fits in nicely with the Stream API in Java 8? Let's try it out! If we run the application above a few times we can populate the database with some users. We can then query the database using the same users manager.
System.out.println(
    users.stream()
        .filter(User.ID.lessThan(100))
        .map(User::toJson)
        .collect(joining(",\n  ", "[\n  ", "\n]"))
);
This will produce a result something like this:
[
  {"id":1,"name":"Adam","age":24},
  {"id":2,"name":"Bert","age":20},
  {"id":3,"name":"Carl","age":35},
  {"id":4,"name":"Dave","age":41},
  {"id":5,"name":"Eric","age":18}
]
Summary
This article has showcased how easy it is to write database applications using Speedment. We have created a project using a Maven archetype, launched the Speedment UI as a Maven goal, established a connection with a local database and generated application code. We have then managed to do both data persistence and querying without a single line of SQL!
That was all for this time.
PS: Speedment 2.3 Hamilton was just released the other day, and it contains a ton of really cool features for manipulating the code generator to fit your every need. Check it out!
Published at DZone with permission of Emil Forslund, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
The Berkeley DB subsystems can be accessed through interfaces from multiple languages. Applications can use Berkeley DB via C, C++ or Java, as well as a variety of scripting languages such as Perl, Python, Ruby or Tcl. Environments can be shared among applications written by using any of these interfaces..
dbstl is a C++ STL-style API for Berkeley DB, based on the C++ API above. With it, you can store data/objects of any type into, or retrieve them from, Berkeley DB databases as if you were using C++ STL containers. The full functionality of Berkeley DB can still be utilized via dbstl with little performance overhead; for example, you can use all transaction and/or replication functionality of Berkeley DB.
dbstl container/iterator class templates reside in the header files dbstl_vector.h, dbstl_map.h and dbstl_set.h. Among them, dbstl_vector.h contains dbstl::db_vector and its iterators; dbstl_map.h contains dbstl::db_map, dbstl::db_multimap and their iterators; and dbstl_set.h contains dbstl::db_set, dbstl::db_multiset and their iterators. You should include the needed header file(s) to use a container/iterator. Note that, unlike the standard C++ headers, these file names keep their .h extension --- to use dbstl::db_vector, you should do this:
#include "dbstl_vector.h"
rather than this:
#include "dbstl_vector"
These header files reside in the "stl" directory inside the Berkeley DB source root directory. If you have installed Berkeley DB, they are also available in the "include" directory of the installation.
Apart from the above three header files, you may also need to include the db_exception.h and db_utility.h files. The db_exception.h file contains all exception classes of dbstl, which integrate seamlessly with the Berkeley DB C++ API exceptions and the C++ standard exception classes in the std namespace. The db_utility.h file contains the DbstlElemTraits class, which helps you to store complex objects. These five header files are all that you need to include in order to make use of dbstl.
All symbols of dbstl, including classes, class templates, global functions, etc, reside in the namespace "dbstl", so in order to use them, you may also want to do this:
using namespace dbstl;
The dbstl library is built in the same place as the Berkeley DB library; you will need to build it and link against it in order to use dbstl.
While making use of dbstl, you will probably want to create environments or databases directly, or get and set the configuration of Berkeley DB environments or databases. You are allowed to do so via the Berkeley DB C/C++ API. Berkeley DB also provides backward-compatible dbm and hsearch interfaces for historic applications: after including a new header file and recompiling, such programs will run orders of magnitude faster, and the underlying databases can grow as large as necessary. Also, historic dbm applications can fail once some number of entries are inserted into the database, where the number depends on the effectiveness of the internal hashing function on the particular data set. This is not a problem with Berkeley DB.
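As a taste of the key-value model behind those historic interfaces, Python's standard-library dbm package exposes the same style of API. The sketch below uses the pure-Python dbm.dumb backend so that no Berkeley DB build is required; the database path is arbitrary:

```python
import dbm.dumb
import os
import tempfile

# Open (and create, mode "c") a throwaway database file.
path = os.path.join(tempfile.mkdtemp(), "demo")
db = dbm.dumb.open(path, "c")

# As in the C dbm interface, keys and values are byte strings.
db[b"alpha"] = b"1"
db[b"beta"] = b"2"
db.close()

# Reopen read-only and fetch the records back.
db = dbm.dumb.open(path, "r")
print(sorted(db.keys()))  # [b'alpha', b'beta']
print(db[b"beta"])        # b'2'
db.close()
```

With a Berkeley DB-backed dbm module the calls look the same; only the open function changes.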
import solar panels
1. Amorphous silicon thin-film solar panel
2. Can work in all weather conditions
3. CE & RoHS certificate
QF-45DB
Features:
1. Works in all weather conditions, including low light and cloudy conditions
2. Water, shock and rust resistant
3. Aluminum or plastic framed (the aluminum frame has plastic corner sleeves to prevent the solar panel from scratching the customer's application)
4. Comes with a quick-connect battery clamp or cigarette lighter plug
5. Different outputs available
6. Multiple pieces can be connected to acquire higher power
7. Mounting brackets and a supporting rod can be provided upon request
8. With blinking charging indicator
9. Tested under standard conditions: AM1.5, 100 mW/cm², module at 25°C
10. Solar cells in customer-required sizes and/or performance grades can also be provided
11. Encapsulation
Application:
Widely used in BIPV, PV power generation station and outdoor PV power supply | http://www.alibaba.com/product-detail/import-solar-panels_487235425.html | CC-MAIN-2015-14 | refinedweb | 176 | 52.97 |
I am attempting to generate a client that calls a web service. I was able to get it nearly working, except it seems that the main method being called ends up with a tag with an xmlns attribute (i.e. <inv:InventoryUpdateBatch). This is causing it to fail with "org.apache.axis2.databinding.ADBException: Unexpected subelement".
I know that removing the xmlns parameter on the main method's tag will work, as I confirmed it using SoapUI, by using the identical XML generated by SOAP::Lite, and removing that one xmlns attribute. Note that I am using call() to add a prefix (inv) to the method. That prefix is the same namespace that is in xmlns, which was previously serialized using register_ns(), so it shows up as an attribute on the soap:Envelope tag, xmlns:inv="...".
I am attempting to generate a client that calls a web service
You're working too hard; using SOAP::Simple/XML::Compile::SOAP is much easier (if you learn the SOAP lingo)
I have searched the cookbook and examples to no avail. Any help would be greatly appreciated. Thanks!
:) I remember answering a question like that before, something to do with encodingStyle and/or envelope, but I hate SOAP :) 2010/2011 was a good | http://www.perlmonks.org/?node_id=987017 | CC-MAIN-2014-23 | refinedweb | 210 | 60.35 |
Grouping content types and properties
This topic explains how to group content types and properties into logical entities, to make the editing experience more intuitive. You can for example organize page types in groups, to make it easier for editors to select the correct page type when creating pages in edit view. Properties can be grouped and ordered under tabs in the All Properties editing view.
Content types
To group for example page types, add the GroupName property to the content type to specify a group, and use the Order property to determine the order in which the groups are displayed. This is used for example when creating pages in edit view, and in the listing of content types in admin view.
Example: Here we have two page types - StandardPage and ArticlePage, belonging to the groups Basic pages and Facts articles respectively, where the Basic pages group will be displayed first since it has the lowest sort order.
[ContentType(
    GroupName = "Basic pages",
    Order = 1,
    DisplayName = "StandardPage",
    GUID = "abad391c-5563-4069-b4db-1bd94f7a1eea",
    Description = "To be used for basic content pages.")]
public class StandardPage : PageData
{
}

[ContentType(
    GroupName = "Facts articles",
    Order = 2,
    DisplayName = "ArticlePage",
    GUID = "b8fe8485-587d-4880-b485-a52430ea55de",
    Description = "Basic page type for creating articles.")]
public class ArticlePage : PageData
{
}
The result when creating a new page in edit view:
This can also be applied to other types of content, for example products and variants in Episerver Commerce.
Properties
Properties can be grouped under tabs. Use the Display attribute to specify the GroupName that will be displayed as a tab. The Order property controls the order of the displayed properties on the tab.
Example: The Article page has two properties - Author and Classification, which are displayed under a tab named Details. The Author property will be displayed first on the tab, since it has the lowest sort order number.
[Display(
    Name = "Author",
    Description = "Name of article author.",
    GroupName = "Details",
    Order = 1)]
public virtual String Author { get; set; }

[Display(
    Name = "Classification",
    Description = "Genre or type of article.",
    GroupName = "Details",
    Order = 2)]
public virtual String Classification { get; set; }
The result when editing an article in the All Properties editing view:
Built-in groups
Episerver provides a set of built-in tabs/groups that are used by built-in properties. Note that these built-in groups are only used for properties, not for content types. You can add your custom properties to the groups, to make them display under the built-in tabs. Constants for the built-in tabs are defined in EPiServer.DataAbstraction.SystemTabNames.
The Content and Settings tabs are available by default. Tabs can be edited from the CMS admin view. From here you can also define access levels, to create tabs wih properties that are only available for selected editor groups.
Note: Tabs without properties will not be displayed in the All properties editing view. Scheduling, Shortcut, and Categories are obsoleted groups.
Using group definitions
As previously described, you can define sort order on individual content types and properties. The order set indirectly defines in which order content groups are displayed when creating new content, and in which order tabs are shown when editing content in the All Properties editing view.
When the number of content types and properties increase, it is convenient to define the order of groups at a higher level, and then use order to sort among the content types and properties in each group. Normally you define groups as a list of constants that you use in the DisplayAttribute.
Example: The Article page type with a News content type group, and a Contact tab with an Image property.
[ContentType(GroupName = GroupNames.News, Order = 1)]
public class ArticlePage : PageData
{
    [Display(GroupName = GroupNames.Contact)]
    public virtual ContentReference Image { get; set; }
}

public static class GroupNames
{
    public const string News = "News";
    public const string Contact = "Contact";
}
You can also define the group names as constants in a separate class, and decorate the class with the GroupDefinitions attribute (which is automatically picked up). You can define multiple classes with the GroupDefinitions attribute, but you can define a single group name on one class only. Groups defined in code cannot be edited in admin view.
Set the order in which groups are displayed with the Display attribute. Properties and content types are sorted within each group by each individual order. Groups that have no order defined get Order=-1 and are displayed first.
Applying access levels
When you group content types and properties, you can apply access levels so that an editor must be part of a role, function, or other criteria to access the group. Required access level applies to both groups of properties and groups of content types.
Example: The Contact group with access level set to Publish.
[GroupDefinitions]
public static class GroupNames
{
    [Display(GroupName = "MyNews", Order = 1)]
    public const string News = "News";

    [RequiredAccess(AccessLevel.Publish)]
    public const string Contact = "Contact";
}
Overriding sort order of built-in groups
Groups without any order defined will fall back to the indirect sorting and will have sort index set to -1. It is possible to override built-in groups to change sort order.
Example: Overriding the default sort order for the Content tab.
[GroupDefinitions]
public static class GroupNames
{
    [Display(Order = 1000)]
    public const string Content = SystemTabNames.Content;
}
Note: You cannot edit groups that are defined in code from the admin view.
Overriding the default sort order for the Content Tab doesn't work. It generates an error stating that it was declared twice.
Hi,
I have tried to reproduce the problem you describe without any luck.
If I add
To Alloy MVC or Alloy Webforms I can change the order of the Tab, without getting any runtime error saying that the Display attribute has been added twice.
Siebel Object Interfaces Reference > Interfaces Reference > Applet Events >
The InvokeMethod event is triggered by a call to applet.InvokeMethod or a specialized method, or by a user-defined menu.
Applet_InvokeMethod(name, inputPropSet)
name: The name of the method that is triggered.

inputPropSet: A property set containing arguments to be passed to the InvokeMethod event.

Returns: Not applicable
Typical uses include showing or hiding controls, or setting a search specification. When accessing a business component from this event handler, use this.BusComp(), rather than TheApplication.ActiveBusComp.
Browser Script
Some special methods create, modify, or delete records. In some cases, events at the applet or business component level are triggered by these actions. If there is a requirement to perform a specific action before and after the method has been executed, these events can be used. In this example, code has been added to the PreInvokeMethod and InvokeMethod applet events to set and reset the flag and to the NewRecord server event to set the fields.
function Applet_PreInvokeMethod (name, inputPropSet)
{
   if (name == "Quote")
   {
      // Add code that needs to be executed BEFORE the special method
      // Set flag to "1"
      TheApplication().SetProfileAttr("flag", "1");
   }

   return ("ContinueOperation");
}
function Applet_InvokeMethod (name, inputPropSet)
{
   if (name == "Quote")
   {
      // Add code that needs to be executed AFTER the special method
      // Reset the flag to "0"
      TheApplication().SetProfileAttr("flag", "0");
   }
}
function BusComp_NewRecord ()
{
   if (TheApplication().GetProfileAttr("flag") == "1")
   {
      this.SetFieldValue("Field1", "Value1");
      this.SetFieldValue("Field2", "Value2");
      // . . .
   }
}
See Also: Applet_PreInvokeMethod Event, Application_InvokeMethod Event
I think this might be bad and I should feel bad. But could someone tell me if this is crazy for a completions plugin?
Specifically the part where I add "<" (less than sign) into the array using for item in html.
if ch != '<':
    html = [(list(item)[-2], "<" + list(item)[1]) for item in html]

I'm just covering my bases since a lot of folks are using this plugin; I'd hate to have to revert these changes later. Here is the plugin example:
class HtmlTagCompletions(sublime_plugin.EventListener):
    def on_query_completions(self, view, prefix, locations):
        pt = locations[0] - len(prefix) - 1
        ch = view.substr(sublime.Region(pt, pt + 1))
        if ch != '<':
            html = [(list(item)[-2], "<" + list(item)[1]) for item in html]
        return html, sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS
Thanks in advance!
Hmm... I'm not exactly sure what you're trying to do. If the question is, can you put a < into the autocompletion, yes. You can. However, I'm not sure I understand what the line is doing.
If I understand correctly, html is a list of tuples. So for each tuple, you are converting the tuple into a list, getting the 2nd and 2nd-to-last items, and adding them to the autocomplete? You lost me.
If you notice, the HTML tag completions in the html array for this plugin do not begin with a less-than character (<). The user will get a complete tag if they start with the less-than character, but not if they start with, say, the letter F. The code just adds a less-than character to the completion, but only when it's triggered by a character that is not the less-than character.
I hope I explained that well...
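To make that concrete, here is the same transform in isolation. The two completion entries are made up for illustration; they just stand in for the (trigger, contents) tuples in the plugin's html list, and plain tuple unpacking does the same job as the list(item) indexing:

```python
# Hypothetical stand-ins for the plugin's (trigger, contents) completion tuples.
html = [
    ("a\tTag", "a href=\"$1\">$0</a>"),
    ("b\tTag", "b>$0</b>"),
]

ch = "f"  # the character just before the completion prefix

# Prefix the inserted text with "<" only when the user didn't type one.
if ch != '<':
    html = [(trigger, "<" + contents) for trigger, contents in html]

print(html[0][1])  # <a href="$1">$0</a>
print(html[1][1])  # <b>$0</b>
```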
Okay. That makes sense, but why convert the tuples to a list and get the -2 element and 1 element?
Also, substr accepts points as a parameter. So you can replace
ch = view.substr(sublime.Region(pt, pt + 1))
with
ch = view.substr(pt)
I'm not sure how else to add in the less than character, it was the easiest way I could think of (can you suggest something easier?).By the way the code works, the main problem is any extra sublime-completions files aren't being added to the autocomplete.
I can see what you are trying to achieve but I am not sure that I would be happy with the end result. This setting:
// Additional situations to trigger auto complete
"auto_complete_triggers": [ {"selector": "text.html", "characters": "<"} ],
persuades ST to display the tag auto-completions after the opening angle bracket is typed. If you follow the path you are suggesting then your completions will pop up for practically every single character typed. For example, if you are typing plain text within a <p> tag, there is no way to distinguish whether you are continuing to type text or starting a new tag
[quote="C0D312"]Also, substr accepts points as a parameter. So you can replace
[/quote]
Nice! That might be a recent thing since that part of the code is from jps' html_completions.py in the default HTML package.
Well, you're right. Since you understand my direction, could we prevent triggering autocomplete within tags using the api?Edit: I realize this won't be possible and it makes me a little sad that autocomplete works the way it does for tags. Thanks.
In the on_query_completions,
if 'tag' in view.scope_name(pt):
    return
[quote="C0D312"]In the on_query_completions,
@COD312 I would not have thought of that without really spending more time with the API. Brilliant thanks!
The bigger issue though is the sublime-completions files aren't loading in the autocomplete list - unfortunately.
I'm not super familar with how the scoping system works but I believe theres away to say "minus scope x." So in your completion file, you can have scope = text.html, minus tags.
Edit: yep. Here's some info: viewtopic.php?f=2&t=1809&p=8405&hilit=scope+except#p8405 text.html - tag
Well yeah, your posts are really helpful man.
I'll see if I can't get to the bottom of that and post a new thread with more information if I can't get it to do what I need.Edit: Perfect. Thanks!
So okay. That solves the triggering unwanted autocompletes in tags.
But completions in sublime-completions files aren't being loaded into the autocomplete list (even with just text.html as the scope). I think it might be that ST is expecting an array/list to be returned by the plugin when there is no autocomplete, so that it has something to extend or append to.
Yes I think you need to return an empty list:
if 'tag' in view.scope_name(pt):
    return []
Also sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS is requesting that only your completions will appear in the list.
I still think you should test a small subset of your completions before getting too deeply involved. You can suppress them from appearing within 'tag' scope, but if you edit within a tag they won't reappear. And typing this text will cause your completions to pop up for every letter typed.
So I removed INHIBIT_WORD_COMPLETIONS | INHIBIT_EXPLICIT_COMPLETIONS and sublime-completions files still are not working with this.
Sublime-completion files work well when I return an empty list, but not when that list contains completions. Even after removing the INHIBIT directives.
Would this be a bug?
I'm not sure about this line - it might be creating a generator.
html = [(list(item)[-2], "<" + list(item)[1]) for item in html]
try following it with 'print html' and check it in the Console (Ctrl '). Also check the Console for error messages. Alternatively, try
html = [(list(item)[-2], "<" + list(item)[1]) for item in html]
print html
return (html)
I don't think it's that line; I can comment it out and sublime-completions files still don't load for HTML.
That line just adds a leading less than character to every second string in the completions list.I'm not sure how efficient it is; I'll likely add to a global variable later. | https://forum.sublimetext.com/t/adding-to-auto-completions-py/5521/10 | CC-MAIN-2016-44 | refinedweb | 1,025 | 65.32 |
Module Writing (AMX Mod X)
Latest revision as of 10:16, 12 February 2012
AMX Mod X Modules are written in C or C++ (the API is C compatible).
Contents
- 1 Introduction
- 2 Necessary Files
- 3 Setup
- 4 Compiler GUI
- 5 Message Module ( Demo )
- 6 In Conclusion
Introduction
So you want to be a module developer for Amxmodx! Well its not too hard. I will be doing this in Windows as I dont have a Linux box to frick around with at the moment. So, windows only at this point, a Linux version will be forthcoming!
Necessary Files
Well first we need to gather up the neccessary files for your computer. The first thing you need to know, its NOT complicated. If you can script in pawn, you can program in C/C++!
Metamod
As we all know, Metamod is the backbone of the whole thing; without it, mods for any HL game would not be possible unless we programmed everything ourselves! So many thanks to Will Day for the development of Metamod, and to BAILOPAN and his crew for helping to maintain and update it! Hats off to all! Ok, so now onto the file gathering! First things first: we need to go to Metamod: metamod.org, and from there we need to gather up 2 SDKs.
HL-SDK
The Half-Life SDK is the first thing on the list to gather up! We can get this at the same spot where we will be gathering up Metamod! So we click on the SDK link or go to: Here. From here we need to grab the HLSDK, preferably the one that has been tweaked for Metamod. So we are looking for this:
hlsdk-2.3-p3.zip - The Standard SDK v2.3 with various fixes and updates, that Metamod is compiled against. Files are in DOS format.
So let's grab hlsdk-2.3-p3.zip first! Once this is done you will need to extract it to your hard drive. We will be extracting these files to the following directory; these are arbitrary directories for the ease of this document, so don't sweat it. Once you're proficient you can move them around. :-) Ok, so extract the files to the following directory: c:\sdk_files\
Metamod SDK
Ok, so we have the HL-SDK; we now need the Metamod SDK. So we go back to the root of Metamod here and then click on the v1.xx Sourcecode zip to get the latest SDK from SourceForge. You can get that file from the best location for you, or for a quick link click metamod-1.xx-win.src.zip to get the files.
Amxmodx SDK
Now we need to grab the last of the SDKs so we can carry on. First we need to go to the AMX Mod X website so that we can get to the SDK via the Downloads area and SVN! Amxmodx SDK.
Unix2Dos the Files
So, once you have all the files, we need to fix them up a bit. One of the things that will become bothersome later is the Unix file format that the HLSDK / Metamod files have; it's supposed to be Windows, but it's not, so we need to convert the files. Soooo... we will use the following tutorial to fix this: GnuWin32. Once you have that set up you can use the following to repair all files easily: Repair.bat.
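If you'd rather not set up GnuWin32, the same conversion can be done with a few lines of Python. This is just a sketch; the SDK path in the comment is an example:

```python
def unix2dos(path):
    """Rewrite a file in place with DOS (CRLF) line endings."""
    with open(path, "rb") as f:
        data = f.read()
    # Normalize to LF first so files that are already DOS-formatted
    # don't end up with doubled carriage returns.
    data = data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
    with open(path, "wb") as f:
        f.write(data)

# Example: unix2dos(r"c:\sdk_files\hlsdk-2.3-p3\engine\eiface.h")
```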
Setup
On Windows, a particular directory layout is not required, but the following environment variables must be set:
- METAMOD - Path to Metamod headers (where metamod.h resides)
- HLSDK - Path to Half-Life SDK
Compiler GUI
Now that we have all the SDK files, we will need to get the compiler. The one we will be using is Microsoft Visual C++ 2005 Express Edition.
Visual C++ 2005 Express Edition
First thing you need to do is grab the web install files: Visual C++ 2005 Express Edition. If you want the network install you should grab the following: Network Install Files; for the network install, grab either the Image File or the ISO File, your choice. Ok, so now you have the iso/img file; you need to either burn it or use a virtual CD/DVD drive to install it. I won't go into detail on this as it's pretty simple, but make sure that you only install the graphical IDE. If you want, you can install the Microsoft MSDN 2005 Express Edition; these are the help files that you can access by hitting F1 once in the GUI. If you haven't downloaded them, that's fine; it will connect you to the online version of them instead.
Platform SDK
The Platform SDK is a needed piece of kit; without it we won't be able to compile. You will need to grab it from this site: Platform SDK Site. Specifically, go to the above page and scroll down to the Files in This Download section, and grab the proper file for your architecture. Download the file that you need and install it. Make sure to do a custom install unless you don't mind a bunch of crap on your computer. You want to have only the following items installed:
Microsoft Windows Core SDK
Microsoft DirectShow SDK
Microsoft Media Services SDK
.NET SDK
Next we need to grab the .NET install and the .NET SDK. You can grab dotnet.1.1.exe and install it. Next you will need the .NET SDK; you can grab dotnet.1.1.sdk.exe. Once these are installed you're good to start up the GUI.
Message Module ( Demo )
Right, now that we have the base done, we can get going with the module. The first thing we need to do is create a new module; for ease of use I have made a demo module called dod_mm, the DoD Message Module. Grab it and extract it to the c:\sdk_files directory.
Editing Module Information
First things first: open up dod_mm.vcproj; this will open Visual C++. Once it's open, you will see dod_mm on the left under the solutions area. Open moduleconfig.h and look for the following:
#define MODULE_NAME "--ENTER NAME HERE--"
#define MODULE_VERSION "--ENTER VERSION HERE--"
#define MODULE_AUTHOR "--ENTER AUTHOR HERE--"
#define MODULE_URL "--ENTER URL HERE--"
#define MODULE_LOGTAG "--ENTER LOGTAG HERE--"
#define MODULE_LIBRARY "--ENTER LIBRARY HERE--"
#define MODULE_LIBCLASS ""
This is the info that will be displayed about the module when it is running on your server. If you type amxx modules on the command line, you get information about all the modules running on your server, including this one once it's done. So now we edit it to reflect our new module:
#define MODULE_NAME "DoD Message Module"
#define MODULE_VERSION "0.1"
#define MODULE_AUTHOR "DoD Plugins Community"
#define MODULE_URL ""
#define MODULE_LOGTAG "DoDMM"
#define MODULE_LIBRARY "dod_mm"
#define MODULE_LIBCLASS ""
Exposing Functions
Now that this is done, we need to tell the module to use the Metamod natives. Do this by finding the following:
// metamod plugin?
// #define USE_METAMOD
And changing it to:
// metamod plugin?
#define USE_METAMOD
Or in other words, uncommenting it. :-) Now we have access to a lot of what Metamod has to offer, as well as the AMX Mod X API. Next we need to expose the functions from Metamod that we will be using for this message module.
Amxmodx
If you scroll down you will find the following two functions from amxmodx that we will be using:
/** AMXX attach
 * Do native functions init here (MF_AddNatives)
 */
// #define FN_AMXX_ATTACH OnAmxxAttach

/** AMXX Detach (unload) */
// #define FN_AMXX_DETACH OnAmxxDetach
We will expose these by uncommenting them. Later on in this document I will explain where the function itself goes and what goes in them.
HL API
Next we will expose some of the functions from the HL server engine that Metamod has exposed for us. The ones we are exposing are specific to messages:
#define FN_MessageBegin_Post MessageBegin_Post
#define FN_MessageEnd_Post MessageEnd_Post
#define FN_WriteByte_Post WriteByte_Post
#define FN_WriteChar_Post WriteChar_Post
#define FN_WriteShort_Post WriteShort_Post
#define FN_WriteLong_Post WriteLong_Post
#define FN_WriteAngle_Post WriteAngle_Post
#define FN_WriteCoord_Post WriteCoord_Post
#define FN_WriteString_Post WriteString_Post
#define FN_WriteEntity_Post WriteEntity_Post
These are the actual message writes that will be caught. Finally, we need to catch the registering of the messages we can see in Meta Game. This is the function we expose next:
#define FN_RegUserMsg_Post RegUserMsg_Post
Now that all the functions that we will be using have been exposed we are good to carry on with the meat of the module.
Developing Module
Right on, now we have the basics down and are ready to start programming. We need a main file, so right-click on dod_mm and go to Add->New Item as shown in the picture below:
Once that's done, a new window comes up. Click on Code, then Header File (.h), giving it the name main.h.
Now we have a file to work with. First we set it up as a standard header file so that it is only included once per compilation. So we add the following:
#ifndef DOD_MM_H
#define DOD_MM_H

// Code will go between here

// And Here!

#endif
This ensures that the code we put between those two comments is only processed once during compilation. Now we can start coding. As we are making a module to catch all the messages that occur on a HL DoD server, the first thing we need is a structure to hold the data we will use. This structure will be called StructUserMsg and will contain two members: the id of the message, which can be anything from 0 to 255, and the name of the message. We can get these names from Meta Game; these will be the messages that pop up. With this done, the file should look like so:
#ifndef DOD_MM_H
#define DOD_MM_H

// Code will go between here

#include <stdio.h>
#include "sdk/amxxmodule.h"

struct StructUserMsg {
	int id;
	char name[256];
};

// And Here!

#endif
CPP File
Now that the structure is done, we make a CPP file the same way we made the header file, except choosing C++ File (.cpp). We give it the same name as the header we created, since this cpp will include the header to get its information. Once we have created the file, the first thing to do is include the header by adding this to the top:
#include "main.h"
That pulls in our structure so the rest of the program knows what it is. Next we need to create a few global variables. The first is one that will let us write all the data we gather out to a file:
FILE *stream;
Next we need a global state variable so that we know which call we are on within a message. This will count the field writes within each message:
int g_state;
Now finally we need to create the global variable for the structure we created:
StructUserMsg g_user_msg[MAX_REG_MSGS];
Amxmodx Functions Created
This is important: we created a StructUserMsg array MAX_REG_MSGS entries long, which is 256. That's it for the globals; pretty simple. Now remember those functions we exposed? We need to define them so that AMX Mod X will call them. First the AMX Mod X functions:
void OnAmxxAttach() {
	fopen_s(&stream, "messages.txt", "w");
}

void OnAmxxDetach() {
	fclose(stream);
}
Now that these are created, let's understand them. When AMX Mod X starts up and loads the module we are creating, OnAmxxAttach is the first thing called; in our case we open a file called messages.txt, which ends up in the base HL directory. This will contain all the output from our module.
*** NOTE *** This will take up a lot of room, so be careful.
The second function is just the opposite: when AMX Mod X shuts down, it is the last function called, so we use it to close the file.
Metamod HL API Functions Created
Onward. The AMX Mod X functions are complete; now we need the functions for the Metamod / HL API. First we create the most important one, the registration function. In order to understand these functions, we first need to know what we are looking for. We exposed the following:
#define FN_RegUserMsg_Post RegUserMsg_Post
Ok, so what the hell is this? Well, if you copy FN_RegUserMsg_Post, then open up amxxmodule.h and search for FN_RegUserMsg_Post, you will find the following:
#ifdef FN_RegUserMsg_Post
int FN_RegUserMsg_Post(const char *pszName, int iSize);
#endif // FN_RegUserMsg_Post
This is where we get the function declaration from. So we copy the following:
int FN_RegUserMsg_Post(const char *pszName, int iSize);
And make a function out of it as such:
int RegUserMsg_Post(const char *pszName, int iSize) {
	RETURN_META_VALUE(MRES_IGNORED, 0);
}
This is the start of our function. Notice the RETURN_META_VALUE: this is a Metamod return. Because the function returns an int, we need to tell Metamod to ignore what we have done in this hook and return an integer. One of the other returns we will use a lot is:
RETURN_META(MRES_IGNORED);
This is what we use when a function returns void. Our hook is pretty basic. First we grab the original return value we would have gotten; in this case it is the message id:
int msgid = META_RESULT_ORIG_RET(int);
Next we assign the message id to our message structure at the slot indexed by that same id; a bit confusing-looking, but important:
g_user_msg[msgid].id = msgid;
Finally we are going to assign the message name to the message structure using a string copy:
strcpy(g_user_msg[msgid].name, pszName);
Now the register function is done; it will look like this when finished:
// First thing when all the messages are sent out to a client we grab them and take a look at them
int RegUserMsg_Post(const char *pszName, int iSize) {
	int msgid = META_RESULT_ORIG_RET(int);

	g_user_msg[msgid].id = msgid;
	strcpy(g_user_msg[msgid].name, pszName);

	RETURN_META_VALUE(MRES_IGNORED, 0);
}
Now for the next important function: the message begin hook. This is called when a message is initialized on the server to be sent out to someone somewhere. It looks like this:
// When a MESSAGE_BEGIN is sent we catch it after it gets to the player
void MessageBegin_Post(int msg_dest, int msg_type, const float *pOrigin, edict_t *player) {
	RETURN_META(MRES_IGNORED);
}
Voila. This is a very important part, because this is where and when the message begins. Remember, we can never call another MESSAGE_BEGIN while we are inside a message; we have to wait until we hit MESSAGE_END. Now we add some code to make it work. First we make sure it's not a bogus or bad message:
if(msg_type < 0 || msg_type >= MAX_REG_MSGS) {
	g_state = -1;
	RETURN_META(MRES_IGNORED); // Bad Message
}
If it is, we bounce out of the hook. If not, it's a good message and we carry on. Next we print that we are at the beginning of a message, to both the console and the file:
printf("[Msg Begin] %d %s\n", msg_type, g_user_msg[msg_type].name);
fprintf(stream, "[Msg Begin] %d %s\n", msg_type, g_user_msg[msg_type].name);
As you can see, we print the message id and the message name that we stored in our structure earlier. Once this is done, we check the message destination and print that out as well. There are three places a message could be going: 0 = all players, 1 - 32 = individually to a player, and 33 - MAX_REG_MSGS = all others. We report which it is with the following:
//"); }
Self-explanatory. Finally, we need to set the global variable g_state to zero:
g_state = 0;
So this function is done. Here is the end result:
// When a MESSAGE_BEGIN is sent we catch it after it gets to the player
void MessageBegin_Post(int msg_dest, int msg_type, const float *pOrigin, edict_t *player) {
	if(msg_type < 0 || msg_type >= MAX_REG_MSGS) {
		g_state = -1;
		RETURN_META(MRES_IGNORED); // Bad Message
	}

	printf("[Msg Begin] %d %s\n", msg_type, g_user_msg[msg_type].name);
	fprintf(stream, "[Msg Begin] %d %s\n", msg_type, g_user_msg[msg_type].name);

	if(msg_dest == 0) {
		printf("[Dest]\tAll Players\n");
		fprintf(stream, "[Dest]\tAll Players\n");
	} else if(msg_dest <= 32) {
		printf("[Dest]\tPlayer %d\n", msg_dest);
		fprintf(stream, "[Dest]\tPlayer %d\n", msg_dest);
	} else {
		printf("[Dest]\tOther (%d)\n", msg_dest);
		fprintf(stream, "[Dest]\tOther (%d)\n", msg_dest);
	}

	g_state = 0;
	RETURN_META(MRES_IGNORED);
}
From here on out it gets a bit repetitive, so I will just show them to you:
// Note: g_state is advanced once per field; the console line and the
// file line share the same index.
void WriteByte_Post(int iValue) {
	if(g_state == -1) RETURN_META(MRES_IGNORED);
	printf("%d [Byte]\t%d\n", g_state, iValue);
	fprintf(stream, "%d [Byte]\t%d\n", g_state++, iValue);
	RETURN_META(MRES_IGNORED);
}

void WriteChar_Post(int iValue) {
	if(g_state == -1) RETURN_META(MRES_IGNORED);
	char buffer[10];
	_itoa_s(iValue, buffer, 10);
	printf("%d [Char]\t%s\n", g_state, buffer);
	fprintf(stream, "%d [Char]\t%s\n", g_state++, buffer);
	RETURN_META(MRES_IGNORED);
}

void WriteShort_Post(int iValue) {
	if(g_state == -1) RETURN_META(MRES_IGNORED);
	printf("%d [Short]\t%d\n", g_state, (short)iValue);
	fprintf(stream, "%d [Short]\t%d\n", g_state++, (short)iValue);
	RETURN_META(MRES_IGNORED);
}

void WriteLong_Post(int iValue) {
	if(g_state == -1) RETURN_META(MRES_IGNORED);
	printf("%d [Long]\t%ld\n", g_state, (long)iValue);
	fprintf(stream, "%d [Long]\t%ld\n", g_state++, (long)iValue);
	RETURN_META(MRES_IGNORED);
}

void WriteAngle_Post(float flValue) {
	if(g_state == -1) RETURN_META(MRES_IGNORED);
	printf("%d [Angle]\t%d\n", g_state, (int)flValue);
	fprintf(stream, "%d [Angle]\t%d\n", g_state++, (int)flValue);
	RETURN_META(MRES_IGNORED);
}

void WriteCoord_Post(float flValue) {
	if(g_state == -1) RETURN_META(MRES_IGNORED);
	printf("%d [Coord]\t%d\n", g_state, (int)flValue);
	fprintf(stream, "%d [Coord]\t%d\n", g_state++, (int)flValue);
	RETURN_META(MRES_IGNORED);
}

void WriteString_Post(const char *sz) {
	if(g_state == -1) RETURN_META(MRES_IGNORED);
	printf("%d [String]\t%s\n", g_state, sz);
	fprintf(stream, "%d [String]\t%s\n", g_state++, sz);
	RETURN_META(MRES_IGNORED);
}

void WriteEntity_Post(int iValue) {
	if(g_state == -1) RETURN_META(MRES_IGNORED);
	printf("%d [Entity]\t%d\n", g_state, iValue);
	fprintf(stream, "%d [Entity]\t%d\n", g_state++, iValue);
	RETURN_META(MRES_IGNORED);
}
The last but not least function is the message end function. It too is simple to understand:
void MessageEnd_Post(void) {
	if(g_state == -1) RETURN_META(MRES_IGNORED);

	printf("[End Message]\n\n");
	fprintf(stream, "[End Message]\n\n");
	fflush(stream);

	RETURN_META(MRES_IGNORED);
}
The only unusual thing here is the flush; this just ensures the data we just collected gets written out to the file. Well, that's it; we're done. All that's left is to compile, move the module to the addons/amxmodx/modules/ directory, then add it at the bottom of the addons/amxmodx/configs/modules.ini file. Remember to list it as dod_mm, not dod_mm_amxx.dll; just the first part.
In Conclusion
Now you are set up to start compiling. You can grab the source code to any of the modules at amxmodx and compile them, or just play about. I suggest you try an easy one that you can grab here: ESF Model Changing Module, just to look at it. Here is the direct download: esf_model_changer.
The files for this project can be found here: DoD Message Module Tutorial / Module.
I have made great use of this in helping in developing dodx and dodfun.
Cheers! Zor.
Date of Birth
Telephone number
Age
The age needs to be calculated based on the current date and the date of birth. All this information except the age should be obtained from the user and stored in an array.
My problem is that I do not know how to receive input and name the variables for each structure. My book won't tell me how. This is all I have so far, and I am stuck. For example, will the same variable names be used for each member?
#include <iostream>

struct inflatable {
	char firstName[10];
	char lastName[10];
	int dateOfBirth[10];
	int telephone[10];
	int Age[3];
};

int main() {
	using namespace std;
	inflatable members[4] = // initializing array of structs
	{ firstName, lastname, dateOfBirth, telephone, Age }
US' Capitol Hill on the Internet 132
Anguirel writes "Wired has a few stories from the Hill. First up, ICANN gets a hearing before the House to answer questions about proposed fees. Next, House Majority leader Dick Armey denounced the UN e-mail tax saying it's just the UN being greedy and trying to profit from the Internet. Finally, Y2K conspiracy theories gained some credibility as a conference on the President declaring martial law was held by the US Reserve Officers Association. "
Re:The U.S. never did like the U.N. (Score:1)
Re:Executive Orders (Score:1)
That tale's been going around since impeachment days. I heard it from a good friend who used to have a brain before he started earning enough money to fall under the Republican spell (now he'd sell his grandmother into slavery for a tax cut). According to him, the president "passed an executive order" that would let him declare martial law and take over the government if he was convicted.
Stop and think: if a president wants something that the Constitution doesn't allow, can he just write an executive order to set the constitution aside? Do you think abortion would have stayed legal throughout the Reagan era if that trick would work? Do you think Clinton would be haggling with the Republican Congress over how to spend the (as yet imaginary) surplus? No, some president long ago would have "passed an executive order" that let him ignore the Congress and the courts.
Either he has the legal right to declare martial law or he doesn't; but if he doesn't, he can't just obtain that right by saying he wants it.
It might be useful to remember that not all the world's FUD is targeted at Linux.
Re:Missing one thing (Score:1)
>These looney right types would be sad if they didn't have so many guns
In this world the real loony is the man without a gun.
Why must we get political here again?
Re:Executive Orders (Score:1)
nmarshall
#include "standard_disclaimer.h"
R.U. SIRIUS: THE ONLY POSSIBLE RESPONSE
Re:how silly.... (Score:1)
Being "fringe" has nothing to do with free speech... its their specific "fringe" beliefs that we're attacking. Just because someone exercises their freedom of speech doesn't mean we have to give them ANY credibility whatsoever.
As for being part of a fringe group by visiting slashdot, I'm proud to be a minority in this country (just take a look at what is played on television to see why I distance myself from the majority).
Fortunately, I can back up the things I say with two things that these people lack: rational thinking and facts.
Re:US controlled? maybe. Net controlled? Yes. (Score:4)
I find it interesting the number of government, quasi-government ond international bodies that actually think they have some authority over domain names (and the number of individuals who think they're right).
DISCLAIMER: I'm not saying anything about ICANN here (yet), but feel free to take this as a bash of WIPO, NetSol, the Clinton administration, etc...
DNS is set up by convention and voluntary adherence to RFCs published by the IETF. The many parties involved voluntarily go along with this because it's already in place, standards in general are a good thing, and there's peer pressure to do the right thing. This is as it should be.
If our current DNS system gets FUBARed by the powers that be, there is no law saying another system can't be put online by the people and businesses that use the net. Anybody with a big enough server can run DNS as long as they don't interfere with the operation of the current system. I can serve the domain if I choose to. You can configure your system to use my server if you want to (just set your named.conf to consider my server authoritative for the .Igreat zone)
There are a few of those now, and a few Wins and other resolvers that can be accessed as well. They remain fringe servers because they're too small to handle a large load, and not everybody can access them. That could change if the current DNS gets FUBARed. The current system has no basis in law, and new systems are not prohibited (or prohibitable).
Re:12/30/99 23:59:59 (Score:1)
Party starts early, huh? =)
Re:Military Take Over (Score:1)
Billy C.?? (Score:1)
Marial law? (Score:1)
To quote Jello Biafra, "Welcome to 1984"
Now THIS is FUD (Score:1)
Community breakdown? If we have it, it will be a short-lived panic by the militantly ignorant. If you're that worried about it, take a two-week cruise over 1/1/2000 or go sit on a beach somewhere. By the time you get back, life will be normal -- and there won't be a bunch of troops marching through the streets.
taxes (Score:2)
They'd tax the air we breathe if they could...
They always argue that "oh, but it's only one cent per 100 emails... that's not that much..." and try to instill some sort of guilt about being greedy.
As soon as there's a tax in place, it'll just go up and up, always with the same argument - it's incrementalism at is best (worst).
Tax? Probably. Martial law? Probably not. (Score:1)
But the martial law nugget is simply ridiculous. If you truly believe that all computerized transactions will fall apart, and that people will forget how to use a pencil and paper to take an order, deposit a check, or write a letter, then you've truly forgotten the ingenuity of the human spirit. People continue to go on with their lives. There won't be a run on food (unless the conspiracists get too vocal and scare everyone into storming the stores, like what happens in the South just before a hurricane hits the land), there won't be problems getting your $$ out of the bank, and /. will still be online after the big 2000.
Just my Y2K-proof $0.02...
The last story seemed like satire ... almost. (Score:2)
The story about people's fears about martial law was almost amusing. I mean, the people there were spinning scenarios and the author didn't really work to counterbalance that, unless mentioning the person reading the Roswell book was supposed to do that. I don't know anything about the organization that held the conference, but, geez
... I mean, how much credibility is added to the idea that "the big creep" might declare martial law by this story?
Sorry, you'd see mobilization of troops well before New Years Eve anyhow. Clamping down on the US is not a small operation -- even with all the weaponry at the disposal of the military, I'm sure the Michigan Militia will be able to save us all. Failing that, there's always Michael Moore as a backup.
But, 'tis almost the silly season...
Re:Motives? (Score:1)
What, Reagan was President? And he did stuff that should have had him impeached (or was criminally ignorant of same), and it didn't have to do with his member, and..?
Man, and here I was thinking that had been a dream. Those Demmycrats sure have dropped the ball. Or they didn't have a wave of millennial hysteria to surf in on. Offtopic note: as Professor Gould over at Harvard has pointed out, Dennis the Short, on whose calculations our calendar are based (yes, it parses right), got the big J's dates wrong and the popular millennial mark actually occurred in '96 or '97. Besides, we're nowhere near a millennium with the Jewish calendar.
Well, with Screaming Lord Sutch (sp?) dead, who should run the U.S. Monster Raving Loony party? Oh, yeah they've found him already, "W." =)
how silly.... (Score:1)
An obviously ultra-partisan representative
A founder of some fringe global paranoia group
A conspiracy (albeit fiction) writer
..together at the same function would only lead to suspicion? I mean, if the USROA is officially sanctioned by the Gov't (as the Wired article would indicate), why on earth would they allow such fringe lunacy to associate with the name? They're apparently an "eminently respectable organization". Has the chair of the Org been dipping into the Agent Orange again, or something? Absurdities...
Repeat after me: (Score:1)
Bad things don't happen in real life.
Bad things that do happen are over with quickly, and only the bad guys get hurt.
So stop worrying and treat yourself to a nice latte at one of your many local Starbucks.
Taxation of email is pretty hard (Score:1)
Re:Executive Orders (Score:1)
Don't think the government doesn't know that... the Treasury Dept. is printing money like crazy to help cover the rush, and there's always a possibility of a bank holiday to force the idiots to keep their money in the bank.
I for one am going to proudly keep my money in the bank (as if I had enough to really worry about anyway).
Doug
Re:Ummmm....yeah(open your eyes) (Score:1)
Now, it's stupid to believe that the govt. is going to declare martial law simply because there is an opportunity, but its equally moronic to think it "just can't happen" because you believe the US is somehow different from every other society that came before it.
Re:The U.S. never did like the U.N. (Score:1)
So, the US doesn't like it. It does like NATO, though.
-Imperator
Ka-whump! Stop scaring people! (Score:1)
The American Bankers Association threatened both companies with both lawsuits and denial of loans. The ads got pulled.
So if there's no possibility of trouble, why is the ABA kicking so much ass to prevent trouble?
Re:12/30/99 23:59:59 (Score:2)
How are you planning on drawing the
Re:Executive Orders... and Idiots (Score:1)
You seem to be making an argument for both idiots and non-idiots to withdraw some money...
I disagree (Score:1)
That's all. This article, or the group being reported on, was definitely spreading fear, uncertainty, and doubt. So you can use FUD here. You have my permission.
There's no reason to restrict the definition, just because it orginated in the area of marketing.
kmj
The only reason I keep my ms-dos partition is so I can mount it like the b*tch it is.
Re:Executive Orders (Score:1)
The abortion issue is a convenient stick with which to beat the american public. No president really wants it out of the headlines. In cases where the issue really is very important to the administration/congress, you'll simply see the laws/constitution not enforced and not talked about.
Cases in point are the repeated violations of both the War Powers Act and the Wagner Act by both parties.
If martial law was declared for real in the US it will not be called martial law. It will be "peacekeeping" or "disaster relief" troops posted in the cities. TV reports will show them handing out chocolate bars to little children.
He / she was of course thinking (Score:1)
Psychotic Nationalists bent on World Domination? (Score:1)
"I pledge alliegance to the world
To cherish every living thing
To care for Earth and Sea and Air
With Peace and Freedom everywhere
recognising that people's action towards nature and each other are the source of growing damage to the environment and resources needed to meet human needs and to ensure survival and development.
I pledge to the best of my ability
to help make Earth a secure and hospitable
home for present and future generations."
Is this such a bad thing to teach elementary school children? Okay, so the document has a UN logo at the top. But a logo means next to nothing when compared to the message contained in the writing itself. If the UN followed the pledge on the paper, would it be such a Bad Thing?
They also talk about how they believe the UN is, by putting its name on some national parks, going
to restrict access to parts of the USA for the purpose of "safeguarding the natural resources, including plants, animals and things, that are the province of 'gaia', the earth spirit."
And they call this "ominous". Be scared, everyone. The UN won't let you tear down the national parks because they'd like to save some trees and animals. This is a CONSPIRACY AGAINST THE AMERICAN PUBLIC!
.... No. No, no, no, no NO. If you destroy all the trees for big business, what's left? a barren concrete wonderland. Yippie. No animals, no plants(except the random shrubbery-island in the middle of a concrete ocean, that is), no life.
"This is the battle today because America is in danger of losing what our ancestors fought and died for. "
Wouldn't global peace, harmony, and prosperity immensely please the long-dead souls of the founders of the united states? Tolerance and all that? Working together for a better future? The -real- founding ideals of the states. I'm pretty sure they weren't meant to just apply to a bunch of white people living on one large chunk of land on the planet. One can't be "selectively tolerant". You're either tolerant of all, or you're -not tolerant-. QED.
...There's plenty more, I'm sure. Just picking up on some of the bits that stood out to me.
.end political rant/commentary.
ARA [aranet.org] - fighting fascism everywhere
Doh! (Score:1)
That's what I get for not previewing!
must've forgotten the r in my break tag
kmj
The only reason I keep my ms-dos partition is so I can mount it like the b*tch it is.
Re:The U.S. never did like the U.N. (Score:1)
I'm a proud Canadian, glad to see my Canadian tax dollars going toward paying our *full* amount every year and keeping places like Bosnia, Crete, and the Golan Heights from turning into another Kosovo... funding good programs like UNICEF that feed starving human beings.... but that's another rant.
I believe the Americans' UN debt stands at a billion dollars. For the booming US economy, that's chump change. Heck, Canada could come up with that money if we really had to (it would be about 1% of our budget, and yes, 1.5bln CDN is a big chunk, but it's a one-time payment!)
Instead the Americans point at the UN and say it's inefficient, bloated, and ineffective. It's not really worth the money. The claims about its inefficiency are justified, but geez, look at what they're trying to do! Get hundreds of countries together that don't even speak the same language to stop from killing each other on a weekly basis. People in their twenties today don't have a real idea what real war is even like... An ounce of prevention saves a ton of bomb. A hard lesson for a lot of us to learn.
The UN was a good first try. Just like trying to do a software project with no specifications. You have to try first to see what you really want, then start over again from scratch with good design docs, cut out the functionality that wasn't required, and start again.
Sometimes I sit up at night and dream naively about a more trim, efficient UN and what it could do to really change the world. Stop hunger, disease, resolve bitter conflicts with the resources to back it up. It would cost a lot of money. How much? Perhaps 100 billion a year? That's a lot, maybe $500 a year spread out amongst worldwide taxpayers. But look what we could do with that! But it's too expensive, even though the Americans must spend that much every year on beer.
On the other hand, if there is no hope, and humankind's differences can't be resolved no matter how much the effort, lets just nuke the planet now and get it over with...
Would you kill preverts? (Score:1)
"Today men, we are going to machine gun an army of preverts in the Oregon state capitol! They are engaged in all kinds of ungodly preversions in there. Do not be fooled that they look like college students, Universities are just hotbeds of preverted activity."
Re:Repeat after me: (Score:1)
Everyone is out to get me.
Good things in life are just cover-ups for bad things.
Good things that do happen are rare, and only happen to other people.
So keep worrying and lock yourself in your nice, safe backyard bomb shelter.
Re:Ummmm....yeah(open your eyes) (Score:1)
And people cry that we're paranoid.
I hold it that a little rebellion, now and then, is a good thing...
Re:12/31/99 23:59:59 (Score:1)
DOH!
My bad.
LK
The US is already under martial law (Score:2)
From what I understand, Martial Law was declared back in the early 1900's and just never repealed. So since it is already in effect, it would be easier to do something about it...
I personally don't think Y2K is going to be the end of civilization. Just trying to share some facts.
Re:12/31/99 23:59:59 (Score:1)
I can drop the beer. Or let go of the woman.
LK
Re:Ka-whump! Stop scaring people! (Score:1)
OOPS!!!! Re:Ummmm....yeah(open your eyes) (Score:1)
The guy I wrote about before is Amadou Diallo.
Sheesh. There are so many textbook examples of F#@%ed up government actions, it's hard to keep them straight.
I hold it that a little rebellion, now and then, is a good thing...
Re:Ummmm....yeah(open your eyes) (Score:1)
>You're referring to crimes of individuals within an organization... not the organization itself.
They have never been punished either by the BATF/Dept. Of Treasury or any DA. These abuses are routine, normal, and accepted by those in charge.
Virtually every crime is committed by individuals. The entire board of directors of a company are still individuals when they take part in criminal activity together.
Even John Dingel (a democrat by the way) called the (B)ATF a bunch of Jackbooted Facists back in the early 80's.
This is nothing new.
LK
Re:Executive Orders (Score:1)
Money in the bank? I'm not getting any WheresGeorge [wheresgeorge.com] hits while it's in the bank! No way.
Re:It's all checks and balances (Score:1)
And there isn't a thing Congress can do about it.
And for those of you that say the Judicial branch can declare this unconstitutional, let them. They have no power to enforce their decisions.
No it isn't (Score:1)
3/4 of our oil is refined in Houston, TX. The plant mgrs haven't even started looking at the embedded chips that control the refineries. Imagine there's no gasoline...
Railroad switches are entirely computer controlled, not a single manual one in the entire country. Southern Pacific, loser corp that it is, hasn't even begun to inventory its embedded chips. Pretty hard to run a railroad without switches, and even harder to get coal to the electricity generation plants.
The usual level of intelligence of
Re:Executive Orders (Score:1)
Re:Executive Orders (Score:2)
It's not quite that simple. They aren't just printing money to have more money in the economy - that would cause inflation. They are printing currency to cover the amount of money that is already in the economy. All of these bills get distributed through the Federal Reserve system so that if there is a bank run, people can easily convert their "virtual" money (the numbers you see on your bank account) into hard (well, paper really) currency. But if there isn't a bank run, they'll probably end up destroying the extra bills that were printed, or saving them for the next time that this might be a problem. So the worst case scenario is that we all have lots of $$$ under our mattresses, but prices should stay about the same.
2% is wrong (Score:1)
Re:Would you kill preverts? (Score:1)
O, Irony... (Score:1)
There's been so much hype over Y2K issues that the less-informed public is just as likely to panic over a minor disturbance as not. Of course the government is reviewing declaration and deployment scenarios; to not do so would be grossly negligent. This doesn't mean that they're planning to use martial law; it means that they're being sure that they can if it's necessary. Capische?
I find it amusing to hear so many people concerned over a power that has been available to US Presidents for decades. I'm betting that, should disaster or panic strike in their area, these same people will complain that martial law wasn't declared fast enough or enacted efficiently enough.
--j, who only tries to please some of the people some of the time
Re:Ummmm....yeah(open your eyes) (Score:1)
For one pedant point
... or probably not because I'll misspell his name: you're thinking of Amidou Diallo, not Abner Louima (who was beaten and sodomized with a broomstick by some of NYC's finest). And the jackbooted thugs in question (both times) were local dudes promoting Hizzoner Giuliani's politeness scheme.
Every time I hear people clamoring for law and order I think of how RG has done that job. So, mind you, did Ayatollahs Khomeini and, at least until recently, Khameini (again with the spellings; my apologies).
The point (and, like Ellen deG, I do have one) is that these examples are localized; I agree with the general sentiment that "law and order" regimes^h^h^h^h^h^h^h administrations generally have this kind of worry behind them, but the abuses usually seem to be localized (yes I have heard of Ruby Ridge, Waco, and the Freemen). When the issue is a national "fell-swoop" takeover by the nations' own sons and daughters, forget it, ain't gonna happen -- here I cite not only Waco, but other sagacious
/.ers.
What worries me is the long, slow, corporate takeover. Forget jackboots, think pinstriped suits. Every morning, the news show I listen to breaks in with copious amounts of information about the stock market and large businesses. Social well-being seems to be equated to financial health, which seems for all intents and purposes to be measured by how much the top 1% of income earners make and how good they feel about the economy. But then, that makes this post redundant, and I don't want to be known as "Jeremiah," so I'll end it here.
Re:No it doesn't parse right. :) (Score:1)
Re:Executive Orders (Score:1)
trying to organize a union
African-American
Japanese-American and living in California
female and aspiring to work in a primarily male profession
gay
I was only referring to this particular portion of our nations laws. I in no way implied our government was wonderful and benign before that time. There have always been horrible problems with government in general by its very nature.
If you wish to discuss labor or racial laws and policies before 1950 we can do that somewhere else.
Kintanon can be reached at Sleffer@hotmail.com
Re:Ummmm....yeah(open your eyes) (Score:1)
Deal with the issue at hand AC. Ad Hominem attacks do not help your position here.
LK
Think they mean Marital Law (Score:1)
Re:Executive Orders (Score:1)
Should the big jerk declare martial law, and
should the brass follow his orders, and
should the foot soldiers follow their orders...
There are going to be many people in the states dusting off grandpa's old shotgun.
Unless, as a nation, we are so impotent that we won't. In which case, we deserve martial law.
-George
Re:Executive Orders (Score:1)
I for one am going to proudly keep my money in the bank (as if I had enough to really worry about anyway
Can we say "Hyperinflation"?? That could cause as many problems as a bank collapse. Wouldn't you just love a Mexican economy: go to the market for bread in the morning because it will be more expensive if you go in the afternoon....
Printing more money is NOT the answer to this problem. I'm all for letting everything go to Anarchy, I'll be spending new years with my girlfriend and my family in Georgia, in the woods, with our Garden, our well, and our hydroelectric generators.
Kintanon can be reached at Sleffer@hotmail.com
Re:Military Take Over (Score:2)
It's true that the military is trained to follow orders. However, their oaths are not to the President or Congress. The military is sworn to uphold the U.S. Constitution, the same as the President and Congress are. Now I'm not saying that any of these groups aren't above bending the rules if they can get away with it. But we're a long way from the military slavishly following blatantly treasonous orders from the President (especially this President - he's not highly regarded in military circles, in case you hadn't noticed).
If the President can fire the people who can remove him from office, do you really think we would have had a near-impeachment a few months ago? Congress can remove the President, and the President couldn't do anything to stop them. Now that the independent counsel law has expired, the President could fire anyone appointed by the Attorney General to investigate him, but the independent counsel law didn't exist when Nixon was president and he was still impeached. A President who fires someone because they are investigating on behalf of Congress is going to be more likely to be impeached, not less.
The President definitely cannot suspend the Constitution. Yes, the President can declare martial law, and I suppose a really bad apple could try to take over the country by doing so. But even that use of force would not suspend the Constitution - in fact, I'm not sure how any group could suspend the Constitution. I suppose it could be done by a constitutional amendment, but that would require the agreement of both Congress and the states.
Perhaps you should have taken your own advice.
Where do they come up with this stuff? (Score:1)
Also, this whole UN world domination thing is ludicrous. There are people who flipped when the UN declared the Statue of Liberty a World Heritage Site, as they twisted it into a UN invasion.
The UN is not going to take over the United States, folks. Sorry to burst your bubble, but it just isn't going to happen.
Re:The U.S. never did like the U.N. (Score:2)
I'm not sure if you're referring to the U.S. debt to the U.N., or just the U.S. debt in general. I don't know about the back debt to the U.N. I did hear on the news the other day that the U.S. annual contribution to the U.N. was supposed to be around 300 million dollars. The source who was being interviewed mentioned that this was less than 1% of the U.S. budget. To be fair, the interviewee was a member of a pro-U.N. organization.
Re:Military Take Over (Score:1)
>If the President can fire the people who can remove him from office, do you really think we would have had a near-impeachment a few months ago?
The VP and the cabinet can remove him from office without any type of trial or hearing.
He can fire the entire cabinet. He could have fired Ken Starr. We didn't have a near-impeachment, we had an impeachment. He just wasn't convicted.
Actually Nixon DID order the people investigating him fired, a couple of times. It was two reporters who got the goods on him because they couldn't be fired by him.
>The President definitely cannot suspend the Constitution. Yes, the President can declare martial law, and I suppose a really bad apple could try to take over the country by doing so.
Perhaps I was sloppy in my wording. The president can suspend constitutional rights by declaring a state of emergency, or rebellion.
I'll post the exact sections that I'm speaking of tomorrow if you'd like.
LK
The people are at least as scary as the gov (Score:1)
Payments & Paranoids (Score:1)
1. Is today UN-bashing day?
2. Is slashdot paid for redirecting slashdoters to Wired?
3. What's more dangerous on y2k, paranoia or computer glitches?
Re:Now THIS is FUD (Score:1)
Real geniuses, these. One of the attendees was scared because she was reading The Day After Roswell? No Soup for you! You a nutbar! Get out! A couple of Congressmen (probably lawyers) who have the technical ability of my cat. Some looney lawyer. This is a group of people we care about? Please.
If you dig a little in the ROA's site [roa.org], you'll see a recommendation from none other than Strom Thurmond, Lover of Life and Liberty himself. This seems to be a collection of military hardcores. A group that would love to see the "King" kicked out of office. Nice impartial group to be spilling this kind of FUD.
I would like to ramble on about the loonies in the House, but I stopped myself. This is just the same bunch that thinks flag burners are dangerous to the country and the 10 Commandments will save school kids from automatic weapons.
Nuff said. Nothing to see here, just give 'em a little room and they'll just tire themselves out.
Chris
Ummmm....yeah (Score:1)
Whenever I read some redneck gun nut ranting about jack-booted thugs and black helicopters I always wonder if perhaps he's only jealous.
Executive Orders (Score:2)
Yeah, it's far fetched and probably will not happen, but what possessed Clinton to pass executive orders (laws the president passes to circumvent the normal checks and balances system) that allow him this supreme power? Even if no one uses this power, why the hell does he HAVE that power?
It's just kinda scary in general that these laws exist.
FinkPloyd
Re:how silly.... (Score:1)
>>such a fringe lunacy to associate...
Haven't you heard of that little thing called freedom of speech? One person's fringe is another's mainstream.
It's easy to dismiss some group as "fringe", but
that doesn't address whether their views are valid or not.
To put it into Slashdot perspective, consider that many people consider Linux users and Open Source advocates to be somewhat on the "fringe." It makes a convenient excuse for marginalizing their beliefs.
Do you really want to go down this road?
Martial Law Net taxes etc (Score:1)
As for the net tax, It was proposed by the UN. The UN has no power to tax anyone or anything.
As for the interstate tax thing, that is not new with the Net. The same rules have applied to mail/phone order catalogs for a long time. My mother used to order stuff from LL Bean when I was a kid and never paid sales tax on it. And it does not appear that congress has a great interest in adding a new tax at this point. Esp since we are running a surplus and they are trying to cut taxes.
Re:Motives? (Score:1)
And I thought he was a peace candidate. Hell, I voted for him cause he was a draft-dodger.
Oh well, that'll teach me not to vote libertarian!
The U.S. never did like the U.N. (Score:1)
Missing one thing (Score:1)
These looney right types would be sad if they didn't have so many guns and managed to get a few of themselves elected to Congress.
Of course they're preparing for martial law... (Score:1)
I mean, if they prepare for it, everyone gets scared. If they don't prepare for it, we think they're unprepared and everyone gets scared.
Re:The U.S. never did like the U.N. (Score:1)
Would that it were, it is in fact 3-5 trillion if I remember correctly. Our yearly deficits may have been measured in billions. This year, we apparently have a surplus that we NEED to squander instead of getting out of debt.
Re:Military Take Over (Score:1)
Define "Attack". Is "Attacking your own country" opening fire on the confused and angry mob that is throwing stones and stuff at the troops because they THINK they are being attacked? Is it opening fire on the looters who are trashing the local Best buy for free TVs? Is it rounding up everyone and keeping them in one place for their own "protection"? Which one of those would you refuse to do if ordered to? Who needs foreign troops when you can convince your own troops that they are saving the citizens from themselves.
Kintanon can be reached at Sleffer@hotmail.com
Re:Military Take Over (Score:1)
>They are predominantly 18-24 year olds, who just got over missing their mommies and have no desire to destroy the U.S., their home, by the way.
In the military they are trained and conditioned to obey the orders of the man with the funniest looking insignias.
If your superior officer gives you an order, you follow it. If his superior officer gave him an order, he is following it. and so on and so on all the way up to the president.
If President Clinton orders that as a part of basic training every private in the platoon must chew the same piece of bubble gum for one day and pass it to the next wo/man, it will be done. Stupid or not, that is the way it works.
Just about 3 years ago they surveyed Marines to find out if they'd be willing to fire on American citizens in the name of gun control. Why would they even ask this question?
Let us ignore for the moment that it's illegal to use the military for domestic law enforcement. Why would they want to?
The president has unlimited power for his term in office. He has the power to FIRE the people who can remove him from office via constitutional channels. He can even suspend the constitution. Understand what is possible. Then decide what is likely.
It is most likely that y2k will be just another date in history without much ballyhoo. However things that are not probable are not necessarily impossible.
Do not fool yourself. Learn the constitution and the laws.
LK
I can see it now. (Score:2)
America is now under martial law.
All constitutional rights have been suspended.
Anyone caught outside the gates of their subdivision sector after curfew WILL BE SHOT.
Remain calm. Do not panic.
Your neighborhood watch officer will be by to collect urine samples in the morning. Anyone caught interfering with the collection of urine samples WILL BE SHOT.
*shiver*
Re:Executive Orders (Score:1)
FinkPloyd
Re:Ummmm....yeah(open your eyes) (Score:1)
One word is the key to distinguishing between a basically decent system tainted by occasional abuse on the one hand and a corrupt system characterized by abuse on the other. That word is "consequences".
The cop who raped Abner Louima is (according to what I recall from recent news accounts) in prison, where he belongs. Lon Horiuchi (the Ruby Ridge sniper) and Larry Potts (the issuer of the death order) are free to walk the streets, where they manifestly do not belong.
/.
Re: eugenics (Score:1)
Now THAT is a far scarier thought than any Y2K-related panic.
Re: debt (Score:1)
And so, our UN debt continues. And we look like a country of idiots who are too damn cheap to pay our UN tab.
I bet if we took 50 random Slashdotters, put them in a room and told them to solve the country's problems, we'd get a lot more done than the US Senate has gotten done in the past several years. Hell, you could probably take 50 random people off the street and still do better than the government...at least the people off the street would be looking to do the right thing rather than trying to further a political career.
Enough rambling for one night. I gotta work tomorrow
Re:The U.S. never did like the U.N. (Score:1)
As for why the US doesn't like the UN sometimes, look at the genocide in Kosovo. The UN wasn't doing a thing (in large part due to the Russians, who support Serbia and have veto power). If NATO hadn't stepped in, there might not be any Kosovar Albanians left. The UN is supposed to prevent genocides and such from happening, but it's too slow and ineffective. Trying to get any group of people to agree is difficult, and trying to get a bunch of stubborn diplomats who all have veto power in the Security Council to do so is nearly impossible.
The UN is useful as a place for countries to vent their frustrations. Sort of a group therapy for their countries' collective egos. Helps keep everyone calm and not marching off to war. If you can keep everyone occupied doing nothing but talking, then they can't fight each other. Problem is, while they're doing nothing, they're not dealing with the world's other problems. Like Kosovo.
And on those occasions where the UN actually does get its act together (like in Iraq) to do something, who ends up supplying most of the military force? The US, usually. Which makes the politicians wonder why we're paying the UN so that we can fight the UN's battles. They don't see that while the UN has plenty of problems, it's better than nothing.
Still, the UN demonstrates why we still need to have groups like NATO around, particularly for regional problems...less size == less bureaucratic crap == faster, effective action.
Re:Executive Orders (Score:3)
Also, for those of us who are economists, you should know that it only takes 2% of the people who have cash in the banks withdrawing that cash to cause our banking system to collapse. There is FAR FAR more money in the world than actual currency. So if 2% of the population decides to be on the safe side and pull their cash out of the Stock Market and Banks, then we have an economic collapse.
Also, these "fringe" people aren't going to be reacting to the problems, they are going to be causing them. A lot of people believe that Y2K is going to be the end of the world, a huge disaster, or something very very close. So they are going to go crazy on new years eve and cause a lot of the problems they fear. Can you imagine having a few hundred thousand people in each city who are primed for the end of the world, and the power goes off for 45 minutes because of an ice storm or something? Or the power goes down for an hour because of some obscure Y2K problem... anything could be enough to turn those people into a raging paranoid mob bent on looting and burning the city. That sort of situation would easily convince me to declare martial law if I were president, and with everything working so much more efficiently without Congress, well... why bother to lift it? Just boot congress and get a REAL "government" going!
What I'm talking about isn't exactly far fetched if you know a little bit about human nature, and if you have ever worked in phone tech-support you know how paranoid the average idiot is if something he/she doesn't understand happens.
Kintanon is reachable at sleffer@hotmail.com
Military Take Over (Score:2)
What Next? (Score:1)
Hang on a moment, I'll call the 'Sightings' producers to bring in their army of Feng Shui practitioners to ward our nation against inauspicious flows of water that might prolong Y2K rioting. Better yet, we'll have Peter James contact dead government officials from the last turn of the millennium to determine what is the best course of action to quell the terror of the ignorant masses.
Re:how silly.... (Score:1)
>>why on earth would they allow
>>such a fringe lunacy to associate...
Haven't you heard of that little thing called freedom of speech? One person's fringe is another's mainstream.
The USROA is an organisation. Thus, it would stand to reason that they are fairly unified on certain ideas, beliefs, ethics, etc. To have these people there would make it quite easy for the average joe to rationalize and equate the USROA with such beliefs.
I'm not treading on anyone's right to speech. But I hardly think that the conspiracy theorists speak for the whole of the Association, and am quite surprised that such a collection of characters was brought to speak and/or attend a meeting at this apparently highly-reputable group.
US controlled? maybe. Net controlled? Yes. (Score:2)
Among the topics being considered by ICANN? Whether or not individuals (as opposed to trademark owners) should be allowed to own domain names. Whether or not domain dispute policies should require court proceedings, with the loser paying all fees. Whether the domain name in dispute should be turned over to the trademark holder before the dispute resolution process is completed.
And all of this is being decided by a group of non-representative, non-elected lawyers, businesspeople, and others who stand to gain financially from such decisions. To this date, they have refused to allow a constituency of individual, non-commercial, non-organizational domain name owners to have representation in their proceedings.
The working groups deciding these issues are chaired by hand-picked members of the Domain Names Council, instead of elected by the members of the working groups.
The Domain Names council is stacked with officers of ISOC, CORE, and advisory board members from the gTLD-MoU advisory boards, all of whom have a decided financial interest in the outcome of certain decisions.
Decisions are made without any form of formal voting procedure, without regard to fairness, and without consideration for the group's lack of legitimacy and adequate representation. They are attempting to ramrod through a set of decisions before their own mandate requires them to replace the appointed officials with elected ones.
And they're doing it all in the name of the "net community".
Check the DNSO website [dnso.org] to find the archives of the various mailing lists where this is occurring.
Check this link [songbird.com] for a statement in which the chair of the gTLD-MoU proposes capture of the DNSO.
Check the Individual Domain Name Owners [idno.org] Constituency page if you'd like to get involved.
Re:The U.S. never did like the U.N. (Score:1)
Neither do many other countries....
It's all checks and balances (Score:1)
However, this isn't necessarily a bad thing, if the president *does indeed* have too much power, that's why we have the Legislative and Judicial branches to knock him down a notch. If Congress does that and in turn gains too much power of their own, well then there's the other two to take it away.
look on the bright side.... (Score:1)
Re:Tax? Probably. Martial law? Probably not. (Score:1)
Re:Military Take Over (Score:2)
As I said, the President can fire investigators now that the independent counsel law has been allowed to expire by Congress. But before it expired (which was within the last couple of months, IIRC), the President could not have fired Ken Starr. I wasn't aware that Nixon had fired investigators, but I imagine that his actions led directly to the independent counsel act in the first place. Now that it has expired, we'll probably wish we still had it someday.
It's true that the President can fire some of the people who could have him removed from office, (namely those in the Executive branch which you mentioned) but he can't fire all of the people who could remove him - for example, Congress. I'm sorry if I read your original post incorrectly; I understood it to mean that the President could fire any and all threats to his remaining in office, which is not correct.
It's true that there was an impeachment this year - I was incorrect to call it a near-impeachment.
That I can agree with that - for example, Pres. Lincoln suspended the right of habeas corpus (show cause for imprisonment) during the Civil War. He probably suspended other rights as well, that's just the first one I thought of. I won't argue with that interpretation; I just don't think it's correct to say that the President can suspend the entire Constitution. No one can unilaterally do that.
Re:Military Take Over (Score:1)
>I'm sorry if I read your original post incorrectly; I understood it to mean that the President could fire any and all threats to his remaining in office, which is not correct.
My fault, not yours. I was sloppy in my wording.
LK
Big Brother is Watching (Score:1)
The UN E-mail tax (while almost impossible to implement) would be terrible; the US was built on freedom and liberty, not taxes and the UN!!!
That's my 1/50 of $1.00 US
JM
Big Brother is watching, vote Libertarian!!
Some people go bonkers... (Score:1)
Don't panic.
Chuck
Re:Ummmm....yeah(open your eyes) (Score:1)
>I'd rather have Billy C. declare martial law than some crackpot militia with a thinly veiled racial agenda.
Racial agenda? There are a few out on the extreme right who have a racial agenda; the vast majority do not. J.J. Johnson is a bigwig in the "militia community".
>Whenever I read some redneck gun nut ranting about jack-booted thugs and black helicopters I always wonder if perhaps he's only jealous.
Do you call a black man who says the same things a redneck? Other than jackbooted thugs, what do you call it when government agents kick in someone's front door, stomp on their pet kittens, slam pregnant women into walls, and beat unarmed men?
Jackbooted thugs sounds rather accurate to me.
I live in an area where about 2 years ago there were black helicopters flying about and there were widespread reports of gunshots. Later the Army claimed that during a training exercise they played the sounds of machineguns firing through loudspeakers to add realism. I have no idea what they're doing but they are doing SOMETHING.
Your ignorance and apathy are astounding.
LK
Re:Executive Orders... and human actions (Score:1)
Those of you who have done extensive analysis of the Y2K-related technical problems may see big problems or not, depending on where you looked and the assumptions you worked under. If you looked *only* at the original technical problems, however, you may well have underestimated the magnitude and misdiagnosed the nature of the Y2K situation.
Re:Executive Orders (Score:1)
12/30/99 23:59:59 (Score:1)
At this precise moment in time I will be at a new years eve party with a beer in one hand, my woman in the other, and a
I too think that the major danger of y2k is the lunatic fringe who have decided that *something* big is going to happen with y2k. And if it doesn't they're going to make something happen.
I don't particularly care about the ordinary moe, but I'm making it home that night.
In a situation like that martial law may or may not be warranted but it would most likely be used. In a situation where you've got end of the world freaks and the mindless drones of the military facing off, I definitely don't want to be unarmed.
LK | https://slashdot.org/story/99/07/15/160224/us-capitol-hill-on-the-internet | CC-MAIN-2017-13 | refinedweb | 8,186 | 72.76 |
06 August 2008 17:27 [Source: ICIS news]
The contract was settled on a FD (free delivered) NWE (northwest Europe) basis.
“We think it is a fair reflection of the market, given the different pressures. Producers are still suffering from the high ethylene prices but we see that MEG is weak in
Sellers had largely disagreed with the July settlement at €940/tonne FD NWE, up €60/tonne, saying that it did not cover raw material increases.
“We thought there was a good argument for a rollover or a decrease in August. But we accept that the €940/tonne July level was not good for sellers and see a €10/tonne August hike as a compromise,” said the buyer.
($1 = €0.65)
For more on MEG visit ICIS chemical intelligence To discuss issues facing the chemicals industry visit ICIS connect
How can I get the first n characters of a string in PHP? What's the fastest way to trim a string to a specific number of characters, and append '...' if needed?
// The simple version for 10 characters from the beginning of the string
$string = substr($string, 0, 10) . '...';
Update:
Based on suggestion for checking length (and also ensuring similar lengths on trimmed and untrimmed strings):
$string = (strlen($string) > 13) ? substr($string,0,10).'...' : $string;
So you will get a string of max 13 characters; either 13 (or less) normal characters or 10 characters followed by '...'
Update 2:
Or as function:
function truncate($string, $length, $dots = "...") {
    return (strlen($string) > $length)
        ? substr($string, 0, $length - strlen($dots)) . $dots
        : $string;
}
Update 3:
It's been a while since I wrote this answer and I don't actually use this code any more. I prefer this function, which prevents breaking the string in the middle of a word using the wordwrap function:
function truncate($string, $length = 100, $append = "…") {
    $string = trim($string);
    if (strlen($string) > $length) {
        $string = wordwrap($string, $length);
        $string = explode("\n", $string, 2);
        $string = $string[0] . $append;
    }
    return $string;
}
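For illustration, here is a hypothetical call to the wordwrap-based truncate() above (the input string is an example, not from the original answer):

```php
<?php
$headline = "The quick brown fox jumped over the lazy dog";

// Truncates at the last whole word that fits in 20 characters,
// then appends the ellipsis character.
echo truncate($headline, 20); // e.g. "The quick brown fox…"
```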
My first F# application for AutoCAD
I.
What is the advantage compare with C#? Is it necessary to learn this new language?
Posted by: Spring | November 02, 2007 at 09:25 PM
There's certainly no need to learn it: it's another tool that may be of use to developers working in certain domains. The Wikipedia link in the article should be of some use in understanding the basic concepts of functional programming languages; otherwise, this page has quite a good explanation/comparison.
Kean
Posted by: Kean | November 02, 2007 at 11:11 PM
Kean,
Will F# open an opportunity for Autodesk to replace the AutoLISP language?
F# is a functional programming language like AutoLISP, so would it be easier to create a cross-language conversion tool to migrate AutoLISP code to F#?
Maybe Autodesk is planning to create its own .NET based AutoLISP...say A# ? :)
Please keep posting information about F# x AutoCAD.
Regards,
Posted by: Fernando Malard | November 04, 2007 at 09:20 PM
There are no plans to replace AutoLISP with F# (and I don't see us having any in my lifetime): the F# language is likely to be of use to developers integrating math-intensive/simulation technologies with AutoCAD (and other, yet-to-be-determined-by-me-at-least uses), but it is not an easy leap, even from LISP.
I'd like to gather information on what we might do inside AutoCAD to make F# a more natural environment for development, but only to provide tighter integration of an additional language option.
Kean
Posted by: Kean | November 04, 2007 at 11:09 PM
NA
Posted by: Ram Raja Hamal | November 06, 2007 at 07:41 AM
You've been kicked (a good thing) - Trackback from CadKicks.com
Posted by: CadKicks.com | November 10, 2007 at 03:46 PM
With Lisp one could use a function from a text string in a database (for example "(setq QTY (* 2.5 WIDTH))") to calculate part-specific quantities for BOMs. I haven't found a way to do this in vb(.net); perhaps F# could do the trick?
Posted by: Thomas | November 16, 2007 at 06:56 AM
Hi Thomas,
Yes - one of the features of LISP is the (eval) function.
It doesn't appeat to be native functionality in either C# or VB.NET, but it does appear to be possible to implement:
It remains to be seen whether either technique can be used to call through AutoCAD's managed API (the first one seems likely, I haven't really looked into the implementation of the second).
The equivalent in F# appears to be the quotations mechanism.
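As a rough, hypothetical sketch of the quotations mechanism mentioned above (not part of the original comment): the <@ ... @> brackets capture an expression as a tree rather than evaluating it, which is the closest F# analogue to LISP's code-as-data workflow.

```fsharp
// Minimal sketch: capture an expression as data instead of evaluating it.
open Microsoft.FSharp.Quotations

let formula : Expr<float> = <@ 2.5 * 10.0 @>  // an expression tree, not 25.0
printfn "%A" formula                          // prints the tree's structure
```

The captured tree can then be inspected, transformed, or handed to an evaluator, which is what a (setq QTY (* 2.5 WIDTH))-style scenario would require.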
Regards,
Kean
Posted by: Kean | November 16, 2007 at 09:11 AM
DO YOU HAVE ANY DevTV Introduction to REALDWG Programming ?
Posted by: ALBERTO BENITEZ | November 28, 2007 at 07:13 PM
It's in the works and should be available sometime next year.
Kean
Posted by: Kean | November 28, 2007 at 08:03 PM
Are there any extra dependencies on the F#-compiled dll as compared to a C#-compiled one?
I can run this example fine on my machine, but my colleague cannot get the TEST command after he NETLOADs the dll.
Would you have any guess why?
It's always tricky to debug things that work on one machine and not the other, when there's no obvious difference in the setup. He doesn't have F# installed, but I'm assuming that's not necessary?
Any ideas would be greatly appreciated.
Thanks.
Posted by: namin | March 15, 2008 at 06:31 AM
While F# code does compile down to IL, it does depend on certain new namespaces implemented in assemblies that are installed with the F# implementation. So you will (for now) need to install F# on machines running the code, until it becomes a more fully integrated part of Visual Studio and probably the .NET Framework.
Regards,
Kean
Posted by: Kean | March 15, 2008 at 03:41 PM
Just for completeness, I should mention that it's possible to remove the dependency of an F# application on the F# assemblies by compiling with the --standalone flag. This has the disadvantage of adding about 1Mb to the application as it statically links the F# library.
Posted by: namin | April 11, 2008 at 08:08 AM | http://through-the-interface.typepad.com/through_the_interface/2007/10/my-first-f-appl.html | crawl-002 | refinedweb | 721 | 60.45 |
The QApplication class manages the GUI application's control flow and main settings. More...
#include <qapplication.h>
Inherits QObject.
List of all member functions.
It contains the main event loop, where all events from the window system and other sources are processed and dispatched. It also handles the application's initialization and finalization, and provides session management. In addition, it handles most system-wide and application-wide settings.
For any GUI application that uses Qt, there is precisely one QApplication object, no matter whether the application has 0, 1, 2 or more windows at any time.
The QApplication object is accessible through the global pointer qApp. Its main areas of responsibility are:
The Application walk-through example contains a typical complete main() that does the usual things with QApplication.
Since the QApplication object does so much initialization, it must be created before any other objects related to the user interface are created.
Non-GUI programs: While Qt is not optimized or designed for writing non-GUI programs, it's possible to use some of its classes without creating a QApplication. This can be useful if you wish to share code between a non-GUI server and a GUI client.
See also Main Window and Related Classes.
See setColorSpec() for full details.
This enum type defines the 8-bit encoding of character string arguments to translate():
See also QObject::tr(), QObject::trUtf8(), and QString::fromUtf8().
The global qApp pointer refers to this application object. Only one application object should be created.
This application object must be constructed before any paint devices (including widgets, pixmaps, bitmaps etc.).
Note that argc and argv might be changed. Qt removes command line arguments that it recognizes; the modified argc and argv can be accessed later with qApp->argc() and qApp->argv(). On X11, the window system is initialized only if GUIenabled is TRUE. On Windows and Macintosh, currently the window system is always initialized, regardless of the value of GUIenabled. This may change in future versions of Qt.
The following example shows how to create an application that uses a graphical interface when available.
int main( int argc, char **argv )
{
#ifdef Q_WS_X11
    bool useGUI = getenv( "DISPLAY" ) != 0;
#else
    bool useGUI = TRUE;
#endif
    QApplication app( argc, argv, useGUI );

    if ( useGUI ) {
        // start GUI version
        ...
    } else {
        // start non-GUI version
        ...
    }
    return app.exec();
}
For Qt/Embedded, passing QApplication::GuiServer for type makes this application the server (equivalent to running with the -qws option).
Warning: Qt only supports TrueColor visuals at depths higher than 8 bits-per-pixel.
This is available only on X11.
Warning: Qt only supports TrueColor visuals at depths higher than 8 bits-per-pixel.
This is available only on X11.
This is useful for inclusion in the Help menu of an application. See the examples/menu/menu.cpp example.
This function is a convenience slot for QMessageBox::aboutQt().
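A minimal sketch of how this slot is typically wired into a Help menu; the menu text and the parent widget here are illustrative assumptions, not taken from the example file:

```cpp
// Hypothetical Qt 3 main-window code; 'this' is the main window.
// qApp is the global pointer to the QApplication object.
QPopupMenu *help = new QPopupMenu( this );
help->insertItem( "About &Qt", qApp, SLOT(aboutQt()) );
menuBar()->insertItem( "&Help", help );
```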
A modal widget is a special top level widget which is a subclass of QDialog that specifies the modal parameter of the constructor as TRUE. A modal widget must be closed before the user can continue with other parts of the program.
Modal widgets are organized in a stack. This function returns the active modal widget at the top of the stack.
See also activePopupWidget() and topLevelWidgets().
Popup widgets are organized in a stack. This function returns the active popup widget at the top of the stack.
See also activeModalWidget() and topLevelWidgets().
Returns the application top-level window that has the keyboard input focus, or 0 if no application window has the focus. Note that there might be an activeWindow() even if there is no focusWidget(), for example if no widget in that window accepts key events.
See also QWidget::setFocus(), QWidget::focus, and focusWidget().
Example: network/mail/smtp.cpp.
The default path list consists of a single entry, the installation directory for plugins. The default installation directory for plugins is INSTALL/plugins, where INSTALL is the directory where Qt was installed.
See also removeLibraryPath(), libraryPaths(), and setLibraryPaths().
The list is created using new and must be deleted by the caller.
The list is empty (QPtrList::isEmpty()) if there are no widgets.
Note that some of the widgets may be hidden.
Remember to delete the list as soon as you have finished using it. The widgets in the list may be deleted by someone else at any time.
See also topLevelWidgets(), QWidget::visible, and QPtrList::isEmpty().
Examples: chart/main.cpp and scribble/scribble.cpp.
Qt provides a global pointer, qApp, that points to the QApplication object, and through which you can access argc() and argv() in functions other than main().
See also argc() and QApplication::QApplication().
Examples: chart/main.cpp and scribble/scribble.cpp.
Examples: regexptester/regexptester.cpp and showimg/showimg.cpp.
This function is particularly useful for applications with many top-level windows. It could, for example, be connected to a "Quit" entry in the file menu as shown in the following code example:
// the "Quit" menu entry should try to close all windows QPopupMenu* file = new QPopupMenu( this ); file->insertItem( "&Quit", qApp, SLOT(closeAllWindows()), CTRL+Key_Q ); // when the last window.
Examples: action/application.cpp, application/application.cpp, helpviewer/helpwindow.cpp, mdi/application.cpp, and qwerty/qwerty.cpp.
See also startingUp().
See also QApplication::setColorSpec().
Example: showimg/showimg.cpp.
This function deals with session management. It is invoked when the QSessionManager wants the application to commit all its data.
Usually this means saving all open files, after getting permission from the user. Furthermore you may want to provide a means by which the user can cancel the shutdown.
Note that you should not exit the application within this function. Instead, the session manager may or may not do this afterwards, depending on the context.
The default implementation requests interaction and sends a close event to all visible top level widgets. If any event was rejected, the shutdown is canceled.
See also isSessionRestored(), sessionId(), saveState(), and the Session Management overview.
The default value on X11 is 1000 milliseconds. On Windows, the control panel value is used.
Widgets should not cache this value since it may be changed at any time by the user changing the global desktop settings.
See also setCursorFlashTime().
Returns QTextCodec::codecForTr().
The desktop widget is useful for obtaining the size of the screen. It may also be possible to draw on the desktop. We recommend against assuming that it's possible to draw on the desktop, since this does not work on all operating systems.
QDesktopWidget *d = QApplication::desktop();
int w = d->width();     // returns desktop width
int h = d->height();    // returns desktop height
This function enters the main event loop (recursively). Do not call it unless you really know what you are doing.
Use QApplication::eventLoop()->enterLoop() instead.
To create your own instance of QEventLoop or QEventLoop subclass create it before you create the QApplication object.
See also QEventLoop.
Example: distributor/distributor.ui.h.
See also quit(), exit(), processEvents(), and setMainWidget().
Examples: helpsystem/main.cpp, life/main.cpp, network/archivesearch/main.cpp, network/ftpclient/main.cpp, opengl/main.cpp, t1/main.cpp, and t4/main.cpp.
After this function has been called, the application leaves the main event loop and returns from the call to exec(). The exec() function returns retcode.
By convention, a retcode of 0 means success, and any non-zero value indicates an error.
Note that unlike the C library function of the same name, this function does return to the caller -- it is event processing that stops.
See also quit() and exec().
Examples: chart/chartform.cpp, extension/mainform.ui.h, and picture/picture.cpp.
This function exits from a recursive call to the main event loop. Do not call it unless you are an expert.
Use QApplication::eventLoop()->exitLoop() instead.
If you are doing graphical changes inside a loop that does not return to the event loop on asynchronous window systems like X11 or double buffered window systems like MacOS X, and you want to visualize these changes immediately (e.g. Splash Screens), call this function.
See also flushX(), sendPostedEvents(), and QPainter::flush().
Returns the application's global strut.
The strut is a size object whose dimensions are the minimum that any GUI element that the user can interact with should have. For example no button should be resized to be smaller than the global strut size.
See also setGlobalStrut().
This signal is emitted after the event loop returns from a function that could block.
Multiple message files can be installed. Translations are searched for in the last installed message file, then the one installed before that, and so on, back to the first installed message file. The search stops as soon as a matching translation is found.
See also removeTranslator(), translate(), and QTranslator::load().
Example: i18n/main.cpp.
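A typical installation sequence might look like the following sketch, loosely modeled on the i18n example; the .qm file name and search directory are assumptions:

```cpp
int main( int argc, char **argv )
{
    QApplication app( argc, argv );

    QTranslator translator( 0 );
    translator.load( "myapp_fr.qm", "." );  // assumed file name and path
    app.installTranslator( &translator );

    // Widgets created from here on pick up translations via tr().
    ...
    return app.exec();
}
```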
By default, Qt will try to use the desktop settings. Call setDesktopSettingsAware(FALSE) to prevent this.
Note: All effects are disabled on screens running at less than 16-bit color depth.
See also setEffectEnabled() and Qt::UIEffect.
Returns TRUE if the application has been restored from an earlier session; otherwise returns FALSE.
See also sessionId(), commitData(), and saveState().
This signal is emitted when the user has closed the last top level window.
The signal is very useful when your application has many top level widgets but no main widget. You can then connect it to the quit() slot.
For convenience, this signal is not emitted for transient top level widgets such as popup menus and dialogs.
See also mainWidget(), topLevelWidgets(), QWidget::isTopLevel, and QWidget::close().
Examples: addressbook/main.cpp, extension/main.cpp, helpviewer/main.cpp, mdi/main.cpp, network/archivesearch/main.cpp, qwerty/main.cpp, and regexptester/main.cpp.
If you want to iterate over the list, you should iterate over a copy, e.g.
QStringList list = app.libraryPaths();
QStringList::Iterator it = list.begin();
while( it != list.end() ) {
    myProcessing( *it );
    ++it;
}
See the plugins documentation for a description of how the library paths are used.
See also setLibraryPaths(), addLibraryPath(), removeLibraryPath(), and QLibrary.
Lock the Qt Library Mutex. If another thread has already locked the mutex, the calling thread will block until the other thread has unlocked the mutex.
See also unlock(), locked(), and Thread Support in Qt.
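For example, a worker thread in the Qt 3 threading model would guard any direct GUI access with this mutex; the label widget and text below are placeholders:

```cpp
// Called from a non-GUI thread. The Qt Library Mutex serializes
// access to GUI objects owned by the main thread.
qApp->lock();
statusLabel->setText( "step finished" );   // placeholder widget/text
qApp->unlock();
```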
Returns TRUE if the Qt Library Mutex is locked by a different thread; otherwise returns FALSE.
Warning: Due to different implementations of recursive mutexes on the supported platforms, calling this function from the same thread that previously locked the mutex will give undefined results.
See also lock(), unlock(), and Thread Support in Qt.
Returns the current loop level.
Use QApplication::eventLoop()->loopLevel() instead.
If a widget is passed in w, the default palette for the widget's class is returned. This may or may not be the application palette. In most cases there isn't a special palette for certain types of widgets, but one notable exception is the popup menu under Windows, if the user has defined a special background color for menus in the display settings.
See also setPalette() and QWidget::palette.
Examples: desktop/desktop.cpp, themes/metal.cpp, and themes/wood.cpp.
Usually widgets call this automatically when they are polished. It may be used to do some style-based central customization of widgets.
Note that you are not limited to the public functions of QWidget. Instead, based on meta information like QObject::className() you are able to customize any kind of widget.
See also QStyle::polish(), QWidget::polish(), setPalette(), and setFont().
Note: This function is thread-safe when Qt is built with thread support. Adds the event event, with the object receiver as the receiver of the event, to an event queue and returns immediately.
The event must be allocated on the heap since the post event queue will take ownership of the event and delete it once it has been posted.
When control returns to the main event loop, all events that are stored in the queue will be sent using the notify() function.
See also sendEvent() and notify().
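Because the queue takes ownership and deletes the event after delivery, the event must be heap-allocated. A sketch (the custom type id is an arbitrary example value):

```cpp
// QCustomEvent with a user-defined type id; types at or above
// QEvent::User are reserved for applications. Do not allocate the
// event on the stack and do not delete it yourself.
QApplication::postEvent( receiver,
                         new QCustomEvent( QEvent::User + 1 ) );
```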
You can call this function occasionally when your program is busy performing a long operation (e.g. copying a file).
See also exec(), QTimer, and QEventLoop::processEvents().
Examples: fileiconview/qfileiconview.cpp and network/ftpclient/main.cpp.
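For instance, a long-running loop can keep the GUI responsive by yielding to the event loop on every iteration; the copy routine below is a hypothetical stand-in:

```cpp
for ( int block = 0; block < numBlocks; ++block ) {
    copyOneBlock( block );     // hypothetical long-running operation
    qApp->processEvents();     // let the GUI repaint and handle input
}
```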
It's common to connect the lastWindowClosed() signal to quit(), and you also often connect e.g. QButton::clicked() or signals in QAction, QPopupMenu or QMenuBar to it.
Example:
QPushButton *quitButton = new QPushButton( "Quit" );
connect( quitButton, SIGNAL(clicked()), qApp, SLOT(quit()) );
See also exit(), aboutToQuit(), lastWindowClosed(), and QAction.
Examples: addressbook/main.cpp, mdi/main.cpp, network/archivesearch/main.cpp, regexptester/main.cpp, t2/main.cpp, t4/main.cpp, and t6/main.cpp.
This method is non-portable. It is available only in Qt/Embedded.
See also QWSDecoration.
If you create an application that inherits QApplication and reimplement this function, you get direct access to all QWS (Q Window System) events that are received from the QWS master process.
Return TRUE if you want to stop the event from being processed. Return FALSE for normal event dispatching.
Qt/Embedded on 8-bpp displays allocates a standard 216 color cube. The remaining 40 colors may be used by setting a custom color table in the QWS master process before any clients connect.
colorTable is an array of up to 40 custom colors. start is the starting index (0-39) and numColors is the number of colors to be set (1-40).
This method is non-portable. It is available only in Qt/Embedded.
This method is non-portable. It is available only in Qt/Embedded.
See also QWSDecoration.
See also addLibraryPath(), libraryPaths(), and setLibraryPaths().
Note: This function is thread-safe when Qt is built with thread support. Removes all events posted using postEvent() for receiver.
Examples: distributor/distributor.ui.h, network/archivesearch/archivedialog.ui.h, network/ftpclient/ftpmainwindow.ui.h, and showimg/showimg.cpp.
See also setReverseLayout().
This function deals with session management. It is invoked when the session manager wants the application to preserve its state for a future session.
For example, a text editor would create a temporary file that includes the current contents of its edit widgets.
Warning: Within this function, no user interaction is possible, unless you ask the session manager sm for explicit permission. See QSessionManager::allowsInteraction() and QSessionManager::allowsErrorInteraction() for details.
See also isSessionRestored(), sessionId(), commitData(), and the Session Management overview.
Sends event event directly to receiver receiver, using the notify() function. Returns the value that was returned from the event handler.
The event is not deleted when the event has been sent. The normal approach is to create the event on the stack, e.g.
QMouseEvent me( QEvent::MouseButtonPress, pos, 0, 0 );
QApplication::sendEvent( mainWindow, &me );
If you create the event on the heap you must delete it.
See also postEvent() and notify().
Example: popup/popup.cpp.
Note that events from the window system are not dispatched by this function, but by processEvents().
If receiver is null, the events of event_type are sent for all objects. If event_type is 0, all the events are sent for receiver.
Dispatches all posted events, i.e. empties the event queue.
Returns the current session's identifier.
If the application has been restored from an earlier session, this identifier is the same as it was in that previous session.
The session identifier is guaranteed to be unique both for different applications and for different instances of the same application.
See also isSessionRestored(), sessionKey(), commitData(), and saveState().
Returns the session key in the current session.
If the application has been restored from an earlier session, this key is the same as it was when the previous session ended.
The session key changes with every call of commitData() or saveState().
See also isSessionRestored(), sessionId(), commitData(), and saveState().
Be aware that the CustomColor and ManyColor choices may lead to colormap flashing: the foreground application gets (most) of the available colors, while the background windows will look less attractive. To check what mode you end up with, call QColor::numBitPlanes() once the QApplication object exists. A value greater than 8 (typically 16, 24 or 32) means true color.
* The color cube used by Qt has 216 colors whose red, green, and blue components always have one of the following values: 0x00, 0x33, 0x66, 0x99, 0xCC, or 0xFF.
See also colorSpec(), QColor::numBitPlanes(), and QColor::enterAllocContext().
Examples: helpviewer/main.cpp, opengl/main.cpp, showimg/main.cpp, t9/main.cpp, tetrax/tetrax.cpp, tetrix/tetrix.cpp, and themes/main.cpp.
Note that on Microsoft Windows, calling this function sets the cursor flash time for all windows.
See also cursorFlashTime().
This is the same as QTextCodec::setCodecForTr().
This static function must be called before creating the QApplication object, like this:
int main( int argc, char** argv )
{
    QApplication::setDesktopSettingsAware( FALSE );  // I know better than the user
    QApplication myApp( argc, argv );                // Use default fonts & colors
    ...
}
See also desktopSettingsAware().
Note that on Microsoft Windows, calling this function sets the double click interval for all windows.
See also doubleClickInterval().
Note: All effects are disabled on screens running at less than 16-bit color depth.
See also isEffectEnabled(), Qt::UIEffect, and setDesktopSettingsAware().
On application start-up, the default font depends on the window system. It can vary depending on both the window system version and the locale. This function lets you override the default font; but overriding may be a bad idea because, for example, some locales need extra-large fonts to support their special characters.
See also font(), fontMetrics(), and QWidget::font.
Examples: desktop/desktop.cpp, themes/metal.cpp, and themes/themes.cpp.
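A sketch of both forms of the call; the font choices and the class name are arbitrary examples:

```cpp
// Set the default font for the whole application; best done before
// widgets are created.
QApplication::setFont( QFont( "helvetica", 12 ) );

// Restrict a font change to one widget class and update existing
// widgets (second argument TRUE = inform widgets of the change).
QApplication::setFont( QFont( "courier", 10 ), TRUE, "QTextEdit" );
```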
Enabling global mouse tracking makes it possible for widget event filters or application event filters to get all mouse move events, even when no button is depressed. This is useful for special GUI elements, e.g. tooltips.
See also QWidget::mouseTracking.
The strut is a size object whose dimensions are the minimum that any GUI element that the user can interact with should have. For example no button should be resized to be smaller than the global strut size.
The strut size should be considered when reimplementing GUI controls that may be used on touch-screens or similar IO-devices.
Example:
QSize& WidgetClass::sizeHint() const
{
    return QSize( 80, 25 ).expandedTo( QApplication::globalStrut() );
}
See also globalStrut().
See also libraryPaths(), addLibraryPath(), removeLibraryPath(), and QLibrary.
In most respects the main widget is like any other widget, except that if it is closed, the application exits. Note that QApplication does not take ownership of the mainWidget, so if you create your main widget on the heap you must delete it yourself.
You need not have a main widget; connecting lastWindowClosed() to quit() is an alternative.
For X11, this function also resizes and moves the main widget according to the -geometry command-line option, so you should set the default geometry (using QWidget::setGeometry()) before calling setMainWidget().
See also mainWidget(), exec(), and quit().
Examples: chart/main.cpp, helpsystem/main.cpp, life/main.cpp, network/ftpclient/main.cpp, opengl/main.cpp, t1/main.cpp, and t4/main.cpp.
Application override cursors are intended for showing the user that the application is in a special state, for example during an operation that might take some time.
This cursor will be displayed in all the application's widgets until restoreOverrideCursor() or another setOverrideCursor() is called.
Example:
QApplication::setOverrideCursor( QCursor(Qt::WaitCursor) );
calculateHugeMandelbrot();              // lunch time...
QApplication::restoreOverrideCursor();
See also overrideCursor(), restoreOverrideCursor(), and QWidget::cursor.
Examples: distributor/distributor.ui.h, network/archivesearch/archivedialog.ui.h, network/ftpclient/ftpmainwindow.ui.h, and showimg/showimg.cpp.
If className is passed, the change applies only to widgets that inherit className (as reported by QObject::inherits()). If className is left 0, the change affects all widgets, thus overriding any previously set class specific palettes.
The palette may be changed according to the current GUI style in QStyle::polish().
See also QWidget::palette, palette(), and QStyle::polish().
Examples: i18n/main.cpp, themes/metal.cpp, themes/themes.cpp, and themes/wood.cpp.
Changing this flag in runtime does not cause a relayout of already instantiated widgets.
See also reverseLayout().
See also startDragDistance().
See also startDragTime().
Example usage:
QApplication::setStyle( new QWindowsStyle );
When switching application styles, the color palette is set back to the initial colors or the system defaults. This is necessary since certain styles have to adapt the color palette to be fully style-guide compliant.
See also style(), QStyle, setPalette(), and desktopSettingsAware().
Example: themes/themes.cpp.
For example, if the mouse position of the click is stored in startPos and the current position (e.g. in the mouse move event) is currPos, you can find out if a drag should be started with code like this:
if ( ( startPos - currPos ).manhattanLength() >
     QApplication::startDragDistance() )
    startTheDrag();
Qt uses this value internally, e.g. in QFileDialog.
The default value is 4 pixels.
See also setStartDragDistance(), startDragTime(), and QPoint::manhattanLength().
Qt also uses this delay internally, e.g. in QTextEdit and QLineEdit, for starting a drag.
The default value is 500 ms.
See also setStartDragTime() and startDragDistance().
See also closingDown().
See also setStyle() and QStyle.
See also flushX().
The list is created using new and must be deleted by the caller.
The list is empty (QPtrList::isEmpty()) if there are no top level widgets.
Note that some of the top level widgets may be hidden, for example the tooltip if no tooltip is currently shown.
Remember to delete the list as soon as you have finished using it. The widgets in the list may be deleted by someone else at any time.
See also allWidgets(), QWidget::isTopLevel, QWidget::visible, and QPtrList::isEmpty().
Note: This function is reentrant when Qt is built with thread support. Returns the translation text for sourceText, by querying the installed message files. The message files are searched from the most recently installed message file back to the first installed message file.
QObject::tr() and QObject::trUtf8() provide this functionality more conveniently.
context is typically a class name (e.g., "MyDialog") and sourceText is either English text or a short identifying text, if the output text will be very long (as for help texts).
comment is a disambiguating comment, for when the same sourceText is used in different roles within the same context. By default, it is null. encoding indicates the 8-bit encoding of character strings.
See the QTranslator documentation for more information about contexts and comments.
If none of the message files contain a translation for sourceText in context, this function returns a QString equivalent of sourceText. The encoding of sourceText is specified by encoding; it defaults to DefaultCodec.
See also QObject::tr(), installTranslator(), and defaultCodec().
Attempts to lock the Qt Library Mutex, and returns immediately. If the lock was obtained, this function returns TRUE. If another thread has locked the mutex, this function returns FALSE, instead of waiting for the lock to become available.
The mutex must be unlocked with unlock() before another thread can successfully lock it.
See also lock(), unlock(), and Thread Support in Qt.
Returns a pointer to the widget at global screen position pos, or 0 if there is no Qt widget there.
If child is FALSE and there is a child widget at position pos, the top-level widget containing it is returned. If child is TRUE the child widget at position pos is returned.
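For example, to find the exact child widget under the mouse cursor (the qDebug() output line is illustrative):

```cpp
// TRUE asks for the child widget itself rather than the top-level
// window that contains it.
QWidget *w = QApplication::widgetAt( QCursor::pos(), TRUE );
if ( w )
    qDebug( "widget under cursor: %s", w->className() );
```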
The message procedure calls this function for every message received. Reimplement this function if you want to process window messages that are not processed by Qt. If you don't want the event to be processed by Qt, then return TRUE; otherwise return FALSE.
If gotFocus is TRUE, widget will become the active window. Otherwise the active window is reset to NULL.
Returns the color used to mark selections in windows style.
See also setWinStyleHighlightColor().
If you create an application that inherits QApplication and reimplement this function, you get direct access to all X events that are received from the X server.
Return TRUE if you want to stop the event from being processed. Return FALSE for normal event dispatching.
See also x11ProcessEvent().
It returns 1 if the event was consumed by special handling, 0 if the event was consumed by normal handling, and -1 if the event was for an unrecognized widget.
See also x11EventFilter().
Prints a warning message containing the source code file name and line number if test is FALSE.
This is really a macro defined in qglobal.h.
Q_ASSERT is useful for testing pre- and post-conditions.
Example:
//
// File: div.cpp
//

#include <qglobal.h>

int divide( int a, int b )
{
    Q_ASSERT( b != 0 );  // this is line 9
    return a/b;
}
If b is zero, the Q_ASSERT statement will output the following message using the qWarning() function:
ASSERT: "b == 0" in div.cpp (9)
See also qWarning() and Debugging.
Prints a debug message msg, or calls the message handler (if it has been installed).
Warning: Passing (const char *)0 as argument to qDebug might lead to crashes on certain platforms due to the platforms printf implementation.
See also qWarning(), qFatal(), qInstallMsgHandler(), and Debugging.
Prints a fatal error message msg and exits, or calls the message handler (if it has been installed).
Warning: Passing (const char *)0 as argument to qFatal might lead to crashes on certain platforms due to the platforms printf implementation.
See also qDebug(), qWarning(), qInstallMsgHandler(), and Debugging.
Installs a Qt message handler h. Example:

void myMessageOutput( QtMsgType type, const char *msg )
{
    switch ( type ) {
        case QtDebugMsg:
            fprintf( stderr, "Debug: %s\n", msg );
            break;
        case QtWarningMsg:
            fprintf( stderr, "Warning: %s\n", msg );
            break;
        case QtFatalMsg:
            fprintf( stderr, "Fatal: %s\n", msg );
            abort();                    // deliberately core dump
    }
}

int main( int argc, char **argv )
{
    qInstallMsgHandler( myMessageOutput );
    QApplication a( argc, argv );
    ...
    return a.exec();
}
See also qDebug(), qWarning(), qFatal(), and Debugging.
Obtains information about the system.
The system's word size in bits (typically 32) is returned in *wordSize. The *bigEndian is set to TRUE if this is a big-endian machine, or to FALSE if this is a little-endian machine.
In debug mode, this function calls qFatal() with a message if the computer is truly weird (i.e. different endianness for 16 bit and 32 bit integers); in release mode it returns FALSE.
Prints the message msg and uses code to get a system specific error message. When code is -1 (the default), the system's last error code will be used if possible. Use this method to handle failures in platform specific API calls.
This function does nothing when Qt is built with QT_NO_DEBUG defined.
Returns the Qt version number as a string, for example, "2.3.0" or "3.0.5".
The QT_VERSION define has the numeric value in the form: 0xmmiibb (m = major, i = minor, b = bugfix). For example, Qt 3.0.5's QT_VERSION is 0x030005.
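This makes compile-time feature switches straightforward; for example (the version threshold is an arbitrary choice):

```cpp
#if QT_VERSION >= 0x030005
    // code that relies on behavior introduced in Qt 3.0.5
#else
    // fallback for older Qt releases
#endif
```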
Prints a warning message msg, or calls the message handler (if it has been installed).
Warning: Passing (const char *)0 as argument to qWarning might lead to crashes on certain platforms due to the platforms printf implementation.
See also qDebug(), qFatal(), qInstallMsgHandler(), and Debugging.
This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved. | http://doc.trolltech.com/3.3//qapplication.html | crawl-002 | refinedweb | 4,316 | 51.14 |
The java.util.jar and java.util.zip packages let Java applications programmatically access archive files. With the diverse set of resource types that an archive file can store, there is a need for better tools that read, query, and load these resources from Java applications. The figure below illustrates the classes and interfaces that I implement in this article, and which provide a code foundation for such tools.
Classes and interfaces implemented in this article
John Mitchell, Arthur Choi, and Todd Sundsted have demonstrated how to extract data from archive files in previous JavaWorld articles (see the Resources section below for links). For the purposes of this article, I'm assuming that you're familiar with these basic concepts.
I'll start by explaining what a Java archive filter is and walk through some example source code. Next, I'll implement the JarClassTable class. JarClassTable demonstrates how you can combine archive filters with a simple caching system to create simple, easy-to-define views of resources in Java archive files. You'll see how easy this all is in Java.
An archive file is simply a zip file with entries of various types. You want to be able to load archive files of varying sizes and apply filters to them. The JDK provides ways in which you can access and load the complete contents of an archive, but does not supply code with which you can access specific resources within the file. In order to do that, or create application-defined views of zip files, you'll need an archive filter.
You implement archive filters in much the same way that you would pass a java.io.FileFilter to the list() method of java.io.File in order to get specific files in a directory. In the following example, SuffixZipEntryFilter, a zip filter, is used by JarInfo to provide a view of an archive file. The full source code for SuffixZipEntryFilter and JarInfo is packaged in the jar file in Resources.
ZipEntryFilter classFilter = new SuffixZipEntryFilter(".class");
JarInfo jinfo = new JarInfo("yourarchive.jar", classFilter);
System.out.println( jinfo );
The code above displays all entries within yourarchive.jar whose zip entry names end with .class. Like java.io.FileFilter, ZipEntryFilter is an interface that has a single accept() method; however, it filters on ZipEntry, not java.io.File.
public interface ZipEntryFilter {
    public boolean accept( ZipEntry ze );
}
SuffixZipEntryFilter implements ZipEntryFilter, placing the filter logic inside the accept() method.
public class SuffixZipEntryFilter implements ZipEntryFilter {
    private String fSuffix;

    public SuffixZipEntryFilter(String suffix) {
        fSuffix = suffix;
    }

    public boolean accept( ZipEntry ze ) {
        if( ze == null || fSuffix == null )
            return false;
        return ze.getName().endsWith(fSuffix);
    }
}
JarInfo uses zip entry filters to weed out unacceptable resources. The following example demonstrates how JarInfo's constructor takes both a zip file and a zip entry filter to determine which zip entries it should extract from the archive:
#include <deal.II/base/mpi.h>
A class that is used to initialize the MPI system at the beginning of a program and to shut it down again at the end. It also allows you to control the number of threads used in each MPI task.
If deal.II is configured with PETSc, the library will be initialized in the beginning and destroyed at the end automatically (internally by calling PetscInitialize() and PetscFinalize()).
If deal.II is configured with p4est, that library will also be initialized in the beginning, and destroyed at the end automatically (internally by calling sc_init(), p4est_init(), and sc_finalize()).
If a program uses MPI, one would typically just create an object of this type at the beginning of main(). The constructor of this class then runs MPI_Init() with the given arguments. At the end of the program, the compiler will invoke the destructor of this object, which in turn calls MPI_Finalize() to shut down the MPI system.
This class is used in step-32, for example.
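As a rough usage sketch (assuming a deal.II build with MPI support; the trailing thread-limit argument of 1 is just an example value):

```cpp
#include <deal.II/base/mpi.h>

int main(int argc, char *argv[])
{
  // Initializes MPI (and PETSc/p4est, if configured); the destructor
  // finalizes everything automatically when main() returns.
  dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  // ... parallel work goes here ...

  return 0;
}
```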
Definition at line 413 of file mpi.h.
Initialize MPI (and, if deal.II was configured to use it, PETSc) and set the number of threads used by deal.II (via the underlying Threading Building Blocks library) to the given parameter.
The number of threads is set to either max_num_threads or, following the discussion above, a number of threads equal to the number of cores allocated to this MPI process. However, MultithreadInfo::set_thread_limit() in turn also evaluates the environment variable DEAL_II_NUM_THREADS. Finally, the worker threads can only be created on cores to which the current MPI process has access; some MPI implementations limit the number of cores each process may access to one or a subset of cores in order to ensure better cache behavior. Consequently, the number of threads that will really be created will be the minimum of the argument passed here, the environment variable (if set), and the number of cores accessible to the thread.
MPI implementations generally require that MPI_Init() be called as close as possible to the top of main(). Consequently, this extends to the current class: the best place to create an object of this type is also at or close to the top of main().
Definition at line 295 of file mpi.cc. | http://www.dealii.org/developer/doxygen/deal.II/classUtilities_1_1MPI_1_1MPI__InitFinalize.html | CC-MAIN-2017-43 | refinedweb | 354 | 60.65 |
As long as there is something simple possible like:

#define cinepaint_func gimp_func
#namespace_GIMP
gimp_func (whatever_You_like, arguments);
#endif
#namespace_CinePaint
cinepaint_func (whatever_You_like, arguments);
#endif

possible, I think all is fine. But starting of something like:

/* one design in GIMP: */
gimp_func (opts_float_array[3], "string");

/* and for the same in cinepaint: */
cinepaint_func (integer_A, integer_B, enum SELECT_STRING, double_C);

Most important is IMHO to take the same arguments and stay in the sense of a function name (as much as possible) common.

Do You have any concerns using gimp_func(....) in CinePaint?

regards
Kai-Uwe

Am 14.08.04, 18:16 +0200 schrieb Sven Neumann:
> Hi,
>
> Kai-Uwe Behrmann <[EMAIL PROTECTED]> writes:
>
> > Are You interessted into sharing the same PDB function names from
> > what is allready gone into CinePaint?
>
> You aren't still using or even still adding PDB names that use the
> GIMP namespace to CinePaint or are you?
>
>
> Sven

_______________________________________________
Gimp-developer mailing list
[EMAIL PROTECTED]
The Strava heatmap as contextual map tiles with contextily
This document quickly demonstrates how to use the Strava heatmap as a source of map tiles that can be easily integrated in any modern Python geo-data workflow.
# Display on the notebook
%matplotlib inline

# Import contextily
import contextily as ctx
src = ''
lvl = ctx.Place('Liverpool', url=src)
ctx.plot_map(lvl.im);
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
import rasterio as rio
from rasterio import mask
from rasterio.plot import show as rioshow
from shapely.geometry import mapping as shply2json
Because contextily allows you to combine webtile maps with any other data you may have, you can easily build more sophisticated maps. In this case, we will recreate the London boroughs example with Strava data. Here is what we will attempt to replicate:
You can download the original borough data from here and a reprojected GeoJSON from here, which is what we will use:
brs = gpd.read_file('boroughs.geojson')
brs.plot();
In order to render the images faster and not have to rely on the remote server to pull the tiles, we will first download them all at once and store them as a tiff raster file (keep in mind this might take a little bit to run):
%%time
minX, minY, maxX, maxY = brs.total_bounds  # bounding box of the boroughs layer
raster_link = 'london.tiff'
_ = ctx.bounds2raster(minX, minY, maxX, maxY, 12, raster_link, url=src)
CPU times: user 0 ns, sys: 0 ns, total: 0 ns Wall time: 3.81 µs
Just to get a sense, this is what the entire area of London looks like through the lens of Strava data:
r_src = rio.open(raster_link) f, ax = plt.subplots(1, figsize=(12, 12)) rioshow(r_src.read(), ax=ax) ax.set_axis_off() plt.show() | http://nbviewer.jupyter.org/urls/gist.github.com/darribas/e9cee717a64125a39db269b77598d998/raw/569d8a727f31b1ad8390a10d3b1429aff2ee8dae/strava_boroughs.ipynb | CC-MAIN-2018-26 | refinedweb | 282 | 63.9 |
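The imports of mask and shply2json above come into play when clipping the raster to the borough boundaries. A minimal sketch of that step could look as follows (untested here; it assumes the london.tiff raster and the brs table created above, with both layers in the same CRS):

```python
# Convert each borough polygon to a GeoJSON-like dict and clip the raster
geoms = [shply2json(geom) for geom in brs.geometry]
clipped, transform = mask.mask(r_src, geoms, crop=True)

f, ax = plt.subplots(1, figsize=(12, 12))
rioshow(clipped, transform=transform, ax=ax)
brs.plot(ax=ax, facecolor='none', edgecolor='white', linewidth=0.5)
ax.set_axis_off()
plt.show()
```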
Distributed File System: Namespace Management Questions
Published: August 3, 2011
Updated: February 1, 2012
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, Windows Server 2008 R2
This FAQ answers questions about managing Distributed File System (DFS) namespaces. For other questions, see DFS Namespaces: Frequently Asked Questions.
For a list of recent changes to this topic, see the Change History section of this topic.
No. This is not supported.
No, this service is required to provide domain-based namespace referrals and SYSVOL referrals.
You can enable access-based enumeration on existing namespaces in the following conditions:
- Standalone namespaces on servers running Windows Server 2008 R2 or Windows Server 2008.
- Domain-based namespaces that use the Windows Server 2008 mode (which can be hosted by servers running Windows Server 2008 R2 or Windows Server 2008).
To enable access-based enumeration on a domain-based namespace that uses the Windows 2000 Server mode, you must first migrate the namespace to Windows Server 2008 mode. To do so, see Migrate a Domain-based Namespace to Windows Server 2008 Mode. For information about namespace modes, see Choose a Namespace Type.
No. All namespace servers (also known as root targets) for a given domain-based DFS namespace must be in the same domain.
No, DFS Namespaces can’t selectively hide folders. However, you can prevent unauthorized users from viewing folders to which they don’t have permission to access by enabling access-based enumeration on a namespace server. For more information, see Enable Access-Based Enumeration on a Namespace.
To manage DFS namespaces, you can use the tools described in article 830856 and article 262845 in the Microsoft Knowledge Base.
- Administrators must not enable Offline Files on a path that shares the same first level name as a path used for roaming profiles. For example, if roaming profiles are stored in a domain-based namespace named \\Domain\Roam, Offline Files should not be enabled for a domain-based namespace named \\Domain\Project. Similarly, if roaming profiles are stored on a stand-alone namespace or regular shared folder, such as \\Server\Roam, Offline Files should not be enabled for a path such as \\Server\Other.
Offline Files treats the first level of the path specially; for more information, see article 221111 in the Microsoft Knowledge Base.
Python Pandas Pro – Session Three – Setting and Operations
Want to share your content on python-bloggers? click here.
Following on from the previous post, in this post we are going to learn about setting values, dealing with missing data and data frame operations.
Setting values
The below example shows how to use the iat command we learned in the last lesson to set values.
Setting values by position
As stated, the implementation below can be used to set values by position:
from gapminder import gapminder as gp
import pandas as pd
import numpy as np

# view the head of the data
df = gp.copy()
print(df.head(10))
df_orig = df.copy()

# ------------------------------------ Setting --------------------------------------------------#
df.iat[0, 2] = '2020' # Set values by position
print(df.iat[0, 2])
Printing this out we can confirm that we have set the year column (column index 2) to a new value.
Setting by assignment with NumPy array
The next example shows how you can set by assignment using a numpy array to replace values in a data frame. This can be implemented below:
df.loc[:, 'pop'] = np.array([5] * len(df)) # Setting by assignment with a NumPy array
print(df)
print(np.array(5) * len(df))
This assigns an array of 5s, created with np.array and sized with the len command, to the population column across the whole data frame:
1     Afghanistan    Asia  1957   30.332  5  820.853030
2     Afghanistan    Asia  1962   31.997  5  853.100710
3     Afghanistan    Asia  1967   34.020  5  836.197138
4     Afghanistan    Asia  1972   36.088  5  739.981106
...           ...     ...   ...      ... ..         ...
1699     Zimbabwe  Africa  1987   62.351  5  706.157306
1700     Zimbabwe  Africa  1992   60.377  5  693.420786
1701     Zimbabwe  Africa  1997   46.809  5  792.449960
1702     Zimbabwe  Africa  2002   39.989  5  672.038623
1703     Zimbabwe  Africa  2007   43.487  5  469.709298

[1704 rows x 6 columns]
8520
Please note: these two approaches do modify the original data frame, so it may be best to use the .copy() function to take a copy of the data frame.
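To illustrate with a small throwaway frame (not the GapMinder data), taking a .copy() first keeps the original intact:

```python
import pandas as pd

df_a = pd.DataFrame({'year': [1952, 1957]})
df_b = df_a.copy()

df_b.iat[0, 0] = 2020  # modify the copy only

print(df_a.iat[0, 0])  # 1952 - the original is untouched
print(df_b.iat[0, 0])  # 2020
```

Without the .copy(), df_b would be the same object as df_a and both would change.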
Missing data
First of all, our dataset contains no missing records, so we will set rows 1 through 999 to missing:

df[1:1000] = np.nan
print(df)
Printing this to the console, you will see the changes have taken effect:
0     Afghanistan    Asia  2020.0   28.801  5.0  779.445314
1             NaN     NaN     NaN      NaN  NaN         NaN
2             NaN     NaN     NaN      NaN  NaN         NaN
3             NaN     NaN     NaN      NaN  NaN         NaN
4             NaN     NaN     NaN      NaN  NaN         NaN
...           ...     ...     ...      ...  ...         ...
1699     Zimbabwe  Africa  1987.0   62.351  5.0  706.157306
1700     Zimbabwe  Africa  1992.0   60.377  5.0  693.420786
1701     Zimbabwe  Africa  1997.0   46.809  5.0  792.449960
1702     Zimbabwe  Africa  2002.0   39.989  5.0  672.038623
1703     Zimbabwe  Africa  2007.0   43.487  5.0  469.709298
Drop rows that have missing data
To drop rows that have missing data, you can specify the how parameter of the dropna function. Here we will drop any row that contains a NaN, as missing values are pesky when working with other packages such as SciPy and scikit-learn.
df2 = df.copy() # Take a copy of df to make sure the changes are only made to df2
print(df2.dropna(how='any'))
To fill values that have an NA with a specific value you can use the fillna method. More advanced approaches, such as interpolation, are also available.
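As a quick illustration on a small made-up frame, fillna can, for example, replace missing entries with the column mean:

```python
import numpy as np
import pandas as pd

df_demo = pd.DataFrame({'pop': [10.0, np.nan, 30.0]})

# Replace the missing entry with the mean of the non-missing values
filled = df_demo.fillna(df_demo['pop'].mean())
print(filled)
# pop becomes [10.0, 20.0, 30.0] with no NaNs left
```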
To search for missing values you can simply use the isna function to check whether each value is missing:

print(pd.isna(df))
print(pd.isna(df).any().any()) # Check to see if missing values are contained anywhere in df
This returns:
     country  continent   year  lifeExp    pop  gdpPercap
0      False      False  False    False  False      False
1       True       True   True     True   True       True
2       True       True   True     True   True       True
3       True       True   True     True   True       True
4       True       True   True     True   True       True
...      ...        ...    ...      ...    ...        ...
1699   False      False  False    False  False      False
1700   False      False  False    False  False      False
1701   False      False  False    False  False      False
1702   False      False  False    False  False      False
1703   False      False  False    False  False      False

[1704 rows x 6 columns]
True
The first statement returns a boolean matrix indicating which rows and columns have NAs. The any command can be used to check if null values exist anywhere in the data frame.
Operations on data frames
This section shows how to apply simple and lambda (anonymous) functions on a data frame.
Simple operations
I want to get the mean of all the values in the GapMinder dataset. This is simply achieved as follows:

df = df_orig.copy()
print(df.mean())
Giving the mean of all the observations:
year         1.979500e+03
lifeExp      5.947444e+01
pop          2.960121e+07
gdpPercap    7.215327e+03
dtype: float64
To make this a row-wise mean, you would need to specify the axis to be used:
print(df.mean(1))
This gives the operation across the numerical values on the row axis, instead of the whole dataset:
0       2.107023e+06
1       2.310936e+06
2       2.567483e+06
3       2.885201e+06
4       3.270552e+06
            ...
1699    2.304793e+06
1700    2.676771e+06
1701    2.851946e+06
1702    2.982319e+06
1703    3.078416e+06
Length: 1704, dtype: float64
Applying functions on data frame
Say I want a cumulative sum of the population. This can be implemented by:

df_sub = df.loc[:, ['pop']]
df_sub_copy = df_sub.copy()
print(df_sub.apply(np.cumsum))
Obviously you would want to group this by the country, but we come to that later when we look more into aggregation functions:
              pop
0         8425333
1        17666267
2        27933350
3        39471316
4        52550776
...           ...
1699  50394118807
1700  50404823147
1701  50416228095
1702  50428154658
1703  50440465801
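For reference, grouping before accumulating restarts the running total for each group. A small made-up frame shows the idea (grouping is covered properly in a later post):

```python
import pandas as pd

df_g = pd.DataFrame({
    'country': ['A', 'A', 'B', 'B'],
    'pop': [1, 2, 10, 20],
})

# The cumulative sum restarts for each country
df_g['pop_cum'] = df_g.groupby('country')['pop'].cumsum()
print(df_g)
# pop_cum is [1, 3, 10, 30]
```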
Applying anonymous function using Lambda expression
Anonymous functions are functions without a name; they exist only in the scope where they are called, so they do not persist in memory and are not visible to any other part of the program.
A lambda can be used to get the range (maximum minus minimum) of the population:

# Lambda functions
print(df_sub_copy.apply(lambda x: x.max() - x.min()))
This prints out the range of the population:

pop    1318623085
dtype: int64
Histogramming in Python
We will create a separate Pandas series object and use the value_counts() function to emulate a histogram:
s = pd.Series(np.random.randint(0, 7, size=10))
print(s)
print(s.value_counts())
This creates a series of 10 random integers drawn with NumPy's random module; the values lie between 0 and 6, since the upper bound of 7 is exclusive.

The value_counts function then counts the frequency of each value:
0    6
1    3
2    3
3    4
4    6
5    0
6    5
7    1
8    4
9    5
dtype: int32
6    2
5    2
4    2
3    2
1    1
0    1
dtype: int64
There is much more that can be done with operations on data frames. This just scratches the surface.
In the next tutorial we will look at how we can merge and join data frames.
10 March 2009 02:02 [Source: ICIS news]
NINGBO, CHINA (ICIS news)--Ningbo SK Zhenbang Chemical plans to start up its new 150,000 tonne/year bottle-grade polyethylene terephthalate (PET) plant on 19 March, company officials said.
The plant is located at Cixi in Zhejiang province, eastern China.
“Everything is on schedule and we would start feeding in the raw materials on 19 March. The plant can produce up to 180,000 (tonnes/year),” said SK Zhenbang CEO KS Cho.
This PET line in Cixi was reconfigured from an old fibre-grade PET unit, which was formerly 100% owned by Zhejiang Zhenbang Chemical Fibre.
SK Zhenbang would market the new products under the SkyPET brand, which is owned by SK Chemicals of South Korea.
About 40% of the products would be exported while the majority will be sold within the Chinese domestic market.
SK Zhenbang is a joint venture between Zhenbang Chemical Fibre, Korean trading house SK Networks and Korean PET producer SK Chemicals.
Zhenbang owns 54% of the joint venture with the remaining stake held by the two Korean companies.
Introduction: UART and I2C Communications Between UNO and MEGA2560
Arduino is not alone in the universe; it can use different digital communication protocols to talk with quite a few other systems. It's one of the great features of the platform; it has all of the standard protocols built in, allowing it to communicate with thousands of different devices.
Digital communication has numerous advantages. It is less susceptible to noise than analog communication, and it usually only requires two lines to communicate with hundreds of devices. This allows communication with the computer, with other microcontrollers such as the Arduino, with the Internet, and even with web pages that store data.
Step 1: Things You Need
- 2 Arduinos - In this case, I am using DFRduino Uno Rev3 and DFRobot Mega2560
- Jumper Wires
Step 2: Software Serial and UART Between Arduinos
The serial port, professionally called Universal Asynchronous Receiver/Transmitter (UART) communication, is generally used to program and debug the Arduino via the USB port. There are multiple sensors and systems that use UART as the main communication method, and sometimes we need to communicate between two Arduinos to share information, workload, and so on.
However, most Arduinos only have one serial port, which is used by the USB connection. Serial communication can only happen between two devices. What can we do now? With a bit of luck, we'll have an Arduino Mega or similar that has up to four serial ports, but if we don't, there still is a solution. A special library has been written that simulates an UART port on other digital pins. There are a few drawbacks, but it generally works.
Step 3: How to Do It
Follow these steps to connect two Arduinos using software serial:

Assuming we use pins 8 and 9 for RX and TX on both Arduinos, connect pin 8 on one Arduino to pin 9 on the other, and pin 9 on the first Arduino to pin 8 on the second.
- Connect the GND of both Arduinos together.
- If we don't power up both Arduinos via USB, then we need to power up at least one and connect 5V on each together.
Step 4: Code
The Master Arduino will receive commands from the computer and write them over the soft serial. Take a look at the Controlling the Arduino over serial project now.
// Include the Software Serial library
#include <SoftwareSerial.h>

// Define a Software Serial object and the used pins
SoftwareSerial softSerial(8, 9); // RX, TX

void setup() {
  Serial.begin(9600);
  softSerial.begin(9600);
}

void loop() {
  // Check for received characters from the computer
  if (Serial.available()) {
    // Write what is received to the soft serial
    softSerial.write(Serial.read());
  }
}
And here is the slave code that interprets the characters sent from the master. If the character is 'a', it will start the built-in LED. If the character is 'x', it will stop it:
// Include the Software Serial library
#include <SoftwareSerial.h> // Define a Software Serial object and the used pins SoftwareSerial softSerial(8, 9); // LED Pin int LED = 13; void setup() { softSerial.begin(9600); pinMode(LED, OUTPUT); } void loop() { // Check if there is anything in the soft Serial Buffer if (softSerial.available()) { // Read one value from the soft serial buffer and store it in the variable com int com = softSerial.read(); // Act according to the value received if (com == 'x') { // Stop the LED digitalWrite(LED, LOW); } else if (com == 'a') { // Start the LED digitalWrite(LED, HIGH); } } }
Step 5: How It Works
Software serial simulates a standard serial port on different digital pins on the Arduino. It is very handy in general; however, it is simulated, so it doesn't have dedicated hardware. This means it will take resources, particularly execution time and memory. Otherwise, it works just like a normal serial connection. All the functions present in the normal serial port are also present in software serial.
Code Breakdown
First, we will look at the master code, which takes characters received on the normal serial port and writes them to our simulated serial connection. In the beginning, we include the SoftwareSerial.h library:
#include <SoftwareSerial.h>
Then, we need to declare a serial object. We do so using the following syntax:
SoftwareSerial softSerial(8, 9); //RX,TX
The serial connection will be called, in this case, softSerial . It will use pin 8 for RX and pin 9 for TX. Take a look at the There's more… section for some information on which pins we can use.
Using the softSerial object, we can use all functions found in a normal serial connection, such as softSerial.read(), softSerial.write(), and so on. In this code, we check if there is anything in the real serial buffer. If there is, we read it from that buffer and we write it to the software serial:
if (Serial.available()) {
  softSerial.write(Serial.read());
}
In the slave code, we run a simplified version of the code from the Controlling the Arduino over serial recipe, except that we use a software serial. This only changes the declaration and instead of writing Serial.read(), Serial.available(), and so on, we write softSerial.read() and softSerial.available().
Step 6: There's More...
Software serial has some important considerations and drawbacks. Here we tackle a few of them.
Usable Pins
We can't use every pin on the Arduino for software serial. For TX, generally, anything can be used, but for the RX pin, only interrupt-enabled pins can. On the Arduino Leonardo and Micro, only pins 8, 9, 10, 11, 14, 15, and 16 can be used, while on the Mega or Mega 2560 only 10, 11, 12, 13, 50, 51, 52, 53, 62, 63, 64, 65, 66, 67, 68, and 69 can be used.
More software serial connections
It is possible to have more than one software serial connection; however, only one can receive data at a time. This will generally cause data loss. There is an alternative software serial library written by Paul Stoffregen, which tackles exactly this problem.
Interference
The software serial library uses the same timer as a few other libraries. This means that other functions might be affected by the use of a simulated serial port. The best known interference is with the Servo library. The best way to overcome this is to use the Arduino Mega, or something similar, which has four hardware serial ports—enough for any project.
General connection tips
UART connections are very simple; however, there are three key aspects to remember. Whenever connecting two serial devices, the TX pin on one device goes to the RX pin on the other device. If we do that the opposite way, we might kill the device! Also, the devices need to at least share the same Ground (GND). Lastly, the devices have to be set at the same speed, typically referred to as the baud rate.
Step 7: I2C Between Arduinos

Inter-Integrated Circuit (I2C) is a protocol used to communicate between components on motherboards, in cameras, and in any embedded electronic system. Here, we will make an I2C bus using two Arduinos. We will program one master Arduino to command the other slave Arduino to blink its built-in LED once or twice depending on the received value.
Step 8: Things You Need
- 2 Arduinos - In this case, I am using DFRduino Uno Rev3 and DFRobot MEGA 2560
- Jumper Wires
Step 9: How to Do It…
Follow these steps to connect two Arduinos using I2C:
- Connect pin A4 and pin A5 on one Arduino to the same pins on the other one.
- The GND line has to be common for both Arduinos. Connect it with a jumper.
Remember never to connect 5 V and 3.3 V Arduinos together. It won't hurt the 5V Arduino, but it will certainly annoy its 3.3 V.
Step 10: Code
The following code is split in two parts: the master code and the slave code,which run on two different Arduinos.
First, let's take a look at the master code:
// Include the standard Wire library for I2C
#include <Wire.h>

int x = 0;

void setup() {
  // Start the I2C Bus as Master
  Wire.begin();
}

void loop() {
  Wire.beginTransmission(9); // transmit to device #9
  Wire.write(x);             // sends x
  Wire.endTransmission();    // stop transmitting
  x++;                       // Increment x
  if (x > 5) x = 0;          // reset x once it gets 6
  delay(500);
}
And here is the slave code that interprets the characters sent from the master:
#include <Wire.h>

int LED = 13;
int x = 0;

void setup() {
  pinMode (LED, OUTPUT);
  // Start the I2C Bus as Slave on address 9
  Wire.begin(9);
  // Attach a function to trigger when something is received.
  Wire.onReceive(receiveEvent);
}

void receiveEvent(int bytes) {
  x = Wire.read(); // read one value from the I2C
}

void loop() {
  // If the value received is 0, blink the LED for 200 ms
  if (x == 0) {
    digitalWrite(LED, HIGH);
    delay(200);
    digitalWrite(LED, LOW);
    delay(200);
  }
  // If the value received is 3, blink the LED for 400 ms
  if (x == 3) {
    digitalWrite(LED, HIGH);
    delay(400);
    digitalWrite(LED, LOW);
    delay(400);
  }
}
Step 11: How It Works…
To briefly go through the theory, I2C requires two digital lines: Serial Data Line (SDA) to transfer data and Serial Clock Line (SCL) to keep the clock. Each I2C connection can have one master and multiple slaves. A master can write to slaves and request the slaves to give data, but no slave can directly write to the master or to another slave. Every slave has a unique address on the bus, and the master needs to know the addresses of each slave it wants to access. Now let's go through the code.
Step 12: Code Breakdown
First, let's look at the master. We need to include the required Wire.h library:
#include <Wire.h>
Then, in the setup function, we begin the I2C bus using the Wire.begin() function.
If no argument is provided in the function, Arduino will start as a master.
Lastly, we send a value x, which is between 0 and 5. We use the following functions to begin a transmission to the device with the address 9, write the value, and then stop the transmission:
Wire.beginTransmission(9); // transmit to device #9
Wire.write(x);          // sends x
Wire.endTransmission(); // stop transmitting
Now let's explore the slave Arduino code. We also include the Wire.h library here, but now we start the I2C bus using Wire.begin(9). The number in the argument is the address we want to use for the Arduino. All devices with address 9 will receive the transmission. Now we need to react somehow when we receive an I2C transmission. The following function attaches a trigger function that runs whenever a byte is received. Better said, whenever the Arduino receives a byte on I2C, it will run the function we tell it to run:
Wire.onReceive(receiveEvent);
And this is the function. Here, we simply store the value of the received character:
void receiveEvent(int bytes) {
  x = Wire.read();
}
In loop(), we simply interpret that value to blink the built-in LED at different speeds depending on what was received.
Step 13: There's More…
I2C is a complicated transmission protocol, but it's very useful. All Arduinos implement it, with a few differences in pin mappings.
Comparing different Arduino categories
The pins for I2C are different in different Arduino categories. Here are the most common in the image above.
More about I2C
Each I2C bus can support up to 112 devices. All devices need to share GND. The speed is around 100 kb/s—not very fast but still respectable and quite useable. It is possible to have more than one master on a bus, but it's really complicated and generally avoided.
A lot of sensors use I2C to communicate, typically Inertial Measurement Units, barometers, temperature sensors, and some Sonars. Remember that I2C is not designed for long cable lengths. Depending on the cable type used, 2 m might already cause problems.
Connecting more devices
If we need to connect more than two devices on an I2C bus, we just have to connect all SDA and SCL lines together. We will need the address of every slave to be addressed from the master Arduino.
You can find a good explanation on how a master should request information from a slave here. This is an example closer to real life, as this is the way we usually request information from sensors.
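As a rough, untested sketch of that request/response pattern (the slave address 9 and the two-byte payload are made-up values; the slave side is shown as comments so both halves fit in one listing):

```cpp
// --- Master: ask slave #9 for 2 bytes ---
#include <Wire.h>

void setup() {
  Wire.begin();        // join the bus as master
  Serial.begin(9600);
}

void loop() {
  Wire.requestFrom(9, 2);  // request 2 bytes from slave #9
  while (Wire.available()) {
    Serial.println(Wire.read());
  }
  delay(500);
}

// --- Slave (runs on the second Arduino) ---
// #include <Wire.h>
// void setup() {
//   Wire.begin(9);                 // join the bus as slave #9
//   Wire.onRequest(requestEvent);  // called when the master requests data
// }
// void requestEvent() {
//   byte reading[2] = {42, 7};     // e.g. a sensor reading
//   Wire.write(reading, 2);
// }
// void loop() {}
```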
Practical .NET
One of the things that I liked about WSDL/SOAP services is that a WSDL contract for a Web Service could be used to generate a client proxy that was guaranteed to work with your Web Service. With a WSDL contract, it was easy for client-side developers to write code against the proxy that would handle the complexity of calling your services (it was also easy for Web Service developers to create test clients to exercise their services).
This meant that, when it came time to add business logic to either clients or services, developers on both sides could work in parallel. Both teams could be confident that the shared WSDL document would ensure there would be no API issues when the consumer was brought together with the service (there could, of course, be lots of other issues).
In a previous column I showed how adding support for the OpenAPI specifaction (through Swagger's Swashbuckle NuGet package) provides an easy way to generate documentation for your ASP.NET Web API project (and throws in a user interface that supports testing your Web API methods, just for fun). The thing is, as useful as that functionality is, it's not the most important part of OpenAPI.
What matters is the OpenAPI specification you can create to describe your service. It's that specification that drives the documentation and testing UI I talked about in the previous column. That specification also gives me what I used to get from WSDL: Automatic generation of client-side code that's guaranteed to work with my service.
However, in this column, rather than continue with Swashbuckle and the other Swagger tools, I'm going to look at part of the NSwag toolset. In part I'm doing this to illustrate one of the attractive aspects of the OpenAPI specification: The variety of tools that support it (Visual Studio has no built-in support, though).
And, just to continue the contrast with my previous column, in this column I'm going to work with an ASP.NET Core Web API project that handles Customer objects, rather than the ASP.NET 4.5 Web API project I used in the previous column.
Generating the Service's Specification
Using NSwag requires two components: NSwagStudio and the NSwag.AspNetCore NuGet package which generates the OpenAPI specification for your service. To enable generating that specification, after adding the NuGet package to your application, you need to open your project's Startup file, find the Configure method that accepts IApplication and IHostingEnvironment parameters, and add to the method code like this:
app.UseStaticFiles();
app.UseSwaggerUi(typeof(Startup).GetTypeInfo().Assembly, settings =>
{
settings.GeneratorSettings.DefaultPropertyNameHandling =
PropertyNameHandling.CamelCase;
});
You'll also need to add a couple of using directives to support this code, but IntelliSense will walk you through that.
After these changes, your service will have two new endpoints. With my service, running from within Visual Studio, the endpoint localhost:61399/swagger took me to the default UI page generated by my minimal NSwag implementation. There wasn't much on that page but what was there was very useful: the second endpoint that allows me to retrieve the OpenAPI specification describing my ASP.NET Core Web API service.
Generating Clients
With that URL in hand, I could now start up NSwagStudio and generate a variety of clients for my service.
To generate a client, once you start NSwagStudio, just select the Documents tab on the left side of the NSwagStudio window and, in the resulting screen, select the Swagger Specification tab. On that tab, paste the URL to your service's specification into the Swagger Specification URL textbox and click the Create Local Copy button. That causes NSwagStudio to fetch the specification and save it locally (once NSwagStudio has its own copy of the specification, you can shut down your Web API service).
Now, to generate client-side code to call your service, you just need to check off what kind of code you want (TypeScript or C#) and click the Generate Code button. You might want to also take a look at the Settings tab on the code generation window which lets you tweak the output of the code generation process (I used it to control the namespace for the C# code I generated).
Altogether, it probably takes less than 10 minutes from the time you add the NSwag NuGet package to your project to having working client-side code.
To see if the code works, I created a C# WPF application, added a class called CustomerProxy to it, copied the code out of NSwagStudio's code-generation window, and pasted the C# generated code into my class (NSwagStudio is perfectly willing to put that code into files, but it seemed just as easy to me to copy and paste it). After adding the NuGet packages for NewtonSoft.Json and a reference to System.Runtime.Serialization to my WPF project, my C# client was compiling.
If nothing else, this seems like the slickest way to ensure that the Customer objects used by my client match the definition of the Customer objects in my service. But, in addition, the code to call my service was now almost trivially simple. All I needed was this (and it worked the first time!):
CustomerProxy cp = new CustomerProxy();
ObservableCollection<Customer> allCusts = await cp.GetAllAsync();
Futures and Opportunities
It's probably possible, from the OpenAPI specification document, to generate a skeleton Web service. In addition to generating client-side code NSwagStudio will also generate a C# version of your service, targeting ASP.NET 4.5.
That suggests the specification could also be used to communicate design decisions from architects/designers to service developers. However, the specification syntax is sufficiently complex that writing one would be ... challenging. Even if someone were to create an editor that simplified creating an OpenAPI specification, I suspect it would still be easier just to write the appropriate ASP.NET controllers and let NSwag (or Swashbuckle) generate the specification for you.
It's a shame that JSON Schema (which describes messages) and the OpenAPI (which describes services) don't work together (while they say they do, they don't really). There is a proposal for including schemas (including JSON Schema) in OpenAPI that may fix this but, right now, it is just a proposal.
But, quite frankly, if I have to generate the JSON Schemas I use for testing and message validation in a separate process, it's not the end of the world. More importantly, if I can ship to my customers the TypeScript or C# code guaranteed to call my service then that's pretty cool.
Besides, if my clients think I wrote that client-side code myself ... well, there's nothing wrong with that, | https://visualstudiomagazine.com/articles/2018/08/01/defining-web-api-services-nswag.aspx | CC-MAIN-2020-40 | refinedweb | 1,130 | 50.06 |
top five irrational things I hate about XML, but wont fight too much;
1) ampersands...yes we need a few special characters but why could we not just state that an ampersand with whitespace around it is just an ampersand...I know, I know...
2) W3C XML Schema...I dont like its structure, I dont like its complexity...though I think this is more related to 'the right tool for the right scope'....W3C XML Schema says 'enterprise' to me...and along with SOAP and WS-* I would probably embark on using these 'heavy battleship' technologies and ML's when scope demanded it.
3) which leads to 'no simple schema or typing': is it so frightening to have
<somexml ss:</somexml>
which says the value enclosed is an integer, string or whatever the top 5-10 types are?...w/o inheriting from W3C Schema..just a starting point...anything....along these lines why not an example XML document which implicitly defines a structure (e.g. yes exampletron).
4) linking/selection: what a missed opportunity, why did we not just inherit simple linking in XML ?
<x href="" >
with default behavior being a simple include...of course we could expand our definition....by adding xpath|xpointer|xquery
<x href="">
why not some xpath
<x href="(//someelementtype[@test='5'])">
....finally couldnt we use the effective XQuery syntax(sorry dont know compact form)whereby something like this
doc("auction.xml")//music:record[music:remark/@xml:lang = "de"]
turns into something like;({//music:record[music:remark/@
xml:lang=de]})
lets try a more advanced xquery linking/selection example;
{
let $doc := doc("prices.xml")
for $t in distinct-values($doc//book/title)
let $p := $doc//book[title = $t]/price
return
<minprice title="{ $t }">
<price>{ min($p) }</price>
</minprice>
}
ok I am hacking up syntax all over the place e.g. note use of curly brackets, not to mention that url encoding might complain but these are simple challenges.(<minprice><price>min(//book[ti
tle={distinct-values(//book/title)}]/price])</price></minprice>)
5) namespace distraction: I like namespaces...they are a simple method of avoiding name collision which is very useful if u are working at the nexus of many ML vocabularies...to me they are benign and do not cause any consternation other then the various processors which misinterpret the spec on their usage...though this is true of any technology.
I dont understand why a namespace cant link to a meta definition which contains all possible meta data which describes the xml.. whatever you may imagine....for example (for me) ...a mixture of namespace def ala NRL (namespace routing language), dublin core for authoring/versioning information and a bit of RDF..;
<namespace>
<rdf:Description xmlns:
<dc:subject>XHTML</dc:subject>
<dc:subject></dc:subject>
<dc:subject></dc:subject>
<rdf:Description xmlns:
<dc:creator>James Fuller</dc:creator>
<dc:date>2004-05-12</dc:date>
<dc:description>An extensible html superset blah blah blah</dc:description>
</rdf:Description>
</rdf:Description>
<rules xmlns="">
<anyNamespace>
<validate schema="xhtml.rng"
useMode="#attach"/>
<validate schema="xhtml.sch"
useMode="#attach"/>
</anyNamespace>
</rules>
<!---put whatever you want here -->
</namespace>
though I could have added RDDL document...namespace should be a 'container'.. a dense rich multi-layered xml document which describes all aspects of an xml document....neat. One last pratical thing, namespaces make textual merging processes simple...such as encountered with most SCM's.
I will leave the rational things I hate for my own brooding!
Jim Fuller
Sponsored By: | http://www.xml.com/cs/user/view/cs_msg/2342 | crawl-001 | refinedweb | 578 | 50.43 |
Next week we’ll be releasing a Community Technology Preview (CTP) of Windows PowerShell V2.0. I’m going to hold off saying what is in it until next week. The purpose of this email is to set your expectations about the CTP.
Versioning
- The PowerShell CTP REPLACES PowerShell V1.0. They do not run side-by-side.
- To install the PowerShell CTP, you must first turn off and uninstall any previous version of Windows PowerShell. (Instructions for how to do this will be in the release notes).
- This is because of StrongName Binding. If we did side-by-side, no V1 cmdlets would work because they bind against the V1 PowerShell.
- In this regard, we are pursuing exactly the same approach as .NET itself. ORCAS is a replacement for .NET 2.0/3.0 not a side-by-side install. This means that we have to be super super careful about compatibility but that everything continues to work.
- PowerShell CTP is compatible with PowerShell V1.0. Your V1.0 cmdlet, providers and scripts should work without modification on the CTP.
- We try to be super careful about compatibility.
We regularly run > 1 million tests, add to that suite constantly, and constantly self-host. That doesn’t mean that we are fully compatible but it does mean that at least a million things are compatible. In the past we’ve been able to make very substantial changes and (after we ensured that the tests all ran fine) everything as worked.
- V1 compatibility is very important to us so if you encounter any regressions – please let us know ASAP. Clearly there will be some breaking changes – as we add new operators, keywords and cmdlets we are grabbing parts of the namespace that other people might use. We are just going to accept that pain as the price of progress. There are other changes that we are considering that would change existing behavior that we are fairly sure no one is using (because it doesn’t make any sense) and it would have to change to make V2 sensible. We haven’t made any of these yet but as we do, we’ll get your opinions on the changes
- PowerShell Scripts continue to use “.PS1”.
- We will continue to pursue this approach until there is a major change in the CLR or .NET frameworks which force us to go side-by-side. It is at that point that we’ll go from .PS1 to .PS2 . Until then we will stay with .PS1 and everything that runs today will continue to run in new releases.
- You might write a .PS1 script which takes advantage of a cmdlet/feature that is only available in V2. If you send this to someone that has PS V1, it will fail “in some way and at some time”. If it uses some new syntax, it will fail at parse time (with a PARSER error) but if it uses a new command line switch – it won’t fail until it tries to run that command. This is what #REQUIRES is all about. You start your script with
#REQUIRES -Version 2
And we will check version #’s and produce a precise error message
- If you have a #REQUIRES –VERSION 1 in your script, it will continue to run just find on PowerShell (V2) CTP because it is compatible with V1.
Jeffrey Snover [MSFT]
Windows Management Partner Architect
Visit the Windows PowerShell Team blog at:
Visit the Windows PowerShell ScriptCenter at:
The PowerShell 2.0 CTP is nearly here!.
The first CTP of Microsoft Powershell 2.0 is out! It doesn´t work very well with Vista RTM, so install
The Windows PowerShell Team is pleased to release the first Community Technology Preview (CTP) of Windows.
More about the very cool Powershell V2 CTP next week when my NDA expires, but before that a warning on
Wow. Just tried it. I see that #Requires is actually already in v1 so I guess my please is too late :-/
?
Next week we'll be releasing a Community Technology Preview (CTP) of Windows PowerShell V2.0. This
Next week we’ll be releasing a Community Technology Preview (CTP) of Windows PowerShell V2.0. This release | https://blogs.msdn.microsoft.com/powershell/2007/11/02/ctp-versioning/ | CC-MAIN-2017-26 | refinedweb | 702 | 73.27 |
1. Overview
In this quick tutorial, we’ll go through different approaches to finding all substrings within a given string that are palindromes. We'll also note the time complexity of each approach.
2. Brute Force Approach
In this approach, we'll simply iterate over the input string to find all the substrings. At the same time, we'll check whether the substring is a palindrome or not:
public Set<String> findAllPalindromesUsingBruteForceApproach(String input) { Set<String> palindromes = new HashSet<>(); for (int i = 0; i < input.length(); i++) { for (int j = i + 1; j <= input.length(); j++) { if (isPalindrome(input.substring(i, j))) { palindromes.add(input.substring(i, j)); } } } return palindromes; }
In the example above, we just compare the substring to its reverse to see if it's a palindrome:
private boolean isPalindrome(String input) { StringBuilder plain = new StringBuilder(input); StringBuilder reverse = plain.reverse(); return (reverse.toString()).equals(input); }
Of course, we can easily choose from several other approaches.
The time complexity of this approach is O(n^3). While this may be acceptable for small input strings, we'll need a more efficient approach if we're checking for palindromes in large volumes of text.
3. Centralization Approach
The idea in the centralization approach is to consider each character as the pivot and expand in both directions to find palindromes.
We'll only expand if the characters on the left and right side match, qualifying the string to be a palindrome. Otherwise, we continue to the next character.
Let's see a quick demonstration wherein we'll consider each character as the center of a palindrome:
public Set<String> findAllPalindromesUsingCenter(String input) { Set<String> palindromes = new HashSet<>(); for (int i = 0; i < input.length(); i++) { palindromes.addAll(findPalindromes(input, i, i + 1)); palindromes.addAll(findPalindromes(input, i, i)); } return palindromes; }
Within the loop above, we expand in both directions to get the set of all palindromes centered at each position. We'll find both even and odd length palindromes by calling the method findPalindromes twice in the loop:
private Set<String> findPalindromes(String input, int low, int high) { Set<String> result = new HashSet<>(); while (low >= 0 && high < input.length() && input.charAt(low) == input.charAt(high)) { result.add(input.substring(low, high + 1)); low--; high++; } return result; }
The time complexity of this approach is O(n^2). This is an improvement over our brute-force approach, but we can do even better, as we'll see in the next section.
4. Manacher's Algorithm
Manacher’s algorithm finds the longest palindromic substring in linear time. We'll use this algorithm to find all substrings that are palindromes.
Before we dive into the algorithm, we'll initialize a few variables.
First, we'll guard the input string with a boundary character at the beginning and end before converting the resulting string to a character array:
String formattedInput = "@" + input + "#"; char inputCharArr[] = formattedInput.toCharArray();
Then, we'll use a two-dimensional array radius with two rows — one to store the lengths of odd-length palindromes, and the other to store lengths of even-length palindromes:
int radius[][] = new int[2][input.length() + 1];
Next, we'll iterate over the input array to find the length of the palindrome centered at position i and store this length in radius[][]:
Set<String> palindromes = new HashSet<>(); int max; for (int j = 0; j <= 1; j++) { radius[j][0] = max = 0; int i = 1; while (i <= input.length()) { palindromes.add(Character.toString(inputCharArr[i])); while (inputCharArr[i - max - 1] == inputCharArr[i + j + max]) max++; radius[j][i] = max; int k = 1; while ((radius[j][i - k] != max - k) && (k < max)) { radius[j][i + k] = Math.min(radius[j][i - k], max - k); k++; } max = Math.max(max - k, 0); i += k; } }
Finally, we'll traverse through the array radius[][] to calculate the palindromic substrings centered at each position:
for (int i = 1; i <= input.length(); i++) { for (int j = 0; j <= 1; j++) { for (max = radius[j][i]; max > 0; max--) { palindromes.add(input.substring(i - max - 1, max + j + i - 1)); } } }
The time complexity of this approach is O(n).
5. Conclusion
In this quick article, we discussed the time complexities of different approaches to finding substrings that are palindromes.
As always, the full source code of the examples is available over on GitHub. | https://www.baeldung.com/java-palindrome-substrings | CC-MAIN-2021-31 | refinedweb | 717 | 54.12 |
Contents: C# & VB Session | Ch9 Live Anders | Ch9 Live Async Team | Async CTP | Roslyn | F# Session | Languages Panel | Podcasts | Booth | Ask The Experts | Meet Anders and Don
Professional Developers Conference 10 took place October 28-29 on Microsoft campus in Redmond, WA. At this event, we shared Microsoft’s future directions for C#, Visual Basic and F#, and produced a lot of great content! Here’s a roll-up of the materials.
Anders’ talk was the #1 most-viewed and #1 highest rated breakout session at the event! This was the session where we announced the Async CTP and gave an update on project codename “Roslyn”. You can watch the video on-demand here. Learn about all the improvements planned to improve asynchronous programming in the next version of C# and Visual Basic, with no more callbacks!
Twitter Buzz:
· maryjofoley: Anders Hejlsberg: The main emphasis in the next versions of C# and VB will be asynchronous programming · bauketeerenstra: YES! Good move! Anders Hejlsberg: The main emphasis in the next versions of C# and VB will be asynchronous programming #pdc10 #yam · jangray: Anders Hejlsberg's Future of C# and VB talk intros async methods and await operator that make async programming easier for mortals #pdc10 · vonlochow: Great presentation from Anders at #pdc10 yesterday. Simplified async handling ftw! · gioindahouz: C#5.0 Parallelism asynchronous with AsyncCTP and Anders Hejlsberg ! #Pdc10 · nahojd: The new async features that Anders Hjelsberg talked about seems really cool! Downloading the VS async CTP now! · Jmccliment: Watching Anders Hejlsberg's #PDC10 talk on The Future of C# and Visaul Basic. Async extensions look *amazing*. · timjroberts: I'm liking async/await in future C#, but I'd also like Anders at his team to look at formalizing concurrency too (Axum?). #pdc10 · Bennedik: Oop, Anders did it again. Masterplan revealed, and it is beautiful. await C# v.next #PDC10 · obelink: Think synchron, but write asynchronous software... that's the lesson from the session of Anders here at #pdc10 · jmccliment: Anders Hejlsberg: “It's delegates all the way down.” (18:56 in The Future of C# and Visual Basic). Nice. #pdc10 · brian_henderson: via Anders: Compiler as a Service (CaaS): · airdo: Finally had chance to hear Anders Hejlsberg, chief architect of C# language. My fears of possible language degradation are gone. · MostafaElzoghbi: very structured talk by anders hejlsberg for furture of Csharp : Thanks a lot for ur effrots #pdc10 · ramamurthyk: Watching the future of C# and VB. #languages #pdc10. Great presentation from Anders. · HHreinsson: PDC10 C# goes async. Great demo on whats to come in the future in .NET from Anders Hejlsberg. Thanks for pizza, Microsoft. · brian_henderson: highlight of the day.. 
was brilliant Anders Hejlsberg talk: #pdc10 · estep: #pdc10 Cool! Anders' session is available already! (Yes, I'm an Anders Hejllsberg fanboy. Sue me :) ) · chevenix: Anders Hejlsberg asynchronously saved the #PDC10! · DennisCode: Got to meet Anders Hejlsberg after his session at #PDC10 · thomaslbaker One of my top 5 things from this PDC (Anders Hejlsberg & .net built-in Async) Find his session, for more detail. #pdc10
This session was an hour of Q&A with Anders, while being interviewed by Charles Torre and attendees on Twitter. You can watch the video here.
· Shawty_ds: #ukpdc10 watching anders q&a, rapid fire questions interview on ch9 live from #pdc10 · estep: #pdc10 Watching Anders on @ch9live . It's like watching Edison. · jasonoffutt: Anders Hejlsberg on @ch9live talk about Compiler As A Service. Preparing to have my mind blown when I catch his session. · eoszak: Listening to Anders Hejsberg talk on async VS and caas...awesome stuff! #PDC10 @ch9live · MostafaElzoghbi: really smart discussion about how async is working in c# thanks anders · Kristijanf: Watching Live Q&A with Anders Hejlsberg about async programming. Just awesome. #PDC10 #ch9live · abhi2434: Anders Says: No aspect oriented programming for now · postsharp: Relieved to hear from Anders Hejlsberg himself that MS is not entering AOP so PostSharp will confidently stay in business :). · vstuart: I love being able to tweet questions for #pdc10. Anders just answered my question live on @ch9live. Yeow! · Marcjacobi: My question was answered by Anders Hejlrberg! w00t!
In this session, Charles Torre interviewed the Async design team with questions from the audience about the Async CTP. You can watch the video here.
The Async CTP has generated a lot of buzz. We’ve totaled over one million touch points with developers around the Async CTP, between PDC videos, Channel9 videos, Blog views, whitepaper downloads, web page views, and more. The Async CTP WebPage is a top visited page on the MSDN Dev Centers. More details on Async CTP content below.
· Making Asynchronous Programming Easy by S. Somasegar · Jason Zander: Tutorial: Pic Viewer Revisited on the Async CTP by Jason Zander · What’s Next in C#? Get Ready for Async! by C# FAQ · Announcing the Async CTP for Visual Basic (and also Iterators!) by VB Team · Async CTP Series by Lucian Wischik · Async CTP Series by Stephen Toub (7 posts!) · Async CTP Series by Eric Lippert (0.5 million views!)
· Ch9: Anders Hejlsberg: Introducing Async – Simplifying Asynchronous Programming (117,000 views) · Ch9: Mads Torgersen: Inside C# Async (63,000 views) · Ch9: Lucian Wischik: Inside VB.NET Async and Customizing Awaitable Types (47,000 views) · Ch9: Stephen Toub: Task-Based Asynchrony with Async (50,700 views) · Ch9: Visual Studio Async: Meet the team (78,000 views) · DevExpress Interview: Mads Torgersen on Async
· 101 C#/VB Samples for the Async CTP
· Async Whitepaper · C# Language Specification for Asynchronous Functions · Visual Basic Language Specification for Asynchronous Functions and Iterators · Walkthrough: Getting Started with Async · Walkthrough: Iterators in Visual Basic · Task-Based Asynchronous Pattern Overview · TPL Dataflow
· C# and VB - towards joining F# with asynchronous programming support by Don Syme · C# 5 Blog Series by Jon Skeet · Visual Studio Async CTP for the rest of us… by Michael Crump (CTP walkthrough + install experience) · What is Visual Studio Async? by Gunnar Peipman · Rx and the Async CTP · Visual Studio Async CTP: The C# Perspective · C# 5.0 Asynchrony – A Simple Intro and A Quick Look at async/await Concepts in the Async CTP · Visual Studio Async CTP – An Analysis · Async Series by Peter Richie
· Sneak Peak: Asynchronous Syntax for Visual Basic and C# - Jonathan Allen, 10-28-2010, InfoQ · Microsoft Launches Visual Studio Async CTP - Gaston Hillar, 10-28-2010, Dr. Dobb’s · Microsoft hails async programming for Visual Basic, C# - Paul Krill, 10-29-2010, InfoWorld · Download Async CTP for Visual Studio 2010, Easy Asynchronous Programming – Marius Oiaga, 11-01-2010, Softpedia · Visual Studio async CTP with Silverlight – Nikos Printezis, 01-06-2011, DZone
See questions & comments at the Async CTP Forum on MSDN (162 threads)…
In the last section of the C# & Visual Basic Futures session, Anders gave an update on project codename “Roslyn”, also known as compiler as a service. “Roslyn” is a long lead project in which we’re re-writing the C# & Visual Basic compilers in C# & Visual Basic, respectively. The new managed APIs will allow developers to query the compiler for information. We are looking forward to the new scenarios that this will enable, including a Read-Eval-Print-Loop (REPL), and a host of rich IDE features.
· Svendb: CAAS or Compiler As A Service. Available on Azure soon? #pdc2010 · cnurse: The "Compiler As A Service" prototype that Anders demo'd allows Paste As VB or Paste As C# in VS- cool - #pdc2010 · Wilka: @SteveMallam the compiler as a service stuff looks impressive as well, but I'll prob not use that. Will be cool to see how others use it · joefeser: RT @philiplaureano: Wow. I just heard about the "Compiler as a service" syntax tree features in C# 5.0 today. That's practically LinFu v4.0 · philiplaureano: Wow. I just heard about the "Compiler as a service" syntax tree features in C# 5.0 today. That's practically LinFu v4.0 material. Yummy. · oryBecker: @mikehadlow Compiler as a service seems like a misnomer. The example given by Anders has been possible using the DXCore for ages. :) · TweetDeck mikehadlow: Hmm, so compiler as a service is NOT coming with C#5, but some future version? · mikehadlow: @jagregory The compiler as a service stuff looks really cool, but I haven't got into the details yet. · GaryMcAllister: "Compiler as a service!"..... Yay! · roastedamoeba: Wow - love the Compiler-as-a-Service stuff. That would be perfect for the write-GPU-shaders-in-C# project that I've had brewing for a while. · defeated: catching up on Microsoft #pdc10 video on Future of C# - async/await looks nice! Compiler-as-a-Service was cool 6mo+ ago when Mono did it ;) · PawelPabich: @kkozmic compiler as as a service, I believe it's on the last slide about 2 hours ago via web · dotjosh: @ch9live Anders: How is the new compiler as a service different than the existing System.CodeDom? · patbqc: @ch9live Like PostSharp, AOP might be a powerfull framework if done correctly. Please, help to fit it in compiler as a service about · tacoman667: @ch9live Does the compiler as a service begin to introduce the compiler into more of an interpreter? 
· nickriggs: watching anders talk about compiler as a service: · MostafaElzoghbi: Compiler as a service a new feature Said Anders #pdc10 about 3 hours ago via TweetDeck · Reply · View Tweet · tacoman667: @ch9live What are the strengths to having a compiler as a service? · johnmcbride: @ch9live Do you think you will have a CTP of the compiler as a service before .NET 5 beta's? :) · raybooysen: anders says no to AOP in the compiler. Surely you'll be able to do something with the compiler as a service anyway? · ecampidoglio: The new managed C# and VB compilers will have object models that allow programs to interact with them, a.k.a "Compiler As A Service" · jasonoffutt: Anders Hejlsberg on @ch9live talk about Compiler As A Service. Preparing to have my mind blown when I catch his session. · raphaeljavaux: #PDC10 C#5.0 : Async callable blocks and compiller as a service (integrated compiler API) · paul_the_kelly: #UKPDC10 Anders just demonstrated C# to VB conversion using Compiler as a Service. Neat. The COBOLATOR is still cooler though! · jmix90: RT @fredrikn: Async. programming and Compiler as a Service, meta programming etc, the future of C# and VB.Net (cont) · JeffreySax: Compiler as a service opens up many fascinating possibilities. · bhrnjica: RT @ChristianNagel copy C#, paste as VB - compiler as a service #pdc10# · rystsov: I wish Hejlsberg said that a new async stuff was implemented easily with a compiler as a service concept, but... #pdc2010 · rystsov: It seems that future C# compiler as a service will have a very poor functionality,just a map from ast to ast, can't add new stuff like await · brooklynDev: I cannot wait to get home & play w/ the new Visual Studio Async CTP. Also, that Compiler as a service looks REAL interesting..AOP baby! · amazedsaint: I'm a bit sad that there is no much news about the internals of Compiler as a Service in C# 5.0 · rneeft: Compiler as a Service: codenamed "Roslyn"?? 
#PDC2010 · johnmcbride: Was hoping for more info (and possibly a CTP) of the Compiler as a service.... A Little bummed #PDC · benoitdion: Compiler As a Service to bring AOP right in Visual Studio? Now this is getting interesting! #pdc10 · _nilotpaldas: Love the Compiler as a Service (CAAS) bit #pdc10 · mulbud: With Compiler as a Service, Anders demoing a way to copy C# code and pasting it as VB.NET code - pretty cool! · ChristianNagel: copy C#, paste as VB - compiler as a service #pdc10 · josketres: Compiler as a Service?????? #pdc2010 · danvanderboom: Compiler as a service in C# is being developed, but is a post-vNext feature. Perhaps the v6 theme for C# will be metaprogramming. · brian_henderson: via Anders: Compiler as a Service (CaaS): #pdc10 · paulfallon: RT @ChristianNagel: namespace Roslyn.Services Compiler as a service #pdc10 · @jasonbock Anders is giving a "compiler as a service" update now ... · ChristianNagel: namespace Roslyn.Services Compiler as a service #pdc10 · mulbud: Ah, Anders going over Compiler as a Service - one feature I sorely missed from what was promised in .NET 4.0 (promised in 2008) · rneeft: "Compiler as a Service" update #PDC2010 · ChristianNagel: compiler as a service work in progress #pdc10 · johnmcbride: Compiler as a service!! Been waiting for this for a while.... #PDC · fatihboy: C# vNext Focus: Asynchronous Programming, Compiler as a Service about 4 hours ago via TweetDeck · Reply · View Tweet · raphaeljavaux: Hehe ! Looks like Microsoft take inspiration from Mono : C#5 will have compiler as a service (cc @migueldeicaza)
F# continues to spur great interest! Don’s talk was the #7 rated talk at PDC 10 and #15 most viewed. This was the session where we announced F# Type Providers. You can watch the video here.
Twitter Buzz: · Nielsberglund: Another really interesting talk at #PDC10 is Don Syme's: “The Future of F#: Data and Services”! Maybe something about Axum. #fsharp · GeertVL: Second day at #pdc10. Afternoon going to have very interesting sessions. Don Syme and Herb Sutter. · r_keith_hill: #PDC10 - Don Syme talk on future of F# - giving props to PowerShell for bringing contextual info into the prog. environment. · Jangray: Don Syme/F#: important new work on *strongly typed* access to all the world's data & services withType Providers #pdc10 · Talbott: Type Providers = the future of #fsharp – “The world is infomation rich - language needs to be info-rich too!” @dsyme #PDC10 · JustinAngel: W00t! Dom Syme just demoed strongly typing the entire Freebase in F#! that is awesome. #pdc10 · talbott: #PDC10 @dsyme - demonstrating WebData (typed information from the web) full intellisense off Football, Finanical data, weather, etc... · lukevenediger: Don Syme talking about #fsharp at #pdc10 · Future of F# at #PDC10 by @dsyme at · pontemonti: Wow, they're going to use magic in F#! #awesome #pdc10 #malmo · talbott: #PDC10 @dsyme presenting -> Net is information rich - languages are information poor - Proposition: We can fix this - magic - Type Providers · mikebluestein: managed to finish another chapter on my MonoTouch book tonight in spite of the fact that watching Don Syme's F# #pdc10 talk blew my mind :) · mikebluestein: if you haven't seen Don Syme's F# futures talk from #pdc10, you should · cbilson: I really like where f# is going RT @sforkmann: Don Syme's #fsharp #pdc10 talk blew my mind · sforkmann: Don Syme's #fsharp #pdc10 talk blew my mind - Type Providers look really awesome. This will solve tons of my problems. · EdgarSanchez: Viewing Don Syme #pdc10 talk: The Future of F# #fsharp · codingforfood: Ehm, problem is not writing an ITypeProvider, but to get it integrated like in the #FSharp #pdc10 talk by @dsyme -any preview code soon? 
#in · JustinAngel: I love the future direction for F# strongly typing data sets through compile time type providers. This is awesome. #pdc10 · R_keith_hill: #PDC10 - F# next to have a strongly typed WMI provider using new TypeProvider functionality - paying homage to PowerShell? :-) · Pontemonti: Last #pdc10 session... F# time! · Thomasknudson: So stoked about F# right now!! #pdc2010 #pdc10 · mj1028: The future of F# is called Type Providers. Will Scala follow? (silverlight required) #F# #scala #pdc10
Lucian Wischik and I recorded a podcast with Jesse Liberty on his show, Yet Another Podcast. We discussed the Async CTP, Productivity Power Tools, and Visual Basic for Windows Phone. Mads and Lucian produced another podcast about Async on SkillsMatter a couple of few weeks later.
We had a Visual Studio booth and used this as a place to follow up for additional information outside of sessions. Team members Avner Aharoni, Alex Turner, Karen Liu, Mads Torgersen, Lucian Wischik, and Kevin Pilch-Bisson spent time at the booth, demoing new features and answering questions.
Ask The Experts (ATE) was held both days at lunch. Over 45 team VS Pro team members participated in the event, and had some great conversations attendees. This included 10 tables across C#, Visual Basic, F#, IDE, and Extensibility.
On Friday at Ask The Experts, we also advertised “Meet Anders” and “Meet Don” tables. These were a big hit and had a great turnout. Here’s a quote from one attendee:
@Edcomingatyou: I just had lunch with Anders Hejlsberg. Pardon my language but how f#@%ing cool is that? #pdc10 | http://blogs.msdn.com/b/lisa/archive/2011/03/22/pdc10-future-directions-for-c-visual-basic-and-f.aspx | CC-MAIN-2015-32 | refinedweb | 2,683 | 63.8 |
Created on 2015-01-28 22:13 by takluyver, last changed 2016-05-18 05:14 by ncoghlan. This issue is now closed.
This follows on from the python-ideas thread starting here:
subprocess gains:
- A CompletedProcess class representing a process that has finished, with attributes args, returncode, stdout and stderr
- A run() function which runs a process to completion and returns a CompletedProcess instance, aiming to unify the functionality of call, check_call and check_output
- CalledProcessError and TimeoutExceeded now have a stderr attribute, to avoid throwing away potentially relevant information.
Things I'm not sure about:
1. Should run() capture stdout/stderr by default? I opted not to, for consistency with Popen and with shells.
2. I gave run() a check_returncode parameter, but it feels quite a long name for a parameter. Is 'check' clear enough to use as the parameter name?
3. Popen has an 'args' attribute, while CalledProcessError and TimeoutExpired have 'cmd'. CompletedProcess sits between those cases, so which name should it use? For now, it's args.
Another question: With this patch, CalledProcessError and TimeoutExceeded exceptions now have attributes called output and stderr. It would seem less surprising for output to be called stdout, but we can't break existing code that relies on the output attribute.
Using properties, either stdout or output could be made an alias for the other, so both names work. Is this desirable?
A 1) Opting not to capture by default is good. Let people explicitly request that.
A 2) "check" seems like a reasonable parameter name for the "should i raise if rc != 0" bool. I don't have any other good bikeshed name suggestions.
A 3) Calling it args the same way Popen does is consistent. That the attribute on the exceptions is 'cmd' is a bit of an old wart but seems reasonable. Neither the name 'args' or 'cmd' is actually good for any use in subprocess as it is already an unfortunately multi-typed parameter. It can either be a string or it can be a sequence of strings. The documentation is not clear about what type(s) 'cmd' may be.
A Another) Now that they gain a stderr attribute, having a corresponding stdout one would make sense. Implement it as a property and document it with a versionadded 3.5 as usual.
I haven't checked the code, but does check_output and friends combine stdout and stderr when ouput=PIPE?
Updated patch following Gregory's suggestions:
- The check_returncode parameter is now called check. The method on CompletedProcess is still check_returncode, though.
- Clarified the docs about args
- CalledProcessError and TimeoutExceeded gain a stdout property as an alias of output
Ethan: to combine stdout and stderr in check_output, you need to pass stderr=subprocess.STDOUT - it doesn't assume you want that.
I did consider having a simplified interface so you could pass e.g. capture='combine', or capture='stdout', but I don't think the brevity is worth the loss of flexibility.
Ethan: check_output combines them when stdout=subprocess.STDOUT is passed ().
Never pass stdout=PIPE or stderr= PIPE to call() or check*() methods as
that will lead to a deadlock when a pipe buffer fills up. check_output()
won't even allow you pass in stdout as it needs to set that to PIPE
internally, but you could still do the wrong thing and pass stderr=PIPE
without it warning you.
the documentation tells people not to do this. i don't recall why we
haven't made it warn or raise when someone tries. (but that should be a
separate issue/change)
On Wed Jan 28 2015 at 3:30:59 PM Ethan Furman <report@bugs.python.org>
wrote:
>
> Ethan Furman added the comment:
>
> I haven't checked the code, but does check_output and friends combine
> stdout and stderr when ouput=PIPE?
>
> ----------
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
>
Maybe you don’t want to touch the implementation of the “older high-level API” for fear of subtly breaking something, but for clarification, and perhaps documentation, would the old functions now be equivalent to this?
def call(***):
# Verify PIPE not in (stdout, stderr) if needed
return run(***).returncode
def check_call(***):
# Verify PIPE not in (stdout, stderr) if needed
run(***, check=True)
def check_output(***):
# Verify stderr != PIPE if needed
return run(***, check=True, stdout=PIPE)
If they are largely equivalent, perhaps simplify the documentation of them in terms of run(), and move them closer to the run() documentation.
Is it worth making the CalledProcessError exception a subclass of CompletedProcess? They seem to be basically storing the same information.
Yep, they are pretty much equivalent to those, except:
- check_call has a 'return 0' if it succeeds
- add '.stdout' to the end of the expression for check_output
I'll work on documenting the trio in those terms.
If people want, some/all of the trio could also be implemented on top of run(). check_output() would be the most likely candidate for this, since I copied that code to create run(). I'd probably leave call and check_call as separate implementations to avoid subtle bugs, though.
Sharing inheritance between CalledProcessError and CompletedProcess: That would mean that either CompletedProcess is an exception class, even though it's not used as such, or CalledProcessError uses multiple inheritance. I think duplicating a few attributes is preferable to having to think about multiple inheritance, especially since the names aren't all the same (cmd vs args, output vs stdout).
It’s okay to leave them as independent classes, if you don’t want multiple inheritance. I was just putting the idea out there. It is a similar pattern to the HTTPError exception and HTTPResponse return value for urlopen().
Third version of the patch (subprocess_run3):
- Simplifies the documentation of the trio (call, check_call, check_output) to describe them in terms of the equivalent run() call.
- Remove a warning about using PIPE with check_output - I believe this was already incorrect, since check_output uses .communicate() internally, it shouldn't have deadlock issues.
- Replace the implementation of check_output() with a call to run().
I didn't reimplement call or check_call - as previously discussed, they are more different from the code in run(), so subtly breaking things is more possible. They are also simpler.
Would anyone like to do further review of this - or commit it ;-) ?
I don't think anyone has objected to the concept since I brought it up on python-ideas, but if anyone is -1, please say so.
Have you seen the code review comments on the Rietveld, <>? (Maybe check spam emails.) Many of the comments from the earlier patches still stand. In particular, I would like to see the “input” default value addressed, at least for the new run() function, if not the old check_output() function.
Aha, I hadn't seen any of those. They had indeed been caught by the spam filter. I'll look over them now.
Fourth version of patch, responding to review comments on Rietveld. The major changes are:
- Eliminated the corner case when passing input=None to run() - now it's a real default parameter. Added a shim in check_output to keep it behaving the old way in case anything is relying on it, but I didn't document it.
- The docstring of run() was shortened quite a bit by removing the examples.
- Added a whatsnew entry
I also made various minor fixes - thanks to everyone who found them.
A few observations in passing. I beg your pardon for not commenting after a more in depth study of the issue, but as someone that's written and managed several subprocess module front-ends, my general observations seem applicable.
subprocess needs easier and more robust ways of managing input and output streams
subprocess should have easier ways of managing input: file streams are fine, but plain strings would also be nice
for string commands, shell should always be true. for list/Tupperware commands, shell should be false. in fact you'll get an error if you don't ensure this. instead, just have what is passed key execution (for windows, I have no idea. I'm lucky enough not to write windows software these days)
subprocess should always terminate processes on program exit robustly (unless asked not too). I always have a hard time figuring out how to get processes to terminate, and how to have them not to. I realize POSIX is black magic, to some degree.
I'm attaching a far from perfect front end that I currently use for reference
Jeff: This makes it somewhat easier to handle input and output as strings instead of streams. Most of the functionality was already there, but this makes it more broadly useful. It doesn't especially address your other points, but I'm not aiming to completely overhaul subprocess.
> for string commands, shell should always be true. for list/Tupperware commands, shell should be false
I wondered why this is not the case before, but on Windows a subprocess is actually launched by a string, not a list. And on POSIX, a string without shell=True is interpreted like a one-element list, so you can do e.g. Popen('ls') instead of Popen(['ls']). Changing that would probably break backwards compatibility in unexpected ways.
string vs list: see issue 6760 for some background. Yes, I think it is an API bug, but there is no consensus for fixing it (it would require a deprecation period).
Jeff: in general your points to do not seem to be apropos to this particular proposed enhancement, but are instead addressing other aspects of subprocess and should be dealt with in other targeted issues.
Can I interest any of you in further review? I think I have responded to all comments so far. Thanks!
Is there anything further I should be doing for this?
One thing that just popped into my mind that I don’t think has been discussed: The patch adds the new run() function to subprocess.__all__, but the CompletedProcess class is still missing. Was that an oversight or a conscious decision?
Thanks, that was an oversight. Patch 5 adds CompletedProcess to __all__.
I am still keen for this to move forwards. I am at PyCon if anyone wants to discuss it in person.
I'm at pycon as well, we can get this taken care of here. :)
Great! I'm free after my IPython tutorial this afternoon, all of tomorrow, and I'm around for the sprints.
6a following in-person review with Gregory:
- Reapplied to the updated codebase.
- Docs: mention the older functions near the top, because they'll still be important for some time.
- Docs: Be explicit that combined stdout/stderr goes in stdout attribute.
- Various improvements to code style
New changeset f0a00ee094ff by Gregory P. Smith in branch 'default':
Add a subprocess.run() function than returns a CalledProcess instance for a
thanks! i'll close this later after some buildbot runs and any post-commit reviews.
I expect this can be closed now, unless there's some post-commit review somewhere that needs addressing?
This change has made the subprocess docs intimidating and unapproachable again - this is a *LOWER* level swiss-army knife API than the 3 high level convenience functions.
I've filed to suggest changing the way this is documented to position run() as a mid-tier API that's more flexible than the high level API, but still more convenient than accessing subprocess.Popen directly. | https://bugs.python.org/issue23342 | CC-MAIN-2020-16 | refinedweb | 1,901 | 64.81 |
How to contribute
Getting the latest source code
The OpenSesame source code is hosted on GitHub:
GitHub provides a straightforward way for collaborating on a project. If you're not familiar with GitHub, you may want to take a look at their help site:.
The best (and easiest) way to contribute code is as follows:
- Create a GitHub account.
- Create a fork of OpenSesame.
- Modify your fork.
- Send a 'pull request', asking for your changes to be merged back into the main repository.
Each major version of OpenSesame has its own branch. For example, the
ising branch contains the code for 3.0 Interactive Ising. The
master branch contains the code for the latest stable release.
Developing a plugin or extension
For plugin or extension development, see:
Translate the user interface
For instructions on how to translate the user interface, see:
Coding-style guidelines
The goal is to maintain a readable and consistent code base. Therefore, please consider the following style guidelines when contributing code:
Exception handling
Exceptions should be handled via the
libopensesame.exceptions.osexception class. For example:
from libopensesame.exceptions import osexception raise osexception(u'An error occurred')
Printing debug output
Debug output should be handled via
libopensesame.debug.msg(), and is shown only when OpenSesame is started with the
--debug command-line argument. For example:
from libopensesame import debug debug.msg(u'This will be shown only in debug mode')
Indentation
Indentation should be tab based. This is the most important style guideline of all, because mixed indentation causes trouble and is time consuming to correct.
Names, doc-strings, and line wrapping
- Names should be lower case, with words separated by underscorses.
- Each function should be accompanied by an informative doc string, of the format shown below. If a doc-string is redundant, for example, because a function overrides another function that has a doc-string, please indicate where the full doc-string can be found.
- Please do not have lines of code extend beyond 79 characters (where a tab counts as 4 characters), with the exception of long strings that are awkward to break up.
def a_function(argument, keyword=None): """ desc: This is a YAMLDoc-style docstring, which allows for a full specification of arguments. See also <>. arguments: argument: This is an argument. keywords: keyword: This is a keyword. returns: This function returns some values. """ pass def a_simple_function(): """This is a simple doc-string""" pass
Writing Python 2 and 3 compatible code
Code should be compatible with Python 2.7 and 3.4 and above. To make it easer to write Python 2 and 3 compatible code, a few tricks are included in the
py3compat module, which should always be imported in your script like so:
from libopensesame.py3compat import *
This module:
- Remaps the Python-2
strand
unicodetypes to the (roughly) equivalent Python-3
bytesand
strtypes. Therefore you should code with
strobjects in most cases and
bytesobject in special cases.
- Adds the following functions:
safe_decode(s, enc='utf-8', errors='strict')turns any object into a
strobject
safe_encode(s, enc='utf-8', errors='strict')turns any object into a
bytesobject
- Adds a
py3variable, which is
Truewhen running on Python 3 and
Falsewhen running on Python 2.
- Adds a
basestrobject when running on Python 3.
Unicode and strings
Assure that all functionality is Unicode safe. For new code, use only Unicode strings internally.
my_value = 'a string' # not preferred my_value = u'a string' # preferred
For more information, see:
Other
With the exception of the guidelines shown above, please adhere to the following standard: | https://osdoc.cogsci.nl/3.3/dev/howtocontribute/ | CC-MAIN-2021-25 | refinedweb | 585 | 56.45 |
trouble understanding pixel access
I am just getting into opencv and set up a demo just to get familiar.
I made a function to get and set pixel values, but it doesn't behave how I would expect. In this case I converted the Mat to RGB, but the only channel I can seem to effect is img.[1], which does alter the green component. Anything else, however doesn't work. I converted a mask Mat which is the output of an edge detector to RGB, and then also tried converting that to HSV and changing the img.[0] hue value, but that doesn't seem to do anything.
I am asking because even though my application is just experimenting, it seems like I am missing something about how to work with colors, and how values are contained in MAts.
For instance, if I just wanted to scan through the edge detector output and change the white pixels to random hue values how would I do that?
#include <opencv2/core/core.hpp> #include <opencv2/highgui/highgui.hpp> #include "opencv2/video.hpp" #include <windows.h> #include <iostream> using namespace cv; using namespace std; inline void processBgs(const Ptr<BackgroundSubtractor>& p, Mat& f, Mat& fg, double rate){ p->apply(f, fg, rate); } inline void processCanny(Mat& img, Mat& edges, int lowthresh, int hithresh, int sobelsize){ Canny(img, edges, lowthresh, hithresh, sobelsize); } // I'm attempting to change pixels to any color here void rainbow(Mat& img){ for(int i=0; i<img.rows; i++){ for(int j=0; j<img.cols; j++){ Vec3b & color = img.at<Vec3b>(i,j); color[0] = 200; } } } int main(int argc, char** argv) { VideoCapture cap(0); namedWindow( "Display window", CV_WINDOW_AUTOSIZE ); HWND hwnd = (HWND)cvGetWindowHandle("Display window"); Mat frame; Mat fgMask; Ptr<BackgroundSubtractor> bgs = createBackgroundSubtractorMOG2(); cap >> frame; cout << frame.type(); imshow("Display window", frame); while(IsWindowVisible(hwnd)) { cap >> frame; if(frame.empty()){ break; } processBgs(bgs, frame, fgMask, 0.01); processCanny(fgMask, fgMask, 50, 150, 3); cvtColor(fgMask, fgMask, CV_GRAY2RGB); // cvtColor(fgMask, fgMask, CV_RGB2HSV); rainbow(fgMask); imshow("Display window", fgMask); if(cvWaitKey(5) == 27){ break; } } waitKey(0); return 0; } | https://answers.opencv.org/question/181974/trouble-understanding-pixel-access/?answer=181979 | CC-MAIN-2019-35 | refinedweb | 349 | 55.84 |
BEIJING, Oct. 1 (Xinhuanet) --More than 3,000 textile
companies have been selected to share in 18 per cent of China's textile export
quotas to the European Union next year, according to the Ministry of Commerce.
It is the result of the first online bidding for next year's quotas set by the European Union, which started on
Tuesday and finished on Friday.
Of the total of 5,284 companies that bid, 3,385 won
contracts.
The bidding move was made upon the request of many
textile manufacturers in China for a more transparent and fairer process. It
will also help better manage exporters' performance.
Another 12 per cent of the total quotas will go
through the bidding process next time.
A special committee under the ministry has been set
up to take charge of bid invitations.
The majority of the export quotas, 70 per cent, are
allocated based on textile dealers' shipments from the previous year.
The partial bid process is aimed at preventing a
repeat of last month's stockpile fiasco.
Up to 80 million garments started piling up in
European warehouses and customs checkpoints after Chinese companies used up
their quotas.
Commerce Minister Bo Xilai and EU Trade Commissioner
Peter Mandelson had to sign another agreement in Beijing on September 5 to allow
the release of the Chinese garments. After a 10-hour closed-door discussion in
Shanghai in May, the two sides reached a consensus to set the export quotas
until 2007.
In another development, China announced low-tariff
import quotas for sugar at 1.945 million tons for 2006 the same as 2005 as part
of its commitment as a member of the World Trade Organization.
The State-owned firms would hold 70 per cent of the
import quotas while 30 per cent would be issued to private firms, the Commerce
Ministry said.
Import quotas for wool were set at 287,000 tons for
2006, the ministry said. Enditem
(Source: China Daily) | http://news.xinhuanet.com/english/2005-10/01/content_3571599.htm | crawl-002 | refinedweb | 328 | 61.06 |
HOME HELP PREFERENCES
SearchSubjectsFromDates
My understanding of your problem from the two emails is that you have
some images (of notices) and some corresponding OCR text for each image.
Greenstone does not handle this well by default but there are several
options you can try.
1. Build a collection on only the image files, and add the text as
metadata to each image. You can do this manually using the Librarian
Interface: drag all the images into a new collection and add
Date/Subject/Text metadata to each one. This is probably not appropriate
for large collections.
Alternatively you can put the metadata into a metadata.xml file - the
format is described in Section 2.1 in the developers guide.
For this case it is possible to write a simple script that takes each
file of OCR text and adds it as metadata to the appropriate image in the
metadata.xml file.
If you want to have simple searching over the text as a whole, you just
need to create one metadata element, eg NoticeText, and create the index
on that. If you want fielded searching, eg by Date or Subject, you will
need to create separate metadata items for each field.
2. You can also use the html files you have created. Build the
collection on the html files, - the image will be kept as an associated
file, and the text will be indexed. When you display the document you
then need to change the DocumentText format statement to only display
the image.
There are two ways to do this:
A. Modify the HTML plugin to create a new metadata element eg
NoticeImage whose value is the image name. Then use [NoticeImage] in
your format statement.
B. Put each HTML file and its image into a separate directory in the
import folder. Give all the images the same name eg notice.jpg.
Then the format statement can use
/gsdl/collect/gsarch/index/assoc/[assocfilepath]/notice.jpg
to display the image
I hope this has given you some ideas of things to try.
Regards,
Katherine Don
Rajesh Jha wrote:
> dear sir,
> I am Rajesh Jha working as a trainee in C-DOT on the
> project digital library.I want to add one feature that
> is enotice.I have made some html pages which includes
> the jpg image of the notices and the OCR text in it,
>
> Is it possible to make a collection of such html pages
> so that one can search the page according to the
> dates, text or subject in the OCR text and view the
> notice image in the html page without the text
> embedded in it.
>
also
>
> Hello all,
>
>
> I have a collection of images with descriptions for
> each is it possible to make collection how?.
>
>
> | http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0gsarch--00-0----0-10-0---0---0direct-10---4-----dfr--0-1l--11-en-50---20-about-Frans+van+'t+Hof--00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&a=d&c=gsarch&cl=CL1.17.2&d=40846913-8070902-cs-waikato-ac-nz | CC-MAIN-2013-48 | refinedweb | 460 | 61.87 |
speaker is driven through pin D13, the
Arduino’s built-in LED will also
momentarily flash during sound
annunciation. This serves as an
additional visual cue that the robot is
communicating to you.
#define NOTE_B3 247
#define NOTE_C3 131
#include <Servo.h>
#include "pitches.h"
For tune making, I’m using a
variation of example code provided on
the
arduino.cc website for playing
melodies. I’ve simplified it a bit, and
the routines are encapsulated in a
simple function labeled make Tone.
Servo servoLeft; // Define left servo
Servo servoRight; // Define right servo
// etc.
// Rest of Listing 2 is the same
This function is called using three
parameters:
and so on. Also notice these are compiler #define
definitions, which means they sort-of look like variables but
they aren’t actual variables. They don’t take up any of the
Arduino’s memory. Rather, when the sketch is compiled,
the Arduino IDE (integrated development environment)
software substitutes the constant name — such as NOTE_B3
— which is its actual value; in this case, 247. That number
corresponds to a frequency of 247 Hz (cycles per second).
As a point of reference, a concert A pitch is denoted
as NOTE_A4. It’s the A right below middle C on a piano
keyboard. This pitch has a (more or less) standardized
frequency of 440 Hz.
You can experiment with more tones using Listing 3.
It’s the same as Listing 2 but rather than coding the
pitches in the main sketch, it uses an external file called
pitches.h. This file is included at the article link for all the
sketches for this installment.
To alter a tone, just copy its constant name (NOTE_F7,
or NOTE_D6) and paste it into your sketch. Be sure to place
the pitches.h file in the same folder as your Listing 3
(
ardbot_sound_pitches.ino) sketch. Otherwise, the Arduino
IDE won’t be able to find it, and will display a series of
errors when it cannot fathom what you mean by
“NOTE_F7.”
• tones — an array of frequencies to play.
SERVO 10.2013 65
LISTING 3. ardbot_sound_pitches.
• toneDurations — another array that indicates the
number of “beats” for each note.
• The number of int elements in the tones array. This
allows the make Tone function to know how many
notes to play. To play all the notes — the usual
procedure — I use some well-known code that self-describes the number of elements in the array.
Depending on your usage, you can alter the number
of tones to play to be less than the actual number of
elements in the tones array.
Here is an example of setting up the tones and
duration, and then calling the make Tone function. This is
for when the left bumper switch has been hit:
int tones[] = {NOTE_C4, NOTE_B3, NOTE_C4};
int toneDurations[] = {4,4,4};
reverse();
makeTone(tones, toneDurations,
sizeof(tones)/sizeof(int));
Coming Up:
(The process is the same for the right bumper, except
for that switch I’ve used two instead of three notes, and
the notes are different. This is to differentiate between a
right and left bumper strike.)
Remote Control ArdBot
Notice that rather than actual tone frequencies, the
tones are defined as constants. This makes it easier to refer
to them in code. The constants are defined at the top of
the sketch:
We’ve run out of room for this installment. I was going
to introduce you to using an infrared sensor to operate your
ArdBot II with a universal remote control, but alas, it’ll have
to wait until next time. In upcoming parts, you’ll also
discover some additional cool ways to allow your ArdBot II
to think and act on its own. SV
About the Author
Gordon McComb is the author of the best-selling
Robot Builder’s Bonanza and the new Arduino Robot
Bonanza, both published by McGraw-Hill.
Refer to Part 1 of this series for a full list of
mechanical parts for the ArdBot II.
Click to subscribe to this magazine | http://servo.texterity.com/servo/201310/?pg=65 | CC-MAIN-2019-30 | refinedweb | 669 | 64.2 |
>> whether Amal can win stone game or not in Python
Suppose there are two players Amal and Bimal, they are playing a game, and with Amal starts first. Initially, there are n different stones in a pile. On each player's turn, he makes a move consisting of removing any square number (non-zero) of stones from the pile. Also, if one player is unable to make a move, he loses the game. So if we have n, we have to check whether Amal can win the game or not.
So, if the input is like n = 21, then the output will be True because at first Amal can take 16, then Bimal takes 4, then Amal takes 1 and wins the game.
To solve this, we will follow these steps −
squares := a new list
square := 1
increase := 3
while square <= n, do
insert square at the end of squares
square := square + increase
increase := increase + 2
insert square at the end of squares
dp := a blank list of size (n + 1)
dp[0] := False
for k in range 1 to n, do
s := 0
dp[k] := False
while squares[s] <= k and dp[k] is empty, do
if dp[k - squares[s]] is empty, then
dp[k] := True
s := s + 1
return last element of dp
Example
Let us see the following implementation to get better understanding
def solve(n): squares = [] square = 1 increase = 3 while square <= n: squares.append(square) square += increase increase += 2 squares.append(square) dp = [None] * (n + 1) dp[0] = False for k in range(1, n + 1): s = 0 dp[k] = False while squares[s] <= k and not dp[k]: if not dp[k - squares[s]]: dp[k] = True s += 1 return dp[-1] n = 21 print(solve(n))
Input
21
Output
True
- Related Questions & Answers
- Program to check whether first player win in candy remove game or not in Python?
- Program to check person 1 can win the candy game by taking maximum score or not in Python
- Python program to check whether we can pile up cubes or not
- Program to check whether we can take all courses or not in Python
- Program to check whether we can unlock all rooms or not in python
- Program to check whether all can get a seat or not in Python
- Program to check whether first player can win a game where players can form string char by char in C++
- Program to find winner of stone game in Python
- Program to check whether we can get N queens solution or not in Python
- Program to check whether parentheses are balanced or not in Python
- Program to check whether one point can be converted to another or not in Python
- C++ program to check xor game results 0 or not
- Program to find maximum score in stone game in Python
- Program to check whether we can convert string in K moves or not using Python
- Stone Game in C++ | https://www.tutorialspoint.com/program-to-check-whether-amal-can-win-stone-game-or-not-in-python | CC-MAIN-2022-27 | refinedweb | 493 | 57.81 |
- OSI-Approved Open Source (8)
- GNU General Public License version 2.0 (4)
- GNU Library or Lesser General Public License version 2.0 (2)
- BSD License (1)
- Educational Community License, Version 2.0 (1)
- MIT License (1)
- Mozilla Public License 1.1 (1)
- NASA Open Source Agreement (1)
- Open Software License 3.0 (1)
- PHP License (1)
- Sun Public License
Extra Language for Existing Software
Software to build extra language resource for existing software and software library packages and tools for language computing0 weekly downloads
Kodougu Project
Kodougu is an open source web-based modeling tool. This tool also has a web-based modeling-language designer.0 weekly downloads
epo
"epo" is an advanced archiving system on Windows platforms. It tightly integrates into Explorer through its shell namespace extension and offers very easy-to-use archiving features.0 weekly downloads | http://sourceforge.net/directory/natlanguage:japanese/developmentstatus:planning/os:mswin_2000/ | CC-MAIN-2015-35 | refinedweb | 140 | 50.12 |
Hi everyone,
I really deserve a kick for asking this question so please bear with me.
Assume i have two classes class a and class b.
Now class b depends on class a to compile correctly. Class a has been compiled and the class file is generated correctly. Now when i compile class b, it says it can't find class a although class a is in the same directory as the source of class b.
This is what in class a
public class a { public void prt() { System.out.println("Jwenting is going to kill me for asking this question"); } }
This is what is class b
public class b { public static void main(String args[]) { a w = new a(); a.prt(); } }
Please note that both the sources of class a and class b are in the same directory and there are no packages involved.
Now this is how i am compiling class b
javac b.java
Is there any classpath setting that i need to take care of so that my class b can compile.
Again i apologize for my stupidy
Yours Sincerely
Richard West | https://www.daniweb.com/programming/software-development/threads/34773/compiling | CC-MAIN-2017-34 | refinedweb | 186 | 80.31 |
Alexa is the cloud-based natural voice service from Amazon. With Alexa, you can create natural voice experiences - apps, quizzes, games and even chatbots. It's an intuitive way of interacting with technology - using natural spoken language - and once you are past the learning curve, developing for Alexa can be quite fun.
In December 2019 Umbraco HQ introduced "Umbraco Heartcore", the headless CMS. With Umbraco Heartcore, all your content, media etc. are exposed via a REST API. Your CMS acts as a data store for your structured content, and the API is managed for you, which means developers no longer need to spend time keeping the API up to date. And being a REST API, it can be used to power any front-end - from websites to even IoT devices.
In this article, I will make an attempt at using Heartcore as a data store to my Alexa Skill.
I come from India. Talk about my home country and you just cannot not talk about spices. Spices might be "the ingredient which adds taste and aroma to your food", but they have a variety of uses - from cleaning to quick home remedies. So I have designed a Custom Alexa Skill called "Spice It Up" which gives a random fact about spices to the user. All the "facts" are stored in Umbraco Heartcore, and my Alexa Skill will give out one randomly picked fact to the user. In addition to the "spice facts", I also store some messaging and images in Heartcore.
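The core behaviour - handing back one randomly picked fact - is simple enough to sketch up front. The sketch below assumes the facts have already been fetched out of Heartcore into a plain list; the function and variable names are my own for illustration, not part of any SDK.

```python
import random

def pick_spice_fact(facts, rng=random):
    """Return one randomly chosen fact, or a fallback line if none exist."""
    if not facts:
        return "I am all out of spice facts for now. Please try again later."
    return rng.choice(facts)

# Example: facts as they might come out of the CMS
facts = [
    "Turmeric has been used as a natural dye for centuries.",
    "Cloves can be chewed to freshen breath.",
    "Black pepper was once so valuable it was used as currency.",
]
print(pick_spice_fact(facts))
```

Passing the random source in as a parameter keeps the function easy to test with a seeded generator.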
Umbraco Heartcore Set Up
Let's start off by looking at the Heartcore set up. I have set up a project in the Umbraco Cloud Portal. More information on how you can set up a trial project can be found here. Although there are some differences, the backoffice of a Heartcore project is very similar to a normal Umbraco backoffice. You can read about the Umbraco Heartcore backoffice here. The biggest difference is the "Headless" tree in the Settings section. This section has the API Browser which can be used to test the endpoints and Webhooks which can be used to post information about an action elsewhere. And of course, being the headless CMS we done have templates either!
API Browser in Umbraco Heartcore
As mentioned above, I use Umbraco Heartcore to store my "facts" about spices. So that calls for some document types. I have set up 3 document types
- Spice Facts Container - The container for all my facts. It's allowed as root and also holds some information which is quite generic about my skill
- Spice Fact - This is used to store my fact.
I use the above doc types to set up content. And I have my content tree ready now.
Spice Facts Container Doctype
Spice Facts Container Sample Content
Spice Fact doctype
Spice Fact sample content
There are two APIs endpoints available for any Heartcore project
- Content Delivery API - The read only API to query content, media etc
- Content Management API - A protected endpoint for performing CRUD operations on content, media, languages, members etc.
You can also opt to the Content Delivery API endpoint secure. Notes on how you can connect to both these APIs securely can be found here.
To access and use the REST API, the HQ has developed some client libraries. There are client libraries available for .Net Core and Node.js.
Introduction to Alexa Skill Development
The apps you develop for Alexa are called Skills. These skills can be extended using Alexa Skills Kit(ASK), which is a collection of tools, APIs, samples and documentation. There are various types of Alexa Skills. What I am showcasing here is a Custom Skill, which is the most versatile of the lot, where you as a developer exercises fine control over every aspect of the skill.
A custom skill has 4 main components
- Invocation Name - The name which Alexa uses to identify the skill, in my example it's "spice it up".
- Intents - The functionality of the skill or what actions can be performed by the skill. In my example, rendering a random fact about spices is the functionality of my skill and hence an intent. I have named my intent SpiceIntent
- Utterances - These are words or phrases which you as a developer define for every intent in the skill. When user says what they want the skill to do, Alexa tries to match the spoken words/phrases to an intent.
- Your endpoint - It can be a cloud based or hosted API, available over HTTPS.
Invocation name, Intents and Utterances together make what is called the "Interaction Model". Alexa uses AI and Machine Learning behind the scenes to train itself on the interaction model. It is the brain of a skill.
The Amazon Developer Console
Alexa Skill is configured in the Amazon Developer Portal. Let me take you through the configuration of my skill in the portal.
Alexa Developer Console
I start off by creating a skill in the Developer Console. I enter my skill name and choose my default language. Further languages can be added in a later screen. I am creating a custom skill so I choose Custom as the skill model. I plan to host my API in Azure so I choose Provision you own as the method to host my API.
Skill Creation
The next screen is the template selection screen. I prefer to start from scratch. The difference here is that for the available templates some of the built-in intents specific to games, quizzes etc are added for you. When you choose to start from scratch only the compulsory built-in intents are added to your skill model. As you can see a fact skill template is already available, choosing a template from scratch here is purely personal preference. But if you are creating a quiz or a game skill for the first time I would highly recommend choosing a template so that you can see all the right built-intents that needs to be configured and handled for your skill.
Skill Template
Past this screen, I can start configuring my interaction model.
At this point you can choose to add more languages to your skill using the language dropdown in the top left corner.
Add more languages with the language dropdown in the upper left
Important: If your skill suppports multiple languages you must make sure that your response is translated correctly. For e.g. responses from a skill which supports and is being used in a German locale must be in German, it cannot be in English. This is an important functional requirement.
The first item in the interaction model is the invocation name. This is the name used by the user to start conversing with a skill. Alexa identifies a skill using the invocation name. There are requirements around the invocation name which can be found here. But the most important ones are
- Only lower-case alphabets, spaces and periods are allowed
- The invocation name must not contain any of the Alexa skill launch phrases or connecting words or wake words
Invocation Name
I can now start configuring intents and utterances. There are two types of intents - built-in and custom. There are about 25 standard built-in intents available out of which the following are compulsory to any custom skill.
- AMAZON.NavigateHomeIntent - Active only on Alexa devices with screen where it returns users to home screen.
- AMAZON.HelpIntent - Provides help on how to use the skill
- AMAZON.FallbackIntent - This intent is triggered when Alexa cannot match the words spoken by the user to any of the utterances specified.
- AMAZON.CancelIntent - It can be used to exit a skill or cancel a transaction, but remain in the skill.
- AMAZON.StopIntent - Used to exit the skill
You can read about all the standard built-in intents here.
Even though the above intents are compulsory to any custom skill, it is not necessary to handle them in your API. But as a best practice it is highly recommended as you can control the output response and provide your users with information that is contextual to your skill.
Custom intents are specified by the skill developer. There are requirements around the names of custom intents as well. Only alphabets and underscores are allowed. If you noticed the name of built-in intents are in the AMAZON namespace and they are specified using a period. But it's not possible to do that with custom intents. No special characters or numbers are allowed. For my skill I specify a custom intent called SpiceIntent. This intent will help serve the random fact to the users.
Finally, let us talk about utterances. Utterances are words or phrases that can be specified for each intent. When your user uses any of the utterances while conversing with the skill, Alexa tries to find a match for the words/phrases among the intents in the skill. If no match is found among the intents, the FallbackIntent is triggered. The quality of a skill depends upon the level of natural conversations you can have with the skill. As a developer you have to make sure that maximum conversations result in proper conversational response served to the user then fallback messages. So the trick lies in thinking from a user's perspective on what phrases they can use, how they order their words, how they phrase their conversation and mapping them as utterances in your intents. There are requirements on how utterances should be and you can read about it here. You must specify utterances for your custom intents. The built-in intents ship with a few sample utterances but you can always add more utterances if you need to.
Intents and Utterances
How users interact with your skill
Conversations with a custom skill needs to follow a certain pattern like so.
-wake word- -launch-phrase- -invocation-name- -connecting-word- -utterances-
Wake word is usually Alexa. Upon encountering this word Alexa starts listening to the user. Launch phrases are keywords and some examples are Ask, Tell, Open, Launch. These words are used to start a conversation with your skill. Connecting words are word like for, whether, what etc to make the conversation sound natural. More information on these can be found here.
For e.g. you can talk to my skill like so "Alexa, Ask Spice it up for a spice fact".
My skill serves a random fact to the user at this point.
You can also do the above as a 2 step conversation like so
"Alexa, Launch Spice it up"
This opens a session with the skill and Alexa understands that further conversations from the user must be served using my skill. The user gets to hear a custom welcome message from my skill at this point.
"Tell me a spice fact"
I have specified the above as an utterance for my custom intent. So the user gets to hear a fact from my skill.
Now, you must be wondering how a user gets information on how to converse with a skill. When you publish your skill to Amazon skill store you are required to enter some example utterances to help get a user started and this information is available in the skill card in the skill store.
Skills Card in the Amazon skills store
The API
The API is where all the magic happens. Alexa does a secure HTTPPOST to your API, so your API must be available over HTTPS. The API can be hosted on a dedicated server or any cloud platform like Azure. Alexa posts a structured response to your API, so your API must be capable of handling the request JSON and must also be able to serve up a structured JSON as the response.
I have developed the API for my skill as a .Net Core web API. You can read about how to build one here. My web API needs to communicate with my Heartcore project so I begin by installing the Heartcore .Net Core client library into my project. You can do that using the following command in Visual Studio Package Manager Console.
Install-Package Umbraco.Headless.Client.Net
My web API needs knowledge of the Heartcore project alias to communicate with the REST API and I specify it in the appsettings.json as a section called Umbraco. If you choose to protect the Content Delivery API in your Heartcore project the API key can be specified in the file as well.
"Umbraco": { "ProjectAlias": "your-project-alias", "APIKey":"your-API-key" }
Now I can connect to my Heartcore project and get an instance of the ContentDeliveryService as shown below.
private ContentDeliveryService GetContentDeliveryService() { //get the Umbraco section from appsettings.json var umbracoConfig = this._configuration.GetSection("Umbraco"); //get the value of project alias var projectAlias = umbracoConfig.GetValue<string>("ProjectAlias"); //get an instance of the ContentDeliveryService return new ContentDeliveryService(projectAlias); }
My API needs to accept the structure JSON request and serve up the structured JSON response. So my end point starts to look like below.
[HttpPost] [Route("spicefacts")] public async Task GetSpiceFactsAsync([FromBody]SkillRequest request) { SkillResponse skillResponse = new SkillResponse { Version = AlexaConstants.AlexaVersion, SessionAttributes = new Dictionary<string, object>() { { "locale", "en_GB" } } }; switch (request.Request.Type) { case "LaunchRequest": skillResponse.Response = await this.LaunchRequestHandlerAsync(); break; case "IntentRequest": skillResponse.Response = await this.IntentRequestHandlerAsync(request); break; } return skillResponse; }
The SkillRequest object is a C# representation of the request JSON. The code can be found here. Similarly, the C# representation of the response JSON SkillResponse can be found here.
Taking a deeper look at the code above, the first step is to ensure the type of request. The incoming JSON has a request object which contains information on the type of request. A LaunchRequest type is fired to the API when the user initiates a conversation with the skill but they do not expect the skill to perform any function. An IntentRequest is fired when Alexa finds a matching intent for the utterances from the user. I have separate handler methods for handling both request types.
private async Task LaunchRequest); var response = new Response { OutputSpeech = new OutputSpeech() { Ssml = this.SsmlDecorate(spiceContainer.Properties["welcomeMessage"].ToString()), Text = spiceContainer.Properties["welcomeMessage"].ToString(), // get the welcome message text from spice facts container node and serve it up as SSML Type = "SSML" }, Reprompt = new Reprompt() { OutputSpeech = new OutputSpeech() { Text = spiceContainer.Properties["welcomeRepromptMessage"].ToString(), Type = "SSML", Ssml = this.SsmlDecorate(spiceContainer.Properties["welcomeRepromptMessage"].ToString()), } }, Card = new Card() { Title = spiceContainer.Properties["alexaSkillName"].ToString(), // get the value of alexaSkillName property from the node Text = spiceContainer.Properties["welcomeMessage"].ToString(), Type = "Standard", Image = new CardImage() { LargeImageUrl = media.Url, SmallImageUrl = media.Url } }, ShouldEndSession = false }; return response; }
The output response has three main objects
- OutputSpeech - The output speech heard by the user. The type I have used is SSML(Speech Synthesis Markup Language). You can control how Alexa generates speech and other speech effects using SSML. I am not going into it in great detail as that is outside the scope of the article.
- Reprompt - This is what the user hears as a reprompt. When the user does not talk further after launching the skill, Alexa initiates the reprompt to the user
- Card - This is relevant for Alexa devices with screen. You can choose to show the spoken message along with a a title and an image as a visual card to the user.
This is how my method for IntentRequest handling looks like. In my handler method I check for the actual intent triggered and serve up corresponding response. As mentioned above I am handling both custom and built-in intents.
private async Task IntentRequestHandlerAsync(SkillRequest request) { Response response = null; switch (request.Request.Intent.Name) { case "SpiceIntent": case "AMAZON.FallbackIntent": response = await this.SpiceIntentHandlerAsync(); break; case "AMAZON.CancelIntent": //handling built-in intents case "AMAZON.StopIntent": response = await this.CancelOrStopIntentHandler(); break; case "AMAZON.HelpIntent": response = await this.HelpIntentHandler(); break; default: response = await this.SpiceIntentHandlerAsync(); break; } return response; }
And this is how the intenthandler for my custom intent SpiceIntent looks like.
private async Task SpiceIntent); // get all spice facts and choose a random node var spiceFactItems = await service.Content.GetByType("SpiceFact"); var next = random.Next(spiceFactItems.Content.Items.Count()); var item = spiceFactItems.Content.Items.ElementAt(next); var response = new Response { OutputSpeech = new OutputSpeech() { Ssml = this.SsmlDecorate(item.Properties["fact"].ToString()), // get the value of the "fact" property from the node and serve it up as SSML Text = item.Properties["fact"].ToString(), Type = "SSML" }, Card = new Card() { Title = spiceContainer.Properties["alexaSkillName"].ToString(), Text = item.Properties["fact"].ToString(), Type = "Standard", Image = new CardImage() { LargeImageUrl = media.Url, SmallImageUrl = media.Url } }, ShouldEndSession = false }; return response; }
I am not going into great detail of code around the handler methods for built-in intents. All the code is available in my GitHub Repo
I host my API in Azure and add the end point in the Amazon Developer Console as shown below.
Adding an endpoint in the Amazon Developer Console
Once saved, you will need to go back to the intents section and build the model. Alexa begins to train itself on the interaction model of your skill.
Building the model in the intents section
Once the model has been successfully built you can use the test console to test your skill. Make sure you enable skill testing before you start!
I have noticed a slight blip in the visual card in the test console. However, if you are logged into your Alexa device or Alexa smartphone app with the same account as your developer account, you can test a skill which in development and see the visual cards. I had to resort to it while I was testing my skill.
Test console
Make your skill available in the skill store
If you wish to make your skill live and available to end users via the skill store you will need to complete the certification process. More information on it can be found here. There are security and functional tests performed on your API by Amazon as a part of this. The requirements for this can be found here. Documentation on it can be found in the link mentioned above. To get some inspiration on securing the API according to the requirements visit the Github Repo. This is the demo I put together for my talk at the UK Festival last year. The APIController and RequestValidationHandler should help you out with crossing off all the points in the security checklist.
Conclusion
I know, I know it's been a lengthy article. But even then this only covers the very basics of Alexa Skill development. There are so many advanced things that you can do like introducing slots, delegating conversation, account linking and even making purchases. The developer documentation is a good starting point for it. It's quite strong and getting better day by day. Amazon introduces new features into Alexa regularly, so cast an eye on the monthly newsletter that comes by. So there you go, give this a spin yourself and let me know :-)
Ta for now!
Also in Issue No 58
Tabs and groups — make it work, then make it better
by Søren Kottal | https://skrift.io/issues/spice-up-your-alexa-skill-using-umbraco-heartcore/ | CC-MAIN-2020-40 | refinedweb | 3,173 | 65.01 |
[size="3"]What is a Sound Server?
A Sound Server is some code to play sound. (What a general definition !). Each time you want to code computer sound, you have to use a Sound Server. A Sound Server can be represented as a single thread, managing the System sound device all the time. The user (you !) just send some sound data to the Sound Server regularly.
[size="3"]Why need I a sound Server?
Let's imagine you just coded a great demo, and you want to add some music. You like these old soundchip tunes and you want to play an YM file (ST-Sound music file format). You download the package to play an YM file, but unfortunately the package is only a "music rendering" package. That is, with the library, you can ONLY generate some samples in memory, not into the sound device ! Many music library are made like this. Traditionally, the library provides a function like:
So the SoundServer is for you !!! My SoundServer provides all Windows sound device managing, and call your callback regulary. Here is what your code should be:
MusicCompute(void *pSampleBuffer,long bufferSize)
[size="3"]How does it work?
#include
#include "SoundServer"
static CSoundServer soundServer;
void myCallback(void *pSampleBuffer,long bufferLen)
{
MusicCompute(pSampleBuffer,bufferSize); // original music package API
}
void main(void)
{
soundServer.open(myCallback);
// wait a key or anything you want
soundServer.close();
}
Managing sound device under Windows can be done with various API. Today we'll use the classic Multimedia API called WaveOut. So our Sound Server will work properly even if you don't have DirectSound. We'll see a DirectSound version of the Sound Server in the next article.
The main problem is that we're speaking of sound, so we have some rules to respect to avoid listening some nasty *blips* in the sound stream. Let's imagine we want to play an YM file at 44100Khz, 16bits, mono, and we have internal buffers of 1 second. First, we fill our buffer with the start of the music, and we play the buffer through the Windows API. After one second, buffer is finished, so Windows tell us that the buffer is done, and we have to fill it again with the next of the song. We can fill our buffer again, and send back to the sound device. BUT, in this case, playback is stopped until we fill the buffer again, so we hear some *blips* !!
To avoid that problem, we'll use the queuing capability of the WaveOut API. Just imagine you have two buffers of one second each, already filled with valid sound data (let's call them buffer1 and buffer2). If you play buffer1 and IMMEDIATELY play buffer2, buffer1 is not cutted. Buffer2 is just "queued", and buffer1 is still playing. when buffer1 is finished, Windows starts IMMEDIATELY buffer2 so there is no *blips*, and inform you buffer1 is finished through a callback. So you have 1 second to fill buffer1 again and send it to the sound device. Quite simple, no?
[size="3"]Let's do the code
All the sound server is encapsulated in a class called CSoundServer. You start the server by calling the "open" method. Open method gets your own callback function as an argument. Then we initialize the Windows WaveOut API by calling
Please note that waveOutProc is our internal callback. And this callback will call your user-callback.
waveOutOpen( &m_hWaveOut, WAVE_MAPPER, &wfx, (DWORD)waveOutProc, (DWORD)this, // User data.
(DWORD)CALLBACK_FUNCTION);
Then we fill all our sound buffer (remember the multi-buffering to avoid *blips*).
Let's have a look to the most important function: "fillNextSoundBuffer". First, we have to call your user callback, to fill the sound buffer with real sample data.
for (i=0;i
{{
fillNextSoundBuffer();
}
Then we have to prepare the buffer before sending it to the sound device:
// Call the user function to fill the buffer with anything you want ! :-)
if (m_pUserCallback) m_pUserCallback(m_pSoundBuffer[m_currentBuffer],m_bufferSize);
and finally we can send it to the device with the waveOutWrite
// Prepare the buffer to be sent to the WaveOut API
m_waveHeader[m_currentBuffer].lpData = (char*)m_pSoundBuffer[m_currentBuffer];
m_waveHeader[m_currentBuffer].dwBufferLength = m_bufferSize;
waveOutPrepareHeader(m_hWaveOut,&m_waveHeader[m_currentBuffer],sizeof(WAVEHDR));
That's all folks !! Quite easy, no ??
// Send the buffer the the WaveOut queue
waveOutWrite(m_hWaveOut,&m_waveHeader[m_currentBuffer],sizeof(WAVEHDR));
[size="3"]How can I use it?
I like "clean and short" code. Traditionally, when I get a source code from the web, it's always a nightmare to compile and run it. So I try to do things as simple as possible. To use the sound server, just copy SoundServer.cpp and SoundServer.h files in your project directory.
WARNING: Do not forget to link your project with WINMM.LIB to use the Sound Server.
[size="5"]Part 2 : DirectX Sound Server
In part 1, we learned to make a SoundServer using the windows WaveOut API. Now we'll use the famous DirectSound API.
[size="3"]Is DirectSound better than WaveOut?
As all simple questions, answer is quite not simple ! :-) In fact, you have to know exactly what's important for your sound server. If your program have to be very accurate (I mean game, demo, or anything requiring high visual/sound synchronization), use DirectSound. The drawback is that it's a bit more complicated to use (thread usage) and user should have the DirectX API installed. If you only want to play a streaming audio in the background in a tool, just use the WaveOut version.
[size="3"]How does it work?
If you read the previous part, you're familiar with the multi-buffering. With DirectSound, we don't use the same technique. Basically DirectSound provides a set of sample buffers, and mix them together. If you want some sound fx in your next generation video game, just create a DirectSoundBuffer for each sample, and play them. DirectSound manages all the mixing, polling etc.. for you !
So you say, "great", that's quite easy ! Yes, but we're speaking of a sound server, for streaming purpose ! So we have the same problem: we want a short sound buffer, and we want the sound server call our own callback periodically. Unfortunately, DirectX7 does not provide streaming sound buffer (maybe in DX8). So we'll use that scheme:
- Create a DirectSoundBuffer
- Create and launch a thread rout, which goal is to poll the SoundBuffer without end. Each time we have a little space in it, we fill the buffer with our own data, and so on.
DirectSound uses SoundBuffer to play sound. You can use one sound buffer for each sound effect you have to play. All these sounds are mixed into an special sound buffer called the primary sound buffer. All DirectSound app must create a primary sound buffer. For our SoundServer, we can fill directly the primary sound buffer with our data, but writing to the primary sound buffer is not allowed on all drivers or operating system (NT). So we'll use a second buffer, which is not a primary one. We can lock/unlock and write data in that new buffer without trouble. So our SoundServer will contain a primary sound buffer and a classic sound buffer.
[size="3"]Let's do the code
All the sound server is encapsulated in a class called CDXSoundServer. You have to send the handle of your main window to the constructor, because DirectX need it. Then you can start the server by calling the "open" method. Open method gets your own callback function as an argument. Let's see the open method in detail:
1) Create a DirectSound object.
WARNING: In our sample I simply check if all is ok. If not, open returns FALSE. You have to add some better error message handler. As an example, if DirectSoundCreate returns an error, maybe DirectSound is not installed on the machine.
HRESULT hRes = ::DirectSoundCreate(0, &m_pDS, 0);
2) At that point, m_pDS is a valid LPDIRECTSOUND. Now we set a cooperative level. We choose DSSCL_EXCLUSIVE because we want our app to be the only one to play sound (others apps stops playing sound if they don't have the focus) and DSSCL_PRIORITY allowing us to set our own sound buffer format. (This is only for easy enderstanding, because DSSCL_EXCLUSIVE includes DSSCL_PRIORITY).
3) Now we can create the primary sound buffer and set its internal format.
hRes = m_pDS->SetCooperativeLevel(m_hWnd,DSSCL_EXCLUSIVE | DSSCL_PRIORITY);
4) Now create a normal sound buffer (to be filled by our rout) and set the same format as the primary. Of course you can set another format, but in that case you'll get a speed penalty.
DSBUFFERDESC bufferDesc;
memset(&bufferDesc, 0, sizeof(DSBUFFERDESC));
bufferDesc.dwSize = sizeof(DSBUFFERDESC);
bufferDesc.dwFlags = DSBCAPS_PRIMARYBUFFER|DSBCAPS_STICKYFOCUS;
bufferDesc.dwBufferBytes = 0;
bufferDesc.lpwfxFormat = NULL;
hRes = m_pDS->CreateSoundBuffer(&bufferDesc,&m_pPrimary, NULL);
if (hRes == DS_OK)
{
WAVEFORMATEX format;
memset(&format, 0, sizeof(WAVEFORMATEX));
format.wFormatTag = WAVE_FORMAT_PCM;
format.nChannels = 1; // mono
format.nSamplesPerSec = DXREPLAY_RATE;
format.nAvgBytesPerSec = DXREPLAY_SAMPLESIZE * DXREPLAY_RATE;
format.nBlockAlign = DXREPLAY_SAMPLESIZE;
format.wBitsPerSample = DXREPLAY_DEPTH;
format.cbSize = 0;
hRes = m_pPrimary->SetFormat(&format);
WARNING: Please notice the DSBCAPS_STICKYFOCUS flags. This flags allow our app to play sound even if we don't have have focus. Very useful if you write a sound player. The DSBCAPS_GETCURRENTPOSITION2 tells DirectSound we'll use the GetPosition method later on that sound buffer.
DSBUFFERDESC bufferDesc;
memset(&bufferDesc,0,sizeof(bufferDesc));
bufferDesc.dwSize = sizeof(bufferDesc);
bufferDesc.dwFlags = DSBCAPS_GETCURRENTPOSITION2|DSBCAPS_STICKYFOCUS;
bufferDesc.dwBufferBytes = DXREPLAY_BUFFERLEN;
bufferDesc.lpwfxFormat = &format; // Same format as primary
hRes = m_pDS->CreateSoundBuffer(&bufferDesc,&m_pBuffer,NULL);
5) And finally, play the empty sound buffer in loop mode, and launch a new thread to fill it:
[size="3"]Some words about threads...
hRes = m_pBuffer->Play(0, 0, DSBPLAY_LOOPING);
m_hThread = (HANDLE)CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)threadRout,(void *)this,0,&tmp);
What's a thread ?? A thread is another task of your program. That is, you have benefit of multi-tasking AND memory sharing ! Our thread have to check the sound buffer all the time. So let's imagine we have only two threads running: our app and our sound thread. All threads will share 50% of CPU each. But I'm sure you don't want your SoundServer takes 50% of CPU time !! :-) So we'll use the "sleep" function. Sleep tells window to forgot the thread for a given amount of time. Sleep(20) suspends the thread for 20ms, so the app have 100% of CPU in that time ! 20ms is a good timing for a sound server. Of course, in practice your app will never have exactly 100% CPU, because of the operating system himself. Our thread routs looks like:
NOTE: You may have notice the m_bThreadRunning is "volatile". Don't forget thread uses shared memory, so the m_bThreadRunning member can be changed by another task. That's why we don't want the compiler uses registers. Volatile tells compiler the memory can be changed by an interrupt routine.
static DWORD WINAPI __stdcall threadRout(void *pObject)
{
CDXSoundServer *pDS = (CDXSoundServer *)pObject;
if (pDS)
{
while ( pDS->update() )
{
Sleep(20);
}
}
return 0;
}
[size="3"]How to fill a sound buffer...
Our thread rout calls DCXSoundServer::update() as often as possible. That function have to check where the sound buffer is currently playing (sound buffer are circular). We keep an internal position (m_writePos) which is our own position. Let's imagine the sound buffer is 8192 bytes len and we already computed 120 bytes. so m_writePos = 120. At the same time, let's say the playing position is 4120. So we can compute safely 4000 bytes of new data from m_writePos to playPos. (because we can't write over the playing cursor without hear nasty glitches). So, first we get the playing position, and we compute the data size to be generated from m_writePos to playPos (don't forget we're in a circular buffer)
Now we can safely compute"wrileLen" bytes of data at the m_writePos. To fill a DirectSoundBuffer, we have to lock it:
HRESULT hRes = m_pBuffer->GetCurrentPosition(&playPos,&unusedWriteCursor);
if (m_writePos < playPos) writeLen = playPos - m_writePos;
else writeLen = DXREPLAY_BUFFERLEN - (m_writePos - playPos);
Please notice that lock can returns an error when SoundBuffer have to be restored. Our error check is very lame, because we don't check the DSERR_BUFFERLOST error code. But that's quite enough for our article! Finally we can call our user callback with valid pointer and size:
while (DS_OK != m_pBuffer->Lock(m_writePos,writeLen,&p1,&l1,&p2,&l2,0))
{
m_pBuffer->Restore();
m_pBuffer->Play(0, 0, DSBPLAY_LOOPING);
}
[size="3"]Source code and sample project...
if (m_pUserCallback)
{
if ((p1) && (l1>0)) m_pUserCallback(p1,l1);
if ((p2) && (l2>0)) m_pUserCallback(p2,l2);
}
As always, the source code of a very "short and simple" sound server using DirectSound API. If you want to use it in your project, just use DXSoundServer.cpp and DXSoundServer.h
As a sample, you can download a complete project containing a sin wave generator, using both WaveOut or DirectSound API.
WARNING: Do not forget to link your project with WINMM.LIB to use the WaveOut Sound Server, and DSOUND.LIB to use the DirectSound API.
Hope you like the article ! | https://www.gamedev.net/articles/programming/general-and-gameplay-programming/sound-server-programming-r1348 | CC-MAIN-2017-30 | refinedweb | 2,173 | 58.99 |
Given a posterior p(Θ|D) over some parameters Θ, one can define the following:
The Highest Posterior Density Region is the set of most probable values of Θ that, in total, constitute 100(1-α) % of the posterior mass.
In other words, for a given α, we look for a p* that satisfies:

P(p(Θ|D) > p*) = 1 - α

and then obtain the Highest Posterior Density Region as the set:

C = {Θ : p(Θ|D) > p*}
Using the same notation as above, a Credible Region (or interval) is defined as any set C satisfying:

P(Θ ∈ C | D) = 1 - α
Depending on the distribution, there could be many such intervals. The central credible interval is defined as a credible interval where there is (1-α)/2 mass on each tail.
For general distributions, given samples from the distribution, are there any built-ins in to obtain the two quantities above in Python or PyMC?
For common parametric distributions (e.g. Beta, Gaussian, etc.) are there any built-ins or libraries to compute this using SciPy or statsmodels?
From my understanding, "central credible region" is not any different from how confidence intervals are calculated; all you need is the inverse of the cdf function at alpha/2 and 1-alpha/2; in scipy this is called ppf (percent point function); so for a Gaussian posterior distribution:
>>> from scipy.stats import norm
>>> alpha = .05
>>> l, u = norm.ppf(alpha / 2), norm.ppf(1 - alpha / 2)
to verify that [l, u] covers (1-alpha) of posterior density:
>>> norm.cdf(u) - norm.cdf(l)
0.94999999999999996
similarly for a Beta posterior with say a=1 and b=3:
>>> from scipy.stats import beta
>>> l, u = beta.ppf(alpha / 2, a=1, b=3), beta.ppf(1 - alpha / 2, a=1, b=3)
and again:
>>> beta.cdf(u, a=1, b=3) - beta.cdf(l, a=1, b=3)
0.94999999999999996
here you can see the parametric distributions that are included in scipy; and I guess all of them have a ppf function;
As for the highest posterior density region, it is more tricky, since the pdf need not be uni-modal; consider for example a Beta distribution with a = b = .5 (as can be seen here);
But, in the case of the Gaussian distribution, it is easy to see that the "Highest Posterior Density Region" coincides with the "Central Credible Region"; and I think that is the case for all symmetric uni-modal distributions (i.e. if the pdf is symmetric around the mode of the distribution).
A possible numerical approach for the general case would be binary search over the value of p* using numerical integration of the pdf;
Here is an example for mixture Gaussian:
[ 1 ] First thing you need is an analytical pdf function; for mixture Gaussian that is easy:
import numpy as np

def mix_norm_pdf(x, loc, scale, weight):
    from scipy.stats import norm
    return np.dot(weight, norm.pdf(x, loc, scale))
so for example for location, scale and weight values as in

loc = np.array([-1, 3])      # mean values
scale = np.array([.5, .8])   # standard deviations
weight = np.array([.4, .6])  # mixture probabilities
you will get two nice Gaussian distributions holding hands:
[ 2 ] now, you need an error function which, given a test value for p*, integrates the pdf function above p* and returns the squared error from the desired value 1 - alpha:
def errfn(p, alpha, *args):
    from scipy import integrate

    def fn(x):
        pdf = mix_norm_pdf(x, *args)
        return pdf if pdf > p else 0

    # ideally integration limits should not
    # be hard coded but inferred
    lb, ub = -3, 6
    prob = integrate.quad(fn, lb, ub)[0]
    return (prob + alpha - 1.0)**2
[ 3 ] now, for a given value of alpha we can minimize the error function to obtain p*:
alpha = .05
from scipy.optimize import fmin
p = fmin(errfn, x0=0, args=(alpha, loc, scale, weight))[0]
which results in p* = 0.0450, and HPD as below; the red area represents 1 - alpha of the distribution, and the horizontal dashed line is p*.
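As for the sample-based case raised in the question: with draws from the posterior you don't need a parametric pdf at all. The central credible interval is just a pair of empirical quantiles, and an HPD interval can be approximated as the shortest window covering 1 - alpha of the sorted samples. A plain-Python sketch (with NumPy you would use np.percentile for the first part):

```python
def central_credible_interval(samples, alpha=0.05):
    """Empirical (alpha/2, 1 - alpha/2) quantiles of posterior samples."""
    s = sorted(samples)
    n = len(s)
    return s[int((alpha / 2) * (n - 1))], s[int((1 - alpha / 2) * (n - 1))]

def hpd_interval(samples, alpha=0.05):
    """Shortest contiguous window of sorted samples holding 1 - alpha mass."""
    s = sorted(samples)
    n = len(s)
    k = int((1 - alpha) * n)  # number of samples the window must contain
    width, i = min((s[j + k - 1] - s[j], j) for j in range(n - k + 1))
    return s[i], s[i + k - 1]

samples = [x / 100.0 for x in range(101)]  # a uniform grid standing in for draws
print(central_credible_interval(samples, alpha=0.1))  # ~ (0.05, 0.95)
print(hpd_interval(samples, alpha=0.1))
```

For a symmetric, uni-modal posterior the two intervals agree, which matches the Gaussian remark above.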
I am new to the domain of machine learning and I have noticed that there are a lot of algorithms/sets of algorithms that can be used: SVM, decision trees, naive Bayes, perceptron etc... That is why I wonder which algorithm one should use for solving which issue? In other words, which algorithm solves which problem class?

So my question is if you know a good web site or book that focuses on this algorithm selection problematic?

Any help would be appreciated. Thanks in advance.

Horace
It is very hard to answer the question “which algorithm for which issue?”

That ability comes with a lot of experience and knowledge. So I suggest you read a few good books about machine learning. Probably the following book would be a good starting point.
Machine Learning: A Probabilistic Perspective
Once you have some knowledge about machine learning, you can work on a couple of simple machine learning problems. The Iris flower dataset is a good starting point. It consists of several features belonging to three types of Iris species. Initially develop a simple machine learning model (such as logistic regression) to classify Iris species, and gradually you could move to more advanced models such as neural networks.
So in Matlab I perform PCA on handwritten digits. Essentially, I have say 30*30 dimensional pictures, i.e. 900 pixels, and after PCA I consider the components which capture most of the variance, say the first 80 principal components (PCs) based on some threshold. Now these 80 PCs are also 900-dimensional, and when I plot these using imshow, I get some images, like something looking like a 0, 6, 3, 5 etc. What is the interpretation of these first few PCs (out of the 80 I extracted)?
PCA extracts the most important information from the data set and compresses the size of the data set by keeping only the important information - principal components.
The first principal component is constructed in such a way that it has the largest possible variance. The second component is computed under the constraint of being orthogonal to the first component and to have the largest possible variance.
In your case the data is a set of images. Let's say you have 1000 images and you compute the first five principal components (5 images, constructed by the PCA algorithm). You may represent any image as 900 data points (30x30 pixels) or by the combination of 5 images with the corresponding multiplication coefficients.
The goal of the PCA algorithm is to construct these 5 images (principal components) in such a way that the images in your data set are represented most accurately with the combination of the given number of principal components.
UPDATE:
Consider the image below (from the amazing book by Kevin Murphy). The image shows how points in 2 dimensions (red dots) are represented in 1 dimension (green crosses) by projecting them to the vector (purple line). The vector is the first principal component. The purpose of PCA is to build these vectors to minimize the reconstruction error. In your case these vectors can be represented as images.
You may refer to this article for more details on using PCA for handwritten digit recognition. | http://www.dev-books.com/book/book?isbn=0262018020&name=Machine-Learning | CC-MAIN-2017-39 | refinedweb | 1,181 | 53.92 |
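To make the projection picture concrete, here is a small self-contained sketch (plain Python, no libraries; illustrative only, not how you would do it on real image data): it finds the first principal component of a 2-D point cloud by power iteration on the 2x2 covariance matrix, i.e. the direction of the purple line in the figure described above.

```python
def first_pc(points, iters=100):
    """First principal component of 2-D points via power iteration."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # entries of the 2x2 covariance matrix
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    v = (1.0, 1.0)
    for _ in range(iters):  # repeated multiplication converges to the top eigenvector
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# points lying (almost) on the line y = 2x
pts = [(x, 2.0 * x + 0.01 * (-1) ** x) for x in range(10)]
v = first_pc(pts)
print(v[1] / v[0])  # close to 2.0: the component points along the data's main axis
```

Projecting each point onto this vector gives the 1-D representation (the green crosses); for 30x30 images the same idea runs in 900 dimensions, and each eigenvector, reshaped back to 30x30, is one of the "eigen-digit" images you are plotting.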
from __future__ import print_function

grid = [[0, 1, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0],  # 0 are free path whereas 1's are obstacles
        [0, 1, 0, 0, 0, 0],
        [0, 1, 0, 0, 1, 0],
        [0, 0, 0, 0, 1, 0]]

'''
heuristic = [[9, 8, 7, 6, 5, 4],
             [8, 7, 6, 5, 4, 3],
             [7, 6, 5, 4, 3, 2],
             [6, 5, 4, 3, 2, 1],
             [5, 4, 3, 2, 1, 0]]
'''

init = [0, 0]
goal = [len(grid) - 1, len(grid[0]) - 1]  # all coordinates are given in format [y,x]
cost = 1

# the cost map which pushes the path closer to the goal
heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]
for i in range(len(grid)):
    for j in range(len(grid[0])):
        heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])
        if grid[i][j] == 1:
            heuristic[i][j] = 99  # added extra penalty in the heuristic map

# the actions we can take
delta = [[-1, 0],  # go up
         [0, -1],  # go left
         [1, 0],   # go down
         [0, 1]]   # go right

# function to search the path
def search(grid, init, goal, cost, heuristic):
    closed = [[0 for col in range(len(grid[0]))] for row in range(len(grid))]  # the reference grid
    closed[init[0]][init[1]] = 1
    action = [[0 for col in range(len(grid[0]))] for row in range(len(grid))]  # the action grid

    x = init[0]
    y = init[1]
    g = 0
    f = g + heuristic[init[0]][init[1]]
    cell = [[f, g, x, y]]

    found = False   # flag that is set when search is complete
    resign = False  # flag set if we can't find expand

    while not found and not resign:
        if len(cell) == 0:
            resign = True
            return "FAIL"
        else:
            cell.sort()  # to choose the least costly action so as to move closer to the goal
            cell.reverse()
            next = cell.pop()
            x = next[2]
            y = next[3]
            g = next[1]
            f = next[0]

            if x == goal[0] and y == goal[1]:
                found = True
            else:
                for i in range(len(delta)):  # to try out different valid actions
                    x2 = x + delta[i][0]
                    y2 = y + delta[i][1]
                    if x2 >= 0 and x2 < len(grid) and y2 >= 0 and y2 < len(grid[0]):
                        if closed[x2][y2] == 0 and grid[x2][y2] == 0:
                            g2 = g + cost
                            f2 = g2 + heuristic[x2][y2]
                            cell.append([f2, g2, x2, y2])
                            closed[x2][y2] = 1
                            action[x2][y2] = i

    invpath = []
    x = goal[0]
    y = goal[1]
    invpath.append([x, y])  # we get the reverse path from here
    while x != init[0] or y != init[1]:
        x2 = x - delta[action[x][y]][0]
        y2 = y - delta[action[x][y]][1]
        x = x2
        y = y2
        invpath.append([x, y])

    path = []
    for i in range(len(invpath)):
        path.append(invpath[len(invpath) - 1 - i])

    print("ACTION MAP")
    for i in range(len(action)):
        print(action[i])

    return path

a = search(grid, init, goal, cost, heuristic)
for i in range(len(a)):
    print(a[i])
piecewise function with variable within domains
Dear all,
I am trying to define a piecewise function where the domains are parametrized by some variable:
import sage.all as sage

sage.var('x')
sage.var('x0')
sage.var('x1')
sage.assume(0 < x0)
sage.assume(x0 < x1)
sage.assume(x1 < 1)
sage.piecewise([((0, x0), 0), ([x0, x1], 1), ((x1, 1), 1)], var=x)
However, I cannot seem to get it to work:
TypeError: unable to simplify to a real interval approximation
Any idea how to make it work? Thanks!
Martin
a workaround is to define it as a linear combination of heaviside, or step, functions. These special functions are documented here.
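The idea behind that workaround, illustrated with a plain-Python stand-in (unit_step below mimics Sage's step functions; the jump points x0 and x1 stay ordinary parameters):

```python
def unit_step(t):
    """0 for t < 0, 1 for t >= 0."""
    return 1 if t >= 0 else 0

def piecewise(x, x0, x1, a=0, b=1, c=1):
    """a on (-inf, x0), b on [x0, x1), c on [x1, inf),
    written as a linear combination of shifted step functions."""
    return a + (b - a) * unit_step(x - x0) + (c - b) * unit_step(x - x1)

# the question's values: 0 before x0, then 1, then 1
print(piecewise(0.1, x0=0.25, x1=0.75))  # 0
print(piecewise(0.5, x0=0.25, x1=0.75))  # 1
print(piecewise(0.9, x0=0.25, x1=0.75))  # 1
```

In Sage the same combination can be written symbolically, so the assumptions 0 < x0 < x1 < 1 remain usable for simplification.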
The constructor crashes because RLF(x0) makes no sense. It is generally not a good idea to assume that code written for a special purpose can do the same job for a similar mathematical, possibly more general situation. But let's say the constructor would have instantiated something (now with possibly broken methods).

Which is the application of the above? Make "it" work for which purpose? What should be done now with "that f that works"?
After creating a brand new AMI from the latest and greatest FreeBSD, we would be remiss if we did not properly test that it boots and can do a basic build.
This article is continuation of Creating Custom FreeBSD AMIs for Jenkins - Part 1.
This job will be broken down into the following sections:
- Copy the env.txt file to populate the AMI ID with
- Inject the environment variables from the env.txt file
- Test the image to make sure it contains the packages needed and the compiler works
- Promote the AMI ID to the main AMI used for all the other jobs
- Clean up the old AMIs that are not needed anymore
Create a new Freestyle Project job in Jenkins and name it something like new-base-AMI-test.
Copy the env.txt file

Under the Build section of the job select Add build step and choose Copy artifacts from another project.

Set Project name to: new-base-AMI-build/ARCH=${ARCH},BUILD_NODE=aws,REL=freebsdmain

Set Which build to: Latest successful build

Set Artifacts to copy to: env.txt
Inject the environment variables from env.txt

Under the Build section of the job select Add build step and choose Inject environment variables.

Set the Properties File Path to match the last line above (env.txt) and leave the Properties Content section blank.
This will be used to populate the AMI ID in a future step.
Test the image

Under the Build section of the job select Add build step and choose Execute shell.
For some basic testing, check to make sure the necessary packages are installed and the compiler can compile a basic "Hello World" program:
uname -a
pkg info

PKGS="bsdec2-image-upload git jq nginx openjdk12 poudriere-devel rsync"
for p in ${PKGS}; do
    pkg info ${p}
done

echo '#include <stdio.h>' >> test.c
echo 'int main() {' >> test.c
echo '    printf("Hello World!\n");' >> test.c
echo '    return 0;' >> test.c
echo '}' >> test.c
cc -o test test.c
./test
Promote the AMI ID to be used by the other jobs
Under the Build section select Add build step and choose Execute system Groovy script.
This will modify the test AMI configured in the cloud section of Jenkins so make sure the names line up:
import jenkins.model.*;
import hudson.model.*
import hudson.AbortException
import hudson.plugins.ec2.*;

def config = new HashMap()
config.putAll(binding.variables)
def logger = config['out']

def envvars = new HashMap()
envvars.putAll(build.getEnvironment(listener))
def newami = envvars['NEWAMI']
def arch = envvars['ARCH']

Jenkins.instance.clouds.each {
    println('cloud: ' + it.displayName)
    if (it.displayName == 'engineering-aws') {
        it.getTemplates().each {
            if (it.description == 'FreeBSD-main-' + arch) {
                println('description: ' + it.description)
                def oldami = it.getAmi()
                if (oldami == newami) {
                    println("Current AMI: " + oldami + "; new AMI: " + newami)
                    throw new AbortException("AMIs are the same")
                } else {
                    println("Current AMI id: " + oldami)
                    it.setAmi(newami)
                    println("New AMI: " + it.getAmi())
                }
            }
        }
    }
}
Jenkins.instance.save()
Clean up old AMIs
Since AWS charges for every little bit, and to keep a clean list of images, remove any old AMIs that are not needed anymore.
Under the Build section choose Add build step and select the type to be Execute shell.
export AWS_DEFAULT_REGION=us-east-2
export AWS_DEFAULT_OUTPUT=json
env

if [ "${ARCH}" = "amd64" ]; then
    T=x86_64
else
    T=arm64
fi

# deregister old AMIs
for ami in $(aws ec2 describe-images --owners self --filters Name=architecture,Values=${T} | jq '.Images[].ImageId' | sed -e 's/"//g'); do
    [ "${ami}" = "${NEWAMI}" ] && continue
    # Check name matches
    NAME=$(aws ec2 describe-images --image-ids ${ami} | jq '.Images[].Name' | sed -e 's/"//g')
    # Strip down "FreeBSD main-aarch64-24" to "main-aarch64"
    NAME=${NAME##FreeBSD }
    NAME=${NAME%-*}
    [ "${NAME}" != "main-${ARCH}" ] && continue
    SNAP=$(aws ec2 describe-images --image-ids ${ami} | jq '.Images[].BlockDeviceMappings[] | select(.DeviceName == "/dev/sda1") | .Ebs.SnapshotId' | sed -e 's/"//g')
    echo "Removing AMI: ${ami}; Name: ${NAME}; Snap: ${SNAP}"
    aws ec2 deregister-image --image-id ${ami}
    # delete its snap
    sleep 1
    aws ec2 delete-snapshot --snapshot-id ${SNAP}
done
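The name-matching logic in the loop above (strip the FreeBSD prefix, drop the trailing build number, compare against main-${ARCH}) is easy to get wrong in shell parameter expansion, so here is the same filter sketched in Python for clarity (function name and sample AMI names are made up):

```python
def matches_arch(image_name, arch):
    """True if an AMI name like 'FreeBSD main-aarch64-24' is a 'main-<arch>' build."""
    name = image_name
    if name.startswith("FreeBSD "):
        name = name[len("FreeBSD "):]  # same as NAME=${NAME##FreeBSD }
    name = name.rsplit("-", 1)[0]      # same as NAME=${NAME%-*}
    return name == "main-" + arch

print(matches_arch("FreeBSD main-aarch64-24", "aarch64"))  # True
print(matches_arch("FreeBSD 13.0-amd64-7", "aarch64"))     # False
```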
Configure the build job to trigger this job
In the new-base-AMI-build job add a build trigger to trigger the new-base-AMI-test job whenever the build is successful.
Hush
Hush is designed to help developers configure their applications at runtime and in release mode, retrieving configuration from multiple providers, without having to depend on secret files or hardcoded configuration.
Documentation can be found at.
Overview
Hush can be used to inject configuration that is not known at compile time, such as environmental variables (e.g.: Heroku's PORT env var), sensitive credentials such as your database password, or any other information you need.
# config/prod.exs
alias Hush.Provider.{AwsSecretsManager, GcpSecretManager, SystemEnvironment}

config :app, Web.Endpoint,
  http: [port: {:hush, SystemEnvironment, "PORT", [cast: :integer]}]

config :app, App,
  cdn_url: {:hush, GcpSecretManager, "CDN_DOMAIN", [apply: &{:ok, "https://" <> &1}]}

config :app, App.RedshiftRepo,
  password: {:hush, AwsSecretsManager, "REDSHIFT_PASSWORD"}
Hush resolves configuration using providers. It ships with a SystemEnvironment provider which reads environmental variables, but multiple providers exist. You can also write your own easily.
Installation
Add hush to your list of dependencies in mix.exs:
def deps do
  [
    {:hush, "~> 0.5.0"}
  ]
end
Run mix deps.get to install it.
Some providers may need to initialize applications to function correctly. The providers will be explicit about whether they need to be loaded at startup or not. GcpSecretManager, unlike SystemEnvironment, is one such example. To load the provider you need to configure it like so. Note: SystemEnvironment does not need to be loaded at startup.
# config/config.exs
alias Hush.Provider.GcpSecretManager

config :hush,
  providers: [
    GcpSecretManager
  ]
Usage
Hush can be loaded in two ways, at runtime in your application, or as a Config.Provider in release mode. A sample app has been written so you can see how it's configured.
# application.ex
def start(_type, _args) do
  Hush.resolve!()
end
To load hush as a config provider, you need to define it in your releases in mix.exs.
def project do
  [
    # ...
    releases: [
      app: [
        config_providers: [{Hush.ConfigProvider, nil}]
      ]
    ]
  ]
end
If you are using Hush in both runtime and release mode, make sure to only resolve configuration when not in release mode:
# application.ex
def start(_, _) do
  unless Hush.release_mode?(), do: Hush.resolve!()
end
Configuration format
Hush will resolve any tuple in the following format into a value.
{:hush, Hush.Provider, "key", options \\ []}
- Hush.Provider can be any module that implements its behaviour.
- "key" is passed to the provider to retrieve the data.
- options is a Keyword list with the following properties:
  - default: any() - If the provider can't find the value, hush will return this value.
  - optional: boolean() - By default, Hush will raise an error if it cannot find a value and there's no default, unless you mark it as optional.
  - apply: fun(any()) :: {:ok, any()} | {:error, String.t()} - Apply a function to the value resolved by Hush.
  - cast: :string | :atom | :charlist | :float | :integer | :boolean | :module - You can ask Hush to cast the value to an Elixir native type.
  - to_file: string() - Write the data to the path given in to_file and return the path.
After Hush resolves a value, it runs it through Transformers.
Examples
By default, if a given key is not found by the provider, Hush will raise an error. To prevent this, provide a default or optional: true in the options component of the tuple.
Default
# config/config.exs
alias Hush.Provider.SystemEnvironment

config :app, url: {:hush, SystemEnvironment, "HOST", default: "example.domain"}

# result without environmental variable
assert "example.domain" == Application.get_env(:app, :url)

# result with env HOST=production.domain
assert "production.domain" == Application.get_env(:app, :url)
Casting
Here we are reading the PORT environmental variable, casting it to an integer and returning it:
# config/config.exs
alias Hush.Provider.SystemEnvironment

config :app, port: {:hush, SystemEnvironment, "PORT", cast: :integer, default: 4000}

# result without environmental variable
assert 4000 == Application.get_env(:app, :port)

# result with env PORT=443
assert 443 == Application.get_env(:app, :port)
Optional
# config/dev.exs
alias Hush.Provider.SystemEnvironment

config :app, can_be_nil: {:hush, SystemEnvironment, "KEY", optional: true}

# result without environmental variable
assert nil == Application.get_env(:app, :can_be_nil)

# result with env KEY="is not nil"
assert "is not nil" == Application.get_env(:app, :can_be_nil)
Transformers
By default Hush ships with the following transformers:

- Hush.Transformer.Cast: Takes an argument cast and converts a value into a specific type.
- Hush.Transformer.ToFile: Takes an argument to_file and outputs the value into the path provided.
It is possible to add more transformers with the following configuration:
# config/prod.exs
alias Hush.Provider.SystemEnvironment

config :hush,
  transformers: [
    App.Hush.JsonToMapTransformer
  ]

config :app,
  allowed_urls: {:hush, SystemEnvironment, "allowed_urls", [json: true]}
It is also possible to override the transformers Hush will process, and the order they execute in. See below for more information.
Writing your own transformer

The currently shipped transformers are good examples of how to implement transformers.
Transformers are executed in the order they are defined; first is Cast, next is ToFile, and then the ones configured by you, e.g.:
# config/prod.exs
config :hush,
  transformers: [
    App.Hush.JsonTransformer
  ]
Let's dissect a transformer as an example. A transformer has to implement the Hush.Transformer behaviour, and as such it has to implement the key/0 and transform/2 functions.
A transformer is going to be executed if a configuration tuple requests it by passing the value of key/0 into its options. An example is seeing the json parameter being passed into the value configuration. Hush will process any transformers whose key/0 function returns :json.
Once a configuration tuple requests a transformer, the function transform/2 is called, where the first argument is what is passed as the value of key/0 (in the example below it would be :abort_on_failure), and the second argument is the current value returned by the provider, transformed by any previous transformers.
# config/prod.exs
config :app,
  value: {:hush, SystemEnvironment, "key", [json: :abort_on_failure]}
# lib/app/hush/JsonTransformer.ex
defmodule App.Hush.JsonTransformer do
  @behaviour Hush.Transformer

  @impl true
  @spec key() :: :json
  def key(), do: :json

  @impl true
  @spec transform(config :: any(), value :: any()) :: {:ok, any()} | {:error, String.t()}
  def transform(config, value) do
    try do
      {:ok, Jason.decode!(value)}
    rescue
      error ->
        case config do
          :abort_on_failure ->
            {:error, "Couldn't convert #{value} to json: #{error.message}"}

          _ ->
            {:ok, nil}
        end
    end
  end
end
Overriding Transformers
The following example would take a value passed as an environment variable ALLOWED_URLS='[""]' into a file named /tmp/urls.json with the contents [""], all due to the order in which the transformers are executed and the fact that override_transformers is true.
# config/prod.exs
config :hush,
  override_transformers: true,
  transformers: [
    Hush.Transformer.Cast,
    App.Hush.HttpToHttpsTransformer,
    App.Hush.JsonTransformer,
    Hush.Transformer.ToFile
  ]

config :app,
  value: {:hush, SystemEnvironment, "ALLOWED_URLS", [
    http_to_https: true,
    json: true,
    to_file: "/tmp/urls.json"
  ]}
# lib/app/hush/HttpToHttpsTransformer.ex
defmodule App.Hush.HttpToHttpsTransformer do
  @behaviour Hush.Transformer

  @impl true
  @spec key() :: :http_to_https
  def key(), do: :http_to_https

  @impl true
  @spec transform(config :: any(), value :: any()) :: {:ok, any()} | {:error, String.t()}
  def transform(_config, value) do
    {:ok, Enum.map(value, &http_to_https/1)}
  end

  def http_to_https(value) do
    Regex.replace(~r/^http:/, value, "https:")
  end
end
Providers
Writing your own provider
An example provider is Hush.Provider.SystemEnvironment, which reads environmental variables at runtime. Here's an example of how that provider would look in an app configuration.
alias Hush.Provider.SystemEnvironment

config :app, Web.Endpoint,
  http: [port: {:hush, SystemEnvironment, "PORT", [cast: :integer, default: 4000]}]
This behaviour expects two functions:
load(config :: Keyword.t()) :: :ok | {:error, any()}
This function is called at startup time, here you can perform any initialization you need, such as loading applications that you depend on.
fetch(key :: String.t()) :: {:ok, String.t()} | {:error, :not_found} | {:error, any()}
This function is called when hush is resolving a key with your provider. Ensure that you return {:error, :not_found} if the value can't be found, as hush will replace it with a default one if the user provided one. Note: all values are required by default, so if the user did not supply a default or mark the value as optional, hush will trigger the error; you don't need to handle that use-case.
To implement that provider we can use the following code.
defmodule Hush.Provider.SystemEnvironment do
  @moduledoc """
  Provider to resolve runtime environmental variables
  """

  @behaviour Hush.Provider

  @impl Hush.Provider
  @spec load(config :: Keyword.t()) :: :ok | {:error, any()}
  def load(_config), do: :ok

  @impl Hush.Provider
  @spec fetch(key :: String.t()) :: {:ok, String.t()} | {:error, :not_found}
  def fetch(key) do
    case System.get_env(key) do
      nil -> {:error, :not_found}
      value -> {:ok, value}
    end
  end
end
License
Hush is released under the Apache License 2.0 - see the LICENSE file. | https://hexdocs.pm/hush/readme.html | CC-MAIN-2021-25 | refinedweb | 1,402 | 51.14 |
Red Hat Bugzilla – Bug 547000
openlog(xx, 0, LOG_KERN) acts like openlog(xx, 0, LOG_USER)
Last modified: 2010-02-15 02:39:13 EST
This test program LOG_KERN.c:
#include <syslog.h>
int main(int argc, char **argv)
{
openlog("log_kern_test", 0, LOG_KERN);
syslog(LOG_NOTICE, "test test test");
return 0;
}
results in "test test test" being logged with LOG_USER facility.
strace shows:
send(3, "<13>Dec 13 05:21:37 log_kern_tes"..., 49, MSG_NOSIGNAL) = 49
"<13>" is wrong, it should have "<5>" there.
It happens because LOG_KERN is zero, and zero is used as "use default facility" by openlog_internal code:
if (logfac != 0 && (logfac &~ LOG_FACMASK) == 0)
LogFacility = logfac;
This is used in __vsyslog_chk to use default facility:
/* Get connected, output the message to the local logger. */
if (!connected)
openlog_internal(LogTag, LogStat | LOG_NDELAY, 0);
One possible fix is to drop the "logfac != 0" check in the first code fragment and use explicit LOG_USER instead of 0 in that call to openlog_internal.
Man page of openlog states:
LOG_KERN kernel messages (these can't be generated from user processes)
...but there is no way to prevent it. Any process can open a socket to "/dev/log" and write a string "<0>Voila, a LOG_KERN + LOG_EMERG message!" there.

Making openlog(xx, xx, LOG_KERN) intentionally broken does not help one iota in preventing this sort of "attack". It only makes writing legitimate code (e.g. klogd) harder.
As far as I see, "man openlog" does not state that "facility" argument in openlog() call can be set to 0, or that LOG_KERN is prohibited. As I see it, programmers should not pass 0 there since such usage is not documented, they should pass LOG_USER if that's the facility they want.
To forestall future improper usage, it makes sense to amend "(default)" at LOG_USER in manpage. Now it looks like this:
LOG_USER (default)
generic user-level messages
How about "(default if openlog was not called explicitly)" ?
Hello,

please could you clarify why this is not a glibc bug and this is expected behaviour, and attach here the patch for man-pages which will fix the issue.

Andreas, could you please clarify what the right behaviour is and describe here the man-page change fix?

Hello, there was no reply on whether the behavior is wanted and the man page should be fixed or not. So I'm closing the bug (insufficient_data).
NAME
Linux::Unshare - Perl interface for Linux unshare system call.
SYNOPSIS
use Linux::Unshare qw(unshare :clone);

# as root ...
unshare(CLONE_NEWNS);  # now your mounts will become private
unshare(CLONE_NEWNET); # get a separate network namespace
DESCRIPTION
This trivial module provides interface to the Linux unshare system call. It also provides the CLONE_* constants that are used to specify which kind of unsharing must be performed. Note that some of these are still not implemented in the Linux kernel, and others are still experimental.
The unshare system call allows a process to 'unshare' part of the process context which was originally shared using clone(2).
SEE ALSO
unshare(2) Linux man page.
AUTHOR
Boris Sukholitko, <boriss@gmail.com>
Marian Marinov, <hackman@cpan.org>
Copyright (C) 2009 by Boris Sukholitko
Copyright (C) 2014-2017 by Marian Marinov
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.10.0 or, at your option, any later version of Perl 5 you may have available. | https://metacpan.org/pod/Linux::Unshare | CC-MAIN-2018-43 | refinedweb | 174 | 56.76 |
I wonder if there is any possibility to add the win32api extensions (pywin32) to Sublime Text. My plugin sends text lines via TCP to a localhost server process which then processes the data and returns the result back to ST3. All this works fine. I would like to check if the server process is running; if not, my plugin would start the server before sending the first lines. For this I need win32api, but there really is no information on how to extend the Python distribution which comes built-in with ST3. Is there any easy (or hard) way to do this?
I already tried to copy the pywin32 directories to Sublime Text's python3.3.zip with no luck. I also tried to copy the pywin32 folder to the "C:\Users...\AppData\Roaming\Sublime Text 3\Packages" folder but it did not work either.
Why do you need win32api? Why not just create a Windows batch file that starts your server and call that from your plugin?
My solution for the equivalent on OS X (using PyObjC from inside Sublime) was to inline the Python code required as a multiline string and run it as a subprocess.
It's a tad ugly, but gets the job done: github.com/lunixbochs/AppleScri ... escript.py
Here's a standalone example (python.exe needs to be in your path):
import os
import subprocess

script = '''
import sys
print 'hi'
print sys.stdin.read()
'''

info = None
if os.name == 'nt':
    info = subprocess.STARTUPINFO()
    info.dwFlags |= subprocess.STARTF_USESHOWWINDOW
    info.wShowWindow = subprocess.SW_HIDE

p = subprocess.Popen(
    ['python', '-c', script], stdin=subprocess.PIPE,
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    startupinfo=info,
)
print('process output:', p.communicate('example input'))
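For what it's worth, if the only reason for pywin32 is checking whether the localhost server is up, a plain stdlib TCP probe works from ST3's bundled Python with no extensions at all (host and port below are placeholders for your server's):

```python
import socket

def server_running(host="127.0.0.1", port=9999, timeout=0.5):
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. in the plugin, before sending the first lines:
# if not server_running():
#     subprocess.Popen([...])  # start the server however you normally would
```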
Executing second window causes program to get stuck
Hey guys,
Some of this is a generic C++ problem but some of it also seems to be Qt. Basically I have the MainWindow that is there by default and I created a second window with its own class called "ProcDialog". I want this window to pop-up when a button is pressed in the file menu of my MainWindow, which I accomplished. This is how I have the window popup:
#include "procdialog.h" ... void MainWindow::on_actionStart_Calibration_triggered() { ProcDialog CalibPD; CalibPD.setModal(true); CalibPD.exec(); ProcDialog::fnStatusWindowUpdate(); }
As you can see, under the ProcDialog class I have a function called "fnStatusWindowUpdate", declared as static in the public section of the class. The problem is that I am trying to use this second window as a processing status window that reports progress as the program continues, but the program keeps getting stuck on the line where the second window is created. I want the program to pop up the second window, and then immediately run that function. But what happens is the window pops up with "CalibPD.exec();" and the line right after it (the function) does NOT run until I close the second window that popped up. I am not entirely sure what happened because this wasn't a problem before when I first created the window and I don't remember changing anything that would cause this behavior.
The function itself seems to be OK. This is what it does (for now I just have it with testing filler; it will do more later):
void ProcDialog::fnStatusWindowUpdate()
{
    qDebug() << "Working";
}
Once the second window is closed "Working" appears in the Application Output tab. So what I need to know (if anyone can figure this thing out) is how to have it so after the window pops up the program just keeps running as normal. I don't know why it gets stuck on " CalibPD.exec();" and requires the window to be closed to continue.
Thank you for your time.
EDIT:
I tried using "CalibPD.show();" instead which does allow the program to continue but then the second window appears entirely blank with nothing that I added appearing on it.
@oblivioncth said:
Hi
CalibPD.exec(); is blocking.
It's an event loop, so it stays in there (it must).
You can use show() to avoid this.
BUT your code has a classic bug if you use show:
void MainWindow::on_actionStart_Calibration_triggered()
{
ProcDialog CalibPD;
CalibPD.setModal(true);
CalibPD.show();
ProcDialog::fnStatusWindowUpdate();
} <<< here CalibPD is deleted ..
so to use show() you must new it:
void MainWindow::on_actionStart_Calibration_triggered()
{
ProcDialog *CalibPD= new ProcDialog(this);
CalibPD->show();
ProcDialog::fnStatusWindowUpdate();
}
make sure to delete it again;
you can set
CalibPD->setAttribute( Qt::WA_DeleteOnClose, true );
to make Qt delete it when closed.
Works flawlessly. I had seen on many other posts people talking about making sure objects aren't destroyed before you use them again, but I didn't have the experience here to see that that was the problem.
You're a life saver. Thanks again mrjj.
@oblivioncth
You are most welcome.
And no worries, you will do the show() trick from time to time. I do :)
Hey, sorry to be a bug, but it seems i have another problem. This is now my function:
void MainWindow::on_actionStart_Calibration_triggered()
{
    ProcDialog *CalibPD = new ProcDialog(this);
    CalibPD->setModal(true);
    CalibPD->setAttribute(Qt::WA_DeleteOnClose, true);
    CalibPD->show();
    ProcDialog::fnStatusWindowUpdate();
    AR_Main();
}
I now have the additional function call "AR_Main();". This function is in another source file, so I have #include "AR_Main.h" at the top of this file.
The issue is that the contents of the ProcDialog pop-up window do stick around when ProcDialog::fnStatusWindowUpdate(); is called, but are destroyed (the window goes blank) once AR_Main(); gets called. If I comment AR_Main(); out, the window contents are NOT destroyed, so it seems to be a particular issue with calling AR_Main(); and not just with the function MainWindow::on_actionStart_Calibration_triggered() ending.
@oblivioncth said:
hi, np
AR_Main();
so what does this function do?
There should be no reason that CalibPD should be affected.
(the window goes blank)
It sounds to me like perhaps you have an endless loop
in AR_Main()?
It is a giant function that does a lot of image processing with OpenCV, so it's hard to summarize.
I did figure out a few more things, though, that might help.
So when the program hits the line:
CalibPD->show();
The window pops up and is blank. I determined this by setting a breakpoint just before it. When I say blank, I just mean it's a window with the header bar at the top, but the body is just white with none of the window objects I set up. Therefore, the window is still obviously blank while AR_Main is running. I tried stepping through the program to determine when the contents of the window actually appear, but stepping past everything only got me into the disassembler and then finally to:
int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();
    return a.exec(); // <- Won't go past this
}
and it won't let me step past "return a.exec();". The window contents only show up when I click "continue" in the debugger, so I am not sure what is actually causing them to show up. Maybe some kind of update routine is called when the program idles?
I do remember reading under the documentation for "show" under QDialog that using that method would require some kind of update function to be called now and then. Could that have something to do with it?
Basically it seems that the window objects show up sometime after the program idles, and I need them to appear and be interactable by the time "CalibPD->show();" is called.
If it helps I could perhaps make a quick video demonstrating the issue.
@oblivioncth said:
return a.exec();
This is the application main loop, or event loop. It runs until the application is quit. Think of it as Qt's main function, allowing signals and events to work.
If CalibPD->show(); does display as expected when you don't call AR_Main(), I assume AR_Main() loops / uses lots of CPU. And since it's called right while CalibPD is still displaying, I assume CalibPD never really got the needed time to fully "paint".
try to insert
qApp->processEvents();
Right after CalibPD->show();
Just to clarify, I was wrong earlier. I thought that it worked without AR_Main being called, but really what was happening is that without AR_Main the program would run through and the contents would eventually be painted. The reason it wasn't working with AR_Main is that there is a line in AR_Main that freezes the program (it was a press-enter-to-continue prompt; I am porting a console app over to this GUI). So when AR_Main was called, the program never got to a point where Qt would paint the window, but without calling it the window would be painted because the wait-for-enter prompt never happened. So the issue was actually there all along.
Regardless, adding that command did allow the screen to be painted even while AR_Main was running. I may have to use that command every time I want to update the text edit box I have on that window, but that is OK. I am going to leave this open for the moment because I have a feeling in my gut I might run into another small issue or two while manipulating this window, and I want to have it fully working before I close the thread again haha. I only have a couple more things to add, so it shouldn't take too long to find out.
You did solve my immediate problem so thanks again. I think that event update command is what I was referring to in my last post.
@oblivioncth
Hi
ok, so it was not a heavy loop but an input. :)
When you go from a console app to an event-based GUI, it's important NOT to do
large loops/massive calculations, as then the whole GUI will appear frozen
(a.exec(); is never allowed to run).
You can indeed often fix this with
qApp->processEvents(); to allow Qt to function, but it's not a fix for all issues.
If AR_Main will do heavy stuff, you might need to put it in its own thread to keep your
mainwindow/dialogs responsive.
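A language-agnostic way to picture mrjj's point (sketched here in Python purely as a model — none of these names are Qt API): the repaint request just sits in a queue, and a heavy computation starves it unless that computation periodically drains the queue, which is essentially what processEvents() does.

```python
from queue import Queue

def heavy_calibration(process_events, steps=3):
    # Stand-in for AR_Main(): a long computation that never returns to
    # the event loop unless it explicitly yields via process_events().
    for _ in range(steps):
        pass              # one slice of heavy work (image processing, etc.)
        process_events()  # the qApp->processEvents() call

def run():
    pending = Queue()
    pending.put("paint ProcDialog")  # repaint the toolkit queued after show()
    handled = []

    def process_events():
        while not pending.empty():
            handled.append(pending.get())

    heavy_calibration(process_events)
    return handled

print(run())  # ['paint ProcDialog']
```

Remove the process_events() call inside the loop and the paint request is never handled — the toy equivalent of the blank dialog.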
@oblivioncth
Well, just keep in mind that if you make huge loops, the main window and dialogs will stop looking alive :)
I have definitely noticed that while debugging haha. OK, so while you definitely solved the issue of the window freezing, the reason I had some of these function calls in the first place was part of a larger problem. I currently have these two windows, and each of them is in its own class like I previously described. However, the issue is that manipulating these windows usually uses the ui->UIMEMBER->UIFUNCTION method, but ui's scope is only within the .cpp file of each window respectively. ui for MainWindow is different than ui in ProcDialog (obviously). I need to be able to use these functions on these members from .cpp files other than ProcDialog.cpp and MainWindow.cpp.
The reason for a lot of what you just helped me with is that I basically need to run:
ui->pteCalibProcWindow->appendPlainText(qsMesg); //Where qsMesg is a QString
while within AR_Main.cpp. Obviously AR_Main.cpp doesn't know what "ui" is, so the way I attacked this problem was with what you were helping me with. The approach was what I am used to doing for situations like this, where you prototype a function in a header file and then include that header in the other source file where you want to use that function. So what I did was make the function:
void ProcDialog::fnStatusWindowUpdate(QString qsMesg)
{
    ui->PshBtnCalibFinish->setEnabled(true);
}
within ProcDialog, and I prototyped it like so:
public:
    ...
    void fnStatusWindowUpdate(QString qsMesg);
In other projects I could then just call the function in any .cpp that I included "procdialog.h" in, but since the function is within a class I thought that just calling it via "ProcDialog::fnStatusWindowUpdate()" would work. But while I figured that should do the trick (I've made functions in other source files available this way in C before, when not dealing with classes or the -> method), I ran into a paradoxical problem. Just doing the above, the compiler throws an "illegal call of a non-static member function" when I try to call fnStatusWindowUpdate() in AR_Main. But if I make the function static, that error goes away, but then the compiler says:
error: C2227: left of '->PshBtnCalibFinish' must point to class/struct/union/generic type
error: C2227: left of '->setEnabled' must point to class/struct/union/generic type
While I don't fully understand the -> architecture it is clear that making a function static interferes with using the -> reference. So it seems like the way I approached the issue won't work.
Is there any simple way to manipulate my QPlainTextEdit that is on my ProcDialog window from AR_main.cpp?
I know that this is something that is not Qt specific, but I have never had to do any C++ coding that required this complex a program structure until using Qt.
hi
- "ProcDialog::fnStatusWindowUpdate()" would work.
To call a function like that requires it to be a static function, and that's not normally needed except for certain use cases.
So that is where the "illegal call of a non-static member function" error comes from.
To call a function defined in a class, you need an instance of that class
Myclass a; << an instance.
a.print();
or
Myclass *a=new Myclass;
a->print();
notice that the non-new syntax uses "." and with a pointer we use "->":
.print() vs ->print()
- Is there any simple way to manipulate my QPlainTextEdit that is on my ProcDialog window from AR_main.cpp?
Is there anything Qt in AR_main.cpp?
Normally you would use signals and slots for such a task, but if all of AR_main.cpp is "foreign" then you cannot easily send signals.
You could give AR_main a pointer to your dialog:
AR_main( dialogpointer )
and AR_main could call public functions on this instance.
Not pretty, but I fear it will not be your biggest issue :)
There is something magic about forums. I swore I tried something just like what you were saying before and it didn't work lol, but now it did! I must have had a thing or two wrong.
I first just tried to create an arbitrary instance of ProcDialog and use that to call the function. It would run, but the QPlainTextEdit wouldn't show anything. So then I tried changing AR_Main to be:
int AR_Main(ProcDialog *TEST)
and when I called it from mainwindow.cpp I did:
AR_Main(CalibPD);
Then finally, in AR_Main I wrote:
TEST->fnStatusWindowUpdate("Sample string");
and it did exactly what I wanted.
Thank you for being patient with me and dealing with the fact I have some holes in my knowledge of C++. I can't say this will happen here, but I started off just as leechy on another forum and eventually became a well-known moderator lol. So trust me, I am learning this stuff as people help me with it. It just takes some time :). I try to make up for the fact I keep asking questions by at least making my posts clear and well formatted, so that they don't just sound like "Helps givez me codes".
Cheers.
@oblivioncth said:
Ok, it's shaping up, super :)
"Helps givez me codes".
We do help those too, but good questions often get better answers.
Also, it's clear that you do try to fix it first yourself, so some holes in C++ are no issue.
Cheers | https://forum.qt.io/topic/65552/executing-second-window-causes-program-to-get-stuck | CC-MAIN-2021-21 | refinedweb | 2,324 | 70.43 |
Opened 7 weeks ago
Closed 7 weeks ago
Last modified 7 weeks ago
#31592 closed Bug (invalid)
#31592 closed Bug (invalid)
Reverting Django 3.1 to Django 3.0.6 raises "binascii.Error: Incorrect padding".
Description
I upgraded to Django 3.1a1 and ran migrations. I also created my own migration because of the change from from django.contrib.postgres.fields import JSONField to from django.db.models import JSONField. My own migration was generated automatically:
# Generated by Django 3.1a1 on 2020-05-14 16:25
from django.db import migrations, models
import speedy.match.accounts.models


class Migration(migrations.Migration):

    dependencies = [
        ('match_accounts', '0006_auto_20200121_1731'),
    ]

    operations = [
        migrations.AlterField(
            model_name='siteprofile',
            name='diet_match',
            field=models.JSONField(default=speedy.match.accounts.models.SiteProfile.diet_match_default, verbose_name='Diet match'),
        ),
        migrations.AlterField(
            model_name='siteprofile',
            name='relationship_status_match',
            field=models.JSONField(default=speedy.match.accounts.models.SiteProfile.relationship_status_match_default, verbose_name='Relationship status match'),
        ),
        migrations.AlterField(
            model_name='siteprofile',
            name='smoking_status_match',
            field=models.JSONField(default=speedy.match.accounts.models.SiteProfile.smoking_status_match_default, verbose_name='Smoking status match'),
        ),
    ]
Most things worked with Django 3.1a1, except a few that I will report later. But when trying to use Django 3.0.6 again, I ran the following commands:
./manage_all_sites.sh migrate auth 0011
./manage_all_sites.sh migrate match_accounts 0006
Then, after running my websites with Django 3.0.6, I received an error message:
Incorrect padding
I can't run my websites locally without deleting the database completely, including all users. I didn't find any way to fix the database so it runs with Django 3.0.6.
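The exception named in the ticket title comes from Python's base64 layer: Django stores session data base64-encoded, and a payload written by a newer session engine may simply not decode under the older one. A minimal stdlib illustration of the same error (the string here is just an arbitrary bad payload, not a real session value):

```python
import base64
import binascii

try:
    base64.b64decode("abc")  # length not a multiple of 4
except binascii.Error as e:
    print(e)  # Incorrect padding
```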
Change History (5)
comment:1 Changed 7 weeks ago by
comment:2 Changed 7 weeks ago by
comment:3 follow-up: 5 Changed 7 weeks ago by
Does this error also occur if I migrate forward (from 3.0 to 3.1) or only backwards?
Anyway, I think Django should be able to delete the relevant cache when migrating (backwards or forward) and not just raise exceptions which don't make sense to the users. Maybe just check the version of the session objects and if it's not correct then delete them.
comment:4 Changed 7 weeks ago by
Also, does it mean migrating from Django 3.0 to 3.1 will log out all the users? I use persistent cookies for 30 years and I don't want to log out users who didn't log out or delete their cookies.
comment:5 Changed 7 weeks ago by
Does this error also occur if I migrate forward (from 3.0 to 3.1) ...?
No, support for user sessions that use the old hashing algorithm remains until Django 4.0, see release notes.
Thanks for this ticket. This error is not related to migrations but to sessions and the change of hashing algorithm from SHA-1 to SHA-256:
To fix this you need to remove sessions from cache, e.g. | https://code.djangoproject.com/ticket/31592 | CC-MAIN-2020-29 | refinedweb | 477 | 53.68 |
Following my previous article, we are going to continue with an introduction to React Native. Assuming you have everything set up and ready, and you have created your first project and have seen “Hello world!” on the screen, what’s next?
Well… We have been instructed to open App.js and made our modifications. So, let’s look into the project folder.
Before we continue I would like to mention that for this lesson I am using create-react-native-app, therefore if you have created your project using react-native-cli or ignite-cli, project folder will differ from what I will be referring to.
There it is: App.js. And as much as it is tempting to open it, let's first see what we can learn from README.md.
If we want to keep our project up to date we will need to update our dependencies, which should be quite a simple task. But beware of dependency compatibility. Create React Native App relies on 3 dependencies: react-native, react and expo, each being compatible with only a narrow range of versions of the other two.
Not to forget, Expo also needs to have a specific SDK version, which can be set in the app.json file. Besides specifying the SDK version, app.json can help us name our app, give it an icon, version it, specify supported platforms, build details, and set many other settings.
Moving along we will see that we have 5 predefined scripts:
npm start or yarn start — runs the app in development mode. If you need to clear the cache, run npm start with the --reset-cache flag.
npm test — runs tests. The project is set up to use jest, but you are free to change it to your liking. If you decide to continue with jest, all you need to do is create files inside a __tests__ folder or with a .test extension, and your tests will be ready to run.
npm run ios — runs the project on the iOS Simulator. The same can be achieved with npm start, since one of the available options there is to run the app on the iOS Simulator as well.
npm run android — runs the app on an Android device or emulator. Keep in mind that you will need a device connected or an emulator started before running this script. As with the previous script, the app can be run on Android with npm start as well.
npm run eject — if you find yourself in need of adding native code, do not worry: running this script will put you on the same track as if you had created your app with react-native-cli. Just make sure that you have react-native-cli installed globally.
If you are unlucky, and things don’t run smoothly on the first try, there is a list of troubleshooting suggestions related to networking, iOS Simulator not running, and QR Code not scanning.
We have prolonged opening App.js enough. So let's check it out.
What do we have here?
First of all, we can notice that the code is written using ES6. I am assuming that we are all familiar with it to some degree. For those who want to learn more, please check out this link.
React Native comes with ES6 support, so there is no need to worry about compatibility. To show you how simple it is to write code using ES6, here is a link to the code after it is compiled.
Moving along we can see
<Text>Lorem Ipsum</Text>, which might be unusual to some of us. This is JSX, React feature for embedding XMLwithin JavaScript. Contrary to other framework languages that let you embed your code inside markup language React lets us write markup language inside the code. This is one of the reasons why we have React as one of our dependencies.
React
We need to be familiar with React if we want to write React Native, but if you are not familiar, you will still be able to follow along. That being said, we will look into two types of data that control a React component and two different types of components.
import React, { Component } from 'react';
import { Text, View } from 'react-native';

const Greeting = ({name}) => <Text>Hello {name}!</Text>;

export default class GreetPeople extends Component {
  constructor(props) {
    super(props);
    this.state = { doeName: 'Doe' };
  }

  render() {
    const janesName = 'Jane';
    return (
      <View>
        <Greeting name={janesName} />
        <Greeting name={this.state.doeName} />
      </View>
    );
  }
}
The code above is all we need, so let’s break it down.
Functional Components
Also known as presentational or stateless components. They are dump components that should not contain any complex logic and should be used for displaying a specific part of context or as wrappers to apply specific styles.
const Greeting = (props) => {
  return (
    <Text>Hello {props.name}!</Text>
  )
};

----- or after shortening the code -----

const Greeting = ({name}) => <Text>Hello {name}!</Text>;
Class Components
More complex components that should deal with all necessary logic for displaying the content. React has something called component lifecycle and those should be handled inside class components. One of the lifecycle examples is the constructor. Some of the others are componentDidMount, componentWillUnmonut, and others. We will cover them as we go.
export default class GreetPeople extends Component {
  constructor(props) {
    super(props);
    this.state = { doeName: 'Doe' };
  }

  render() {
    const janesName = 'Jane';
    return (
      <View>
        <Greeting name={janesName} />
        <Greeting name={this.state.doeName} />
      </View>
    );
  }
}
I would like to note that we do not need to have a constructor in every class component.
Props
Props are used for customising components. In our case the <Greeting /> component has a prop called name, and as we have seen we can pass it a string directly, or we can pass it a constant inside curly braces.
render() {
  const janesName = 'Jane';
  return (
    <View>
      <Greeting name={janesName} />
    </View>
  );
}
If we are passing a prop, let's say fullName, to a class component, we will access it with this.props.fullName, while in a functional component we will use props.fullName (we omit the this keyword).
State
While props are passed from parent to child, the state is what we are going to use within the component itself. We can initialise the state in the constructor, but if we want to change it we should use the this.setState() function.
It accepts two parameters: an object that will update the state, and a callback function that will execute once the state is updated. We should not worry about re-rendering the component, since it will re-render every time the state or the props of the component change.
constructor(props) {
  super(props);
  this.state = { doeName: 'Doe' };
}

componentDidMount(){
  setTimeout(
    () => this.setState({doeName: 'Alex'}),
    3000
  )
}
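A plain-JavaScript toy model of those two parameters (nothing React-specific here — the names are invented for illustration): the update object is shallow-merged into the state, and the callback fires once the merge is done. In real React the callback runs after the asynchronous re-render, so this captures only the shape of the API, not its timing.

```javascript
// Toy model of this.setState(update, callback): shallow-merge the
// partial update into state, then invoke the callback.
function makeComponent(initialState) {
  const component = { state: { ...initialState } };
  component.setState = function (update, callback) {
    component.state = { ...component.state, ...update };
    if (callback) callback();
  };
  return component;
}

const c = makeComponent({ doeName: 'Doe', greeted: 0 });
c.setState({ doeName: 'Alex' }, () => console.log('state updated'));
console.log(c.state); // { doeName: 'Alex', greeted: 0 }
```

Note that greeted survives the update untouched — setState merges, it does not replace.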
Ok! I think we had enough of React for now. Let’s go back to React Native.
Styling
Well, sooner or later we will get bored of looking at the same style over and over. So, what can we do about it? It's easy: we will create a CSS file, add our style, and… NO! React Native does not play that way. Styling in React Native is not at all more complex, though — we will write our styles using JavaScript.
All of the core components accept a prop named style, which accepts an object containing styling that we write similarly to CSS. The difference is that instead of dashes we use camel casing (instead of font-family we have fontFamily), and that our values, unless they are purely numeric, need to be surrounded by quotation marks (color: blue becomes color: 'blue').
Without further ado, let's look at how we style a React Native component.
import React, { Component } from 'react';
import { StyleSheet, Text, View } from 'react-native';

export default class LotsOfStyles extends Component {
  render() {
    return (
      <View style={{margin: 40}}>
        <Text style={styles.red}>just red</Text>
        <Text style={styles.bigblue}>just bigblue</Text>
        <Text style={[styles.bigblue, styles.red]}>bigblue, then red</Text>
        <Text style={[styles.red, styles.bigblue]}>red, then bigblue</Text>
        <Text style={{color: 'yellow'}}>yellow</Text>
        <Text style={[styles.bigblue, {color: 'green'}]}>bigblue, then green</Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  bigblue: {
    color: 'blue',
    fontWeight: 'bold',
    fontSize: 30,
  },
  red: {
    color: 'red',
  },
});
There are two different approaches, somewhat similar to inline styles and class-based styles on the web. We can easily combine styles by passing an array of objects to the style prop instead of a single object.
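The ordering behaviour in the "bigblue, then red" lines above — later entries in the array override earlier ones — can be modelled with a simple left-to-right object merge (a conceptual sketch, not React Native's actual StyleSheet implementation):

```javascript
// Combining styles: on conflicting keys, the LAST style in the array wins.
const bigblue = { color: 'blue', fontWeight: 'bold', fontSize: 30 };
const red = { color: 'red' };

const flatten = (styleArray) => Object.assign({}, ...styleArray);

console.log(flatten([bigblue, red]).color); // 'red'  — red, listed last, wins
console.log(flatten([red, bigblue]).color); // 'blue' — bigblue wins
```

Non-conflicting keys such as fontSize simply pass through the merge unchanged.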
Dimensions
It should not be strange to us that we can use height and width to determine a component's size on the screen. We can also assume that our dimensions can be fixed or flexible. So, how are we setting each in React Native?
As for fixed, it is as simple as setting height and width. The trick is that all dimensions in React Native are unitless and represent density-independent pixels.
import React, { Component } from 'react';
import { View } from 'react-native';

export default class FlexDimensionsBasics extends Component {
  render() {
    return (
      <View style={{flex: 1}}>
        <View style={{flex: 3, backgroundColor: 'powderblue'}}>
          <View style={{width: 50, height: 50, backgroundColor: 'orange'}} />
          <View style={{width: '50%', height: 50, backgroundColor: 'green'}} />
          <View style={{width: 150, height: 50, backgroundColor: 'red'}} />
        </View>
        <View style={{flex: 2, backgroundColor: 'skyblue'}} />
        <View style={{flex: 1, backgroundColor: 'steelblue'}} />
      </View>
    );
  }
}
Now let’s check flexible dimensions. This can be achieved with flex property.
A component can only expand to fill available space if its parent has dimensions greater than 0. If a parent does not have either a fixed width and height or flex, the parent will have dimensions of 0 and the flex children will not be visible.
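The arithmetic behind those flex values is worth spelling out: a child's share of the parent is its flex value divided by the sum of its siblings' flex values (a sketch of the proportionality rule only — the real layout engine also accounts for fixed-size siblings, margins, and so on):

```javascript
// How flex siblings split the parent's space:
// each child gets parentSize * flex_i / sum(flex).
const flexSplit = (parentSize, flexes) => {
  const total = flexes.reduce((sum, f) => sum + f, 0);
  return flexes.map((f) => (parentSize * f) / total);
};

// The flex: 3 / 2 / 1 children from the example, in a 600-unit parent:
console.log(flexSplit(600, [3, 2, 1])); // [ 300, 200, 100 ]
```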
While talking about flex, we need to mention that we can use flexbox principles to lay out our components. Something to keep in mind is that flexbox in React Native differs slightly from what we are used to on the web.
Flexbox works the same way in React Native as it does in CSS on the web, with a few exceptions. The defaults are different, with flexDirection defaulting to column instead of row, and the flex parameter only supporting a single number.
Components
Now we know how to apply a basic style to our app. Now we need something to style, and that something is React Native components. So let’s take a look at a few:
View: the most fundamental component for building UI — as div for web
Text: used for displaying text — heading, and paragraphs for web
TextInput: used for inputting text into the app via keyboard — input for web
StyleSheet: creates styles — acts like a style tag for the web
Image: used for displaying images — same as img on web
Button: gives us buttons — acts like button for web
Platform: helps us determine whether we are running our code on Android or iOS
Alert / AlertIOS: the name says it — like a JavaScript alert message
ScrollView: helps us to scroll the content that is higher or wider than our device
FlatList: displays similarly structured data; unlike ScrollView it renders only visible items
Touchable “components”: creates touchable views — similar to anchor tags on web
Let’s not go overboard with listing components. These are some basic ones that we will be using the most.
We have already seen how we can style Text and View in our examples. Styling others is not much different. For the full list of styling attributes, you can check the official documentation.
Conclusion
We now know how to create a project using create-react-native-app, and we have a basic understanding of what we get out of the box when doing so. We have learned just enough React to be able to write code for our app, and we have shown how to style a React Native component, as well as how to give it dimensions. In addition, we have mentioned the fundamental React Native components that will have us up and ready when we want to build a simple app.
The code used here can be found on my GitHub repository. | https://www.maestralsolutions.com/react-native-intro/ | CC-MAIN-2020-40 | refinedweb | 1,949 | 63.49 |
How to install \ deploy django on Xampp webserver with wsgiscriptalias on windows
1. Installing XAMPP for Windows
The most common web server to use with MySQL, PHP or Perl applications is Apache. Many websites use this server – it's stable, well documented and open-source. Of course you can use plain Apache and learn all about running and maintaining a web server and a MySQL database. But if you need an "easy to use" and portable web server, and still want to learn about Apache, you should consider XAMPP.
You can download XAMPP here. It's really easy to install, it's well documented, and it comes as an installer or a zip version.
As the installation process is very well documented, we won't focus on that now.
2. Installing python
Apache itself does not work as a Python (or Django) server, so we need to configure it for our purpose. The first thing to do is install Python on our server. You can download it from here. You should download the latest stable version — stay with version 2 until you read that version 3 is fully supported by Apache and Django. Python comes with a Windows installer, so it shouldn't be a problem to install it; just follow the instructions on the screen. Remember to install Python for all users, because otherwise it won't work with Apache. The Apache server runs as a special system user and it needs access to your Python files.
After installing Python you should add the path to python.exe to your system PATH variable.
3. Installing mod_wsgi
We will use Python through WSGI scripts, and for that we need to install mod_wsgi. You can get the raw version from the project website here, or a compiled version with the .so extension here. Make sure that you use the right version for your Python and Apache server.
You should place the file in the Apache modules folder – xampp\apache\modules – and change its name to mod_wsgi.so. Next you have to locate and edit the Apache configuration file – \xampp\apache\conf\httpd.conf. Edit it, and at the end of the LoadModule section add a line:
LoadModule wsgi_module modules/mod_wsgi.so
At this point we should create a folder for our application. It is best practice to set up a folder away from the web root folder (not in xampp\htdocs). I've created one directly at d:\django-app.
Now we have to show our Apache server where we store Python scripts and how it should use them. In the configuration file httpd.conf we search for the "<Directory>" section. At the beginning we have:
<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>
Under that we add a WSGI handler with a line:
WSGIScriptAlias /wsgi "d:/django-app/wsgi_handler.py"
It means that we will use the file wsgi_handler.py in d:/django-app for handling requests. The /wsgi part is the part after localhost in the web browser address.
Next we have to set permissions for the application folder, so no one can edit it. We do it by adding another <Directory> section:
<Directory "d:/django-app">
    AllowOverride None
    Options None
    Order deny,allow
    Allow from all
</Directory>
Remember that in Apache we change all "\" to "/" in Windows paths.
Now we set up our test handler by creating the file wsgi_handler.py at d:/django-app.
Inside we should put this code:
def application(env, start_response):
    start_response("200 OK", [])
    output = "<html>Hello World! Request: %s</html>"
    output %= env['PATH_INFO']
    return [output]
We can test if it works by opening it in a web browser.
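Before involving Apache at all, the handler above can also be exercised from a plain Python session by calling it the way a WSGI server would — the environ dict and the start_response callable are the only moving parts (call_wsgi_app is just a helper name for this sketch):

```python
def application(env, start_response):
    start_response("200 OK", [])
    output = "<html>Hello World! Request: %s</html>"
    output %= env['PATH_INFO']
    return [output]

def call_wsgi_app(app, path):
    # Drive a WSGI app directly: fake environ, capture the status.
    statuses = []

    def start_response(status, headers):
        statuses.append(status)

    body = app({'PATH_INFO': path}, start_response)
    return statuses[0], "".join(body)

print(call_wsgi_app(application, '/wsgi'))
# ('200 OK', '<html>Hello World! Request: /wsgi</html>')
```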
4. Installing django
We should also install Django. Download it from here and unpack it. Installing Django is done by a Python script, so we have to use the command line. Press Start and write cmd in the search box (Run in other versions of Windows), then press Enter. A command-line console will appear. Move to the Django directory using the change directory (cd) command and run the installer by typing:
python setup.py install
You will be asked if you would like to create a superuser.
Do it – we will use the user and password to get to the admin panel later in the project.
To check the Django installation, write:
python
>>> import django
>>> print django.get_version()
Now you can add a path to your Django installation in Python. Django is installed as a Python library in Lib\site-packages\django\ in the Python directory. Add this folder to your system PATH.
5. First django project
To start django project use:
django-admin.py startproject SimpleCRM
It will create a folder for the project and the initial files. It is important to be in the folder that will be used for Django applications (in our case, d:\django-app).
Now we will edit wsgi_handler.py to use Django. Delete the previous content and put in this code:
import os
import sys

path = 'd:/django-app'
if path not in sys.path:
    sys.path.append(path)

os.environ['DJANGO_SETTINGS_MODULE'] = 'SimpleCRM.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
Now check results at | https://hubpages.com/technology/Installing-django-on-Xampp-webserver | CC-MAIN-2019-09 | refinedweb | 842 | 67.04 |
Design Guidelines, Managed code and the .NET Framework
We got some great feedback from this entry on DateTime. The dev lead for the BCL took time to post to my comments, but I thought I'd put them into the main feed as I think they are generally interesting.
I am a colleague of Brad's and I'm the development owner of DateTime. I wanted to respond to the feedback on this thread, particularly the excellent feedback from Douglas Husemann (here) and Jeremy Gray (here).
Serialization in XML is the number 1 complaint associated with the DateTime. It is definitely something we are committed to fixing in Whidbey.
There may be a misunderstanding, but there is no current plan to introduce a new "Date" data type to the System namespace. It is important to keep the number of distinct data types in the System namespace small because so much code needs to special-case it. The DateTime class was designed to meet the scenarios associated with a Date, a Time and a combined Date and Time. There are cases in the rest of the framework where some formatting options effectively preclude the usage as a stand-alone Date which we will be correcting in Whidbey. The use of the "zzz" format by XML Serialization, XML Convert, and DataSet is the most extreme example of this because this format is only valid in cases where the time is significant and the time is Local. It precludes Date-only and UTC scenarios. If we could wind the clock back to before shipping V1.0 we would have left the "zzz" format off when serializing DateTime.
Regarding the request for a servicing release. Introducing new functionality into our V1.0 and V1.1 releases is always significantly more risky and expensive than a regular release. Feedback such as you are providing here is very useful in convincing people that the extra cost and risk is justified. In fact the more specific business impact and adoption impact about this issue from as many distinct sources as I can get is useful in trying to get all the right teams and individuals bought in that this work should be done. If you know others impacted by this, or you can provide more information about its impact on you, that would be helpful in making something happen. Please contact me (amoore-at-nospam.microsoft.com) if there is more feedback you can provide..
Regarding compatibility: the only circumstances when I see customers significantly more inflamed than you are right now about this is not when we have bugs in our product, but when fixing bugs causes reasonably functioning code to break. And compatibility issues often go directly to impacting your users and customers as they upgrade their operating systems or runtimes, or even when they install service packs. Unfortunately, it is easy to take a dependency on the weird way DateTime serializes in XML, so we have to be very careful about how we introduce fixes in Whidbey, and we have to be even more careful about anything we consider in servicing, which has an even higher compatibility bar.
Requests for a "historical" DateTime have been very infrequent. We are unlikely to provide such a thing in the Longhorn timeframe due to lack of demand. There may be a good 3rd party opportunity for a library for historical dating that interoperates well with the DateTime class.
I will pass on the request for more calendars to the team that owns that area.
I am not clear about your request for a more culturally aware DateTime. This FAQ may be of help in finding ways to do this with the current framework:
We got some great feedback from this entry on DateTime. The dev lead for the BCL took time to post to my comments, but I thought I’d put them into the mainfeed as I think they are generally interesting. I am a colleague of Brad's and I'm the | http://blogs.msdn.com/brada/archive/2004/04/20/117279.aspx | crawl-002 | refinedweb | 667 | 61.06 |
Hi, 6000 debs built, and so far it all seems to work pretty well. I can't share the debs yet (internal and customer use only for now), but I would like to get consensus on armel patches before I start submitting them. The first candidate is dpkg. Guillem Jover's patch available here: changes DEB_HOST_GNU_{SYSTEM,TYPE} to have -gnueabi at the end. I've found that this doesn't work too well. For example, util-linux does stuff like this all over debian/rules: ifeq ($(DEB_HOST_GNU_SYSTEM),linux-gnu) MOUNTBINFILES = mount/mount mount/umount MOUNTSBINFILES = mount/swapon mount/losetup endif And ruby1.8 does: arch_dir = $(subst linux-gnu,linux,$(target_os)) (which turns arch_dir into arm-linuxeabi instead of arm-linux-eabi.) I asked Joey Hess, and he felt that there are probably more packages that depend on linux-gnu than on having gnueabi, which makes sense. The only packages that really need to know about gnueabi are binutils, gcc and glibc, the rest should just be checking defined(__ARM_EABI__). Opinions? cheers, Lennert | http://lists.debian.org/debian-arm/2007/01/msg00013.html | crawl-002 | refinedweb | 173 | 62.98 |
Java 14 Java Flight Recorder and JFR Event Streaming in Java 14 Get a stream of high-volume data points about your running app. by Ben Evans February 27, 2020 Download a PDF of this article In this article, I discuss a new feature arriving with Java 14. This feature, referred to as JFR Event Streaming (JEP 349), is the latest iteration of a mature set of profiling and monitoring technologies that have a long history. The original Java Flight Recorder (JFR) and Java Mission Control (JMC) tools were obtained by Oracle as part of the acquisition of BEA Systems back in 2008. The two components work together. JFR is a low-overhead, event-based profiling engine with a high-performance back end for writing events in a binary format, whereas JMC is a GUI tool for examining a data file created by JFR from the telemetry of a single JVM. The tools were originally part of the tooling offering for BEA’s JRockit JVM and were moved to the commercial version of the Oracle JDK as part of the process of merging JRockit with Java HotSpot VM. After the release of JDK 9, Oracle changed the release model of Java and announced that JFR and JMC would become open source tools. JFR was contributed to OpenJDK and was delivered in JDK 11 as JEP 328. JMC was spun out into a standalone open source project and exists today as a separate download. The arrival of Java 14 introduces a new feature to JFR: the ability for JFR to produce a continuous stream of events. This change also provides an API to enable events to be handled immediately, rather than by parsing a file after the fact. This change makes JFR Event Streaming a great foundation on which to build monitoring and observability tools. One issue, however, is that because JFR and JMC only recently became open source tools, many Java developers are not aware of their considerable capabilities. So before I get into the new Java 14 features, let’s explore JMC and JFR from the beginning. 
Introducing JFR Because JFR first became available as open source as part of OpenJDK 11, you’ll need to be running that version (or a more recent one) or be an Oracle customer running the commercial JDK. There are various ways to create a JFR recording, but I will look at two in particular: the use of command-line arguments when starting up a JVM and the use of jcmd. First, let’s see what command-line switches you need to start JFR. The key switch is this: -XX:StartFlightRecording:<options> This switch enables JFR recording at process start time. Until Java 14, JFR produced a file of profiling data. This could be presented as either a one-off dump file or a continuous ring buffer. A large number of individual command-line options control what data is being captured. In addition, JFR can capture more than a hundred different possible metrics. Most of these are very low impact, but some do incur noticeable overhead. Managing the configuration of all these metrics individually is a huge task. Instead, to simplify the process, JFR uses profiling configuration files. These files are simple XML files that contain the configuration for each metric and determine whether the metric should be captured. The standard JDK download contains two basic files: default.jfc and profile.jfc. The default level of recording, controlled by default.jfc, is designed to be extremely low overhead and to be usable by basically every production Java process. The profile.jfc configuration contains more detailed information but this, of course, comes at a higher runtime cost. Beyond the two supplied files, it is possible to create a custom configuration file that contains just the data points you want. The JMC tool (described in the next section) has a template manager that enables easy creation of these files. Other options that can be passed on the command line include the name of the file in which to store the recorded data and how much data to keep (in terms of the age of the data points). 
For example, an overall JFR command line might look like this: -XX:StartFlightRecording:disk=true,filename=/sandbox/service.jfr,maxage=12h,settings=profile This would create a rolling buffer of 12 hours duration containing the in-depth profiling information. There is no stipulation on how big this file could get. Note: When JFR was a part of the commercial build, it was unlocked with the -XX:+UnlockCommercialFeatures switch. However, Oracle JDK 11 and later versions emit a warning when this option is used. The warning is issued because all the commercial features have been open sourced and because the flag was never part of OpenJDK, it does not make sense to continue to use it. In OpenJDK builds, using the commercial features flag results in an error. One of the great features of JFR is that it does not need to be configured when the JVM starts up. Instead, JFR can be activated from the command line by using the jcmd program to control a running JVM from the UNIX command line: $ jcmd <pid> JFR.start name=Recording1 settings=default $ jcmd <pid> JFR.dump filename=recording.jfr $ jcmd <pid> JFR.stop Not only that, but it is possible to attach to JFR after the application has already started. You can see that JMC (see next section) provides the capability of dynamically controlling JFR within JVMs running on the local machine. No matter how JFR is activated, the result is the same: a single file per profiling run per JVM. The file contains a lot of binary data and is not human-readable, so you need some sort of tool to extract and visualize the data, such as JMC. Introducing JMC 7 Use the JMC graphical tool to display the data contained in JFR output files. It is started from the jmc command. JMC used to be bundled with the Oracle JDK download, but it is now available separately. Figure 1 shows the startup screen for JMC. After loading the file, JMC performs some automated analysis on it to identify any obvious problems present in the recorded run. Figure 1. 
The JMC startup screen Note: To profile, JFR must be enabled on the target application. As well as using a previously created file, JMC provides a tab on the left of the top left panel marked JVM Browser for attaching dynamically to local applications. One of the first screens that you’ll encounter in JMC is the overview telemetry screen (Figure 2), which shows a high-level dashboard of the overall health of the JVM. Figure 2. Initial telemetry dashboard The major subsystems of the JVM all have dedicated screens to enable deep analysis. For example, garbage collection has an overview screen (Figure 3) to show the garbage collection events over the lifetime of the JFR file. The Longest Pause display allows you to see where any anomalously long garbage collection events have occurred over the timeline. Figure 3. Garbage collection overview In the detailed profile configuration, it is also possible to see the individual events where new allocation buffers (TLABs) are handed out to application threads (Figure 4). This gives you a much more accurate view of resources within the process. Figure 4. TLAB allocations TLAB allocations let you easily see which threads are allocating the most memory; in Figure 4, it’s a thread that is consuming data from Apache Kafka topics. The other major subsystem of the JVM is the JIT compiler, and as Figure 5 shows, JMC allows you to dig into the details of how the compiler is working. Figure 5. Watching the compiler A key resource is the available memory in the JIT compiler’s code cache. A key resource is the available memory in the JIT compiler’s code cache, which stores the compiled version of methods. For processes that have many compiled methods, this area of memory can be exhausted, causing the process to not reach peak performance. Figure 6 shows the data related to this cache. Figure 6. 
Monitoring the code cache JMC also includes a method-level profiler (Figure 7), which works in a very similar way to those found in VisualVM or commercial tools such as JProfiler or YourKit. Figure 7. Profiling pane One of the more advanced screens within JMC is the VM Operations view (Figure 8), which shows some of the internal operations the JVM performs and how long they take. This is not a view that you would expect to need for every analysis, but it would be very useful for detecting certain types of problems. Figure 8. Viewing JVM operations For example, one of the key data points visualized here is bias revocation time. Most programs thrive with the JVM’s biased locking technology (which is broadly the idea that the first thread to lock an object is likely to be the only one ever to lock it). However, some workloads (such as Apache Cassandra) are architected in such a way that they are negatively affected by biased locking. The VM Operations view would allow you to spot such a case very easily. Parsing a JFR File Programmatically Using JMC to examine a file from a single JVM is an important use case for profiling and developing applications. However, in service-based and enterprise environments, the model of using a GUI to investigate files one by one is not always the most convenient approach. Instead, you might want to build tools that can parse the raw data captured by JFR and provide more convenient ways to access and visualize it. Fortunately, it’s very simple to get started with the programmatic API for JFR events. The key classes are jdk.jfr.consumer.RecordingFile and jdk.jfr.consumer.RecordedEvent. 
The former is made up of instances of the latter, and it is easy to simply step through the events in a simple piece of Java code, such as this: var recordingFile = new RecordingFile(Paths.get(fileName)); while (recordingFile.hasMoreEvents()) { var event = recordingFile.readEvent(); if (event != null) { var details = decodeEvent(event); if (details == null) { // Log a failure to recognize details } else { // Process details System.out.println(details); } } } Each event may carry slightly different data, so you need to have a decoding step that takes into account what type of data is being recorded. For simplicity in this example, I’m choosing to take in a RecordedEvent and convert it to a simple map of strings: public Map<String, String> decodeEvent(final RecordedEvent e) { for (var ent : mappers.entrySet()) { if (ent.getKey().test(e)) { return ent.getValue().apply(e); } } return null; } The real magic happens in the decoders, which have a slightly complex signature: private static Predicate<RecordedEvent> testMaker(String s) { return e -> e.getEventType().getName().startsWith(s); } private static final Map<Predicate<RecordedEvent>, Function<RecordedEvent, Map<String, String>>> mappers = Map.of(testMaker("jdk.ObjectAllocationInNewTLAB"), ev -> Map.of("start", ""+ ev.getStartTime(), "end", ""+ ev.getEndTime(), "thread", ev.getThread("eventThread").getJavaName(), "class", ev.getClass("objectClass").getName(), "size", ""+ ev.getLong("allocationSize"), "tlab", ""+ ev.getLong("tlabSize") )); The mappers are a collection of pairs. Each pair consists of a predicate and a function. The predicate is used to test incoming events. If it returns TRUE, the associated function is applied to the event and transforms the event into a map of strings. This means that the decodeEvent() method loops over the mappers and tries to find a predicate that matches the incoming event. As soon as it does, the corresponding function is called to decode the event. 
Each version of Java has a slightly different set of events that JFR supplies. In Java 11, there are more than 120 different possible types. This presents a challenge for supporting all of them because each one will need a slightly different decoder. However, supporting a subset of metrics of particular interest is very feasible. Java 14 JFR Event Streaming With Java 14, a new usage mode for JFR becomes available, which is JFR Event Streaming. This API provides a way for programs to receive callbacks when JFR events occur and respond to them immediately. One obvious and important way that developers might make use of this is via a Java agent. This is a special JAR file that uses the Instrumentation API. In this API, a class declares a special static premain() method to register itself as an instrumentation tool. An example Java agent might look like this: public class AgentMain implements Runnable { public static void premain(String agentArgs, Instrumentation inst) { try { Logger.getLogger("AgentMain").log( Level.INFO, "Attaching JFR Monitor"); new Thread(new AgentMain()).start(); } catch (Throwable t) { Logger.getLogger("AgentMain").log( Level.SEVERE,"Unable to attach JFR Monitor", t); } } public void run() { var sender = new JfrStreamEventSender(); try (var rs = new RecordingStream()) { rs.enable("jdk.CPULoad") .withPeriod(Duration.ofSeconds(1)); rs.enable("jdk.JavaMonitorEnter") .withThreshold(Duration.ofMillis(10)); rs.onEvent("jdk.CPULoad", sender); rs.onEvent("jdk.JavaMonitorEnter", sender); rs.start(); } } } The premain() method is called and a new instance of AgentMain is created and run in a new thread. The recording stream is configured and when it is started, it never returns (which is why it’s created on a separate instrumentation thread). In the recording stream, a large number of events may be generated. 
Fortunately, the JFR API provides some basic filtering to reduce the number of events that the callbacks are expected to process: Enabled: Determines whether the event be recorded at all Threshold: Specifies the duration below which an event is not recorded Stack trace: Determines whether the stack trace from the Event.commit() method should be recorded Period: Specifies the interval at which the event is emitted, if it is periodic Note that further filtering capabilities must be implemented by the application programmer. For example, a Java program that is allocating several GB per second (which is a high allocation, but entirely possible for large applications) can easily generate hundreds of thousands of events. The example Java agent starts a recording stream with each event being sent to an object called a JfrStreamEventSender, which looks like this: public final class JfrStreamEventSender implements Consumer<RecordedEvent> { private static final String SERVER_URL = ""; @Override public void accept(RecordedEvent event) { try { var payload = JFRFileProcessor.decodeEvent(event); String json = new ObjectMapper().writeValueAsString(payload); var client = HttpClient.newHttpClient(); var request = HttpRequest.newBuilder() .uri(URI.create(SERVER_URL)) .timeout(Duration.ofSeconds(30)) .header("Content-Type", "application/json") .POST(HttpRequest.BodyPublishers.ofString(json)) .build(); HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString()); Logger.getLogger("JfrStreamEventSender").log( Level.INFO, "Server response code: " + response.statusCode() + ", body: " + response.body()); } catch (IOException | InterruptedException e) { Logger.getLogger("JfrStreamEventSender").log( Level.SEVERE, "Unable to send JFR event to server", e); } } } This stateless class simply takes in the event, decodes it in the same way shown in the file processing case and then uses the Jackson library to encode it as JSON. 
The Java 11 HTTP client is then used to send the metric payload to a service that can process it, which in this example is at port 8080 of localhost. There is, of course, much more to do to take this simple example code and build a production-grade monitoring tool from it. Building at scale requires creating HTTP endpoints capable of handling a large number of incoming data points. Problems such as uniquely identifying JVMs and ensuring that inbound data is correctly attributed must also be handled. A data analysis pipeline and medium-term storage system also need to be deployed, as well as front-end visualization and display pages. It is essential that all of these systems are operated at the highest reliability levels, because they are monitoring production systems. Despite these challenges, JFR Event Streaming represents a significant step forward in the technologies you can use to monitor and observe JVMs running applications. There is keen interest in the new APIs and it seems very likely that shortly after the release of Java 14, support for the streaming events will be added to open source tracing and monitoring libraries. The benefits of the JFR approach to monitoring and profiling are coming to a JVM near you soon. | https://blogs.oracle.com/javamagazine/java-flight-recorder-and-jfr-event-streaming-in-java-14?source=:em:nw:mt::RC_WWMK190726P00001:NSL400036067&elq_mid=156196&sh=2419091808192613082213293109241520&cmid=WWMK190726P00001C0006 | CC-MAIN-2021-31 | refinedweb | 2,697 | 53.71 |
I am trying to read a file using NIOS with the eventual goal of loading that readdata into an EEPROM.
I have followed the instructions outlined in :
I have gone to the BSP editor -> Software Packages and enabled altera_hostfs. The hostfs_name is /mnt/host
My code (included below) gets stuck at fopen but if I use the NIOSII debugger, I am able to step through the whole program and readout "Hello World" into the console. Any suggestions would be appreciated.
#include <stdio.h>
#include <alt_types.h>
#include <io.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#define BUF_SIZE (12)
void main()
{
FILE* fp;
char buffer[BUF_SIZE];
printf("start\n");
fp = fopen("/mnt/host/read_test.txt", "r");
printf("fopen command\n");
if (fp == NULL)
{
printf ("connot open file for reading\n");
exit(1);
}
fread(buffer, BUF_SIZE, 1, fp);
fclose(fp);
printf("%s", buffer);
}
Link Copied
Hi,
I see the following reference is useful for you, please let me know if it isn't useful:...
BTW, please make sure you flush the cash to be able to see the output.
Regards,
It was confirmed by an FAE that fopen will only work while using the Debug tool.
Thanks anyway.
Thanks Valentina for your confirmation, Please let me know if you need help. | https://community.intel.com/t5/FPGA-Intellectual-Property/NIOS-fopen-command-only-works-if-using-the-NIOSII-Debug-tool/td-p/669634 | CC-MAIN-2021-21 | refinedweb | 212 | 66.13 |
How to set up a Barometric pressure sensor BMP085 on Raspberry Pi with Raspbian
Works with Raspbian hf Aug/Sept 2012
Under Pressure
From previous blog posts, you’ll know I have a Raspberry Pi set up to read two temperature sensors and two light sensors (inside and outside) and log the data online at COSM
Setting up temperature sensors and COSM feed – perfectly tailored for their Occidentalis Linux distro. I use Raspbian hf, so I’m detailing instructions for that. They’re very similar, but a couple of subtle differences. So, first we’ll start with the software…
sudo apt-get update
sudo apt-get install python-smbus
y to confirm
This installed
i2-tools as well, which we’ll use a little later. :)
How to Enable i2c in Raspbian
Then it was necessary to make a tweak to enable i2c in raspbian. A quick google of how to enable i2c in raspbian revealed this page from S K Pang (I bought my Pi Cobbler from S K Pang). Here’s what I did…
sudo nano /etc/modprobe.d/raspi-blacklist.conf
…and comment out line 3 (the one with i2c in) with a
# at the start of the line…
ctrl+o
<ENTER>
ctrl+x
Then…
sudo nano /etc/modules
you need to add…
i2c-dev on the last line
ctrl+o
<ENTER>
ctrl+x
Now we’re going to add the user pi to the group i2c.
sudo adduser pi i2c
Now we need to reboot to activate the new settings. Or if your sensor is not yet connected, you could shut down and connect it while the Pi is powered down. (Do disconnect the power after shutting down) :)
sudo reboot to reboot, or
sudo halt to shut down.
Hooking it all up
The board my sensor came on has eight pins. Only four of them are used here. Your board may be different, depending on where you source it from. Here’s a shot of mine…
- GND goes to Ground on the Pi
- 3.3 goes to 3V3 on the Pi
- SDA goes to SDA on the Pi (Rev 1 GPIO 0|Rev 2 GPIO 2)
- SCL goes to SCL on the Pi (Rev 1 GPIO 1|Rev 2 GPIO 3)
Login and try it out
When it comes back up, log in as pi and type
i2cdetect -y 0 (change 0 to 1 if you have a Rev. 2 Pi)
If your sensor or other i2c device is connected correctly, you’ll get an indication of its address.
Adding pi to the i2c user group means we no longer need to use sudo for i2c access commands.
Now we need to install some software from the lovely guys at Adafruit. We use git for this. If you don’t have git installed, install that first with
sudo apt-get install git. Then,
git clone
cd Adafruit-Raspberry-Pi-Python-Code
cd Adafruit_BMP085
Then you can test out your sensor with…
python Adafruit_BMP085_example.py
And if you did everything correctly, you should be rewarded with three readings; temperature, pressure and altitude…
I was elated. This worked straight away for me and the whole thing took about 15-20 minutes. :)
Let’s go COSMic
I’m not going to fully detail this part, but essentially I took the parts I needed from the Adafruit_BMP085_example.py script and libraries…
This needs to be near the top of your python script
from Adafruit_BMP085 import BMP085set up resolution mode of sensor
bmp = BMP085(0x77)
And these files need to be in same directory as your script
Adafruit_BMP085.py
Adafruit_I2C.py
And these function calls will get you your data
temp = bmp.readTemperature()
pressure = bmp.readPressure()
altitude = bmp.readAltitude()
Then I incorporated the bits I needed into my weather station Pi’s Python code, to log the new sensor’s data in my COSM feed…
PeterO from the Milton Keynes Raspberry Jam forum pointed out the trace could do with some low pass filtering, to make it a bit less noisy. So I tweaked my script to give a weighted average of the last five readings. It’s a little smoother. :) Could probably be further improved. I may well tweak it some more.
If you install a BMP085, I hope yours works out as smoothly as mine did. :)
Hi, I tried to use your good tutorial by I got several problems. Perhaps you could help me.
1°) My sensors doens’t looks like the yours
2°) I connected the 4 pins using adafruit setup, installed and enable i2c, but it doesn’t detect anything.
Do you have an idea of what could happens ?
Thank you
Manuel
It must be something to do with your wiring.
I have just been moving my BMP085 from my Arduino to my Pi. If you have a V2 Pi you need to change a line in the Adafruit I2C class, as the address of the SMBus has changed:
class Adafruit_I2C :
def __init__(self, address, bus=smbus.SMBus(1), debug=False):
The old value was smbus.SMBus(0). If you don’t do this you either get I2C errors, or the Pi can see your sensor.
Ian.
Yep, that’s absolutely right. I did the same thing on mine last week :)
I have been trying to get my BMP085 to work for ages now and i was just wondering if anyone knew what was wrong with mine. when i follow the tutorial, i do everything as it says, but when i do the i2cdetect -y 1 function it comes up as the same as the tutorials but there are no numbers on the table indicating that there is no i2c device connected. i have checked that i have wired up correctly. when i first tried i just ignored that it hadnt recognised an i2c device and carried on. i got to the part where you run the Adafruit_BMP085_Example.py and it says that there is a syntax error in the Adafruit_I2C.py which is in the same directory as the Adafruit_BMP085_Example.py. I was messing about and found that there is another Adafruit_I2C.py in a different directory, so i moved that file into the same directory as the Adafruit_BMP085_Example.py which replaced the other Adafruit_I2C.py file and ran the Adafruit_BMP085_Example.py and it came up with:
Temperature: 12.80 C
Pressure: 127.51 hPa
Altitude: 14449.01
I was just wondering if anyone knew what the problem is and any help will be really helpful
Matt.
Are you using the correct SMBus for your rev of Pi? Rev 1 and Rev 2 are different.
I have the same issue,
Error accessing 0x77: Check your I2C address
Error accessing 0x77: Check your I2C address
Temperature: 12.80 C
Pressure: 127.51 hPa
Altitude: 14449.01
I have modified Adafruit_I2C.py to
def __init__(self, address, busnum=1, debug=False):
self.address = address
# By default, the correct I2C bus is auto-detected using /proc/cpuinfo
# Alternatively, you can hard-code the bus version below:
# self.bus = smbus.SMBus(0); # Force I2C0 (early 256MB Pi’s)
self.bus = smbus.SMBus(1); # Force I2C1 (512MB Pi’s)
#self.bus = smbus.SMBus(
# busnum if busnum >= 0 else Adafruit_I2C.getPiI2CBusNumber())
self.debug = debug
but still have no joy
You need to connect also GND.
I have resolved my issue which looked as per above by using the GND on pin 5 instead of the GND on pin 13, even though the continuity checks out???
Thank you a lot for your beautiful article! All worked fine with my raspberry-pi (v.2) without any error!
Excellent. Always nice to hear that :)
Thanks a lot for your info. In 10 (ten) minutes the bmp085 & raspi fly on!
Thanks – up and running in under 10 minutes! Hope to link up barometer feed to time lapse pi-cam.
Excellent :)
Article very helpful, thanks. Pi up and running in no time. Have two in fact, dev and prod version. Was also able to integrate to xively to post these values as an activated device. You get the nice ui and history. Worth the effort of extra one install.
[…] RasPi.TV […]
I hooked mine up and with this software it works…..Thanks… I need deg. F , Feet , and inHg. values… | https://raspi.tv/2012/how-to-set-up-a-barometric-pressure-sensor-bmp085-on-raspberry-pi-with-raspbian?replytocom=42056 | CC-MAIN-2019-18 | refinedweb | 1,372 | 73.88 |
Custom tools for viewers and custom toolbars
Writing a custom tool for a viewer toolbar
Here we take a look at how to create a tool to include in a viewer's toolbar (either one of the built-in viewers or a custom viewer). There are two types of tools: ones that can be checked and unchecked, and ones that simply trigger an event when pressed, but do not remain pressed. These are described in the following two sub-sections.
Non-checkable tools
The basic structure for a non-checkable tool is:
from glue.config import viewer_tool
from glue.viewers.common.qt.tool import Tool

@viewer_tool
class MyCustomTool(Tool):

    icon = 'myicon.png'
    tool_id = 'custom_tool'
    action_text = 'Does cool stuff'
    tool_tip = 'Does cool stuff'
    shortcut = 'D'

    def __init__(self, viewer):
        super(MyCustomTool, self).__init__(viewer)

    def activate(self):
        pass

    def close(self):
        pass
The class-level variables set at the start of the class are as follows:
- icon: this should be set either to the name of a built-in glue icon, or to the path to a PNG file to be used for the icon. Note that this should not be a QIcon object.
- tool_id: a unique string that identifies this tool. If you create a tool that has the same tool_id as an existing tool already implemented in glue, you will get an error.
- action_text: a string describing the tool. This is not currently used, but would be the text that would appear if the tool was accessible by a menu.
- tool_tip: this should be a string that will be shown when the user hovers above the button in the toolbar. This can include instructions on how to use the tool.
- shortcut: this should be a string giving a key that the user can press when the viewer is active, which will activate the tool. This can include modifier keys, e.g. 'Ctrl+A' or 'Ctrl+Shift+U', but can also just be a single key, e.g. 'K'. If present, the shortcut is added at the end of the tooltip. If multiple tools in a viewer have the same shortcut, a warning will be emitted, and only the first tool registered with a particular shortcut will be accessible with that shortcut.
When the user presses the tool icon, the activate method is called. In this method, you can write any code, including code that may for example open a Qt window, or change the state of the viewer (for example changing the zoom or field of view). You can access the viewer instance with self.viewer.
Finally, when the viewer is closed the close method is called, so you should use this to do any necessary cleanup.
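To make the activate/close lifecycle concrete, here is a small runnable sketch that exercises a non-checkable tool against a dummy viewer. Note that the Tool base class and DummyViewer below are hypothetical stand-ins so the control flow can be followed without glue installed; in real code you would subclass glue.viewers.common.qt.tool.Tool and receive an actual glue viewer.

```python
# Hypothetical stand-in for glue's Tool base class, used only to show the flow.
class Tool:
    def __init__(self, viewer):
        self.viewer = viewer  # glue passes the viewer instance in


class LogTool(Tool):
    icon = 'myicon.png'
    tool_id = 'log_tool'

    def activate(self):
        # A non-checkable tool runs once per click and does not stay pressed.
        self.viewer.log.append('activated')

    def close(self):
        # Called when the viewer is closed; do any cleanup here.
        self.viewer = None


class DummyViewer:
    """Minimal stand-in for a glue viewer (an assumption for this sketch)."""
    def __init__(self):
        self.log = []


viewer = DummyViewer()
tool = LogTool(viewer)
tool.activate()  # simulates the user clicking the toolbar icon twice
tool.activate()
print(viewer.log)  # → ['activated', 'activated']
tool.close()
```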
The @viewer_tool decorator tells glue that this class represents a viewer tool, and you will then be able to add the tool to any viewers (see Customizing the content of a toolbar) using the tool_id.
Checkable tools
The basic structure for a checkable tool is similar to the above, but with an additional deactivate method, and a status_tip attribute:
from glue.config import viewer_tool
from glue.viewers.common.qt.tool import CheckableTool

@viewer_tool
class MyCustomButton(CheckableTool):

    icon = 'myicon.png'
    tool_id = 'custom_tool'
    action_text = 'Does cool stuff'
    tool_tip = 'Does cool stuff'
    status_tip = 'Instructions on what to do now'
    shortcut = 'D'

    def __init__(self, viewer):
        super(MyCustomButton, self).__init__(viewer)

    def activate(self):
        pass

    def deactivate(self):
        pass

    def close(self):
        pass
When the tool icon is pressed, the activate method is called, and when the button is unchecked (either by clicking on it again, or if the user clicks on another tool icon), the deactivate method is called. As before, when the viewer is closed, the close method is called. The status_tip is a message shown in the status bar of the viewer when the tool is active. This can be used to provide instructions to the user as to what they should do next.
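The checked/unchecked cycle can be sketched the same way. This is again a plain-Python stand-in with invented names ('select:region' and the active flag are ours), not glue's actual CheckableTool:

```python
class RegionSelectTool:
    """Checkable tool: activate() when checked, deactivate() when unchecked."""
    tool_id = 'select:region'              # hypothetical tool_id
    status_tip = 'Drag to select a region'

    def __init__(self):
        self.active = False

    def activate(self):
        # a real toolbar would now show status_tip in the status bar
        self.active = True

    def deactivate(self):
        # called on uncheck, or when the user picks another tool
        self.active = False


tool = RegionSelectTool()
tool.activate()
print(tool.active)   # True
tool.deactivate()
print(tool.active)   # False
```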
Customizing the content of a toolbar
When defining a tool as above, the @viewer_tool decorator ensures that the tool is registered with glue, but does not add it to any specific viewer. Which buttons are shown for a viewer is controlled by the tools class-level attribute on viewers:
>>> from glue.viewers.image.qt import ImageViewer
>>> ImageViewer.tools
['select:rectangle', 'select:xrange', 'select:yrange',
 'select:circle', 'select:polygon', 'image:colormap']
The strings in the tools list correspond to the tool_id attribute on the tool classes. If you want to add an existing or custom button to a viewer, you can therefore simply do e.g.:
from glue.viewers.image.qt import ImageViewer
ImageViewer.tools.append('custom_tool')
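Since tools is an ordinary Python list of tool_id strings, normal list operations apply. A small sketch using the ids shown above (the duplicate check is our own defensive addition, not something the docs require):

```python
# stand-in list mirroring part of ImageViewer.tools from the example above
tools = ['select:rectangle', 'select:xrange', 'image:colormap']

# append only if not already present, to avoid a duplicate button
if 'custom_tool' not in tools:
    tools.append('custom_tool')

# an unwanted tool can be dropped the same way
tools.remove('select:xrange')
print(tools)  # ['select:rectangle', 'image:colormap', 'custom_tool']
```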
Including toolbars in custom viewers
When defining a data viewer (as described in Writing a custom viewer for glue with Qt), it is straightforward to add a toolbar that can then be used to add tools. To do this, when defining your DataViewer subclass, you should also specify the _toolbar_cls and tools class-level attributes, which should give the class to use for the toolbar and the default tools that should be present in the toolbar:
from glue.viewers.common.qt.data_viewer import DataViewer
from glue.viewers.common.qt.toolbar import BasicToolbar

class MyViewer(DataViewer):

    _toolbar_cls = BasicToolbar
    tools = ['custom_tool']
In the example above, the viewer will include a toolbar with one tool (the one we defined above). Currently the only toolbar class that is defined is BasicToolbar.
Note that the toolbar is set up after __init__ has run. Therefore, if you want to do any custom set-up of the toolbar after it has been created, you should overload the initialize_toolbar method, e.g.:
class MyViewer(DataViewer):

    _toolbar_cls = BasicToolbar
    tools = ['custom_tool']

    def initialize_toolbar(self):
        super(MyViewer, self).initialize_toolbar()
        # custom code here
In initialize_toolbar (and elsewhere in the class) you can then access the tool instances using self.toolbar.tools (which is a dictionary where each key is a tool_id).
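As a sketch of that mapping, self.toolbar.tools behaves like a plain dictionary from tool_id to tool instance. DummyTool below is an invented stand-in used only to show the lookup pattern:

```python
class DummyTool:
    """Invented minimal tool carrying only an id and a shortcut."""
    def __init__(self, tool_id):
        self.tool_id = tool_id
        self.shortcut = None


# build a tool_id -> instance mapping the way a toolbar might
instances = [DummyTool('custom_tool'), DummyTool('image:colormap')]
tools = {tool.tool_id: tool for tool in instances}

# e.g. inside initialize_toolbar, tweak one tool after setup
tools['custom_tool'].shortcut = 'D'
print(tools['custom_tool'].shortcut)  # D
```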
RationalWiki talk:Community Standards/Archive3
Contents
- 1 Strange Names
- 2 Shootings on WIGO
- 3 Another bit about online communities
- 4 New sysop guide
- 5 Added some at my user page for discussion
- 6 Mainspace articles
- 7 Additions without community discussion......
- 8 Death of TK
- 9 Learning from our Mistakes
- 10 Crisis of the Mobocracy
- 11 Standards of Behavior additions
- 12 A group again
- 13 NOMA standards
- 14 Protecting talk pages?
- 15 Essay Namespace
- 16 Oversight
- 17 Discussing long term blocks
- 18 "attempting to promote a radical agenda "
Strange Names[edit]
I propose that all names longer than 24 characters (for example) be truncated without warning.--Bobbing up 15:05, 9 December 2007 (EST)
Seconded! (But 20 letters like racehorses?) Susanpurrrrr ... 15:13, 9 December 2007 (EST)
27. --Signed by Elassint the Great Hi!
- I suggest 21, since it's a Fibonacci number. --AKjeldsenGodspeed! 15:21, 9 December 2007 (EST)
- I don't think it should go any less then 21. --Signed by Elassint the Great Hi! 15:22, 9 December 2007 (EST)
- If that's too low, maybe 23, a prime number? --AKjeldsenGodspeed! 15:26, 9 December 2007 (EST)
- That would work i guess. --Signed by Elassint the Great Hi! 15:28, 9 December 2007 (EST)
User:kkkkkkkkkkkkkkkkkkkkkk actually, thats still a bit to short.
- I agree as long as it's enforced by software, not by hand. I suggest 26 characters. 26 is a theologically significant number, the number of dimensions required to satisfy bosonic string theory. --Jeєv☭sYour signature uses all my CPU time... 15:47, 9 December 2007 (EST)
- I propose that we require user names to have a minimum of 17 characters. human
16:27, 9 December 2007 (EST)
- I propose that usernames should not be allowed if they do not interfere with the wiki-markup in some way. -- מְתֻרְגְּמָן וִיקִי שְׁלֹום!
Shootings on WIGO[edit]
It is a fact that CP is going to make hay over any school (or out of school) shootings that are going to happen from time to time. Harping on this fact in WIGO, I sense, is liable to be counter-productive to RW in the long haul. So, I'd like to propose that reaction of CP reaction of such stories be limited in WIGO. CЯacke® 16:17, 10 December 2007
Please Discuss[edit]
I basically agree. It's hard to make any comment on these things without virtually doing the same thing - trying to make political hay, even if second hand. human 18:04, 10 December 2007 (EST)
- I agree that continued posting in WIGO is unnecessary, but I think there's value in keeping track of the CP reaction to those events, if nothing else to document the sheer number of times that the political cheap shot card is played. Maybe they could go on their own special cheap shot page separate from WIGO. I guess it could be argued that RW would be playing the second-hand cheap shot card too by just having that page, but it'd do a better job of showing CP's mentality about this sort of thing than a bunch of WIGO posts.--Bayesupdate 16:25, 7 January 2008 (EST)
- How about a classy article on the topic, written when there are none of these killings in the news? human 16:48, 7 January 2008 (EST)
Another bit about online communities[edit]
Meatball Wiki [1]. Right now, its downish (can still poke about it in google cache [2] [3] etc... ah ha! the one I was looking for [4] given the retirement template). Googlecache is rather inelegant, but it works. Once its up again I strongly recommend people to browse it - there are many gems hidden in there that are useful to read in understanding what is going on here and there. --Shagie 18:08, 14 December 2007 (EST)
- Was downish when I looked then. Is nice and happy now. Here are the refs I linked to above - and for those putting retired on their page --Shagie 04:48, 15 December 2007 (EST)
New sysop guide[edit]
I just started a rough version of RationalWiki:Sysop_guide. Please to all and sundry to make it a mob project and not just "my" ideas. human 18:21, 20 December 2007 (EST)
Added some at my user page for discussion[edit]
Please discuss there, since they're a couple different guidelines, and then if any one seems workable we'll move them over here. The link is user:AmesG/wikilawyering.-αmεσ (spy) 02:00, 23 December 2007 (EST)
- Premature archiving? Naw, no one but me does that, right? --TK/MyTalk 02:13, 23 December 2007 (EST)
- See? THIS is trolling. I archived everything OLDER THAN 19 NOVEMBER. Chill. Do you remember the conditions of your ban being lifted? I think another 1 day may be in order soon... technically you've already used your second 1-day fibonnaci block, but, maybe we should have mercy.-αmεσ (spy) 02:16, 23 December 2007 (EST)
- Since the earlier ban (thanks for using the right term, instead of block) wasn't legal under RW Community Standards, your attached conditions have no weight or value. You, like Andy, another Lawyer, just like to intimidate. Some of those discussions, like the ones about check user, shouldn't be pushed aside, without good reason, is what I was saying, and some, I am sure, have been gone, or missed the later November discussions. Think of this as me trying to point out, when to archive is subjective, so bashing people for it, really isn't living up to the spirit here, eh? Pity you consider any disagreement from anyone you hate, to be trolling. Human, as a Bureaucrat, isn't trolling posting on my page, about something there is ample proof to support, his mis-stating my intentions, eh? You just go ahead and block away, Ames. Keep calling me names. Every time you do that, it only diminishes you (and RW), not me. --TK/MyTalk 02:25, 23 December 2007 (EST)
Still going at it, eh guys...
Premature archiving? Naw, no one but me does that, right? TK, that is sort of similar to the snarky comments that you often receive and are irritated by. It could be interpreted as an unhelpful comment and a veiled accusation. Now I don't think it was, and I'm not accusing you of such! Personally I think AmesG overreacted a bit, there wasn't really a reason to accuse you of trolling. But on the other hand, TK you also seem to... overreact a bit to these sort of commments sometimes. It's understandable due to bad past experiences with editors here, you feel a lot of comments are made out of spite... but you take a lot of things as insults that might normally be brushed off. Not that you should have to, but these things really don't matter all that much, and for the sake of moving on...
And I know everyone can point to a previous edit or comment and say "There, look what they did, I was justified". Or they can say "I know from emails and past discussions that they are really like this". And I know that this stance seems infinitely reasonable and rational to the person doing so, considering the circumstances, the history, emails, discussions, etc, etc... but none of that is helping!
I'll be blunt - but I'll also assume the benefit of the doubt.
TK, please stop acting like a martyr. Not everyone likes you, but that doesn't mean they don't want to move on as well. AmesG, Human, whoever else... please stop treating TK like a prisoner on parole. Not everyone doesn't like him, and he wants to move on as much as you do.
Does everybody hate me yet? UchihaKATON! 03:20, 23 December 2007 (EST)
- Thanks for the effort, Uchiha, I over-reacted, as you said. Sometimes it's hard to discern the innocent from the nefarious. I am totally okay with the above being nuked. --TK/MyTalk 03:44, 23 December 2007 (EST)
Mainspace articles[edit]
I have changed our mainspace article description from "any topic" to the mission statement. Please revert me if you think this is a mistake.--Bobbing up 14:37, 7 January 2008 (EST)
Additions without community discussion......[edit]
I noticed that the following changes have been made without discussion, by administrative fiat:
"In egregious cases, some users may be subjected to an accelerated block policy, but only upon agreement of most of the sysops." was added here by AmesG, apparently in a bid to silence anyone some admins do not like.
And here Point number three was "amended" and the new text I have put in italics: "Personal information about other users that is not volunteered by that user should not be posted on this site. This includes IP addresses, and even where an IP address is volunteered, discussion of the user's geographical location, place of employment, or other private information (even if publicly available) is frowned upon."'
While I agree with the addition about personal information, I do not agree that it is in the best interests of RW to allow major changes to blocking policy to be made without community discussion, which seems to me to be the hallmark of RW from the very start. Many of us who can never agree on anything political do agree that this blocking business is getting more and more punitive and subjective. Others, members of the administrators seem to be moving more and more toward a totalitarian outlook to silence anyone they don't agree with. I wonder if all the members of this community really want to cede that much authority to the janitors? And don't our rules demand changes in community standards at least be discussed before major changes are made? Are we going to be told no major changes have been made? Shouldn't the revised passages at least be removed pending discussion? --TK/MyTalk"Lowly" editor 22:15, 14 January 2008 (EST)
- Perhaps the first new edit should be removed pending discussion, however I feel that the second is just making things a bit more specific and not actually adding anything new, and therefore should stay. Just my 2 cents. Pinto's5150 Talk 22:19, 14 January 2008 (EST)
- I totally agree with adding the more detailed information about personal information, Locke. However, policy should be adhered to in all cases, no? Without a proposal to change, discussion and community agreement, doesn't that bring Rational Wiki down to being like Conservapedia? I have said mostly in jest that some Admins are beginning to act more and more like Andy, but things done like this make it seem like its coming true. What is so different about here if we allow administrators to change the rules to make it easier for them to block members of the community? Isn't this place supposed to be about being tolerant? Tolerant even when we 100% disagree and hate what is being said? If this is where things are going, and Bureaucrats are fine with acting like Andy, they might as well shut this place done, IMO. --TK/MyTalk"Lowly" editor 22:26, 14 January 2008 (EST)
So Vote![edit]
Apologies, I considered the first (no private information) to be agreed upon; I also considered the second (special block sequence) de facto agreed upon, and a mere codification of what you're already subjected to, TK, but it might be best to leave that unwritten. The diff is this, I'll remove until discussion is complete.-αmεσ (spy) 23:55, 14 January 2008 (EST)
- What I am already subjected to was implemented, like these changes, AmesG, by you. Not by the Community. I am not the Community. You are not the Community. Everyone together, deciding is. The whole point, if one reads the history of RW, was to avoid the crazy-ass blocking and arbitrary running off of people. Supposedly this was of great importance, and debated at great length when RW 2.0 came along. Supposedly the users here wanted absolutely no blocking that even smacked at how they do at CP. I can understand draconian blocking for outright vandals, but to block a user simply because some hate them, is really anti-intellectual bullshit. --TK/MyTalk"Lowly" editor 01:41, 15 January 2008 (EST)
- Nice, I am now prevented from editing or voting. Good work! --TK/MyTalk"Lowly" editor 00:21, 15 January 2008 (EST)
- Uh... I don't follow.-αmεσ (spy)
- Read the history...I detect some wiki glitch. --TK/MyTalk"Lowly" editor 00:26, 15 January 2008 (EST)
- That was weird to say the least. Pinto's5150 Talk 00:28, 15 January 2008 (EST)
- o_0 This is fucking bizarre...!! UchihaKATON! 00:33, 15 January 2008 (EST) (P.S. I reverted myself to avoid totally wamboozling the page)
- TK, it's sweet of you to blame us, but it's not our fault. I've added your votes as I saw them in wikihistory. Try restarting your computer...-αmεσ (spy) 00:36, 15 January 2008 (EST)
- It's not just TK having problems, look at what happened when Uchiha tried to edit the page. Pinto's5150 Talk 00:38, 15 January 2008 (EST)
- Ames, btw, reading from the page history TK attempting to add an "Aye" vote (correct me if I'm wrong though). UchihaKATON! 00:44, 15 January 2008 (EST)
- Ah, ok... So... I'll put that one back then? UchihaKATON! 00:47, 15 January 2008 (EST)
- My votes have been added correctly, thanks for doing it! AmesG--can you at least admit your own bias? I never blamed "anyone", and in point of fact said I detected some unknown wiki glitch. "Good work" was only me alluding to stupid wikimedia....sorry my comment was taken for something it wasn't, all. --TK/MyTalk"Lowly" editor 01:39, 15 January 2008 (EST)
I added the privacy issue; I consider the blocking policy addition DOA.-αmεσ (spy) 12:27, 15 January 2008 (EST)
Privacy[edit]
- Aye αmεσ (spy)
- Aye Pinto's5150 Talk
- Aye --TK/MyTalk"Lowly" editor 03:34, 15 January 2008 (EST)
- Aye Genghis Does this really need to be voted on? 02:23, 15 January 2008 (EST)
- Aye --Bobbing up 03:16, 15 January 2008 (EST)
- Aye -- Radioactive afikomen Please ignore all my awful pre-2014 comments. 04:56, 15 January 2008 (EST)
- No/nay (unanimity is boring, isn't it?) Ed @but not the Poor one! 06:10, 15 January 2008 (EST)
- Aye --AKjeldsenGodspeed! 06:16, 15 January 2008 (EST)
- Yarr --NightFlareSpeak, mortal 06:52, 15 January 2008 (EST)
- Aye -- Edgerunner76
- Aye -- People don't necessarily know their IP tells where they live. human
Block Sequence[edit]
- Aye αmεσ (spy)
- Undecided Pinto's5150 Talk
- Nay --TK/MyTalk"Lowly" editor
- Aye Genghis De facto policy 02:24, 15 January 2008 (EST)
- Nay should be: "In egregious cases, some users may be subjected to an accelerated block policy at sysop discretion." If people feel they are being badly treated they can complain publicly at at RationalWiki:Administrative Abuse--Bobbing up 04:50, 15 January 2008 (EST)
- Neigh Hark! I hear horses approaching! -- Radioactive afikomen Please ignore all my awful pre-2014 comments. 04:57, 15 January 2008 (EST)
- No (sorry I don't understand this ayenay archaic english). We are RW after all. Ed @but not the Poor one! 06:09, 15 January 2008 (EST)
- Nay, prefer Bob_M's version. --AKjeldsenGodspeed! 06:17, 15 January 2008 (EST)
- Nay --NightFlareSpeak, mortal 06:53, 15 January 2008 (EST)
- Aye -- Edgerunner76
- Neutral for now. We already leap from hours to days if we feel it is justified, and at the day rate, blocks get long fast. If this "policy" is not clear, then I vote Aye to make it clear. human
Apathetic Voters[edit]
- Meh Voting is just so very hard. — Unsigned, by: Radioactive afikomen / talk / contribs
- Me too - living in two countries, haven't voted in either since 2002 or 2003. Ed @but not the Poor one! 06:12, 15 January 2008 (EST)
- Goat --AKjeldsenGodspeed! 06:17, 15 January 2008 (EST)
- Diebold stole my vote and fed it to the goat. human
- Indifferently apathetic on block issue as it doesn't really seem to add anything.--Bobbing up 12:14, 15 January 2008 (EST)
Death of TK[edit]
Moved to RationalWiki talk:Community Standards/TK.-αmεσ (spy) 12:34, 16 January 2008 (EST)
- Don't forget to go there and click on "observe" if you want to continue to follow the discussion. human 12:41, 16 January 2008 (EST)
Indeed! Further discussion of the action should be had on that page; however, I anticipate that we should probably refine our bannination standards, in the wake of that massive failure, on this page-αmεσ (spy) 12:42, 16 January 2008 (EST)
Learning from our Mistakes[edit]
First, this is an extremely important landmark for us. TK is the first real dissent or user problem that this Wiki has ever had, and it's very important that we manage this well, and enunciate our principles clearly and strongly, balancing our own protection against our commitment to tolerance, equality, and other good liberal virtues. Currently, I believe we've reached the right decision, but we need to all agree, and we need to know why we agree.
Whether and how we deal with persistent abuse and trolling goes to our most basic principles. We've committed ourselves to tolerance and courtesty, as a result of our negative experience at Conservapedia, and vowed not to make their mistakes. One of our early statements of these principles was to disavow permanent bans, since we view these actions, in general, as manifestations of everything wrong with Conservapedia. In dealing with TK, we have to come to terms with both our general princple of tolerance, and the outgrowth of that principle, the injunction against permabans.
Clearly, infinite-banning TK goes against our "no permanent ban" idea, but I don't think that's the end of the inquiry. If we adhere blindly to "no permanent ban," I think we tie our hands. Clearly there are times when a permanent ban would be justified; persistent, non-funny vandalism (say, by a neo-Nazi, as we've had beffore), would be cause enough, since the only reason our "no permanent ban" idea works lately is that vandals go away. But suppose they didn't; I doubt many would disagree with a permanent ban in that case. To force ourselves to turn the other cheek, would be to let our reason for being eviscerate our... being. I don't think we can blindly follow "no permanent ban."
But, I do think it's acceptable and reconcilable with the spirit of the Wiki to allow permanent bans in exceedingly rare circumstances. I see "no permanent bans" as an iteration of the deeply-rooted RationalWiki policy of courtesy, respect, and second chances. So, let's look higher up the funnel of abstraction: will there ever be cases where a permanent ban will not violate our principles of respect?
I think the answer is yes, although the examples will be rare. Assume a neo-Nazi who, banned for days and days at a time, keeps coming back. He has to be done away with permanently. Assume a user who harrasses and threatens users in public and privately, seeking to divide the Wiki against itself, who, after repeated warnings, keeps coming back. I think that's justified, and in fact mandated. This is to say, in short, that tolerance and courtesy only mean tolerance and courtesy so long as no objective cause to the contrary exists: if there's an objective reason for kicking a user, for endangering our ethics and mission, and the mob rallies around it, I think a permaban is fine.-αmεσ (spy) 13:26, 16 January 2008 (EST)
- I think it's moot - a one year block, say, is almost the same as "permanent". Especially since that is longer than the wiki has even existed. Heck, even a one month block creates a lot of breathing room. So "no permanent bans" is a fig leaf if we do occasionally resort to very long blocks. So to me, the question is more like "How long and for what might we permit "long" block periods?". Alternatively, "What are the criteria for shifting up time periods in the Fibonacci sequence? (IE, hours to days, days to months, months to years)" Just my .02 human 14:08, 16 January 2008 (EST)
- I've been looking over TK's contributions and he definately comes off as the (not so)master manipulator. I, however, had been having the "we're as bad as CP" feeling, but just looking at how much more discussion we've put into this than the CPers ever would makes me feel that we have done the right thing. --PROMHQEUS - FORETHOUGHT 14:18, 16 January 2008 (EST)
Crisis of the Mobocracy[edit]
Are we having one? What should be done to fix it?-αmεσ (spy) 13:35, 16 January 2008 (EST)
- Are we having one? Yes. TK's banning - which seemed to be justified in the most part to things going on "behind the scenes" that the rest of us had to take others at their word at speaks to a new phase in RW's evolution and a definite shift in the way things are done around here. I still maintain the TK being difficult of the talk pages is less of an issue than the Metapedia crowd - or people who keep coming back to the libertarian and NWO pages to fuck with our content. What should be done to fix it? I dunno - admit we're fucking up and find a way to make this place fun again. I know that aside from this, I'm staying away from RW's internal politics and trying to concentrate on what brought me here in the first place; goats and making fun of Schlafly. PFoster 13:41, 16 January 2008 (EST)
- Actually, the nazi and GW and NWO vandals are easy to deal with - about four clicks and the junk is gone, along with whatever rude user name they rode in on. A divisive editor rucking up things on talk pages is more difficult. The "fun quotient" here has almost paralleled TK's activity level for ages - at least, from how I see it. That might be a made up "Human Statistic" so take it with a grain of salt! human 14:11, 16 January 2008 (EST)
- The mobocracy most certainly has a crisis. As I see it, it has revealed itself to be an anarchy - and I'm not talking about the cool and revolutionary type, or even the nice and cuddly peaceful and utopian type, but just the embarrasingly inefficient and indecisive kind. Basically, we've been pussyfooting around the TK issue on and off for over six months, and then when things finally come to a head, the best thing we can do is run the most chaotic voting process I've ever seen. Clearly, none of this is ideal, so I think it's obvious that we need to drop the mobocracy in favour of clearly documented and legitimized rules and processes for handling these matters. I also think it's important that we don't just rush into this, but rather take the time necessary to discuss and work these things out. --AKjeldsenGodspeed! 14:27, 16 January 2008 (EST)
- I agree. -- Radioactive afikomen Please ignore all my awful pre-2014 comments. 14:54, 16 January 2008 (EST)
- I agree with AKj. I've written my idea of a couple of rules at user:AmesG/Command-Mints; if we'd like to consider something like it, I'd appreciate suggestions. Regardless of what IP users think, I think my suggestions are pretty fair.-αmεσ (spy) 14:55, 16 January 2008 (EST)
Standards of Behavior additions[edit]
Due to recent complaints, I would suggest the following additions to the Standards of Behavior section:
- Speculations on sockpuppeteering are strongly discouraged.
- When discussing the content of an article, stick to the content of the article and discussion of sources; avoid commenting on the actions of a particular editor unless it is particularly disruptive to the wiki or the community.
- Language that is insulting to fellow editors is discouraged as it may be construed as a personal attack.
Just a thought.
Sterilexx 17:22, 25 March 2008 (EDT)
- Also, I would encourage all to keep this at a community-wide level. I think personal back and forth accusations are just going to fall apart into another epic debate. Sterilexx 17:25, 25 March 2008 (EDT)
After spending most of the morning arguing my case, I've given up. I've said all I can say about it and am moving on. MarcusCicero 18:06, 25 March 2008 (EDT)
- What exactly can you solve though? Make a statement that its wrong to call people names? That'll happen no matter what. In articles and in the front page, standards should be kept when writing about 'the other side'. Thats my stance, at least. This is no-ones personal blog. MarcusCicero 18:13, 25 March 2008 (EDT)
- I can agree, Marcus, and understand your point. Sterile, sorry, but sometimes one needs to get on record, and not only here at RW, what the spirit of a proposal is, right? Examples are oftentimes better, and easier to understand. Name calling can and will happen in the heat of discussion and debate, and I agree no rule, or rules can really stop that, but "time outs" can. Seconds, not days or months. Having said that, RW needs to do better at stopping pack mentality and piling on, which was supposedly one of the founding tenets here, marking it as different and more tolerant than some other wikis, no? I think Sterile has made a sincere and good effort to begin this with his idea. --TK/MyTalk"Lowly" editor 18:21, 25 March 2008 (EDT)
- So you want the community to divulge into a morass? Some of us are trying to be forward looking. Those three items address a subset of your complaints, I think. Can anyone "enforce" good behavior in the lack of an arbitration process? No, of course not. But some of us would like to get back to the good faith editing that we had before instead of these heated he-said-she-said debates. The last two weeks have been horribly disruptive, and some of us want to move beyond that. I was trying to include you in that, but perhaps you aren't interested. Sterilexx 18:20, 25 March 2008 (EDT)
- I understand the point, but again, in the absence of a policy, I guarantee this will divulge into he-said-she-said. Telling me that accusations of puppeteering are good because it makes people honest I think is a productive discussion. Telling me that you were offended because on paragraph 33 on some talk page, I guess can be productive, if it is evidence of your point, but in isolation, is unproductive. And I agree, maybe it's unrealistic, but tell me why it's unrealistic then. Sterilexx 18:28, 25 March 2008 (EDT)
OK Sterile, have you ever considered that the articles running about 'before' were just damn wrong? Perhaps Ratwiki needs a change in its sails. Some of the front page news stories can be cringy. Some articles are biased to the point of no return, and throw in vague fringe stories too. This is the real issue.
With regards to supposed namecalling, what is your solution? My problem is that saying 'don't call people names' won't work unless 90% of the users here are banned. Its not good or bad but a lot of people swear, its natural. The key is trying to come up with a resolution, man to man rather than ignoring or running away from the problem. I'm skeptical whether an arbcom would work in a community this small anyway. Please don't offense to anything I have just said either. MarcusCicero 18:29, 25 March 2008 (EDT)
- I thought that I was trying to brainstorm for a solution. Thanks for encouraging the idea. Unfortunately, I don't have all the answers, and hence the discussion. I admit most of my suggestions are just suggestions, good faith suggestions. Perhaps "Escalation of a debate by repeated namecalling is discouraged." Weak, I know. But perhaps someone could word it stronger. Sterilexx 18:41, 25 March 2008 (EDT)
- Swearing may come naturally (to some) in conversation when there is little time to deliberate over the effect of your outpouring, but there is ample time to reconsider what you have written before clicking the submit button. Swearing in text appears borish and gives the appearance of being ill-educated. A bit like being the only drunk in the room. Lily Ta, wack! 18:43, 25 March 2008 (EDT)
- And I say, keep up the good fight on the offensive articles. There are things here that offend me as well. Discussion is a good thing, but I say keep it to content, not actions of editors. Sterilexx 18:45, 25 March 2008 (EDT)
- Part of the problem with conflicts, unkind remarks, personal offenses is too often due to talking past one another, rather than to. Arbcom's often elevate the trivial (except to the people directly involved/offended) to a status above and beyond the disputes actual worth. IMO, oftentimes a one on one chat with Admin, or a conference between those in a dispute, allow for more productive resolution and face saving. Saving face is often what many disputes evolve around, and should not be discounted. Yes, some will say that isn't exactly being transparent. I agree. But not everything should be, so long as the parties involved agree, and are proceeding in good faith, willing to both give and take. --TK/MyTalk"Lowly" editor 18:46, 25 March 2008 (EDT)
- Actually, we already have a standard that says something like "Personal attacks are strongly frowned upon." The problem is that we can write community standards from now until Judgement Day, but it won't solve anything unless we also have some way of enforcing those stardards. That way is sysop action, but that leads to another issue, specifically that our policies on sysops are more or less non-existent, barring a vague and pretty naïve notion that sysopship isn't really anything special at all. There's a whole range of issues that need to be addressed here. --AKjeldsenGodspeed! 18:47, 25 March 2008 (EDT)
- I agree with AKjeldsen. The hardest thing in running a site or business, big or small, is getting uniformity of action by management. Another hard-to-do thing for many of us is enforcing rules or guidelines against friends and people we have come to know....--TK/MyTalk"Lowly" editor 18:51, 25 March 2008 (EDT)
IMO, oftentimes a one on one chat with Admin, or a conference between those in a dispute, allow for more productive resolution and face saving.
That sums up everything I was trying to say. Maybe if there was a way to talk solely to the people involved, off wiki, when there's a problem it could save everyone a good deal of face. I admit I can appear abrasive, but the thing is my intentions are good and when I was been labelled a troll I didn't take kindly to it. There's give and take in these situations. I just feel that sorting things out, man to man, is how any dispute should be solved. Unless of course there's a genuine crime involved. MarcusCicero 18:53, 25 March 2008 (EDT)
- There seems to be a real conflict among board/wiki/site users about instant messaging, etc. I have always felt it was more direct, time-saving and helpful to just yank the disputed parties into a chat with a couple of admins, and hammer out a resolution than having hundreds of flaming posts whipping up the situation as can be plainly seen in things that have happened here while I was banned, and not a party to. I think Sterile, AKjeldsen and you, Marcus, all see the problem. This wiki is evolving, as it has to do, as those directly involved in the disputes with CP long ago fade away and/or evolve as well. All communities, online and off, suffer at the hands of those who love disputes and chaos. Some view making such mischief as "lulz". Most of us view it for what it is: manipulation of an entire community for personal yucks or retribution against someone. --TK/MyTalk"Lowly" editor 19:02, 25 March 2008 (EDT)
- I think it is possible to demote a person on a temporary basis while blocked, so that would not be a problem. Problem is, many sysops and bureaucrats have sock accounts, which IMO, should no longer be allowed. RW has grown beyond that sort of thing, I think. --TK/MyTalk"Lowly" editor 19:02, 25 March 2008 (EDT)
- I strongly agree. TmtamesP 19:39, 25 March 2008 (EDT)
- I can't associate myself with that interpretation, TK. I don't think anyone here, or at least very few, actively "love disputes and chaos" as you put it. Rather, I think a very fundamental issue here is that RW has become a marketplace, or perhaps more like a battlefield of ideas without at the same time developing structures that are able to handle the conflicts that will inevitably occur under such circumstances. --AKjeldsenGodspeed! 19:15, 25 March 2008 (EDT)
Proposal[edit]
In the case of a conflict, such as what recently happened:
- All participants in the conflict will be blocked for twenty minutes. And that means all, even the drive-by commenters. This is to prevent accusations of favoritism (i.e. "He was fighting, and you didn't block him!").
After the block expires...
- User A feels persecuted/harassed/oppressed/belittled by user B. Someone needs to start an "official discussion page" (in the RationalWiki namespace; talk pages keep things too personal).
- Users A and B enter a one-on-one dialogue/debate/argument/discussion (however you want to put it). NO OTHER USERS MAY PARTICIPATE, EVEN IF INVITED. Non-participants may only comment on the talk page.
- Once one special discussion is opened, NO OTHER SPECIAL DISCUSSIONS MAY BE STARTED. This is to prevent any one user from having to participate in multiple special discussions, and prioritizes everyone's attention on a single discussion, rather than thinning our focus across several.
- Participants must focus on the discussion at hand, not wander around editing other articles or commenting on other talk pages. Exceptions will be made if the edit or comment is to rectify a mistake relevant to the discussion. If participants wander anyways, they may find those pages they are editing to be locked. (Yes, this will piss other users off. But the only alternative is to block the wandering participant, which would unfairly serve the purposes of the other discussion participants; it is too similar to Conservapedia's modus operandi).
- Other users, those not participating in the special discussion, will be strongly discouraged from continuing to fight across the wiki's talk pages, in order to avoid perpetuating the conflict between other users. --Radioactive afikomen Please ignore all my awful pre-2014 comments. 19:20, 25 March 2008 (EDT)
So many rules would make all of our heads hurt, I think. Maybe what is simplest is best? Simply create a debate page where arguments are had and try to prevent the ganging up mentality that goes on here sometimes. MarcusCicero 19:36, 25 March 2008 (EDT)
- I think your proposal has merit, but possibly imposes too big a burden on the sysops for enforcement. How about the first step being a chat conference with the members having issues, with two or more sysops/bureaucrats, working to hammer out a resolution, and if they do, the opponents will post where the dispute cropped up, making nice and thus signaling their various supporters or detractors that there is nothing more to see, move along? If they refuse that path, implement your suggestion above. Past problem being that even sysops and bureaucrats will proceed with making nasty remarks and comments on other pages.......--TK/MyTalk"Lowly" editor 19:39, 25 March 2008 (EDT)
- Well, it started out as loose template for what to do... but I feared that others would raise a bunch of "what if..." objections. My approach is to account for every conceivable contingency. So, how about the basic idea, of a "special discussion page"? --Radioactive afikomen Please ignore all my awful pre-2014 comments. 19:51, 25 March 2008 (EDT)
PalMD's recent one day block of TK, with no reason, highlights the most serious problem on this wiki. TmtamesP 20:00, 25 March 2008 (EDT)
- What, PalMD's alchoholism? We tried to keep that a secret, but... --AKjeldsenGodspeed! 20:01, 25 March 2008 (EDT)
While the suggestion seems better than others, it sounds a bit too complicated/not fun. Maybe simply encourage solving personal disputes through e-mail? NightFlareSpeak, mortal 20:09, 25 March 2008 (EDT)
- No. That way there's no record of the resolution. Email elevates it to the personal. We should keep these conflicts on-site. --Radioactive afikomen Please ignore all my awful pre-2014 comments. 20:11, 25 March 2008 (EDT)
- I know that I'm totally new here and completely unestablished, but I have been lurking for a good while. While I've not participated, I have seen the arguments and problems that have led to this point. I think RA's proposal is great. It provides the "time-out" block that so often seems needed. I know it sounds unfun and complex, but I don't think the arguments seem that much fun either. A complex process might even deter this problem in the future, with editors knowing they will be held to this process. The only criticism I would make is that there's no set finish to the process (i.e., do both parties have to agree that a settlement is made? what happens in a stalemate?). Still, I think this is a fair rational way to avoid Headless Chicken Mode. Arcan 21:34, 25 March 2008 (EDT)
- Not a bad idea, but one concern that I have is that narrowing conflicts to the two most outspoken users might be counterproductive if more than two people are involved. What happens if, say, 4 users have one opinion and 3 others oppose it? Does each side select a champion to go fight by proxy for all of them in the Thunderdome? And if so, what if the resolution is unsatisfying to those "left out"? Seems like it could lead to an endless series of Thunderdome challenges. (BTW if you haven't noticed, I've adopted the name Thunderdome for RA's proposal. TWO MAN ENTER!!!! ONE MAN LEAVE!!!!)--Bayesyikes 21:48, 25 March 2008 (EDT)
- I think it would be preferable to minimize the number of conflicts in the first place. Personally, I think many of the conflicts we have had can be traced back to the fact that we have a number of editors who are, to put it delicately, rather convinced both of the validity of their own point of view on certain issues and of the necessity of stating that view as strongly as possible. It would be a tremendous improvement if we could work out a method for dealing with several points of view at once on the more contentious issues. As one possibility, I suggest taking a look at MetaWiki's article on Multiple point of view and its related articles. --AKjeldsenGodspeed! 21:55, 25 March 2008 (EDT)
A group again[edit]
There were some people who were not around the last time there was a significant policy debate on this wiki and Google tells me the last time I searched for this was January 15th, which is about right. Go read A Group Is Its Own Worst Enemy. It is probably the most useful thing to understand when talking about site politics, what failed where, and things that you cannot ignore.
In this case, the thing that must be accepted is that there are some people who have a say as to what the mission of this community is. The rest of us (myself included) are here following that vision. In the link above, it's the "Three Things to Accept" section, points 2 and 3.
Whatever policies are developed it should be with the intent to keep the site in line with this vision. Additionally, people who are attempting to disrupt the group should not be made welcome. This disruption can come in many flavors. My personal opinion is that those who can do something about it have put up with far too much for too long.
I believe that stepping back a few points we need to have a clear community goal drawn up (and ultimately its that core group that is going to say what it is). Then, sit down and figure
As it is, the "we are not CP" philosophy allows many things (from chronic annoyances to major pains) to escalate until it bothers someone enough that they leave. Unfortunately, the annoyance often stays and the core member leaves. This is ultimately a bad thing for the community.
The other thing that needs to be done is that whatever rules do come out, they need to be consistent. We mock CP for having rules after the fact and then punishing the person. It is a small step from some of the arbitrariness that exists here to CP's standards. Unfortunately, this also means much less tolerance for resident trolls.
Now go back and read A Group Is Its Own Worst Enemy again, hopefully consider what I've written here and then start at the rules. I believe it would be best if the old time sysops came up with a set of policies in private and then handed them down allowing some discussion, but ultimately being the ones to say "this is how it will be." What we have at the moment is somewhat in line with the LambdaMOO story. We need a government, and as much as we like libertarians, anarchy, or a true democracy - that doesn't work too well. --Shagie 20:27, 25 March 2008 (EDT)
- I read the whole thing. And... I'm not sure what to say, beyond the fact that we need to radically reorganize things around here. --Radioactive afikomen Please ignore all my awful pre-2014 comments. 21:10, 25 March 2008 (EDT)
- I've always been in favor of more rules (being a law talkin guy). I'd love to help draft them, but I'm really busy lately... and I'm glad we're having this discussion. Hopefully, if it's a productive resolution, we'll lure some old users back.-αmεσ (spy) 23:29, 25 March 2008 (EDT)
- "Re"-organize? How about organize to start with? Honestly, the project really just grew, and was never subject to much organization to start with, more like a group blog than anything. So maybe we should look at stuff like organization in the first place and how to apply it without taking away the fun parts. Or at least not all of them. --Kels 23:33, 25 March 2008 (EDT)
- Well, we've sure pretended to be organized all this time... --Radioactive afikomen Please ignore all my awful pre-2014 comments. 23:38, 25 March 2008 (EDT)
- We have? We don't even have a style guide - it's more like "whatever Human and Radioactive afikomen agree not to revert each other over"! human 13:57, 26 March 2008 (EDT)
- I suggest we radically unorganize things around here. Or leave them the way they are, which might amount to the same thing. Rational Ed5 or 6 edits 14:44, 26 March 2008 (EDT)
Continuing the discussion[edit]
Moved to RationalWiki_talk:Constitutional_Convention to keep the discussion in one place.
NOMA standards[edit]
I propose that all religious articles on the site be reviewed so as to ensure that they comply with NOMA standards - that is to say that they do not attempt to use scientific reasoning to attack any religious viewpoint. Alternatively, that all "religious" articles have the two valid viewpoints, NOMA and non-NOMA. Tolerance 13:09, 11 April 2008 (EDT)
- You keep using that word (Term?). I don't think it means what you think it means. NightFlareSpeak, mortal 13:31, 11 April 2008 (EDT)
- Convince me that NOMA is a valid epistemological framework first...Debate:Is Non-overlapping magisteria merely political correctness?. tmtoulouse beset 13:41, 11 April 2008 (EDT)
Protecting talk pages?[edit]
What's the policy regarding sysops protecting the talk pages of non sysops? I was recently threatened. --SHahB 23:00, 19 May 2008 (EDT)
- Yes, we don't do it, but it's also against community standards to delete content from talk pages as seen here. ThunderkatzHo! 23:05, 19 May 2008 (EDT)
- Read the edit summary. I see no mention in the standards of a time limit for archiving. Besides, it's against community standards to be a JERK as seen HERE. SHahB 23:08, 19 May 2008 (EDT)
- Thanks, Thunderkatz. And, on an unrelated note, what's the policy on people deliberately misinterpreting jokes with the intention of stirring up trouble? Jellyfish! Doors in the rudders of big ships 23:07, 19 May 2008 (EDT)
- Well seeing as it's only really been done by two people..... Pinto's5150 Talk 23:13, 19 May 2008 (EDT)
- Two? Really? Who's the second one? Jellyfish! Doors in the rudders of big ships 23:14, 19 May 2008 (EDT)
Essay Namespace[edit]
The existing description is:
- Essay namespace - used for original works by a particular editor. The article will be well marked as an original piece with the name of the submitting user and direct readers to the talk page for community reaction and discussion. The actual articles under Essay should only be edited by the original author.
Should we add a note saying that opinions or articles not liked by the mob will be censored? Tolerance 15:39, 25 May 2008 (EDT)
Define censored first. As we are under no obligation to host other people's incoherent rants (see Time Cube, for instance), why should we be compelled to host whatever they toss into the namespace? Now if you mean censoring as in editing someone's essay in order to remove content or change intent, then I can agree it shouldn't be done. --Kels 15:44, 25 May 2008 (EDT)
- (EC) In the case we're discussing, it's not a question of "not liked by the mob" Don't build straw men like that. People's objections, or at least mine and Kels' (it's a Canadian thing, you wouldn't understand...) are that this blog shouldn't be a place where people can do just anything. That "essay," and I use the terms with some reservation, is not objectionable because it expresses ideas that some don't like, but because it has no ideas, full stop. It's the ramblings of a crazy guy outside the subway. It has no basis in reality, no intellectual depth, nothing. It's a bunch of words strung together with little rhyme and no reason. Why should we keep that and not a string of random characters? PFoster 15:47, 25 May 2008 (EDT)
- The essay in question presents an idea (an illogical and inflammatory one, but an idea nonetheless), it is more than a simple string of random characters.
- Though this section's initial comment really needs a neutral wording. NightFlareSpeak, mortal 15:56, 25 May 2008 (EDT)
- Now you're overreacting, PF. The essay is not well written by any standard, and its contents are not exactly pleasant or agreeable, but it does contain coherent ideas and arguments. --AKjeldsenCum dissensie 15:55, 25 May 2008 (EDT)
- Also, should we allow libel? Hate speech? Neo-Nazi fascist rants? On another note, should we allow essays on how cute kittens are, how awesome unicorns are, or how delicious cookies are? I'd say no on all counts. But wouldn't all of those count as censorship? ThunderkatzHo! 15:51, 25 May 2008 (EDT)
- If that is the case we need to redefine what is an acceptable essay.Tolerance 15:53, 25 May 2008 (EDT)
- No, no we don't. It happens pretty rarely - case by case basis will be fine. Why do I get the feeling you're trying to provoke Headless Chicken Mode? PFoster 16:41, 25 May 2008 (EDT)
- That's really out of line, PFoster. We're trying to have a civil discussion here.--AKjeldsenCum dissensie 15:58, 25 May 2008 (EDT)
- @AKJ. You're right. Apologies to Tolerance and the Mob writ large.
- @TKatz, no, we don't... and the neo-Nazi rant problem is not really comparable to the essay that provoked this discussion. It discusses a topic we have articles related to. ħψɱɐ₦ 15:59, 25 May 2008 (EDT)
- But we do have articles related to Neo-Nazism as well. If we don't allow Neo-Nazis calling all Jews hell-bound heathens, why should we allow fundamentalist christians claiming all atheists are hell-bound heathens? ThunderkatzHo! 16:06, 25 May 2008 (EDT)
- Did someone mention Headless Chicken Mode? SHahB 15:59, 25 May 2008 (EDT)
- Re: Nazis. Because none of us support Nazism in any way, shape or form. Some/many of us are believers of one sort or another. Anyway, that atheism essay is teh hilarious. ħψɱɐ₦ 22:04, 25 May 2008 (EDT)
- Alright then, can we at least add a template to the top of the essay (and others like it that have yet to be created) that says the mob uses it purely for goats and giggles? ThunderkatzHo! 22:27, 25 May 2008 (EDT)
Agree[edit]
Disagree[edit]
Tolerance 15:39, 25 May 2008 (EDT)
That's not the question, dammit[edit]
- PFoster 15:47, 25 May 2008 (EDT)
- ThunderkatzHo! 15:51, 25 May 2008 (EDT)
- ħψɱɐ₦ 15:59, 25 May 2008 (EDT)
- Kels 16:04, 25 May 2008 (EDT)
What the heck is going on?[edit]
Radioactive afikomen Please ignore all my awful pre-2014 comments. 16:31, 25 May 2008 (EDT)
You need to go here and read the talk.--Bobbing up 16:46, 25 May 2008 (EDT)
Oversight[edit]
Discussion has emerged on the WIGO talk page about our use of oversight. It was added as part of the TK era of RW and used once to remove supposedly private information about him. It was then used one time to cover up a "sock" of an RW user on CP that outed themselves or something. And once more as a failed joke. Other than that it has never been used. Should it go the way of Checkuser? tmtoulouse beset 00:44, 20 June 2008 (EDT)
- I'd say yes, as transparency IMO is one of our most vital elements.
- If for whatever reason we should keep the feature available, we could do so but not give the user right to anybody until it needs to be used, this way everyone would know that it has happened (which is, IMO, the scariest part). NightFlareStill doesn't have a (nonstub) RWW article. 01:19, 20 June 2008 (EDT)
- I say kill it. And that's speaking as one of the two (?) people who theoretically have access to it right now. We don't need it, don't want it, won't use it. ħuman 01:48, 20 June 2008 (EDT)
- Agreed; we should remove oversight entirely. The very nature of oversight violates the spirit of openness and trust on RationalWiki. Radioactive afikomen Please ignore all my awful pre-2014 comments. 20:45, 20 June 2008 (EDT)
- But it's my favorite pastime!!!! Oh wait, I don't even have it.........actually come to think of it no one does................or more succinctly "it is gone." tmtoulouse beset 20:45, 20 June 2008 (EDT)
- You mean you removed it, like checkuser? Radioactive afikomen Please ignore all my awful pre-2014 comments. 22:08, 20 June 2008 (EDT)
- Yes, it is gone; the only one that can make things magically disappear from the wiki is me. tmtoulouse beset 22:10, 20 June 2008 (EDT)
Do you mind if I ask (and for the sake of everyone else who's not sure) - what exactly does/did Oversight do? What's its function? UchihaKATON! 22:15, 20 June 2008 (EDT)
- Hides a revision. So let's say I put something horrible and libelous and evil on a page, and someone comes by and deletes it. It is still easy to find in the page histories and diffs; oversight can "hide" the revision so that only those with oversight can see it. tmtoulouse beset 22:18, 20 June 2008 (EDT)
Discussing long term blocks[edit]
Highly Selective Quote Mining of Recent Arguments![edit]
I provide this section only to give a quick reference to what everyone has been arguing in the recent discussion.
- We have had the discussion many times, and the evidence is also obvious. I unblocked user names that were blocked for a year, after they were a few days old. Most of these editors don't return anyway, and if they want, they just create new editor names (hence the Jezuz crew). No need for lengthy blocks, just revert wandalism and block for a day or so. They come back, we do it again. It's easier to revert than it is to do what they are doing, so they waste their energy. ħuman 01:14, 26 June 2008 (EDT)
- "Most of these editors don't return anyway" -- Then what possible reason could you have for unblocking them? Is it symbolic, perhaps?
- "If they want, they just create new editor names" -- No, as I explained above, many vandals cannot do this, so a single block is effective. Your explanation of this is a bit of an oversimplification.
- "No need for lengthy blocks, just revert wandalism and block for a day or so" -- Why block at all? Why don't we tell our editors to just revert constantly until they get bored and wander off? You are assuming that all vandalism is noticed immediately (it plainly isn't), and that devoting time to clearing up unnecessary mess doesn't have any impact on the running of the wiki (it does). ↳ ↑ ⇨ ↕ ▽ ← 01:29, 26 June 2008 (EDT)
- A review of the block logs will show that this person/these people do in fact reuse old accounts, so I can't at all see why we should make it easier for them by unblocking them. That seems to send a very strange message for no purpose at all. Let's at least let them go through the minimal amount of trouble to make new accounts when they want to vandalize us. --AKjeldsenPotential fundamentalist! 18:02, 26 June 2008 (EDT)
...
- Chaos, you should go read our block policy. The blocks I undid were in clear violation of the fibonacci sequence. Also, you should, please, get a grip. Ten rollbacks a day ain't exactly destroying the site. It takes about 30 seconds to undo a wandal's work and send them packing for the day. Long ago, I found myself in the middle of the night confronting our first real wandal, armed only with the fibonacci sequence in minutes to slow them down. We agreed to allow using hours for real vandals, or days if they were hard core. I have now spent more time typing this comment than has been needed to "fend off" wandals over the last several days. And, please, if you really have an issue with this, user talk pages are not the place to discuss it, it should be at talk:guidelines, wherever that is, so people interested in policy will notice it. I am also changing the section header so people can see what the heck is being discussed here on RC. Thanks. ħuman 22:22, 26 June 2008 (EDT)
- May I suggest that you copy everything from the various places we were discussing this? Otherwise, you are asking us to duplicate our comments out of context. ħuman 00:48, 27 June 2008 (EDT)
- Huh? Most of these are given fairly good context, or are self-explanatory. They are all from here, if anyone wants to see what I left out. ↳ ↑ ⇨ ↕ ▽ ← 00:53, 27 June 2008 (EDT)
My actual proposals[edit]
Our current policy of blocking vandals leaves much to be desired. In particular, the Fibonacci sequence guidelines are extremely unhelpful -- they are essentially arbitrary, and most of the time only seem to make it unclear how long vandals are expected to be blocked for, making such block lengths a matter of personal judgement for the sysop concerned. I hereby propose we come up with a set of clear and specific rules that address block lengths -- ideally, ones that remove personal judgement from the equation. I'm also in favour of longer/indefinite blocks for outright vandals (Grawp, etc) on the basis that it is pointless to make it so easy for them. I agree that the work of tidying up after them is minimal, but I also think it is bizarre to create this work for no reason -- such as when Human recently unblocked the original Jezuz, and he quickly took advantage of the new account to begin his next spree.
And I agree with AK (he says most of this better).
And I'm not ranting. ↳ ↑ ⇨ ↕ ▽ ← 00:35, 27 June 2008 (EDT)
- First, in order to alter what we do, I think it is incumbent on the discursee to present a problem that needs to be solved. Data, please? ħuman 00:46, 27 June 2008 (EDT)
- Don't be obtuse. The problem is that our guidelines are arbitrary and do not give sysops enough specific information about block decisions. ↳ ↑ ⇨ ↕ ▽ ← 00:53, 27 June 2008 (EDT)
- I am sorry, I am not being obtuse. YOU are. You created this section with a non-intuitive header. You copied some parts of the discussions you started elsewhere here. You have not even started the section with a clear statement as to what you want to change.
Seriously, that's enough. Let's all calm down and relax for a moment.
I think it's true to say that the Fibonacci sequence, while very elegant in principle, is almost never observed. --Bobbing up 10:21, 27 June 2008 (EDT)
- I'd say that is indeed true. I also think that while Fibonacci does have a certain point behind it, it may not be the best policy towards people who are demonstrably here only for unfunny wandalism, such as the "Jezuz X" or the "My password is..." people. In those cases, a more appropriate policy might be something along the lines of "Kill it with fire!!!" or "Blood for the blood god!" or somesuch. If you know what I mean. --AKjeldsenPotential fundamentalist! 11:10, 27 June 2008 (EDT)
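(For readers unfamiliar with the guideline being argued over: the idea is that each successive block of the same offender lasts for the next Fibonacci number of time units - 1, 1, 2, 3, 5, 8, and so on - with minutes as the usual base unit. A minimal sketch of that escalation, purely illustrative of the math rather than any actual wiki software:)

```python
def fibonacci_block_lengths(offenses):
    """Return escalating block lengths (in minutes) for each repeat
    offense, following the Fibonacci sequence: 1, 1, 2, 3, 5, 8, ..."""
    lengths = []
    a, b = 1, 1
    for _ in range(offenses):
        lengths.append(a)  # this offense's block length
        a, b = b, a + b    # advance the sequence
    return lengths

# e.g. a vandal's fifth offense would earn a 5-minute block
print(fibonacci_block_lengths(5))  # [1, 1, 2, 3, 5]
```

(The practical complaint above is visible in the numbers: the early terms are trivially short, and the sequence says nothing about when to switch from minutes to hours or days - that switch is left entirely to sysop judgement.)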
I have an aversion to long blocks for various reasons. I am just not strong enough in my own self I guess to overcome a reactionary element of my psyche (Dianetics is in the mail). But perhaps some kind of compromise can be reached. An inconveniently long block for the vandal but not so long it offends my senses. Something like 10 days, or whatever but in that range. If it really is a return vandal using a single IP or account, 5 minutes of vandalism every 10 days is not very fulfilling and should lead them to giving up and minimizing the cleanup. But the blocks expire in reasonable time. tmtoulouse beset 13:00, 27 June 2008 (EDT)
- Correct me if I'm wrong, but I was under the impression that block length was originally kept very short to somehow be "different" (read: better) than CP. Wasn't it that first RW didn't block at all, then only IP addresses, then only clear vandals, and so on? Now, I think things have changed much from those times. Do CP Sysops really care about the length of our blocks? Do they come here or veiledly say there: "Haha, you RW are no better than us, you block too!", and if they did, do we really care? There is no moral high ground in being "tolerant" to vandals, even if it were our friend (?) Icewedge. (Editor at) CP:no intelligence allowed 13:10, 27 June 2008 (EDT)
- That is part of it, the other part is that the harsher we are on vandals the more we will attract. They are looking for a reaction, it is part of the game. If it was totally up to me we wouldn't block at all, just rollback their edits till they give up and go away. I am actually working on altering the mediawiki code to allow for a "vandal" user group that will limit their abilities including the rate at which they can edit. It could be dropped to one edit every minute or less. Those are the kinds of solutions I prefer. But for the time being I would like to arrive at a solution that doesn't lead to a page full of 1 year long blocks that just attract more vandals and more effort. tmtoulouse beset 13:16, 27 June 2008 (EDT)
- Do you think - just asking, you as site admin know better - that they first look at our page on blocked users and get motivation from there? Isn't rollback already the reaction they look for? Is a 1 year ban more motivating to come back than a 1 day ban? I ask 'cause I never was a vandal, just a sock. (Editor at) CP:no intelligence allowed 13:21, 27 June 2008 (EDT)
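(The rate-limiting idea floated above - a "vandal" group whose members can only save an edit every so often - can be sketched in a few lines. This is an illustrative toy, not the actual MediaWiki change being described; MediaWiki's real throttling mechanism is its `$wgRateLimits` configuration, and the class and names below are invented for the example:)

```python
import time

class VandalRateLimiter:
    """Toy sketch of per-user edit throttling: members of a 'vandal'
    group may only save one edit per `interval_seconds`."""

    def __init__(self, interval_seconds=1800):
        self.interval = interval_seconds
        self.last_edit = {}  # username -> timestamp of last accepted edit

    def try_edit(self, user, now=None):
        """Return True if the edit is accepted, False if it is throttled."""
        now = time.time() if now is None else now
        last = self.last_edit.get(user)
        if last is not None and now - last < self.interval:
            return False  # too soon: edit rejected
        self.last_edit[user] = now
        return True

limiter = VandalRateLimiter(interval_seconds=1800)  # ~2 edits per hour
print(limiter.try_edit("Jezuz", now=0))     # True  (first edit accepted)
print(limiter.try_edit("Jezuz", now=60))    # False (only a minute later)
print(limiter.try_edit("Jezuz", now=1800))  # True  (interval elapsed)
```

(The appeal over long blocks is visible here: the account stays technically unblocked - it can still "plead a change of heart" - but a vandalism spree becomes impossible.)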
Button[edit]
Sorry to go off topic for a bit, but I have to reply to Human.
- "You created this section with a non-intuitive header" Not important. I explained my proposals in the text.
- "You copied some parts of the discussions you started elsewhere" For convenience. I really don't see your issue with this.
- "Furthermore since you are not a sysop, this work does even devolve upon you" You're suggesting only sysops are allowed to discuss policy? Clearly that is not the case.
- "What is your issue here?" I've explained this several times now, but here goes again:
- I think the Fibonacci sequence is pointless and not specific enough.
- I think vandals should be blocked for longer. Ten days is fine by me.
I'm not sure why you keep insisting that I haven't explained myself -- it seems obvious that I have. Have I offended you in some way? If so, I do apologise -- this is all just policy, after all, nothing personal. ↳ ↑ ⇨ ↕ ▽ ← 13:36, 27 June 2008 (EDT)
Seriously, that's enough. Let's all calm down and relax for a moment.
I hate to say it, but Human sounds like he's about to ask Jellyfish/Chaos for a writing plan... ThunderkatzHo! 13:39, 27 June 2008 (EDT)
- Nah :D We Chaoses do not plan things. We just happen. ↳ ↑ ⇨ ↕ ▽ ← 13:45, 27 June 2008 (EDT)
- Two quick replies to Chaos - 1. no, no offense, sorry if it seemed like there was, and 2. Non-intuitive headers are not a good idea for "important" topics - (I realize this one is going to say "button", though! At least it's RW talk CS that's being edited though.) people watching RC see something like "breaking news" and think, "eh, I don't care about that" but if they see "block policy" they may perk up and come see what's being discussed. ħuman 16:07, 27 June 2008 (EDT)
- Oh right, I see now; I hadn't thought of that. I'll try and stick to more formal section titles when we discuss important stuff, then. ↳ ↑ ⇨ ↕ ▽ ← 16:10, 27 June 2008 (EDT)
- Actually, the correct plural of "chaos" - to the extent that it can be pluralized - would be "chaoi". --AKjeldsenPotential fundamentalist! 10:37, 28 June 2008 (EDT)
I have attempted to implement a rube goldbergesque solution to this problem. I have created a vandal user group that should pretty much shut down a vandal without the need to block them permanently or for long periods of time. It still allows them 2 edits an hour if they want to plead a change of heart or whatever. My suggestion is for sysops to do a short term block on a vandal and ask a crat to move them to the vandal group. Then just let it expire and move on. tmtoulouse beset 14:34, 28 June 2008 (EDT)
- That's a neat solution indeed. But is there a way to set it up so that ordinary sysops can put people in the vandal group? There might not always be a crat on hand, after all, and it just seems more efficient that way. ↳ ↑ ⇨ ↕ ▽ ← 14:38, 28 June 2008 (EDT)
- I am working on that now. tmtoulouse beset 14:43, 28 June 2008 (EDT)
- I should've known you'd already be on it :) ↳ ↑ ⇨ ↕ ▽ ← 16:23, 28 June 2008 (EDT)
- Actually, even if you can't "automate" it or make it so any sysop can move them into the group, what any sysop can do is this: block for, say, 24 hours, then leave a note on a crat's page asking to add them to the vandal group and unblock them after they do it. That should avoid any accidental gap between the block for wandalism and the group change. ħuman 17:22, 28 June 2008 (EDT)
- It's lovely - it works. ContribsTalk 19:09, 28 June 2008 (EDT)
- Yes, I noticed -- I wish I was a crat so I could play with it for a bit. And, more importantly, I think, if I'm not much mistaken, we just managed to come up with a compromise that satisfied both sides of the debate. Wikipedia has nothing on us :) ↳ ↑ ⇨ ↕ ▽ ← 19:11, 28 June 2008 (EDT)
- Not "we" - TMT! ContribsTalk 19:16, 28 June 2008 (EDT)
- Well, yes, but I still think it reflects RW's niceness as well. ↳ ↑ ⇨ ↕ ▽ ← 19:18, 28 June 2008 (EDT)
"attempting to promote a radical agenda "[edit]
That means I have to leave the blog, according to some definitions of "radical." We need to refine this. PFoster 12:07, 2 December 2008 (EST)
- I added to Javascap's new suggestion, so that it now defines radical as offensive to the vast majority of the RationalWiki community. Just my suggestion of what he means, and an attempt to distinguish our radicalism ("death to creationists!") from those radicalisms we want no part of.-caius (ninja) 12:16, 2 December 2008 (EST)
- "Please, don't be a dick. Even though we encourage those with alternative points of view to join this wiki and share their ideas with us, we ask and encourage you to consider how your actions look to other people. Hate speech or attempting to promote an agenda (for ANY group!), radical to the point of offending 99.9% of the community, will result in the article being deleted, and if you persist in recreating the article, you may be removed from this site for a period of time."
I have some comments about this as written - it starts off talking about general behavior, then tries to define "unacceptable", then suddenly shifts gear to talking about "articles."
My comments on each "section":
The "don't be a dick" section seems sensible, although perhaps the word "dick" could be sanitized a bit.
The "unacceptable" part, to me, drifts into weird - 99.9% = everyone here. If we're gonna Schlafly stat it, why not 80-90% or so? Also, there really isn't a way to define what constitutes a percentage of the "community". I would reword that section "Hate speech or attempting to promote an agenda that does not relate to RationalWiki's mission anywhere but on talk pages will almost certainly be removed or archived rapidly."
As far as "articles" - we don't need to "warn" people, just rewrite or delete them. That's what the "article", or main, space is all about. We're always changing things we have written and arguing about whether things "belong". The same would, of course, apply to spam, hate, or agenda-pushing. It's really not that hard to make trash into a decent article by overwriting it one way or another. (Feminism comes to mind, dig out the diffs to before my first edit...) | https://rationalwiki.org/wiki/RationalWiki_talk:Community_Standards/Archive3 | CC-MAIN-2019-43 | refinedweb | 11,556 | 71.44 |
Opened 4 years ago
Closed 3 years ago
Last modified 2 years ago
#21648 closed Cleanup/optimization (fixed)
Deprecate "is_admin_site" option of auth.views.password_reset()
Description
Hello,
Here the option "is_admin_site" is not documented and its name doesn't describe what it is doing.
According to and it uses the host of the request instead of the site framework, therefore it should be named something like "use_domain_from_request".
Kind regards,
Change History (11)
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
Is the use case for that parameter still accurate? Shouldn't we deprecate it?
comment:3 Changed 3 years ago by
comment:4 Changed 3 years ago by
comment:5 Changed 3 years ago by
A use case is if you are using just one Django instance (=> only one settings.py => only one settings.SITE_ID) and you want to use django CMS (which relies on django.contrib.sites) to host a website on xyz.domain.tld and you have several subdomains abc.domain.tld, xyz.domain.tld, etc. which do not use django CMS but only provide a (branded) login where users can also click the "forgot password" link and in the resulting email the referring URL (e.g. xyz.domain.tld) should be used.
django CMS:
/cms/models/pagemodel.py
from django.contrib.sites.models import Site

site = models.ForeignKey(Site,
                         help_text=_('The site the page is accessible at.'),
                         verbose_name=_("site"))
cms.models.pagemodel.Page
site (django.contrib.sites.models.Site instance) – Site to put this page on
Be sure to have 'django.contrib.sites' in INSTALLED_APPS and set SITE_ID parameter in your settings: they may be missing from the settings file generated by django-admin depending on your Django version and project template.
comment:6 Changed 3 years ago by
Do you have each subdomain setup as a site?
comment:7 Changed 3 years ago by
No I've got:
- One Django instance with one settings.py which contains
SITE_ID = 1
- Django CMS installed and INSTALLED_APPS containing django.contrib.sites The django CMS pages are only rendered for
- Several subdomains pointing to the same Django instance. nginx is redirecting / to /admin on these subdomains as we only allow login and password reset but do not display any django CMS content there: abc.domain.tld, def.domain.tld, ghi.domain.tld, jkl.domain.tld
- In the templates I use a templatetag which returns different configurations (CSS, etc.) for the subdomains in use based on the host:
host = context['request'].get_host()
I could remove the single sites object in the database and then password_reset would return RequestSite(request).domain instead of Site.objects.get_current().domain but the sites object in the database is used at other places in the code. So this is not an option. I will get rid of django CMS in this Django instance in the near future so I will be able to remove django.contrib.sites from INSTALLED_APPS - but that will just be a solution for me at that point in time.
comment:8 Changed 3 years ago by
Thanks for the details. Do you think removing the option isn't acceptable then? Any sense whether this sort of setup is common? Would updating your site to use the sites framework be too much work?
comment:9 Changed 3 years ago by
Good question. I think that updating to the sites framework would be a lot of overhead for us as we just provide a branded site based on the URL. I don't know if this is a common setup but probably it's worth thinking about why the domain defined for a site shall be used for the password reset link in the email and if it would be sufficient to just use request.get_host()
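As a rough sketch of the choice such a flag selects between — the domain of the incoming request versus the single domain configured in the sites framework — consider the following plain-Python stand-in. This is not Django's actual implementation; Request and SITE_DOMAIN here are hypothetical placeholders for request.get_host() and Site.objects.get_current().domain.

```python
class Request:
    """Stand-in for a Django HttpRequest; only get_host() is modeled."""

    def __init__(self, host):
        self._host = host

    def get_host(self):
        return self._host


# What Site.objects.get_current().domain would return for the one SITE_ID.
SITE_DOMAIN = "www.domain.tld"


def reset_link_domain(request, use_domain_from_request):
    """Pick the domain to embed in the password reset email."""
    if use_domain_from_request:
        # Per-subdomain branding: the email links back to the host the
        # user actually visited, e.g. xyz.domain.tld.
        return request.get_host()
    # Default: the single domain from the sites framework.
    return SITE_DOMAIN


print(reset_link_domain(Request("xyz.domain.tld"), True))   # xyz.domain.tld
print(reset_link_domain(Request("xyz.domain.tld"), False))  # www.domain.tld
```

With the flag set, each branded subdomain gets reset links pointing back to itself; without it, every client is funneled to the one configured site.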
Yes, it seems poorly named, but I'm not sure it's worth renaming due to backwards compatibility concerns at this point. The parameter was added in the merge of magic-removal pre-1.0, but the functionality didn't appear to be used or tested until [9305c0e1]. | https://code.djangoproject.com/ticket/21648 | CC-MAIN-2017-39 | refinedweb | 678 | 65.52 |
CCD Spectrograph Core - Fast 16-bit
What's new
The CCD Transmission Spectrograph project was cool enough, but it lacked two things - the ability to digitize all 3694 pixels, and the ability to do so at 16-bit resolution. The solution to the first problem was found in the 8-bit fast spectrograph core project. This new project is the solution to both problems - width and depth. An AD7667 16-bit 1 MSPS converter and an ATmega1284 running as an Arduino. Using the 16-bit 1 MSPS converter it can digitize a frame in 16mS, the same as the 8-bit fast version.
Hardware
The circuit is built on a 100mm x 100mm PCB, and I used a 48-pin SchmartBoard component carrier for the AD7667. It makes soldering the 48-pin TQFP a little easier than soldering it to the PCB directly. The AD8021 amplifier is soldered on the PC board directly underneath the ADC. There is a power supply that provides digital +5V and analog ±5V from a single 7 to 12VDC input.
This schematic has a different look than those I usually do because this is the "real" schematic from which the board is made. I normally translate the schematic into an easier to read version using software that I don't use for making boards, but this was a little too complicated for me to keep the two schematics in sync.
The ATmega1284 has to generate clocks to drive the CCD and the ADC. On port D there are just enough lines to do that and still have serial communication with the host. Three lines for the CCD, and three lines for the ADC. Timer 2 is used to generate the MCLK signal on pin OC2A so no PWM can be used on Timer 2. There are four more PWM pins on the unused port B.
The logic analyzer screen capture below shows the clocks generated and how they are synchronized with the MCLK. ICG and SH are the other CCD clocks while CNVST, RD, and BYTESWAP are the ADC control lines.
CCD Sensor
The sensor is a Toshiba TCD1304AP, a 3648 pixel linear CCD sensor which operates on a single voltage (3.0V to 5.5V). The sensor is driven directly by the outputs of the ATmega1284.
The Toshiba TCD1304 data sheet is illegible and incomplete rubbish. It offers no insight into the workings of the device. From poking and prodding this thing I think I have a handle on how it operates. The TCD1304 is always clocking the shift registers if the MCLK is running. It is only when you hit it with the combination of ICG LOW and SH HIGH and then LOW that it dumps the photodiodes onto the shift registers. You then have 14776 MCLK cycles (3694 total pixels) worth of data, and it's back to empty pixels. That is a good thing, though. It keeps dark signal from building up in the output shift registers.
The data sheet shows the SH clock running at 1/2 the pixel rate. Judging by the behavior of the output, I would speculate that the SH clock alternately puts pixel data from the two shift registers on the output buffer's gate. The MCLK determines which pixel pair that would be in the full stream. A complete assumption.
Video Buffer
The range and polarity of the CCD output are wrong for the ADC. The signal starts out at 2.6V and goes toward ground with increasing exposure. At saturation, the output voltage is 480mV. That is a 2.2V signal span between the background noise and saturation. The ADC needs the voltage to start near ground and work its way up to 2.5V max. I say near ground because we need the noise floor to be accessible to the ADC so it can be removed digitally to calibrate the signal.
Analog Devices recommends the AD8021 amplifier, and I see no point in trying to outsmart them. The AD8021 is wired as a unity gain inverting amplifier. The 2.6V offset is removed by supplying the non-inverting input with the adjustable voltage from a pot.
ADC
The ADC is the Analog Devices AD7667 16-bit 1 MSPS Unipolar ADC. It has both serial and parallel interfaces, but I chose parallel for speed. The data is taken from the upper 8 bits using the BYTESWAP signal to select first the low byte and then the high byte. Raw 16-bit data is stored in RAM in the microcontroller for later use.
The input range is 0 to 2.5V, of which we use 2.2V for signal.
There are capacitors all over this circuit. They are necessary to keep the digital noise from getting in the analog circuits and messing up that beautiful 16-bit reading. I used 10uF ceramics, but the board will take more or less. The sensitivity of the 16 bit converter is 2.5V / 65536 = 38µV per ADU. The noise doesn't have to be below that, but the SNR has to be high enough that the signal doesn't get lost.
Microcontroller
The microcontroller is an Atmel ATmega1284. The ATmega1284 has 16kB of RAM, and our video buffer uses just about half of it. The code is very simple, and only uses around 4kB of flash.
The 16MHz ATmega1284 is programmed with an Arduino bootloader and the serial connection allows you to program it from within the Arduino IDE. The CCD, ADC and serial use two full ports, C and D, but ports A and B are available for other uses. That includes four PWM outputs on port B and 8 analog inputs on port A.
Buffer Amp Power Supply
Ok. I originally designed this with the MAX232 as a charge pump, and when I was only powering the opamp, that worked fine. When I started powering all of the analog circuitry from it the MAX232 failed to keep up. I had to redesign the power supply mid-stream. I changed to a 78L05 running from the input supply and a MAX660 running from the digital 5V supply.
The MAX660 is a charge pump power supply that puts out approximately (-)Vin, where Vin is the 5V line. The opamp uses ±5V while the ADC and CCD use +5V. Digital and analog +5V supplies are separate. The analog +5V is derived from the input power source using an 78L05. In my case the input voltage was 5.02V and the output was -4.95V.
Before Using
There is a 2-pin header on the board labeled "CAL". Don't put the jumper on until you have adjusted the pot labeled "OFS" to set the inverting amplifier output offset to just a bit above ground. The pot will allow you to adjust the voltage to just under ground, too, and that would be catastrophic if the jumper was installed. The ADC has protection diodes to short out voltages more than 0.3V below ground. Excessive current would flow. The first casualty would be the very expensive ADC.
In fact, it is a good idea never to adjust the offset pot with the jumper installed.
To adjust the offset, pull the "CAL" jumper if installed and connect an oscilloscope to the pin farthest from the ADC. That is the amplifier output. Adjust the "OFS" pot to set the amplifier lowest output voltage to just over ground by 100mV or so. Then put the jumper on and test.
Microcontroller Firmware
The Arduino spectrograph software is written in C in the Arduino IDE. Line readout consists of driving the CCD and ADC clocks and reading the output of the ADC. With the AD7667 we can comfortably digitize one pixel every 4.5µS. The 888.88kHz MCLK and the pixel read are coordinated to be synchronous. Each pixel takes 4 MCLK cycles, but the MCLK is a free-running clock generated by Timer2, which is synchronized at the start of each line. There are 18 CPU cycles in one MCLK. Not that we care.
The instructions in the "readLine" function are arranged to use exactly the right number of cycles for the MCLK speed. If you speed up the MCLK, and leave the CPU clock the same, you must also remove cycles from this routine. If you slow down the MCLK, you must add cycles here. There are 72 CPU cycles in one pixel time. This we care about. We have to use every one of those cycles during the readout process to keep the clocks from skewing and messing up the association between our pixel number and the physical pixel on the CCD.
#include <util/delay_basic.h>

#define RD       (1<<2)
#define CNVST    (1<<3)
#define BYTESWAP (1<<4)
#define ICG      (1<<5)
#define SH       (1<<6)
#define MCLK     (1<<7)

// Full frame, including dark pixels
// and dead pixels.
#define PIXEL_COUNT 3691

// Ports and pins
#define CLOCKS     PORTD
#define CLOCKP     PIND
#define CLOCKS_DDR DDRD
#define DATA_PINS  PINC
#define DATA_PORT  PORTC
#define DATA_DDR   DDRC

// 10mS exposure time.
#define EXPOSURE_TIME 10

// Initial clock state.
uint8_t clocks0 = (RD + CNVST + ICG);

// 16-bit pixel buffer
uint16_t pixBuf[PIXEL_COUNT];

char cmdBuffer[16];
int cmdIndex;
int exposureTime = EXPOSURE_TIME;
int cmdRecvd = 0;

/*
 * readLine() Reads all pixels into a buffer.
 */
void readLine()
{
    // Get an 8-bit pointer to the 16-bit buffer.
    uint8_t *buf = (uint8_t *) pixBuf;
    int x = 0;
    uint8_t scratch = 0;

    // Disable interrupts or the timer will get us.
    cli();

    // Synchronize with MCLK and
    // set ICG low and SH high.
    scratch = CLOCKS;
    scratch &= ~ICG;
    scratch |= SH;
    while (!(CLOCKP & MCLK));
    while ((CLOCKP & MCLK));
    TCNT2 = 0;
    _delay_loop_1(1);
    __asm__("nop");
    __asm__("nop");
    __asm__("nop");
    __asm__("nop");
    __asm__("nop");
    CLOCKS = scratch;

    // Wait the remainder of 4uS @ 20MHz.
    _delay_loop_1(22);
    __asm__("nop");
    __asm__("nop");

    // Set SH low.
    CLOCKS ^= SH;

    // Wait the remainder of 4uS.
    _delay_loop_1(23);

    // Start the readout loop at the first pixel.
    CLOCKS |= (RD + CNVST + ICG + BYTESWAP + SH);
    __asm__("nop");

    do {
        // Wait a minimum of 250nS for acquisition.
        _delay_loop_1(2);

        // Start the conversion.
        CLOCKS &= ~CNVST;
        CLOCKS |= CNVST;

        // Wait a minimum of 1uS for conversion.
        _delay_loop_1(4);

        // Read the low byte of the result.
        CLOCKS &= ~RD;
        _delay_loop_1(4);
        *buf++ = DATA_PINS;

        // Setup and read the high byte.
        CLOCKS &= ~(BYTESWAP);
        _delay_loop_1(4);
        *buf++ = DATA_PINS;

        // Set the clocks back to idle state
        CLOCKS |= (RD + BYTESWAP);

        // Toggle SH for the next pixel.
        CLOCKS ^= SH;
    } while (++x < PIXEL_COUNT);

    sei();
}

/*
 * clearLine() Clears the CCD.
 */
void clearLine()
{
    int x = 0;

    // Set ICG low.
    CLOCKS &= ~ICG;
    CLOCKS |= SH;
    _delay_loop_1(14);

    // Set SH low.
    CLOCKS ^= SH;
    _delay_loop_1(10);

    // Reset the timer so the edges line up.
    TCNT2 = 0;
    CLOCKS |= (RD + CNVST + ICG + BYTESWAP + MCLK);

    do {
        CLOCKS ^= SH;
        _delay_loop_1(10);
    } while (++x < PIXEL_COUNT);
}

/*
 * sendLine() Send the line of pixels to the user.
 */
void sendLine()
{
    uint16_t x;

    for (x = 0; x < PIXEL_COUNT; ++x) {
        Serial.print(x);
        Serial.print(",");
        Serial.print(pixBuf[x]);
        Serial.print("\n");
    }
}

/*
 * setup()
 * Set the data port to input.
 * Set the clock port to output.
 * Start timer2 generating the Mclk signal
 */
void setup()
{
    delay(10);
    CLOCKS_DDR = 0xff;
    CLOCKS = 0; //clocks0;
    DATA_DDR = 0x0;
    Serial.begin(115200);

    // Setup timer2 to generate an 888kHz frequency on D10
    TCCR2A = (0 << COM2A1) | (1 << COM2A0) | (1 << WGM21) | (0 << WGM20);
    TCCR2B = (0 << WGM22) | (1 << CS20);
    OCR2A = 8;
    TCNT2 = 0;
    delay(10);
}

/*
 * loop()
 * Read the CCD continuously.
 * Upload to user on switch press.
 */
void loop()
{
    int x;
    char ch;

    // If we got a command last time, execute it now.
    if (cmdRecvd) {
        if (cmdBuffer[0] == 'r') {
            // Send the readout to the host.
            sendLine();
        } else if (cmdBuffer[0] == 'e') {
            delay(10);
            Serial.write(cmdBuffer);
            // Set the exposure time.
            sscanf(cmdBuffer + 1, "%d", &exposureTime);
            if (exposureTime > 1000) exposureTime = 1000;
            if (exposureTime < 1) exposureTime = 1;
        }

        // Get ready for the next command.
        memset(cmdBuffer, 0, sizeof(cmdBuffer));
        cmdIndex = 0;
        cmdRecvd = 0;
    }

    // Clear the CCD.
    clearLine();

    // Integrate.
    delay(exposureTime);

    // Read it for real.
    readLine();

    // See if the host is talking to us.
    if (Serial.available()) {
        ch = Serial.read();

        // If char is linefeed, it is end of command.
        if (ch == 0x0a) {
            cmdBuffer[cmdIndex++] = '\0';
            cmdRecvd = 1;
        // Otherwise it is a command character.
        } else {
            cmdBuffer[cmdIndex++] = ch;
            cmdRecvd = 0;
        }
    }
}
Operation
There are two commands. One, "r\n", causes the spectrograph to spit out a csv organized set of data with pixel number followed by the video data for that pixel. The CCD is continuously read, with the results stored in RAM. When the "r\n" command is received, the data from RAM is shipped serially at 115.2kBaud to the host.
The other command "e<nnn>\n" sets the exposure time in milliseconds, that is "e100\n" sets the exposure time to 100mS.
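On the host side, a hypothetical routine for the "r\n" readout could parse the pixel,value lines like this. A string stands in for the 115.2 kBaud serial stream here; a real host would read the lines from the serial port (e.g. with a library such as pyserial), and the sample values below are made up.

```python
def parse_frame(text):
    """Parse the CSV readout ("pixel,value" per line) into a dict
    mapping pixel number to its 16-bit sample."""
    frame = {}
    for line in text.strip().splitlines():
        pixel, value = line.split(",")
        frame[int(pixel)] = int(value)
    return frame


# Three illustrative lines as the spectrograph would send them.
raw = "0,1021\n1,1019\n2,48230\n"
frame = parse_frame(raw)
print(len(frame), frame[2])  # 3 48230
```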
Host Software
Amplitude Calibration
- Dark Frame - A frame which contains all signal not generated by light input.
- Flat Field Frame - A frame which contains only signal generated by light input, normalized to one.
- Science Frame - The reason you are doing all of this - the final result.
The pixels in a CCD line sensor are photo diodes. They are all created at the same time, and so should be fairly consistent from one to the next. Fairly. Maximum 10% non-uniformity. To do photometry you don't need fair consistency, you need certainty. That is where calibration comes in.
Dark Frames
Standard darks and flats are used to calibrate the amplitude readings. A little explanation is in order. A dark frame is a frame that contains all of the signal present in the output that is not based on light input. You must use the same integration time, readout time, and temperature for each dark, each flat, and each science frame. The individual dark frames are median combined. That gets rid of much of the noise, leaving just the dark signal. You use as many dark frames as practical to generate the master dark frame. The noise goes down as the square root of the number of frames. If Xn is the noise in one frame, the noise in 2 frames is Xn / √2, or Xn / 1.414.
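The median combination can be sketched in a few lines - illustrative pixel values here, not real sensor data:

```python
import statistics


def median_combine(frames):
    """Build a master frame: for each pixel position, take the median
    across all frames, suppressing random noise."""
    return [statistics.median(pixels) for pixels in zip(*frames)]


# Three short "dark frames" of three pixels each.
darks = [
    [10, 12, 11],
    [11, 13, 10],
    [12, 11, 12],
]
print(median_combine(darks))  # [11, 12, 11]
```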
Flat Field Frames
A flat field frame is a frame that shows only the signal that is based on the light input. It is the opposite of a dark frame. You make a flat field by turning on the lamp with no filter in place, and capturing several frames. Each frame has the dark frame subtracted from it, and then the median found as above. The pixel values in the flat field frame are normalized to 1. That is, the highest value is divided into each pixel, resulting in a line of numbers all between zero and one. That is the master flat field frame. "Flat" may be appropriate because if you flat field an empty frame you get a perfectly flat line on the graph.
Making the Final Product
To produce a science frame, you place the transmission filter or other subject on the stage, make a frame, and then process it. To process the frame, you subtract the dark frame from it, then divide each pixel by the corresponding pixel in the flat field frame. The result is a frame that consists only of amplitude corrected light generated signal, and the all of the noise. To cut the noise, you median several science frames, and noise is reduced as in the dark and flat field frames.
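The dark-subtract-then-flat-divide arithmetic above reduces to one line per pixel; the numbers in this sketch are made up for illustration:

```python
def calibrate(raw, dark, flat):
    """Per-pixel calibration: subtract the master dark, then divide by
    the normalized master flat field."""
    return [(r - d) / f for r, d, f in zip(raw, dark, flat)]


raw  = [1100, 1200, 1300]   # science frame, ADU
dark = [100, 100, 100]      # master dark
flat = [1.0, 0.5, 1.0]      # master flat, normalized so the peak is 1.0
print(calibrate(raw, dark, flat))  # [1000.0, 2200.0, 1200.0]
```

Note how the middle pixel, which the flat field says is only half as sensitive, gets boosted back to its true relative level.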
Relative vs. Absolute Brightness
You may notice from all of the above that the calibration process gives you a relative brightness level, 0% to 100%. There is nothing absolute involved. That is because what we are measuring, the response of dichroic filters, is a relative thing. A filter passes x% in its passband and y% in its stopband. That is all you need to know about the filter. But what if you need to get a spectrogram from an outside source, like the sodium vapor lamp down the street? You still want relative brightness between places on the spectrum, but the absolute brightness may be anywhere in a large range. To get around that you change the integration time. The default integration time is 10mS. You may set the integration time to be anything from 1 to 1000mS.
When you change the integration time, you need to generate new darks and flats. Keep them around, labeled as dark_001, flat_010, etc. The number is the integration time in mS. When you make a science frame, use the appropriate dark and flat for the integration time you are using.
Wavelength Calibration
This is more tricky. If you know the wavelength at 90° incidence, and the angle of incidence of two wavelengths, you can work out an equation to compensate.
Output
The board in action. The CCD is covered with 4 layers of black microfiber cloth and the room is not well lit. The picture was taken with a flash. The scope is on the input to the ADC, and shows the CCD is almost saturated - maybe 90% capacity. The image below is the readout of the CCD with the same setup.
I ordered some holographic diffraction gratings from Edmund to try them out. They were inexpensive at $16.95 for 15. We'll see how well they work. The next better one was $85.
Things I Might Like to Change (already)
- An FT232H parallel FIFO to USB converter chip. I feel the need for speed after downloading over 6 kB for a single frame. That would provide a single USB connection which could transfer 8MB per second. It would use four more control lines while sharing the data bus with the ADC. Perhaps more appropriate for an area sensor like the KAF-0400, KAF-1600, or KAF-3200.
- A separate board for the CCD. This would make it easier to embed the device. There is a limit to how long you can make the clock lines, though. Ringing on the clock lines may damage the CCD.
- Include meta-data in the downloaded frame so automation could select a proper set of calibration frames. The inclusion of the exposure time would allow the program to choose the correct dark frame and flat field frame. The ADC has a very accurate analog temperature sensor in it that could be read by one of the ATmega1284's analog inputs.
- Automatic exposure control. Before it makes a science frame it checks then adjusts the levels to keep the CCD from blooming. | http://davidallmon.com/pages/ad7667-spectrograph | CC-MAIN-2017-17 | refinedweb | 3,035 | 74.49 |
fine-GRAPE: fine-grained APi usage extractor – an approach and dataset to investigate API usage
DOI: 10.1007/s10664-016-9444-6
- Cite this article as: Sawant, A.A. & Bacchelli, A. Empir Software Eng (2016). doi:10.1007/s10664-016-9444-6
Abstract

Many of the mining algorithms used for such purposes do not take type information into account, thus making the results unreliable. In this paper, we aim to rectify this by introducing fine-GRAPE, an approach that produces fine-grained API usage information by taking advantage of type information while mining API method invocations and annotations. By means of fine-GRAPE, we investigate API usage in Java projects hosted on GitHub. We select five of the most popular APIs across GitHub Java projects and collect historical API usage information by mining both the release history of these APIs and the code history of every project that uses them. We perform two case studies on the resulting dataset. The first measures the lag time of each client; the second investigates the percentage of API features actually used. In the first case we find that clients of APIs that release frequently are far less likely to upgrade to a more recent version of the API than clients of APIs that release infrequently. The second case study shows that for most APIs only a small number of features is actually used, and most of these features were introduced early in the API's lifecycle.
Keywords: Application programming interface · API usage · API popularity · Dataset
1 Introduction
An Application Programming Interface (API) is a set of functionalities provided by a third-party component (e.g., library and framework) that is made available to software developers. APIs are extremely popular as they promote reuse of existing software systems (Johnson and Foote 1988).
The research community has used API usage data for various purposes such as measuring of popularity trends (Mileva et al. 2010), charting API evolution (Dig and Johnson 2006), and API usage recommendation systems (Mandelin and Kimelman 2006).
For example, Xie et al. have developed a tool called MAPO wherein they have attempted to mine API usage for the purpose of providing developers API usage patterns (Xie and Pei 2006; Zhong et al. 2009). Based on a developers’ need MAPO recommends various code snippets mined from other open source projects. This is one of the first systems wherein API usage recommendation leveraged open source projects to provide code samples. Another example is the work by Lämmel et al. wherein they mined data from Sourceforge and performed an API usage analysis of Java clients. Based on the data that they collected they present statistics on the percentage of an API that is used by clients.
One of the major drawbacks of the current approaches that investigate APIs is that they heavily rely on API usage information (for example to derive popularity, evolution, and usage patterns) that is approximate. In fact, one of the modern techniques considers as “usage” information what can be gathered from file imports (e.g., import in Java) and the occurrence of method names in files.
This data is an approximation as there is no type checking to verify that a method invocation truly does belong to the API in question and that the imported libraries are used. Furthermore, information related to the version of the API is not taken into account. Finally, previous work was based on small sample sizes in terms of number of projects analyzed. This could result in an inaccurate representation of the real world situation.
With the current work, we try to overcome the aforementioned issues by devising fine-GRAPE (fine-GRained APi usage Extractor), an approach to extract type-checked API method invocation information from Java programs, and we use it to collect detailed historical information on five APIs and on how their public methods are used over the course of their entire lifetime by 20,263 client projects.
In particular, we collect data from the open source software (OSS) repositories on GitHub. GitHub in recent years has become the most popular platform for OSS developers, as it offers distributed version control, a pull-based development model, and social features (Barr et al. 2012). We consider Java projects hosted on GitHub that offer APIs and quantify their popularity among other projects hosted on the same platform. We select 5 representative projects (from now on, we call them only APIs to avoid confusion with client projects) and analyze their entire history to collect information on their usage. We get fine-grained information about method calls using a custom type resolution that does not require compiling the projects.
The result is an extensive dataset for research on API usage. It is our hope that our data collection approach and dataset not only will trigger further research based on finer-grained and vast information, but also make it easier to replicate studies and share analyses.
For example, with our dataset the following two studies can be conducted:
First, the evolution of the features of the API can be studied. An analysis of the evolution can give an indication as to what has made the API popular. This can be used to design and carry out studies on understanding what precisely makes a certain API more popular than other APIs that offer a similar service. Moreover, API evolution information gives an indication as to exactly at what point of time the API became popular, thus it can be studied in coordination with other events occurring to the project.
Second, a large set of API usage examples is a solid base for recommendation systems. One of the most effective ways to learn about an API is by seeing samples (Robillard and DeLine 2011) of the code in actual use. By having a set of accurate API usages at ones’ disposal, this task can be simplified and useful recommendations can be made to the developer; similarly to what has been done, for example, with Stack Overflow posts (Ponzanelli et al. 2013).
In our previous work titled “A dataset for API Usage” (Sawant and Bacchelli 2015b), we presented our dataset along with a few details on the methodology used to mine the data. In this paper, we go into more detail into the methodology of our mining process and conduct two case studies on the collected data which make no use of additional information.
The first case is used to display the wide range of version information that we have at our disposal. This data is used to analyze the amount of time by which a client of an API lags behind the latest version of the API. Also, the version information is used to calculate as to what the most popular version of an API is. This study can help us gain insights into the API upgrading behavior of clients.
The second case showcases the type resolved method invocation data that is present in our database. We use this to measure the popularity of the various features provided by an API and based on this mark the parts of an API that are used and those that are not. With this information an API developer can see what parts of the API to focus on for maintenance and extension.
The first study provided initial evidence of a possible distinction between upgrade behavior of clients of APIs that release frequently compared to those that release infrequently. In the former case, we found that clients tend to hang back and not upgrade immediately; whereas, in the latter case, clients tend to upgrade to the latest version. The results of the second case study highlight that only a small part of an API is used by clients. This finding requires further investigation as there is a case to be made that many new features that are being added to an API are not really being adopted by the clients themselves.
This paper is organized as follows: Section 2 presents the approach that has been applied to mine this data. For the ease of future users of this dataset an overview of the dataset and some introductory statistics of it can be found in Section 3. Section 4 presents the two case studies that we performed on this dataset. In Section 5 we describe the limitations of our approach and the dataset itself. Section 6 concludes this article.
2 Approach
We present the 2-step approach that we use to collect fine-grained type-resolved API usage information. (1) We collect data on project level API usage from projects mining open source code hosting platforms (we target such platforms due to the large number of projects they hosted) and use it to rank APIs according to their popularity to select an interesting sample of APIs to form our dataset; (2) apply our technique, fine-GRAPE, to gather fine-grained type-based information on API usages and collect historical usage data by traversing the history of each file of each API client.
2.1 Mining of Coarse Grained Usage
In the construction of this dataset, we limit ourselves to the Java programming language, one of the most popular programming languages currently in use (Tiobe index 2015). This reduces the types of programs that we can analyze, but has a number of advantages: (1) due to the popularity of Java, there is a large pool of API client projects available for analysis; (2) Java is a statically typed language, making the collection of type-resolved API usages easier; (3) it allows us to have a more defined focus and to more thoroughly test and refine fine-GRAPE. Future work can extend it to other statically typed languages, such as C#.
In the dependency tag from a sample POM file pictured above, we see that the JUnit dependency is being declared. We find the API's name in the artifactId tag. The groupId tag generally contains the name of the organization that released the API; in this case it matches the artifactId. However, there are other cases, such as the JBoss-Logging API, for which the groupId is org.jboss.logging and the artifactId is jboss-logging. The version of JUnit to be included as a dependency is specified in the version tag; in this case it is version 4.8.2.
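The coordinate extraction just described can be sketched with a few lines of standard-library code. This is an illustrative sketch, not fine-GRAPE's actual implementation; the embedded POM fragment mirrors the JUnit example discussed in the text.

```python
import xml.etree.ElementTree as ET

# Illustrative POM fragment, matching the JUnit example above.
POM = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.8.2</version>
    </dependency>
  </dependencies>
</project>"""

NS = {"m": "http://maven.apache.org/POM/4.0.0"}

def extract_dependencies(pom_text):
    """Return the (groupId, artifactId, version) triples declared in a POM."""
    root = ET.fromstring(pom_text)
    return [tuple(dep.findtext(f"m:{tag}", namespaces=NS)
                  for tag in ("groupId", "artifactId", "version"))
            for dep in root.findall(".//m:dependency", NS)]

print(extract_dependencies(POM))  # [('junit', 'junit', '4.8.2')]
```

Note that Maven POMs declare a default XML namespace, so the lookup paths must be namespace-qualified.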
2.2 Fine-Grained API Usage
To ensure that precise API usage information is collected, one has to reliably link each method invocation or annotation usage to the class in the API to which it belongs. This can be achieved in five ways:
- Text matching:
This is one of the most frequently used techniques to mine API usage. For example, it has been used in the investigation into API popularity performed by Mileva et al. (2010). The underlying idea is to match explicit imports and corresponding method invocations directly in the text of source code files.
- Bytecode analysis:
Each Java file produces one or more class files when compiled, which contain platform-independent Java bytecode. Another technique to mine API usage is to parse the bytecode in these class files to find all method invocations and annotation usages along with the classes to which they belong. This approach guarantees accuracy, as the class files contain all information related to the Java program in the file in question.
- Partial program analysis:
Dagenais et al. have created an Eclipse plugin called Partial Program Analysis (PPA) (Dagenais and Hendren 2008). This plugin parses incomplete files and recovers type bindings on method invocations and annotations, thus identifying the API class to which a certain API usage belongs.
- Dynamic analysis:
Dynamic analysis is a process by which the execution trace of a program is captured as it is being executed. This can be a reliable method of determining the invocation sequence in a program, as it can even handle the case where the type of an object is decided at runtime. Dynamic analysis has the potential of being highly accurate, as the invocations in the trace are type resolved, being recovered from the execution of bytecode.
- AST analysis:
Syntactically correct Java files can be transformed into an Abstract Syntax Tree (AST). An AST is a tree based representation of code where each variable declaration, statement, or invocation forms a node of the tree. This AST can be parsed by using a standard Java AST parser. The Java AST parser can also recover type based information at each step, which aids in ensuring accuracy when it comes to making a connection between an API invocation and the class it belongs to.
All five of the aforementioned approaches can be applied to collect API usage data, but they come with different benefits and drawbacks.
The text-matching-based approach proves especially problematic in the case of imported API classes that share method names, because method invocations may not be disambiguated without type information. Although some analysis tools used in dynamic languages (Ducasse et al. 2000) handle these cases through the notion of ‘candidate’ classes, this approach is sub-optimal for typed languages where more precise information is available.
The bytecode analysis approach, on the other hand, comes with two principal drawbacks:

- 1.
Processing class files requires these files to be available, which, in turn, requires being able to compile the Java sources and, typically, the whole project. Even though all the projects under consideration use Maven for the purpose of building, this does not guarantee that they can be built. If a project is not built, then the class files associated with this project cannot be analyzed, thus resulting in a dropped project.
- 2.
To analyze the history of method invocations it is necessary to check out each version of every file in a project and analyze it. However, checking out every version of a file and then building the project is problematic, as a very large number of project builds would have to be performed. In addition to the time costs, there would still be no guarantee against data loss due to build failures.
The partial program analysis approach has been extensively tested by Dagenais and Hendren (2008), showing that method invocations can be type resolved in incomplete Java files. This is a massive advantage, as it implies that even without building each API client one can still conduct a thorough analysis of the usage of an API artifact. However, the implementation of this technique relies on the Eclipse context: all parsing and type resolution of Java files can only be done in the context of an Eclipse plugin. This requires that each and every Java file is imported into an Eclipse workspace before it can be analyzed, which hinders the scalability of this approach to a large number of projects.
Dynamic analysis techniques result in an accurate set of type-resolved invocations. However, they require the execution of the code to acquire a trace. This is a limitation, as not all client code might be runnable. An alternative would be to have a sufficient set of tests that execute all parts of the program so that traces can be obtained. This too might be infeasible, as many projects may not have a proper test suite (Bacchelli et al. 2008). Finally, this technique would also suffer from the same limitations as the bytecode analysis technique, where analyzing every version of every file would require a large effort.
2.3 Fine-GRAPE
Due to the various issues related to the first four techniques, we deem the most suitable technique to be the AST-based one. It utilizes the JDT Java AST Parser (Vogel), i.e., the parser used in the Eclipse IDE for continuous compilation in the background. This parser handles partial compilation: when it receives as input a source code file and a Java Archive (JAR) file with possibly imported libraries, it is capable of resolving the types of method invocations and annotations of everything defined in the code file or in the provided JAR. This allows us to parse standalone, and even incomplete, files quickly enough to collect data from a large number of files and their histories in a time-effective manner.
We created fine-GRAPE, which uses the aforementioned AST parsing technique to collect the entire history of usage of API artifacts over different versions. In practice, we downloaded all the JAR files corresponding to the releases of the chosen API projects. Although this was done manually in the study presented here, the downloading of JAR files has been automated in the current version for the ease of the user. fine-GRAPE then uses Git to obtain the history of each file in the client projects, and runs on each file retrieved from the repository together with the JAR of the API version that the client project declared in Maven at the time of the commit of the file. fine-GRAPE leverages the visitor pattern provided by the JDT Java AST parser to visit all method invocation and annotation nodes in the AST of a source code file. These nodes are type resolved and stored in a temporary data structure while we parse all files associated with one client project. This results in accurate type-resolved method invocation references for the considered client projects through their whole history. Once the parsing is done for all the files and their respective histories in the client, all the collected data is transformed into a relational model and written to the database.
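The visitor-based traversal just described can be illustrated, in a language-agnostic way, with Python's standard-library ast module. Unlike the JDT parser that fine-GRAPE uses, Python's ast performs no type resolution, so this sketch only demonstrates the visitor traversal over invocation nodes, not the type binding.

```python
import ast

# A small illustrative source snippet to traverse.
SOURCE = """
from collections import Counter
c = Counter("abracadabra")
top = c.most_common(2)
"""

class CallCollector(ast.NodeVisitor):
    """Record every function/method invocation node encountered in the AST."""
    def __init__(self):
        self.calls = []

    def visit_Call(self, node):
        func = node.func
        if isinstance(func, ast.Attribute):   # e.g. c.most_common(...)
            self.calls.append(func.attr)
        elif isinstance(func, ast.Name):      # e.g. Counter(...)
            self.calls.append(func.id)
        self.generic_visit(node)              # keep walking nested nodes

collector = CallCollector()
collector.visit(ast.parse(SOURCE))
print(collector.calls)  # ['Counter', 'most_common']
```

The JDT equivalent would additionally resolve each node against the API JAR supplied alongside the source file, yielding the fully qualified declaring class of each call.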
An API usage dataset can also contain information on the methods, annotations, and classes present in every version of every API for which usage data has been gathered, such that any kind of complex analysis can be performed. In the previous steps we have already downloaded the API JAR files for each version of the API that is used by a client. These JAR files are made up of compiled class files, where each class file relates to one Java source code file. fine-GRAPE then analyzes these JAR files with the help of the bytecode analysis tool ASM (Bruneton et al. 2002) and, for each file, extracts the method, class, and annotation declarations.
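fine-GRAPE relies on ASM to read the compiled class files inside each API JAR. The stdlib-only sketch below illustrates just the first step of that extraction, enumerating the class files contained in a JAR (here a synthetic in-memory archive, since a JAR is a zip file); it does not parse the bytecode itself.

```python
import io
import zipfile

# Build a tiny in-memory "JAR" (zip archive) with one fake class entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("com/google/common/collect/Lists.class", b"\xca\xfe\xba\xbe")
    jar.writestr("META-INF/MANIFEST.MF", b"Manifest-Version: 1.0\n")

# Enumerate the classes the JAR declares: each *.class entry path maps
# to a fully qualified class name.
with zipfile.ZipFile(buf) as jar:
    classes = [name[:-len(".class")].replace("/", ".")
               for name in jar.namelist() if name.endswith(".class")]

print(classes)  # ['com.google.common.collect.Lists']
```

A real extractor would then feed each class entry's bytes to a bytecode reader (ASM's ClassReader in the paper's setting) to list the method and annotation declarations.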
2.4 Scalability of the Approach
The approach we have outlined runs on a large number of API client projects in a short amount of time. In its most recent state, all parts of the process are completely automated, requiring minimal manual intervention. A user of the fine-GRAPE tool has to specify only the API to be mined; the result is a database containing type-resolved invocations made to that API.
We benchmarked the amount of time it takes to process a single file. To run our benchmark, we used a server with two Intel Xeon E5-2643 V2 processors; each processor consists of 6 cores and runs at a clock speed of 3.5 GHz. We ran our benchmark on 2,045 files from 20 client projects and, to get an accurate picture, repeated it 10 times. We found that the average amount of time spent on a single file was 165 milliseconds, the median was 31 milliseconds, and the maximum, for a large file, was 1,815 milliseconds.
2.5 Comparison to Existing Techniques
Previous work mined API usage examples, for example in the context of code completion, code recommendation, and bug finding. We see how the most representative of these mining approaches implemented in the past relate to the one we present here.
One of the more popular applications of API usage datasets is in the creation of code recommendation tools that aim to augment existing API documentation. In this field, Xie et al. proposed MAPO (Xie and Pei 2006; Zhong et al. 2009). The goal of this tool is to recommend relevant code samples to developers. MAPO runs its analyzer on source code files from open source repositories and uses the JDT compiler to compile a file and recover type-resolved API usages. These fine-grained API usages are then clustered using the frequent itemset mining technique (Agrawal et al. 1994). Uddin et al. (2012) mine API usage patterns by identifying features committed in the same transaction and then clustering the features accordingly; they used PPA to extract API usage information from client code. However, as mentioned earlier, the current implementation of PPA does not scale. More recently, tools such as UP-Miner (Wang et al. 2013) have been developed to mine high-coverage usage patterns from open source repositories by using multiple clustering steps. In 2015, Saied et al. proposed a tool called MLUP to improve upon the clustering of MAPO (Saied et al. 2015). This tool also uses the Eclipse JDT compiler to extract API usage from client code; the largest improvement over MAPO is the usage of DBSCAN (Ester et al. 1996) instead of the frequent itemset approach to cluster usage patterns. Differently from our approach, the aforementioned approaches do not take into account the versions of the various APIs. Moreover, our approach requires neither building the files nor having all declared dependencies available.
Mining of API usage patterns has also been done to detect bugs by finding erroneous usage patterns. To this end, researchers developed tools such as Dynamine (Livshits and Zimmermann 2005), JADET (Wasylkowski et al. 2007), Alattin (Thummalapenta and Xie 2009) and PR-Miner (Li and Zhou 2005). All these tools rely on the same mining technique i.e., frequent itemset mining (Agrawal et al. 1994). The idea behind this technique is that statements that occur frequently together can be considered to be a usage pattern. This technique can result in a high number of false positives, due to the lack of type information. fine-GRAPE tackles this problem by taking advantage of type information.
The earliest technique employed in mining API usage was used by the tool CodeWeb (Michail 1999), developed by Amir Michail; more recently it has been employed in the tool Sourcerer (Bajracharya et al. 2006) as well. This approach employs a data mining technique called generalized association rule mining. An association rule is of the form \((\bigwedge_{x \in X}x) \Rightarrow (\bigwedge_{y \in Y}y)\): if an event x takes place, then an event y will also take place with a certain confidence. Generalized association rules additionally take a node's descendants into account; these descendants represent specializations of that node, which allows the technique to consider class hierarchies while mining reuse patterns. However, just like frequent itemset mining, this can result in false positives due to the lack of type information.
Recently, Moreno et al. (2015) presented a technique to mine API usages using type resolved ASTs. Differently from fine-GRAPE, the approach they propose builds the code of each client to retrieve type resolved ASTs. As previously mentioned in the context of bytecode analysis, this could result in the loss of data, as some client projects may not build, and low scalability.
3 A Dataset for API Usage
Using fine-GRAPE we built a large dataset of usage of popular APIs. Our dataset is constructed using data obtained from the open source code hosting platform GitHub. GitHub stores more than 10 million repositories (Gousios et al. 2014) written in different languages and using a diverse set of build automation tools and library management systems.
3.1 Coarse-Grained API Usage: the Most Popular APIs
Popularity of APIs referenced on Github
This is in line with a previous analysis of this type published by Primat as a blog post (Primat 2013). Interestingly, our results show that JUnit is by far the most popular, while Primat's results report that JUnit is just as popular as SLF4J. We speculate that this discrepancy may be caused by the different sampling approach (he sampled 10,000 projects on GitHub, while we sampled about 42,000); further research can investigate this aspect in more detail.
3.2 Selected APIs
Subject APIs
- 1.
Guava is the new name of the original Google Collections and Google Commons APIs. It provides immutable collections, new collections such as multisets and multimaps, and some collection utilities that are not provided in the Java SDK. Guava's collections are accessed through method invocations on instantiated instances of the classes built into Guava.
- 2.
Guice is a dependency injection library provided by Google. Dependency injection is a design pattern that separates behavioral specification and dependency resolution. Guice allows developers to inject dependencies in their applications with the usage of annotations.
- 3.
Spring is a framework that provides an Inversion of Control (IoC) container. This allows developers to access Java objects with the help of reflection. The Spring framework comprises many subprojects; however, we chose to focus on just the spring-core, spring-context, and spring-test modules due to their relatively high popularity. The features provided by Spring are accessed through a mixture of method invocations and annotations.
- 4.
Hibernate Object Relational Mapping (ORM) provides a framework for mapping an object-oriented domain to a relational database domain. It is made up of a number of components, but we focus on just two of the more popular ones, i.e., hibernate-core and hibernate-entitymanager. Hibernate exposes its APIs as a set of method invocations that can be made on the classes defined by Hibernate.
- 5.
Easymock is a framework that allows for the mocking of Java objects during testing. Easymock exposes its API to developers by way of both annotations and method invocations.
3.3 Data Organization
We apply the approach outlined in Section 2 and store all the data collected from the client GitHub projects and API projects in a relational database, namely PostgreSQL (Momjian 2001). We chose a relational database because the usage information that we collect can be naturally expressed in the form of relations among the entities. Moreover, we can leverage SQL functionality to perform some initial analysis and data pruning.
Database schema for the fine-grained API usage dataset
A coarse-grained connection between a client and an API is obtained with a SQL query on the tables ProjectDependency, Api, and Api_version. The finer-grained connection is obtained by also joining Method_invocation/Annotation with Api_class on parent class names, and Method_invocation/Annotation with Api_method on method names.
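The coarse-grained join can be illustrated on a much-simplified, hypothetical slice of the schema. The table and column names below are assumptions for illustration only; the real dump's schema may differ.

```python
import sqlite3

# In-memory toy database mimicking a slice of the dataset schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE api (id INTEGER, name TEXT);
CREATE TABLE api_version (id INTEGER, api_id INTEGER, version TEXT);
CREATE TABLE project_dependency (project TEXT, api_version_id INTEGER);
INSERT INTO api VALUES (1, 'guava');
INSERT INTO api_version VALUES (10, 1, '14.0');
INSERT INTO project_dependency VALUES ('client-a', 10);
""")

# Coarse-grained link: which API and version does each client depend on?
row = conn.execute("""
    SELECT pd.project, a.name, av.version
    FROM project_dependency pd
    JOIN api_version av ON av.id = pd.api_version_id
    JOIN api a ON a.id = av.api_id
""").fetchone()
print(row)  # ('client-a', 'guava', '14.0')
```

The finer-grained query would extend this with joins against the invocation and API class/method tables, as described above.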
The full dataset is available as a PostgreSQL data dump on FigShare (Sawant and Bacchelli 2015a), under the CC-BY license. Due to platform limitations on file size, the dump has been split into several tar.gz compressed files, for a total download size of 51.7 GB. Uncompressed, the dataset requires 62.3 GB of disk space.
3.4 Introductory Statistics
Introductory usage statistics
3.5 Comparison to Existing Datasets
The work of Lämmel et al. (2011) is the closest to the dataset we created with fine-GRAPE. They target open source Java projects hosted on the Sourceforge platform and their API usage mining method relies on type resolved ASTs. To acquire these type resolved ASTs they build the APIs client projects and resolve all of its dependencies. This dataset contains a total of 6,286 client projects that have been analyzed and the invocations for 69 distinct APIs have been identified.
Both our dataset and that of Lämmel et al. target Java-based projects, though the clients analyzed during the construction of our dataset were acquired from GitHub as opposed to Sourceforge. Our approach also relies on type-resolved Java ASTs, but we do not build the client projects, as fine-GRAPE is based on a technique able to parse a standalone Java source file. In addition, the dataset by Lämmel et al. analyzes only the latest build of each client. In terms of size, our dataset comprises usage information gleaned from 20,263 projects as opposed to the 6,286 projects that make up the Lämmel et al. dataset. However, our dataset contains information on only 5 APIs, whereas Lämmel et al. identified usages from 69 APIs.
4 Case Studies
We present two case studies to showcase the value of our dataset and to provide examples for others to use it. We focus on case studies that require minimal processing of the data and rely only on basic queries to our dataset.
4.1 Case 1: Do Clients of APIs Migrate to a New Version of the API?
As with other software systems, APIs evolve over time. A newer version may replace an old functionality with a new one, may introduce a completely new functionality, or may fix a bug in an existing functionality. Some well-known APIs, such as Apache Commons-IO, have been stagnating for a long time without any major changes; however, to build our API dataset we took care to select APIs that are under active development, so that we could use it to analyze whether clients react to newer versions of an API.
4.1.1 Methodology
In practice, we consider the commit date of each method invocation (this is done by performing a join on the method_invocation and class_history tables), determine the version of the API that was being used by the client at the time of the commit (the project_dependency table contains information on the versions of the API used by the client and the date at which a certain version was in use), then consider the release date of the latest version of the API that existed at the time of the commit (obtained from the API_version table), and finally combine this information to calculate the lag time for each reference to the API and plot the probability density.
Lag time can indicate how far behind a project is at the time of usage of an API artifact, but it does not give a complete picture of the most recent state of all the clients using an API. To this end, we complement the lag time analysis with an analysis of the most popular versions of each API, based on the latest snapshot of each client of the API (we achieve this by querying the project_dependency table to get the latest information on clients).
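Under one plausible reading of the definition above, the lag of a single API reference is the number of days between the commit and the release date of the latest API version available at commit time. A minimal sketch with hypothetical dates:

```python
from datetime import date

def lag_days(commit_date, latest_release_date):
    """Days between a client commit referencing the API and the release
    date of the latest API version that existed at commit time."""
    return (commit_date - latest_release_date).days

# Hypothetical example: a client commit on 2014-03-01 referencing an API
# whose most recent release at that time came out on 2013-12-15.
print(lag_days(date(2014, 3, 1), date(2013, 12, 15)))  # 76
```

Aggregating this quantity over every type-resolved reference in a client's history yields the per-API lag distributions plotted in Figs. 4 and 6.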
4.1.2 Results
- Guava:
In the case of the 3,013 Guava clients on GitHub the lag time varies between 1 day and 206 days. The median lag time for these projects is 67 days. The average amount of time a project lags behind the latest release is 72 days. Figure 4 shows the cumulative distribution of lag time across all clients. Since Guava generally releases 5 versions on average per year, it is not entirely implausible that some clients may be one or two versions behind at the time of usage of an API artifact.
Although the latest (as of September 2014) version of Guava is 18, the most popular one is 14, with almost one third of the clients using this version (as shown in Fig. 5). Despite 4 versions being released after version 14, none of them figures in the top 5 of most popular versions. Version 18 has been adopted by very few clients (41 out of 3,013). None of the other newer versions (16 and 17) makes it into the top 5 either.
- Spring:
Spring clients lag behind the latest release up to a maximum of 304 days. The median lag time is 33 days and the first quartile is 15 days. The third quartile of the distribution is 60 days. The average lag time for the usages of various API artifacts is 50 days. Spring is a relatively active API and releases an average of 7 versions (including minor versions and revisions) per year (Fig. 7).

Fig. 4 Probability density of lag time in days, by API

Fig. 5 Proportion of release adoption, split in the 3 most popular, the latest, and all the other releases, by API

Fig. 6 Lag time distribution in days, by API

Fig. 7 Release frequency for each API from 2009 (the dataset covers from 2004)

Table 3 Publication date, by API, of the 3 most popular and latest releases, sorted by the number of their clients
At the time of collection of this data, the Spring API had just released version 4.1.0 and only a small portion (30) of projects have adopted it. The most popular version is 3.1.1 (2,013 projects) as is depicted in Fig. 5. We see that despite the major version 4 of the Spring API being released in December 2013, the most popular major version remains 3. In our dataset, 344 projects still use version 2 of the API and 12 use version 1.
- Hibernate:
The maximum lag time observed over all the usages of Hibernate artifacts is 271 days. The median lag time is 18 days, and the first quartile is just 10 days. The third quartile is also just 26 days. The average lag time over all the invocations is 19 days. We see in Fig. 4 that most invocations to Hibernate API do not lag behind the latest release considerably, especially in relation to the other APIs, although a few outliers exist. Hibernate releases 17 versions (including minor versions and revisions) per year (Fig. 7).
Version 4.3.6 of Hibernate is the latest release available on Maven Central at dataset creation time. A very small portion of projects (32) use this version; the most popular version is 3.6.10, i.e., the last release with major version 3. We see that a large number of clients have migrated to early versions of major version 4. For instance, version 4.1.9 is almost as popular (352 projects versus 376 projects) as version 3.6.10 (shown in Fig. 5). Interestingly, in the case of Hibernate, our data shows that there is no clearly dominant version, as all the other versions of Hibernate together make up about three fourths of the current usage statistics.
- Guice:
Among all usages of the Guice API, the largest lag time is 852 days. The median lag time is 265 days and the first quartile of the distribution is 80 days. The average of all the lag times is 338 days. The third quartile is 551 days, showing that a lot of projects have a very high lag time. Figure 4 shows the cumulative distribution of lag times across all Guice clients. Guice is a young API and, relative to the other APIs, its releases are few and far between (10 releases over 6 years, with no releases in 2010 or 2012, Fig. 7).
The latest version of Guice released before the construction of our dataset is the fourth beta of version 4 (September 2014). Version 3 is unequivocally the most adopted version of Guice, as seen in Fig. 5. This version was released in March 2011, and since then betas for version 4 have been released in 2013 and 2014. We speculate that this release policy may have led to most of the clients sticking to an older version and preferring not to transition to a beta version.
- Easymock:
Clients of Easymock display a maximum, median, and average lag time of 607, 280, and 268 days, respectively. The first and third quartiles of the distribution are 120 and 393 days, respectively. Figure 4 shows the large number of projects that have a large amount of lag relative to the other analyzed projects. Easymock is a small API, which had 12 releases after the first over 10 years (Fig. 7).
The most recent version of Easymock is 3.3.1, released in January 2015. However, in our dataset we record use of neither that version nor the previous one (3.3.0). The latest used version is 3.2.0, released in July 2013, with 42 clients. Versions 3.0.0 and 3.1.0 are the most popular (211 and 190 clients) in our dataset, as seen in Fig. 5. Versions 2.5.2 and 2.4.0 also figure in the top three in terms of popularity, despite being released in 2009 and 2008.
4.1.3 Discussion
Our analysis reveals an interesting relation between the frequency of releases of an API and the behavior of its clients. By considering the data summarized in Fig. 7, we can clearly distinguish two classes of APIs: ‘frequent releaser’ APIs (Guava, Hibernate, and Spring) and ‘non-frequent releaser’ APIs (Guice and Easymock).
For all the APIs under consideration we see that there is a tendency for clients to hang back and to not upgrade to the most recent version. This is especially apparent in the case of the ‘frequent releaser’ APIs Guava and Spring: For these APIs, the older versions are far more popular and are still in use. In the case of Hibernate, we cannot get an accurate picture of the number of clients willing to transition because the version popularity statistics are quite fractured. This is a direct consequence of the large number of releases that take place every year.
For Guice and Easymock (‘non-frequent releaser’ APIs), we see that the latest version is not popular. However, for Guice the latest version is a beta and not an official release, thus we do not expect it to be high in popularity. In the case of Easymock, we see that the latest version (i.e., 3.3.1) and the one preceding it (i.e., 3.3.0) are not used at all. In general, we do see that most clients of ‘non-frequent releaser’ APIs use a more recent version compared to clients of ‘frequent releaser’ APIs.
By looking at Figs. 4 and 6, we also notice that the lag time of ‘frequent releaser’ APIs’ clients is significantly lower than that of ‘non-frequent releaser’ APIs’ clients. This relation may have different causes: for example, ‘non-frequent releaser’ APIs’ clients may be less accustomed to updating the libraries they use to more recent versions, they may be less prone to change the parts of their code that call third-party libraries, or code that calls APIs with a non-frequent release policy may be more difficult to update. Testing these hypotheses goes beyond the scope of this paper, but with our dataset researchers can do so to a significant extent. Moreover, using fine-GRAPE, information about more APIs can be collected to verify whether the aforementioned relations hold with statistically significant samples.
4.2 Case 2: How much of an API is broadly used?
Many APIs are under constant development and maintenance. Some API producers do this to evolve features over time and improve the architecture of the API; others try to introduce new features that were previously not present. All in all, many changes take place in APIs over time (Henkel and Diwan 2005). Here we analyze which of the features (methods and annotations) introduced by API developers are taken on board by the clients of these APIs.
This analysis is particularly important for developers and maintainers to know whether their efforts are useful and to decide whether to allocate more resources (e.g., testing, refactoring, performance improvement) to the more used parts of their API, as the resulting returns on investment may be greater. Moreover, API users may have more interest in reusing popular API features, as these are probably better tested through use (Thummalapenta and Xie 2008).
4.2.1 Methodology
For each of the APIs, we have a list of features in the API_method and API_class tables (Sawant and Bacchelli 2015a). We also have the usage data of all features per API, accumulated from the clients, in the method_invocation and annotation tables. Based on this, we can mark which features of the API have been used by clients. We can also count how many clients use a specific feature, thus classifying each feature as: (1) hotspot, in the top 15 % of features in terms of usage; (2) neutral, used once or more but not in the top 15 %; and (3) coldspot, not used by any client. This is the same classification used by Thummalapenta and Xie (2008) in a similar study (based on a different approach) on the usage of frameworks’ features.
To see which used features were introduced early in an API’s lifetime, we can use the API_version table to augment the data collected above with accurate version information per feature; then, for each of the used features, we determine the earliest version in which that feature was introduced.
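The hotspot/neutral/coldspot split can be sketched as follows. The cutoff handling and the feature names are illustrative assumptions, not taken from the dataset.

```python
def classify_features(usage_counts, hotspot_fraction=0.15):
    """Split features into hotspots (top 15% of used features by client
    count), neutral (used, but below the cutoff), and coldspots (unused),
    in the style of the classification described above."""
    used = sorted((f for f, n in usage_counts.items() if n > 0),
                  key=lambda f: usage_counts[f], reverse=True)
    n_hot = int(len(used) * hotspot_fraction)
    hotspots = set(used[:n_hot])
    neutral = set(used[n_hot:])
    coldspots = {f for f, n in usage_counts.items() if n == 0}
    return hotspots, neutral, coldspots

# Hypothetical per-feature client counts.
counts = {"newArrayList": 986, "copyOf": 500, "transform": 120,
          "filterKeys": 40, "immutableCopy": 20, "partition": 7,
          "lazyTransform": 2, "unusedHelper": 0}
hot, neutral, cold = classify_features(counts)
print(sorted(hot), sorted(cold))  # ['newArrayList'] ['unusedHelper']
```

With 7 used features, the 15 % cutoff admits one hotspot; the unused feature falls into the coldspot set regardless of the cutoff.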
4.2.2 Results
Percentage breakdown of usage of features for each of the APIs
Probability distribution of (log) number of clients per API features, by ‘non-frequent releaser’ APIs
Probability distribution of (log) number of clients per API features, by ‘frequent releaser’ APIs
Generally, we see that the proportion of used features is never higher than 20 % (Fig. 8) and that the number of clients using the features follows a heavily right-skewed distribution, which is slightly flattened by taking the logarithm (Figs. 9 and 10). Moreover, in this context we do not observe a different behavior between clients of ‘non-frequent releaser’ APIs and clients of ‘frequent releaser’ APIs.
In the following, we present the breakdown of the usage based on the definitions above.
- Guava:
Only 9.6 % of the methods in Guava are ever used; in absolute numbers, out of 14,828 unique public methods over 18 Guava releases, only 1,425 methods are ever used. Looking at the used methods, we find that 214 methods can be classified as hotspots. The rest (1,211) are classified as neutral. The most popular method from the Guava API is newArrayList from the class com.google.common.collect.Lists, and it has 986 clients using it.
Guava provides 2,310 unique classes over 18 versions. We see that only 235 (10 %) of these are ever used by at least one client. Furthermore, only 35 of these classes can be called hotspots of the API. A further 200 classes are classified as neutral, and a total of 2,075 classes are coldspots, as they are never used. The most popular class is used 1,097 times and it is com.google.common.collect.Lists.
With Guava we see that 89.4 % of the usages by clients relate to features introduced in version 3, released in April 2010. A further 7 % of the usages relate to features introduced in version 10, released in October 2011.
- Spring:
Out of the Spring core, context and test projects, we see that 7.4 % of the features are used over the 40 releases of the API. A total of 840 features have been used out of the 11,315 features in the system. There are 126 features that can be classified as hotspots. Consequently, there are 714 features classified as neutral.
The most popular feature is addAttribute from the class org.springframework.ui.Model and it is used by 968 clients.
The Spring API provides a total of 1,999 unique classes. Out of these there are only 319 classes that are used by any of the clients of the Spring API. We can classify 48 of these classes as hotspot classes and the other 271 can be classified as neutral. We classify 1,680 classes as coldspots as they are never used. The most popular class has 2,417 clients and it is org.springframework.stereotype.Controller.
Looking deeper, we see that almost 96 % of the features of Spring that are used by clients are those introduced in Spring version 3.0.0 that was released in December 2009.
- Hibernate:
From the Hibernate core and entitymanager projects we see that only 1.8 % of the features are used. 756 out of the 41,948 unique public features provided over 77 versions of Hibernate have been used by clients in GitHub. Of these, 114 features can be classified as hotspots and a further 642 as neutral. The getCurrentSession method from the class org.hibernate.SessionFactory is the most popular feature, used by 618 clients.
Hibernate is made up of 5,376 unique classes. Out of these only 245 classes are used by clients. We can classify 37 of these classes as hotspots. The remaining 208 classes are classified as neutral. We find that Hibernate has 5,131 coldspot classes. The most popular class is org.hibernate.Session with 917 clients using it.
In the case of Hibernate over 82 % of the features that have been used were introduced in version 3.3.1 released in September 2008 and 17 % of the features were introduced in 3.3.0.SP1 released in August 2008.
- Guice:
Out of the 11,625 unique features presented by Guice, we see that 1.2 % (138) of the features are used by the clients of Guice. There are 21 features that are marked as being hotspots, 117 features marked as being neutral, and 11,487 classified as coldspots. The most popular feature provided by the Guice API is createInjector from class com.google.inject.Guice and it is used by 424 clients.
The Guice API is made up of 2,037 unique classes that provide various features. Out of these only 61 classes are of any interest to clients of the API. We find that 9 of these classes can be classified as hotspots and the other 52 as neutral spots. This leaves a total of 1,976 classes as coldspots. The most popular class provided by Guice is com.google.inject.Guice and there are 424 clients that use it.
Close to 96 % of the features of Guice that are popularly used by clients were introduced in its first iteration which was released on Maven central in May 2009.
- Easymock:
There are 623 unique features provided by Easymock, out of which 13.4 % (84) are used by clients. This implies that 539 features provided by the API are never used by any of the clients and are marked as coldspots. 13 features are marked as hotspots, while 71 features are marked as neutral. The most popular feature is getDeclaredMethod from the class org.easymock.internal.ReflectionUtils and is used by 151 clients.
Easymock being a small API consists of only 102 unique classes. Out of these only 9 classes are used by clients. Only 1 can be classified as a hotspot class and the other 8 are classified as neutral spots. This leaves 93 classes as coldspots. The most popular class is org.easymock.EasyMock and is used by 205 clients.
We observe that 95 % of the features that are used from the Easymock API were provided starting version 2.0 which was released in December 2005.
4.2.3 Discussion
We see that for Guava, Spring and Easymock, the percentage of usage of features hovers around the 10 % mark. Easymock has the largest percentage of features that are used among the 5 APIs under consideration. This could be down to the fact that Easymock is also the smallest API among the 5. Previous studies such as that by Thummalapenta and Xie (2008) have shown that over 15 % of an API is used (hotspot) whereas the rest is not (coldspot). However, the APIs that they analyzed are very different from the ones considered here: they are all smaller APIs, comparable in size to Easymock, and none of them approaches the size of APIs such as Guava and Spring. Also, their mining technique relied on code search engines and not on type resolved invocations.
In the case of Hibernate and Guice we see a much smaller percentage (1.8 % and 1.2 % respectively) of utilization of features. This is far lower than that of other APIs in this study. We speculate that this is because the most popular features being used are also those that were introduced very early in the API's life (version 3.3.1 in the case of Hibernate and version 1.0 in the case of Guice). These features could be classified as core features of the API. Despite API developers adding new features, there may be a tendency to not deviate from usage of these core features, as these may have been the ones that made the API popular in the first place.
This analysis underlines a possibly unexpected low usage of API features in GitHub clients. Further studies, using our dataset, can be designed and carried out to determine which characteristics make certain features more popular and guide developers to give the same characteristics to less popular features. Moreover, this popularity study can be used, for example, as a basis for developers to decide whether to separate the more popular features of their APIs from the rest and provide them as a different, more supported package.
5 Limitations
Mining API usages on such a large scale and to this degree of accuracy is not a trivial task. We report consequent limitations to our dataset.
Master Branch:
To analyze as many projects as possible on GitHub, we needed to checkout the correct/latest version of the project on GitHub. GitHub uses Git as a versioning system which employs branches, thus making the task of automatically checking out the right version of the client challenging. We consider that the latest version of a given project would be labeled as the ‘master’ branch. Although this is a common convention (Chacon 2009), by restricting ourselves to only the master branch there is a non-negligible chance that some projects are dropped.
Inner and Internal Classes:
The method we use to collect data about the features provided by the APIs identifies all classes and methods in the API that are publicly accessible and can be used by a client of the API. These can include public inner classes and their respective methods, as well as internal classes that are used by the API itself but are not meant for public consumption. The addition of these classes and methods to our dataset can inflate our count of classes and methods per API. If a more representative count is desired, it would be necessary to create a crawler for the online API documentation of each API.
Maven Central:
We target only projects based on a specific build automation tool on GitHub, i.e., Maven. This results in data from just a subset of Java projects on GitHub and not all the projects. This may in particular affect the representativeness of the sample of projects. We try to mitigate this effect by considering one of the most popular build tools in Java: Maven. Moreover, the API release dates that we consider in our dataset correspond to the dates on which the APIs were published on Maven Central, rather than the dates on which they were officially released on their websites. This could have an impact on the computed lag time.
GitHub:
Even though GitHub is a very popular repository for open source software projects, this sole focus on GitHub leads to the oversight of projects that are on other open source platforms such as Sourceforge and Bitbucket. Moreover, no studies have yet ensured the representativeness of GitHub projects with respect to industrial ones; on the contrary, as also recently documented by Kalliamvakou et al. (2014), projects on GitHub are all open source and many of the projects may be developed by hobbyists. This may result in developers not conforming to standard professional software maintenance practices and, in turn, to abnormal API update behavior.
Hibernate:
In the case of Hibernate, we could not retrieve data for versions 1 and 2. This is due to the fact that neither of these versions was ever released on the Maven Central repository. This may have an impact on both of the case studies, as the usage results can get skewed towards version 3 of the API.
6 Conclusion
We have presented our approach to mine API usage from OSS platforms. Using fine-GRAPE we created a rich and detailed dataset that allows researchers and developers alike to get insights into trends related to APIs. A conscious attempt has been made to harvest all the API usage accurately. We mined a total of 20,263 projects and accumulated a grand total of 1,482,726 method invocations and 85,098 annotation usages related to 5 APIs.
We also presented two case studies that were performed on this dataset without using external scripts or data sources. The first case study analyzes how much clients migrate to new versions of APIs. Besides confirming that clients tend not to update their APIs, this study highlights an interesting distinction between clients of APIs that frequently release new versions and those that do not. For the former, the lag time is significantly lower. Although our sample of APIs is not large enough to allow generalization, we deem this finding to deserve further research, as it could potentially help API developers decide which release policy to adopt, depending on their objectives. In the second case study, we analyze which proportion of the features of the considered APIs is used by the clients. Results show that a considerably small portion of an API is actually used by clients in practice. We suspect that this may be a result of clients only using features that an API was originally known for, as opposed to migrating to new features that have been provided by the API.
Overall, it is our hope that our large database of API method invocations and annotation usages will trigger even more precise and reproducible work in relation to software APIs; for example to be used in conjunction with tools to visualize how systems and their library dependencies evolve (Kula et al. 2014), allowing a finer grained analysis.
7 Resources
The sample version of fine-GRAPE, titled fine-GRAPE-light can be found at:. The dataset can be found on our Figshare repository.
Problems with connecting to Postgresql database via Apache
I'm trying to connect to a PostgreSql database at the localhost using Apache 2.0 server.
I'm working with Fedora Core 4.
The cgi script is written in Python 2.4, I am using the PyGreSql library.
The script works fine when started from command line, but when I start it via localhost I get the message:
could not create socket: Permission denied
Details:
1. I created the user 'www' and granted him select privilege on the table to access
2. I edited the httpd.conf, added "User www"
3. The script looks like this
*********
#!/usr/bin/python2.4
import cgi
import cgitb; cgitb.enable()
import pgdb
print "Content-Type: text/html" # HTML is following
print # blank line, end of headers
print "<TITLE>CGI script output</TITLE>"
print "<H1>This is my first CGI script</H1>"
print "Hello, world!"
con = pgdb.connect(host = 'localhost', database= 'words',user = 'www', password = '')
cursor = con.cursor()
select = "select * from main;"
cursor.execute(select)
data = cursor.fetchall()
for i in data:
    print i
************
4. When I start it from the apache server, I get the following:
********************
InternalError Python 2.4.1: /usr/bin/python2.4
Sun Sep 4 23:41:18 2005
A problem occurred in a Python script. Here is the sequence of function calls leading up to the error, in the order they occurred.
/var/www/cgi-bin/test.py
9 print "<H1>This is my first CGI script</H1>"
10 print "Hello, world!"
11 con = pgdb.connect(host = 'localhost', database= 'words',user = 'www', password = '')
12 cursor = con.cursor()
13 select = "select * from main;"
con undefined, pgdb = <module 'pgdb' from '/usr/lib/python2.4/site-packages/pgdb.pyc'>, pgdb.connect = <function connect>, host undefined, database undefined, user undefined, password undefined
/usr/lib/python2.4/site-packages/pgdb.py in connect(dsn=None, user='www', password='', host='localhost', database='words')
369 # open the connection
370 cnx = _connect_(dbbase, dbhost, dbport, dbopt,
371 dbtty, dbuser, dbpasswd)
372 return pgdbCnx(cnx)
373
dbtty = '', dbuser = 'www', dbpasswd = ''
InternalError: could not create socket: Permission denied
args = ('could not create socket: Permission denied\n',)
************
5. The previous experiments with granting privileges to public and to user apache led to the same result.
What is wrong?
Thanks in advance.
Michael
There is a problem with SELinux and postgres/apache. Try turning off SELinux. It worked for me.
from __future__ import print_function import numba import numpy as np import math import llvm import ctypes print("numba version: %s \nNumPy version: %s\nllvm version: %s" % (numba.__version__,np.__version__, llvm.__version__))
numba version: 0.12.0-10-g4e41ab2-dirty NumPy version: 1.7.1 llvm version: 0.12.0
NumPy provides a compact, typed container for homogeneous arrays of data. This is ideal for storing homogeneous data in Python with little overhead. NumPy also provides a set of functions that allows manipulation of that data, as well as operating over it. There is a rich ecosystem around NumPy that results in fast manipulation of NumPy arrays, as long as this manipulation is done using pre-baked operations (that are typically vectorized). These operations are usually provided by extension modules and written in C, using the NumPy C API.
numba allows generating native code from Python functions just by adding decorators. This code is wrapped and directly callable from within Python.
There are many cases where you want to apply code to your NumPy data, and need that code to execute fast. You may get lucky and have the functions you want already written in the extensive NumPy ecosystem. Otherwise you will end up with some code that is not that fast, although you can improve its execution time by writing the code the "NumPy way". If it is still not fast enough, you can write an extension module using the NumPy C API. Writing an extension module will take quite a bit of time, and forces you into a slow compile-install-test cycle.
Wouldn't it be great if you could just write code in Python that describes your function and execute it at speed similar to that of what you could achieve with the extension module, all without leaving the Python interpreter? numba allows that.
Numba is NumPy aware. This means:
NumPy arrays are understood by numba. By using the numba.typeof we can see that numba not only knows about the arrays themshelves, but also about its shape and underlying dtypes:
array = np.arange(2000, dtype=np.float_) numba.typeof(array)
array(float64, 1d, C)
numba.typeof(array.reshape((2,10,100)))
array(float64, 3d, C)
From the point of view of numba, there are three factors that identify the array type:
The base type (dtype)
The number of dimensions (len(shape)). Note that for numba the arity of each dimension is not considered part of the type, only the dimension count.
The arrangement of the array. 'C' for C-like, 'F' for FORTRAN-like, 'A' for generic strided array.
It is easy to illustrate how the arity of an array is not part of the dtype in numba with the following samples:
numba.typeof(array.reshape((2,10,100))) == numba.typeof(array.reshape((4,10,50)))
True
numba.typeof(array.reshape((2,10,100))) == numba.typeof(array.reshape((40,50)))
False
In numba you can build the type specification by basing it on the base type for the array.
So if numba.float32 specifies a single precision floating point number:
numba.float32
float32
numba.float32[:] specifies an single dimensional array of single precision floating point numbers:
numba.float32[:]
array(float32, 1d, A)
Adding dimensions is just a matter of tweaking the slice description passed:
numba.float32[:,:]
array(float32, 2d, A)
numba.float32[:,:,:,:]
array(float32, 4d, A)
numba.float32[:,::1]
array(float32, 2d, C)
numba.float32[:,:,::1]
array(float32, 3d, C)
column-major arrays (F-type) have elements in the first dimension packed together:
numba.float32[::1,:]
array(float32, 2d, F)
numba.float32[::1,:,:]
array(float32, 3d, F)
The use of any other dimension as consecutive is handled as a strided array:
numba.float32[:,::1,:]
array(float32, 3d, A)
Note that the array arrangement does change the type, although numba will easily coerce a C or FORTRAN array into a strided one:
numba.float32[::1,:] == numba.float32[:,::1]
False
numba.float32[::1,:] == numba.float32[:,:]
False
In all cases, NumPy arrays are passed to numba functions by reference. This means that any change performed on the argument inside the function will modify the contents of the original array. This behavior matches the usual NumPy semantics, so the arrays passed as arguments to a numba function can be considered input/output arguments.
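A minimal illustration of these input/output semantics (plain Python + NumPy here; a numba-compiled version of the same function behaves identically, so the effect can be seen without @jit):

```python
import numpy as np

def fill_with_squares(a):
    # 'a' refers to the caller's buffer, not a copy, so these
    # writes are visible to the caller after the function returns.
    for i in range(a.shape[0]):
        a[i] = i * i

data = np.zeros(4)
fill_with_squares(data)
print(data.tolist())  # [0.0, 1.0, 4.0, 9.0]
```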
Indexing and slicing of NumPy arrays are handled natively by numba. This means that it is possible to index and slice a Numpy array in numba compiled code without relying on the Python runtime. In practice this means that numba code running on NumPy arrays will execute with a level of efficiency close to that of C.
Let's make a simple function that uses indexing. For example a really naive implementation of a sum:
def sum_all(A): """Naive sum of elements of an array... assumes one dimensional array of floats""" acc = 0.0 for i in xrange(A.shape[0]): acc += A[i] return acc
sample_array = np.arange(10000.0)
The pure Python approach of this naive function is quite underwhelming speed-wise:
%timeit sum_all(sample_array)
100 loops, best of 3: 5.44 ms per loop
If we relied on NumPy it would be much faster:
%timeit np.sum(sample_array)
10000 loops, best of 3: 19.8 µs per loop
But with numba the speed of that naive code is quite good:
sum_all_jit = numba.jit('float64(float64[:])')(sum_all)
%timeit sum_all_jit(sample_array)
100000 loops, best of 3: 11.7 µs per loop
This is in part possible because of the native support for indexing in numba. The function can be compiled in a nopython context, which makes it quite fast:
sum_all_jit = numba.jit('float64(float64[:])', nopython=True)(sum_all)
In NumPy there are universal functions (ufuncs) and generalized universal functions (gufuncs).
ufuncs are quite established and allow mapping of scalar operations over NumPy arrays. The resulting vectorized operation follows NumPy's broadcasting rules.
gufuncs are a generalization of ufuncs that allow vectorization of kernels that work over the inner dimensions of the arrays. In this context a ufunc would just be a gufunc where all the operands of its kernels have 0 dimensions (i.e. are scalars).
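As a quick reminder of those broadcasting rules (shown in plain NumPy; the ufuncs generated with numba below follow exactly the same rules):

```python
import numpy as np

a = np.arange(3)                  # shape (3,)
b = np.arange(6).reshape(2, 3)    # shape (2, 3)
c = a + b                         # 'a' is stretched across each row of 'b'
print(c.tolist())                 # [[0, 2, 4], [3, 5, 7]]
```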
ufuncs and gufuncs are typically built using Numpy's C API. Numba offers the possibility to create ufuncs and gufuncs within the Python interpreter, using Python functions to describe the kernels. To access this functionality numba provides the vectorize decorator and the GUVectorize class.
vectorize is the decorator to be used to build ufuncs. Note that as of this writing, it is not in the numba namespace, but in numba.vectorize.
print(numba.vectorize.__doc__)
vectorize(ftylist[, target='cpu', [**kws]]) A decorator to create numpy ufunc object from Numba compiled code. Args ----- ftylist: iterable An iterable of type signatures, which are either function type object or a string describing the function type. target: str A string for code generation target. Defaults to 'cpu'. Returns -------- A NumPy universal function Example ------- @vectorize(['float32(float32, float32)', 'float64(float64, float64)']) def sum(a, b): return a + b
For example, let's write a sample ufunc that performs a linear interpolation between A and B. The 'kernel' will look like this:
def lerp(A,B,factor): """interpolates A and B by factor""" return (1-factor)*A + factor*B
lerp(0.0, 10.0, 0.3)
3.0
Now let's do a ufunc for the floating point types. I will be using vectorize as a function, but remember that you could just add the decorator in the definition of the kernel itself.
lerp_ufunc = numba.vectorize(['float32(float32, float32, float32)', 'float64(float64, float64, float64)'])(lerp)
Now we can run our lerp with all of NumPy's niceties, like broadcasting of one operand (in this case the factor).
A = np.arange(0.0, 100000.0, 2.0) B = np.arange(100000.0, 0.0, -2.0) F = np.array([0.5] * 50000)
lerp_ufunc(A,B,0.5)
array([ 50000., 50000., 50000., ..., 50000., 50000., 50000.])
It is also quite fast:
%timeit lerp_ufunc(A, B, 0.5)
1000 loops, best of 3: 364 µs per loop
%timeit lerp(A, B, 0.5)
10000 loops, best of 3: 175 µs per loop
Note that in this case the same original function can be used to generate the ufunc and to execute the equivalent NumPy vectorized version. When executing there will be differences in how the expression is evaluated.
When using NumPy the expression is evaluated one operation at a time, over the entire vector. Numba generated code will evaluate the full expression in one go, for each element. The numba approach avoids having temporary intermediate arrays built, as well as avoiding revisiting operands that are used more than once in an expression. This is useful with big arrays of data, where there will be savings in process memory usage as well as better cache usage.
def sample_poly(x): return x - x*x*x/6.0 + x*x*x*x*x/120.0
S = np.arange(0, np.pi, np.pi/36000000)
sample_poly_ufunc = numba.vectorize(['float32(float32)', 'float64(float64)'])(sample_poly)
%timeit sample_poly(S)
1 loops, best of 3: 2.51 s per loop
%timeit sample_poly_ufunc(S)
1 loops, best of 3: 633 ms per loop
It is also worth noting that numba's vectorize provides similar convenience to that of NumPy's vectorize, but with performance similar to an ufunc.
For example, let's take the example in NumPy's vectorize documentation:
def myfunc(a, b): "Return a-b if a>b, otherwise return a+b" if a > b: return a - b else: return a + b
myfunc_input = np.arange(100000.0)
numpy_vec_myfunc = np.vectorize(myfunc) %timeit numpy_vec_myfunc(myfunc_input, 50000)
10 loops, best of 3: 32.1 ms per loop
numba_vec_myfunc = numba.vectorize(['float64(float64, float64)'])(myfunc) %timeit numba_vec_myfunc(myfunc_input, 50000)
1000 loops, best of 3: 597 µs per loop
In the same way that vectorize allows building NumPy's ufuncs from inside the Python interpreter just by writing the expression that forms the kernel, guvectorize allows building NumPy's gufuncs without the need to write a C extension module.
print(numba.guvectorize.__doc__)
guvectorize(ftylist, signature, [, target='cpu', [**kws]]) A decorator to create numpy generialized-ufunc object from Numba compiled code. Args ----- ftylist: iterable An iterable of type signatures, which are either function type object or a string describing the function type. signature: str A NumPy generialized-ufunc signature. e.g. "(m, n), (n, p)->(m, p)" target: str A string for code generation target. Defaults to "cpu". Returns -------- A NumPy generialized universal-function Example ------- @guvectorize(['void(int32[:,:], int32[:,:], int32[:,:])', 'void(float32[:,:], float32[:,:], float32[:,:])'], '(x, y),(x, y)->(x, y)') def add_2d_array(a, b): for i in range(c.shape[0]): for j in range(c.shape[1]): c[i, j] = a[i, j] + b[i, j]
Generalized universal functions require a dimension signature for the kernel they implement. We call this the NumPy generalized-ufunc signature. Do not confuse this dimension signature with the type signature that numba requires.
The dimension signature describe the dimensions of the operands, as well as constraints to the values of those dimensions so that the function can work. For example, a matrix multiply gufunc will have a dimension signature like '(m,n), (n,p) -> (m,p)'. This means:
First operand has two dimensions (m,n).
Second operand has two dimensions (n,p).
Result has two dimensions (m,p).
The names of the dimensions are symbolic, and dimensions having the same name must match in arity (number of elements). So in our matrix multiply example the following constraint has to be met: the 'n' dimensions of both operands must match, i.e. the first operand must have as many columns as the second operand has rows.
As you can see, the arity of the dimensions of the result can be inferred from the source operands:
Result will have as many rows as rows has the first operand. Both are 'm'.
Result will have as many columns as columns has the second operand. Both are 'p'.
You can find more information about Numpy generalized-ufunc signature in NumPy's documentation.
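The dimension-matching rules can be made concrete with a toy checker (a simplified sketch of our own, not NumPy's actual signature parser, and it only handles plain comma-separated names):

```python
def check_core_dims(signature, *shapes):
    """Bind dimension names in a signature like '(m,n),(n,p)->(m,p)'
    to concrete sizes and return the resulting output shape."""
    ins, out = signature.replace(' ', '').split('->')
    specs = [s.strip('()').split(',') for s in ins.split('),(')]
    bound = {}
    for names, shape in zip(specs, shapes):
        for name, size in zip(names, shape):
            # The same name must always map to the same arity.
            if bound.setdefault(name, size) != size:
                raise ValueError('dimension %r does not match' % name)
    return tuple(bound[n] for n in out.strip('()').split(','))

print(check_core_dims('(m,n),(n,p)->(m,p)', (2, 4), (4, 5)))  # (2, 5)
```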
When building a gufunc you start by writing the kernel function. You have to bear in mind the dimension signature, and write the code to handle a single element. The function will take both the input arguments and the result as parameters. The result will be the last argument of the function. There shouldn't be any return value to the function, as the result should be placed directly in the last argument. The result of modifying any argument other than the result argument is undefined.
def matmulcore(A, B, C): m, n = A.shape n, p = B.shape for i in range(m): for j in range(p): C[i, j] = 0 for k in range(n): C[i, j] += A[i, k] * B[k, j]
Note how the m, n and p are extracted from the input arguments. The extraction of n is done twice to reinforce the notion that both are the same. That extraction is not really needed, as you could directly index inside the shape when defining the range.
To build a generalized-ufunc from the function is just a matter of using the guvectorize decorator. The interface to guvectorize is akin that of vectorize, but also requires the NumPy generalized-ufunc signature.
gu_matmul = numba.guvectorize(['float32[:,:], float32[:,:], float32[:,:]', 'float64[:,:], float64[:,:], float64[:,:]' ], '(m,n),(n,p)->(m,p)')(matmulcore)
The result is a gufunc, that can be used as any other gufunc in NumPy. Broadcasting and type promotion rules are those of NumPy.
matrix_ct = 10000 gu_test_A = np.arange(matrix_ct * 2 * 4, dtype=np.float32).reshape(matrix_ct, 2, 4) gu_test_B = np.arange(matrix_ct * 4 * 5, dtype=np.float32).reshape(matrix_ct, 4, 5)
%timeit gu_matmul(gu_test_A, gu_test_B)
100 loops, best of 3: 9.63 ms per loop
Some recap on the difference between vectorize and guvectorize:
vectorize generates ufuncs, guvectorize generates generalized-ufuncs
In both, type signatures for the arguments and return type are given in a list, but in vectorize function signatures are used to specify them, while in guvectorize a list of types is used instead, with the resulting type specified last.
When using guvectorize the NumPy generalized-ufunc signature needs to be supplied. This signature must be coherent with the type signatures.
Remember that with guvectorize the result is passed as the last argument to the kernel, while in vectorize the result is returned by the kernel.
There are some points to take into account when dealing with NumPy arrays inside numba compiled functions:
In numba generated code no range checking is performed when indexing. This allows generating faster code, but you need to be careful, as any indexing that goes out of range can cause a bad access or a memory overwrite, potentially crashing the interpreter process.
arr = np.arange(16.0).reshape((2,8)) print(arr) print(arr.strides)
[[ 0. 1. 2. 3. 4. 5. 6. 7.] [ 8. 9. 10. 11. 12. 13. 14. 15.]] (64, 8)
As indexing in Python is 0-based, the following line will cause an exception error, as arr.shape[1] is 8, and the range for the column number is (0..7):
arr[0, arr.shape[1]] = 42.0
--------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-47-06c1e5ef06d0> in <module>() ----> 1 arr[0, arr.shape[1]] = 42.0 IndexError: index 8 is out of bounds for axis 1 with size 8
However, as numba doesn't perform range checks, it will index anyway. As the index is out of bounds and the array is in C order, the write will overflow into the next row, in this case into the place reserved for element (1, 0).
@numba.jit("void(f8[:,:])") def bad_access(array): array[0, array.shape[1]] = 42.0 bad_access(arr) print(arr)
[[ 0. 1. 2. 3. 4. 5. 6. 7.] [ 42. 9. 10. 11. 12. 13. 14. 15.]]
In this sample case we were lucky, as the out-of-bounds access fell into the allocated range. Unchecked indexing can potentially cause illegal accesses and crash the process running the Python interpreter. However, it allows for code generation that produces faster code.
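Until you trust the index arithmetic in a kernel, one defensive pattern is to validate indices in plain Python before handing the array to jitted code (our own suggestion, not a numba feature):

```python
import numpy as np

def checked_set(array, i, j, value):
    # Validate before writing, so a bad index raises instead of
    # silently corrupting a neighboring element.
    rows, cols = array.shape
    if not (0 <= i < rows and 0 <= j < cols):
        raise IndexError('index (%d, %d) out of bounds for shape (%d, %d)'
                         % (i, j, rows, cols))
    array[i, j] = value

arr = np.zeros((2, 8))
checked_set(arr, 1, 7, 42.0)      # in range: fine
try:
    checked_set(arr, 0, 8, 42.0)  # column 8 does not exist (valid: 0..7)
except IndexError:
    print('caught an IndexError')
```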
I have seen many SharePoint newbies cracking their heads to create a Console/Windows application in VS2010 and make it talk to a SharePoint 2010 server. I had the same problem when I started with SharePoint in the beginning.
It is important for you to acknowledge that SharePoint 2010 is based on .NET Framework version 3.5 and not version 4.0.
In VS 2010, when you create a Console/Windows application, make sure you select .NET Framework 3.5 in the New Project dialog window. If you missed this while creating the project, go to the Application tab of the project properties and verify that .NET Framework 3.5 is selected as the Target Framework.
Now that you have selected the correct framework, will it work? Nope: if the application is configured as an x86 one, it will not work. SharePoint is a 64-bit application, and when you create a Windows application to talk to SharePoint it should also be a 64-bit one. Go to Configuration Manager and select x64. If x64 is not available, select <New…> and, in the New Solution Platform dialog box, select x64 as the new platform, copying settings from x86 and checking the Create new project platforms check box.
This is not applicable if you are making a console application that talks to SharePoint with the Client Object Model.
I had a fancy requirement where i had to get all tags associated to a document set in a document library. The normal tag could webpart was not working when i add it to the document set home page, so planned a custom webpart.
I was checking the net for a straightforward way to achieve this, but wasn't lucky enough to find something. Since I didn't get any samples on the net, I looked into Microsoft.SharePoint.Portal.WebControls and found a solution. The SocialDataFrameManager control in 14Hive/Template/Layouts/SocialDataFrame.aspx directed me to the solution. You can get the dll from the ISAPI folder. The following code snippet can get all terms associated to a list item, given that you have the list name and the id of the list item.
using System;
using ...

namespace ...
{
    // ...
    url = listItem.Web.Url.TrimEnd(...
}
The referenced dlls are Microsoft.SharePoint, Microsoft.SharePoint.Taxonomy, Microsoft.Office.Server and Microsoft.Office.Server.UserProfiles from the ISAPI folder.
This logic can be used to make a custom tag cloud web part by taking code from the OOB tag cloud, so that you can have your web part anywhere in the site and still get tags added to a specific library/list.
Hope this helps someone.
Index:
Part 1: Introduction
Part 2: JOptionPane
Part 3: Basic Wrappers
Part 4: Random class
Part 5: Actual full source code
~~~~~~~~~~~~~Part 1~~~~~~~~~~~~~
Introduction:
Hello, again! I'm writing this tutorial as a sort of first step for all the beginners looking to start making games. Most beginners start out with this idea in their mind of some epic RPG or MMORPG or FPS. Sorry to burst your bubbles, but that's quite an enormous feat. You need the basics. I mean, you can't really do calculus without knowing 1+1=2, right? If you look at a game's code, you are going to be blown away. Take it one step at a time. This tutorial is based on my rock, paper, scissors program I wrote eons ago.
~~~~~~~~~~~~Part 2~~~~~~~~~~~~~~
JOptionPane:
So, as you know, Java is really good for its GUI, or Graphical User Interface. Without it, everything would be on the console, and that's just boring! So, everyone uses GUI to make their programs fancy. This window itself is considered GUI. Meh.

The first thing you need to know is... your IDE does not automatically put it into your package. You have to import it. There are actually 2 packages for GUI, but for simplicity's sake, and the fact that I only used one in this program, we are only going to stick with the one (the other one is totally outdated, anyways).
Before your class declaration, you should have a statement that looks like this:
import javax.swing.*;
This statement is called the import statement. I used the wildcard (*) because then you don't have to write a separate import statement for every class you're trying to call. That would be super lame. It includes everything.
Next, we learn how this GUI works. For this program, you are only going to use the input dialog boxes and message dialog boxes, and the simplest forms at that. You can look at more in-depth GUI concepts in the tutorial section. Now, I should state that the input dialog reads strings, so you're going to have to parse them.
Input dialogs will always be in the format:
String input = JOptionPane.showInputDialog(String s);
where input is a String variable that catches what the user inputs through the input dialog, and String s is a string (can be predefined). In my program I declared it like this:
//prompts for user input and then parses it to uchoice
String input = JOptionPane.showInputDialog("What'll it be? Rock, paper, or scissors?\n"
        + "1 for rock, 2 for paper, and 3 for scissors: ");
Not too bad, eh?
Now, it's time for just simple message dialogs. You'd think they'd be easier, but since when did java ever make sense? Tee hee hee. Where there is a null, just go with it. I'll tell you about it when you get older
JOptionPane.showMessageDialog(null, String s);
Same deal, now just with a null, and no input string.
JOptionPane.showMessageDialog(null, "Hello, World!!!");
Pretty lame by itself, ya? Yes, it is....
~~~~~~~~~~~~Part 3~~~~~~~~~~~~~~~
Basic Wrappers:
I'm only going to give a brief description. If you put an integer, say... 7... into the input dialog, it's going to return a String "7" instead of an int. Why, you ask? Just because. You'll find out when you're older
In order to get that "7" to be a 7, you have to parse it. How, you ask? Follow me into the land of discovery to find out!
int num = Integer.parseInt(input);
int num is just an integer that I declare to catch the parsed String input. Easy, huh? They have the same thing for double, float, long, and maybe some others. I cba to look. You'll only need the int one for now.
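Bonus sketch (not part of the original r/p/s program): the same parsing idea for the other wrappers, plus what happens when the user types something that isn't a number - parseInt throws a NumberFormatException you can catch. The parseOrDefault helper is just an illustrative name I made up:

```java
public class ParseDemo {

    // Parse text as an int, falling back to a default when it isn't a number.
    static int parseOrDefault(String text, int fallback) {
        try {
            return Integer.parseInt(text);
        } catch (NumberFormatException e) {
            return fallback; // e.g. the user typed "seven" instead of 7
        }
    }

    public static void main(String[] args) {
        int num = Integer.parseInt("7");         // the String "7" becomes the int 7
        double d = Double.parseDouble("2.5");    // same deal for double
        System.out.println(num + d);             // prints 9.5
        System.out.println(parseOrDefault("seven", -1)); // prints -1
    }
}
```

Handy for the r/p/s program, since a user who types "rock" instead of 1 would otherwise crash it.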
~~~~~~~~~~~~Part 4~~~~~~~~~~~~~~~
Random class:
No video game can go without this! This, or the Random class itself, is the most important part of interesting and compelling games. Without it, you'd have the same thing every time you play. No random battles, no random attack values, no random anything! Boring! So, I'm a big fan of the Math class because I'm a nerd like that. Math is part of java.lang methinks, so you don't have to import it (yea it is, I just checked). Unfortunately for you, I used the Random class, which is in util.
import java.util.*;
You probably have already seen that one before. It's super duper handy. Random is a class, so like all classes, you have to instantiate it. "Instantiate, you say? Wtf is that?!" To instantiate means to make an instance of, which you need to do to access that class.
Random rnd = new Random();
This creates an instance of the Random class under the variable name "rnd". Pretty sweet, yea? Yea.
Alright, so want to use your new instance. But how?
int x = rnd.nextInt(int n);
int x is just a catcher for the new randomized number. You can use rnd however many times you feel like, by the way. int n is the number after the largest number you want. Normally, in this format, the randomizer starts at 0 and goes to int n - 1. So, if you want a random number from 0 - 2, like in this program, you would have to put a 3 in the parentheses. Easy enough, right? Now you should be able to do all kinds of sweet random numbers. Why? I don't care, but go for it
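If you don't believe the range claim, here's a tiny self-check sketch (my own, not part of the tutorial): draw a pile of numbers and verify nextInt(3) stays in 0-2, plus the usual + 1 trick to shift the range to 1-3 so it matches the menu numbering:

```java
import java.util.Random;

public class RangeDemo {

    // Shift nextInt's 0..2 result up to 1..3, matching the tutorial's menu.
    static int rollOneToThree(Random rnd) {
        return rnd.nextInt(3) + 1;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        for (int i = 0; i < 10_000; i++) {
            int x = rnd.nextInt(3);       // always 0, 1, or 2
            int y = rollOneToThree(rnd);  // always 1, 2, or 3
            if (x < 0 || x > 2 || y < 1 || y > 3) {
                throw new IllegalStateException("out of range: " + x + ", " + y);
            }
        }
        System.out.println("all 10,000 draws were in range");
    }
}
```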
~~~~~~~~~~~~Part 5~~~~~~~~~~~~~~~
Actual full source code:
I am sure by the time you've reached this you are probably curled up into the fetal position and sucking your thumb, so I'll reward you with the full thing for reference. ^____^ Enjoy!
This is just the basic skeleton of the program. You need to probably add stuff, and it couldn't hurt to clean it up a little bit, but it's pretty straight forward. It shows every step that every beginner should understand. Nothing special ^__^
My code might be a little shabby, but... you get the idea. All it really is is a bunch of if-else statements. Easy stuff.
package rockpaperscissors;

/**
 *
 * @author Erica
 * @date 9-25-08
 *
 * A simple program to utilize the Random class
 */

//imports
import javax.swing.*;
import java.util.*;

public class Main {

    public static void main(String[] args) {
        String input;       //stores user input from JOP
        int uchoice;        //parse of input; users choice
        int compchoice;     //random choice by computer

        //prompts for user input and then parses it to uchoice
        input = JOptionPane.showInputDialog("What'll it be? Rock, paper, or scissors?\n"
                + "1 for rock, 2 for paper, and 3 for scissors: ");
        uchoice = Integer.parseInt(input);

        //generates random number and sets a limit
        Random randomnum = new Random();
        compchoice = randomnum.nextInt(3);

        //determines whether the user or computer wins
        if (uchoice == 1 && compchoice == 0)
            JOptionPane.showMessageDialog(null, "Tie!");
        else if (uchoice == 1 && compchoice == 1)
            JOptionPane.showMessageDialog(null, "Paper beats rock. You lose!");
        else if (uchoice == 1 && compchoice == 2)
            JOptionPane.showMessageDialog(null, "Rock beats scissors. You win!");
        else if (uchoice == 2 && compchoice == 0)
            JOptionPane.showMessageDialog(null, "Paper beats rock. You win!");
        else if (uchoice == 2 && compchoice == 1)
            JOptionPane.showMessageDialog(null, "Tie!");
        else if (uchoice == 2 && compchoice == 2)
            JOptionPane.showMessageDialog(null, "Scissors beats paper. You lose!");
        else if (uchoice == 3 && compchoice == 0)
            JOptionPane.showMessageDialog(null, "Rock beats scissors. You lose!");
        else if (uchoice == 3 && compchoice == 1)
            JOptionPane.showMessageDialog(null, "Scissors beats paper. You win!");
        else if (uchoice == 3 && compchoice == 2)
            JOptionPane.showMessageDialog(null, "Tie!");
    }
}
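As a side note (my own refinement, not part of the original post): once you notice that both choices use the same rock/paper/scissors ordering, the nine-branch if-else chain collapses into a little modular arithmetic. Something like:

```java
// Sketch: the same win/lose/tie logic as the big if-else chain, but done
// with modular arithmetic. uchoice is 1..3 (rock/paper/scissors, as in the
// tutorial's prompt) and compchoice is 0..2 (straight from nextInt(3)).
public class Judge {

    static String judge(int uchoice, int compchoice) {
        // Re-base the user's 1..3 choice to 0..2, then compare in a circle:
        // diff 0 = same pick, diff 1 = user's pick beats the computer's.
        int diff = ((uchoice - 1) - compchoice + 3) % 3;
        if (diff == 0) return "Tie!";
        return (diff == 1) ? "You win!" : "You lose!";
    }

    public static void main(String[] args) {
        System.out.println(judge(1, 2)); // rock vs scissors -> You win!
        System.out.println(judge(1, 1)); // rock vs paper    -> You lose!
        System.out.println(judge(3, 2)); // scissors twice   -> Tie!
    }
}
```

Same behavior, two lines of logic instead of nine branches - though for a first program the explicit branches are arguably easier to read.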
See ya around, neighbor! *gets shot*
Hello,
I am writing a C++ program that displays a multiple choice question, and then asks the user for the answer. I am posting this because I have a couple questions regarding errors I receive. Below is the code.
- I will use a "do-while" loop, to keep the program running until the user gets the answer correct
- I will use "if" statements to give user clues when they get the answer wrong.
- I use an "if" statement when the user types in an answer not on the list of answers for the question, to let them know that they did not enter a valid answer.
Here is the Code:
Code:
// name: quiz.cpp
// author: Kautz
// date: November 9, 2008
// description: Multiple Choice Question
#include <iostream>
int main()
{
char Color;
do
{
cout << "What Color is the Sky?" << endl;
cout << "A. Blue" << endl;
cout << "B. Green" << endl;
cout << "C. Red" << endl;
cout << "A, B, or C?" << endl;
cin >> Color;
}
while (Color == B );
while (Color == C );
if (Color == B )
{cout << "Hint: Starts with the letter B" << endl;}
if (Color == C )
{cout << "Hint: Starts with the letter B" << endl;}
if (Color != A , B, C )
{cout << "You must enter either A, B, or C" << endl;}
if (Color == A )
{cout << "Correct!" << endl;}
return 0;
}
Here is a list of errors that appear:

Code:
quiz.cpp: In function `int main()':
quiz.cpp:21: `B' undeclared (first use this function)
quiz.cpp:21: (Each undeclared identifier is reported only once
quiz.cpp:21: for each function it appears in.)
quiz.cpp:22: `C' undeclared (first use this function)
quiz.cpp:28: `A' undeclared (first use this function)
I hope that someone can help me out with why there are errors and what I can do to fix them. It would be greatly appreciated!
Thank you!
Re: Thinking of reinstalling Windows...
- From: "ntuser" <nospam@xxxxxxxxxxxxxxxxxxxx>
- Date: Mon, 8 May 2006 14:19:35 +1000
You're going to regret asking this:
A better idea is to post some detail on what is wrong, so we can try
to help you with that instead.
...but you asked for it. So here goes *ahem*:
First, a little background: I've had my PC for a couple of years now and I've made several user accounts in that time. Everything has always worked (with the exception of my disc drives causing system freezes due to Nero software; but for some reason that problem went away of its own accord so... *shrug*). The accounts I created were always fully functional whether they were limited or administrator accounts. However...
The present day: Lately, I deleted those previous accounts except one and created a new limited user account (for my PC-inept family). After logging in and importing the files from the deleted accounts I noticed that the picture files (.jpg, .gif etc) were now represented by the icon used for file extensions that Windows doesn't recognize (in Windows Explorer). I then tried to double-click and open them; I received the file type association dialogue box asking me to associate a program with these file types (in this case, .jpg). So I selected the Windows Picture and Fax Viewer, ticked the "always open with..." box and hit "OK". The files then opened up with the program - no problem. Here's where things get complicated:
You would think that after going through this process, from then on all .jpg files would be represented by that picture icon in Windows Explorer, be referred to as a "JPEG image", and, when opened, would automatically open with the Windows Picture and Fax Viewer, wouldn't you? No such luck. Instead, the files were still represented by the icon associated with files Windows doesn't recognize. My initial reaction was puzzlement, but I then thought I could remedy this simply by going to My Computer > Tools > Folder Options > File Types, finding the .jpg extension and associating it permanently. Once again I was thwarted. There was no .jpg extension listed! So I created it and set it to open with the Windows Picture and Fax Viewer. However I noticed 2 things:
1. After creating the entry and the desktop refreshing, no changes were made and the file type still wasn't recognized.
2. After I closed the file association window and re-opened it, it was as if I had never created the .jpg extension. Once again the .jpg extension was missing.
I was confused. I then proceeded to re-create the .jpg extension and its associated program. Only this time, before closing the window, I clicked the "Advanced" button of the file type window (the one that opens the "Edit File Type" window). Nothing happened; it was as if the button wasn't there. Clicking on it yielded no response. And of course, after closing the window and re-opening it, there was no .jpg file type listed.
Once again I was confused. I created a test file type called "AAA" and associated a program with it. This time, clicking the "Advanced" button brought up the "Edit File Type" window as normal. Also, after closing the File Types window and re-opening it, the "AAA" file type was still listed and associated with my chosen program. I.e. everything worked appropriately for the "AAA" file type I created. But no matter how many times I tried, I encountered the same cyclical problem described above for the .jpg/.gif etc file types.
Therefore, whenever I double-clicked a .jpg/.gif file I would receive the window asking me to associate the file type with a program (the cycle continued).
Despite all this, I was about to encounter something even stranger. In an effort to fix this I deleted the account, created a new administrator account and logged in. This time, whenever I double-clicked a .jpg/.gif etc file type, it would open with... you guessed it - the Windows Picture and Fax Viewer! I thought things were finally working. However I noticed several things:
In Windows Explorer, the files were still given the unknown file type icon. Despite the fact that they would open with the appropriate program, Windows couldn't identify the .jpg/.gif etc files as a "JPEG image"/"GIF image" etc.
I then went to take a look at the My Computer > Tools > Folder Options > File Types window and, sure enough, the .jpg/.gif file types weren't there! And of course, I encountered the same cyclical problem as before.
Possible causes:
The only things I had done recently before creating these new accounts and encountering this exasperating problem were the uninstallation of Microsoft Office 2003 and CounterSpy (anti-spyware software that suspiciously enough looks to have features identical to Windows Defender (don't ask me which came first because I don't know)), and the testing of various registry 'cleaner' programs. One other event did occur however:
Despite Norton Internet Security 2005 catching the "reger.exe" virus and deleting it, I was infested with multiple pieces of incredibly annoying spyware (Alexa etc etc). This all happened at once. It was as if I was hit by a spyware bomb. After evading spyware from the internet for years thanks to my anti-spyware apps and Norton, it was finally "my turn". At the time of infection I had uninstalled CounterSpy and had Lavasoft Ad-Aware and Spybot 1.4 installed. I ran these, and despite multiple detections and subsequent deletions of the invading files, the files always returned and were re-detected seconds later. I.e. scan > detect > delete > scan again > detects same files > delete > cycle repeats.
I was at a loss. Norton couldn't help either because, as we know, spyware isn't (mostly) seen as a virus and thus wasn't picked up by the Norton anti-virus scans. The only way I was able to remove the infections was through a combination of CounterSpy (which I re-installed) and the multiple registry cleaning programs I had downloaded (all registry cleaners on the market pick up different things, so I thought I'd cover my bases using multiple apps - in the end I came to use only RegSeeker 1.45). My only guess from this is that my registry was critically destabilised by these events.
One more thing - I noticed a bug in CounterSpy that occurs for limited user accounts. If you have CounterSpy 1.5 installed and attempt to log in to a limited user account, the startup processes fail to load and you receive an error message (some generic thing; I forget the exact text). Shortly after, the user account crashes to the login screen. However, this error is avoided if you log on as an administrator. My guess is that the developers of CounterSpy rushed the production of the 1.5 version in an attempt to beat Windows Defender to the market. Just my 2 cents.
It's unlikely that these events are the source of my file association problem, but I thought I'd mention it.
Additional remedy steps taken:
1. sfc /scannow off the XP disc
2. The attempted application of downloadable stable registry entries, as suggested by this newsgroup. However, I now encountered a new type of error when attempting to merge these stable reg entries into my registry: I would receive a "Cannot import "file": Error accessing the registry". However - as a test - I was able to merge these reg keys into my last remaining old user account ("Registry entries merged successfully"). This led me to believe that the error may be due to a corrupted ntuser.dat file and thus a corrupted Default User profile that all new user accounts copy from. One thing didn't make sense however:
The HKEY_CLASSES_ROOT reg hive had different registry entries for the .jpg file type between the new user accounts and the old working account. I've since learned that the HKEY_CLASSES_ROOT reg hive is actually assembled thus:

"The HKCR hive itself does not really exist, it's a combination of HKLM\SOFTWARE\Classes (default settings) and HKCU\Software\Classes (user settings), while HKCU\Software\Classes is dominant if the values of identical settings differ. Therefore, HKEY_CLASSES_ROOT is both system *and* user related."
This made things much more complicated for me, as I could now no longer simply merge good registry entries from the old working account into the HKEY_CLASSES_ROOT reg hive of the new account and hope to remedy the problem. Not that I could anyway, seeing as I still received the "Cannot import "file": Error accessing the registry" error.
The only other thing I have noticed is that this problem also affects the default Administrator account, only accessible through safe mode.
So there you have it; the scenario in full. Hopefully you have better luck in ascertaining the source of the problem than I did. My suspicion is that I'll have to do a full reformat and re-install of the OS *sigh*. Does anyone know a good driver backup program? >.<
Revealed: Chrome Really Was Exploited At Pwnium 2013 [Monday]. Asks Freshly Exhumed: "So, was it really Google Chrome, or was Linux to blame?"
Re: (Score:2)
During the apple / java problem debacles the consensus arose that even though the flaw was in java, apple was to blame. Surely the same apples here.
FTFY
Linux or Chrome? (Score:4, Insightful)
Wasn't it both? They're both a component in the same vector.
Re:Linux or Chrome? (Score:5, Insightful)
Re: (Score:1)
Please consider that no OS is secure.
Maybe there is no "perfect security", but where is Microsoft's $40,000 for each exploit?
Re:Linux or Chrome? (Score:5, Insightful)
Re: (Score:2)
If a particular sequence of events is discovered that leads to a bug in the kernel being exposed, by all means push for that kernel bug to be fixed. But in the interim and for added safety, you might also want to hamstring access to that bug in your own code (Chrome in this case). That is my whole point, s
Re: (Score:1)
At least in the case of ChromeOS, where they maintain a fork of the kernel, it would make more sense to just fix the kernel bug and push the update to users. That they didn't do this suggests they were unaware of the bug.
The problem with Linux is that kernel bugs are committed constantly. It's just ridiculous how bad things have gotten. While everybody is arguing about namespaces, SELinux, etc, etc, these same people are committing egregious exploits.
I wouldn't share resources on a Linux instance if my life
Re: (Score:1)
Interesting theory... So, if buggy kernel returns true instead of (expected) false -- application should have logic that expects kernel to misbehave and somehow work out that kernel is lying? At this point I have only one question left -- do you have any programming experience?
To turn your question around, is it your position then that IE and Safari aren't at fault for being compromised here either, if the attack also relied on bugs in Windows and OS X (as they almost always do)?
Re: (Score:2)
If that ChromeOS box had been sitting minding its own business with no ports open, it wouldn't have been compromised. It WAS compromised because the Chrome browser opened ports, received data, and did things. Chrome is at fault.

That doesn't mean the bug in Linux shouldn't be fixed, but Chrome is the program that wasn't properly sanitizing outside data.
Re: (Score:2)
Are you saying they didn't also patch Chrome?
I don't know that it guarantees that it is a Linux problem. Did they modify the Linux source to do something specific for Chrome OS? Is the video parsing issue specific to Chrome OS? Did they do something non-standard or inherently unsafe with the config file?
Re: (Score:2)
So, it sounds like Linux.
Nope. It sounds like Chrome OS. It is a component of the Linux codebase that Google included in the Chrome OS codebase of their own volition, and it contained an exploitable bug that they did not catch in testing.

Most manufacturing these days is simply the assembly of components. Do Acer get absolved of all responsibility if a batch of hard drives fails? "Not our fault, blame Hitachi"? Exploding iPhone batteries -- "Don't blame Apple, they didn't make it"?
Re: (Score:2)
It sounds like both. Video drivers in the kernel and a config file error in Chrome.
Re:Linux or Chrome? (Score:5, Informative)
Of course, but let's make it a polarized finger-pointing issue anyway! That's the American way!
Yeah, it's Chrome's fault for letting the untrusted code claim to be somewhat trusted. It's the kernel's fault for letting that questionable code become fully trusted. It reminds me of one of my favorite exploits, where the kernel would helpfully drop core dumps in any directory the application said it was running from... including /etc/cron.d. A crafted program segfaulting could run a cron job to do anything as root. The attack didn't really exploit either program's faulty behavior, but rather the interaction between them.
Re: (Score:2)
Were the core dumps marked executable, or was cron sourcing the scripts instead of just executing them?
Re: (Score:2)
cron.d files aren't scripts and don't need to be executed or sourced (doing so wouldn't work anyway).
But if you arrange for

* * * * * root /home/me/something

to be at the start of a line in a file that is in /etc/cron.d, and the file also manages to be parsable by cron, then your something executable will be run as root.
It's up to you who to blame. (Score:4, Insightful)
The blame falls on neither or both of them. It's completely up to you.

If you are a Linux developer, you want to make sure that it remains secure even if Chrome fucks up. If you are a Chrome developer, you want to make sure you have covered all your bases for all the different OSes you are developing for. If you are a fanboy, you want to blame whatever product you aren't a fan of. If you are just a practical person, you care little about the blaming game and simply choose depending on which platform you are more invested in, Linux or Chrome.

PS: I still can't believe Google named its browser after an internal technology of Mozilla. Hell, I still can't believe MS named its VM after a TLD.
It's not a bug ... (Score:5, Funny)
The answer is: Yes (Score:4, Insightful)
The kernel shouldn't have had the bug, so Linux is to blame.
Chrome OS is built on Linux by choice, not necessity (they could have used FreeBSD, Minix, or even done a UI replacement of Windows if they wanted to spend more $$$), so... since they didn't fix the bug in their chosen (and open source) OS, it's their fault too.
Blame doesn't always have to fall on one party, it can fall on multiple parties who all didn't do due diligence, or no parties when the problem was from nature, and nobody could have reasonably predicted it.
Re:The answer is: Yes (Score:5, Insightful)
Re:The answer is: Yes (Score:4, Interesting)
That's like arguing that your supermarket isn't to blame if they sell horsemeat as steak or something. You blame them (they are responsible for vetting the product they provide), and they will blame their vendor (who sold them a bad product).

As parent said, blame isn't this binary thing; multiple people can be at fault simultaneously, but as Chrome's vendor, the end user should look to Google for answers -- it's not Linus' job to fix Chrome OS, it's Google's.
Re: (Score:2)
That's like arguing that your supermarket isn't to blame if they sell horsemeat as steak or something.
Well, sort of. I expect them to do their due diligence. If someone misleads them intentionally, that's a mitigating factor. You blame them, and identify the party at fault, and they remedy the situation. That's how it's supposed to work, anyway.
Yes, absolutely. The end user will get recompense, and internal auditing will identify the weak link and (hopefully) resolve the issue. However, this is totally separate from:
As parent said, blame isn't this binary thing; multiple people can be at fault simultaneously, but as Chrome's vendor, the end user should look to Google for answers -- it's not Linus' job to fix Chrome OS, it's Google's.
True, but all I'm saying is either the ChromeOS team screwed up something in the kernel, or the kernel was already exploitable.
Re: (Score:2)
Well, sort of. I expect them to do their due diligence.
And at the software level, due diligence means "testing".
True, but all I'm saying is either the ChromeOS team screwed up something in the kernel, or the kernel was already exploitable.
The kernel was already exploitable. But the ChromeOS team failed to catch it in testing.
Re: (Score:1)
Um, no. If ChromeOS introduced a bug into their version of the Linux kernel, then it's their fault. A more apt analogy would be that the store bought the meat, then added some horsemeat to it, and that horsemeat made people sick.
Not that it happened that way, of course, since we don't know if it was a Linux bug (more likely) or a ChromeOS bug.
Re: (Score:2)
If a can of 'black beans' turns out to contain pork and beans, that is NOT the supermarket's fault even if it is their responsibility to handle the situation with their supplier.
Re: (Score:2)
I would argue that if the bug is exploitable in non-ChromeOS kernels then Linux is to blame. If the bug was introduced by the ChromeOS implementation, then it's the fault of ChromeOS.
I would say that if the bug is exploitable in Chrome on other OSs, particularly non-Linux OSs, then the Chrome browser is to blame. If it is not exploitable in those OSs, but is exploitable under Linux based OSs, then Linux is to blame.
Of course, there is the possibility, indeed the probability, that every browser failed everywhere, in which case, both browser & OS were to blame.
Re: (Score:2, Funny)
But wait! The config file was really the kernel .config, and the error was setting CONFIG_ESCALATE_TO_ROOT_RANDOMLY=M.
Re: (Score:2)
Re: (Score:1)
Before we know details, we can't tell.
Generally, if the kernel wasn't doing something as advertised, or at least not at the "sanity" level expected, it's a kernel problem.
If it is about an interface that is easy to misuse, or one not guaranteed to be safe in the sense the Chrome sandbox expects (misuse or wrong assumption), then it's a bug in Chrome.
The rationale is that we sometimes can't draw a line and have to look at the system as a whole.
Who gives a fuck ? (Score:3, Insightful)
I mean, apart from academic curiosity, who does give a fuck whether the fault should be blamed on Linux or on Chrome?!
The REAL ACTUAL IMPORTANT part is that the problem got discovered, so you can expect that the kernel, the config file parser and the video decoder (or the video driver if it's hardware accelerated) will get patched, sent upstream and then a wave of updates will be pushed to all the various distributions affected by said bugs.
The world will be a safer place AND THAT'S WHAT MATTERS for everyone.
Not
Re: (Score:3)
I mean apart from academic curiosity, who does give a fuck if the fault should be blamed on Linux or on Chrome ?!
The Brand-tribes have to know so that the guilty tribe can sacrifice a virgin[0] to the volcano gods.
[0] obSlashdot: Insert slashdot "virgin" joke of your choice here.
Re: (Score:1)
Just like in the hardware world, where an OEM uses someone else's components to build a system: Google might not be directly responsible for the bug, but they should be held responsible for fixing it.
Just because something is software, the vendor that sold it to you should not be able to get away with it.
Re:PinkiePie (Score:5, Informative)
PinkiePie is one of the My Little Ponies. That handle's kinda cute, considering that those who are pwn'd are sometimes called Pwnies and there are the Pwnie Awards [wikipedia.org]. And all the bronies know that PinkiePie is the funniest of the ponies... not that I'd admit watching the show... wink, wink... ahem...
Re: (Score:3, Insightful)
Appropriate too. Pinkie Pie has a reputation for breaking the fourth wall and using that as a readily available exploit. Normal reality and its laws of physics simply don't apply.
Re: (Score:2, Funny)
Comment from a happy_place calling PinkiePie "kinda cute" is a bit amusing in itself, but not going to crack jokes on it here when what I find more interesting is the hint that there potentially is a hacker/cracker group out there called "My Little Pwnies". Will leave the humor and fact finding to those more interested and better suited for each of those categories.
Re: (Score:2, Funny)
All we need is the OCD freak who tests everything meticulously, the simple hard-worker who keeps at it, the rock-star coder obsessed with speed, the hacker who's all about style, and the shy introvert with a menagerie of botnets and they could summon the freaking elements of exploitation.
Re:Misleading (Score:5, Insightful)
You don't seem to understand how Pwn2Own works. People don't arrive at the contest, pick an OS/Browser and then start looking for an exploit.
They begin weeks in advance looking for exploits. IF they find one, then they go to the contest and select the appropriate platform and demonstrate the exploit. Their demonstration may fail, because the versions of the software on the contest platform might be different from what they were practicing with.
That no one "attempted to hack" OSX and Safari at the competition this year is because in the past few weeks of trying, no one has found an exploit for it. It's certainly not the case that they could have won the prize, but couldn't be bothered.
Re: (Score:1)
The OS X kernel and POSIX layer are all open source. There is no security through obscurity.
Re: (Score:1)
Re: (Score:2)
Right... that's why it was gleefully hacked in all of the preceding years. It's time to put that tired old chestnut to bed and think up some new material.
Re: (Score:1)
Well, yes and no. I would remind our readers that both Chrome and Safari are based on WebKit and soon, because I can't stop it, Opera too. So it's at least interesting that, despite this commonality, the attack was executed and somewhat sucessful on Chrome but not Safari. The point is with BSD and WebKit at the core of MacOS X and Safari, neither of them are completely obscure. Though I won't argue the disinterest part.
Re: (Score:2)
Yes, people just didn't want that money.
Re: (Score:2).
Where is the justice? (Score:2, Funny)
PinkiePie should be given at least 41 months behind bars!! Down with all "Hackers". Put them all in Jail!!!! PFFFFTTTTTT!!!!!
XEN para-virtualized browsers in Qubes OS (Score:5, Interesting)
The browser is a rather complex beast and there is probably no way that the application itself can ensure system integrity... at least with any consistency.
Some of us are migrating our online activities to Qubes OS [qubes-os.org] which is a desktop distro (I know...) that allows you to create App VM domains for things like "personal", "work", "unsafe", etc. and also a "disposable" one that gets reset on exit. Each domain of apps is displayed in window borders that have an associated color.
Taking it further, some of the commonly-attacked system components like the network stack are virtualized as well.
Qubes employs VT-x and VT-d/IOMMU hardware to allow you to operate different types of peripherals (like bluetooth) without incurring all of the risk they normally carry. Even device drivers are paravirtualized! So the attack surface that can be used against the core system (or any other domains in the system) is kept to a bare minimum.
An added benefit of this approach is that user activities are tracked somewhat less than normal (especially if you use disposable VMs).
Re: (Score:2)
"Even device drivers are paravirtualized"
How can you virtualise device drivers? Either they access hardware directly or they don't. If they don't then what runs the hardware?
Re: (Score:2)
IOMMU hardware in recent CPUs translates the device into the VMs address space. Then the kernel running inside the VM can use its normal drivers to operate the device at essentially full speed.
The kernel that runs the hypervisor only runs a limited set of drivers by default.
Re: (Score:1) [wikipedia.org]
drivers *not* paravirtualized (Score:2)
If you're using VT-d/IOMMU to assign a particular piece of hardware to a particular VM then that is not actually paravirtualized...you're basically running the normal driver in the VM.
Paravirtualized hardware would require the host OS to have the driver for the hardware, then present a different (usually simplified) API to the guest to allow it to do things more efficiently than fully virtualizing the hardware.
Re: (Score:2)
OK, thanks for correcting my terminology there!
Re:XEN para-virtualized browsers in Qubes OS (Score:5, Interesting)
You mean the same idea I've been asking for for about 15 years, otherwise known as bottling, process separation and lots of other fancy terms?
Your browser doesn't need access to the hard disk, except a single, solitary folder for downloads. That's it. It shouldn't even KNOW where that folder is, nor if it's in memory or a disk, or a network share. Hell, it shouldn't even be allowed to have the capability to look, let alone actually find out.
For uploads, the browser requests that YOU supply the information to the browser process bottle, and it takes it once supplied and does what's necessary. It has no need to have arbitrary access to every file visible on your system, only those it creates itself inside its bottle, or those you explicitly provide it with through some system mechanism. Similarly, it has no need to do anything more than put out a HTTP request and get a response.
Something else, somewhere, will handle, authorise and sanitise that request and response and do NOTHING else. Yes or NO. The program should have NO way to detect what that process is (so if the user wants to run in a zero-privilege environment, the browser just has to cope with that rather than say "I can't run without admin").
Now replace "browser" with "word processor", "spreadsheet", "hardware utility" or anything else that you use on your system.
The problem we have is that we've come from general purpose OS that were designed to let all processes have access to anything that wasn't explicitly locked away from them. The fix is to give processes the absolute bare minimum they require to do their work, make them ASK for everything, and refuse any request that you don't like. And make every process work (for the correct definition of work) even when tested inside a bottle that ALWAYS gets No to every request.
We've sort of tacked on such security features to today's OS (Unix-likes are certainly closer than, say, Windows), which historically always said "Yes", and now we have to start with one that says "No" all the time, for everything, and gives nothing to a process that isn't 100% necessary.
Replace all "file open dialog" actions with a system component that does NOTHING but let the user choose a file (Windows started out with the right idea here, but fails terribly in implementation). Hell, theming is then permanent and to the user's preference (and the program needs know NOTHING of the theme chosen or anything else) and nobody has to (or can) run around recreating an official file-open dialog. You can even "green-bar" official file-open dialogs (like we do with padlocks on SSL sites) so that they are distinguishable from rogue processes trying to create fake file-open dialogs (even though those would not be able to escape the bottle to read files anyway!). Make it so that NO other process can green-bar a file-open window except the file-open process.
Hell, why should a process even be able to know or change whether it's full-screen, windowed, the window size, etc.? Instantly you take ten options out of every game that has "recreated" those options and decisions for you and leave it to the user to decide. Game X will ALWAYS load fullscreen. Any process marked as a "Game" will only be fullscreen when I press this button. Or even "No process can EVER go fullscreen because I always like to see my Start Bar". And the process will have no way to know, and no way to override the decision of the user. All it knows is that it has a bitmap area it can draw to which is copied to the screen when it asks. It can't tell if that copy is a copy-and-scale into a window bitmap, or direct copy to video memory, or even just copy to a screenshot / VNC program.
Assuming a program wants to open a file, the program calls the function to open said dialog and is blocked until it returns. It can't do anything else but request it. The dialog is run in a process all of its own and has access to read file names in user-allowed folders, display things in a file-open dialog on-screen (again, subj
Re: (Score:2)
Re: (Score:2)
It's only evil when it's forced on the user. User access to the entire filesystem on their own device IS a right.
Re: (Score:1)."
Re: (Score:2, Informative)."
Total nonsense. Apple's approach to sandboxing is centered around giving users the very power you claim they're denying. Say a user wants to open a file in a sandboxed app, and that file doesn't reside inside the app's private storage area (the isolated chunk of filesystem where it is free to create/open/delete files at will). The user uses the "open" command from the menu or the keyboard, same as any pre-sandbox GUI app. But instead of presenting its own UI for file opening, the sandboxed app makes a l
Re: (Score:2)
Cliff Notes version: Modern OSes should adhere to the principle of least privilege.
Re: (Score:2)
The principle has always been good, but implementing it in a way where you can actually work with it has always been harder. Most systems that try to do this don't have a good answer to letting the user deal with it.
Re: (Score:2)
IMHO this is more like a work-around that a real solution. You have to allocate RAM for each application VM, which is really unpractical and will impact performance. A better approach seems to be to build security from the top down by extending chroot into things like Linux Containers and FreeBSD jails. You get similar isolation for applications, but full system performance. You don't have the extra security of isolation for the drivers, that wouldn't be possible in Linux without virtualisation. However, i
Re: (Score:2)
Yes, nothing is 100% efficient or safe. I don't mind throwing a couple extra GB at the problem of security, however, especially if it gets me robust hardware-backed isolation.
Chrome hack to get GPU (Score:3, Interesting)
Chrome OS bug:
The CVE-2013-0913 hack was was a buffer overflow in the GPU for Chrome OS / Linux. [nist.gov]
Chrome browser bug:
Last year's PinkiePie hack chained multiple Chrome (browser) bugs together to be able to get to the GPU. [webpronews.com]
They didn't release details yet, but odds are since it's the same person he probably used a similar method to hack the browser and get access to the GPU of the OS.
Re: (Score:2)
They find them ahead of time.
Re: (Score:2)
Microsoft sympathizers at Google
OK, that was funny.
OSX/Safari (Score:2)
So OSX/Safari was the only one standing?
Re: (Score:1)
Considering OS X has a much higher share VS. Linux, and the fact that Safari is used on hundreds of millions of iOS devices...your comment is...full of shit?
Re: (Score:1)
Yea, call it what you want AC but Safari was apparently the most secure among the bunch. Like it or not.
Sent from my iPhone
Analyzing the exploit (Score:3)
So the attack would likely involve a web page employing hardware acceleration, that leaks an overflow into the i915 driver, resulting in
Calling it not reliable means that there isn't a deterministic way to establish the system state needed for the exploit to work.
Google has fixed Chrome already - and now we need to watch what gets upstreamed in the i915 driver for the next week or so.
p.s. PinkiPie da Man (or woMan, don't know gender).
Chrome IS Linux/GNU/X (Score:2)
So, was it really Google Chrome, or was Linux to blame?
I hate to tell all you Linux dislikers this, but here it is: Chrome is just another Linux/GNU/X-windows distribution. What differentiates "Chrome" from others? A thin layer of links to Google web sites on the user interface.
Wrong wrong wrong (Score:2)
See: this discussion [slashdot.org], for example.
Obvious troll article (Score:2, Insightful)
Re: (Score:1) | https://tech.slashdot.org/story/13/03/19/1328244/revealed-chrome-really-was-exploited-at-pwnium-2013?sdsrc=prev | CC-MAIN-2018-22 | refinedweb | 4,232 | 71.04 |
How To Connect MySQL Database From Python Application and Execute Sql Query with Examples
MySQL is very popular and open source database server. Python is old but lately discovered language. In this tutorial we will look how to use this popular techs in our applications in order to run sql queries.
Install Python MySQL Connector
In order to connect MySQL database from Python application we need some libraries. There are different libraries those supports running sql queries on MySQL but we will stick with
mysql-connector-python3 .
Fedora, CentOS, RHEL
We can install mysql-connector-python3 library with the following command. This requires root privileges.
Debian, Ubuntu,Mint
We can install mysql-connector-python3 library with the following command. This requires root privileges.
Pip
Pip can be used to install mysql connector library.
Load MySQL Library
In order to use MySQL library we need to import it into our application. We will use
import statement to import library which is named
mysql.connector .
Connect MySQL Database with Username and Password
Now we can connect to the database. While connecting MySQL database we to provide following parameters to the
mysql.connector.connect function
useris the username to authenticate
passwordis the password of the user
hostthe database server hostname or IP address
databaseis optional which provides the database name
Run and Execute SQL Query
In order to run SQL query we need to create a cursor which is like a SQL query window in GUI SQL tools. We will use
dbcon.cursor() and then use created
cursor objects
execute function by providing the SQL.
Print and List Returned Data
After executing the query the results will be saved to the cursor object named
cur . We can get and list returned information from
cur object. Provided data is stored in a list format. So we can use loops to iterate and print.
Close MySQL Connection
One of the most important part of database programming is using sources very strictly. In order to prevent performance problems we need to close the connection to the MySQL database after finishing job. We will use
close() function of connection object. | https://www.poftut.com/connect-mysql-database-python-application-execute-sql-query-examples/ | CC-MAIN-2019-22 | refinedweb | 353 | 56.96 |
JSON is a text format that is widely used as a data-interchange language because its parsing and its generation are easy for programs. It is slowly replacing XML as the most powerful data interchange format, as it is lightweight, consumes less bandwidth, and is also platform-independent. Though Java doesn't have built-in support for parsing JSON files and objects, there are a lot of good open-source JSON libraries are available which can help you to read and write JSON objects to file and URL. Two of the most popular JSON parsing libraries are Jackson and Gson. They are matured, rich, and stable. :
<groupid>com.googlecode.json-simple</groupid> <artifactid> json-simple</artifactid> <version>1.1</version>
Otherwise, you have to add the newest version of json-simple-1.1.1.jar in CLASSPATH of your Java program. Also, Java 9 is coming up with built-in JSON support in JDK, which will make it easier to deal with JSON format, but that will not replace the existing Jackson and GSON library, which seems to be very rich with functionality.
How to create JSON File in JavaJSONParser parse a JSON file and return a JSON object.
Once you get JSONObject, you can get individual fields by calling get() method and passing name of the attribute, but you need to remember it to typecast in String or JSONArray depending upon what you are receiving.
Once you receive the array, you can use the Iterator to traverse through JSON array. This way you can retrieve each element of JSONArray in Java. Now, let's see how we can write JSON String to a file. Again we first need to create a JSONObject instance, then we can put data by entering key and value. If you have not noticed the similarity then let me tell you, JSONObject is just like Map while JSONArray is like List.
You can see code in your write method, that we are using put() method to insert value in JSONObject and using add() method to put the value inside JSONArray object. Also note, the array is used to create a nested structure in JSON. Once your JSON String is ready, you can write that JSON String to file by calling toJSONString() method in JSONObject and using a FileWriter to write that String to file.
import java.io.FileReader; import java.io.FileWriter; import java.io.IOException; import java.util.Iterator; import org.json.simple.JSONArray; import org.json.simple.JSONObject; import org.json.simple.parser.JSONParser; /** * Java Program to show how to work with JSON in Java.
* In this tutorial, we will learn creating * a JSON file, writing data into it and then reading from JSON file. * * @author Javin Paul */ public class JSONDemo{ public static void main(String args[]) { // generate JSON String in Java writeJson("book.json"); // let's read readJson("book.json"); }
/* * Java Method to read JSON From File */ public static void readJson(String file) { JSONParser parser = new JSONParser(); try { System.out.println("Reading JSON file from Java program"); FileReader fileReader = new FileReader(file); JSONObject json = (JSONObject) parser.parse(fileReader); String title = (String) json.get("title"); String author = (String) json.get("author"); long price = (long) json.get("price"); System.out.println("title: " + title); System.out.println("author: " + author); System.out.println("price: " + price); JSONArray characters = (JSONArray) json.get("characters"); Iterator i = characters.iterator(); System.out.println("characters: "); while (i.hasNext()) { System.out.println(" " + i.next()); } } catch (Exception ex) { ex.printStackTrace(); } }
/* * Java Method to write JSON String to file */ public static void writeJson(String file) { JSONObject json = new JSONObject(); json.put("title", "Harry Potter and Half Blood Prince"); json.put("author", "J. K. Rolling"); json.put("price", 20); JSONArray jsonArray = new JSONArray(); jsonArray.add("Harry"); jsonArray.add("Ron"); jsonArray.add("Hermoinee"); json.put("characters", jsonArray); try { System.out.println("Writting JSON into file ..."); System.out.println(json); FileWriter jsonFileWriter = new FileWriter(file); jsonFileWriter.write(json.toJSONString()); jsonFileWriter.flush(); jsonFileWriter.close(); System.out.println("Done"); } catch (IOException e) { e.printStackTrace(); } } } Output: Writting JSON into file ... {"author":"J. K. Rolling","title":"Harry Potter and Half Blood Prince", "price":20,"characters":["Harry","Ron","Hermione"]} Done Reading JSON file from Java program title: Harry Potter and Half Blood Prince author: J. K. Rolling price: 20 characters: Harry Ron Hermione
That's all about how to parse JSON String in Java program. You can also use other popular libraries like Gson or Jackson to do the same task. I like the JSON simple library to start with because it's really simple, and it provides a direct mapping between Java classes and JSON variables.
For example, String in Java also maps to string in JSON, java.lang.Number maps to number in JSON, and boolean maps to true and false in JSON, and as I have already said object is Map and array is List in Java.
All I can say is JSON is already a big thing and in the coming days every Java programmer is going to write more and more code to parse or encode decode JSON String, it's better to start early and learn how to deal with JSON in Java.
Other JSON tutorials you may like to explore
- How to convert a JSON String to POJO in Java? (tutorial)
- 3 Ways to parse JSON String in Java? (tutorial)
- How to convert a JSON array to a String array in Java? (example)
- How to convert a Map to JSON in Java? (tutorial)
- How to use Google Protocol Buffer in Java? (tutorial)
- How to use Gson to convert JSON to Java Object? (example)
- 5 Books to Learn REST and RESTful Web Services (books)
P.S. - If you want to learn how to develop RESTful Web Services using Spring Framework, check out Eugen Paraschiv's REST with Spring course. He has recently launched the certification version of the course, which is full of exercises and examples to further cement the real world concepts you will learn from the course.
11 comments :
Why you think JSON is better than XML? In my opinion XML has more tools available than JSON e.g. XPath, XSLT and XQueyr.
When you convert a JSON String to Java Object, does libraries also take care of encoding? I am new to JSON and don't know if it contains encoding anywhere in header just like XML does. Any idea?
Oracle has proposed to drop lightweight JSON API from Java 9 in favor of two other features, value type and generics over primitive types. This is what Oracle's head of Java Mark has to say
“We may reconsider this [JSON API] JEP for JDK 10 or a later release, especially if new language features such as value types and generics over primitive types (JEP 218) enable a more compact and expressive API.” – Mark Reinhold
i have a txt file like
S2F3
accept reply: true
L,3
ASC,"This is some text that
gets to the next line
and the next line and possibly any number of lines."
L,3
integer,6
bool,true
float,5.21
L,2
ASC,"More text"
L,1
integer,8
S10F11
accept reply: false
asc,"test2"
S9F1
accept reply: false
L,0
S1F15
accept reply: true
L,4
bool,true,false,true,true,false
Integer,3,5,6,3,7,2,6
float,9
L,3
integer,4
L,2
float, 4.6,9.3
bool, false
L,2
L,1
L,2
integer,5
L,0
L,3
asc,"test3"
asc,"test
4"
L,0
this file convert into json formate how can i do. json formate like this
[{"header":{"stream":2,"function":3,"reply":true},"body":[{"format":"A","value":"This is some text that\ngets to the next line\n and the next line and possibly any number of lines."},[{"format":"U4","value":6},{"format":"Boolean","value":true},{"format":"F4","value":5.21}],[{"format":"A","value":"More text"},[{"format":"U4","value":8}]]]},{"header":{"stream":10,"function":11,"reply":false},"body":{"format":"A","value":"test2"}},{"header":{"stream":9,"function":1,"reply":false},"body":[]},{"header":{"stream":1,"function":15,"reply":true},"body":[{"format":"Boolean","value":[true,false,true,true,false]},{"format":"U4","value":[3,5,6,3,7,2,6]},{"format":"F4","value":9},[{"format":"U4","value":4},[{"format":"F4","value":[4.6,9.3]},{"format":"Boolean","value":false}],[[[{"format":"U4","value":5},[]]],[{"format":"A","value":"test3"},{"format":"A","value":"test\n4"},[]]]]]}]
please help
thank you
I am getting compile time error "Cannot cast from Object to long" at line long price = (long) json.get("price");
It should be like :long price = (Long) json.get("price");
What I have noticed is that you cannot retrieve an int value. But long is possible.
want to get values from jSON file.facing some errors.I dont want to use "parse".
Using JSONObject obj=new JSONObject();
or using JSONArray
I am getting error as "org.json.simple.JSONArray cannot be cast to org.json.simple.JSONObject" inside readJSON method.
Earliest help would be much appreciated.
how can we convert a json object to in a string object
just put.asString();
{"Status":"1","Error":"-","Data": "test data"}
i want to convert this json formate into java please help | https://javarevisited.blogspot.com/2014/12/how-to-read-write-json-string-to-file.html | CC-MAIN-2021-49 | refinedweb | 1,526 | 57.47 |
We’re happy to announce the availability of our newest free ebook, Introducing Windows Server 2016 (ISBN 9780735697744), by John McCabe and the Windows Server team. Enjoy!.
Introduction related.
About the author.
When will be Kindle?
Today is 4.10 and there are not 🙁
import it into calibre, export it as a mobi or azw. Or docx. or ….
Send the pdf file to email of your device @amazon.com and syncronize… after this your ebook will be displayed in collection.
Except it will still handle and look like a pdf which can be troublesome when converting to any of those formats. Text wont fit right, menu might not work, etc.
If they had released it like epub then it could easily be converted to any other format…
Try to use Kindle DX (9 inch screen) PDFs are there more welcome 🙂
Ok
My whatsapp is not working
Great eBook! Let’s read.
Thank you John !
What a great book. Timely released. More resources in less volume.
Thanks John.
thanks for this ebook! i will take a look
Great ebook! Important informations…
Por favor, editen una versión en español.
Thank you for update news.
Thanks
will be help
Thanks
Thanks, for ebook and all my wishes to Microsoft for great deals
Thanks John.
thank you
Where is the download link??
Thanks John
Thank you John.
Thank You, John McCabe and the Windows Server team for this book. Thanks Kim for publishing this on the portal.
Thank you for the great ebook – do i Need Windows 10 Enterprise to read it or is Windows 10 Pro working too?! 😀
When I am going to download the ebook, there is an ERROW:
“There is a problem with this website’s security certificate. This organization’s certificate has been revoked.”
Teşekkürler
its great
thanks for this ebook! i will take a look
Thank you Guys for keeping the technology rolling we always need to improve!
THANK YOU
Thanks for sharing this eBook.
Aspetto che lo pubblichiate in italiano
Seria de gran ayuda e interesante que enviaran estos correos en Español
muchas gracias
Gostei
Thanks in advance.
Nice one, thank you.
Thanks, just what I needed !
Very pleased to get this ebook. thank you John
I need it. Thank you so much.
Bem que poderia ser em portugues.
It’s can not be download.
Send me the ebook copy server 2016
this version of Windows Server, help Surface RT to prepare for Windows Server 2016 and give the means to develop and design. A path to introduce Windows Server 2016 into the RT environment for full advantage of what is to come.
Available now on Kindle.
Great
Thanks John
As a former engineer whose epileptic condition is visibly in recession its nice to touch base with old haunts. Realize that looking back for former models of what developing, the older the analogy, the more dependable the resource.
need ebook microsoft server2016
Thanks
well,its god but it lacks the installation and upgrade step by step practical guideline needed. try to include practical step-by step guidelines that can be implemented in the test lab right away next time. thanks
Isn’t that what TechNet is for?
Thanks for the book!
Looking for ePub version
how to buy the microsoft ebook?
Thanks!
Thank you very much
For waht it’s worth, I’d like to see the ebook released in epub and mobi formats as well. I have a Kindle Oasis I bought specifically to take backpacking with me. It’s smaller and lighter than any tablet and has at least a months of battery life… versus hours of battery life with tablets.
I agree with Marlon, pdf files on Kindle devices, especially the ‘page white’ Kindles, are not a good combination. I think the only advantage a pdf file has is the original publisher intended formatting is preserved which includes an annoyingly fat border between the text and the edges of the screen. A mobi version of the ebook would allow the text to reflow to the edges of the screen and allow me to choose a smaller font size to get more text on the screen. Both of these would result in fewer ‘page turns’ (screen refreshes) which, in turn, results in longer battery life. Since page white Kindles only use power refresh the screen, the fewer times I have to change pages, the longer the battery lasts.
Thank you.
I am Abhishek kumarRao/sarojkumar/a geethamarani x2000x2016-17.com
Thanks, for a book.
Thanks
When will the ePub and mobi versions of John McCabe and the Windows Server team
be released: Introducing Windows Server 2016 so that it read with Apple iPad iBooks.
We are unable to offer the ePub and Mobi formats of this book at this time.
Well, very nice that eBook.
Great information.
i do not know
I need this ebook. thanks
Webroot Support has become a necessity for the clients, as it is not easy to manage all the attributes of it. Hence if a user wants to enjoy Webroot services to the fullest then he must take the help of Webroot technical support team.
Thank you so much for sharing this e-book with us. I really like it, there are lots of new things into about the new technology.
I Like It
Thank You, John McCabe and the Windows Server team for this book. Thanks Kim for publishing this on the portal.
Thank You, John McCabe and the Windows Server team for this book. Thanks Kim for publishing this on the portal. gREAT rEAD UP OF COURSE | https://blogs.msdn.microsoft.com/microsoft_press/2016/09/26/free-ebook-introducing-windows-server-2016/ | CC-MAIN-2018-30 | refinedweb | 928 | 75 |
What is Windows 10 IoT Core?
Commercializing Your IoT Core Solution
Develop for IoT Core
Episode 54: VS for Linux with Ankit Asthana
C++ IDE Improvements
C++ - Language Conformance
C++ - Build Universal Windows Apps
Cross-Platform Mobile Application Development
VS 2015 C++ Cross Platform Mobile Application Development: new iOS support and updated…
OGRE you ready to create Universal projects?
The Discussion
BenZillaReally good interview.
You guys should put up some kind of cheat sheet with the most usefull keyboard commands somewhere.
UnoriginalGuy
Wonderful interview. VERY interesting stuff here guys. I really learnt a trick or two.
If a VS VC++ team member drops by this thread, I would like to know if there is any information, good or bad, about support for VC++ being added to the VS Class Designer (as supported for C#, VB.net etc).):
justAnExample would only select Example and the next tab on the left key would select AnExample. It's also useful with ctrl+backspace as I hardly ever use the backspace one single time, it's usually quicker to delete the entire word and retype it
.
BorisJ
Trackerball, I was definitely not trying to skirt the issue, it's more that I was not always expecting those questions
The short answer to your question is this:
- a managed class (ref class A {}) can only hold pointers to native types
- a native class (class B {}) can not hold a managed handle directly, instead you should use gcroot<T> as follows:
#include <vcclr.h>
using namespace System;
class B {
public:
gcroot<String^> str; // can use str as if it were String^
B() {}
};
int main() {
B b;
b.str = gcnew String("hello");
Console::WriteLine( b.str ); // no cast required
}
I hope that answers the question. If you want to learn more, I gave a talk about interop at Tech-Ed and I have some posts about it over at.
-- Boris. | https://channel9.msdn.com/Blogs/Charles/VC-2005-IDE-Tips-and-Tricks?format=progressive | CC-MAIN-2017-43 | refinedweb | 313 | 59.74 |
Hi Guys any Idea how to Export A Gridview result to an Image file (such as jpeg or Bitmap file) ?
Thanks In advance
when you say gridview result, I usually have images in the gridview with img tags so the gridview never actually sees the jpg. You can certainly use code like the following to extract a jpg from an assembly. The code I'm pasting below is
an HttpHandler that gets the Image data and then displays it like an img tag.
public class CaptchaTypeHandler : IHttpHandler
{
public void ProcessRequest(HttpContext context)
{
if (context.Request.QueryString["encryptedvalue"] == null)
{
// likely, this is in design mode and we want to just show the
// default image for type 2.
Assembly assembly = Assembly.GetExecutingAssembly();
Stream imgStream =
assembly.GetManifestResourceStream("CaptchaUltimateCustomControl.Images.CaptchaType2.gif");
byte[] ba = new byte[imgStream.Length];
imgStream.Read(ba, 0, Convert.ToInt32(imgStream.Length));
context.Response.BinaryWrite(ba);
}
Thanks Peter ,
Actually The code you wrote was useful , But I mean I want to Export a Gridview , Just Like Report or something . I mean I want to see the Resultset of a grid in an Image File , Do you Think is that Really Possible ? I want to Fax The Grid Search Result
Thanks in advance
You can use DrawLine method to draw some grids like the source data table by yourselft, then save the image and fax it. | https://social.msdn.microsoft.com/Forums/en-US/4c9f1afd-cc32-4435-aa93-7d53c5c4462f/girdview-export-to-image-file?forum=aspsystemdrawinggdi | CC-MAIN-2021-49 | refinedweb | 225 | 55.54 |
Placeholder metadata for operands of distinct MDNodes. More...
#include "llvm/IR/Metadata.h"
Placeholder metadata for operands of distinct MDNodes.
This is a lightweight placeholder for an operand of a distinct node. It's purpose is to help track forward references when creating a distinct node. This allows distinct nodes involved in a cycle to be constructed before their operands without requiring a heavyweight temporary node with full-blown RAUW support.
Each placeholder supports only a single MDNode user. Clients should pass an ID, retrieved via getID(), to indicate the "real" operand that this should be replaced with.
While it would be possible to implement move operators, they would be fairly expensive. Leave them unimplemented to discourage their use (clients can use std::deque, std::list, BumpPtrAllocator, etc.).
Definition at line 1281 of file Metadata.h.
Definition at line 1287 of file Metadata.h.
References llvm::Metadata::SubclassData32.
Definition at line 1296 of file Metadata.h.
Definition at line 1301 of file Metadata.h.
References llvm::Metadata::SubclassData32.
Replace the use of this with MD.
Definition at line 1304 of file Metadata.h.
References assert(), llvm::MetadataTracking::track(), and llvm::MetadataTracking::untrack().
Definition at line 1282 of file Metadata.h. | http://llvm.org/doxygen/classllvm_1_1DistinctMDOperandPlaceholder.html | CC-MAIN-2018-51 | refinedweb | 200 | 52.36 |
Originally posted by Michael Tu: 1. public interface DataInterface: All basic database operations are defined here, including criteriaFind, lock and unlock. Generic Exception is thrown in all methods.
4. public class LockManager: LockManager is implemented as Singleton. It keeps a reference to Data.
6. public interface RemoteDataInterface extends DataInterface, Remote: Basic database methods are exported here again. DatabaseException and RemoteException are thrown for all methods.
10. public class ClientData: This is a wrapper class for local or remote database services. It automatically connects to local or remote database based on the database address argument upon instantiation. Again, all basic database operations are implemented.
You can make a case for throwing IOException, but Exception...?
Apart from the FBN assignment, how many database-driven applications do you know that use exactly one database table? In general, Singleton is the most abused design pattern around. Never use it unless there is a solid, fundamental reason why you can have only one instance of a class.
Why bother? Just have code: interface RemoteDataInterfaceextends DataInterface, Remote { // that's all folks}
All you need is a place to instantiate a DataInterface object that is either Data, or a RemoteData stub.
Originally posted by Michael Tu: [...] If you list all of them, the method definitions will be quite long, which I also don't like. A generic one looks more concise.
[...] I don't quite understand what you mean by mentioning 'one database table'? [...]
The LockManager keeps a reference to Data for some basic database info, such as total record count.
No need to export methods here? I may have a try.
This is what I do.
Does this class need to be aware of anything other than the locked record numbers and their respective clients?
So where does the ClientData wrapper class come in?
This implementation should include a class that implements the same public methods as the suncertify.db.Data class, although it will need different constructors to allow it to support the network configuration.
Originally posted by Michael Tu: 1) Change the method signatures in DataInterface to throw IOException and DatabaseException.
2) Not implement the LockManager as Singleton.[...]
The ClientData class is implemented merely for this requirement: This implementation should include a class that implements the same public methods as the suncertify.db.Data class[...]
You see, it says "should include". It doesn't say that you have to be the one writing it. The stub for RemoteData generated by rmic satisfies the requirement on all counts.
Originally posted by Michael Tu: Having such a class will do no harm though it is not necessary.
When I implement sth, I am always bugged by this question : does the assessor accept this approach?
[...] Now I find it not very good as LockManager is solely used by RemoteData but declared outside of it. [...]
Originally posted by Peter den Haan: Why bother? Just have interface RemoteDataInterface
extends DataInterface, Remote {
// that's all folks
}
Originally posted by Eddie Sheffield: [QBDon't you have to explicitly rename all the methods and explicitly throw RemoteException?[/QB]
Originally posted by Michael Tu: 1) A client may stay connected throughout or may work in cyclic mode of connection-transaction-close. Both have drawbacks.
2) For local mode, is it really not necessary to implement lock/unlock?
HTMLEditorKit to display it (read-only) in an application window | http://www.coderanch.com/t/182525/java-developer-SCJD/certification/Final-check-submission | CC-MAIN-2014-49 | refinedweb | 554 | 50.02 |
You’ve just gotten done with your React 18 upgrade, and, after some light QA testing, don’t find anything. “An easy upgrade,” you think.
Unfortunately, down the road, you receive some internal bug reports from other developers that make it sound like your debounce hook isn’t working quite right. You decide to make a minimal reproduction and create a demo of said hook.
You expect it to throw an “alert” dialog after a second of waiting, but weirdly, the dialog never runs at all.
This is strange because it was working just last week on your machine! Why did this happen? What changed?
The reason your app broke in React 18 is that you’re using
StrictMode.
Simply go into your
index.js (or
index.ts) file, and change this bit of code:
jsx
render(<StrictMode><App /></StrictMode>);
To read like this:
jsx
render(<App />);
All of the bugs that were seemingly introduced within your app in React 18 are suddenly gone.
Only one problem: These bugs are real and existed in your codebase before React 18 - you just didn’t realize it.
Proof of broken component
Looking at our example from before, we’re using React 18’s
createRoot API to render our
App inside of a
StrictMode wrapper in lines 56 - 60.
Currently, when you press the button, it doesn’t do anything. However, if you remove the
StrictMode and reload the page, you can see an
Alert after a second of being debounced.
Looking through the code, let’s add some
console.logs into our
useDebounce, since that’s where our function is supposed to be called.
jsx
function useDebounce(cb, delay) {const inputsRef = React.useRef({ cb, delay });const isMounted = useIsMounted();React.useEffect(() => {inputsRef.current = { cb, delay };});return React.useCallback(_.debounce((...args) => {console.log("Before function is called", {inputsRef, delay, isMounted: isMounted()});if (inputsRef.current.delay === delay && isMounted())console.log("After function is called");inputsRef.current.cb(...args);}, delay),[delay]);}
Before function is called Object { inputsRef: {…}, delay: 1000, isMounted: false }
Oh! It seems like
isMounted is never being set to true, and therefore the
inputsRef.current callback is not being called: that’s our function we wanted to be debounced.
Let’s take a look at the
useIsMounted() codebase:
jsx
function useIsMounted() {const isMountedRef = React.useRef(true);React.useEffect(() => {return () => {isMountedRef.current = false;};}, []);return () => isMountedRef.current;}
This code, at first glance, makes sense. After all, while we’re doing a cleanup in the return function of
useEffect to remove it at first render,
useRef's initial setter runs at the start of each render, right?
Well, not quite.
What changed in React 18?
In older versions of React, you would mount a component once and that would be it. As a result, the initial value of
useRef and
useState could almost be treated as if they were set once and then forgotten about.
In React 18, the React developer team decided to change this behavior and re-mount each component more than once in strict mode. This is in strong part due to the fact that a potential future React feature will have exactly that behavior.
See, one of the features that the React team is hoping to add in a future release utilizes a concept of “reusable state”. The general idea behind reusable state is such that if you have a tab that’s un-mounted (say when the user tabs away), then re-mounted (when the user tabs back), React will recover the data that was assigned to said tab component. This data being immediately available allows you to render the respective component immediately without hesitation.
Because of this, while data inside of, say,
useState may be persisted, it’s imperative that effects are properly cleaned up and handled properly. To quote the React docs:
This feature will give React better performance out-of-the-box but requires components to be resilient to effects being mounted and destroyed multiple times.
However, this behavior shift in Strict Mode within React 18 isn’t just protective future-proofing from the React team: it’s also a reminder to follow React’s rules properly and to clean up your actions as expected.
After all, the React team themselves have been warning that an empty dependent array (
[] as the second argument) should not guarantee that it only runs once for ages now.
In fact, this article may be a bit of a misnomer - the React team says they’ve upgraded thousands of components in Facebook’s core codebase without significant issues. More than likely, a majority of applications out there will be able to upgrade to the newest version of React without any problems.
All that said, these React missteps crawl their way into our applications regardless. While the React team may not anticipate many breaking apps, these errors seem relatively common enough to warrant an explanation.
How to fix the remounting bug
The code I linked before was written by me in a production application and it's wrong. Instead of relying on
useRef to initialize the value once, we need to ensure the initialization runs on every instance of
useEffect.
jsx
function useIsMounted() {const isMountedRef = React.useRef(true);React.useEffect(() => {isMountedRef.current = true; // Added this linereturn () => {isMountedRef.current = false;};}, []);return () => isMountedRef.current;}
This is true for the inverse as well! We need to make sure to run cleanup on any components that we may have forgotten about before.
Many ignore this rule for
App and other root elements that they don’t intend to re-mount, but with new strict mode behaviors, that guarantee is no longer a safe bet.
To solve this application across your app, look for the following signs:
- Side effects with cleanup but no setup (like our example)
- A side effect without proper cleanup
- Utilizing
[]in
useMemoand
useEffectto assume that said code will only run once
One this code is eliminated, you should be back to a fully functioning application and can re-enable StrictMode in your application!
Conclusion
React 18 brings many amazing features to the table, such as new suspense features, the new useId hook, automatic batching, and more. While refactor work to support these features may be frustrating at times, it’s important to remember that they a serve real-world benefit to the user.
For example, React 18 also introduces some functionality to debounce renders in order to create a much nicer experience when rapid user input needs to be processed.
For more on the React 18 upgrade process, take a look at our instruction guide on how to upgrade to React 18 | https://unicorn-utterances.com/posts/why-react-18-broke-your-app | CC-MAIN-2022-33 | refinedweb | 1,097 | 54.52 |
GAO
United States General Accounting Office
General Government Division
Washington, D.C. 20548

September 1992

Report to Congressional Requesters

SECURITIES INVESTOR PROTECTION: The Regulatory Framework Has Minimized SIPC's Losses

GAO/GGD-92-109
B-248152

September 28, 1992

The Honorable Donald W. Riegle, Jr.
Chairman, Committee on Banking, Housing, and Urban Affairs
United States Senate

The Honorable John D. Dingell
Chairman, Subcommittee on Oversight and Investigations
Committee on Energy and Commerce
House of Representatives

This report responds to your requests that we review the operations and solvency of the Securities Investor Protection Corporation (SIPC). It discusses how the regulators' success in protecting customers depends upon the quality of regulatory oversight of the securities industry. We also provide recommendations to improve Securities and Exchange Commission (SEC) and SIPC disclosures to customers and SEC's oversight of SIPC's operations.

We will send copies of this report to the Chairman, SIPC; the Chairman, SEC; appropriate congressional committees and subcommittees; and other interested parties. We will also make copies available to others upon request.

This report was prepared under the direction of Craig A. Simmons, Director, Financial Institutions and Markets Issues, who may be reached on (202) 275-8678 if there are any questions concerning the contents of this report. Other major contributors to this report are listed in appendix IV.

Richard L. Fogel
Assistant Comptroller General
Executive Summary
Purpose
Congress created the Securities Investor Protection Corporation (SIPC) in 1970 after a large number of customers lost money when they were unable to obtain possession of their cash and securities from failed broker-dealers. SIPC was established to promote public confidence in the nation's securities markets by guaranteeing the return of property to small investors if securities firms fail or go out of business. SIPC is a member-financed, private nonprofit corporation with statutory authority to borrow up to $1 billion from the U.S. Treasury. This report responds to requests by the Senate Banking Committee and the House Energy and Commerce Subcommittee on Oversight and Investigations that GAO report on several issues, including (1) the exposure and adequacy of the SIPC fund, (2) the effectiveness of SIPC's liquidation oversight efforts, and (3) the disclosure of SIPC protections to customers.
Background
The law that created SIPC also required the Securities and Exchange Commission (SEC) to strengthen customer protection and increase investor confidence in the securities markets by increasing the financial responsibility of broker-dealers. Pursuant to this mandate, SEC developed a framework for customer protection based on two key rules: (1) the customer protection rule and (2) the net capital rule. These rules respectively require broker-dealers that carry customer accounts to (1) keep customer cash and securities separate from those of the company itself and (2) maintain sufficient liquid assets to protect customer interests if the firm ceases doing business. In essence, SIPC is a back-up line of protection to be called upon generally in the event of fraud or breakdown of the other regulatory protections. Except for certain specialized broker-dealers, all securities broker-dealers registered with SEC are required to be members of SIPC. Other types of financial firms that are involved in the purchase or sale of securities products, such as open-end investment companies and certain types of investment advisory firms, are not permitted to be SIPC members. As of December 31, 1991, SIPC had 8,153 members. Of this number, only 954 are authorized to receive and hold customer property. The rest either trade exclusively for their own accounts or act as agents in the purchase or sale of securities to the public. SEC and SIPC officials estimate that over $1 trillion of customer property is held by SIPC members.

SIPC is not designed to keep securities firms from failing or, as in the case of deposit insurance for banks, to shield customers from changes in the market value of their investment. Rather, SIPC has the limited purpose of ensuring that when securities firms fail or otherwise go out of business, customers will receive the cash and securities they own up to the SIPC limits of $500,000 per customer, of which $100,000 may be used to satisfy claims for cash. Thus, the risks to the taxpayer inherent in SIPC are less than those associated with the deposit insurance system.

SEC and self-regulatory organizations, such as the New York Stock Exchange, are responsible for enforcing the net capital and customer protection rules. However, if a firm is in danger of failing and customer accounts are at risk, SIPC may initiate liquidation proceedings. SEC and industry participants do not expect that SIPC's back-up role in liquidating firms should be needed very often, which both reduces SIPC's exposure to loss and minimizes potential adverse market impacts.

SIPC liquidation proceedings can be quite complex, and it can take weeks or longer before customers receive the bulk of their property. In the 20 years since its inception, SIPC has been called on to liquidate 228 firms, most of which have involved fewer than 1,000 customers. The revenues available to the SIPC fund have been sufficient to meet all liquidation and administrative expenses, which totaled $236 million. As of December 31, 1991, the accrued balance of the fund stood at $653 million, the highest level ever. After conducting a review of its funding needs, SIPC adopted a policy to increase its reserves to $1 billion by 1997. SIPC and SEC officials believe that reserves of this level, augmented by bank lines of credit of $1 billion and also by a $1 billion line of credit at the U.S. Treasury, will be more than sufficient to fulfill its back-up role in protecting against the loss of customer property.
Results in Brief
The regulatory framework within which SIPC operates has thus far been successful in protecting customers while at the same time limiting SIPC's losses. However, complacency regarding SIPC's continuing ability to be successful is not warranted because securities markets have grown more complex and the SIPC liquidation of a large firm could be very disruptive to the financial system. The central conclusion of this report — that SIPC's funding requirements and market stability depend on the quality of regulatory oversight of the industry — underscores the need for SEC and self-regulatory organizations to be diligent in their oversight of the industry and their enforcement of the net capital and customer protection rules.
No objective basis exists for setting the right level for SIPC reserves, but GAO believes that efforts to plan for the SIPC fund's future needs by increasing SIPC's reserves represent a responsible approach to dealing with the fund's potential exposure. However, in view of the industry's dynamic nature, SIPC and SEC must make periodic assessments of the fund to adjust funding plans to changing SIPC needs. In particular, measures to strengthen the fund must be taken immediately if there is evidence that the customer protection and net capital rules are losing effectiveness.
GAO's Analysis
Strong Enforcement Is the Key to Continued Success in Protecting Customers
To date, sIPc's role in providing back-up protection for customers' cash and securities has worked well. The securities industry has faced many difficult challenges since SIPc's inception, such as major volatility in the stock markets and numerous broker-dealer failures (including two of the largest securities firms within the past 3 years). Since 1971, more than 20,000 broker-dealers have failed or ceased operations, but sIPc has initiated liquidation proceedings for only 228 — about 1 percent — of these firms. (See p. 22.)
Page 4
GAO/GGD-92-109 Securities Investor Protection
Executive Summary
Most firms involved in si c liquidations failed due to fraudulent activities. Within the last 5 years, 26 of 39 siPc liquidations have involved failures due to fraud by firms that were acting as intermediaries between customers and firms authorized to hold customer accounts. Most firms that cease operations do not require a si c liquidation because they do not carry customer accounts, customer accounts are fully protected, or they and/or the regulators have made alternative arrangements to protect the customer accounts. (See pp. 29-31.) In the future, SIPc losses can remain modest if sEc and self-regulatory organizations continue to successfully oversee the securities industry. But complacency is not warranted, and securities markets could be significantly disrupted if the enforcement of the net capital and customer protection rules proved insufficient to prevent a siPc liquidation of a large securities firm. In that instance, customers of the firm could experience delays in obtaining access to their funds. In addition, the development of new products and the increasing risks associated with the activities of many of the larger securities firms pose special challenges to the
regulators. (See pp. 36-39.)
SIPC Has Addressed Its Funding Needs
There is no scientific basis for determining what SIPc's level of funding should be because the greatest risk the fund faces — a breakdown of the effectiveness of the net capital and customer protection rules — cannot be foreseen. However, given the growing complexity and riskiness of securities markets, GAO believes that SIPc officials have acted responsibly in adopting a financial plan that would increase fund reserves to $1 billion by 1997. While GAo cannot conclude that this level of funding will be adequate, $1 billion should be more than sufficient to deal with cases of fraud at smaUer firms, and it probably can finance the liquidation of one of the largest securities firms. The $1 billion fund may not, however, be sufficient to finance worst-case situations such as massive fraud at a major firm or the unlikely simultaneous failures of several of the largest broker-dealers. Periodic su c and sEc assessments must account for factors such as the size of the largest broker-dealer and any signs that regulatory enforcement of the net capital or customer protection rules has deteriorated. (See pp. 40-46.)
Improve SIPC Preparation for Liquidating a Large
Firm
sIPc liquidations may involve delays and can expose customers to declines in the market value of their securities. To minimize delays, in the early 1980s a SIPc task force and sEc recommended that sIPc prepare for
Page I
GAD/GGD-92-109 Securities Investor Protection
potential liquidations of large firms. However,sIPc continues to make only limited preparations for the potential liquidations of large troubled firms. SIPC believes it is unlikely it will ever be called on to liquidate a large firm and cites its record of success as demonstrating its ability to liquidate any firm, (See pp. 54-57.) GAQ has no reason to question the way SIPc has conducted liquidations. However, those liquidations have all been of relatively small firms. GAQ is concerned that lack of preparation and planning may limit siPc's ability to ensure the prompt return of customer property in the event it was called on to liquidate a large, complex firm. SIPc could have been better prepared to conduct the liquidation of a large firm that could have become a liquidation in 1989.In addition, SIPc has not analyzed automation options and may be limited in its ability to ensure that the trustee of a major liquidation would be able to acquire a timely and cost-effective automation system. Working with sEc, sII c should improve its capabilities in these areas. (See pp. 57-61.)
Improve Disclosure to Customers
siPc-member broker-dealers are required to display a sIPcsymbol to notify their customers that they are SIPc members. They are also encouraged to provide customers with a brochure that explains SIPc protection. GAo believes that this brochure could be modified to clarify areas of confusion that have been raised by customers — for example, that customers of firms that fail or go out of business have only 6 months to file a claim. (See pp.
65-67.)
However, the greatest opportunity for customer confusion arises from SEc-registered firms that act as intermediaries in the purchase and sale of securities products to customers. These firms include some SIPc-exempt broker-dealers and certain types of investment advisory firms. These firms may have temporary access to customer property but are not required to disclose that they are not SIPc members. Some customers have purchased securities from nonmember intermediaries that were affiliated or associated with SIPc firms and were not protected by SIPc when the intermediary firm failed. Customers of these intermediary firms risk loss of their property by fraud and mismanagement. GAobelieves that customers should receive information on the sII c status ofSEc-registered intermediary firms that have access to customer funds and securities so that they can make informed investment decisions. (See pp. 67-72.)
Page 8
GAO/GGD-92-109 Securities Investor Protection
Iaeeltlve Sumnuuy
Recommendations
The chairmen of SIPc and SEc should periodically review the adequacy of sIPc's funding arrangements (see p. 58). The chairmen should also work with self-regulatory organizations to improve siPc's access to the information and automated systems necessary to carry out a liquidation of a large firm on as timely a basis as possible. In addition, the sEc Chairman should periodically review sIPc operations to ensure that si c liquidations are timely and cost effective (see p. 62). Finally, the chairmen of siPc and SEc, within their respective jurisdictions, should review and, as necessary, improve disclosure information and regulations to ensure that customers are adequately informed about the SIPC status of SEc-registered financial firms that serve as intermediaries in customer purchases of securities and have access to customer property
(see p. 72).
Agency Comments
SEC and SIPC provided written comments on a draft of this report (see apps. II and III). SEC and SIPC agreed with GAO's assessment of the condition of the SIPC fund and with GAO's recommendation for periodic evaluation of the fund's adequacy. SEC also agreed with GAO's recommendations to improve its oversight of SIPC's operations and to consider some expansion of SEC disclosure regulations. SIPC agreed with GAO's recommendation to improve SIPC disclosures to customers. SEC and SIPC did not believe that problems exist in obtaining information or acquiring automated liquidation systems, but they agreed to review their policies and consider GAO's recommendations in these areas.
Contents

Executive Summary

Chapter 1: Introduction
  Background
  SIPC Reserves Are Increasing
  Objectives, Scope, and Methodology
  Agency Comments

Chapter 2: The Regulatory Framework Is Critical to Minimizing SIPC's Exposure
  Few SIPC Liquidations Needed to Protect Customers
  How the Regulators Have Protected Customers While Minimizing SIPC's Exposure to Losses
  Most SIPC-Liquidated Firms Failed Due to Fraud
  SIPA Liquidation Procedures Involve Delays
  A Major SIPC Liquidation Could Damage Public Confidence
  Conclusions

Chapter 3: SIPC's Responsible Approach for Meeting Future Financial Demands
  SIPC Funding Needs Are Tied to the Risk of a Breakdown in the Regulatory System
  SIPC's Plan Seems Reasonable to Fulfill Back-Up Role
  If SIPC's Funding Needs Increase, Assessment Burden Issues Could Arise
  Alternatives or Supplements to SIPC's Financial Structure
  Conclusions
  Recommendations
  Agency Comments and Our Evaluation

Chapter 4: SIPC Can Better Prepare for Potential Liquidations
  SIPC Has Not Made Special Preparations for Liquidating a Large Firm
  Measures to Enhance SIPC's Ability to Liquidate a Large Firm on a Timely Basis
  More Effective Oversight by SEC Is Needed
  Conclusions
  Recommendations
  Agency Comments and Our Evaluation

Chapter 5: Discrepancies in Disclosing Customer Protections
  Disclosure Requirements for SIPC Members
  Differences in Customer Disclosure Need to Be Addressed
  SEC Should Address Differences in Customer Protection
  Conclusions
  Recommendations
  Agency Comments and Our Evaluation

Appendixes
  Appendix I: SEC Customer Protection and Net Capital Rules
  Appendix II: Comments From the Securities Investor Protection Corporation
  Appendix III: Comments From the Securities and Exchange Commission
  Appendix IV: Major Contributors to This Report

Tables
  Table 1.1: SIPC's Cumulative Expenses for the Years 1971-1991
  Table 2.1: SIPC Membership and Liquidations, 1971-1991
  Table 2.2: Most Expensive SIPC Liquidations as of December 31, 1991
  Table 2.3: SIPA Liquidation Proceedings
  Table 2.4: SIPC Bulk Transfers, 1978-1991
  Table 2.5: Major Securities Firms and Largest SIPC Liquidations
  Table 3.1: Most Expensive SIPC Liquidations as of December 31, 1991
  Table 3.2: Largest SIPC Liquidations as Measured by Customer Claims Paid as of December 31, 1991
  Table 3.3: History of SIPC Assessment Rates
  Table 3.4: SIPC Assessments, Industry Income, and Revenue, 1983-1991
  Table 4.1: Operational Information Recommended by a 1985 SIPC-SEC Committee to Help Ensure the Timely Liquidation of a Large Broker-Dealer
  Table I.1: Credits Component of the Reserve Formula Calculation
  Table I.2: Debits Component of the Reserve Formula Calculation
  Table I.3: Reserve Calculation
  Table I.4: Alternative Net Capital Calculation

Figures
  Figure 1.1: SIPC Accrued Fund Balance, 1971-1991
  Figure 3.1: SIPC Revenue and Expenses, 1971-1991
  Figure 3.2: SIPC Assessments as a Percentage of Securities Industry Pretax Income, 1983-1991
Abbreviations

DTC    Depository Trust Corporation
FDIC   Federal Deposit Insurance Corporation
FDR    Fitzgerald, DeArman, and Roberts, Inc.
FOCUS  Financial and Operational Combined Uniform Single Report
NASD   National Association of Securities Dealers, Inc.
NYSE   New York Stock Exchange
OCC    Options Clearing Corporation
SEC    Securities and Exchange Commission
SIPA   Securities Investor Protection Act
SIPC   Securities Investor Protection Corporation
SRO    self-regulatory organization
WBP    Waddell Benefit Plans, Inc.
Chapter 1
Introduction
This report was prepared in response to requests from the chairmen of the Senate Banking Committee and the House Energy and Commerce Subcommittee on Oversight and Investigations that we review the effectiveness of the Securities Investor Protection Corporation (SIPC). SIPC, a private, nonprofit membership corporation established by Congress in 1970, provides certain financial protections to the customers of failed broker-dealers. As requested, this report assesses several issues, including the exposure and adequacy of the member-financed SIPC fund, the effectiveness of SIPC's liquidation efforts, and the disclosure of SIPC protections to customers.
Background
The Securities Investor Protection Act of 1970 (SIPA), which created SIPc, was passed to address a specific issue within the securities industry: how to ensure that customers recover cash and securities from broker-dealers that fail or cease operations and cannot meet their obligations to customers. To address this issue, SIPA authorized the Securities and Exchange Commission (sEc) to promulgate financial responsibility rules designed to strengthen broker-dealer operations and minimize slpc s exposure. The rules require broker-dealers to (1) maintain sufficient liquid assets to satisfy customer and creditor claims and (2) safeguard customer cash and securities. sIPC serves a back-up role and is generally called upon to compensate the customers of firms that fail due to fraud and cannot meet their obligations to customers.' When a troubled firm cannot fulfill its obligations to customers, SIPc initiates liquidation proceedings in federal district court. The court appoints an independent trustee or, in certain small cases,sipc itself to liquidate the firm if the court agrees that customers face losses. After the case is moved to federal bankruptcy court, siPc oversees the liquidation proceedings, advises the trustee, and advances payments from its fund if needed to protect customers. Customers of a firm in liquidation receive all securities registered in their name and a pro rata share of the firm's remaining customer cash and securities. Customers with remaining claims for securities and cash may each receive up to $500,000 from the sIPcfund, of which no more than $100,000 can be used to protect claims for cash. siPC coverage applies to most securities — notes, stocks, bonds, debentures, certificates of deposit, and options on securities — and cash deposited to purchase securities. However, siPc coverage does not include,
The regulators require operating firms to maintain blanket fidelity bonds to protect customers against the fraudulent misappropriation of their property.
GAO/GGD-92-109 Securities Investor Protection
Chapter 1 Introduction
among other things, any unregistered investment contracts, currency, commodity or related contracts, or futures contracts. Congress enacted SIPA in response to what is often referred to as the securities industry's "back-office crisis" of the late 1960s, which was brought on by unexpectedly high trading volume. This crisis was followed by a sharp decline in stock prices, which resulted in hundreds of broker-dealers merging, failing, or going out of business. During that period, some firms used customer property for proprietary activities, and procedures broke down for the possession, custody, location, and delivery of securities belonging to customers. The breakdown resulted in customer losses exceeding $100 million because failed firms did not have their customers' property on hand. The industry attempted to compensate customers through voluntary trust funds financed by assessments on broker-dealers. However, industry officials, SEC, and Congress subsequently agreed that the trust funds were inadequate and that an alternative, SIPC, was needed to better protect customers and maintain public confidence.
SIPC's Structure and Membership
SIPA defines SIPC's structure and identifies the types of broker-dealers that are required to be SIPC members. Under SIPA, SIPC has a board of seven directors that includes government and industry representatives and determines policies and oversees operations. Among other duties, the board has the obligation to examine the condition of the SIPC fund and ensure that it has sufficient money to meet anticipated liquidation expenses. SIPC has one office located in Washington, D.C., and employs 32 staff members. SIPC spent about $5.1 million in 1991 to pay salaries, travel, and other operating expenses. SIPA authorizes SEC to oversee SIPC and ensure that SIPC fulfills its responsibilities under the act. For example, SIPC must submit all proposed rules to SEC for review and approval. SEC's oversight responsibilities for SIPC are generally similar to its oversight responsibilities for the self-regulatory organizations (SROs): the national exchanges, such as the New York Stock Exchange (NYSE), and the National Association of
The trust funds failed for the following reasons: (1) the size of the funds was inadequate, (2) the exchanges disbursed money from the funds on a voluntary basis, and (3) the funds did not protect customers of firms that were not members of the exchanges.
The president of the United States appoints five of the directors, subject to Senate approval. Two of these appointees, the chairman and the vice-chairman, must be from the general public; the other three represent the securities industry. The secretary of the Treasury and the Federal Reserve Board appoint officers of their respective organizations to serve as the sixth and seventh directors.
Securities Dealers, Inc. (NASD). SROs, whose boards are elected by their members, are private corporations that examine broker-dealers, monitor their compliance with the securities laws and regulations, and, along with SEC, notify SIPC when a broker-dealer experiences financial problems. With certain exceptions discussed below, all firms registered as broker-dealers under section 15(b) of the Securities Exchange Act of 1934 are required to become SIPC members regardless of whether they hold customer accounts or property. As of December 31, 1991, SIPC had 8,153 members. Of this total, only 954 (12 percent) were carrying firms that had met the SEC requirements for holding customer property or accounts. The other 7,199 SIPC members (88 percent) were either (1) introducing firms, which serve as agents between the customers and the carrying firms and handle customer property for limited periods, or (2) firms that trade solely for their own accounts on the national securities exchanges. Data were not available to determine the total amount of customer property that is protected by SIPC. SEC does not routinely collect data on the amount of fully paid customer securities held by broker-dealers, which would make up the bulk of SIPC's potential exposure. However, SEC and SIPC officials estimated that broker-dealers hold over $1 trillion of SIPC-protected customer property, based on data from the 20 largest broker-dealers. SIPA excludes broker-dealers whose principal business, as determined by SIPC subject to SEC review, is conducted outside the United States, its possessions, and territories. A SIPC official said that SIPC reviews applications for exclusion on a case-by-case basis.
Moreover, SIPA excludes broker-dealers whose business consists exclusively of (1) distributing shares of registered open-end investment companies (mutual funds) or unit investment trusts, (2) selling variable annuities, (3) providing insurance, or (4) rendering investment advisory services to registered investment companies or insurance company separate accounts.
These carrying firms also include clearing firms that hold customer property for a limited period solely to settle trades.

For example, the introducing firm may send a customer's check to the clearing firm as payment for executing a trade.

SEC officials stated that information on the amount of SIPC-protected customer property is not collected for several reasons: (1) the value of customer securities is marked to market and changes continuously; (2) gathering this information would be expensive and require significant computer capability, which would be especially difficult for small firms; and (3) SEC has not needed the data for regulatory purposes.
SIPC Has Back-Up Customer Protection Role
Congress established SIPC as one part of a broader regulatory framework to protect the customers of U.S. broker-dealers. Congress also required SEC to issue financial responsibility rules designed to improve the operations of broker-dealers and prevent the types of abuses that occurred during the 1960s back-office crisis. The two key financial responsibility rules are the customer protection rule and the net capital rule. In 1972, SEC issued a customer protection rule (rule 15c3-3) that requires firms to safeguard customer cash and securities and forbids their use in proprietary activities. In 1975, SEC strengthened its net capital rule (rule 15c3-1). The net capital rule requires firms to have sufficient liquid assets on hand to satisfy liabilities, including customer and creditor claims. SROs and SEC are responsible for monitoring broker-dealer compliance with the customer protection and net capital rules and for closely monitoring the activities of financially troubled firms. Generally, the regulators are able to arrange the transfer of all customer accounts at troubled firms to other firms or to return customer property directly to customers if the troubled firms are in compliance with the SEC rules. A SIPC liquidation becomes necessary if customer cash and securities are missing or if the SRO determines that there is not enough money to self-liquidate.
SIPC Protections Differ From Deposit Insurance Protections
SIPC's protections differ fundamentally from federal deposit insurance protections for bank and thrift depositors, which are administered by the Federal Deposit Insurance Corporation (FDIC). SIPC does not protect investors from declines in the market value of their securities. The major risk that SIPC faces, therefore, is that broker-dealers will lose or steal customer cash and securities and violate the customer protection or net capital rules. By contrast, FDIC protects the par value of deposits and accrued interest payments up to $100,000. Suppose that a customer purchased one share of XYZ Corporation for $100 through a broker-dealer, and the firm held the security. The market value of the share then declined to $50. If the broker-dealer failed and the share was missing, SIPC would advance $50 so that the trustee could purchase one share of XYZ Corporation. SIPC would not protect the customer against the share's $50 market loss. By contrast, FDIC would pay an individual with
The other deposit insurer is the National Credit Union Administration, which protects the customers of credit unions.

Customers receive similar protection from both FDIC and SIPC for cash claims of up to $100,000.
a $100 deposit the full $100 if the bank failed, even if the assets of the bank were worth 50 percent of their book value.

Another difference is that SEC's customer protection rule prevents broker-dealers from using customers' securities and funds for proprietary purposes. By contrast, the essence of banking is that banks use insured deposits to make loans and other investments. Consequently, by guaranteeing the par value of deposits, FDIC protects depositors not only against the disappearance of deposits due to bookkeeping errors or fraud but also against bad investment decisions by such banks. It is much riskier for the government to protect depositors against the consequences of bad investments, as FDIC does, than only against missing property, as SIPC does.

There is also a difference in the amount of customer property that is protected. SIPC protects customer losses of up to $500,000 after all customer funds and securities have been distributed on a pro rata basis from the failed firm's separate account that includes all customer property. This means that a customer with a claim for $5 million of stock who received $4.5 million of that stock from the pro rata distribution would then receive an additional $500,000 worth of securities from SIPC. Creditors of the failed securities firm cannot claim assets from the firm's customer property account. By contrast, bank depositors are assured of recovering their deposits only up to the $100,000 limit; if they had any deposits exceeding $100,000, in many cases they are required to join all other creditors for a pro rata share of the remaining failed bank assets.

Finally, SIPC and FDIC protections differ in that the customers of broker-dealers liquidated by SIPC trustees are likely to wait longer to receive compensation than are insured bank depositors. Under SIPA, customers frequently must file claims with the trustee before receiving their property.
Although trustees and SIPC have the authority to arrange bulk transfers of customer accounts to acquiring firms to speed up the process, such transfers are not always possible if the firm failed due to fraud, if it kept inaccurate books and records, or if its accounts were of poor quality. Moreover, a bulk transfer can take weeks or longer to arrange. In contrast, FDIC frequently transfers the insured deposits of failed banks to other banks over a weekend.
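As a rough illustration of the limits discussed above (up to $500,000 per customer, of which no more than $100,000 may cover cash claims), the advance can be sketched as a small calculation. The function name, inputs, and simplifications here are our own illustrative assumptions, not SIPC's actual claims methodology:

```python
SIPA_LIMIT = 500_000      # overall per-customer advance limit
CASH_SUBLIMIT = 100_000   # portion of the limit usable for cash claims

def sipc_advance(securities_shortfall, cash_shortfall):
    """Illustrative SIPC advance after the pro rata distribution.

    Inputs are the dollar shortfalls remaining once the customer has
    received his or her pro rata share of the firm's customer property.
    """
    cash_advance = min(cash_shortfall, CASH_SUBLIMIT)
    securities_advance = min(securities_shortfall, SIPA_LIMIT - cash_advance)
    return cash_advance + securities_advance

# The $5 million stock claim example: $4.5 million recovered pro rata
# leaves a $500,000 shortfall, which the advance covers in full.
print(sipc_advance(securities_shortfall=500_000, cash_shortfall=0))
```

A customer with only a cash shortfall, by contrast, could receive at most $100,000 under this sketch, however large the shortfall.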
SIPC Reserves Are Increasing
Between 1971 and 1991, SIPC initiated liquidation proceedings against 228 failed firms. As of December 31, 1991, SIPC trustees had completed 183 of the 228 liquidation proceedings. The 183 completed liquidations had an
average of about 930 customer accounts and cost SIPC about $500,000 per liquidation in customer protection and administrative expenses. At year-end 1991, the other 45 liquidation proceedings remained open because trustees were still processing claims or litigating matters, such as civil actions against former firm officials. As of December 31, 1991, SIPC's cumulative operational expenses totaled $63 million, and liquidation expenses for closed cases and open proceedings totaled $236 million. Of the $236 million, SIPC used $175 million to satisfy customer claims for missing cash and securities and $61 million to pay administrative costs, such as trustees' fees and litigation expenses. (See table 1.1.)
Table 1.1: SIPC's Cumulative Expenses for the Years 1971-1991

Type of expense               Total expense
SIPC operations                $62,575,788
Liquidation expenses:
  Administrative costs          61,032,655
  Customer claims              174,834,104
Total                         $298,442,547

Source: SIPC.
To acquire the cash necessary to pay liquidation expenses and maintain a reserve fund, SIPC levies assessments on the revenues of member firms and also earns interest on the invested fund balance. When SIPC was first established, the assessment was 0.5 percent of each member firm's gross revenues from the securities business. Rates fluctuated from that time depending on the level of expenses, and for several years the assessment was nominal. Following the stock market crash of 1987, the SIPC board decided to increase the assessment rate to 0.19 percent of gross revenues. In 1990, SIPC assessments amounted to $73 million, based upon industry gross revenues of $39 billion. Because of the assessment increases, interest income, and low liquidation expenses, SIPC's accrued fund balance has increased significantly in recent
Litigation matters were still pending in 37 of the 45 open cases. In those 37 cases, the trustees had already satisfied all customer claims.
Gross revenues, as specified in SIPA, include fees and other income from various categories of the securities business but do not include revenues received by a broker-dealer in connection with the distribution of shares of a registered open-end investment company or unit investment trust, from the sale of variable annuities, or from insurance business. In 1990, gross revenues were about 64 percent of total industry revenues.
years (see fig. 1.1). As of December 31, 1991, the accrued balance of SIPC's fund was $653 million, its highest level since SIPC's inception and an 87-percent increase over the fund balance at year-end 1987. SIPC also maintained a $500 million line of credit with a consortium of U.S. banks at year-end 1991. In addition, SIPC has the authority to borrow, through SEC, up to $1 billion from the U.S. Treasury.
Figure 1.1: SIPC Accrued Fund Balance, 1971-1991

[Line graph showing the accrued fund balance, in millions of dollars, for the years 1971 through 1991; graph not reproduced.]

Source: SIPC.
In 1991, the SIPC board reviewed the adequacy of the fund size and bank borrowing authority in light of potential liquidation expenses. Based on the review, the board decided to build the fund at a 10-percent annual rate with a goal of $1 billion by 1997. To accomplish this goal, the board set the assessment rate at 0.065 percent of each firm's net operating revenues; this action resulted in assessment revenue in 1991 of $39 million, a $34 million
The SIPC fund, as defined by SIPA, consists of cash and amounts invested in U.S. government or agency securities, while the accrued fund balance represents SIPC's assets minus funds needed to complete ongoing liquidations.
reduction from the amount collected in the previous year. In 1991, the fund increased by $47 million due to interest revenue. The board also decided to raise SIPC's bank line of credit to $1 billion beginning in 1992. Over the next 4 years, $250 million of credit will come due annually and may be renewed. The line of credit was arranged with a consortium of banks and cannot be canceled by the banks, but the banks could decline to renew as each portion of the line comes up for renewal.
Objectives, Scope, and Methodology
We received separate requests from the chairmen of the Senate Banking Committee and the House Energy and Commerce Subcommittee on Oversight and Investigations to assess several issues, including (1) the exposure and adequacy of the SIPC fund, (2) the feasibility of supplemental funding mechanisms such as private insurance, (3) the effectiveness of SIPC's liquidation efforts, and (4) the disclosure of SIPC protections to customers. We were also asked to determine whether SIPC needs the authority to examine the books and records of its members and to take enforcement actions.

To gain a basic understanding of how SIPC and the securities regulatory framework protect customers, we reviewed SIPA and its legislative history, SEC's net capital and customer protection rules, and SIPC bylaws and internal documents. We also reviewed our previous reports on the securities industry. During our review, we determined that no quantifiable measure exists to assess the exposure of the SIPC fund and the adequacy of its reserves (such as the ratio of reserves to insured deposits, which FDIC uses to assess the exposure of the Bank Insurance Fund). As a result, we based our conclusions about the SIPC fund's ability to protect customers and maintain public confidence in the markets on such factors as SIPC's past expenses, current trends in the securities industry, the regulators' enforcement of the net capital and customer protection rules, and SIPC's policies and procedures. We reviewed the principal studies used by the SIPC board in making its judgments: a report prepared by the Deloitte and Touche accounting firm and a report on SIPC's assessment policies prepared for SIPC by a task force
Net operating revenue-based assessments allow broker-dealers to deduct all interest expense from securities business revenue. Broker-dealers also have the option of continuing to deduct 40 percent of interest revenue from margin accounts.
of regulatory and industry officials. We did not independently duplicate the methodology of these studies, but we assessed the reasonableness of the studies and the board's decisions in light of the risk characteristics of the industry, the history of SIPC liquidations, the effectiveness of the regulatory structure, and recent developments within the industry. We discussed the reports and SIPC fund issues with senior SIPC officials; SEC officials in the Division of Market Regulation and the New York Regional Office; officials at NYSE and the NASD; officials at the Federal Reserve Board and the Federal Reserve Bank of New York; and an official at the Department of the Treasury. We also interviewed the individuals who wrote the Deloitte and Touche report and industry representatives to ascertain their views on the adequacy of the SIPC fund.

We did not conduct a comprehensive review of the efficiency of SIPC's liquidation proceedings; rather, we focused on SIPC's preparations for liquidations that could affect the timeliness of customers' access to their accounts. We also looked at SEC's oversight efforts and reviewed a 1986 SEC letter to the SIPC chairman reporting on SEC's review of SIPC's operations, which is the only written evaluation SEC has issued on SIPC's operations. We discussed SIPC's annual financial audits with its independent auditor, Ernst and Young. We also contacted the trustees of four large SIPC liquidations (as measured by SIPC expenditures and number of customer claims paid). We interviewed the trustees of the two most expensive liquidations to date, Bell and Beckwith and Bevill, Bresler & Schulman, Inc. In addition, we interviewed the trustee who liquidated the largest firm, Blinder Robinson and Co., Inc. (as measured by the number of customer claims paid, 61,000), and contacted the trustee who liquidated Fitzgerald, DeArman, and Roberts, Inc. (FDR) in the largest bulk transfer to date (80,000 accounts).
Moreover, we discussed with senior SIPC officials their efforts to prepare for the liquidations of two large firms that could have become SIPC liquidations: Thomson McKinnon Securities Inc. and Drexel Burnham Lambert Incorporated. We reviewed SIPC bylaws and SEC regulations to determine the requirements for SEC-registered firms to disclose their SIPC status. We also reviewed SIPC and SEC customer correspondence and litigation relating to customer protection issues to assess customer concerns in this area. We did our work between May 1991 and May 1992 in accordance with generally accepted government auditing standards.
See The Securities Investor Protection Corporation: Special Study of the SIPC Fund and Funding Requirements, Deloitte and Touche, and Report and Recommendations of the Task Force on Assessments, presented to the SIPC Board of Directors.
Agency Comments
SIPC and SEC provided written comments on a draft of this report. Relevant portions of their comments are presented and evaluated at the end of chapters 3 through 5. The comments are reprinted in their entirety as appendixes II and III. They also provided technical comments on the draft, which were incorporated as appropriate.
Chapter 2
The Regulatory Framework Is Critical to Minimizing SIPC's Exposure
As we pointed out in chapter 1, the regulatory framework, including the net capital and customer protection rules, serves as the primary means of customer protection, while SIPC serves in a back-up role. Since Congress passed SIPA in 1970, the regulatory framework has successfully limited the number of firms that have become SIPC liquidations. The firms that SIPC has liquidated failed primarily because their owners committed fraud and misappropriated customer cash and securities. Given the relative success of the regulatory framework, which relies largely on SEC and the SROs to prevent SIPC liquidations, we do not believe that SIPC needs the authority to examine its members. However, SEC and the SROs must continue to enforce existing rules to ensure that SIPC can fulfill its back-up role and maintain public confidence in the securities industry. The regulators' ability to protect SIPC in the future could prove challenging due to the continued consolidation of the industry and increased risk-taking by major firms.
Few SIPC Liquidations Needed to Protect Customers
The U.S. securities industry consists of thousands of broker-dealers, many of which are small and not allowed to hold customer property. The regulatory framework and the restrictions on the holding of customer property ensure that hundreds of broker-dealers can fail or cease doing business each year without becoming SIPC liquidations. As table 2.1 indicates, 20,344 SIPC members went out of business or failed between 1971 and 1991, but only 228 (about 1 percent) became SIPC liquidations. Moreover, the number of SIPC liquidations begun annually has declined since the early 1970s. Between 1971 and 1973, SIPC initiated an average of 31 liquidations a year. Since 1976, SIPC has initiated an average of seven liquidations a year.
Table 2.1: SIPC Membership and Liquidations, 1971-1991

Year    SIPC members    Non-SIPC terminations(a)    SIPC liquidations
1971        3,994              (b)                        24
1972        3,756              669                        40
1973        3,974              622                        30
1974        4,238              551                        15
1975        4,372              631                        10
1976        5,168              219                        12
1977        5,412              637                         4
1978        5,670              663                         6
1979        5,985              637                         …
1980        6,469              635                         …
1981        7,176              741                         …
1982        8,082              706                         …
1983        9,260              666                         …
1984       10,338            1,176                         …
1985       11,004            1,059                         …
1986       11,305            1,354                         …
1987       12,076            1,033                         …
1988       12,022            1,430                         …
1989       11,284            1,791                         …
1990        9,958            2,279                         …
1991        8,153            2,845                         …
Total                       20,344                        228

(a) Number of terminations listed in SIPC's annual reports minus the number of SIPC liquidations.
(b) SIPC did not report on membership terminations in 1971.

Source: SIPC annual reports, 1971-1991.
Many of the 20,344 firms that went out of business without SIPC involvement were introducing firms or firms that trade solely for their own accounts on national securities exchanges and do not hold customer property. In the absence of fraud, introducing firms can fail, disband, or cease doing business without becoming SIPC liquidations. However, SIPC protection is extended to these firms because fraudulent activities, such as theft of money or securities, could result in customer losses. The partners of a small firm who trade solely for their own account may decide to sell the firm's proprietary securities and cease doing business. A SIPC official also said that SIPC's membership may fluctuate because individuals
tend to form broker-dealer firms during market upturns, as in the early to mid-1980s. Many firms may later cease doing business or fail when market downturns occur, as happened after 1987. According to a SIPC official, the SIPC liquidation caseload peaked in the early 1970s because many firms still suffered operational and financial problems associated with the "back-office crisis" discussed earlier. The number of SIPC liquidations has declined since 1976 as a result of the introduction of the customer protection rule, the strengthening of the net capital rule, and improved supervision by the regulators. Moreover, before financially troubled firms actually fail, the regulators frequently arrange for the transfer of their customer accounts to acquiring firms. For example, between 1980 and 1990, NYSE and SEC arranged account transfers for 21 of the 25 NYSE members that went out of business under financial duress and protected about 2.7 million customer accounts. SIPC liquidated the other four firms, which, combined, had about 112,500 customer accounts. In its 20-year history, SIPC has paid about 329,000 customer claims.
How the Regulators Have Protected Customers While Minimizing SIPC's Exposure to Losses
The customers of broker-dealers that fail or go out of business without becoming SIPC liquidations generally can continue trading in their accounts without any delays or disruptions if their accounts are transferred to other firms or if their property is returned. The regulatory foundations of this customer protection are the net capital and customer protection rules. The regulators routinely monitor broker-dealer compliance with the rules and place financially troubled firms under intensive supervisory scrutiny. The regulators may also arrange for the transfer of the accounts of troubled firms to acquiring firms via computer.
Net Capital Rule
The net capital rule requires each broker-dealer to maintain a minimum level of liquid capital sufficient to satisfy its liabilities: the claims of customers, creditors, and counterparties. Net capital is similar to equity capital in that it is based on an analysis of each broker-dealer's balance sheet assets and liabilities. Unlike equity capital, however, only liquid assets, such as cash, proprietary securities that are readily marketable, and receivables collateralized by readily marketable securities, can be counted in the net capital calculation. Assets that are not considered liquid include furniture, the value of exchange seats, and unsecured receivables.
The four firms were John Muir & Co., Bell and Beckwith, Hanover Square Securities Group, and H.B. Shaine & Co., Inc.
The proprietary securities that qualify for inclusion in the net capital calculation must be carried at their current market value. Even after securities positions are marked to reflect market value, the net capital rule offers further protection by requiring broker-dealers to deduct a certain percentage of the market value of all proprietary security positions from the capital of the firm. These deductions, or "haircuts," are intended to reflect the actual liquidity of the broker-dealers' proprietary securities by providing a cushion for possible future losses in liquidating the positions. For example, debt obligations of the U.S. government receive a haircut depending on their time to maturity: from a 0-percent haircut for obligations with less than 3 months to maturity to a 6-percent haircut for obligations with 25 years or more to maturity. Haircuts for more risky assets can be much higher.

SEC also allows broker-dealers to include subordinated liabilities that meet the rule's requirements in the net capital calculation. In order to count toward net capital, these subordinated liabilities must be subordinated to the claims of all present and future creditors, including customers, and must be approved for inclusion as net capital by the broker-dealer's SRO. The subordinated liabilities may not be repaid if the repayment would reduce the broker-dealer's net capital below a level specified by the rule, and the liabilities must have an initial term of 1 year or more.

The minimum amount of net capital required varies from broker-dealer to broker-dealer, depending upon the activities in which the firm engages. Because they hold customer property, carrying firms have higher minimum capital requirements than introducing firms. In addition, the regulators have established "early-warning" levels of net capital that exceed the minimum requirement.
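The haircut mechanics amount to a simple deduction from capital. A minimal sketch follows, encoding only the two government-obligation endpoints given in the text; the actual rule 15c3-1 schedule has many intermediate maturity bands, and the function names and the basis-point representation are our own assumptions:

```python
def govt_haircut_bps(years_to_maturity):
    """Haircut in basis points for a U.S. government obligation.

    Only the two endpoints stated in the text are encoded here:
    0 percent under 3 months to maturity, 6 percent at 25 years or more.
    """
    if years_to_maturity < 0.25:
        return 0
    if years_to_maturity >= 25:
        return 600
    raise ValueError("intermediate maturity bands are not specified here")

def haircut_deduction(market_value, bps):
    """Dollar amount deducted from net capital for one marked-to-market position."""
    return market_value * bps // 10_000

# A $1,000,000 position in a 30-year government obligation.
print(haircut_deduction(1_000_000, govt_haircut_bps(30)))
```

The deduction shrinks the firm's computed net capital, which is what makes the haircut a cushion against losses when positions are liquidated.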
As discussed below, the SROs notify SEC and place restrictions on firms whose capital falls to the early-warning levels. They also begin consultations with the ailing broker-dealer to formulate a recovery plan. Should the plan fail, the regulators may try to arrange a transfer of the customer accounts to one or more healthy broker-dealers. As soon as the net capital falls below the minimum level, the firm is closed. Closing a broker-dealer before insolvency either makes the firm a viable merger candidate (because there is residual value left in the firm) or allows the broker-dealer's customers to be fully compensated when the firm is liquidated.
Customer Protection Rule
The customer protection rule (rule 15c3-3) applies to carrying firms because they hold customer securities and cash. The rule requires the firms to have possession or control of customers' securities. As a result, the rule minimizes the need for SIPC liquidations because financially troubled firms can return customer property or send it to acquiring firms under the supervision of the regulators. The customer protection rule has two provisions. The first provision requires broker-dealers to maintain possession or control of customers' fully paid and excess-margin securities. This requirement prevents broker-dealers from using customer property to finance proprietary activities because fully paid and excess-margin securities must be in possession or control locations. The rule also forces the broker-dealer to maintain a system capable of tracking fully paid and excess-margin securities daily.
The second provision of the customer protection rule involves customer cash kept at broker-dealers for the purchase of securities. When customer cash (the amount the firm owes customers) exceeds the amount customers owe the firm, the broker-dealer must keep the difference in a special reserve bank account. The amount of the difference is calculated weekly using the reserve account formula specified in the customer protection rule. The rule assumes that all margin loans will be collected because they are collateralized by the securities in customer margin accounts. A sharp and sudden decline in the market value of this collateral would render the loans unsecured; hence, these loans are required to be overcollateralized.
The customer protection rule specifies the locations in which a security will be considered in the possession or control of the broker-dealer. These include securities that are held at a clearing corporation or depository, free of any lien; carried in a Special Omnibus Account under Federal Reserve Board Regulation T with instructions for segregation; a bona fide item of transfer for up to 40 days; in the custody of foreign banks or depositories approved by SEC; in a custodian bank; in transit between offices of the broker-dealer; or held by certain subsidiaries of the broker-dealer.
³Excess-margin securities in a customer account are margin securities with a market value in excess of 140 percent of the account debit balance (the amount the customer owes the firm). For example, assume that a firm has a customer account with 100,000 shares of stock and that each share has a $10 market value, for a total account value of $1,000,000. The customer pays for $900,000 worth of stock and purchases the remaining $100,000 worth on margin from the broker-dealer. Applying the 140 percent to the $100,000 owed by the customer results in $140,000 worth of margin securities that the broker-dealer can use as collateral on the original $100,000 loan. To calculate the excess-margin securities in the account, subtract $140,000 from the market value of $1,000,000. The broker-dealer must have $860,000 worth of excess-margin securities in its possession or control.
⁴See appendix 1 for a more detailed explanation of the reserve formula and the customer protection rule.
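The two provisions above reduce to simple arithmetic. As an illustrative sketch only — not the full Rule 15c3-3 computation, which involves many more formula line items than the simple difference described here — footnote 3's excess-margin figure and the basic reserve difference can be computed as follows (the function names are ours, not the rule's):

```python
def excess_margin_securities(market_value, debit_balance):
    """Market value of margin securities beyond 140 percent of the
    customer's debit balance (the amount the customer owes the firm)."""
    return market_value - 1.40 * debit_balance

def reserve_difference(customer_credits, customer_debits):
    """Simplified reserve difference: cash the firm owes customers
    minus amounts customers owe the firm (zero if debits are larger)."""
    return max(0.0, customer_credits - customer_debits)

# Footnote 3 example: $1,000,000 account value, $100,000 bought on margin.
print(excess_margin_securities(1_000_000, 100_000))  # 860000.0
```

On the footnote's numbers, the sketch reproduces the $860,000 figure: the firm retains $140,000 (140 percent of the $100,000 loan) as collateral and must hold the rest in its possession or control.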
Page 26
GAO/GGD-92-109 Securities Investor Protection
Chapter 2 The Regulatory Framework Is Critical to Minimizing SIPC's Exposure
Broker-dealers are subject to initial margin account requirements set by Federal Reserve Regulation T and SRO regulations that must be met before a customer may effect new securities transactions and commitments. In addition, maintenance margin requirements are set by the SROs and broker-dealers. The requirements specify how much equity each customer must have in an account when securities are purchased and how much equity must be maintained in that account. For example, the NYSE requirement for securities held long (owned by a customer but held by a brokerage firm) in a margin account is currently set at 25 percent of the current market value of the securities in the account.

With these customer protection rules in place and properly enforced, customers are assured that their cash — up to the $100,000 SIPA limit — is readily available and can be quickly returned. These rules also facilitate the unwinding of a failed firm through a self-liquidation, with oversight by the regulators, without the need for SIPC's involvement.

While the customer protection rule significantly limits SIPC's exposure, it does not completely eliminate the exposure. The rule includes provisions that are intended to minimize the compliance burden yet could potentially result in SIPC losses. For example, broker-dealers are required to make the cash reserve deposit calculation only once a week, on Friday, and to make the actual bank deposit the following Tuesday. Therefore, if a firm received large customer cash deposits on a Wednesday and became a SIPC liquidation on Thursday, it might not have sufficient cash in the reserve bank account to pay customer claims. SIPC might have to reimburse the customers for the cash deposits if the deposits could not be recovered from the firm's estate. Also, a broker-dealer is considered to be in compliance with the rule and in control of customer securities when the securities are in transfer between branch offices.
A liquidation expert told us that this provision has been used by small, financially troubled broker-dealers to fraudulently disguise the fact that they do not have the required control over their customers' property.
Regulators Monitor Compliance With Rules on Routine Basis

SEC and SROs have established inspection schedules and procedures to routinely monitor broker-dealer compliance with the net capital and customer protection rules:
• The two largest SROs — NYSE and NASD — inspect their carrying members annually. During each exam, the examiners calculate the firm's net capital and assess the quality and accuracy of the automated systems it uses to
maintain possession and control of customer fully paid and excess-margin securities.⁵
• SEC annually examines about 6 percent of the broker-dealers that the SROs have previously examined to ensure broker-dealer compliance with the securities laws and to evaluate and provide feedback to the SROs on the quality of their examination programs. Once every 2 years, SEC also examines the 20 largest broker-dealers that carry customer accounts.
• SEC requires broker-dealers to notify the regulators when their capital falls to certain levels above the minimum requirement and again if it falls below the minimum requirement.
• SEC requires carrying firms to submit financial and operational data monthly and requires introducing firms to report quarterly. The financial data include a computation of each firm's net capital and the amount in its reserve bank deposit account.
• SEC requires each broker-dealer to have its financial statements audited each year and to file the audited statements with the regulators.

The regulators' policy is to place firms with financial or operational problems under more intensive supervisory scrutiny than that outlined above. Evidence of such financial or operational problems includes (1) net capital levels that decline to early warning levels or (2) consecutive monthly losses. When such problems are detected, the regulators may require the firm to provide daily financial statements and restrict its activities, such as its ability to increase its asset size. The regulators may also begin to solicit other firms to acquire the troubled firm's customer accounts. If the troubled firm continues to deteriorate, the regulators may arrange for the transfer of its customer accounts to an acquiring firm or firms.

The regulators' ongoing monitoring and supervisory efforts are critical to minimizing SIPC's potential exposure. Regulators told us that they pay especially close attention to financially troubled broker-dealers.
In an attempt to stay in business, financially troubled broker-dealers may be forced to alter their behavior in such a way as to increase SIPC's exposure if the firm fails and becomes a SIPC liquidation. For example, NYSE officials said that a financially troubled broker-dealer may be tempted to violate the customer protection rule by using fully paid customer securities as collateral in order to increase its short-term borrowings. This situation may arise if creditors have cut off their unsecured loans needed for liquidity purposes. If this broker-dealer does not recover and becomes a
⁵See Securities Regulation: Customer Protection Rule Oversight Procedures Appear Adequate (GAO/GGD-…, Nov. 1, 1991).
SIPC liquidation, SIPC may need to make advances to recover the customer property serving as collateral for these additional loans. To keep track of this sort of activity, SEC and the SROs frequently require troubled broker-dealers to report their daily bank and stock loan activity.
Most SIPC-Liquidated Firms Failed Due to Fraud
Although the regulatory framework has successfully protected millions of customers without the need for SIPC liquidations, SIPC has had to liquidate 228 firms. SIPC officials estimate that fraud — which can prove difficult for the regulators to detect — was involved in more than half of the 228 liquidations and accounted for about 81 percent of SIPC's $236 million in liquidation expenditures as of December 31, 1991. The fraudulent schemes have included not only the officials of carrying firms who illegally violated the customer protection and net capital rules but also officials at introducing firms who stole customer property that should have been sent to the carrying firms for the customers. Between 1986 and 1991, introducing firm failures accounted for 26 of SIPC's 39 liquidations. Other factors that have caused SIPC liquidations include poor management and market conditions.

Ordinarily, the regulators have time to transfer out a troubled firm's accounts because broker-dealer financial positions tend to deteriorate over a period of months or years. However, the regulators may not discover fraud until the principals of the firm have already depleted its capital or misappropriated customer assets. For example, in the most expensive liquidation — Bell and Beckwith (see table 2.2) — a senior firm official managed to "borrow" $32 million from the firm's margin accounts over a 5-year period without being detected by the regulators. As collateral for the loan, the official pledged stock in a Japanese corporation, which he valued at nearly $280 million; its real worth was approximately $5,000. When the fraud was discovered, SIPC initiated liquidation proceedings to protect customers.⁶
⁶The official spent time in federal prison, and the SIPC trustee, in conjunction with the Bevill, Bresler & Schulman, Inc., trustee, agreed to a $10 million settlement with the firm's auditors.
Table 2.2: Most Expensive SIPC Liquidations as of December 31, 1991

Firm: Bell & Beckwith
SIPC expenses: $31,722,352
Cause of failure: Firm official stole about $32 million from the firm by grossly inflating the value of collateral for the margin loan.

Firm: Bevill, Bresler & Schulman, Inc. (BBS)
SIPC expenses: $26,395,628
Cause of failure: BBS officials funded the losses of its affiliates. The losses continued to mount and resulted in failures of BBS and several affiliates.

Firm: Stix & Co., Inc.
SIPC expenses: $16,990,497
Cause of failure: Firm officials wrongfully diverted about $14 million from the firm by creating fictitious margin accounts. Officials used the funds to purchase real estate.

Firm: Joseph Sebag, Incorporated
SIPC expenses: $11,351,787
Cause of failure: Firm officials allegedly purchased shares without customers' permission and caused share prices to artificially increase. When share prices collapsed, Sebag failed because it had a substantial ownership position in the shares.

Firm: Government Securities Corp.
SIPC expenses: $8,109,953
Cause of failure: Firm officials allegedly set up fraudulent "managed accounts" for certain customers. Rather than executing trades, firm officials used customer funds for their own benefit.

Total SIPC expenses: $94,570,217

Source: SIPC.
Fraudulent sales practices may also increase financial and regulatory pressures on a firm and force it into a SIPC liquidation. For example, Blinder Robinson — the largest liquidation as measured by customer claims paid (61,000) — became a SIPC liquidation in July 1990 when its owner tried to put the penny stock firm⁷ into a federal bankruptcy proceeding without the knowledge of SEC and SIPC.⁸ At the time, Blinder Robinson was under serious regulatory and financial pressure because SEC had been investigating the firm's sales practices for almost a decade and a Denver businessman had won a substantial legal judgment against the firm. According to the SIPC trustee, Blinder Robinson's owner filed for bankruptcy so the firm could avoid its legal obligations. However, SIPC
⁷Penny stock firms specialize in selling the low-priced securities of highly speculative companies.
⁸NASD had first informed SIPC about Blinder Robinson's deteriorating position in August 1988.
filed liquidation proceedings against the firm because its customers were at risk, and the courts have agreed with SIPC. To date, fraud at a major broker-dealer involving the possession or control requirement or the cash reserve calculation has not adversely affected SIPC. While fraud and questionable management practices have contributed to the demise of major broker-dealers, such as E.F. Hutton and Drexel, the regulators have had time to arrange the transfer of customer accounts without the need for SIPC liquidations.⁹
Other Factors That Have Caused SIPC Liquidations
Poor management and market conditions may also cause firms to fail with minimal warning and become a SIPC liquidation. For example, H.B. Shaine and Company, Inc., failed during the October 1987 market crash because management did not properly oversee the firm's options department. Certain customers engaged in risky options trading, which proved profitable while the market increased during the mid-1980s. However, when the market plunged on October 19, 1987, the Options Clearing Corporation (OCC) issued very high margin calls to the firm. Shaine officials could not collect sufficient margin payments from their options customers, and the firm had insufficient capital to pay the margin calls, so it was closed and turned over to SIPC. The trustee anticipates that the Shaine liquidation ultimately will impose minimal costs on SIPC because the firm had most customer property on hand and the administrative expenses will be recovered from the firm's estate. Of the approximately 30 broker-dealer firms that failed as a result of the October 1987 stock market break, only Shaine required a SIPC liquidation.

A SIPC official also said that SIPC has initiated liquidation proceedings to protect the customers of firms no longer in business. When a SIPC member broker-dealer chooses to cease operations, it should file a form with SEC, and its withdrawal from registration becomes effective 60 days after the filing. SEC checks the form to see whether the firm owes any property to customers. If any amounts are owed, SEC asks the SROs to ensure that all customer property is returned. SEC should then notify SIPC of the firm's withdrawal date, which starts a 180-day countdown. During the next 180 days, SIPC must protect any customers who come forward with valid claims for cash or securities. Under SIPA, SIPC cannot initiate liquidation proceedings after the 180-day period has passed. SIPC correspondence files
⁹In E.F. Hutton's case, the firm merged with Shearson Lehman Brothers in 1988. In Drexel's case, the failure of the holding company due to fraud, the resulting settlement, and the concentration in high-yield securities impaired the broker-dealer's ability to trade and ultimately forced the broker-dealer into bankruptcy.
indicated that several customers have lost cash and securities because they filed claims after the 180-day deadline.
SIPA Liquidation Procedures Involve Delays
Customers benefit if the regulators can arrange to protect customer accounts without the need for SIPC liquidations because the customers generally do not lose access to their investments. However, if a SIPC liquidation becomes necessary, SIPC and the trustees must comply with SIPA procedures (see table 2.3), such as freezing all customer accounts. The period of time during which customers are denied access to their accounts depends upon whether the trustee pays claims account by account via the mail or arranges a bulk transfer of customer accounts to acquiring firms. According to SIPC officials, bulk transfers often permit customers to trade in their accounts within days or weeks of the liquidation's commencement, although the process can take longer. Payment of claims account by account can take months. For example, when FDR failed in 1988, the trustee used the bulk transfer authority to satisfy about 25,000 (80 percent) of 30,000 claims within 3 months of the liquidation's commencement. By contrast, the trustee of Blinder Robinson had to pay about 61,000 customer claims on an account-by-account basis. The trustee had paid out about half of the claims 6 months after the start of the liquidation, and the entire process took about a year.

When the liquidation process denies customers access to their accounts for extended periods, they can be exposed to declines in the market value of their securities. The market risks facing customers were exemplified by the failure of John Muir & Co. in August 1981. An NYSE member, Muir had approximately 16,000 customer accounts. While the SIPC trustee arranged the transfer of about 8,000 accounts within 10 days of the liquidation's commencement and another 4,700 accounts within 3 months, it took 7 months or more to satisfy the remaining accounts, primarily because of disputes over how much the customers owed Muir. The delay adversely affected many of the Muir customers, who were denied access to their accounts. For example, one customer who owned $500,000 worth of stock at the start of the Muir liquidation received shares worth about $350,000 from the trustee 14 months later.
Table 2.3: SIPA Liquidation Proceedings

Step: Regulators notify SIPC about troubled firm.
Overview: SEC and the SROs have the responsibility to examine SIPC members. Under SIPA 5(a), regulators must notify SIPC when a firm is in or approaching financial difficulty, such as substantially declining net capital levels.

Step: SIPC initiates liquidation proceedings.
Overview: SIPC may initiate liquidation proceedings in federal district court if customers are at risk.

Step: Court appoints trustee to liquidate firm.
Overview: If the court agrees with SIPC, it may appoint an independent trustee and counsel to liquidate the firm. Trustee may hire legal staff, and then the case is removed to federal bankruptcy court.

Step: Accounts are frozen and trustee completes "housekeeping" tasks.
Overview: Trustee secures firm offices and customer and creditor accounts, hires liquidation staff, locates customer property, and begins notification process.

Step: Customers file claim with trustee.
Overview: Customers have 6 months to file a claim. Trustee's staff and SIPC officials review claims to ensure accuracy. Customers can appeal trustee's decision on claims to bankruptcy judge.

Step: Trustee distributes customer property up to SIPA limits.
Overview: Trustee distributes customers' name securities and approves claims up to SIPA limits of $500,000 ($100,000 cash) per customer. SIPC makes advances to cover missing cash and securities.

Source: SIPC.
Payment of claims account by account can be time consuming because it is a labor-intensive process, particularly for large firms. For example, a SIPC official said the bulk transfer of a major firm's accounts may involve several employees, and an official involved in the Blinder Robinson liquidation said that paying claims account by account required 26 employees during the initial stages of the liquidation. After the staff and SIPC officials had reviewed and approved each customer claim, the staff had to send instructions to the Depository Trust Corporation (DTC) in New York, where Blinder kept most securities, to deliver the appropriate securities via the mail to the liquidation site in Englewood, CO. The staff opened the package from DTC to ensure that it contained the appropriate number of securities and that the securities were registered to their proper owners. Only then did the staff send the securities to the customers via registered mail.

Bulk transfers can expedite the payment process because customer accounts are transferred via computer to acquiring firms before the trustee reviews customer claim forms. However, trustees and SIPC arranged bulk transfers for only 18 of the 99 liquidations commenced between 1978 and
1991. (See table 2.4.) A SIPC official said the high incidence of fraud — more than 50 percent — among SIPC liquidations accounts for the low number of bulk transfers. In such cases, the trustee and SIPC staff cannot rely on the books and records of the firm, so they review each customer claim to ensure accuracy. Another reason for the low number of bulk transfers is that some failed broker-dealers specialized in securities (such as penny stocks) that qualified acquiring firms found unattractive. The Blinder Robinson trustee said he did not attempt a bulk transfer because (1) firms experienced in handling numerous customer accounts expressed no interest in Blinder's customer accounts, which primarily contained penny stocks, and (2) firms that did express interest lacked adequate financial and operational controls to accept the accounts without endangering their own survival.
Table 2.4: SIPC Bulk Transfers, 1978-1991

Firm / Filing date / Number of claims paid
Mr. Discount Stockbrokers, Inc. / 6/30/80 / 541
Gallagher, Boylan, & Cook, Inc. / 3/17/81 / 1,363
John Muir & Co. / 8/16/81 / 16,000
Stix & Co., Inc. / 11/5/81 / 4,205
Bell & Beckwith / 2/5/83 / 6,523
Gibralco, Inc. / 6/21/83 / 713
California Municipal Investors / 1/31/84 / 1,500
Southeast Securities of Florida, Inc. / 1/31/84 / 11,658
M.V. Securities, Inc. / 3/14/84 / 1,338
June Jones Co. / 6/4/84 / 1,079
First Interwest Securities Corp. / 6/7/84 / 6,140
Coastal Securities / 5/3/85 / 331
Bevill, Bresler & Schulman, Inc. / 4/8/85 / 3,601
Donald Sheldon & Co. / 7/30/85 / 2,362
Cusack, Light & Co., Inc. / 6/25/86 / 256
Norbay Securities Inc. / 10/14/86 / 9,103
H.B. Shaine & Co., Inc. / 10/20/87 / 4,372
Fitzgerald, DeArman, & Roberts, Inc. / 6/28/88 / 30,376
Total claims paid: 101,461

Source: SIPC.
A Major SIPC Liquidation Could Damage Public Confidence
Because SIPC liquidations can involve delays, the liquidation of a major broker-dealer could damage public confidence in the securities industry. Under such a worst-case scenario, hundreds of thousands of customers could be temporarily denied access to their property and exposed to market risks. Although the regulatory framework discussed earlier has been successful in preventing such an occurrence, the regulators and SIPC cannot afford to become complacent. This is all the more true because large firms are continuing to engage in riskier activities than in the past. To prevent large broker-dealers from becoming SIPC liquidations in the future, the regulators must continue to vigorously enforce the net capital and customer protection rules and other applicable securities laws and regulations.
Potential Impacts on Market Stability
The SIPC liquidation of a major broker-dealer may only affect the customers of the failed firm. However, it is possible that the impact of a large SIPC liquidation could adversely affect the stability of securities firms and markets more generally. This spillover effect could occur if customers of other broker-dealers became worried about what would happen if their broker-dealer got into financial difficulty. In such an event, large numbers of customers could be motivated to move their accounts from one broker-dealer to another to avoid the possibility of having their funds tied up for some indefinite period of time. Or customers might get out of securities investments altogether, for example, by selling investments and depositing money in a bank. Both types of adjustments could be destabilizing to the normal operation of the securities markets, but the latter situation of actually selling securities could be highly disruptive because it could result in rapid declines in the prices of many types of securities.

SEC and SIPC officials told us that the destabilizing effects associated with a large broker-dealer liquidation could be contained. The regulators and SIPC believe they could arrange to transfer the customer accounts of a large failed firm to acquiring firms within weeks. The officials said that, unlike the customers of penny stock brokers such as Blinder Robinson, the customers of large broker-dealers tend to hold highly liquid assets such as government bonds and blue chip stocks in their accounts. Other large broker-dealers find such customer accounts attractive and could generally be expected to bid on and acquire the accounts within a relatively short time.
Incentives Foster Efforts to Avoid Major SIPC Liquidations
Given the potentially adverse consequences of a major broker-dealer liquidation, incentives exist to avoid such an event. Regulators, creditors, and customers of failed securities firms all have incentives to avoid the unpleasant aspects of SIPC liquidations — their length, their cost, and the provisions in SIPA regarding creditor and counterparty relationships with the failed broker-dealer.

• Regulators (SEC, SROs, and the Federal Reserve) want to avoid very large SIPC liquidations because such liquidations can cause significant delays for counterparties of the failed firms and can disrupt the smooth functioning of the financial markets.
• Creditors want to prevent a SIPC liquidation because their share of the failed broker-dealer's assets would decrease in the event of a SIPC liquidation, where SIPC has a priority claim on the assets of the firm to pay the administrative costs of the liquidation.
• Customers prefer that their firm not go into a SIPC liquidation because they could lose access to their property for an extended period of time and, consequently, be exposed to market risk.
• Banks having loans and other arrangements with a failed broker-dealer want to avoid a SIPC liquidation because they lose the ability to call their loans or unwind transactions for a period of time determined by the court. This exposes them to market risk and reduces their flexibility.
• Other securities firms with noncustomer claims against troubled firms would like to avoid SIPC liquidations because they, like creditors, could only settle claims from the general estate, which would be diminished by administrative expenses, and the completion of any other nonopen financial arrangements, like those involving banks, would be delayed.

The strength of these incentives, in tandem with the regulatory framework, can be very important.
As we pointed out earlier, most firms that have slipped through the regulatory framework and become SIPC liquidations were small and failed as a result of fraud. As table 2.5 indicates, the five largest SIPC liquidations in terms of customers are dwarfed by the five largest broker-dealers. At year-end 1990, the Securities Industry Association, an industry trade group, reported that it had 60 members with 100,000 or more customer accounts.
Table 2.5: Major Securities Firms and Largest SIPC Liquidations

Major securities firms by number of customer accounts:
Merrill Lynch & Co. / 7,900,000
Shearson Lehman Brothers, Inc. / 4,000,000
Prudential Securities, Inc. / 2,700,000
Dean Witter Reynolds, Inc. / 2,500,000
Paine Webber Group, Inc. / 1,700,000

Largest SIPC liquidations by number of customer claims paid:
Blinder Robinson, Inc. / 61,334
Weis Securities, Inc. / 32,000
Fitzgerald, DeArman, and Roberts, Inc. / 30,376
John Muir & Co. / 16,000
OTC Net, Inc. / 14,107

Sources: 1991-1992 Securities Industry Yearbook and SIPC.
Regulators and SIPC Must Avoid Complacency
While the incentives and the regulatory framework have been successful in preventing major SIPC liquidations to date, SEC officials and SIPC cannot afford to become complacent. During our review, SEC officials told us that two large firms — Thomson McKinnon and Drexel — could have become SIPC liquidations. In fact, in July 1989 SIPC's general counsel flew to New York to prepare to initiate liquidation proceedings against Thomson McKinnon, which had about 500,000 customer accounts.¹⁰ Fortunately, SIPC did not have to liquidate Thomson McKinnon because NYSE and SEC officials arranged the transfer of the firm's customer accounts to Prudential-Bache Securities Inc. Moreover, in 1990 four major broker-dealers received capital contributions from their parent firms: the First Boston Corporation; Shearson Lehman Hutton Inc.; Prudential-Bache Securities Inc.; and Kidder, Peabody & Co. Incorporated.
Looking forward, there is no cause for complacency because changes in the securities industry are making the regulators' job of monitoring broker-dealer net capital and protecting customers more difficult. Continuing a trend that began about 10 years ago, broker-dealers are relying on riskier activities for more of their revenue than in previous decades. Moreover, many of these activities are new and technically sophisticated, and the risks involved may not be well understood. The
¹⁰Thomson McKinnon had been experiencing financial problems since the 1987 stock market crash. In 1989, the firm entered into merger negotiations with Prudential-Bache. On July 14, 1989, the merger negotiations broke down temporarily in a dispute over Thomson's financial exposure. The negotiations later resumed and Prudential-Bache acquired Thomson's customer accounts and retail branch network.
structures of broker-dealers and broker-dealer holding companies are also changing and becoming more complicated. Increasingly, broker-dealer holding companies are moving very risky activities out of the registered and regulated broker-dealers and into unregulated affiliates. Although these affiliates are separate, their activities, and financial difficulties, could affect the financial health of the broker-dealer (a SIPC member).¹¹ These changes may reduce the amount of time the regulators have to protect customers of a financially troubled broker-dealer, making it more difficult to protect customers without SIPC involvement.
While the riskiness of broker-dealers and their affiliates has continued to increase, SEC's ability to oversee the securities industry and thereby protect SIPC was enhanced by the passage of the Market Reform Act of 1990 (P.L. 101-432, 104 Stat. 963). This act, passed in the wake of the Drexel bankruptcy, authorized SEC to collect information from registered broker-dealers and government securities dealers about the activities and financial condition of their holding companies and unregulated affiliates. SEC has issued proposed rules under the act that would require firms to maintain and preserve records on financial activities that might affect the broker-dealer. SEC officials plan to use this information to assess the risks presented to these regulated broker-dealers by the activities and financial condition of their affiliated organizations.
No Expansion of SIPC's Role Warranted
We were asked to look into whether SIPC should have the authority to examine the books and records of its members to fulfill its customer protection role. Given the relative success of the regulatory framework to date in preventing SIPC liquidations, we do not believe there is any evidence to warrant such an expansion of SIPC's authority. Several practical problems also are associated with such proposals. SIPC, with 32 staff members, does not have the resources to ensure that its members comply with securities laws and regulations. Giving SIPC regulatory authority to monitor its 8,153 members would, therefore, require a large increase in SIPC's staff and impose additional costs on the securities industry. The benefits of such an expansion are questionable because it would (1) duplicate the work of SEC and SROs and (2) prove counterproductive if it weakened the accountability SEC and the SROs now have for monitoring securities firms and enforcing the net capital and
"See our report Securities Markets: Assessin the Need to Re ulate Additional Financial Activities of U.S. Securities Firms A / D- -7 , A p r , 1 , 1 92 .
customer protection rules. SEC and the SROs should continue to serve as the first line of defense for customers. SIPC should also maintain its back-up role within the regulatory framework. However, although SIPC does not need expanded regulatory authority, it can better prepare for potential liquidations (see ch. 4).
Conclusions
The regulatory framework has successfully limited the number and size of SIPC liquidations. Most of the firms that slipped through the regulatory framework and became SIPC liquidations failed because of fraud. When a SIPC liquidation becomes necessary, customers may be denied access to their accounts for extended periods. The delays expose customers to market risk, and if a major broker-dealer becomes a SIPC liquidation, public confidence in the securities industry could be damaged. In recent years, several large broker-dealers have experienced financial difficulties that could have resulted in SIPC liquidations. As a result, the regulators and SIPC cannot afford to become complacent about the possibility of a major firm becoming a SIPC liquidation. They must work to avoid such an outcome and be prepared to respond effectively if it should occur.
Chapter 3
SIPC's Responsible Approach for Meeting Future Financial Demands
In 1991, the SIPC board implemented a new strategy for building the SIPC customer protection fund. The board set a goal of $1 billion for fund resources (cash and investments in government securities) to be met by 1997. The board also changed its assessment strategy. The new strategy calls for consistent fund growth of 10 percent annually, with assessment rates varying as needed to achieve the target. If SIPC expenses remain in line with past experience, assessments will be lower than they have been for the last 2 years. In November 1991, SEC approved the board's proposed changes to the SIPC bylaws, and SIPC implemented the plan. Given SIPC's back-up role in securities industry customer protection, we believe that the board's strategy represents a responsible approach to anticipating funding demands that may be placed on SIPC in the future. The plan provides resources well above what SIPC would need if its future demands are similar to those of its past. Furthermore, SIPC's resources should enhance the credibility of protection afforded to customers from the failure of a very large firm, something SIPC has never experienced, if such a firm should end up in a SIPC liquidation. However, the reasonableness of this strategy depends entirely on the continued success of the securities industry's regulatory framework in shielding SIPC from losses. Given the changing nature of the securities industry, the SIPC board and SEC will have to continue to assess the adequacy of the fund.
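The board's 10-percent growth target can be illustrated with a simple compound-growth sketch. The 1991 starting balance used below is an assumption for illustration only, not a figure from this report.

```python
# Sketch of SIPC's fund-growth strategy: grow fund resources 10 percent
# annually toward the $1 billion goal set for 1997. The 1991 starting
# balance of $600 million is hypothetical.

def project_fund(balance, rate=0.10, start_year=1991, end_year=1997):
    """Compound `balance` at `rate` per year; return the year-by-year path."""
    path = {start_year: balance}
    for year in range(start_year + 1, end_year + 1):
        balance *= 1 + rate
        path[year] = balance
    return path

path = project_fund(600e6)  # assumed 1991 balance
print(f"Projected 1997 balance: ${path[1997] / 1e6:,.0f} million")
```

Under this assumed starting point, six years of 10-percent compounding carries the fund past the $1 billion goal.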
SIPC Funding Needs Are Tied to the Risk of a Breakdown in the Regulatory System
One characteristic of the SIPC fund that makes assessing its adequacy very difficult is that fund liquidation expense is not correlated with any traditional measure of financial exposure for financial institutions, such as credit risk or the amount of insured property. Instead, its adequacy is most dependent on the industry's compliance with SEC and SRO rules, particularly the SEC customer protection and net capital rules. The probability of such compliance, or noncompliance, is not quantifiable. If the risk of broker-dealer activities was a good predictor of SIPC expenses, we would expect to find either that SIPC liquidations increased sharply during economic downturns in the securities industry or that most of the broker-dealers ending up in a SIPC liquidation were engaged in very risky activities. However, we found that neither case represents reality.
The riskier a broker-dealer's activities, the more sensitive that broker-dealer is to economic downturns, poor decisions, or even bad luck, and the more likely the broker-dealer is to fail.
SIPC endured a period of securities industry recession from 1987 through 1990 without an appreciable increase in the number of SIPC liquidations. Moreover, a significant percentage of broker-dealers that have been turned over to SIPC did not engage in particularly risky activities. As we explained in chapter 2, 26 of the 39 broker-dealers turned over to SIPC since 1986 (67 percent) were introducing firms engaged in very low risk lines of business. If the amount of SIPC-protected property was correlated to SIPC losses, we would expect that the largest liquidations would be the most costly. However, this has not been the case. Tables 3.1 and 3.2 show that the size of a broker-dealer (as measured by the amount of customer property or the number of customers) is not correlated with the cost to SIPC. Returning $190 million worth of property to customers of John Muir, Inc., resulted in no cost to SIPC, while returning about $106 million to Bell and Beckwith customers cost SIPC nearly $32 million. Blinder Robinson, listed in table 3.2, had more customers than all five firms listed in table 3.1, yet this liquidation was much less expensive than that of Bell and Beckwith.
Table 3.1: Most Expensive SIPC Liquidations as of December 31, 1991
Dollars in millions

Firm                               SIPC advances   Customer claims paid (number)   Customer property returned
Bell & Beckwith                    $31.7           6,523                           $105.7
Bevill, Bresler & Schulman, Inc.   26.4            3,601                           417.5
Stix & Co., Inc.                   17.0            4,205                           51.2
Joseph Sebag, Inc.                 11.4            3,640                           33.9
Government Securities Corp.        8.1             2,403                           40.8
Total                              $94.6           20,372                          $649.1

Source: SIPC.
See table 2.1, where SEC turned four broker-dealers over to SIPC for liquidation in 1987, five in 1988, six in 1989, and eight in 1990.
Table 3.2: Largest SIPC Liquidations as Measured by Customer Claims Paid, as of December 31, 1991
Dollars in millions

Firm                                     SIPC advances   Customer claims paid (number)   Customer property returned
Blinder Robinson, Inc.                   $6.2            61,334                          $25.8
Weis Securities, Inc.                    3.4             32,000                          187.2
Fitzgerald, DeArman, and Roberts, Inc.   5.6             30,376                          137.0
John Muir & Co.                          0.0             16,000                          190.4
OTC Net, Inc.                            -0.4            14,107                          17.4
Total                                    $14.8           153,817                         $557.8

Source: SIPC.
As has been discussed, the regulatory framework established in the last 20 years to protect customers of broker-dealers has helped to limit SIPC liquidations to a little over 1 percent of all broker-dealer closures. With the SIPC fund currently equaling more than twice SIPC's cumulative liquidation expenses from 1971 through 1991, it appears that SIPC is in a good position to continue its past performance with these small broker-dealers. Thus, based on the historical record alone, SIPC resources would seem to be adequate. There is, however, no reason to assume that the future will be like the past. Therefore, SIPC must consider its funding needs in relation to the possibility of a breakdown in securities industry compliance with the net capital and customer protection rules.
SIPC's Plan Seems Reasonable to Fulfill Back-Up Role
In 1989, the board initiated a substantial reevaluation of its funding and assessment strategies. While the board believed that the regulatory framework, backed up by the SIPC fund, was adequate to protect customers, it recognized that the securities industry had changed dramatically since SIPC's inception. The industry had consolidated, with fewer firms doing a greater share of the business. The primary source of industry revenue had also changed from commissions to more risky lines of business such as trading, mergers and acquisitions, and merchant banking. Moreover, the stock market crash in 1987 and the recent demise of several of the largest broker-dealers in the industry (including Thomson McKinnon and Drexel), as well as the savings and loan and banking crises, attracted a great deal of attention and caused a significant decrease in public confidence in financial institutions.
The board first approached the question of how to adapt to changes in the securities industry and how to confront sagging customer confidence by evaluating the adequacy of the fund. To help in this process, the board commissioned a study of the fund's size as well as alternatives to supplement the existing fund, which comprises cash and government securities and is supplemented by commercial bank lines of credit. The study considered SIPC's responsibilities, its resources, and the effect of changes in the risks taken by broker-dealers on SIPC's future funding requirements, given the current regulatory framework. The study also explored plausible scenarios that might place the SIPC fund under considerable strain, such as the failure of the largest broker-dealer in the industry, and whether or not SIPC resources were sufficient to withstand this sort of stress. Finally, the study considered alternative forms of customer protection that could be used to supplement the current cash fund.

The board decided that building a cash fund to an amount sufficient to liquidate the largest broker-dealer in the industry would be an effective way to demonstrate SIPC's capacity to protect customers. According to the fund adequacy study, $1.24 billion was the largest amount likely to be needed to liquidate the largest broker-dealer. Of this amount, roughly 60 percent would represent temporary liquidity requirements and would be recovered by SIPC in the course of the liquidation. The largest cost component of such a liquidation was assumed to be temporary advances required to retrieve customer property pledged as collateral for bank loans or involved in stock loans. If such an event were to occur, SIPC, with a $1 billion fund together with a $1 billion commercial bank line of credit and the $1 billion Treasury line of credit, would have the resources necessary to meet this responsibility.
See Deloitte & Touche's Special Study of the SIPC Fund.
Deloitte & Touche estimated how much it would cost SIPC to liquidate the largest broker-dealer in the industry at the time of the study. The study based this estimate only partially on SIPC liquidation experience because SIPC has never liquidated a broker-dealer that had more than 61,000 customers. By comparison, the largest broker-dealer in the industry had more than 6 million customer accounts at the time of the study.
When customers purchase securities on margin, they must pay at least half of the purchase price, and the broker-dealer may borrow the remaining half from a bank or another broker-dealer. The lending institution will demand that the customer's broker-dealer pledge securities that exceed the value of the loan as collateral. Due to the excess collateralization requirement, SIPC can pay off the loan, recover the customer margin securities, and still recover its advance in full.
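The excess-collateralization mechanics described in this footnote can be sketched numerically. All dollar amounts and the 120-percent collateral ratio below are hypothetical.

```python
# Hypothetical margin purchase: the customer pays half, the broker-dealer
# borrows the rest from a bank and pledges collateral worth more than the loan.

purchase_price = 10_000.0
customer_payment = purchase_price / 2            # customer pays at least half
bank_loan = purchase_price - customer_payment    # broker-dealer borrows the rest
collateral_pledged = bank_loan * 1.20            # assumed excess-collateral ratio

# In a liquidation, SIPC advances cash to pay off the bank loan, recovers
# the pledged customer securities, and, because the collateral exceeds the
# loan, recovers its advance in full.
sipc_advance = bank_loan
value_recovered = collateral_pledged
assert value_recovered >= sipc_advance
print(f"SIPC advance: ${sipc_advance:,.0f}; collateral recovered: ${value_recovered:,.0f}")
```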
Building the SIPC fund to $1 billion would better enable it to meet such a contingency, although it would have to draw on its commercial bank line of credit to meet all the liquidity needs. SEC officials told us that the $1.24 billion estimate is highly conservative because it assumes a substantial breakdown in compliance with the customer protection rule. By incorporating the $1.24 billion conclusion of the fund adequacy study into SIPC's new fund goal, the board decided that a significant cumulative event, such as SIPC being asked to liquidate two or more major broker-dealers within a short time, was improbable because of the securities industry's regulatory and capital structure.

In assessing the reasonableness of SIPC's financial plans, we concluded that there is no methodology that SIPC could follow that would provide a completely reliable estimate of the amount of money SIPC might need in the future. SIPC has had no experience with a large liquidation, and the evidence from smaller liquidations is that the cash outlay and net cost aspects depend greatly on the particular circumstances of the firm. SIPC's estimate, therefore, must be judgmental. We have not tried to develop our own independent estimate of SIPC's funding needs. As explained in the following paragraphs, however, we believe that SIPC's strategy represents a responsible approach to planning for future financial needs. We base our conclusions on several factors. In general, the plan does not assume that the future will be like the past, and it anticipates the possibility that SIPC may have to liquidate a large firm. Furthermore, in the absence of recognized measures of fund adequacy, the concept of using a worst-case scenario to look at potential funding needs makes sense, although this approach is limited by the assumptions made and by the uncertainty of future developments.
While the simultaneous liquidations of several large broker-dealers, which could wipe out the SIPC fund, cannot be ruled out in an uncertain world, in assessing the adequacy of SIPC's plans it is appropriate to bear in mind the back-up role that has been laid out for SIPC. In such an event, SEC and all the other key financial agencies of the federal government, including the Federal Reserve and the Department of the Treasury, would be involved in attempting to manage what would clearly be a crisis situation. Even a market break the size of the one in 1987, which potentially could have caused many SIPC liquidations, placed no unusual demands on SIPC. Since that time, regulators' ability to contain the damage that market breaks may
have on broker-dealers has been strengthened through "circuit-breaker" provisions and through improvements in communication and coordination among the agencies.
Looking more directly at the $1.24 billion estimate of the amount of cash needed to liquidate the largest securities firm, the SIPC funding requirement is conservative with respect to some of its assumptions. It assumes, for example, that the failed broker-dealer's capital would be depleted to the point that its required reserves would be exhausted and that the trustee would not recover any portion of the broker-dealer's partially secured and unsecured receivables. Largely because of these assumptions, officials of SIPC and the SEC and representatives of the securities industry told us that SIPC's funding estimates were on the conservative side.

However, we cannot definitively conclude that the $1.24 billion estimate is overstated. The study SIPC used assumed that the books and records of a large failed broker-dealer would accurately reflect the firm's accounts and that the broker-dealer would be in compliance with the possession or control component of the customer protection rule. In view of the prevalence of fraud in past smaller SIPC liquidations, we believe that the possibility of fraud or of a serious breakdown of internal controls cannot be ruled out, even though SEC contends that these controls are monitored more closely in larger broker-dealers. Furthermore, the largest firms in the industry are likely to continue to grow, so the amount of money that might be needed in 1997 could be higher than the $1.24 billion estimated in 1990.

We commend the board for taking a forward-looking approach to planning the SIPC fund strategy. However, in view of the dynamic nature of the industry, it is essential that the board, together with SEC in its oversight role, assess the fund periodically to adjust the funding plans to changing SIPC needs.
Among other factors, the periodic assessments of the fund's adequacy must focus on the size of the largest broker-dealers, evidence of increased risk-taking within the industry, trends with respect to the amount of customer property, and any signs that regulatory enforcement has deteriorated. To a large degree, the new fund strategy builds in the opportunity for such periodic assessment; on an annual basis, the SIPC board must estimate its liquidation expenses, determine the revenues needed to build the fund at a 10-percent annual rate, and decide whether to renew 25 percent of its commercial bank line of credit.

In 1989, SEC approved new exchange and NASD rules that require temporary trading halts of 1 or 2 hours if the Dow Jones Industrial Average falls more than 250 points or more than 400 points, respectively, in a single day.
If SIPC's Funding Needs Increase, Assessment Burden Issues Could Arise
In the past, the SIPC assessment burden in most years has been quite low, less than 0.1 percent of total securities industry revenue and not more than 2 percent of the industry's pretax income. The burden was greatest in 1990, when SIPC was building its resources and industry profits were down. The assessments in that year represented approximately 10 percent of pretax income. The plan SIPC has adopted will enable it to reach the $1 billion goal by 1997 with low assessments if liquidation expenses remain low (as has been the case in the last several years). Total estimated 1991 assessments were $39 million, a 47-percent decrease from the 1990 assessments of $73 million. However, if SIPC liquidation expenses increase significantly and SIPC needs to recapitalize its fund, SIPC may have to address both the total assessment burden and the distribution of the assessment burden.
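These burden figures can be checked with simple arithmetic; the 1990 pretax income figure is read from table 3.4.

```python
# Dollars in millions, from the report.
assessments_1990 = 73.0
assessments_1991 = 39.0
pretax_income_1990 = 737.2   # industry pretax income, table 3.4

decrease = (assessments_1990 - assessments_1991) / assessments_1990
burden_1990 = assessments_1990 / pretax_income_1990

print(f"1990-to-1991 decrease in assessments: {decrease:.0%}")   # about 47 percent
print(f"1990 burden on pretax income: {burden_1990:.0%}")        # about 10 percent
```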
Assessment History
When SIPC was created, SIPA required each SIPC member firm to contribute 0.125 percent (one eighth of 1 percent) of its gross revenues for that year to start the customer protection fund. Throughout SIPC's history until recently, the board retained the gross revenue base for the assessments needed to maintain fund viability, believing that it was the most equitable distribution of the assessment burden. Also in the past, the board attempted to match assessment rate increases with declines in the fund balance, so that years of high SIPC expenses were followed by periods of higher assessments. Figure 3.1 shows how SIPC revenues and expenses have varied. In 1973 and 1981, expenses were high; consequently, the board increased revenue to cover the high expenses by increasing assessments in the years that followed. Table 3.3 shows the various assessment rates for each year.
See the SIPC assessable gross revenue definition in chapter 1.
[Figure 3.1: SIPC Revenue and Expenses, 1971-1991. Line graph, dollars in millions; the solid line shows revenue and the dashed line shows expenses, plotted for 1971 through 1991. Source: SIPC.]
Table 3.3: History of SIPC Assessment Rates

Period                          Rate
January 1971 - December 1977    0.5% of gross revenues
January 1978 - June 1978        0.25% of gross revenues
July 1978 - December 1978       0.0%
January 1979 - December 1982    $25 flat fee
January 1983 - March 1986       0.25% of gross revenues
April 1986 - December 1988      $100 flat fee
January 1989 - December 1990    0.19% of gross revenues
January 1991 - present          0.065% of net operating revenues

Source: SIPC.
SIPC's New Assessment Base
In 1991, the board created a task force to examine the assessment strategy, and the task force concluded that steady fund growth, regardless of liquidation expense, was preferable to the previous reactive strategy. The board also directed the task force to examine the way SIPC assesses
member firms to build the fund. The task force examined a variety of assessment strategies that would appear to be more closely correlated with actual SIPC losses to make the assessments risk- or exposure-based. However, the task force did not find a material relationship between either risk or exposure and SIPC losses. For example, as noted in tables 3.1 and 3.2, no correlation could be found between the level of securities and cash balances at failed broker-dealers and actual SIPC liquidation costs. Also, the riskiness of failed broker-dealers' activities did not translate into SIPC losses as long as the failed broker-dealers complied with SEC and SRO regulations. The board adopted the task force's recommendation that revenue remains the best base for assessments, but that the existing gross revenue assessment base should be changed to net operating revenue.

While the change to a net operating revenue assessment base did not tie assessments any closer to fund risk or exposure, it did address the concerns of some SIPC members, especially some of the larger broker-dealers, about the treatment of interest expense in the previous assessment base. The SIPC task force on assessments reported that the increased emphasis on activities that involve interest expense made gross revenues an inappropriate basis for assessments. Interest expense at NYSE member firms increased from 21 percent of gross revenues in 1980 to 42 percent in 1990. Many large broker-dealers complained that a broker-dealer's gross revenues could increase dramatically, and with it the SIPC assessment, with a rise in interest rates. Such an interest rate increase would cause little or no economic change for the broker-dealer because interest expense would also increase. The change to a net operating revenue base eliminated this problem by basing the assessment on the difference (spread) between interest revenue and interest expense.
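A numerical sketch shows how the base change works. The broker-dealer's revenue figures below are hypothetical, while the 0.19-percent and 0.065-percent rates come from table 3.3.

```python
# Hypothetical broker-dealer (dollars in millions) with a large
# interest-rate-spread business.
commissions = 40.0
interest_revenue = 60.0
interest_expense = 55.0

gross_revenue = commissions + interest_revenue                               # old base
net_operating_revenue = commissions + (interest_revenue - interest_expense)  # new base

# Under the old base, a rise in interest rates inflates interest revenue
# (and the assessment) even though interest expense rises in step, leaving
# the firm's economics largely unchanged.
old_assessment = gross_revenue * 0.0019           # 0.19% of gross revenues (1989-90 rate)
new_assessment = net_operating_revenue * 0.00065  # 0.065% of net operating revenues (1991 rate)
print(f"old base: ${old_assessment:.3f}M; new base: ${new_assessment:.3f}M")
```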
In the event of a significant downturn in the health of the fund, SIPC may not be able to meet the 10-percent annual fund growth goal. Although SIPC assessments will increase if the fund experiences losses, it may not be able to achieve the annual growth goal because there is a cap on the total amount of assessments that may be collected in any 1 year. With this cap,
The board also maintained an alternative assessment base that SIPC members may choose: gross revenues less 40 percent of margin interest earned on customers' securities accounts. The SIPC task force on assessments recommended that this option be made available in an attempt to distribute the assessment burden equitably between firms that actively engage in trading and interest-rate spread transactions and firms that rely on their retail operations for income.
NYSE member broker-dealers were responsible for approximately 80 percent of SIPC's total assessment revenue before the assessment change.
assessments collected in 1 year may not exceed the equivalent of 0.5 percent of gross revenues. Moreover, if the fund falls below $150 million (approximately 22 percent of its current level), the assessment base reverts back to gross revenues. The gross revenue base would shift more of the assessment burden to firms with relatively higher gross revenues, usually larger broker-dealers. Industry criticism of the proposed changes was minimal, largely because the overall effect of the change for the near term was lower assessments for most broker-dealers in the industry. As long as the securities industry regulators vigorously enforce the net capital and customer protection rules, the incentives limiting SIPC's exposure remain, and SIPC's investments in U.S. government securities continue to generate considerable interest income, SIPC expects assessments to remain low in the near term.
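The cap and the reversion rule described above can be expressed as a small decision sketch. The function and its inputs are hypothetical illustrations of the two constraints, not SIPC's actual procedures.

```python
def assessment_constraints(fund_balance, gross_revenue):
    """Sketch of the two constraints described in the report:
    (1) assessments collected in 1 year may not exceed 0.5 percent of
        gross revenues, and
    (2) if the fund falls below $150 million, the assessment base
        reverts to gross revenues.
    """
    base = "gross revenues" if fund_balance < 150e6 else "net operating revenues"
    annual_cap = 0.005 * gross_revenue
    return base, annual_cap

# Example: a depleted fund forces the base back to gross revenues.
base, cap = assessment_constraints(fund_balance=120e6, gross_revenue=70e9)
print(f"base: {base}; annual cap: ${cap / 1e6:,.0f} million")
```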
Assessment Burden Issues Can Arise If SIPC Assessments Increase
The effect of the change in the assessment base should be small as long as the assessment rate remains at or near its current low level. However, in the event that a significant increase in assessments is required to meet the fund growth goal, the issue of assessment burden, for both the entire industry as well as individual broker-dealers within the industry, may require reevaluation. While SIPC assessments have generally been small compared to industry income, 1990 SIPC assessments represented a significant percentage of industry income. Table 3.4 compares SIPC assessments to securities industry pretax income and total revenues for 1983 to 1991.
Table 3.4: SIPC Assessments, Industry Income, and Revenue, 1983-1991
Dollars in millions

Year   Assessment revenue   Pretax income   Total revenue
1983   $36.8                $5,206.8        $36,904.1
1984   52.3                 2,856.6         39,607.1
1985   71.0                 6,502.4         49,844.3
1986   23.1                 8,301.2         64,423.8
1987   1.0                  3,209.9         66,104.4
1988   1.0                  3,477.3         66,100.4
1989   66.0                 2,822.9         76,864.0
1990   73.0                 737.2           72,087.8
1991   39.0                 7,600.0         76,900.0

Sources: SEC and SIPC.
Figure 3.2 shows the burden of assessments on the securities industry in terms of a percentage of pretax income.
[Figure 3.2: SIPC Assessments as a Percentage of Securities Industry Pretax Income, 1983-1991. Bar chart; the vertical axis shows the percentage of pretax income, from 0 to 10 percent. Sources: SIPC and SEC.]
A high SIPC assessment rate, combined with the change to net operating revenue-based assessments, may have a more profound effect on the distribution of the assessment burden among SIPC members than it does on the industry as a whole. By focusing SIPC assessments on net operating revenue, SIPC shifted some of the assessment burden from broker-dealers that are actively engaged in trading and interest rate spread transactions to broker-dealers that are primarily dependent on their retail brokerage business for income. Under the new assessment structure, broker-dealers are allowed to deduct interest expense, from debt-financed activities, from SIPC assessable revenues. Generally, only large broker-dealers have a significant amount of deductible interest expense. SIPC is also continuing the broker-dealers' option of choosing to deduct 40 percent of margin interest revenue (this
deduction existed prior to the change in the assessment strategy). The change to the new assessment structure would shift the assessment burden further toward broker-dealers that have primarily a retail business. The issue of assessment equity has come up in the past and raised the following concerns:

- Large broker-dealers claim they have carried too much of the burden because small broker-dealers usually become SIPC liquidations.
- Small broker-dealers claim that large broker-dealers pose a more significant threat to the fund and are in a better position to carry the assessment burden.
- Broker-dealers with few or no customers claim that they receive little benefit from SIPC and consequently should not be forced to pay assessments at the same rate as broker-dealers with more customers.

However, as we discussed earlier, the impact of changing the assessment structure on both the total assessment burden and the distribution of the assessment burden among individual broker-dealers depends upon the assessment rate. As long as the rate remains low, questions concerning the equity of the assessment structure should not demand a great deal of attention. If rates rise significantly as a result of high liquidation expenses, the SIPC board may need to revisit the issue.
Alternatives or Supplements to SIPC's Financial Structure
We were also asked to look at the role of alternatives such as private insurance to supplement SIPC coverage. The Deloitte & Touche study of the SIPC fund and the SIPC task force on assessments also addressed alternative or supplemental ways to provide protection to securities investors. The task force concluded that a customer protection fund comprising cash and short-term government securities, like the current fund, is the best protection for customers and the best way to maintain public confidence in the securities industry. We agree that a cash fund is superior to private insurance, letters of credit, and lines of credit in terms of providing a basic level of customer protection and public confidence. Historical experience with private insurance plans, like the excess customer protection insurance coverage carried by many major broker-dealers, has shown that coverage frequently cannot be obtained when it is needed most. For example, private insurance coverage for
The SIPC task force on assessments proposed eliminating the option once SIPC reaches its $1 billion fund goal.
customers with account values above SIPC coverage limits was not renewed at either Drexel or Thomson McKinnon before their closing. Although we believe that private insurance cannot adequately provide the basic customer protection currently provided by the SIPC fund, supplemental private insurers, through the pricing of their products, can provide valuable information concerning the health of the institutions they insure. In this way, private insurance fulfills a monitoring function that supplements the activities of the regulators. The Federal Deposit Insurance Corporation Improvement Act of 1991 (P.L. 102-242, 105 Stat. 2236) requires a study of the feasibility of a similar option for a private reinsurance system covering depository institutions.

Bank lines of credit, like SIPC's current line of credit, or bank letters of credit may be appropriate to serve as a supplement but are not appropriate to replace the current cash fund. Lines of credit can be written so that they will be honored under almost all circumstances, but the cost of such a line might be prohibitive. Banks also have the option of not renewing lines or letters of credit when they expire, and they may choose not to renew SIPC's credit when SIPC would need it most, during periods of significant losses.
Conclusions
The SIPC board's new fund strategy appears to be responsible, given SIPC's back-up role in customer protection and the regulatory framework that exists in the securities industry today. With this regulatory structure in place and diligent supervisory and enforcement efforts, it is reasonable to assume that only a small percentage of broker-dealer closures will be turned over to SIPC for liquidation, and SIPC has the resources necessary to liquidate these firms. SIPC currently has more than twice the money available to protect customers than it has spent in its entire 20-year history to meet similar obligations. However, the reasonableness of this strategy depends entirely on the continued success of the securities industry's regulatory framework in shielding SIPC from losses. SIPC has a responsibility to regularly review its funding needs and take measures to strengthen the fund if there is evidence of any declining effectiveness of the customer protection and net capital rules. Further, in view of the importance of the regulatory protections, SEC in its oversight capacity should also regularly review the adequacy of SIPC's funding strategy.
Recommendations
We recommend that the SIPC Chairman periodically review the adequacy of SIPC's funding arrangements, taking into account any changes in the principal risk factors affecting the fund's exposure to loss. We also recommend that the SEC Chairman review the adequacy of funding plans developed by SIPC.
Agency Comments and Our Evaluation
SIPC and SEC officials provided comments on our assessment of the adequacy of the SIPC fund. Both SIPC and SEC agreed with our assessment that SIPC acted responsibly in planning for the SIPC fund's future needs. SIPC also agreed that a cash fund is superior to private insurance, letters of credit, and lines of credit. SIPC did not comment on our recommendations, but SEC agreed that the adequacy of the SIPC fund should be reviewed periodically. SEC stated that it and SIPC have reviewed the adequacy of the fund and will continue to do so.
Chapter 4
SIPC Can Better Prepare for Potential Liquidations
SIPC has never had to liquidate a large securities firm, and SIPC and SEC officials believe it unlikely that they will ever have to. However, should SIPC be called on to liquidate a large firm, the complexities of such a liquidation could impede the timely resolution of customer claims. Such delays, in turn, could damage public confidence in the securities industry. We believe that there are reasonable steps SEC and SIPC officials can take that would better enable them to liquidate a large firm in a timely manner should the need arise.
SIPC Has Not Made Special Preparations for Liquidating a Large Firm
The persons we contacted in the course of our review — industry officials, liquidation trustees and others involved in liquidations, and regulatory officials — generally gave SIPC high marks for its ability to conduct liquidations. Although a detailed review of the efficiency of SIPC liquidations was outside the scope of our review, we found no reason to question this assessment of SIPC's liquidation activities. However, these liquidations have all been of relatively small firms. Successful performance in the past does not, therefore, necessarily mean that SIPC is adequately prepared to move quickly to take on the liquidation of one of the largest firms. SIPC's largest liquidation to date has involved the processing of about 61,000 claims; the five largest broker-dealers each has more than 1 million customer accounts.
A decade ago, SIPC recognized the need to address the problems associated with the potential liquidation of larger firms by establishing a task force to look into the topic of how to handle large liquidations. The task force, composed of SIPC, SRO, and industry officials, was initiated in 1981 to study ways to ensure the timely return of customer property in the event a large firm with more than 100,000 customers became a SIPC liquidation. The task force was prompted by SIPC's liquidation of several relatively large firms in 1981, the largest of which, Muir, had about 16,000 accounts. The task force reported in 1981 that there were 11 securities firms carrying over 100,000 active customer accounts. (In 1990, over 50 securities firms had more than 100,000 customer accounts.) The 1982 task force report stated that the failure of a major broker-dealer would pose substantial challenges to SIPC and its normal liquidation procedures. The report stressed the problems that could confront the trustee and SIPC's efforts to promptly satisfy customer claims in a large liquidation. For example, the trustee, SIPC, and the regulators would generally try to arrange a bulk transfer and avoid the need to
process customer claims.¹ However, as we pointed out in chapter 2, this process may not always be possible due to the quality of the firm's accounts and the reliability of its books and records. Bulk transfers could also prove time consuming, particularly if SIPC had to simultaneously negotiate transfer agreements with one or more acquiring firms. Acquiring firms would want to ensure that the accounts meet internal credit standards, and extensive computer programming efforts may be required to ensure that no errors occur in the transfer process. To minimize these and other potential problems, the task force recommended that SIPC develop a plan for large liquidations. The task force suggested that SIPC work with industry officials to negotiate agreements needed to ensure the timely liquidation of a large broker-dealer. For example, SIPC could negotiate standby agreements on data processing services.² Since the task force report, SIPC officials have not attempted to strengthen their planning processes or make special preparations for a large broker-dealer liquidation. In response to the task force recommendation, a committee composed of SIPC and SEC officials developed a list of operational information that should be available from a large debtor broker-dealer at the beginning of a liquidation. (See table 4.1.) However, no action was taken to implement the recommendation.
¹Amendments to SIPA in 1978 allowed SIPC to pay claims directly in some small cases and bulk transfer customer accounts to other acquiring firms.
²The task force also recommended that SIPC be given the authority to operate large troubled firms so that customers could continue to trade in their accounts and thereby avoid market losses. SIPC and SEC officials said the recommendation was not adopted because it would involve rescuing failed firms and that no other brokers would be willing to serve as counterparties to a bankrupt firm.
Table 4.1: Operational Information Recommended by a 1982 SIPC-SEC Committee to Help Ensure the Timely Liquidation of a Large Broker-Dealer

1. Current list of branch offices
2. Location of leases for branch offices
3. Location of equipment leases and other executory contracts
4. List of banks or financial institutions with funds or securities on deposit and banks with outstanding loans (both customer and firm)
5. Location of vaults and other secure locations
6. Location and description of computer databases and services used
7. Location of mail drops, e.g., post office boxes and other depositories
8. Chart of interlocking corporate relationships between the broker-dealer and its affiliates
9. List of key personnel
10. Accurate count of active customer accounts

Source: SIPC.
Senior SIPC officials said that they do not see the need to implement the 1982 task force recommendation or take other special measures to develop a plan for large liquidations. SEC officials agreed and said it would be impossible to develop a single plan that would be applicable to all troubled firms. We believe that the views of SIPC and SEC do not take seriously enough the problems that would result were SIPC to have to conduct the liquidation of a large firm.

In support of their position, SIPC and SEC officials said it is unlikely that they will have to liquidate a large firm. They pointed out that over the past decade the regulators have demonstrated the ability to protect the customers of such firms without SIPC involvement. However, as we noted in chapter 3, when we discussed SIPC's financing needs, the regulators and SIPC officials cannot afford to become complacent about the possibility of a large broker-dealer ending up in a SIPC liquidation. SEC officials told us that the financially troubled Thomson McKinnon and Drexel firms could have become SIPC liquidations, and in 1990 four major broker-dealers had to be recapitalized by their parent companies.

Another reason SIPC and SEC officials say that special preparations for large liquidations are not needed is that SIPC can readily adapt the procedures developed for smaller firms to the liquidation of larger ones. They point out that larger firms are more likely than smaller ones to have well-functioning computerized information systems that are the key to being able to move quickly to protect customer accounts.
We agree that the experience SIPC has gained in liquidating firms over the years has certainly enabled it to develop and improve its ability to liquidate firms. Throughout the past 20 years, SIPC has continued to upgrade its procedures and its automated liquidation system to improve its ability to conduct timely liquidations. While we appreciate and support SIPC's ongoing improvement efforts, we also believe that SIPC would be in a better position to protect customers if it were to take reasonable steps, in coordination with SEC, to prepare for the contingency of a large firm liquidation. In view of the risks to market stability that may accompany the failure of a large firm, we think it reasonable for SIPC to do everything possible to be able to protect customers should it be called on to conduct such a liquidation. Experience from the last several years, reviewed in the next section, suggests strongly that additional measures can be taken to help customers of large firms gain access to their property as quickly as possible should any such firm fail.
Measures to Enhance SIPC's Ability to Liquidate a Large Firm on a Timely Basis
Operational Information Could Be Collected Sooner
The ability of SIPC and trustees to satisfy customer claims in a timely fashion can be directly related to actions taken within the first hectic days of a liquidation's commencement. By better planning with regard to obtaining information about failing firms and securing automation support, SIPC can increase the chances that a large liquidation can proceed without delay, should such a liquidation prove necessary.
In the early stages of a liquidation, the trustee — with SIPC advisement — must simultaneously gain control of the failed firm's headquarters and branch offices, freeze all customer accounts and creditor claims against the firm, identify the location and availability of customer cash and securities, and determine the feasibility of arranging a bulk transfer. SIPC officials also (1) advise the trustees on the hiring of key liquidation staff such as accounting firms and (2) review and approve customer claim forms with the liquidation staff.³ We found that SIPC officials have generally received high marks from trustees and other individuals involved in SIPC liquidations for the guidance and assistance they provided in the conduct of liquidations. For example, the Blinder Robinson trustee said that SIPC provided excellent legal advice, which he used to defend against challenges to his authority by the former
³SIPC had not established specific documents that customers must file to support their claims. Instead, customers are encouraged to submit the ordinary documentation broker-dealers normally provide, such as monthly statements, purchase and sale confirmations, and canceled checks.
owner of the firm, and the FDR trustee praised SIPC's valuable legal and technical assistance. However, we also learned from SIPC staff and trustees of complications they experienced in acquiring the necessary information and automated liquidation systems during past liquidations. We believe such complications are indicative of the types of problems in obtaining operational information that, should they occur in the liquidation of much larger firms, could potentially result in significant delays for a large number of customers.

For example, the trustee for Blinder Robinson, which had been on the 5(a)⁴ list 11 months prior to its failure, said his staff did not trust the accuracy of the information the firm provided about the locations of its branch offices. In addition, the staff did not locate certain of the firm's bank accounts until 10 months after the start of the liquidation. Similarly, the trustee of the FDR liquidation also experienced problems gathering all the information he needed. For example, it took two employees 4 to 6 weeks to find all the firm's branch office leases. Also, the trustee estimated that it took three staff members between 60 to 75 days to examine each of the firm's customer accounts to determine whether it was active before a bulk transfer could be arranged.

In the above examples, the trustees had some difficulty in obtaining operational information concerning the location of offices and accounts. In each instance, the trustees did not believe that these complications had interfered with the timely processing of customer claims. But even if they did result in delays, relatively few customers were affected, and there was no potential for adverse impact on confidence in securities markets in general. For a large liquidation the stakes would be higher.
We therefore believe it wise to initiate procedures to be sure that SIPC has as much operational information as possible before it would actually have to undertake the liquidation of a large firm. The potential impact that lack of operational information of the type referred to in table 4.1 could have on the liquidation of a major dealer can be illustrated by the events surrounding the failure of Thomson McKinnon in 1989. Although SIPC did not have to initiate liquidation proceedings for Thomson McKinnon, SIPC officials had made few preparations when they were informed of its imminent demise. Thomson McKinnon had about
⁴Under SIPA section 5(a), the regulators must notify SIPC about broker-dealers that are in or approaching financial difficulty.
600,000 customer accounts, 170,000 more customers than SIPC had protected in its 20-year history. As discussed earlier, NYSE and SEC arranged for the transfer of Thomson McKinnon's accounts to Prudential-Bache, but SEC officials said the firm could have become a SIPC liquidation when the merger negotiations broke down temporarily. NYSE first warned SIPC that Thomson McKinnon was experiencing financial problems in May 1989. But information SIPC received about Thomson McKinnon was primarily financial information, such as the firm's quarterly financial reports, which identify its net capital level and aggregate data on the value of bank loans secured by customer margin securities. At the request of SEC, SIPC's general counsel went to New York on Friday, July 14, 1989, to prepare to initiate liquidation proceedings, possibly as early as the following week. After SEC notified SIPC, the staff began intensive efforts to collect operational information about the firm, such as the location of branch offices, and plan for the liquidation.

We believe that the regulators should provide SIPC with operational information needed to liquidate troubled firms so that SIPC can begin preparations before firms fail. With such information, SIPC officials could assess, on a case-by-case basis, the impact that a liquidation would have on customers days or weeks in advance and make plans to return customers' property as quickly as possible. SEC officials said that requiring the regulators and troubled firms to provide the information in updated form to SIPC would impose unnecessary administrative burdens, particularly as they try to protect customers without SIPC involvement. However, we question how great a burden such a requirement would impose on SIPC, the regulators, and troubled firms. As SEC officials and the Blinder Robinson and FDR trustees told us, much of the information is already collected by the regulators and available at the start of the liquidation process.
Furthermore, if the regulators are attempting to protect customers by transferring accounts to another firm, they would need virtually all of this information. The burden of being certain that SIPC has as much operational information as possible before it has to undertake a liquidation could be minimized if the requirement is limited to 5(a) referrals (perhaps only exceeding a certain size) and other troubled firms at the discretion of the regulators.⁵ For example, the regulators may decide that SIPC should take
⁵Between 1988 and 1991, SIPC received 63 new 5(a) referrals, of which 18 (29 percent) became SIPC liquidations.
precautionary steps and plan for the liquidation of a large firm, with numerous customers and a nationwide branch office network, whose capital has fallen to the early warning levels but was not on the 5(a) list. SIPC had advance warning via the 5(a) list of 67 percent of the firms that were liquidated between 1988 and 1991. SIPC, SEC, and the SROs should work together to identify operational information that SIPC will need to plan for potential liquidations and that the regulators and troubled firms can reasonably be expected to provide.
SIPC Has Not Addressed Cost-Effective Automation System Options
Another important tool used by SIPC to promptly respond to the demands of any liquidation is an automated liquidation system. An automated liquidation system is the computer software program or programs that help trustees organize liquidations and pay customer claims promptly. SIPC developed its own automated system in 1985 and has periodically modified the system to upgrade software and hardware capability. The system is designed for typical-sized liquidations and to be used either alone or in conjunction with modifications to the failed broker-dealer's system.

We support SIPC's efforts to develop a system that meets the needs of typical liquidations and SIPC's policy of acquiring the most cost-effective automated liquidation systems. However, it is not clear what automated system SIPC would use in situations where either its own system or the failed broker-dealer's automated systems could not be readily adapted to meet the liquidation's needs. SIPC's system has not been used in a liquidation involving more than 30,000 customer claims. Although SIPC officials have stated that their system could be modified to handle liquidations of any size, they also recognize that it may not be cost effective to modify their system for a large liquidation.

To date, SIPC has relied primarily on one supplier to meet its automated liquidation system needs for liquidations where SIPC's system cannot be used. When Blinder Robinson failed, SIPC advised the trustee to use that supplier's system even though the trustee had made arrangements to use another supplier. To the extent that it is relying on one supplier, SIPC is incurring a management risk that could delay efforts to return customer property. For example, the system may be unavailable in an emergency, or it may cost more than other competitive systems.
We are concerned about SIPC's ability to acquire the most cost-effective automated system in a timely manner because it has not analyzed various data processing options or compared cost data to determine
system costs and capabilities. While SIPC officials stated that all major public accounting firms are capable of meeting their automation needs, they had only had experience with one firm, and they did not have cost data from any other firms. If the process of analyzing system comparisons does not take place until a SIPC liquidation is initiated, unnecessary delays could result in acquiring the automated system that is key to the processing of customer claims. We believe that SIPC would be in a better position to ensure that trustees acquire cost-effective automated liquidation systems on a timely basis by systematically analyzing automated system options and developing plans to meet diverse requirements of potential liquidations.
More Effective Oversight by SEC Is Needed
As the federal agency responsible for overseeing the securities industry, SEC has a vital interest in the protection of customers and the continued stability of the securities markets. SEC also has the responsibility for overseeing SIPC's operations, and in many cases would itself have to take action to ensure that SIPC can fulfill its responsibilities in the best possible manner. For example, SEC would have to issue any rule that would require the SROs to provide operational information to SIPC about troubled firms.

In the past, SEC has carried out its oversight responsibilities by participating in SIPC task forces, reviewing monthly and annual reports on SIPC's expenditures, investigating customer complaints about SIPC, and meeting with SIPC staff regarding liquidation issues. Moreover, SEC officials said the director of SEC's Division of Market Regulation began attending SIPC board meetings at the invitation of SIPC beginning in 1991. While such contacts between SEC and SIPC are important, we question whether SEC has paid sufficient attention to its SIPC oversight responsibilities. In particular, SEC has not taken steps to ensure that SIPC develops plans to liquidate large troubled firms as the 1982 task force recommended. Additionally, according to SEC and SIPC officials, SEC has evaluated SIPC's operations only once, in 1985. Although SEC found at that time that SIPC was doing a good job selecting trustees and overseeing the liquidation process, it also identified actions that would speed the payment of customer claims, such as the development of an automated liquidation system. However, SEC never followed up on the 1985 evaluation to determine if SIPC's automation program met SIPC's various liquidation requirements. Without more active oversight efforts by SEC, investors and Congress cannot be assured that SIPC has fully implemented proposals designed to strengthen its operations.
Conclusions
SIPC has a responsibility to ensure that trustees liquidate failed firms efficiently so that customers are protected against unnecessary market losses and risks to the financial system are minimized. SEC and trustees have complimented SIPC's guidance and assistance in past liquidations. However, we believe that SIPC can enhance its ability to protect customers by improving its preparations for liquidations of large troubled firms. Specifically, SIPC should (1) collect information needed to liquidate troubled firms sooner and (2) assess the cost effectiveness of various automation options to ensure the timely acquisition of an automated liquidation system. Also, additional oversight by SEC could help ensure that SIPC was as prepared as possible for responding to the demands that would result from the liquidation of a large firm. Unless SIPC and SEC address these concerns, SIPC may not be in a position to manage liquidations efficiently and protect customers from unnecessary market losses resulting from delays in the liquidation process.
Recommendations
We recommend that the chairmen of SIPC and SEC work with the SROs to plan for the timely liquidation of a large broker-dealer by improving the timeliness of information provided to SIPC by the regulators that is needed to liquidate a troubled firm. We further recommend that the Chairman of SIPC, in coordination with the SEC Chairman, systematically determine SIPC's automation needs for various sized liquidations and develop appropriate plans and procedures to ensure that trustees will promptly acquire cost-effective automated liquidation systems. Finally, we recommend that the SEC Chairman periodically review SIPC's operations and its efforts to ensure timely and cost-effective liquidations.
Agency Comments and Our Evaluation
SIPC and SEC officials commented on our recommendations to improve SIPC's preparations for liquidations. On our first recommendation to improve information collection, both agencies questioned the need for additional information. However, they both agreed to thoroughly review this matter. In commenting on our second recommendation to improve SIPC's automation program, SIPC stated that they continuously review their own automation system, determine their automation needs at the inception of a liquidation proceeding, and can make any necessary modifications without delaying the liquidation proceedings. Both agencies agreed to again review SIPC's system and consider all of our comments. Finally, SEC agreed with our third recommendation to initiate periodic reviews of SIPC operations and has taken steps to begin such a review.
SIPC and SEC responded to our report by stating that there are no indications of any problems with the information currently collected on financially troubled firms and that the evidence cited in the report is anecdotal. They also expressed concern that our report did not recognize SIPC's efforts to develop and upgrade its own automated liquidation system or that SIPC's decision to use its automated system, either alone or in conjunction with possible modifications to the debtor's automated system, is based on SIPC's assessment of the most cost-effective solution.

The responses by SEC and SIPC officials did not address our main concern, which is the issue of improving SIPC's preparations for the liquidation of a large broker-dealer. Instead, the comments suggested that we were criticizing SIPC for the way it had conducted liquidations and for alleged deficiencies in the automation system it has developed. We modified the text and recommendations in chapter 4 to emphasize that our focus was on preparing for potential large liquidations because, as we noted in the draft, our main concern was with market stability. We did not assess the quality of specific SIPC liquidations or features of SIPC's automated liquidation system. Our recommendations to improve SIPC's preparations for large liquidations are prompted by a concern we share with previous SIPC and SEC chairmen that SIPC's ordinary liquidation procedures may not be sufficient to liquidate a large broker-dealer on a timely basis. SIPC's ability to promptly process customers' claims is critical to maintaining public confidence and stability in the financial system. Our recommendations focus on the two areas where timeliness could be key in a large liquidation: (1) the collection of information needed in the liquidation and (2) the acquisition of an automated liquidation system.
In the first area, a SIPC task force, as well as some trustees experienced in SIPC liquidations, suggested that it would be useful to have specific operational information available to plan for a liquidation. We believe that SIPC and SEC officials could work together with the SROs to ensure that the collection of this information is not unduly burdensome for the regulators or troubled firms.
In the automation area, we agree that SIPC deserves credit for developing an automated system to meet its typical liquidation needs and have noted in the report that SIPC has made periodic improvements to the system. We also agree with SIPC's policy of integrating the most cost-effective automated data processing solutions into the assistance and support it
provides to trustees appointed under SIPA. We do not question SIPC's ability in any case to use its own system or modify other systems. However, SIPC has not collected any cost data or compared various automation options. Therefore, we are concerned about whether these determinations can be accomplished without delay to the liquidation proceedings, particularly in situations involving large liquidations and liquidations where SIPC's own system could not be used. SIPC would be in a better position to make both timely and cost-effective decisions if it analyzed cost data for various automation options to plan for potential liquidations.
Chapter 5
Discrepancies in Disclosing Customer Protections
Disclosure is an important feature of securities regulation so that customers have adequate information to make informed investment decisions and retain confidence in financial institutions and markets. Within the scope of this review, we were asked to discuss several aspects of disclosure related to SIPC membership. Specifically, we were asked to determine what information is provided to customers about SIPC's coverage and whether customers are informed of whether they are dealing with a SIPC member.

SIPC members are required to disclose their SIPC status to their customers. For the most part this disclosure seems adequate, although we found some areas where improvements could be made. There is, however, no requirement for non-SIPC members regulated by SEC to disclose their lack of membership in SIPC. In certain situations, customers have been confused because some nonmember firms are involved in similar securities activities as member firms, such as the purchase and sale of securities to customers. In addition, customers could be harmed because they may be subjected to undisclosed risks of loss or misappropriation of their funds or securities. If these nonmember firms were required to disclose that they were not SIPC members, investors would be better informed about the relevance of SIPC coverage to their investment decisions. We, therefore, believe that SEC should require nonmember firms that serve as intermediaries in customers' purchases of securities and have temporary access to customer funds to disclose their SIPC status.
Disclosure Requirements for SIPC Members
SIPC requires its members to inform customers about their SIPC status. SIPC members generally must display the SIPC logo in their principal and branch offices and in most advertising. These firms may also refer to SIPC in other material such as statements of account. SIPC may, however, prevent members from displaying the logo when it would be misleading — for example, if the firm's principal business was in products such as commodity options that are not covered by SIPC.

Disclosure regarding some of the features of SIPC coverage is also important. Even if a broker-dealer is a SIPC member, customers do not have SIPC protection for products not covered by SIPC. Furthermore, customers of failed firms lose SIPC protection if they do not submit their claims within 6 months after SIPC or the trustee publishes notification of a SIPC direct payment procedure or liquidation.¹ SIPC has developed an official
¹SIPC may use a direct payment procedure rather than a formal court-supervised liquidation to resolve small firm liquidations if each customer claim is within the limits of SIPC protection and the claims of all customers total less than $250,000.
brochure to explain SIPC coverage and provides the brochure to its members for voluntary distribution to their customers. SIPC's brochure generally explains what SIPC does and does not cover.² SIPC officials do not believe that customers of SIPC-member firms have a significant information problem relating to SIPC's coverage because most customers purchase typical securities products that are clearly covered by SIPC. Nevertheless, questions concerning customers' eligibility for SIPC coverage have been raised in correspondence and litigation relating to SIPC liquidations where some customers found out too late — after their firm failed or after the deadline for filing claims had passed — that they were not entitled to SIPC protection. Some of these customers had transacted business with an affiliate of the broker-dealer that was not a SIPC member. Others had not submitted their claim forms by the designated deadline.

SIPC's official brochure was recently revised to address potential customer confusion regarding the SIPC status of SIPC-member affiliates. The SIPC brochure now advises customers that some affiliates of SIPC members may not be SIPC members and that they should make checks payable only to SIPC members. We agree that the SIPC brochure provides a useful mechanism for including or clarifying information that customers may need to know. In addition to SIPC's recent changes, we believe that SIPC should consider revising other areas of the brochure to address potential confusion. One area that SIPC should review involves specifically explaining the 6-month deadline for filing a claim in order to be eligible for SIPC protection. The brochure currently states only that customers should file their claims promptly within the time limits set forth in the notice and in accordance with the instructions to the claim form; no deadline is mentioned.
This issue was raised in some customers' letters to SIPC when customer claims were denied because they were not filed within the designated time frame. If customers do not receive a notice from the trustee or see the newspaper notifications for a SIPC liquidation or direct payment procedure and, therefore, do not file within the 6-month period, they will not be protected by SIPC.
2SIPC's official brochure lists the securities that SIPC covers when purchased from a SIPC-member firm as notes, stocks, bonds, debentures, and certificates of deposit. Also, SIPC protects shares of mutual funds, publicly registered investment contracts, or certificates of participation or interest in any profit-sharing agreement or in any oil, gas, or mineral royalty or lease. Finally, warrants or rights to purchase, sell, or subscribe to the securities mentioned above and to any other instrument commonly referred to as a security are protected under SIPA. On the other hand, the brochure explains that SIPC does not protect some securities-related products such as unregistered investment contracts; gold, silver, and other commodities; and commodity contracts or options.
Another area the brochure does not address is what customers should do if their broker-dealer fails or goes out of business but does not go through a SIPC liquidation. This information is important because 99 percent of broker-dealers that are liquidated or go out of business do so without going through SIPC. Customers should know that they still need to check to ensure that all of their securities and cash have been returned or transferred to another firm. If they find that this has not been done, they must notify SIPC or the regulators within 180 days after the firm's registration is withdrawn so that SIPC may consider whether to initiate formal liquidation or direct payment proceedings. They also should check on the status of their firm if regular statements about their accounts are not received.
Differences in Customer Disclosure Need to Be Addressed
As mentioned, there are no requirements for firms in the securities industry that are registered with SEC but not members of SIPC to disclose this fact to their customers. This lack of disclosure often poses no problem because many such firms do not have access to customer funds. However, there are situations in which this lack of disclosure could harm investors. These situations involve nonmember firms that serve as intermediaries in customers' purchases and sales of securities and may temporarily have access to customer funds. Should they fail or go out of business, these firms could expose customers to loss. If SIPC nonmembers with access to customer funds were required to disclose their SIPC status, there would be greater assurance that investors would be informed about the relevance of SIPC coverage to their investment decisions.

SIPC and SEC officials did not know the extent to which customers may have difficulties because of the differences in protections provided by nonmember firms. We agree that extensive evidence is hard to come by. However, the potential harm to investors is demonstrated by evidence that customers of SIPC nonmembers that have access to customer property are exposed to the same type of fraud that has been prevalent in SIPC-liquidated firms and that some customers have had problems with SIPC nonmember firms that are affiliated or associated with member firms. The spirit of the securities laws dating back to 1933 emphasizes the need to provide investors with information necessary or appropriate for their protection so that they can make informed decisions. In our judgment, for customers to be fully informed about the risks and differences in
protection associated with different types of financial firms, disclosure of the SIPC status of nonmember firms that serve an intermediary role with customers and have access to customer property should be required.
Customers May Face Similar Risks From Members and Nonmembers
Information about the SIPC status of financial firms is important to customers when they face similar risks, but different protections, in purchasing similar types of products. Although most of the SEC-registered financial firms that are not SIPC members do not hold customer accounts, some types of firms play an intermediary role by accepting funds from customers for the purchase of securities products and, accordingly, have discretionary access to customer accounts. These intermediary firms are subject to the risks of misappropriating or losing customer funds. Nonmember intermediary firms include SIPC-exempt broker-dealers and certain types of investment advisory firms.

According to SIPC data as of year-end 1991, about 440 SEC-registered broker-dealers were excluded from SIPC membership. Broker-dealers that are not SIPC members include those whose business is involved exclusively in the following areas: selling shares of mutual funds or unit investment trusts, selling variable annuities or insurance, providing investment advisory services to registered investment companies or insurance company separate accounts, transacting business as a government securities specialist dealer,3 or transacting principal business outside the United States and its territories and possessions.

Investment advisory firms are required under the Investment Advisers Act of 1940 to register with SEC. These firms may be involved in a variety of services, such as supervising individual clients' portfolios, participating in the purchase or sale of financial products, providing investment advice and developing financial plans for clients, and publishing market reports for subscribers. An SEC official estimated that about half of the approximately 17,500 investment advisory firms are involved in the purchase and sale of securities products to customers and may temporarily have access to customer funds.
The other investment advisory firms provide primarily advisory or information services and do not serve an intermediary role or handle customer property. When these firms
3For further discussion of the SIPC exclusion of government securities specialist dealers, see our report, U.S. Government Securities: More Transaction Information and Investor Protection Measures Are Needed (GAO/GGD-90-114, Sept. 14, 1990).
register with SEC, they must specify whether they will have custody of or access to client accounts and identify any material relationship with a broker-dealer. In a previous review of investment advisers, we found, although precise figures were unavailable, some examples where investment advisory firms had misappropriated client funds.4

The importance of customers' choices between SIPC-member and nonmember firms may be illustrated by the purchase of a common securities product, mutual fund shares. Customers may purchase fund shares directly from mutual fund investment companies, which would not involve an intermediary from another firm. However, customers may also use an intermediary firm in their purchase of mutual fund shares. Customers may select different types of intermediaries, including SIPC-member broker-dealers, nonmember broker-dealers, investment advisers, and other types of financial firms not registered with SEC.5 In these cases, the customer deals with a sales agent or intermediary who directs the customer funds to the mutual fund where the customers' shares are held. In 1991, SIPC-member broker-dealers earned about $4.2 billion in revenues from the sale of mutual fund shares while the revenues for nonmember broker-dealers were about $600 million.

In cases where customers are dealing with intermediaries, only customers of SIPC-member firms would be protected by SIPC if the firm holding their securities failed and required a SIPC liquidation. However, the potential for fraud exists in all intermediary situations. SIPC officials noted that in the last 5 years, 26 of 39 SIPC liquidations involved failures resulting from fraud on the part of introducing firms that did not retain customer accounts. In addition, during 1991 SIPC liquidated a broker-dealer involved primarily in selling mutual funds that failed due to the fraudulent misappropriation of about $1.8 million in customer funds.
In these cases, the firms did not hold onto customer money or establish customer accounts. These firms failed due to fraud resulting primarily from agents misappropriating customer funds instead of passing them on to either the mutual fund sponsor or other broker-dealers.
4See our report, Investment Advisers: Current Level of Oversight Puts Investors at Risk (GAO/GGD-…, June …), pp. 11-1….

5Customers may also purchase mutual fund shares from banks and other depository institutions. However, we have limited the scope of our review in this report to those financial firms that are registered with SEC.
Nonmember Affiliates and Associates of SIPC-Member Broker-Dealers
One problem area regarding the SIPC status of nonmember intermediary firms involves those firms that are affiliated with (formally tied within the same financial holding company) or associated with (having a material relationship with) a SIPC-member broker-dealer. Here, in addition to the underlying risk of misappropriated funds, there is the additional complication of confusion regarding a possible tie to a SIPC member.

One of the major changes over the last 2 decades within the financial industry has been the emergence of large holding company structures headed by a parent company and comprising many (sometimes hundreds of) affiliated insured and uninsured companies involved in diversified activities. In several highly publicized incidents, customers lost money because they unknowingly purchased uninsured products from uninsured affiliates of insured depository firms.6 One such example involved the customers of the failed Lincoln Savings and Loan. Some Lincoln customers purchased uninsured bonds of the parent holding company in the lobby of the savings and loan. Similar problems have occurred with customers of SIPC nonmember financial firms that were affiliated with SIPC-member broker-dealers.

Under financial holding company structures, some firms may be allowed to sell securities products to customers but must have a broker-dealer execute securities trades and hold the customer accounts. SEC officials acknowledged that most of the problems that they are aware of relate to how the SIPC logo is displayed at SIPC-member broker-dealers that share common space with nonmember affiliates.

One example of the confusion over nonmember affiliates was addressed in a recent SIPC lawsuit involving the liquidation of a SIPC-member broker-dealer, Waddell Jenmar Securities, Inc., in North Carolina.7 In this case, SIPC conceded that several customers were defrauded by Guilford T. Waddell, the president of both the SEC-registered broker-dealer and a nonmember investment advisory firm, Waddell Benefit Plans, Inc. (WBP), which administered pension plans. However, SIPC protection depended on whether these customers had been customers of the SIPC-member broker-dealer. Some customers instructed Mr. Waddell to purchase stocks with funds from their pension fund accounts, which were held by WBP. Mr. Waddell never purchased the stocks and misappropriated customer funds. When SIPC liquidated the broker-dealer beginning in April 1989, several
6We reported on customer problems relating to the insured status of financial products offered by banks in our report Deposit Insurance: A Strategy for Reform (GAO/GGD-91-26, Mar. 4, 1991).

7In re Waddell Jenmar Securities, Inc., 126 Bankr. 936 (E.D. N.C. 1991).
customers disagreed with the trustee's refusal to honor their claims and appealed to the U.S. Bankruptcy Court. The judge decided in May 1991 that the claimants of the pension plan fund were not eligible for SIPC protection because they were not customers of the broker-dealer. Instead, the court held that the claimants' missing funds and securities were held by WBP.
Similar confusion has been raised in customer correspondence concerning nonmember firms that were associated, rather than formally affiliated, with a SIPC member. Often, these firms also transact business directly with customers and transfer customer funds to an associated broker-dealer that executes the securities transactions or holds the customer accounts. Some customers wrote to SIPC seeking clarification of these firms' SIPC status in situations involving investment advisory firms associated with SIPC-member broker-dealers. One such customer inquiry asked if SIPC protected the funds and securities invested with a nonmember financial planning firm and held by a SIPC-member broker-dealer. In this situation, if the financial planning firm failed and had not delivered the customer funds to a member broker-dealer, the customer would not be eligible for SIPC protection. But if the customer funds or securities were held in an account with a member broker-dealer, the customer property would have SIPC protection.
SEC Should Address Differences in Customer Protection
Differences in customer protection and differences in the disclosure of customer protection are two distinct and important issues. This review does not address the former issue. We believe that the disclosure differences among SEC-registered firms that transact securities-related business with customers and have access to customer funds need to be addressed so that customers can make more informed investment decisions. At a minimum, customers should know whether an SEC-registered firm that is subject to the risks of losing or misappropriating customer property is a member of SIPC.

Congress has considered several legislative proposals that would require affiliates of SIPC-member broker-dealers to disclose their nonmember status. Another option is for SEC to address discrepancies in its regulatory disclosure requirements for registered firms that serve as intermediaries with customers and have access to customer funds or securities. SEC officials said that they would prefer to address the disclosure issue by amending their regulations rather than by amending SIPA. They are considering revising their regulations to require affiliates of
broker-dealers, and possibly nonmember broker-dealers, to disclose that they are not SIPC members, but they do not know when the proposal would be issued for comment. If SEC is to take the lead on this issue, it should identify and require those SEC-registered firms that serve as intermediaries in selling securities products to customers and have access to customer funds or securities to disclose that they are not SIPC members. We recognize that other financial firms outside SEC's jurisdiction also sell securities and securities-related products but are not required to disclose their SIPC status. This report is limited to financial firms under SEC's jurisdiction. In a previous study we recommended that it would be appropriate for Congress to address the issue of uniform disclosure of federally insured and uninsured products.8
Conclusions
In today's financial markets, customers may receive different protection for similar securities-related products, depending on the type of firm from which they purchase the product. Only SIPC-member firms are required to inform their customers of their SIPC status. Some confusion has occurred over the protections available to customers, particularly those involving financial firms affiliated or associated with SIPC-member broker-dealers. Customers should have adequate information about the SIPC status of financial firms that serve as intermediaries in selling securities products so that they can make more informed investment decisions. SIPC and SEC can improve the information available to customers by addressing the current discrepancies in the disclosure requirements among those SEC-registered firms that serve as intermediaries with customers and have access to customer funds and securities.
Recommendations
We recommend that the SIPC Chairman review and revise, as necessary, SIPC's official brochure to better inform customers of what they should do if their securities firm fails or otherwise goes out of business and to specify the amount of time that customers have to respond in order to qualify for SIPC protection.
8The SEC-registered firms that are serving an intermediary role and that should be required to disclose their non-SIPC status would include those SIPC-exempt broker-dealers that assist customers in buying and selling securities much as introducing broker-dealers do. Also included should be those investment advisory firms that manage discretionary or nondiscretionary accounts. These firms have temporary custody of customer property and are subject to the risks of losing or misappropriating customer property. See GAO/GGD-91-26, p. 143.
We also recommend that SEC revise its regulations to require SEC-registered financial firms that serve in an intermediary role with customers and have access to customer funds or securities to disclose to their customers that they are not SIPC members.
Agency Comments and Our Evaluation
SIPC and SEC commented on our recommendations to provide customers with better information on SIPC liquidation proceedings and to require certain additional securities firms to disclose their SIPC status to customers. Both SIPC and SEC generally agreed to address our concerns. Although we do not agree with SIPC's comments that our report concludes that there are no substantial gaps in disclosure to customers about the SIPC program, SIPC has agreed to clarify its official SIPC brochure as soon as feasible.

SEC's comments indicate that while views within SEC differed regarding disclosure of SIPC status to customers, SEC is considering expanding disclosure requirements for some of the financial firms that serve in an intermediary role with customers. SEC's letter stated that its Division of Market Regulation is considering recommending a rule that would require disclosure of the absence of SIPC coverage on the part of (1) non-SIPC firms that are affiliated with registered broker-dealers and that have similar names or use the same personnel or office space and (2) non-SIPC registered broker-dealers. We support this effort, although if enacted it would leave a third category of firms (firms that are neither broker-dealers nor affiliates of broker-dealers but that serve in an intermediary capacity) without a SIPC disclosure requirement. Additional efforts will still be needed to ensure that all SEC-registered firms make adequate disclosure regarding SIPC coverage in the event of misappropriation of customer funds.

Although SEC's Division of Investment Management believes there is some merit in our concern about the possibility of investor confusion, they do not believe that additional disclosure is necessary for two reasons. First, because investment advisers are excluded from SIPC membership, "there is no more reason to require investment advisers with custody of client funds or securities to disclose their non-SIPC status than there is reason to require investment advisers to disclose that they are not members of the Federal Deposit Insurance Corporation." Second, if investment advisers were required to disclose their non-SIPC status, they run the risk that customers will have the false impression that the funds and securities they manage or hold have less protection than other financial firms outside
SEC's jurisdiction (e.g., banks and futures commission merchants) that also sell securities and securities-related products and are not required to disclose their SIPC status.

In response to the first reason cited by SEC's Division of Investment Management, we do not believe that the analogy to the FDIC status of investment advisory firms is valid. Investment advisory firms are not involved in transactions involving deposits, but certain investment advisory firms are involved in the purchase and sale of securities products, sometimes through an affiliation with a SIPC-member broker-dealer. Also, customers could be confused because investment advisory firms may be involved in many types of securities-related activities, including, as SEC's letter points out, having temporary custody of customer property. Officials in both SEC divisions agreed that there is a possibility of customer confusion about a firm's SIPC status, particularly with firms that are affiliated with SIPC-member broker-dealers. For this reason, we believe that it is important to inform customers of the SIPC status of firms with whom they transact securities-related business.

In response to the second reason, we focused our recommendations in this report on actions within SEC's jurisdiction, which includes only SEC-registered firms. While we cannot say whether customers will think that SIPC nonmember firms required to disclose have less protection than nonmember firms that are not required to disclose, we believe it is important that customers have better information to make more informed investment decisions. We also recognized in this report that some other financial firms (e.g., banks) involved in the purchase and sale of securities products are not required to disclose their SIPC status. This report notes that in a previous study we recommended that it would be appropriate for Congress to address the issue of uniform disclosure of federally insured and uninsured products.
Appendix I
SEC Customer Protection and Net Capital Rules
The Securities and Exchange Commission's (SEC) customer protection rule (15c3-3) and uniform net capital rule (15c3-1) form the foundation of the securities industry's customer protection framework. The net capital rule is designed to protect securities customers by requiring that broker-dealers have sufficient liquid resources on hand or in their control at all times to promptly satisfy customer claims. The customer protection rule is designed to ensure that customer property in the custody of broker-dealers is adequately safeguarded.

Customer Protection Rule Restricts Broker-Dealer Use of Customer Property

In the Securities Investor Protection Act of 1970 (SIPA), Congress directed SEC to promulgate rules and regulations necessary to provide financial responsibility safeguards including, but not limited to, the acceptance of custody and use of customer securities and free credit balances. SEC rule 15c3-3, restricting the use of customer property, was a result of this congressional directive. According to SEC, rule 15c3-3 attempts to

- ensure that customers' funds held by broker-dealers and cash that is realized through lending, hypothecation,1 and other permissible uses of customer securities are used to service customers or are deposited in a segregated account for the exclusive benefit of customers;
- require broker-dealers to promptly obtain possession or control of all fully paid and excess-margin securities carried by the broker-dealers for customers;
- separate the brokerage operation of the firm's business from its firm activities, such as underwriting and trading;
- require broker-dealers to maintain more current records, including the daily determination of the location of customer property (for possession or control purposes) and the periodic calculation of the cash reserve;
- motivate the securities industry to process transactions more expeditiously;
- inhibit the unwarranted expansion of broker-dealer business activities through the use of customer funds;
- augment SEC's broad program of broker-dealer responsibility; and
- facilitate the liquidations of insolvent broker-dealers and protect customer assets in the event of a Securities Investor Protection Corporation (SIPC) liquidation.

Rule 15c3-3 has two requirements: (1) broker-dealers must maintain possession or control of all customer fully paid and excess-margin
1Pledging customer securities as collateral for a loan.
securities2 and (2) broker-dealers must segregate all customer credit balances and cash obtained through the use of customer property that has not been used to finance transactions of other customers.
Part 1: Possession or Control
SEC's requirement that broker-dealers maintain possession or control of customer fully paid and excess-margin securities substantially limits broker-dealers' ability to use customer securities. Rule 15c3-3 requires broker-dealers to determine, each business day, the number of customer fully paid and excess-margin securities in their possession or control and the number of fully paid and excess-margin securities that are not in the broker-dealer's possession or control. Should a broker-dealer determine that fewer securities are in possession or control than are required, rule 15c3-3 specifies time frames by which these securities must be placed in possession or control. For example, securities that are subject to a bank loan3 must be returned to the possession or control of the broker-dealer within 2 days. Securities that are on loan to another financial institution must be returned to possession or control within 5 days of the determination. Once a broker-dealer obtains possession or control of customer fully paid or excess-margin securities, the broker-dealer must thereafter maintain possession or control of those securities.

Rule 15c3-3 also specifies where a security must be located to be considered "in possession or control" of the broker-dealer. "Possession" of securities means the securities are physically located at the broker-dealer. "Control" locations are

- a clearing corporation or depository, free of any lien;
- a Special Omnibus Account under Federal Reserve Board Regulation T4 with instructions for segregation;
- a bona fide item of transfer of up to 40 days;
- foreign banks or depositories approved by SEC;
- a custodian bank;
- in transit between offices of the broker-dealer or held by a guaranteed corporate subsidiary of the broker-dealer;
- in the possession of a majority-controlled subsidiary of the broker-dealer; or
- any other location designated by SEC, such as in transit from any control location for no more than 5 business days.
2Excess-margin securities in a customer account are those securities with a market value greater than 140 percent of the customer's debit balance (the amount the customer owes the broker-dealer for the purchase of the securities).

3Securities that have been pledged to a bank as collateral.

4Federal Reserve System Regulation T (12 C.F.R. 220) regulates the extension of credit by and to broker-dealers. For the purposes of SEC rule 15c3-3, it deals primarily with broker-dealer margin accounts.
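The excess-margin definition above is simple arithmetic. The following is an illustrative sketch, not SEC's official computation, and the account figures in it are hypothetical:

```python
# Illustrative sketch of the excess-margin definition: securities with
# a market value greater than 140 percent of the customer's debit
# balance are "excess-margin" securities. Figures are hypothetical.

def excess_margin_value(market_value, debit_balance):
    """Return the market value of securities above 140 percent of the debit balance."""
    threshold = 1.4 * debit_balance
    return max(market_value - threshold, 0)

# A customer holds $100,000 of securities against a $50,000 margin loan;
# 140 percent of the debit balance is $70,000, leaving $30,000 excess margin.
print(excess_margin_value(100_000, 50_000))
```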
Part 2: Segregation of Customer Cash and the Reserve Formula
The second requirement of rule 15c3-3 dictates how broker-dealers may use customer cash credit balances and cash obtained from the permitted uses of customer securities, including from the pledging of customer margin securities. Essentially, the customer protection rule restricts the use of customer cash or margin securities to activities directly related to financing customer securities purchases. The rule requires a broker-dealer to periodically (weekly for most broker-dealers) compute the amount of funds obtained from customers or through the use of customer securities (credits) and compare it to the total amount it has extended to finance customer transactions (debits). If credits exceed debits, the broker-dealer must have on deposit in an account for the exclusive benefit of customers5 at least an equal amount of cash or cash-equivalent securities. For most broker-dealers, the calculation must be made every Friday, and any required deposit must be made by the following Tuesday.

Tables I.1, I.2, and I.3 show samples of the individual components of the cash reserve portion of rule 15c3-3 as they appear in the routine Financial and Operational Combined Uniform Single (FOCUS) reports submitted by broker-dealers to SEC. First, we will explain the numbered items as they relate to SIPC, and then we will use the items to demonstrate how the reserve calculation works.

The numbered items in table I.1 make up the credits portion of the reserve calculation. These accounts generally represent accounts payable by the broker-dealer to customers and money borrowed by the broker-dealer using customer property as collateral. Item 1 is the amount of cash in customer accounts that SIPC would be required to return to customers in a liquidation. Items 2 and 3 show the amount of customer property pledged as collateral for bank loans or involved in stock loans. Generally, the securities involved in these transactions come from customer margin accounts and are used to secure the customers' margin loans. Customers may also volunteer their fully paid securities for use in stock loans if the broker-dealer provides the customer with liquid collateral; however, when they do so they forfeit the SIPC protection covering those securities. These items also show the amount SIPC may need to advance to recover customer
5Rule 15c3-3 requires that broker-dealers maintain a bank account that is separate from any other account of the broker-dealer and specified as a "Special Reserve Bank Account for the Exclusive Benefit of Customers" (reserve account). The broker-dealer must also obtain written notification from the bank that all cash or qualified securities within the reserve account are being held for the exclusive benefit of customers; cannot be used directly or indirectly as security for any loan to the broker-dealer by the bank; and shall be subject to no right, charge, security interest, lien, or claim of any kind in favor of the bank or any person claiming through the bank.
property pledged as collateral at banks or involved in stock loans with other broker-dealers.
Table I.1: Credits Component of the Reserve Formula Calculation

Credit balances (Week 1; Week 2):
1. Free credit balances and other credit balances in customers' security accounts ($10,000,000; $10,000,000).
2. Monies borrowed collateralized by securities carried for the accounts of customers ($3,000,000; $3,050,000).
3. Monies payable against customers' securities loaned ($5,000,000; $5,000,000).
4. Customers' securities failed to receive ($4,000,000; $4,000,000).
5. Credit balances in firm accounts which are attributable to principal sales to customers ($4,000,000; $4,000,000).
6. Market value of stock dividends, stock splits, and similar distributions receivable outstanding over 30 calendar days ($1,000,000; $1,000,000).
7. Market value of short security count differences over 30 calendar days old ($2,000,000; $2,000,000).
8. Market value of short securities and credits (not to be offset by longs or by debits) in suspense accounts over 30 calendar days ($500,000; $500,000).
9. Market value of securities which are in transfer in excess of 40 calendar days and have not been confirmed to be in transfer by the transfer agent or the issuer during the 40 days (none; none).
10. Other (list) ($1,000,000; $1,000,000).
11. Total credits ($30,500,000; $30,550,000).

Source: SEC FOCUS Report.
The numbered items in table I.2 make up the debits portion of the reserve calculation. These accounts generally represent transactions that the broker-dealer has financed for customers; item 18 is analogous to the broker-dealer's loss reserve for the loans made to customers. The loans to customers aggregated in these accounts are secured by customer property. If at some point the market value of the customer property securing the debit falls sufficiently to make the debit unsecured or partially secured, the unsecured portion of that account is taken out of the reserve
See Molinari and Kibler, Broker-Dealers' Financial Responsibility Under the Uniform Net Capital Rule — A Case for Liquidity, Geo. L.J.
calculation, given a haircut, and charged against the net capital of the broker-dealer. In a SIPC liquidation, the customer has the option to either pay the remaining debit balance or allow the trustee to liquidate securities in that customer's account to pay the balance. If the debit balance in the account is greater than the value of the securities in the account, the trustee usually liquidates the securities and attempts to recover the remaining debit balance.

The Federal Reserve and the self-regulatory organizations (SROs) set initial margin account requirements that must be met before a customer may effect new securities transactions and commitments. In addition, the SROs and broker-dealers set maintenance margin requirements to limit the likelihood that margin loans to customers will become unsecured. These requirements specify how much equity each customer must have in an account when securities are purchased and how much equity must be maintained in that account. For example, the New York Stock Exchange (NYSE) requires that customers of its member firms maintain at least 25 percent equity for all equity securities long in an account. This means that the customer must maintain equity of at least 25 percent of the current market value of the securities in the account. The equity balance of a margin account is calculated by subtracting the current market value of all securities short and the amount of the customer's debit balance (the amount the customer owes the broker-dealer for the purchase of the securities) from the current market value of the securities held long in the account plus the amount of any credit balance.
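The equity arithmetic just described can be sketched in a few lines. This is an illustrative reading of the text, not part of the report; the function names are assumptions, and the 25-percent maintenance rate for long equity positions is taken from the NYSE example above:

```python
def account_equity(long_mv, short_mv, debit_balance, credit_balance=0):
    """Equity = long market value + credit balance
                - short market value - debit balance."""
    return long_mv + credit_balance - short_mv - debit_balance

def meets_nyse_maintenance(long_mv, short_mv, debit_balance,
                           credit_balance=0, maintenance_rate=0.25):
    """True if equity is at least 25 percent of the long market value,
    per the NYSE maintenance example in the text."""
    equity = account_equity(long_mv, short_mv, debit_balance, credit_balance)
    return equity >= maintenance_rate * long_mv
```

For a $100,000 purchase carried against a $50,000 debit, equity is $50,000, well above the $25,000 maintenance floor; were the position's market value to fall to $60,000, equity would drop to $10,000 and the account would be under-margined.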
Table I.2: Debits Component of the Reserve Formula Calculation

Debit balances                                                     Week 1         Week 2
12. Debit balances in customers' cash and margin accounts
    excluding unsecured accounts and accounts doubtful of
    collection net of deductions pursuant to rule 15c3-3.     $10,000,000    $10,000,000
                                                                                + 50,000
13. Securities borrowed to effectuate short sales by
    customers and securities borrowed to make delivery on
    customers' securities failed to deliver.                    1,000,000      1,000,000
14. Failed to deliver of customers' securities not older
    than 30 calendar days.                                      4,000,000      4,000,000
15. Margin required and on deposit with the Options
    Clearing Corporation for all option contracts written
    or purchased in customer accounts.                          2,000,000      2,000,000
16. Other (list).
17. Aggregate debit items.                                     17,000,000     17,050,000
18. Less 3 percent (for alternative net capital
    requirement calculation method only).                        (510,000)      (511,500)
19. Total debits                                              $16,490,000    $16,538,500

Source: SEC FOCUS Report.
The numbered items in table I.3 show how the aggregate credit and debit items come together to determine the required segregated reserve. If aggregate credits are greater than aggregate debits, the broker-dealer must ensure that it has sufficient funds in its reserve account to cover the difference. If debits are greater than credits, no reserve is required.
Table I.3: Reserve Calculation

Reserve computation                                                Week 1         Week 2
20. Excess of total debits over total credits (line 19
    less line 11).
21. Excess of total credits over total debits (line 11
    less line 19).                                            $14,010,000    $14,011,500
22. If computation permitted on a monthly basis, enter 105
    percent of excess of total credits over total debits.
23. Amount held on deposit in "Reserve Bank Account(s)",
    including value of qualified securities, at end of
    reporting period.                                          14,000,000     14,010,000
24. Amount of deposit (or withdrawal) including value of
    qualified securities.                                          10,000          1,500
25. New amount in Reserve Bank Account(s) after adding
    deposit or subtracting withdrawal including value of
    qualified securities.                                     $14,010,000    $14,011,500
26. Date of deposit.                                               1-7-92        1-14-92

Source: SEC FOCUS Report.
To demonstrate how the reserve formula works with regard to customer credit balances and margin accounts, we prepared this example. The column labeled "Week 1" in tables I.1, I.2, and I.3 shows the account balances of a hypothetical broker-dealer. During week 2, customer A purchased $100,000 worth of securities on margin by paying $50,000. The broker-dealer borrowed $50,000 from a bank, using $70,000 of customer A's securities as collateral. Item 2 in table I.1 records the use of customer securities for the bank loan, and item 12 in table I.2 records the $50,000 that customer A borrowed (debit) to buy the securities. Item 11 shows total credits increasing by $50,000 in week 2. Item 17 shows aggregate debits also increasing $50,000; however, total debits only increased by $48,500, reflecting the 3-percent charge from item 18. The effect of customer A's transaction is also reflected in the broker-dealer's cash reserve requirement in table I.3, item 21.

Had the broker-dealer chosen to fund customer A's margin account purchase with free credit cash from other customers, the credit balances shown in table I.1 would not change from week 1 to week 2. The debit balances shown in table I.2 would reflect the $50,000 increase in item 12, increasing total debits. The required reserve in this second example would
decrease by $48,500, and the broker-dealer would be allowed to withdraw that amount from the reserve account. These examples show that broker-dealers must either segregate customer cash in a reserve account (example 1) or use it to lend to other customers (example 2).
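The reserve mechanics of these examples reduce to a simple computation: total credits less total debits, with a 3-percent charge against aggregate debits under the alternative method, floored at zero. The sketch below is an illustration of tables I.1 through I.3, not code from the report; integer dollar amounts are used so the week 1 and week 2 figures reproduce exactly:

```python
def required_reserve(total_credits, aggregate_debits, alternative_method=True):
    """Rule 15c3-3 reserve requirement: excess of credits over debits.

    Under the alternative net capital method, aggregate debits are first
    reduced by a 3-percent charge (table I.2, item 18). If debits exceed
    credits, no reserve is required.
    """
    charge = aggregate_debits * 3 // 100 if alternative_method else 0
    total_debits = aggregate_debits - charge
    return max(0, total_credits - total_debits)
```

With the week 1 figures ($30,500,000 of credits against $17,000,000 of aggregate debits), the function returns the $14,010,000 reserve shown in table I.3, item 21; the week 2 figures yield $14,011,500.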
Net Capital Rule Stresses Liquidity
In 1975, SEC established the uniform net capital rule (a modification of rule 15c3-1) as the basic capital rule for broker-dealers, which is applicable to all SIPC members.' This rule was designed to make sure that broker-dealers maintain sufficient liquid assets to cover their liabilities. In order to comply with rule 15c3-1, the broker-dealer must first compute its net capital: net worth plus subordinated debt less nonallowable assets and deductions that take into account risk in the broker-dealer's securities and commodities positions. Second, the broker-dealer determines its net capital requirement in one of two ways: (1) the basic method, where aggregate indebtedness cannot exceed 15 times net capital, or (2) the alternative method, where net capital must be at least 2 percent of aggregate debits from the cash reserve calculation of rule 15c3-3.
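The two compliance tests can be expressed directly. This is a sketch of only the thresholds just described (15 times net capital for the basic method, 2 percent of aggregate debits for the alternative method), not an implementation of the full rule, and the function names are assumptions:

```python
def meets_basic_requirement(net_capital, aggregate_indebtedness):
    """Basic method: aggregate indebtedness may not exceed 15x net capital."""
    return aggregate_indebtedness <= 15 * net_capital

def meets_alternative_requirement(net_capital, aggregate_debits):
    """Alternative method: net capital must be at least 2 percent of the
    aggregate debits from the rule 15c3-3 reserve calculation."""
    return net_capital >= 0.02 * aggregate_debits
```

The rule also imposes minimum dollar amounts (discussed below), which this sketch omits.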
Computing Net Capital
The process of calculating a broker-dealer's net capital is really a process of separating its liquid and nonliquid assets. For the purposes of calculating net capital, only assets that are readily convertible into cash on the broker-dealer's own initiative count in the capital computation. For example, fixed assets (such as furniture and exchange seats) as well as unsecured receivables (such as unsecured customer debits, described in the previous section) cannot be included as allowable assets in the net capital calculation. The process of computing net capital also involves computing the market value of broker-dealer assets and accounting for the price volatility of broker-dealer securities. The net capital rule applies a discount (haircut) to proprietary securities according to their risk characteristics, i.e., price volatility. For example, debt obligations of the U.S. government receive a haircut depending on their time to maturity, from a 0-percent haircut for obligations with less than 3 months to maturity to a 6-percent haircut for obligations with more than 25 years to maturity.
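A net capital computation along these lines might be sketched as follows. Only the two Treasury haircut rates come from the text; the 15-percent equity rate and the function shape are illustrative assumptions, not the rule's actual schedule:

```python
# Hypothetical haircut schedule. Only the two Treasury endpoints come from
# the text; the 15-percent equity rate is an illustrative assumption.
HAIRCUTS = {
    "treasury_under_3_months": 0.00,
    "treasury_over_25_years": 0.06,
    "equity": 0.15,
}

def net_capital(net_worth, subordinated_debt, nonallowable_assets, positions):
    """Net capital = net worth + subordinated debt - nonallowable assets
    - haircuts on proprietary positions (list of (asset_class, market_value))."""
    haircuts = sum(mv * HAIRCUTS[cls] for cls, mv in positions)
    return net_worth + subordinated_debt - nonallowable_assets - haircuts
```

Under this sketch, a firm with $1,000,000 of net worth, $100,000 of nonallowable fixed assets (furniture, exchange seats), and $500,000 of long-dated Treasuries would show roughly $870,000 of net capital.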
'This rule also applies to SEC-registered broker-dealers that are not SIPC members, but SEC has the authority to exempt some SIPC nonmember firms from the rule.
Basic Net Capital Requirement
Calculating a broker-dealer's required capital, using the basic method, involves calculating the broker-dealer's aggregate indebtedness. Generally, aggregate indebtedness means the total liabilities of the broker-dealer, including some collateralized liabilities and liabilities subordinated to the claims of other creditors or customers. For broker-dealers that choose to use the basic net capital requirement, the minimum dollar net capital requirement for broker-dealers engaging in the general securities business, which involves customers, is $25,000. For broker-dealers that generally do not carry customer accounts (introducing brokers), the minimum capital requirement is $5,000. SEC has proposed that these minimum net capital standards be raised to $250,000 for broker-dealers that hold customer property. Clearing firms that do not hold customer property and introducing firms that routinely receive customer property would be required to hold at least $100,000 in net capital. Introducing broker-dealers that do not routinely receive customer property would be required to hold at least $50,000 in net capital.
Alternative Net Capital Requirement
SEC offered broker-dealers an alternative to the basic net capital requirement that is based on the broker-dealers' responsibilities to customers rather than aggregate indebtedness. This requirement option (most commonly used by large broker-dealers), in conjunction with rule 15c3-3, is designed to ensure that sufficient liquid capital exists to return all property to customers, repay all creditors, and have a sufficient amount of capital remaining to pay for a liquidation if the broker-dealer fails. The broker-dealer's ability to return customer property is addressed by rule 15c3-3. The repayment of creditors and the payment of the broker-dealer's liquidation expenses is addressed by the 2-percent net capital requirement and the deductions from net worth for illiquid assets and risk in securities and commodities positions. SEC believed the alternative requirement would promote customer protection and still allow broker-dealers to allocate capital as they see fit by acting as an effective early warning device to provide reasonable assurance against loss of customer property,

• avoiding inefficient and costly misallocations of capital in the securities industry,
• eliminating competitive restraints on the securities industry in its interaction with other diversified financial institutions,
• making the capital structures of broker-dealers more understandable to suppliers of capital to the public, and
• providing some reasonable and finite limitation on broker-dealer expansion.

Broker-dealers using the alternative capital requirement must hold at least $100,000 in capital. The new minimum standards proposed by SEC would also apply to broker-dealers using the alternative method. Generally, broker-dealers maintain capital levels far in excess of the minimum requirement; this amount is recorded in item 28 of table I.4. Table I.4 shows the items included in the alternative capital requirement calculation.
Table I.4: Alternative Net Capital Calculation

Computation of alternative net capital requirement
22. Two percent of combined aggregate debit items as shown in Formula for Reserve Requirements (rule 15c3-3), prepared as of the date of the net capital computation, including both brokers' or dealers' and consolidated subsidiaries' debits.
23. Minimum dollar net capital requirement of reporting broker or dealer and minimum net capital requirement of subsidiaries.
24. Net capital requirement (greater of line 22 or line 23).
25. Excess net capital (total net capital less line 24).
26. Percentage of net capital to aggregate debits.
27. Percentage of net capital after anticipated capital withdrawals to aggregate debits.
28. Net capital in excess of the greater of 5 percent of aggregate debit items or $120,000.
Source: SEC FOCUS Report.
Appendix II
Comments From the Securities Investor Protection Corporation

SECURITIES INVESTOR PROTECTION CORPORATION
805 Fifteenth Street, N.W., Suite 800
Washington, D.C. 20005-2207
(202) 371-8300

Office of the General Counsel
June 22, 1992
Richard L. Fogel
Assistant Comptroller General
General Accounting Office
Washington, D.C. 20548
Dear Mr. Fogel:
Now on p. 2.
We are pleased to have this opportunity to offer the comments of the Securities Investor Protection Corporation ("SIPC") on the GAO Draft Report on SIPC. As the Executive Summary of your report notes, the Congressional committees asked GAO to report on three principal issues: "1) the exposure and adequacy of the SIPC fund, 2) the effectiveness of SIPC's liquidation oversight efforts, and 3) the disclosure of SIPC protections to customers." Draft Report Executive Summary ("ES") at 1. We are pleased to note that in each of these areas the GAO report gives SIPC a vote of confidence. Indeed, it follows from the report's principal findings that the program of investor protection enacted in the Securities Investor Protection Act of 1970 ("SIPA") has been a major success.
SIPC's role is viewed in the proper perspective as an element in a statutorily mandated program to promote investor confidence by upgrading broker-dealer financial responsibility and by providing protection to customers of failed broker-dealers. The report reflects that the Securities and Exchange Commission ("SEC"), under authority granted in SIPA, has devised, promulgated, and, together with the self-regulatory organizations, enforced effective financial responsibility rules for SIPC members which have sharply curtailed the need for investor protection through SIPC-financed customer protection proceedings.
Set forth below are our comments on the matters covered by the report, including our responses to some comments with which we disagree. We have submitted a separate memorandum alerting GAO to a few technical problems we find in the Draft Report. SIPC will not in this letter offer comments on those parts of the report which deal with the SEC or the SEC's role in the SIPC program.
Appendix II Comments From the Securities Investor Protection Corporation
Richard L. Fogel Page 2 June 22, 1992
Now on p. 19.
Now on p. 44. Now on p. 44. Now on p. 5. Now on p. 42.
The proper measure of SIPC's exposure and the adequacy of SIPC's resources appear to be the main reasons that GAO was asked to do the study. We note that you have "determined that no quantifiable measure exists to judge the exposure of the SIPC fund and the adequacy of its reserves...." (Draft Report at 1.16) and that "[i]n assessing the reasonableness of SIPC's financial plans, [you] concluded that there is no methodology that SIPC could follow which would provide a completely reliable estimate of the amount of money SIPC might need in the future.... SIPC's estimate, therefore, must be judgmental." Draft Report at 3.9. In light of your determinations, with which we fully concur, we are gratified that GAO has stated its belief "that SIPC's strategy represents a responsible approach for planning for future needs" (Draft Report at 3.9); that "SIPC officials have acted responsibly in adopting a financial plan that would increase Fund reserves to $1 billion by 1997" (ES at 9); and that "based strictly on the historical record, SIPC resources would seem to be adequate." Draft Report at 3.5.
The report suggests that the principal threat to the continued effectiveness of the SIPC program is the possibility that SIPC and the regulators might become complacent. As a theoretical statement, we cannot disagree, but in fact the report itself shows no reason to believe that either SIPC or the regulators are becoming complacent. Indeed, the recent decisions of the SIPC Board with regard to the fund size and the line of credit demonstrate just the opposite.
Now on p. 51.
We concur with the report's position that "a cash fund is superior to private insurance, letters of credit, or lines of credit in terms of providing a basic level of customer protection and public confidence." Draft Report at 3.23. We believe lines of credit, however, are a useful supplement to a cash fund.
Now on p. 22. Now on p. 38.
The report states that GAO does "not believe that SIPC needs the authority to individually examine its members" (Draft Report at 2.1), and concludes that providing SIPC with investigative or regulatory authority is not warranted. Draft Report at 2.34. This fully accords with SIPC's long standing views on this subject. The reasons given in the report are the same reasons SIPC has articulated in the past. One additional reason, not mentioned in the report, is our perception that, by divorcing the regulatory function from the customer protection function, the authority and responsibility of the regulator and the protector are both enhanced and clarified.
We, of course, are pleased the report concludes that "SIPC's role in providing back-up protection for customers' cash and securities has worked well."
Now on p. 4. Now on p. 54. Now on p. 54.
(ES at 7); that "the assistance SIPC provides trustees during liquidations has received high marks..." (Draft Report at 4.1); and that the GAO has "no reason to question the quality of the assistance SIPC provides after a liquidation begins...." Draft Report at 4.2.
Now on p. 65.
The report concludes that there are no substantial gaps in disclosure to customers about the SIPC program, noting that "[f]or the most part this disclosure seems adequate..." Draft Report at 5.1. The report does recommend that disclosure as to the time limits for filing claims in a SIPC liquidation and the time limits on SIPC's jurisdiction to initiate liquidation proceedings be enhanced. We note that the published notices of liquidation proceedings set forth the time limits for customers to file claims and that the notice, claim forms, and instructions mailed to customers set forth these time limits in large, bold-faced type. Nevertheless, we recognize the merit in GAO's comments, and we will revise the question and answer booklet with a view toward implementing your suggestions as soon as feasible.
The report expresses concern that SIPC does not take adequate steps to gather operational information on firms which may be liquidated prior to the initiation of a liquidation proceeding. The evidence cited in the report for the need for this information is anecdotal. There is no indication whatsoever that the problems discussed were more than an inconvenience or that these matters delayed the processing or satisfaction of customer claims.
SIPC does, of course, receive information on members in financial difficulty from the regulators and frequently requests as much information as it can in order to make its independent determination of the need for SIPC protection and to select a trustee and counsel with adequate experience and resources to meet the needs of the undertaking. We will, however, thoroughly review this matter with the SEC and the SROs.
SIPC has pursued a policy of integrating the most cost effective automated data processing solutions into the assistance and support SIPC provides to trustees appointed under SIPA. SIPC has achieved some important successes, including the conception, definition, and creation of the first and only automated liquidation system designed for use in the liquidation of broker-dealers under SIPA. The GAO Report, however, expresses concerns about SIPC's efforts and preparedness in this area. For the reasons set forth below, we believe those
concerns are based on erroneous assumptions and a misunderstanding of the liquidation process.
The GAO Report states, "[w]e believe it is critical that SIPC review its automation practices and develop policies which ensure that trustees acquire capable automated liquidation systems on a timely basis." Draft Report at 4.12. The report defines automated liquidation systems as "computer software programs that help trustees organize liquidations and pay customer claims promptly." Draft Report at 4.12. The report reflects concern that "SIPC has not assessed current practices to ensure that the trustees of large liquidations acquire automated systems." Draft Report at 4.15. The report states that in cases too large for SIPC's own system "SIPC relies primarily on one supplier that has developed a system SIPC officials believe exceeds the capabilities of others on the market." Draft Report at 4.14. The report observes that SIPC's reliance on "one supplier" incurs the risk that "the system may be unavailable in an emergency or may cost more than other competitive systems." Draft Report at 4.15. The difficulty with the Draft Report's position is that, except for what SIPC has developed, there is no off-the-shelf "automated liquidation system" for stockbroker liquidations and there is, therefore, no "supplier" of such systems.
Now on pp. 60-61. Now on p. 60. Now on pp. 60-61. Now on p. 60. Now on p. 60.
At the inception of a liquidation proceeding, the SIPC staff reviews the automated data processing capabilities of the debtor, with a view toward determining whether to use SIPC's system alone; SIPC's system in conjunction with the debtor's existing data processing capability; or the debtor's capability, modified for the needs of the liquidation. This determination and any modifications necessary can be accomplished without delay to the liquidation proceeding. The trustee and SIPC select a public accounting firm which is best qualified to supply the accounting services required for that liquidation, which includes the automated data processing expertise needed for the unique requirements of a SIPA liquidation. In SIPC's view, all major public accounting firms are capable, in terms of experience and data processing expertise, of supplying those services. SIPC, and the trustee, engage the public accounting firm judged the best positioned to meet requirements of that liquidation at the lowest cost. The firm selected may well be one with a track record in SIPA proceedings and one which has developed relevant experience and expertise. SIPC's "automated liquidation system" was planned to interface with a debtor's own computer system. As stated by Charles Cash of KPMG Peat Marwick, SIPC's consultant on its data processing requirements, "The system was not to be a replacement for the broker dealer's own back office accounting system. We designed the system to support the liquidation process with enough back office functionality to handle routine needs. For larger, more complex liquidations, the broker dealer's existing system can be used to meet back office needs." (Exhibit A, Cash Ltr., June 15, 1992, at 2. Hereinafter "the Cash Letter.") (Emphasis in original.)
SIPC's software package can 1) generate a broad variety of reports needed by a trustee and SIPC, 2) provide an automated capability of claims
matching and sorting according to the results of the match, and 3) assist in the satisfaction of customer claims. It is a complex and highly sophisticated system. (Attached as Exhibit B is a copy of the table of contents of the user's manual.) The SIPC software package provides the only existing automated capability for matching customer claims with the debtor's records and reporting on the results of the match.
SIPC's system was designed for cases of the magnitude most frequently encountered by SIPC although it can now be used in cases larger than originally contemplated.1/ It is employed in all cases where its use will be most efficient and cost-effective, for example, in cases in which the debtor broker-dealer does not have an existing, staffed computer system which can be adapted to meet the special requirements of a SIPC liquidation.
The SIPC software package is continuously reviewed and critiqued. You can be confident that we will again review our automation system with all of the GAO comments in mind. If the addition of a capability is considered feasible and cost-effective, it will be added. "Each new user requirement and new technological development is reviewed in terms of other alternatives available, cost and potential use on other liquidations." Cash Letter at 3. See also Cash Letter at 4.
We believe it is important to call attention to that part of the report which correctly notes the significant differences between the obligations of SIPC member broker-dealers to their customers and the obligations of banks to their depositors. An example would be the report's conclusion that the "risks to the taxpayer inherent in SIPC are thus less than those associated with the deposit insurance system." ES at 3. Broker-dealers hold securities and cash entrusted to them by investors and are prohibited, except in a very limited manner, from using the securities or cash in their own business. Banks must use their resources, including insured deposits, to generate the income necessary for profits, operating expenses, and interest to depositors.
1/ "While it was not initially capable of handling 50,000 to 60,000 customer claims, this is not the case today. If we were to add high performance workstations and faster printing devices to the network, the system could handle substantially more than 50,000 to 60,000 customer claims. The advances in microcomputer technology, networks, high performance systems, and high speed printers make it almost impossible to place a practical limit on its ability to handle a large number of claims. These additions are easily added on an as needed basis and only involve a nominal cost. To suggest that the system will only 'handle the small number of claims that SIPC trustees typically liquidate' does not reflect its true capability." Cash Letter at 3. (Emphasis added.)
In the case of SIPC members, then, the risk of loss and the possibility of gain through appreciation or loss in value of securities is that of the investor. Banks, however, are obligated to depositors for principal and interest on deposits, but the risk of nonperformance of the bank's portfolio of assets is the bank's. Thus, the SIPC member broker-dealer's financial condition is not threatened by the vagaries of the economy in the same manner as is a bank's.
The report's descriptions of and conclusions as to SIPC depict a successful program. The costs of SIPC's operations to the taxpayer have been zero. We believe we have taken all reasonable steps to ensure that continues. SIPC has met all its obligations in an environment of major changes in the industry, has absorbed losses of customer property resulting from massive frauds and, in short, has been equal to all the challenges it has faced. Although the SIPC fund is at its highest level in history, the report correctly notes the assessment burden has been low. SIPC has taken responsible measures to ensure the financial strength required to continue to meet its obligations and, as the report notes, assessments should remain low. It would seem fair to conclude that SIPC has achieved its objectives in a cost-effective manner and the success of the undertaking makes it a fine example of industry and government cooperation.

Very truly yours,
James G. Stearns
Chairman

JGS:ved
Enclosures
Appendix III
Comments From the Securities and Exchange Commission

UNITED STATES
SECURITIES AND EXCHANGE COMMISSION
Washington, D.C. 20549

Division of Market Regulation
July 21, 1992
Mr. Richard L. Fogel
Assistant Comptroller General
General Government Division
United States General Accounting Office
Washington, D.C. 20548

Dear Mr. Fogel:
I am writing in response to your letter of June 1, 1992, to Chairman Breeden requesting our comments on the General Accounting Office's ("GAO's") draft report entitled Securities Investor Protection (the "Report").
We concur with the Report's central conclusion that the Securities Investor Protection Corporation ("SIPC") has been successful in protecting customers against losses. We are pleased to note that the Report also concludes that the Securities and Exchange Commission ("Commission") and the self-regulatory organizations ("SROs") have effectively enforced their financial responsibility rules and thus have minimized losses to SIPC. The Commission's promulgation and enforcement of Rules 15c3-1 and 15c3-3 under the Securities Exchange Act of 1934 (the "Act"), 17 CFR §§ 240.15c3-1 and 15c3-3, are noted for their particular importance in preventing such losses. With regard to protection of the investing public, the Report accurately relates that SIPC serves in a backup role to the regulatory activities of the Commission and the SROs. Additionally, the Report correctly describes the means by which the Commission and the SROs ensure that broker-dealers comply with their rules.
The Commission and SROs monitor compliance by, among other things: conducting routine examinations of broker-dealers; requiring firms whose capital falls below early-warning levels to notify the Commission and the SROs; requiring broker-dealers to prepare and file financial reports on a monthly and quarterly basis; and requiring firms to undergo annual audits by
independent pu b l i c a c c o u nt a n t s . To s u mmari ze , t h e Re p ort d escri be s a s u c c ess fu l p r o g r a m o f i n v e s t o r p r o t e c t i o n . SI P C ' s f i n anc i a l r e s o u r c e s a r e a t a n a l l - t i m e h i g h , n o t a x p a ye r f u n d s h ave ever b e e n u s ed , a n d S I P C ' s f u n d i n g s t r a t e g y r e p r e s e nt s a r esponsi bl e a p p r o ac h f o r d e a li n g w i t h t he S I P C f u n d ' s (t he M Fund's" ) p o t e n t i a l ex p o s u r e .
Page 92
GAO/GGD-92-109 Securities Investor Protection
Appendix III Comments From the Securities and Exchange Commission
Mr. Richard L. Fogel
Page 2
In the Report, the GAO offers five recommendations regarding the Commission's oversight responsibilities with respect to SIPC. In response to GAO's recommendations, we have reassessed the adequacy of earlier initiatives which sought to address the same concerns expressed in the Report.
See p. 53.
We agree that the adequacy of the SIPC fund should be reviewed periodically. SIPC and the Commission have done so, and we will continue to do so. During the last 7 years, SIPC has commissioned two task forces and Deloitte & Touche to review the adequacy of the SIPC fund and funding arrangements. The Commission staff has participated on these task forces. We have discussed the adequacy of the SIPC fund with the SIPC Board of Directors and SIPC staff. The adequacy of the SIPC fund is a matter of concern to us at all times.
The task forces were composed of representatives from the securities industry, SIPC and the government.
The Deloitte & Touche study used a very conservative "worst case analysis" which we believe substantially overstates the SIPC advances likely required in liquidating a large broker-dealer. The Report suggests that massive fraud at a major firm or the simultaneous failures of several of the largest broker-dealers could result in losses to SIPC of over $1 billion. Fraud on such an enormous scale, while theoretically possible, is highly unlikely. In small firms, fraud has resulted in misappropriation of a significant proportion of customer assets held by a broker-dealer. However, the proportion of customer assets misappropriated in smaller firms cannot be used to reasonably estimate possible losses in larger firms. Larger firms have active internal surveillance and compliance departments that would most likely uncover such fraud well before it could jeopardize large amounts of customer assets. In addition, the Commission and the SROs have significantly more frequent inspection schedules and reporting requirements for larger firms as a means of preventing such fraudulent activity. (continued...)
See p. 62.
Specifically, the Report recommends that the regulators provide SIPC with the following information in advance of liquidation: (1) a list of branch offices; (2) the location of leases for branch offices; (3) the location of equipment leases and other executory contracts; (4) a list of banks or financial institutions with funds or securities on deposit; (5) location of vaults and other secure locations; (6) location and description of computer databases and services used; (7) location of mail drops; (8) a chart of interlocking corporate relationships between the broker-dealer and its affiliates; (9) a list of key personnel; and (10) an accurate idea of the number of active customer accounts. Currently, the Commission's regulations require broker-dealers to prepare and preserve in an accessible place a considerable amount of information relating to their business. When a SIPC member's financial condition may warrant SIPC intervention, the Commission and SRO staffs immediately begin to collect data and documentation that could be used in liquidation proceedings. This information is shared with SIPC as soon as it is obtained.
The Report implies that the satisfaction of customer claims may be delayed by a lack of readily accessible documentation. Indeed, reluctant, uncooperative owners or managers — who may have been involved in fraud or wrongdoing — are unlikely to provide
(...continued) Also, given the operation of the Commission's and SROs' regulatory program, simultaneous failures of several of the largest broker-dealers requiring SIPC intervention are highly unlikely.
Finally, because of the strong regulatory program, the Commission and the SROs have been able to wind down the operations of many broker-dealers experiencing difficulty without the need for SIPC intervention. Large broker-dealers such as Drexel Burnham Lambert, Inc. and Thomson McKinnon Securities Inc. have been wound down in this fashion. The information that a broker-dealer must maintain is listed in Rules 17a-3 and 17a-4 under the Act, 17 CFR §§ 240.17a-3 and 17a-4.
important information. The Report, however, does not identify any cases in which the absence of the above information actually impeded the satisfaction of customer claims. Nevertheless, we will review this recommendation with SIPC and the SROs in an effort to improve the information gathering and distribution process.
See p. 62.
The Report expresses concern that SIPC's automation practices may be inadequate, particularly with regard to the system's ability to handle liquidation of a major broker-dealer. The Report suggests that the Commission, in its oversight capacity, should identify and correct shortcomings with the current SIPC liquidation system, determine SIPC's automation needs with regard to liquidation of firms of various sizes, and ensure that SIPC trustees promptly acquire efficient automated liquidation systems.
We have previously expressed these same concerns to SIPC, and SIPC, in our view, has adequately responded. In 1985, we recommended to SIPC that it expedite automation of its liquidation process. SIPC retained KPMG Peat Marwick as consultants to develop an automated liquidation system. Both the Commission and SIPC staffs anticipated that automating the liquidation process would provide greater uniformity in liquidation proceedings and expedite satisfaction of customer claims.
KPMG Peat Marwick designed SIPC's automated liquidation system to interface with broker-dealers' existing computer systems. The system was designed to allow SIPC quickly to: match and sort customer claims for, and a liquidating broker-dealer's records of, cash and securities; generate reports that the SIPC trustee is required to complete; and otherwise meet information processing requirements in broker-dealer liquidations. KPMG Peat Marwick also designed the system to be user-friendly and not to require SIPC to maintain a large staff solely to operate the computers.
Initially, SIPC and KPMG Peat Marwick decided that the system should be able to process the types of cases most frequently encountered by SIPC — those cases with approximately 10,000 to 15,000 customer claims. The software was at first operable only on a single IBM personal computer. The system has
been progressively upgraded and now may be incorporated into a network with multiple workstations. Although the Report indicates that SIPC's automation system is currently capable only of liquidating a firm with fewer than 60,000 customers, in a letter to SIPC from a representative of KPMG Peat Marwick, the representative stated that it is almost impossible to place a practical numerical limit on the system's ability to handle claims. The only practical limitation on the automation system relates to computer hardware. If a large broker-dealer must be liquidated, SIPC can promptly acquire or rent the hardware necessary to complete the liquidation, or it can use the broker-dealer's existing computer systems.
Notwithstanding our present assessment of SIPC's automation program, we have resolved to consider this matter further. The Commission is currently undertaking an inspection of SIPC's operations. This inspection should be completed during the last quarter of 1992. SIPC's automation system is one of the areas that will be examined by the Commission staff. Upon completion of the inspection, we will take such action as appears appropriate.
See p. 62.
The Commission is engaged in constant oversight of SIPC's activities. Commission staff members hold quarterly meetings with SIPC staff members to discuss matters that concern or require the attention of the Commission. In the course of day-to-day operations, the two staffs communicate regularly by telephone. The Director of the Commission's Division of Market Regulation attends the meetings of SIPC's Board of Directors. Bylaws passed by SIPC's Board of Directors must be submitted to the Commission before they take effect. SIPC's rules must be approved by the Commission. The Commission receives monthly reports from SIPC concerning the status of the Fund and current
In fact, a representative of KPMG Peat Marwick has stated that there is no practical limit on the number of claims that can be processed under the existing system. See Letter from James G. Stearns to Richard L. Fogel (June 22, 1992) (KPMG Peat Marwick's letter to SIPC responding to the GAO's draft report is attached as an appendix to Mr. Stearns' letter). The Division of Market Regulation is responsible for, among other things, regulating the activities of broker-dealers.
liquidations. SIPC submits after the end of each calendar year an annual report to the Commission that includes independently audited financial statements. This report is forwarded to Congress with such comment as the Commission deems appropriate. Personnel at the Commission's regional offices assist as needed in SIPC liquidations.
Members of the Commission's staff monitor SIPC operations in other ways. In 1991, an Associate Director of the Division of Market Regulation served on a SIPC-appointed task force formed to analyze and make recommendations on SIPC assessments. This committee recommended, and SIPC's Board of Directors implemented, a program under which SIPC intends to build the Fund to $1 billion. This year, Commission staff members are participating in a subcommittee of the Market Transactions Advisory Committee that will make recommendations regarding procedures to be followed in the event that a firm registered as both a broker-dealer and a futures commission merchant must be liquidated. In addition, the Commission performed an inspection of SIPC's operations in 1985. As previously mentioned, another inspection is underway. As noted in the Report, however, we have not established a periodic inspection schedule designating fixed dates on which the Commission is to inspect SIPC's operations. We agree with the recommendation that such a schedule should be established, and we will inspect SIPC according to a set schedule in the future. We will determine the appropriate timetable after evaluating the results of our current inspection.
See p. 73.
The Report notes that under the Securities Investor Protection Act of 1970 ("SIPA") and SIPC's bylaws, SIPC members
The Market Transactions Advisory Committee was formed pursuant to the Market Reform Act of 1990, Pub. L. No. 101-432, 104 Stat. 963 (1990).
SIPC members include all registered broker-dealers other than (1) those whose principal business, in the determination of SIPC, is conducted outside the United States; (2) those whose business consists exclusively of distribution of shares of registered open-end investment companies or unit investment (continued...)
must inform customers of their membership in SIPC, while non-SIPC firms that are registered with the Commission need not disclose their non-membership in SIPC. The Report recommends that the Commission draft a rule requiring registered investment advisers and other Commission-registered "intermediaries" that have custody of client funds to disclose to clients that they are not SIPC members.
In the GAO's view, the rationale for such a requirement is two-fold. First, the securities activities of these non-SIPC intermediaries subject their customers to the same risks of loss or misappropriation as do SIPC members. Second, for advisers and other non-SIPC intermediaries that are affiliated or associated with SIPC broker-dealers, there is the additional risk that investors will be confused as to whether or not funds held by the adviser or intermediary are protected by SIPC. According to the Report, if these non-member firms were required to disclose that they were not SIPC members, investors would be better informed about the scope of SIPC's coverage and about its relevance to their investment decisions. The required non-membership disclosure would also diminish the potential for confusion arising from affiliations or associations between SIPC and non-SIPC firms.
(...continued) trusts, the sale of variable annuities, the business of insurance, or the business of rendering investment advisory services to one or more registered investment companies or insurance company separate accounts; and (3) broker-dealers whose securities business is limited to U.S. Government securities and who are registered with the Commission under a provision of law which does not require SIPC membership. At several places in the Report, GAO suggests that investment advisers are "intermediaries" because they sell securities products to customers. See, e.g., Report at 5.9. This is an incorrect statement. Investment advisers do not sell securities products to their customers. The Report is accurate, however, when it states that investment advisory firms may "manage discretionary or non-discretionary accounts... and have temporary 'custody' of customer property...." Report at 5.15, n.7.
See p. 72.
Regarding intermediaries registered as investment advisers, the Commission's Division of Investment Management has communicated to us that it does not believe that it is necessary or appropriate to require investment advisers with custody of client funds to disclose their non-membership in SIPC. Under the SIPA, investment advisers are excluded from SIPC membership. Consequently, there is no more reason to require investment advisers with custody of client funds or securities to disclose their non-SIPC status than there is reason to require investment advisers to disclose that they are not members of the Federal Deposit Insurance Corporation.
The Division of Investment Management commented that, as the Report recognizes, other financial firms outside the Commission's jurisdiction also sell securities and securities-related products without being required to disclose their SIPC-membership status: banks and futures commission merchants. To require registered investment advisers with custody of client funds to disclose their non-membership in SIPC thus runs the risk of creating the false impression that funds and securities that they manage or hold are afforded less protection than funds and securities held by financial firms outside the Commission's jurisdiction.
Regarding intermediaries registered as broker-dealers, the Division of Market Regulation is considering recommending to the Commission a rule that addresses some of the issues raised by GAO. The rule under consideration would require disclosure in those instances where customer confusion concerning SIPC protection may result (e.g., when a non-SIPC affiliate has a similar name, and the same personnel and offices, as a SIPC member). The rule may also address disclosure requirements for non-SIPC, registered broker-dealers.
The Division of Investment Management is responsible for, among other things, regulating the activities of registered investment advisers.
See p. 72.
The Division of Investment Management believes there is some merit in GAO's contention that there is a possibility of investor confusion concerning the availability of SIPC protection for registered investment advisers that are affiliated with SIPC broker-dealers. As discussed below, the Division of Market Regulation is considering rulemaking that addresses this issue, and the Division of Investment Management will assist that Division.
We appreciate the opportunity to comment on the draft report. We would be happy to meet with the GAO staff at your convenience to discuss our comments further. If you have any questions regarding this letter, please feel free to telephone me at (202) 272-3000, or if you have any questions regarding registered investment advisers, please contact Gene Gohlke, Associate Director, Division of Investment Management, at (202) 272-2043.
Sincerely,

William H. Heyman
Director
Appendix IV
Major Contributors to This Report
General Government Division, Washington, D.C.: Stephen C. Swaim, Assistant Director; Teresa Anderson, Evaluator-In-Charge; Wesley Phillips, Evaluator; Mark Ulanowicz, Evaluator

Office of the General Counsel, Washington, D.C.: Rosemary Healey, Attorney-Advisor
Ordering Information

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20877

Orders may also be placed by calling (202) 275-6241.
United States General Accounting Office
Washington, D.C. 20548

Official Business
Penalty for Private Use $300

First-Class Mail
Postage & Fees Paid
GAO
Permit No. G100
The problem asks us to find the maximum value of each subarray of size d and then print the minimum of these maxima. The main difficulty is keeping track of the maximum element of each subarray of size d, and a deque solves this for us easily.
For each query, we get an integer d, and we maintain a deque of indices covering a window of size d, with the index of the window's maximum element at the front. We slide this window over the array.
Suppose we are at index i. Before adding the i-th element, we first pop from the front all indices that no longer fall within the current window of size d. Then we pop from the back all elements that are not greater than the i-th element and push index i, so that the index of the window's maximum stays at the front of the deque.
We maintain the answer by taking the minimum of the current answer and the maximum of each complete window for the given d, and print it once all the subarrays have been processed.
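The same technique can be sketched as a small self-contained function (an illustration only, with a function name of our choosing; the judged C++ solution follows below; plain JS arrays stand in for a deque here):

```javascript
// Sliding-window maximum via a deque of indices.
// Returns the minimum over all window maxima, as the problem asks.
function minOfWindowMaxima(a, d) {
  const dq = [];                       // indices; a[dq[0]] is the window max
  let best = Infinity;
  for (let i = 0; i < a.length; i++) {
    while (dq.length && dq[0] <= i - d) dq.shift();              // left the window
    while (dq.length && a[dq[dq.length - 1]] <= a[i]) dq.pop();  // dominated values
    dq.push(i);
    if (i >= d - 1) best = Math.min(best, a[dq[0]]);             // full window ends at i
  }
  return best;
}

// minOfWindowMaxima([3, 1, 4, 1, 5], 3) → 4   (window maxima are 4, 4, 5)
```

Each index is pushed and popped at most once, so a query runs in linear time in the deque operations.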
C++ code:
#include <cstdio>
#include <cassert>
#include <deque>
using namespace std;

const int N = 1e5 + 100;
int a[N];
deque<int> dq;

int main() {
    int n, cq;
    scanf(" %d %d", &n, &cq);
    for (int i = 0; i < n; i++) {
        scanf(" %d", a + i);
    }
    for (int it = 0; it < cq; it++) {
        int d;
        scanf(" %d", &d);
        dq.clear();
        int best = 1 << 30;  // effectively +infinity
        for (int i = 0; i < n; i++) {
            // drop indices that fell out of the window of size d
            while (!dq.empty() && dq.front() <= i - d) dq.pop_front();
            // drop values not greater than a[i]; the max index stays at the front
            while (!dq.empty() && a[dq.back()] <= a[i]) dq.pop_back();
            dq.push_back(i);
            if (i >= d - 1) {
                assert(dq.size());
                if (best > a[dq.front()]) {
                    best = a[dq.front()];
                }
            }
        }
        printf("%d\n", best);
    }
    return 0;
}
Over the past few years we've been building a suite of developer-focused presentation tools, like...
import React, { useState } from 'react'; function Example() { const [count, setCount] = useState(0); return ( <div> <p>You clicked {count} times</p> <button onClick={() => setCount(count + 1)}> Click me </button> </div> ); }
1.PRESENTING CODE
Built-in syntax highlighting with support for line-highlights.
Walk through your code using multiple highlight steps.
Powerful highlighting features packed into an intuitive UI.
1.PRESENTING CODE
Customize anything with our
real-time CSS editor.
2.CUSTOM STYLES
Style specific elements or slides by giving them class names.
2.CUSTOM STYLES
Go beyond the editing options with the HTML editor.
3.CUSTOM HTML
4.IFRAMES
Embed code editors, 3D models, spreadsheets, and more with iframes.
For example, here's a live code editor embedded from CodePen.
5.DEFINE API
{ "title": "My Deck", "width": 1024, "height": 576, "transition": "slide", "slides": [ { "notes": "Top secret speaker notes", "blocks": [ { "type": "text", "value": "Hello world" }, { "type": "iframe", "value": "" } ] } ] }
Create prefilled decks with the Define API. Form POST this
Built-in \( \LaTeX \) typesetting makes it dead simple to show math formulas.
6.LaTeX
f(x) = \int_{-\infty}^\infty \hat f(\xi)\,e^{2 \pi i \xi x} \,d\xi
7.EXPORTS
HTML
CSS
JS
Slides for Developers is available today. Jump right into the editor and try it out.
You'll need to sign up or log in | https://slides.com/news/developers/embed?byline=hidden&share=hidden | CC-MAIN-2021-21 | refinedweb | 232 | 59.3 |
2016-09), Eric Faust (EFT), Jacob Groundwater (JGR), Adam Klein (AK)
ES Modules Lifecycle
(Bradley Farias)
Slides: docs.google.com/presentation/d/1aq_QjBUQTovj9aQZQrVzS7l1aiOs3ZNlk7wgNTUEMy0/edit#slide=id.g16ab11d101_51_46
BFS: We will be talking about host-dependent behavior: The Node module loading hook is specified in a way that meets ES spec requirements. There is a global and local cache <explain details>
(from slide)
- Resolve (as absolute URL) => Fetch => Parse
- Make Module Record
- Place in Global Cache using Absolute URL*
- Errors remove records from Global Cache*
- Traversal of import declarations recursively
- Ensure step 2 has been performed on the dependency
- Place dependency in Local Cache using Import Specifier*
- Link dependency to module
- Errors prevent any evaluation
- Evaluate in post order traversal
- Errors prevent further evaluation
CP/YK/AWB: (There are items here that are strictly host-specific)
BFS: Necessary for Node
DD: for example the local cache is not required by the spec; we don't have one in browsers
BFS: agreed.
DH: Inherently, dynamic module systems that would want to interact with ESM need a late linking mechanism. Another option would be to delay linking for everything. I would be open to this option. It might not preclude reasonable implementation optimizations.
YK: And modules haven't shipped anyway
AWB: Appears to be an "interpretation" of the requirements, but we need to understand why - eg. local caching? Why required?
MR: We have caching; we need it
BFS: [~Lifecycle Errors slide]
// a (entry) import {fromB} from 'b'; import {fromC} from 'c';
// b export let fromB = 'b';
// c import {fromB} from 'b'; // FIXME import {doesntExist} from 'b'; export let fromC = 'c'; throw Error();
BFS: This causes a link error; no evaluation occurs. B exports to A, C exports to B, and we fail. To implement this in Node, we store things in the global cache, and remove when there are errors resulting.
docs.google.com/presentation/d/1aq_QjBUQTovj9aQZQrVzS7l1aiOs3ZNlk7wgNTUEMy0/edit#slide=id.g16ab11d101_51_0
WH: What do you mean by placing b,c into a cache?
BE: It's not a cache; you can't miss. Sounds like the cache would be more accurately be called a table, since you insert things in for a real effect.
AWB: The spec text states when the link error occurs, it all goes away
WH: All of A, B, C disappear?
(confirmed)
DH: I didn't think there was anything in ES2015 about error states, global semantics of the registry
AWB: When you reach an error, it throws, and it unwinds to the old state
DD: Actually, the idempotency requirement is strong. Firefox found a bug in the initial HTML/Modules integration where we violated that requirement, and it caused us to make changes to unwind the state
DH: The slides discuss the idempotency requirements?
(confirmed)
BFS: (remove FIXME)
AWB: The top level was aborted before reaching the end, and result was...?
BFS: One function of the cache is to make sure the module evaluation only occurs for the first time the module was imported, and not again on subsequent imports
BFS: [~Lifecycle parallel loading slide] Diamond imports. We actually plan to evaluate in the order d, b, c, a
AK: Why are the linking and evaluation orders different?
BFS: The linking and initializing bindings steps are logically the same
YK: Spec bug? If hoistable decl and linkage occur in wrong order?
WH: Can anyone produce a concrete example of where this matters?
BFS: This...
(Note: this is far in the future of the deck)
BFS: [Timing example - hoistable] After linking, functions are available to be called [since functions are hoisted]. If we get some things wrong, foo could be undefined when we try to call it.
AWB: This can't happen.
YK: Is this related to cycles?
BFS: Most of the problems are related to cycles, or things that cross them
YK: If you have a cycle, make sure the hoistable decls are evaluated ...?
BE: How observable?
BFS: In pure ESM, you never run code before it's completely linked, but when interacting with commonjs, we have to be able to execute some code earlier.
DH: ES2015 does not cope with dynamic modules.
BFS: If we have any interop, the distinctions between
BFS: let's go back instead of skipping ahead a bunch of slides
AWB: Need to understand differences between static and dynamic per spec.
BE/DH: need to address interop, dynamic code that can execute before link
BFS: CommonJS/ESM interop: conceptual distinctions are fundamentally at odds, material distinctions are at odds due to spec/implementation details
AWB: Conceptually, ESM is based around sharing bindings, whereas CJS is based on sharing values
- Conceptual intent or design goals that are fundamentally at odds
  - based on notes, records, design, etc.
- Material specification or implementation mandates that are at odds
  - based on things in reality
WH: Why are conceptual and material required to be at odds by definition?
AWB: clarify? "conceptually" ES modules are based around sharing of "bindings" vs commonjs sharing of "values"
BFS: There's more than that in terms of conceptual differences
YK: We were intending for the loader to fill that gap
BFS: Conceptual difference: Mode detection: The spec expects things to be declared out of band; this could be a grammar change
BFS: Material: Some cases are ambiguous. This isn't the most important issue.
BFS: Cache data structures: We will have module maps analogous to browsers
DD: This is what enforces the idempotency
BE: Can you unload from this?
BFS: Once a given module record imports some string, it's permanent: you cannot remove it.
AWB: It depends; on linkage errors you could.
BFS: You'd expect to completely recreate your dep graph.
AWB: Once a module is instantiated and linked into the system, it's in. If it's not past the point of linking, then no one has seen it.
YK: the discussion on es-discuss could've gotten it wrong?
DD: There was a conceptual goal of idempotency
DH: A thing in the spec, but not especially understood.
DH: What is the consequence of things disappearing on error? We want to avoid surprises; you need complete control in your program, and shouldn't always be automatically forced into a reload policy. If the spec says reload, then that's an issue.
DD: The spec doesn't lead to reloads; it leads to permanent not-reloads.
DH: Doesn't deleting from the cache cause a reload?
DD: the semantics say: once you get an error, you must cache that error forever
DH: Sounds like we should add more features for control here
YK: Is the issue that Node needs things to be able to go away?
(fell behind)
BFS: example: a mocking library that wants to replace things in cache
JSL: The default behavior in Node is to get the same behavior back once it's initialized
BBR: We want to avoid many cases of getting two copies of modules, though it's not always possible, e.g., case-insensitive file system and different case names.
AWB: if you have file path: import a from path, you retrieve, link that in; subsequent import of the same string returns the same thing.
DH: clarify: a post resolve name?
- Can't specify anything about strings that appear in the
(I didn't hear the end)
AK: Unfortunately, the spec does talk about those strings
AWB: What the spec says is that if two strings are pointing to the same thing, it should be the same module
YK: We should fix the spec if needed
AWB: You want two identical paths to produce different modules?
JSL: eg. mocks, instrumented modules
MRS: Some development tools explicitly control the cache, e.g., blow it away
AWB: things like repl loops are outside of the spec. But if you have two imports with identical strings, then under what circumstances would they produce different things?
DD: Discussion reiterates the following points a few times:
- Per spec, the only requirement is that if A imports "x" multiple times, it return the same module. There is no requirement on A importing "x" vs. B importing "x".
- However, some people believe that the spec should be talking about the "normalized" or "absolute" form, not about the literal string ("x" above) that appears in the import statement.
AK: In the specification, the third bullet in 15.2.1.17 has the idempotency requirement. There is no normalization
This operation must be idempotent if it completes normally. Each time it is called with a specific referencingModule, specifier pair as arguments it must return the same Module Record instance.
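That constraint, keyed on the (referencingModule, specifier) pair rather than on any normalized name, can be illustrated with a small sketch (hypothetical helper names, not spec text):

```javascript
// The host must return the same Module Record for a given
// (referencingModule, specifier) pair, every time. Nothing here
// requires normalization: two *different* referencing modules may
// resolve the same specifier string to different records.
const resolutions = new WeakMap(); // referencingModule -> Map(specifier -> record)

function hostResolveImportedModule(referencingModule, specifier, doResolve) {
  let perModule = resolutions.get(referencingModule);
  if (!perModule) resolutions.set(referencingModule, (perModule = new Map()));
  if (!perModule.has(specifier)) {
    perModule.set(specifier, doResolve(referencingModule, specifier));
  }
  return perModule.get(specifier); // same pair -> same record, forever
}
```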
(hostility towards note taking)
DH: Idempotency constraint not about source text, about result
WH: So where is the result used in the 15.2.1.17 HostResolveImportedModule •3 idempotency constraint? That line uses the string, not the result.
DD: Layering-wise, there's no way that we could enforce the idempotency requirement about any sort of normalized form, as this is done within the spec based on the name used to address the module in the source text.
DH: Thank you, good to know that the idempotency requirement is about the operation of HostResolveImportedModule. Is that utterly impossible for Node?
DD: the idempotency requirement is: if you have two import "x" in the same file, they must produce the same thing
DH: just a narrow constraint
YK: The intermediate string is where people are thinking about the api
BFS: This is a fine constraint for semantics
DD: Bradley's implementation is based on a map, as shown in his slides. The constraint of the spec means that, if you want to, you can use a map to satisfy this requirement.
MRS: To get back to the core point, with Domenic's interpretation (i.e. that the spec only restricts that import "x" twice in the same file must return the same thing), are there any problems with this?
BFS: No, no problem. We can implement this; I was just explaining what this is.
BFS: [Cache data structures slide]
BFS: Using import() to illustrate
AWB: Remember that import sets up bindings
AK: The example would be identical if it said import "f"; import "f"; in semantics
BFS: The idempotency is prior to any evaluation. ESM import declaration links prior to evaluation, so idempotent prior to evaluation
- CJS declares its exports during evaluation or at its end
AWB: It doesn't matter what you did for exports; there's nothing to link.
BFS: let's say I have import "foo"
AWB: The difference between import and require is that require returns a value, so as long as you get the value, you have it, unlike linking bindings and pre-initializing them
BFS: The current feeling of how modules work is based on the Babel implementation, where that is not quite true. They use member expressions for variable access, rather than creating bindings
- using member expressions to simulate live bindings for variable access, not making bindings
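A toy illustration of that member-expression technique, with all names invented for the example:

```javascript
// Babel-style interop simulates "live" bindings without real bindings:
// a named import compiles to a member expression on the exports object,
// so later reads observe mutations. But name lookup is just ordinary
// property access, not static linking.
const _b = { fromB: 'b' }; // stands in for require('b')'s exports object

function readFromB() {
  return _b.fromB; // compiled form of a use of the imported name `fromB`
}

_b.fromB = 'mutated';
// readFromB() now observes the mutation: value reads are live, but a
// typo like _b.fromBee is silently undefined rather than a link error.
```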
YK: Babel might be leaky, but it allowed people in node to do things that need to be understood
BFS: [Timing slide] ESM was designed for async loading, conceptually
AWB: My primary spec goal was static linking; I wasn't thinking about async linking at all
YK: AWB's spec is well written to separate the steps (re: sync and async are irrelevant)
BFS: To import CommonJS, you need to know their shape, which occurs during evaluation, after linking. For an ES module to import a CommonJS module, we need to hoist the evaluation of the CommonJS module into the linking phase
AWB: So if you want to treat a CJS module as an ESM, you can think of it as <?>
BFS: You still need to perform eval at some point
YK: b/c cjs modules need to be evaluated to know what the exports are, and have to evaluate esm, the cjs modules have to be run first (declarative vs. non)
DD: All agreed, evaluation has to happen first.
YK: CJS always treated as a single export
JM: Do have an object before evaluation, can be clobbered. You don't know the bindings ahead of time. If you import a CJS module from an ESM, you can make the semantics be that your lookup of the name is dynamic in a similar sense to the live bindings from ESM.
AWB: When import something from another module, taking bindings that aren't initialized
WH: What is import *? import * doesn't import everything...
DH: Doesn't exist anymore.
CP: If you happen to have a module that is not ESM, you can create a binding that is the default
BFS: These slides are based on the intent that we would have the same level of compatibility as Babel, where you can have named imports from CJS. We'd like to not lose that.
CP: not a requirement
YK: desirable
- Have to allow evaluation before linking
DD: Allow him to get to the slides...
BFS: Without eval occurring during linking, we have no path to transition from transpilers to native modules.
BFS: [Timing example - Circular] Circular dependency between CJS and ESM, with module.exports = null from CJS
YK: Is this a realistic example?
JSL: Sometimes people do actually blow away exports from within the module
BFS: entry is our commonJS, dep is ESM. Dep tries to link, but entry's shape is not finalized. We can snapshot the shape at the end of the evaluation, but we can't link it earlier as we don't know the shape.
AWB: is the problem circ deps back to CJS?
BFS: You run into this sort of issue whenever you cross the bridge.
AWB: Requiring a dep starts a new root level load of the module. Not an import.
BFS: expectation is that esm cannot import cjs?
AWB: circularly.
DD: Could you clarify?
AWB: require in the context of ESM cannot introduce circularity because they are binding-based, not value-based
DD: You can get an evaluation circularity
AWB: You can get a loop or an error
YK: banning cycles between cjs and esm seems more palatable
BFS: An alternate solution: Making loading esm from cjs an async op. This makes it so that you can't do eval circularly. This is a pretty drastic change, as some of your loading is async, so your whole dep graph is async
YK: would node consider disallowing cycles between cjs and esm?
BFS: Disallowing (throwing on attempt) was part of the original proposal
BBR: fine to drop support for circular deps?
JSL: Circular dependencies are actually very common with require. We cannot get rid of that. There is something to be said that, once we do an import, we can't do a circular dependency back to require; maybe we can get away with not allowing that
DH: This is not as drastic as getting rid of cycles altogether.
MRS: at scale?
BBR: Packages can actually be circular
MRS: essentially opting into once you have an import?
DD: Not the deps
DH: only within one package, if cjs package and want to migrate to esm: must migrate all within the package
DD: How does npm install circular package dependencies?
JS: It flattens the loop
DD: It would be interesting getting data on circular dependencies in npm packages
MRS: npm3 makes this worse by circularly depending on things flattening implicitly
JSL: There are multiple problems: When you do change the exports (insert explanation)
BFS: The two things which come up the most for Node CTC:
- Named imports being supported in whatever fashion for import <named thing> from CommonJS
- Can we do something synchronously? require(ESM) synchronously returns the module namespace object
That's mostly what this is about.
JSL: We could implement the spec as is. But this would make some compromises for the usability. With the current spec, these things wouldn't work.
DH: Design constraints:
- It needs to be possible to import named exports from CJS
- require(ESM) needs to return synchronously
JM: Are these technical needs or ecosystem needs?
JSL: These are ecosystem needs. Babel today can do these things. Those users will want to be able to not change their code. If we say that doesn't work, we're violating a concern.
BB: People won't want to upgrade if they don't get synchronous require
DD: Some of these things are up for debate; maybe it doesn't need to synchronously return
JSL: We could also let require return a Promise
BM: Async is a way to get out of these issues, so you get out of "the zebra striping problem".
(Do we need to be able to import CommonJS from ESM?)
JSL: Maybe we could sell the idea that if it's CommonJS, we can't import it.
MRS: That's easier to sell to Node people, but it's probably not what you want if you're an advocate of ESM. This will make the transition to ESM a hard sell for new users.
YK: any upgrade to esm, must break back compat.
JHD: a major semver release
DD: Are we talking about named exports? If making it work means it would be an object, not bindings, that's not a route we want to go down
MRS: We're going to have a mix of these two module systems forever
DD: So, then do we want to get rid of static constraints?
DH: Do not want to throw away the guarantees from static constraints
AWB: With babel's loose interpretation of ESM, you're able to take a CJS module, apply its semantics, and it mostly works
BFS: It uses CJS under the hood, with ESM syntax
AWB: In the spirit of migrating, maybe just do exactly what Babel does?
BFS: are you suggesting we use the syntax of ESM, but not the semantics?
AWB: transpilation
YK: What Node is discovering is that ESM and CJS form one big graph; other module systems were not considered
AWB: Babel translates ES module binding semantics into CommonJS value semantics
(discussion re: bridging from cjs to esm)
AWB: Not just syntax, fundamentally different semantics
BFS: But the community thinks about modules as CJS
DH: concrete level: live bindings, aliasing
?
DD: Adding new properties or deleting old properties
YK: Changing exports by mutating properties
JSL: People mutate the object
AWB: With binding semantics, you can't look at a binding that hasn't been initialized. With value semantics, you look at everything as properties, you may see things as undefined
JSL: The community would really want named imports
BFS: We are here to discuss the problem. For timing, we have fixes, for named imports, we are here to discuss
JM: Sounds like we're going to break Babel somehow. We shouldn't be discussing whether we will break Babel, but rather what's the way to break Babel that's minimally invasive.
YK: Could we make the exports object the default one, and sometimes make some of the bindings get additional redundant names?
DRR: As soon as you do named-export plucking, it becomes non-trivial to move from CJS to ESM; as soon as you've switched, you end up with one of two naive approaches.
- (I'll ask Daniel to fill this in)
- instead of assigning module.exports, I'll do export function() {} over and over
JM: Third option, if there's no default export, Node could create one
MRS: It already does
YK: Maybe we could ask users to run a tool when upgrading
DRR: It would be hard to get the tool to be run
MRS: This would be a Python3-style incompatibility
BFS: skeptical that we can put in loader?
MRS: The vast majority of modules will not be upgraded; I'm more worried about these, and about users having expectations that they are
BBR: allow two different entry points, if package maintainer wants to do that?
- Or a tool that requires it, looks at the exports and creates a wrapper?
BFS: Not sure?
BBR: We accept that there is no automatic transition
MRS: We want to avoid module authors doing any explicit work
BFS: If default is
DRR: (fill in point)
DH: important to look at named imports and exports that don't change, vs. do
MRS: (spoke too fast for me to follow, but basically something about module.exports = function() {}?)
BFS: If we had a way to observe mutations, we may be able to track them, but it's not clear how we would do that
YK: Empirically, most modules don't do mutation.
MRS: Module authors will get bugs reported to them that claim the issue is the new version of Node.
AWB: It's clear that all of the CJS and ESM lumped together will not work perfectly, but we can examine things by use case. Some may be automatic, others need tools, others are impossible
MRS: We are trying to make these tradeoffs, but we want to make them in a sane manner; we want to look at whether which things would break the spec, or break what things on what end. We think users have a reasonable expectation to be able to do named imports from CJS modules. We will have to sacrifice some of the linking features in ESM, since Node's load cycle is a big block. We need to be able to break the tradeoff without violating the spec.
YK: approach, get as close to the spec as possible, and come back to the committee with concrete points that need to be fixed.
BFS: [Hoistable fix slide]
BFS: you currently have access to calling the functions defined in a module even if you never evaluate it at all. In this proposal, that behavior would be removed, and you'd only get the functions if you really do it.
AWB: introducing new hypothetical API, need to define its semantics
DH: The observable difference is about whether you encounter
AWB: This only happens from circularities
AK: This is all about the interaction between circular dependencies between CJS and ESM
YK: Why care if esm can or cannot see cjs
BFS: We want a single module system that can be used for ES
I'm unable to type fast enough to keep up with this. Its hard to tease out the point when people start and stop statements mid-statement.
I want to explain, I'm going to, Here's how I think we can explain, let me unpack (interrupted)
DH: [suggesting some design where we "delay validation" of imports across module system boundaries]
AK: That would be a big change; it's hard to understand what that would mean.
DH: We may want to insert a lot more dynamism to solve this issue
MRS: How should ESM -> CJS -> ESM work?
CP: you would have a couple stripes when crossing the boundary
DH: That's why it's called zebra stripes
(break)
BFS: Linking is very dynamic. The popular npm module "meow" relinks its parent. Used for CLI. When you require it, you get a new particular module per importer, which gives a modified version of it. We may need to revisit linking to support this.
WH: What does this achieve?
BFS: This lets you tool out your CLI without knowing anything about your dependent. It lets your dependency learn about your module by reading its package.json.
BFS: [Named imports slide] ESM cannot do named imports from CJS dependencies without mitigations. Our proposal is to hoist evaluation of the CJS module up to the linking phase.
JSL: No matter what we do, we will break Babel somehow, the question is just how.
BFS: In our current proposal, we would take a snapshot of the exported properties of the object and export those names. We had considered more flexible behavior, but it seems unworkable.
CP: How many people are using this?
JSL/MRS: Some? It's unclear how many rely on it.
MRS: We could say that it just doesn't work properly if you import it as ES6.
BFS: Common on npm: "import {Component} from 'React'".
ESM Doable needs
- Context
- Remove existing "magic" variables
- Built-in module like import {url} from 'js:context' to get these variables -- defer to iterate on details, relates to discussion in Portland
- Early errors from non-existent context variables
- Hooks - Need to ensure hookup prior to
- Some kind of loader spec, maintaining invariants of es262. May be related to the loader spec.
----- Discussion Context syntax details
BT: There was earlier discussion about import.context which would be a grab-bag object for these things like these, championed by Dave Herman at the Portland 2015 TC39 meeting
DE: If you want early errors, rather than getting undefined, then we could have import.url rather than import.context.url, which would have an early error if you did import.foobar.
DD: It is important for this to be host-extensible and not introduce desktop concepts to the web
JSL: Yes, there should be compatibility between Node and the Web, and we'd be open to trying to move away from Node-specific APIs.
MRS: For extensibility, there are various cases, bundling loader, etc. These may hook into this in various ways.
YK: I'm concerned about compatibility with filename and dirname
BFS: Although the spec would allow it, we will not modify the absolute URLs. The main thing we would need is import.url; we may want another path for CJS metadata, but no other properties shared between environments.
DH: I'm skeptical
MRS: dirname may be useful for a case where you have templates that you want to load from the directory where the JavaScript file is
DD: This could be based on require.resolve within the Node ecosystem
DH: In the loader spec work, there have been cases where import.url would not be such a clean thing to expose--for example, there could be two names that correspond to the same module.
import.resolve is another proposal for a pseudofunction, taking a string literal or possibly a runtime string value, for resolving the absolute, rather than relative, path which gives a more canonicalized way to reach the module, which would also be a thing that could be passed into import.
Generally: There is agreement in the room that getting the url and having a way to resolve an import into a url are important problems, and some subtle questions remaining about whether it should be a built-in module or pseudoproperty, as well as other details.
DH: New interoperability linking suggestion: The validation for linking would not always be performed statically, but rather dynamically when hitting a CJS module, and in that case, deferring the validation until the beginning of the execution of the top-level ESM module body. The next part is, what does the dynamic validation look like. One option is, we preserve live bindings from CJS, and the other option is a snapshot. The latter option guarantees nothing disappearing, but this loses the aliasing semantics of ES6. This is all for the case of importing a CJS from ESM.
BFS: There's a difference between snapshotting the list of property keys, and snapshotting the values of properties. I have proposed doing a live binding to the list of property keys which are available as one is exported from a CJS module after the initial evaluation, but to have bindings that are live from the CJS object to the module namespace object or direct usages
AK: The Babel version does have some notion of live binding [because it translates named lookups to member expressions on the exports object]
BFS: e.g., used in Promisify.all. Changing the values is much more common than changing the keys.
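A rough sketch of that idea: snapshot the key list once after the CJS module first evaluates, but let each value read delegate to the live exports object. makeNamespace is a made-up helper for illustration, not a real API:

```javascript
// Keys are snapshotted once; value reads stay live by delegating to
// the underlying CJS exports object via getters.
function makeNamespace(cjsExports) {
  const ns = Object.create(null);
  for (const key of Object.keys(cjsExports)) { // key list frozen here
    Object.defineProperty(ns, key, {
      enumerable: true,
      get: () => cjsExports[key], // values remain live
    });
  }
  return Object.freeze(ns);
}
```

Mutating an existing export is visible through the namespace, but adding a new key after the snapshot is not, which matches the observation that changing values is much more common than changing keys.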
WH: Trying to pin things down. What is the specific evaluation order of an ESM module A importing from an ESM module B, and what exactly changes if B were a CJS module instead?
AK: [described DH's previous proposal involving delaying resolving bindings imported from CJS into an ESM until after CJS evaluates during evaluation]. Example:
// a
import {foo} from "b";
import {bar} from "c";

// b
export let foo = 1;
console.log('hello');

// c
console.log('world');
module.exports = { bar: 2 };
Instantiation of a causes instantiation of b, but because c is a CJS module we defer its "instantiation" till later (and in the meantime put the "bar" binding in a into TDZ). Evaluation of a immediately causes b to evaluate (before evaluating any of the statements in a), then c to evaluate, and after c evaluates we can complete linking of a to c, thus initializing "bar" to 2. If c failed to export "bar", then an error would occur at this time (before any of a's statements execute).
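The TDZ-then-initialize behavior in this example can be simulated in plain JavaScript. This is a hand-rolled sketch of the idea, not spec machinery:

```javascript
// A deferred binding starts in a TDZ-like state; reading it before the
// CJS module has evaluated throws, and it is initialized only after
// that evaluation completes, just before the importer's statements run.
function makeDeferredBinding(name) {
  let initialized = false, value;
  return {
    init(v) { initialized = true; value = v; },
    get() {
      if (!initialized) throw new ReferenceError(`${name} is not initialized`);
      return value;
    },
  };
}

const bar = makeDeferredBinding('bar');
// ... evaluation of the CJS module "c" happens here ...
bar.init(2); // linking of a to c completes after c evaluates
```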
DH: Another possibility, more ambitious: We may be able to allow cycles, based on some reordering. Disallowing cycles would be an adoption hazard. The proposal here would require the earlier modules to fully resolve before the later modules come.
JSL: Tools can create cycles that didn't exist
AK: How?
MRS: A downstream dependency may end up adding a dependency edge to something that uses it very indirectly, and a module in the middle of this cycle might not know about this but be affected
AK: Cycles would not work with the idea I was describing before as requiring the ESM from the CJS that imports it would hang, as require is synchronous and wants a fully-formed answer immediately. Options include CP's making the namespace object partial, or returning a Promise, or Dave's changing the evaluation order.
[AK: lots of discussion about a cyclical dependency graph involving CJS and ESM modules; q: how does evaluation order change? a: in several ways. q: do we know anything about the shape of the ESM modules themselves? a: not if there are StarExports that cross that boundary]
Prohibit export * of a CJS module from an ESM module.
BFS: We shouldn't return different shapes of modules at different times, e.g. if an ESM module imports a CJS module while CJS hasn't finished evaluating
JSL: We do support that in CJS right now, as a side-effect of how require works
AK: It should be a dynamic error to try to import * from a CJS module if that module is in a purgatory state
AWB: Or we could relax the immutability of module namespace objects in a case where the imported-from module is dynamic. The immutability was mostly for optimizability.
AK: Couldn't there also be security issues with allowing a namespace object to change after creation?
AWB: With everything being ESMs, we can do static resolution all the way down to the base binding. We can't necessarily do that when we have dynamic CJS modules in the middle there. We have to stop at that point. Seems like that requires a spec change. Ultimately, I think TDZs take care of it, but at present, in the ES spec, we assume that we can go all the way to the end, as opposed to going through multiple steps.
11.3.a import()
(Domenic Denicola)
domenic/proposal-import-function
DD: Presently, all module loading is done statically, top level. There are cases where you absolutely need to have conditional imports. There are many common use cases. Need to dynamically load a module, with a string. Returns a Promise for a module namespace object. The proposal keeps with out general stance of keeping proposals small; this is mostly syntax plus calling out to embedder hooks.
AWB: Does the occurrence of import turn the script into a module?
DD: Initially, I thought to restrict it to modules, but no reason to restrict it.
BFS: Many reasons to include for use in Node
DD: Good way to bootstrap into modules
AWB: Why special form, not a function?
BFS: To give you a context
DD: Want to be given a module specifier, e.g., for some embedders, a relative path which is resolved based on where you're calling this from.
domenic/proposal-import-function#an-actual-function
domenic/proposal-import-function#a-new-binding-form
DD: This is better than inserting <script type=module> tags dynamically because it's based directly on module specifiers, easier to use, doesn't pollute DOM, etc. Not introducing a new binding form.
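A small sketch of the proposed form in use. import() did eventually ship, so this runs today; the data: URL below is used only to keep the example self-contained, and a real call site would pass an ordinary specifier such as './feature.js' (a hypothetical path):

```javascript
// import() is syntax, not a function, but it can appear in expression
// position and returns a Promise for the module namespace object.
function loadAnswer() {
  return import('data:text/javascript,export default 42')
    .then(ns => ns.default);
}
```

Because it takes a runtime string, this supports the conditional-loading use cases discussed above, e.g. choosing a module based on a feature check.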
DD: New embedder hook, HostFetchImportedModule (runtime equivalent of [[RequestedModules]])
WH: import( is not currently legal, so there's no ambiguity with import declarations
RW: Channeling DH from three years ago: import( creates confusion because it looks like a function but isn't a function.
DD: This is a syntactic form that's function-like. super is a prominent one. Also, you can do (x => import(x))
RW: That's a good precedent to cite for this (w/r to "it looks like a function, but isn't actually a function object")
Promise.all(["a", "b"].map(name => import(name))).then(() => ...)
BFS: To clarify, even in Node, it would return a Promise
DD: Even if Node wants require to be synchronous, asynchronous background loading of modules is useful, e.g. lazily loading things from JSDOM.
MRS: In Node, we'd probably still do sync I/O and just return a Promise
BFS: Can still use require to do basically this. The advantage of loading the dep graph in a non-blocking way was explored via require.async() (pfft. whoops)
AWB: I was initially skeptical but like this proposal. Good for scripts, including for using built-in modules from scripts. Though you may want built-in modules to resolve synchronously.
AK: But it may take a long time to load the built-in module and want it to be asynchronous
AWB: Also, we could add the import statement to scripts
CP: Do you plan to allow require within module source text in Node?
BFS: Probably we'll have some way to get ahold of require, but we'd really encourage people to use import.
BB: Maybe you'd import require from a built-in module.
AK: Why is there a new hook for HostFetchImportedModule?
DE: Isn't this like adding something to [[RequestedModules]] dynamically?
Conclusion/Resolution
- Stage 1 acceptance
- Becomes Stage 2 at the end of the day tomorrow, noting that we are seeking feedback from DH and YK which may affect this.
- Reviewers
- Caridy Patiño
- Allen Wirfs-Brock
- James Snell
Revisit System.global => global
JHD: Reviewers and editor have signed off on the spec text.
- Willing to make it enumerable: true pending implementor feedback
JHD: Hope to have as many browser implementations before Stage 4
Conclusion/Resolution
- Stage 3 acceptance
11.2.b Intl.Segmenter
(Daniel Ehrenberg)
DE: Unicode defines breaking properties: grapheme, word, line, sentence breaks.
- Want to remove Intl.v8BreakIterator (sorry about that!)
// Create a segmenter in your locale
let segmenter = Intl.Segmenter("fr", {type: "word"});
// Get an iterator over a string
let segmentIterator = segmenter.segment("Ceci n'est pas une pipe");
// Iterate over it!
for (let {index, breakType} of segmentIterator) {
  console.log(`index: ${index} breakType: ${breakType}`);
  break;
}
Short-cut accelerated API:
let segmentIterator = segmenter.segment("Ceci n'est pas une pipe");
// next() returning undefined
segmentIterator.advance();
// index of current break result
segmentIterator.index();
// breakType of current break result
segmentIterator.breakType();
WH: Is the iterator the only API you're proposing? If you just wanted to turn a string into an array of words in that string, is there a quick way of doing that?
DE: For now just planning on providing the lower-level API. Things like that can be built on top of it.
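For WH's question, a words-of-a-string helper can indeed be built on top. Note that this sketch uses the API shape Intl.Segmenter eventually shipped with (a granularity option and segments carrying an isWordLike flag), which differs from the breakType/advance() surface shown on the slides:

```javascript
// Extract the word-like segments of a string using Intl.Segmenter.
// Assumes the shipped API, not the slide deck's proposed one.
function words(str, locale = 'en') {
  const segmenter = new Intl.Segmenter(locale, { granularity: 'word' });
  return [...segmenter.segment(str)]
    .filter(s => s.isWordLike)   // keep words, drop punctuation and spaces
    .map(s => s.segment);
}
```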
JSL: Only works if you have full ICU (Intl.v8BreakIterator); ideal if this can be made to work with small ICU
WH: What does this do for indices at which there are no breaks? Does it have a concept of "no break", or does it just omit them from the iterator output?
DE: It omits them.
WH: Consider the input: Hello, "world"! \n\n
Where do the word breaks go here, and are the decisions configurable?
DE: Explained in unicode.org/reports/tr29 (UAX29)
WH: Might be nice to also include a segmenter that indicates character boundaries; i.e., everywhere except between UTF16 surrogate pairs.
AWB: Intl standard-optional, not standard-mandatory
- Is this functionality something that you want tied to the optionality?
BT: grapheme doesn't require a lot of data
DE: The advice is: callout to ICU or you'll do it wrong
BT: is Segmenter based on UAX29?
DE: Both UAX29 and UAX14
Mixed discussion about ICU data, etc. Dan answers all questions.
SYG: cannot reverse a string and get same breaks
WH: Reversing doesn't make a difference. In either the forward or reverse case you may need arbitrarily long lookahead and lookbehind to determine the breaks. For example, there are emoji grapheme breaking rules that state that if you have an even number of characters preceding your position then you have a break, but if you have an odd number then you don't. So you need to consider the input string as a whole.
WH: UAX29 lists lots of configuration sub-options for different kinds of breaking choices. Is it your intent to support those?
DE: Yes.
JSL: ideal to get rid of prefixed.
Conclusion/Resolution
- Stage 1 acceptance | https://esdiscuss.org/notes/2016-09-28 | CC-MAIN-2019-18 | refinedweb | 6,138 | 60.14 |
I'd like to create a script to do various things when the network connection changes. For example, if the machine connects to my office network I might desire it to:
Then, when the machine is off the office network it might revert some of those.
Presumably the "active network" would be based on IP address/gateway range or WIFI network name, etc.
Do you have any recommended tools for accomplishing this? I can see something like this being applicable to many IT pros so I'm sure there are some good tips out there.
Note that this is desired to function when the network changes--not just when the user logs on/off.
The "Network Location Awareness" service () in Windows XP (and 2000? I don't recall...) and up will enable this functionality, but I haven't found where anybody has written an application to take advantage of it. I'd love to code something myself, but I don't have enough spare cycles to even begin to think about it.
This wouldn't be a simple little VBScript thing, but it wouldn't be that much coding either. Maybe somebody could pick up the idea and run with it. There's even sample code at.
There have been numerous times I've wanted this functionality, and I'd think there are more than a few people who would like to see it.
I've had some success with Net Profiles before.
This is kind of a sideways and partial answer (and not even documented because I can't seem to find the original article), but I'm fairly sure Windows 7 has a property sheet on the printer object allowing you to change defaults depending on subnet.
I'd love to see features addressing the rest of your list, as well.
You can do it using Task Scheduler:
Just in case, there are also events for "Suspend" and "Resume" available there. I, for example, use them to stop and restart Hamachi on these events, since otherwise it seems to have problems on my machine.
For "Suspend" You can use Log "Microsoft-Windows-Kernel-Power/Thermal-Operational", Source "Kernel-Power" and the Event ID is 42.
For "Resume" You can use Log "System", Source "Power-Troubleshooter" and the Event ID is 1.
For user login/logoff, I do not know events under Task Scheduler (but I believe they are available there too), but You can configure it instead here:
You can do similar stuff for "Startup" and "Shutdown" events under "Computer Configuration" folder in the same window. The Startup script runs under system account before the user logs on.
You could create a batch file that reads the results of an ipconfig command, and have it run every 5 seconds or so...
Using .net framework version 2 it is possible to use the System.Net.NetworkInformation namespace. From this you can determine:
network availability: NetworkAvailabilityChangedEventHandler()
network address changes: NetworkAddressChangedEventHandler()
ipaddress: IPAddressInformation
and so on.
How much of this is available through powershell I don't know because I don't use it. But I have written a very simple vb.net tray application that just monitors network availability and connects to a network share when the network is available. If you're interested in the bare-bones of the code then let me know.
By posting your answer, you agree to the privacy policy and terms of service.
asked
5 years ago
viewed
5946 times
active
1 year ago | http://serverfault.com/questions/26056/how-can-i-run-a-script-when-my-network-connection-changes | CC-MAIN-2015-22 | refinedweb | 580 | 62.58 |
A topic of much discussion within the unit testing community is how to test protected or private methods. Since access to such methods is restricted, writing unit tests for them is not straightforward.
Some developers deal with this quandary by simply ignoring protected or private methods and testing only the public interfaces. It's argued that most of an object's behavior is reflected in its public methods. The behavior of the protected methods can be inferred by the exposed behavior.
There are some drawbacks to this approach. If there are private methods that contain complex functionality, they will not be tested directly. There is a tendency to make everything public so that it is testable. Some behaviors that should be private might be exposed.
It is possible to access and test protected and private methods, depending on the specifics of how a language defines and enforces object access permissions. In C++, making the test class a friend of the production class allows it to access protected interfaces:
class Library { #ifdef TEST friend class LibraryTest; #endif }
This introduces a reference to the test code into the production code, which is not good. Preprocessor directives such as #ifdef TEST can omit such references when the production code is built.
In Java, a simple technique that allows test classes to access protected and private methods is to declare the methods as package scope and place the test classes in the same package as the production classes. The next section, "Test Code Organization," shows how to arrange Java code this way.
For Java developers who are not satisfied with the direct approach, the Java Reflection API is a tricky way to overcome access protection. The JUnit extension "JUnit-addons" includes a class named PrivateAccessor that uses this approach to access protected or private attributes and methods.
The truly hardcore can follow the examples given here to write their own code that subverts access protection. In Example 4-12, the values of all of Book 's fields are read, regardless of protection. This approach is an ugly hack. Don't read this code just after a meal.
BookTest.java import java.lang.reflect.*; public void testGetFields( ) { Book book = new Book( "test", "test" ); Field fields[] = book.getClass( ). getDeclaredFields( ) ; for ( int i = 0; i < fields.length; i++ ) { fields[i]. setAccessible( true ) ; try { String value = (String) fields[i].get( book ) ; assertEquals( "test", value ); } catch (Exception e) { fail( e.getMessage( ) ); } } }
A Book with title and author "test" is created. The Reflection API method getDeclaredFields() returns an array of all of the Book 's fields, and the call to setAccessible( ) allows access to a field. The Reflection API method get( ) is used to obtain each field's value. The test asserts that the value of the field is test .
Similarly, in Example 4-13, all of Book 's get methods are called, ignoring access protection (although the get methods actually are public).
BookTest.java public void testInvokeMethods( ) { Book book = new Book( "test", "test" ); Method[] methods = book.getClass( ). getDeclaredMethods( ) ; for ( int i = 0; i < methods.length; i++ ) { if ( methods[i].getName( ).startsWith("get") ) { methods[i]. setAccessible( true ) ; try { String value = (String) methods[i].invoke( book, null ) ; assertEquals( "test", value ); } catch (Exception e) { fail( e.getMessage( ) ); } } } }
Paralleling the previous example, the Reflection API method getDeclaredMethods() returns all of the Book 's methods, and the call to setAccessible( ) subverts the method's access protection. The test checks the method name and calls only those that have names starting with get to avoid calling Book 's constructor. The Reflection API method invoke() is used to call the methods. Both get methods should return the value test , so this condition is asserted.
Hacks aside, the recommended approach is to design objects so that their important behaviors are public and test those behaviors. Structure the code so that the tests have access to the protected behaviors as well, so that they can be accessed if necessary. | https://flylib.com/books/en/1.104.1.31/1/ | CC-MAIN-2018-34 | refinedweb | 651 | 56.35 |
Named Pipe Security and Access Rights
Windows security enables you to control access to named pipes. For more information about security, see Access-Control Model.
You can specify a security descriptor for a named pipe when you call the CreateNamedPipe function. The security descriptor controls access to both client and server ends of the named pipe. If you specify NULL, the named pipe gets a default security descriptor. The ACLs in the default security descriptor for a named pipe grant full control to the LocalSystem account, administrators, and the creator owner. They also grant read access to members of the Everyone group and the anonymous account.
To retrieve a named pipe's security descriptor, call the GetSecurityInfo function. To change the security descriptor of a named pipe, call the SetSecurityInfo function.
When a thread calls CreateNamedPipe to open a handle to the server end of an existing named pipe, the system performs an access check before returning the handle. The access check compares the thread's access token and the requested access rights against the DACL in the named pipe's security descriptor. In addition to the requested access rights, the DACL must allow the calling thread FILE_CREATE_PIPE_INSTANCE access to the named pipe.
Similarly, when a client calls the CreateFile or CallNamedPipe function to connect to the client end of a named pipe, the system performs an access check before granting access to the client.
The handle returned by the CreateNamedPipe function always has SYNCHRONIZE access. It also has GENERIC_READ, GENERIC_WRITE, or both, depending on the open mode of the pipe. The following are the access rights for each open mode.
FILE_GENERIC_READ access for a named pipe combines the rights to read data from the pipe, read pipe attributes, read extended attributes, and read the pipe's DACL.
FILE_GENERIC_WRITE access for a named pipe combines the rights to write data to the pipe, append data to it, write pipe attributes, write extended attributes, and read the pipe's DACL. Because FILE_APPEND_DATA and FILE_CREATE_PIPE_INSTANCE have the same definition, so FILE_GENERIC_WRITE enables permission to create the pipe. To avoid this problem, use the individual rights instead of using FILE_GENERIC_WRITE.
You can request the ACCESS_SYSTEM_SECURITY access right to a named pipe object if you want to read or write the object's SACL. For more information, see Access-Control Lists (ACLs) and SACL Access Right.
To prevent remote users or users on a different terminal services session from accessing a named pipe, use the logon SID on the DACL for the pipe. The logon SID is used in run-as logons as well; it is the SID used to protect the per-session object namespace. For more information, see Getting the Logon SID in C++. | https://docs.microsoft.com/en-us/windows/desktop/ipc/named-pipe-security-and-access-rights | CC-MAIN-2019-09 | refinedweb | 452 | 53.51 |
Thank you for the follow-up. Food for thought.
I would counter that I have personally known, read about on Medium and elsewhere, and have had therapists confirm that women may appear to enjoy “playful” sex (read: degrading).
Women have faked orgasms. They have faked enjoying all sorts of types of sex.
Frequently it is not until years later or another relationship that these same women realize that they actually felt violated, degraded, used, or coerced. Of course, by that point, that original sexual partner is long gone and has no idea how what he perceived as “playful” actually affected and impacted that woman.
I don’t say this to blame men. It’s a complicated, confusing world. And, most assuredly, some women respond positively and sincerely enjoy all sorts of alternative sex acts.
I only share this to give you a different insight.
As for me, I find that players and I don’t mix. Fortunately, we tend to avoid each other. I appreciate your suggestion, but I am definitely a one man woman and need the same in return (not after the first date but in general). That does make dating more difficult for me, but I’m at peace about that.
Best wishes and thanks for reading! | https://bonniebarton.medium.com/thank-you-for-the-follow-up-food-for-thought-38117f6c4185 | CC-MAIN-2021-10 | refinedweb | 210 | 66.94 |
25 February 2009 19:45 [Source: ICIS news]
SAN ANTONIO, Texas (ICIS news)--The US ethanol industry on Wednesday conceded that a small reduction in the tariff on Brazilian imports would be fair, given the reduction in the US tax credit for blending the biofuel.
But the two sides remained far apart on whether there should be any tariff at all.
The ?xml:namespace>
The
"Unless one is dyslexic, 54 cents/gal is more than 45 cents/gal," Joel Velasco, the chief US representative for
Bob Dinneen, president of the Renewable Fuels Association (RFA) and the chairman of the panel, acknowledged the difference.
The RFA, which represents 90% of US producers, was considering support for parity between the tariff and the credit, Dinneen said.
But Dinneen defended the
Velasco said the Brazilian group had also opposed the tariff at that time. Unica opposes any type of trade restriction, he said.
The tariff dispute came under the spotlight this week on news that
Dinneen said he hoped
Velasco told delegates that the decision not to join at this time was due to resource restrictions. He did not directly link the decision to the trade dispute. | http://www.icis.com/Articles/2009/02/25/9195804/us-could-trim-tariff-on-brazil-ethanol-rfa.html | CC-MAIN-2015-06 | refinedweb | 194 | 55.98 |
Part 3 of this mini series will focus on adding buttons.
Now that we can add a label we can add a button in a very similar way, buttons however are interactive, so not only do we need to place a button we need to define what the program is going to do with it.
def btn1(): print ("button pressed") btn_tog2 = Button( window, text ='button1', command=btn1) btn_exit = Button( window, text ='exit',command=exit) btn_tog2.grid(row = 1, column = 1, padx = 5, pady = 5) btn_exit.grid(row = 2, column = 1, padx = 5, pady = 5)
And how this code works, is explained below
All this should result in | http://zleap.net/tkinter-tutorial-3/ | CC-MAIN-2019-04 | refinedweb | 107 | 59.64 |
tl;dr: the proposed type hinting for Python is done to help tools analyze code better (which can be very useful for programmers) but at the cost of reduced readability. A different idea is discussed which focuses on readability.
----
So, Python is going to have some type hinting [PEP484].
I agree with the idea that lead to this addition to Python; however, I find that the syntax proposed is less than optimal. Thinking about how it came about, it is not entirely surprising.
- Functions annotations were introduced in 2006 [PEP3107].
- Various libraries worked within the constraints and limitations imposed by this new addition, including mypy [mypy].
- PEP 484 is "strongly inspired by mypy" essentially using it as a starting point. However, it indirectly acknowledges that the syntax chosen is less than optimal:
If type hinting proves useful in general, a syntax for typing variables may be provided in a future Python version. [PEP484]
What if [PEP3107] had never been introduced nor accepted and implemented, and we wanted to consider type hinting in Python?...
Why exactly is type hinting introduced?
As stated in [PEP484]:
"This PEP aims to provide a standard syntax for type annotations, opening up Python code to easier static analysis and refactoring, potential runtime type checking, and performance optimizations utilizing type information."
The way I read this is as follows: type hinting is primarily introduced to help computerized tools analyze Python code.
I would argue that, a counterpart to this would be that type hinting should not be a hindrance to humans; in particular, it should have a minimal impact on readability. I would also argue that type hinting, as it is proposed and discussed, does reduce readability significantly. Now, let's first have a look at some specific examples given, so that you can make your own opinion as to how it would affect readability.
Simple example (from [PEP484]):
I will start with a very simple examples for those who might not have seen
the syntax discussed.
def greeting(name: str) -> str:
return 'Hello ' + name
Within the function arguments, the type annotation is denoted by
a colon (:); the type of the return value of the function is done
by a special combination of characters (->) that precede the colon which
indicates the beginning of a code block.
Slightly more complicated example (from [mypy])
def twice(i: int, next: Function[[int], int]) -> int:
return next(next(i))
We now have two arguments; it becomes a bit more difficult to see at a
glance what the arguments for the function are. We can improve upon this
by formatting the code as follows:
def twice(i: int,
next: Function[[int], int]) -> int:
return next(next(i))
What about keyword-based arguments?
From [PEP483]
"There is no way to indicate optional or keyword arguments, nor varargs (we don't need to spell those often enough to complicate the syntax)"
From [PEP484]
"Since using callbacks with keyword arguments is not perceived as a common use case, there is currently no support for specifying keyword arguments with Callable."
However, some discussions are taking place; here is an example taken from (formatted in a more readable way than found on that site)
def retry(callback: Callable[AnyArgs, Any[int, None]],
timeout: Any[int, None]=None,
retries=Any[int, None]=None) -> None:
Can you easily read off the arguments of this function? Did I forget one argument when I split the argument lists over three lines? Can you quickly confirm that the formatting is not misleading?
Type Hints on Local and Global Variables
The following is verbatim from [PEP484]
No first-class syntax support for explicitly marking variables as being of a specific type is added by this PEP. To help with type inference in complex cases, a comment of the following format may be used:
x = [] # type: List[Employee]
In the case where type information for a local variable is needed before if was declared, an Undefined placeholder might be used:
from typing import Undefined
x = Undefined # type: List[Employee]
y = Undefined(int)
If type hinting proves useful in general, a syntax for typing variables may be provided in a future Python version.(emphasis added)
Edit: why not bite the bullet and do it now? Considering what this syntax, if it were introduced, should look like, might reassure people who see type information in comments as problematic and ensure that the limited syntax decided upon in this PEP will not have to be changed to be made coherent with the new addition(s).
What about class variables?
I have yet to see them being discussed. I assume that they would be treated the same as local and global variables, with an as yet undefined syntax.
A different proposal for type hinting.
Let's forget for a moment the syntax introduced by [PEP3107] for functions annotations, and imagine that we are considering everything from the beginning.
Type hinting is introduced for tools (linters, etc.). As such, I would assume the following:
When reading/parsing code:
- type hinting information should be easily identifiable by tools
- type hinting information should be easily ignorable by humans (if they so desire)
By this, I mean that the type hinting information should not decrease significantly the readability of the code.
Let me start with an obvious observation about Python: indentation based code-blocks indicate the structure. Code blocks are identified by their indentation level, which is the same within a code block.
Tools, like the Python interpreter, are very good at identifying code blocks. Adding an new type of code block to take into account by these tools should be straightforward.
A secondary observation is that comments, which are ignored by Python, are allowed to deviate from the vertical alignment within a given code block as illustrated below.
def f():
x = 1
y = 2
# this comment is not aligned
# with the rest of the code.
if z:
pass
Now, suppose that we could use a syntax where type annotation was structured around code-blocks defined by their indentation. Since type annotation are meant to be ignored by the interpreter (non executable, like comments), let us also give the freedom to have additional indentation for those ignorable code-blocks, like we do for comments.
The specific proposal
Add where as a new Python keyword; the purpose of this keyword is to introduce a code block in which type hinting information is given.
To see how this would work in practice, I will show screenshots of code from a syntax-aware editor containing type hinting information as it is proposed and contrasted with how it might look when using the "where code blocks". I'm using screenshots as it provides a truer picture of what code would really look like in real life situations.
First, the example from [mypy] shown above:
Now, the same example using the "where" code-block.
I used "return" instead of "->" as I find it more readable; however, "->" could be used just as well.
Having a code-block, I can use the code-folding feature of my editor to reduced greatly the visibility of the type hinting information; such code-folding could presumably be done automatically for all type-hinting code blocks.
Moving on to a more complicated example, also given above. First, the screenshot with the currently proposed syntax.
Next, using the proposed code-block; notice how keyword-based arguments are treated just the same as any other arguments. [Note: I was not sure if timeout above was a keyword based argument assigned a default value or not, since it used a different syntax from retries which is a keyword based argument.]
Again, using code-folding, the visual noise added by the type-hinting information essentially disappears.
Finally, the example with the "global variable", first with the proposed type hinting information added in a comment:
Next, using a code block; no need to import a special "type" to assign a special "Undefined" value: the standard None does the job.
A similar notation could easily be used for class variables, something which is not addressed by the current type-hinting PEP.
Type hinting information is going to be a useful feature for future Python programmers. I believe that using indentation based code blocks to indicate type hinting information would be much more readable that what has been discussed so far. Unfortunately, I also believe that it has no chance of being accepted, let alone considered seriously.
[PEP483]
[PEP484]
[PEP3107]
[mypy]
This blog post is licensed under CC0 and was written purely with the intention to entertain.
17 comments:
Your proposal seems so much more pythonic than the PEP proposals. The stuff they are showing looks like someone is trying to remake python in the image of go or c.
Thank you.
In the examples your proposal looks better than the mypy style.
Still, if you add a sphinx docstring, both seems to add redundancy.
I guess I would prefer a clean signature with a sensible annotation in the docstring.
Cheers, and thanks for sharing.
+1, it's kind of like decorators and bolts onto the language instead of trying to change it. Along the wins, pylint can be easily extended to parse this, and the errors won't all have the same line numbers.
I am not sure what the original proposal is but for somebody like me who codes in other languages as well, why can't we have a syntax like below:
def foo(str: x) -> str:
why the following is proposed:
def foo(x: str) -> str:
To me the first syntax makes more sense.
I have two very, very small problems with this proposal. Mind you, I am far from a Python expert.
Python is coded in blocks to make it easy to differentiate one block from another, and it works just fine. There is no need nest the type definitions two levels further from the rest of the function. It breaks current syntax for no good reason..
I like it. Just one thing, where: doesn't need to be over-indented. It could be a simple block keyword/instruction.
Another thing I would like to see is a more coherent way of indicating types, like Any(list(str), int) or function(float, float, float).
@Jose: I agree that the where block does not (in principle) need to be overindented; however, it was required by my editor so that I could use code-folding to hide it..
I wholly agree that the proposed syntax is abominable. It's the opposite of readable. I do really like the idea of a 'where' keyword to denote typing, but I think a slight refinement of your idea would prove even better:
def retry(callback, timeout, retries=None) where
........callback: Callable[AnyArgs, Any[int, None]],
........timeout: Any[int, None],
........retries: Any[int, None],
........return None:
....pass
def greeting(name) where name: str, return str:
....return 'Hello ' + name
x = [] where x: List[Employee]
To me, this orders of magnitude more readable than the proposed nonsense.
PS. Note the 8-space indent above would only a convention, not requirement.
Please pardon the spam, but I thought of more refinement when I was composing the following message to python-dev:
The proposed syntax is abominable. It's the opposite of readable..
I much prefer the idea of a 'where' keyword to denote typing, as discussed [here], but I think a refinement of their idea would prove even better:
def retry(callback, timeout, retries=None) where
........callback is Callable[AnyArgs, Any[int, None]],
........timeout is Any[int, None],
........retries is in [int, None], # 'is in' construct is more readable, dunno about complexity
........return is None:
....pass
def greeting(name) where name is str, return is str:
....return 'Hello ' + name
x, y = [], [] where x, y is List[Employee], List[Manager]
To me, this orders of magnitude more readable than the proposed nonsense.
PS. Obviously the 8-space indent above would only a convention, not requirement.
@Benjamin:
I like your version; in some ways, it is cleaner than mine. The one relatively minor drawback is that, with your version, it becomes impossible for code-folding (which is driven by indentation in all editors I have used) to be used to hide the type declaration.
I've only ever used vim or PyCharm as editors and I'm fairly certain each of them could be programmed to handle it. As for purely indent-based folders, I believe this would still work and would be syntactically identical:
def retry(callback, timeout, retries=None)
........where
............callback is Callable[AnyArgs, Any[int, None]],
............timeout is Any[int, None],
............retries is in [int, None],
............return is None:
....pass
I like the syntax, but I am unsure... is it compatible with the python parser fundamentals?
I like the syntax also but why use a new keyword? May be use something like :
def foo (a, b):
as:
a:int
b:float
return:float... | https://aroberge.blogspot.com/2015/01/type-hinting-in-python-focus-on.html?showComment=1422783434589 | CC-MAIN-2018-13 | refinedweb | 2,140 | 61.06 |
Recently I was playing with my sports data and wanted to create a Google map showing my bike ride, the way Garmin Connect or Strava do.

That led me to the Google Maps API, specifically to their JavaScript API. And since I was playing with my data in Python, we'll be creating the map from there.
But first things first. To get the positional data for some of my recent bike rides I downloaded a TCX file from Garmin Connect. Parsing the TCX is easy but more about that some other time. For now let me just show a basic Python 3.x snippet that parses latitude and longitude from my TCX file and stores them in a pandas data frame.
from lxml import objectify
import pandas as pd

# helper function to handle missing data in my file
def add_trackpoint(element, subelement, namespaces, default=None):
    in_str = './/' + subelement
    try:
        return float(element.find(in_str, namespaces=namespaces).text)
    except AttributeError:
        return default

# activity file and namespace of the schema
tcx_file = 'activity_1485936178.tcx'
namespaces = {'ns': 'http://www.garmin.com/xmlschemas/TrainingCenterDatabase/v2'}

# get activity tree
tree = objectify.parse(tcx_file)
root = tree.getroot()
activity = root.Activities.Activity

# run through all the trackpoints and store lat and lon data
trackpoints = []
for trackpoint in activity.xpath('.//ns:Trackpoint', namespaces=namespaces):
    latitude_degrees = add_trackpoint(trackpoint, 'ns:Position/ns:LatitudeDegrees', namespaces)
    longitude_degrees = add_trackpoint(trackpoint, 'ns:Position/ns:LongitudeDegrees', namespaces)
    trackpoints.append((latitude_degrees, longitude_degrees))

# store as dataframe
activity_data = pd.DataFrame(trackpoints, columns=['latitude_degrees', 'longitude_degrees'])
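As a side note, the same namespace-aware lookup can be sketched with only the standard library. The fragment below is a minimal illustration, not the code used in this post: it uses xml.etree instead of lxml, and the embedded TCX fragment and its coordinates are made up.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal TCX fragment (real files come from Garmin Connect)
tcx = """<TrainingCenterDatabase xmlns="http://www.garmin.com/xmlschemas/TrainingCenterDatabase/v2">
  <Activities>
    <Activity>
      <Trackpoint>
        <Position>
          <LatitudeDegrees>50.08804</LatitudeDegrees>
          <LongitudeDegrees>14.42076</LongitudeDegrees>
        </Position>
      </Trackpoint>
    </Activity>
  </Activities>
</TrainingCenterDatabase>"""

# the prefix 'ns' maps to the TCX v2 schema namespace
namespaces = {'ns': 'http://www.garmin.com/xmlschemas/TrainingCenterDatabase/v2'}
root = ET.fromstring(tcx)

# collect (latitude, longitude) tuples from all trackpoints
trackpoints = []
for trackpoint in root.findall('.//ns:Trackpoint', namespaces):
    lat = float(trackpoint.find('.//ns:LatitudeDegrees', namespaces).text)
    lon = float(trackpoint.find('.//ns:LongitudeDegrees', namespaces).text)
    trackpoints.append((lat, lon))
```

The resulting list of tuples can be loaded into a pandas data frame exactly as above.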
Now we can focus on the Google Map JavaScript. The documentation is really great so there is no point in rewriting it myself. This tutorial got me started. In a nutshell, I was about to create a html file that would source Google Map JavaScript API and use its syntax to create a map and plot the route on it.
The following JavaScript code initializes a new map.
var map;
function show_map() {{
    map = new google.maps.Map(document.getElementById("map-canvas"), {{
        zoom: {zoom},
        center: new google.maps.LatLng({center_lat}, {center_lon}),
        mapTypeId: 'terrain'
    }});
}}
What we need to solve is where to centre the map and what should be the zoom. The first task is easy as you can simply take an average of minimal and maximal latitude and longitude. Zoom is where things get a bit tricky.
Zoom is documented here, plus I found an extremely useful answer on Stack Overflow. The trick is to get the extreme coordinates of the route and deal with the Mercator projection Google Maps is using, in order to get the zoom needed to show the whole route on one screen. This is done by the functions _get_zoom and _lat_rad, as shown further down in the Map class I used.
Once we have a map that is correctly centered and zoomed, we can start plotting the route. This step is done using simple polylines. Such a polyline is initialised by the following JavaScript code.
var activity_route = new google.maps.Polyline({{
    path: activity_coordinates,
    geodesic: true,
    strokeColor: '#FF0000',
    strokeOpacity: 1.0,
    strokeWeight: 2
}});
Where activity_coordinates contains the coordinates of my route.
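For illustration, the activity_coordinates placeholder can simply be a comma-separated string of LatLng constructors built from the track points. The sketch below uses made-up coordinates; the exact string format is an assumption, chosen to match what the Polyline path expects:

```python
# hypothetical track points as (latitude, longitude) tuples
points = [(50.08804, 14.42076), (50.08810, 14.42130)]

# build the JavaScript array body for the Polyline path
activity_coordinates = ",".join(
    "new google.maps.LatLng({lat}, {lon})".format(lat=lat, lon=lon)
    for lat, lon in points)
```

Dropping this string between the square brackets of the JavaScript array yields a valid Polyline path.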
I wrapped all this into a Python class called Map that looks as follows
from __future__ import print_function
import math


class Map(object):
    def __init__(self):
        self._points = []

    def add_point(self, coordinates):
        """
        Adds coordinates to map
        :param coordinates: latitude, longitude
        :return:
        """
        # add only points with existing coordinates
        if not ((math.isnan(coordinates[0])) or (math.isnan(coordinates[1]))):
            self._points.append(coordinates)

    @staticmethod
    def _lat_rad(lat):
        """
        Helper function for _get_zoom()
        :param lat:
        :return:
        """
        sinus = math.sin(math.radians(lat + math.pi / 180))
        rad_2 = math.log((1 + sinus) / (1 - sinus)) / 2
        return max(min(rad_2, math.pi), -math.pi) / 2

    def _get_zoom(self, map_height_pix=900, map_width_pix=1900, zoom_max=21):
        """
        Algorithm to derive zoom from the activity route; for details see the
        Google Maps API documentation and the Stack Overflow answer linked above.
        :param zoom_max: maximal zoom level based on Google Map API
        :return:
        """
        # at zoom level 0 the entire world can be displayed in an area that is 256 x 256 pixels
        world_heigth_pix = 256
        world_width_pix = 256

        # get boundaries of the activity route
        max_lat = max(x[0] for x in self._points)
        min_lat = min(x[0] for x in self._points)
        max_lon = max(x[1] for x in self._points)
        min_lon = min(x[1] for x in self._points)

        # calculate longitude fraction
        diff_lon = max_lon - min_lon
        if diff_lon < 0:
            fraction_lon = (diff_lon + 360) / 360
        else:
            fraction_lon = diff_lon / 360

        # calculate latitude fraction
        fraction_lat = (self._lat_rad(max_lat) - self._lat_rad(min_lat)) / math.pi

        # get zoom for both latitude and longitude
        zoom_lat = math.floor(math.log(map_height_pix / world_heigth_pix / fraction_lat) / math.log(2))
        zoom_lon = math.floor(math.log(map_width_pix / world_width_pix / fraction_lon) / math.log(2))

        return min(zoom_lat, zoom_lon, zoom_max)

    def __str__(self):
        """
        A Python wrapper around Google Map Api v3; see the links above for details.
        :return: string to be stored as html and opened in a web browser
        """
        # center of the activity route
        center_lat = (max((x[0] for x in self._points)) + min((x[0] for x in self._points))) / 2
        center_lon = (max((x[1] for x in self._points)) + min((x[1] for x in self._points))) / 2

        # get zoom needed for the route
        zoom = self._get_zoom()

        # string with points for the google.maps.Polyline
        activity_coordinates = ",".join(
            "new google.maps.LatLng({lat}, {lon})".format(lat=x[0], lon=x[1])
            for x in self._points)

        return """
        <script type="text/javascript" src="https://maps.googleapis.com/maps/api/js"></script>
        <div id="map-canvas" style="height: 100%; width: 100%"></div>
        <script type="text/javascript">
            var map;
            function show_map() {{
                map = new google.maps.Map(document.getElementById("map-canvas"), {{
                    zoom: {zoom},
                    center: new google.maps.LatLng({center_lat}, {center_lon}),
                    mapTypeId: 'terrain'
                }});
                var activity_coordinates = [{activity_coordinates}]
                var activity_route = new google.maps.Polyline({{
                    path: activity_coordinates,
                    geodesic: true,
                    strokeColor: '#FF0000',
                    strokeOpacity: 1.0,
                    strokeWeight: 2
                }});
                activity_route.setMap(map);
            }}
            google.maps.event.addDomListener(window, 'load', show_map);
        </script>
        """.format(zoom=zoom, center_lat=center_lat, center_lon=center_lon,
                   activity_coordinates=activity_coordinates)
Using this to plot my route, I simply start with object initialization:
from activity_map import Map
import webbrowser

# init map
loc_map = Map()
The next step is to add my route coordinates to the Map object.
# add coordinates
activity_data.apply(lambda row: loc_map.add_point((row['latitude_degrees'],
                                                   row['longitude_degrees'])),
                    axis=1)
Finally, I can print my Map object into some html file and open it in a browser (which is when the Google Maps API is called).
# save as html
with open('activity_map.html', "w") as out:
    print(loc_map, file=out)

# open in a web browser
webbrowser.open_new_tab('activity_map.html')
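As a side note on why this last step works: print() calls str() on its argument, which is what routes the whole HTML template through the Map class's __str__ method. A tiny self-contained sketch of the same mechanism (the Greeting class below is made up purely for illustration):

```python
# print() calls str() on its argument, so any object whose class
# defines __str__ can be written to a file exactly like Map above.
from __future__ import print_function
import io


class Greeting(object):
    def __str__(self):
        return "<html><body>hello</body></html>"


buf = io.StringIO()
print(Greeting(), file=buf)   # same pattern as print(loc_map, file=out)
print(buf.getvalue())
```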
And voilà, here is my route! Below is only a picture, but in reality it is an interactive JavaScript map, of course.
Please note that if you want to embed such a map in your page, you need to use an API key. You can apply for one here.
Really great article
Thanks!
Pingback: Notes: Setting Up OSRM & Using OSM Filter for Machine Learning
Can you provide the source files? I do not understand how you made the connection between javascript and python.
Please see the wrapped code defining the Map class in Python.
How did you come up with the _lat_rad equations and zoom calculation? Can you explain?
Hi Piotr,
As stated in the _get_zoom docstring, please check the linked references (specifically the answer by John S).
Really great article! I have a question: what if I want to summarize 7 days or more of routes? How can I do that? I tried it, but it always shows only the first day's route.
Thank you
Hi William, Thanks! I’m not sure I follow the question. The script is plotting whatever is in the file with the coordinates, be it one ride or many. Does that help?
Yeah, I got it. What I mean is: if I put all the coordinates in one file, it only shows one curved route, but I really want to get 7 or more routes within one map.
Python may be a useful tool for parsing HTML files.
First thing we need to do is to access the file. For this, we can use python urllib library:
from urllib import urlopen

url = ''
content = urlopen(url).read()
print content
The code above should print the source of the url.
The second part consists of selecting the desired part from the text. Suppose we want to extract the content of a table in the middle of the page. We can use Python regular expressions.
import re

pattern = '<tr>.*?</tr>'
m = re.findall(pattern, content)
The code above will return, in m, a list of all occurrences of 'pattern'. In this pattern, '.' represents any character and '*' means we are interested in 0 or more repetitions. The '?' character makes the match minimal (non-greedy).
For example, if content was:
<tr>hello</tr> something <tr>world</tr>
The list would be
['<tr>hello</tr>', '<tr>world</tr>']
But if we didn’t include the ‘?’ character, the list would be
['<tr>hello</tr> something <tr>world</tr>']
I made similar code for a very specific task and I probably won't use this code again. I was advised not to parse HTML files using regular expressions. An alternative for Python is using an HTML/XML parsing library, for example, Beautiful Soup.
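For completeness, here is a sketch of that parsing-library route using only the standard library (html.parser in Python 3; Beautiful Soup wraps a similar idea in a friendlier API but is a third-party install). The RowExtractor name is my own, not from any particular library:

```python
# A stdlib parser for the same task: collect the text inside each <tr>.
from html.parser import HTMLParser


class RowExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_tr = False
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":          # start collecting a new row
            self.in_tr = True
            self.rows.append("")

    def handle_endtag(self, tag):
        if tag == "tr":
            self.in_tr = False

    def handle_data(self, data):
        if self.in_tr:           # text outside <tr> is ignored
            self.rows[-1] += data


parser = RowExtractor()
parser.feed("<tr>hello</tr> something <tr>world</tr>")
print(parser.rows)  # ['hello', 'world']
```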
I am writing a computer vision library from scratch in Python to work with an rpi (Raspberry Pi) camera. For now I only care about greyscale images, passed in as a numpy array img. I have tried this on both a model B and an rpi3. Here is my sobel implementation:
def sobel(img):
xKernel = np.array([[-1,0,1],[-2,0,2],[-1,0,1]])
yKernel = np.array([[-1,-2,-1],[0,0,0],[1,2,1]])
sobelled = np.zeros((img.shape[0]-2, img.shape[1]-2, 3), dtype="uint8")
for y in range(1, img.shape[0]-1):
for x in range(1, img.shape[1]-1):
gx = np.sum(np.multiply(img[y-1:y+2, x-1:x+2], xKernel))
gy = np.sum(np.multiply(img[y-1:y+2, x-1:x+2], yKernel))
g = abs(gx) + abs(gy) #math.sqrt(gx ** 2 + gy ** 2) (Slower)
g = g if g > 0 and g < 255 else (0 if g < 0 else 255)
sobelled[y-1][x-2] = g
return sobelled
This works on greyscale images, but it is slow: >15 seconds per image. Instead of squaring, adding, and square rooting gx and gy, I take the sum of their absolute values, which is faster (see the comment in the code). The resolution I use on the rpi is 480x360, well below the camera's full 3280x2464. I suspect the bottleneck is the matrix convolutions, i.e. the np.sum(np.multiply(...)) calls inside the nested loops. np.multiply itself is fast, but the pixel loops are pure Python, whereas numpy would run them in C. Is there a faster way to apply a 3x3 matrix convolution?
Even though you're building out your own library, you really should use libraries for convolution; they will do the resulting operations in C or Fortran in the backend, which will be much, much faster.
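For a sense of what the library route looks like in practice (a sketch, assuming SciPy is installed; scipy.ndimage runs its convolution loops in compiled code):

```python
import numpy as np
from scipy import ndimage

img = np.arange(25, dtype=float).reshape(5, 5)

# Sobel derivative along each axis; note ndimage pads the borders
# (mode="constant" pads with zeros) instead of shrinking the output
# like the hand-rolled version does.
gx = ndimage.sobel(img, axis=1, mode="constant")
gy = ndimage.sobel(img, axis=0, mode="constant")
g = np.abs(gx) + np.abs(gy)   # same |gx| + |gy| approximation as above
print(g.shape)
```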
But to do it yourself if you'd like, use linear separable filters. Here's the idea:
Image:
1 2 3 4 5
2 3 4 5 1
3 4 5 1 2
Sobel x kernel:

-1  0  1
-2  0  2
-1  0  1
Result:
8, 3, -7
At the first position of the convolution, you'll be computing 9 values. First off, why? You're never going to add the middle column, don't bother multiplying it. But that's not the point of linear separable filters. The idea is simple. When you place the kernel at the first position, you'll multiply the third column by
[1, 2, 1]. But then two steps later, you'll multiply the third column by
[-1, -2, -1]. What a waste! You've already calculated that, you just need to negate it now. And that's the idea with a linear separable filter. Notice that you can break up the filter into a matrix outer product of two vectors:
[1]
[2] * [-1, 0, 1]
[1]
Taking the outer product here yields the same matrix. So the idea here is to split the operation into two pieces. First multiply the whole image with the row vector, then the column vector. Taking the row vector
-1 0 1
across the image, we end up with
 2  2  2
 2  2 -3
 2 -3 -3
And then passing the column vector through to be multiplied and summed, we again get
8, 3, -7
One other nifty trick that may or may not be helpful (depends on your tradeoffs between memory and efficiency):
Note that in the single row multiplication, you ignore the middle value, and just subtract the right from the left values. This means that effectively what you are doing is subtracting these two images:
3 4 5   1 2 3
4 5 1 - 2 3 4
5 1 2   3 4 5
If you cut the first two columns off of your image, you get the left matrix, and if you cut the last two columns off, you get the right matrix. So you can just compute this first part of the convolution simply as
result_h = img[:,2:] - img[:,:-2]
And then you can loop through for the remaining column of the sobel operator. Or, you can even proceed further and do the same thing we just did. This time for the vertical case, you simply need to add the first and third row and twice the second row; or, using numpy addition:
result_v = result_h[:-2] + result_h[2:] + 2*result_h[1:-1]
And you're done! I may add some timings here in the near-future. For some back of the envelope calculations (i.e. hasty Jupyter notebook timings on a 1000x1000 image):
new method (sums of the images): 8.18 ms ± 399 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
old method (double for-loop):7.32 s ± 207 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Yes, you read that right: 1000x speedup.
Here's some code comparing the two:
import numpy as np

def sobel_x_orig(img):
    xKernel = np.array([[-1,0,1],[-2,0,2],[-1,0,1]])
    sobelled = np.zeros((img.shape[0]-2, img.shape[1]-2))
    for y in range(1, img.shape[0]-1):
        for x in range(1, img.shape[1]-1):
            sobelled[y-1, x-1] = np.sum(np.multiply(img[y-1:y+2, x-1:x+2], xKernel))
    return sobelled

def sobel_x_new(img):
    result_h = img[:,2:] - img[:,:-2]
    result_v = result_h[:-2] + result_h[2:] + 2*result_h[1:-1]
    return result_v

img = np.random.rand(1000, 1000)
sobel_new = sobel_x_new(img)
sobel_orig = sobel_x_orig(img)
assert (np.abs(sobel_new-sobel_orig) < 1e-12).all()
Of course, 1e-12 is some serious tolerance, but this is per element so it should be OK. Also, I have a float image; you'll of course have larger differences for uint8 images.
Note that you can do this for any linear separable filter! That includes Gaussian filters. Note also that in general, this requires a lot of operations. In C or Fortran or whatever, it's usually just implemented as two convolutions of the single row/column vectors because in the end, it needs to actually loop through every element of each matrix anyways; whether you're just adding them or multiplying them, so it's no faster in C to do it this way where you add the image values than if you just do the convolutions. But looping through
numpy arrays is super slow, so this method is way faster in Python. | https://codedump.io/share/t4BBGhIFzkwf/1/how-to-improve-the-efficiency-of-a-sobel-edge-detector | CC-MAIN-2021-21 | refinedweb | 982 | 71.75 |
Hi, I tried to design a UI using GridBagLayout. I have a problem: the text of the Label is short (for example, "Hai") and the gridwidth=1; when I try to add a JTextBox after the Label, I find a huge space between the Label text and the TextBox. Could anyone help me reduce the space between those components?
Thanks for ur help in advance.
import java.awt.*;
import javax.swing.*;

public class GridBagLayoutExample {
    public static void main(String[] args) {
        JFrame f = new JFrame();
        JPanel p = new JPanel();
        p.setLayout(new GridBagLayout());
        GridBagConstraints c = new GridBagConstraints();
        c.insets = new Insets(2, 2, 2, 2);
        c.gridx = 0;
        c.gridy = 0;
        c.ipadx = 5;
        c.ipady = 5;
        p.add(new JLabel("Java"), c);
        c.gridx = 1;
        c.ipadx = 0;
        c.ipady = 0;
        p.add(new JTextField(20), c);
        f.getContentPane().add(p);
        f.setSize(300, 200);
        f.show();
    }
}
What I exactly want is: one JLabel named "Hai" and another Label named "Welcome". The text box next to "Hai" and the one next to "Welcome" currently start at the same X position. I want the text box after "Hai" to start before the "Welcome" text box's position.
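Regarding that follow-up: components in the same GridBagLayout column always start at the same x position, so one row's field can't begin earlier while both fields share a column. One simple workaround is to give each row its own left-aligned panel, so each field starts right after its own label. A rough sketch (component names are illustrative, and the FlowLayout gaps are arbitrary):

```java
import java.awt.FlowLayout;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;

// Each row is its own left-aligned panel, so the field hugs its label
// instead of lining up with fields in other rows.
public class RowPanels {
    static JPanel row(String labelText) {
        JPanel p = new JPanel(new FlowLayout(FlowLayout.LEFT, 5, 2));
        p.add(new JLabel(labelText));
        p.add(new JTextField(20));
        return p;
    }

    public static void main(String[] args) {
        JPanel hai = row("Hai");
        JPanel welcome = row("Welcome");
        System.out.println(hai.getComponentCount() + " " + welcome.getComponentCount());
    }
}
```

Stack the row panels vertically (for example in a BoxLayout or a single-column GridBagLayout) and each text box starts right after its own label.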
Hexagonal Architecture: What Is It and How Does It Work?
Let's take a closer look at the popular hexagonal architectural pattern and see what it's all about.
Hexagonal architecture is a model or pattern for designing software applications. The idea behind it is to put inputs and outputs at the edges of your design. In doing so, you isolate the central logic (the core) of your application from outside concerns. Having inputs and outputs at the edge means you can swap out their handlers without changing the core code.
One major appeal of using hexagonal architecture is that it makes your code easier to test. You can swap in fakes for testing, which makes the tests more stable.
Hexagonal architecture was a departure from layered architecture. It's possible to use dependency injection and other techniques in layered architecture to enable testing. But there's a key difference in the hexagonal model: the UI can be swapped out, too. And this was the primary motivation for the creation of hexagonal architecture in the first place. There's a bit of interesting trivia about its origins. The story goes a little like this...
Hexagonal Architecture? More Like Ports and Adapters!
Hexagonal architecture was proposed by Alistair Cockburn in 2005. "Hexagonal architecture" was actually the working name for the "ports and adapters pattern," which is the term Cockburn settled on in the end. But the "hexagonal architecture" name stuck, and that's the name many people know it by today.
Cockburn had his "eureka moment" about ports and adapters after reading Object Design: Roles, Responsibilities, and Collaborations by Rebecca Wirfs-Brock and Alan McKean. Cockburn explains that the authors "call [the adapter] stereotype 'Interfacers,' but show examples of them using the GoF pattern 'Adapter.'" It's important to note the use of the term "Interfacers" in that wiki entry because that's really what it's all about!
Imagine a hexagon shape with another, larger hexagon shape around it. The center hexagon is the core of the application (the business and application logic). The layer between the core and the outer hexagon is the adapter layer. And each side of the hexagon represents the ports.
It's not as if there are six (and only six) ports in the hexagonal architecture. The analogy kind of falls apart there. The sides are simply representations in the model for ports. Cockburn chose this flat-sided shape instead of a circle to convey a specific intent about ports.
Here's my own interpretation of a hexagonal architecture diagram:
The model is balanced (Cockburn has proclaimed his affinity for symmetry) with some external services on the left and others on the right. Remember that hexagonal architecture is a model for how to structure certain aspects of the application. It's specifically about dealing with I/O. I/O goes on the outside of the model. Adapters are in the gray area. The sides of the hexagons are the ports. Finally, the center is the application and domain. There are no specific requirements about the core, just that all of the business/application/domain logic lives there.
So what are ports anyway? In C#, they're interfaces.
Interfaces as Ports
Thirteen years after Cockburn's idea, we commonly use the term "interface" without remembering how the world was without one. But what is an interface really? An interface defines how one module communicates with another.
For Example
Take two modules: "module A" and "module B." When module A needs to send a message to module B, it will do so through the interface. The interface is in the language of module A. (Let's not worry too much about module B just yet; that'll come later.)
In hexagonal architecture terms, the interface is the port. A port can represent things like the user, another app, a message bus, or a database. In hexagonal architecture, a port — much like an interface — is an abstraction.
Ab-what?
Abstraction just means we don't know how something does what it does. With an abstraction, we only know the high-level details. For example, the instruction "Tell Johnny to meet me at the bank" is an abstraction. We don't care how you tell Johnny so long as he gets the message. If we wanted to be concrete about it, rather than abstract, we'd say "Call Johnny using the following procedure: Turn on your phone; tap the phone icon; now, tap the following numbers on the screen: 555-5555; tap send...." I won't bore you with any more details.
So, we have this interface (as a port). Our module can use the interface to send messages. Now we need to make that message talk to something else. Enter the adapter.
Concretions as Adapters
Finally, the adapter is where we want to think about the concrete implementation. This is how the message is either handled or passed along. A message to "save the user record" might go to a database, a file, or a network call over HTTP. The adapter might even keep the message in memory. Anything goes so long as the adapter responds accordingly.
Here's a concrete example using some C# code. Notice how the UserAdmin only knows about the interface.
public class UserAdmin
{
    private readonly IUserRepo _userRepo;

    public UserAdmin(IUserRepo userRepo)
    {
        _userRepo = userRepo;
    }

    UserData _userData;

    public void Save()
    {
        Validate(_userData);
        _userRepo.Save(_userData);
    }

    ...
}
In this example, the Save method uses the IUserRepo interface. The UserAdmin class has no idea how the _userRepo does its thing. All our UserAdmin object knows about is the Save method on whatever the _userRepo references via the interface.
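The port itself never appears in the article; judging from how UserAdmin and the adapters below use it, the IUserRepo declaration is presumably nothing more than:

```csharp
// Assumed shape of the port, inferred from the calls above — the
// article itself never shows this declaration.
public interface IUserRepo
{
    void Save(UserData userData);
}
```

Module A compiles against this declaration alone, which is what lets the database, HTTP, and fake adapters swap freely.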
Let's say the _userRepo represents module B in our earlier example. Module A is the UserAdmin class. Module A sends a message to module B via the IUserRepo interface. Here's where our adapter comes in. The adapter is the implementing class of the IUserRepo. In our running application, this could write to a database, as in the following code:
class UserDatabaseRepository : IUserRepo
{
    public void Save(UserData userData)
    {
        using (var db = GetDatabaseConnection())
        {
            db.ExecuteSave(userData);
        }
    }
}
Or, I could send the record over HTTP, as in this example:
class UserHttpRepository : IUserRepo
{
    public void Save(UserData userData)
    {
        using (var http = GetHttpConnection(Connections.UserRepository))
        {
            http.Post(userData);
        }
    }
}
Either way, module A behaves the same: It interacts with the repository without any details about what that repository does with the message. This is the power of the adapter!
Our UserDatabaseRepository and UserHttpRepository classes are the adapters. Each adapts the message to the underlying I/O. The adapters aren't the database and the TCP port; rather, they adapt the message from the port to its destination(s).
But how does this help my code, you're wondering, and what does it have to do with hexagonal architecture?
It's About Swappable Components
Module A can use the interface to send a message. It has no way of knowing how or what will actually receive the message. This is the biggest benefit of hexagonal architecture!
Hexagonal architecture is all about swapping components, specifically external components. In the example above, the module host would inject the IUserRepo into the UserAdmin class. The host could be a web app, a console application, a test framework, or even another app. The point is to make the core independent of its inputs and outputs.
Cockburn stressed the importance of decoupling the application from the UI. This is where he really sought to differentiate his approach from layered architecture. After all, the main goal of decoupling through ports and adapters is to test-drive the application using software (a test harness).
And About the Test Drive
I've already enumerated the advantages of using hexagonal architecture in your design. Now, let me clarify explicitly why you should use this pattern.
The bottom line is that you don't need to rely on external factors to test your application. Instead, just make the core of the system interact through ports. This way, your test framework will drive the application through those ports. You could even use files and scripts to drive it instead!
To give you a common scenario of how this works in practice, let's say we're using a .NET testing framework such as xUnit. The test runner, in this case, is the host.
The following four interactions between the tests and the application will happen via ports:
- Tests send input to the application.
- Test doubles receive output from the application.
- Test doubles return input to the application.
- Tests receive output from the application.
The tests and the test doubles (such as mocks, fakes, and stubs) drive the application through the ports. But what does that look like, you ask?
The UserRepo Revisited
Let's look at that UserRepo again in light of testing. It's common to use a mock or fake during unit testing. A FakeUserRepo might look like this:
using UserDomain.Interfaces;
using UserDomain.Data;
using System.Collections.Generic;
using System.Linq;

namespace UserTests
{
    public class FakeUserRepo : IUserRepo
    {
        List<UserData> _users = new List<UserData>();

        public void Save(UserData user)
        {
            _users.Add(user);
        }

        public bool IsSaved(int userId)
        {
            return _users.Any(user => user.ID == userId);
        }
    }
}
This is a fake because it stores the data in memory in the List. Notice that it implements IUserRepo. I also added a very rudimentary way to check the List.
Now we need to pass this FakeUserRepo adapter to the UserAdmin in our test code like this:
public class UserAdminTests
{
    [Fact]
    public void SavesTheUser()
    {
        var fakeRepo = new FakeUserRepo();
        var sut = new UserAdmin(fakeRepo);
        var userData = new UserData { ID = 1 };

        sut.Save(userData);

        Assert.True(fakeRepo.IsSaved(1));
    }
}
You can see that this test code is passing the FakeUserRepo into the UserAdmin class using its constructor.
In .NET-land, a web UI would interact with the UserAdmin class through an HTTP endpoint (the port) and ASP.NET Web API (the adapter). Web API routes and adapts the HTTP message to the controller. What it leaves to you is writing the adapter code in the controller. That's where you'd adapt the action and message to the appropriate application class, in this case UserAdmin.
That's All There Is to It!
Although hexagonal architecture seems like some vague mystical concept from the ages, it's actually widespread in modern software development. The main theory behind it is decoupling the application logic from the inputs and outputs. The goal is to make the application easier to test. Alistair Cockburn changed the terminology from "hexagonal architecture" to "ports and adapters." Thankfully, hexagonal architecture sort of stuck. It just sounds a heck of a lot cooler!
Published at DZone with permission of Phil Vuollet , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
Hmm... the session object is still dropping (being lost), then being recovered (remembered) from time to time.
Brief view of the Publisher Handler setup.
*Note the sess variable is not visible to other functions unless
declared globally.
[Global Space]
sess = None

def login(req):
    global sess
    sess = Session(req)
    sess['this'] = 'that'
    ...<more code>...
    sess.save()
    return
def otherFunc(req):
    global sess
    sess = Session(req)  # tested with and without this line (correct?)
    sess.load()
    ...<more code>...
    sess.save()
    return
I can see the session being written to a file in a serialized way in /tmp
mp_sess.dbm.bak
mp_sess.dbm.dat
mp_sess.dbm.dir
I've also executed a:
watch -d 'ls -l /tmp/mp*'
And upon creating new sessions via separate browsers, I have seen the file size of the above files grow in real-time, which means that data is in fact being added to the sessions, which are partly human-readable.
However, the problem I'm having is that occasionally when
transitioning to other pages such as
will return 'NoneType' object has no attribute 'load' # referring to
the sess object.
So I click the [BACK] button in the browser and re-submit the data.
After about 7 tries, the script all of a sudden recognizes the sess object. Sometimes this error does not happen and the page loads on the first try. How can I prevent this behavior?
~= Chris =~
On Wed, 12 Jan 2005 23:48:39 +0900, Hiroaki KAWAI <kawai at iij.ad.jp> wrote:
> _______________________________________________
> Mod_python mailing list
> Mod_python at modpython.org
efficient native code.
To convert the .NET game code (your C# and UnityScript scripts) into JavaScript, we use a technology called IL2CPP. IL2CPP takes .NET bytecode and converts it to corresponding C++ source files, which is then compiled using emscripten to convert your scripts to JavaScript.
Unity WebGL content is supported in the current versions of most major browsers on the desktop, however there are differences in the level of support offered by the different browsers. Mobile devices are not supported by Unity WebGL.
Not all features of Unity are available in WebGL builds, mostly due to constraints of the platform. Specifically:
Threads are not supported due to the lack of threading support in JavaScript. This applies both to Unity's internal use of threads to speed up performance, and to the use of threads in script code and managed dlls. Essentially, anything in the
System.Threading namespace is not supported.
WebGL builds cannot be debugged in MonoDevelop or Visual Studio. See: Debugging and trouble shooting WebGL builds.
Browsers do not allow direct access to IP sockets for networking, due to security concerns. See: WebGL Networking.
The WebGL graphics API is equivalent to OpenGL ES 2.0, which has some limitations. See: WebGL Graphics.
WebGL builds use a custom backend for Audio, based on the Web Audio API. This supports only basic audio functionality. See: Using Audio in WebGL.
WebGL is an AOT platform, so it does not allow dynamic generation of code using
System.Reflection.Emit. This is the same on all other IL2CPP platforms, iOS, and most consoles. | https://docs.unity3d.com/530/Documentation/Manual/webgl-gettingstarted.html | CC-MAIN-2022-40 | refinedweb | 260 | 60.21 |
Red Hat Bugzilla – Bug 591537
RFE: implement restart action in sandbox init script
Last modified: 2010-07-02 15:43:06 EDT
Description of problem:
1) "restart" action is not implemented in the init script
2) "status" action is implemented but not advertised when I run "service sandbox"
Version-Release number of selected component (if applicable):
policycoreutils-sandbox-2.0.82-14.el6
How reproducible:
always
Steps to Reproduce:
Actual results:
Expected results:
The sandbox init script is not a standard init script. It is just something that needs to be run at boot time. It does not start a service; it just sets up the file system so that tools like sandbox and the xguest/mls environments can use pam_namespace.
Unless Bill has a better location for this, I think we have to close this as not a bug.
You could theoretically make it an upstart event, but that's probably overkill.
There seem to be two options for this bug:
1) (preferred) Find better location for it.
2) (easier) Despite the fact it is not a standard init script I think it would be good and easy to implement few actions required by init script quidelines. This is something we successfully accomplished in RHEL 6 init script project [1] for most components.
Fedora guidelines for init scripts [2] provide details what is the minimum expected from this bug. But we definitely shouldn't close this bug as a not a bug.
[1]
[2]
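For reference, the shape the guidelines ask for is roughly the following (a generic sketch with stub actions, not the actual sandbox script):

```shell
#!/bin/sh
# Generic init-script action dispatcher (sketch). start/stop/status are
# stubs standing in for whatever setup the real script performs.

start()  { echo "setting up sandbox mounts"; }
stop()   { echo "tearing down sandbox mounts"; }
status() { echo "sandbox is configured"; }

dispatch() {
    case "$1" in
        start)   start ;;
        stop)    stop ;;
        restart) stop && start ;;   # restart is just stop followed by start
        status)  status ;;
        *)       echo "Usage: $0 {start|stop|restart|status}" >&2; return 2 ;;
    esac
}

dispatch restart
```

A real script would of course call the actual setup/teardown logic (and typically source /etc/rc.d/init.d/functions); the sketch only shows the dispatch shape, including the restart action this bug asks for.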
Fixed in policycoreutils-2.0.82-28.el6
Red Hat Enterprise Linux Beta 2 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you. | https://bugzilla.redhat.com/show_bug.cgi?id=591537 | CC-MAIN-2018-34 | refinedweb | 294 | 60.04 |
I have two models: Child and Day. A Child has many days, and a day
belongs to a child.
I want to create an Ajax table listing the children and, after pressing a button, show the days that belong to that child.
Here is my code:
(list.rhtml)
(child_controller)
def show_days
  @child = Child.find(params[:id])
  @days = @child.days
end
(show_days.rjs)
page.replace_html 'show_days', :partial => 'days', :collection => @days
page.visual_effect :toggle_appear, 'show_days', 'duration' => 0.5
(_days.rhtml)
<% for day in @days -%>
The problem is, in Firefox I get a new page with the following text:
try {
Element.update(“show_days”, "
What am I doing wrong? | https://www.ruby-forum.com/t/ajax-table-does-not-update/62415 | CC-MAIN-2018-47 | refinedweb | 109 | 79.56 |
16 June 2008 10:40 [Source: ICIS news]
MUMBAI (ICIS news)--Neste Oil will establish a €250m-300m ($385m-462m) joint venture with two Bahrain-based companies to build a high-quality lubricant base oils plant in the Gulf state, the Finnish company said on Monday.
The joint venture would set up a 400,000 tonne/year very high viscosity index (VHVI) base oil plant for use in blending top-tier lubricants, it added in a statement.
Neste Oil would have a 45% share in the JV, which would be called Bahrain Base Oil Co, and the remaining stake would be held equally at 27.5% each by
"Demand for these sulphur-free base oils is increasing globally, thanks to their ability to meet current and future performance requirements, as well as more stringent environmental standards," said Neste Oil specialty products executive vice-president Kimmo Rahkamo.
Feedstock for the new base oil facility will be provided by the hydrocracker unit commissioned in 2007 at Bapco's oil refinery, the company added.
OGHC is owned by the government of
($1 = €0.65)
For more on base oils visit ICIS chemical intelligence | http://www.icis.com/Articles/2008/06/16/9132447/neste-oil-forms-jv-for-bahrain-base-oil-unit.html | CC-MAIN-2015-18 | refinedweb | 192 | 57.5 |
The data.txt contains:
PM|||1|XX|PID|HOT|PLAZA|NO. 4|Main Description|
The data.txt must change to the data below:
PM|null|null|1|XX|PID|HOT|PLAZA|NO. 4|Main Description
I have an error in my loop. Any help with why it's not working would be very appreciated.
import java.io.*;
import java.util.*;

public class App8 {

    public App8() throws IOException {
        BufferedReader br = new BufferedReader(new FileReader(new File("data.txt")));
        int j = 0;
        String rawString = null;
        String modifiedString = null;

        while ((rawString = br.readLine()) != null)
        {
            //if( (j = rawString.indexOf("||")) > 0 )
            //Changes made here, It does not work, Could you help
            ////////////////// problem here, need a loop here ///////
            while (((j = rawString.indexOf("||")) > 0))
            {
                StringBuffer sb = new StringBuffer(rawString);
                sb.replace(j, j+2, "|null|");
                modifiedString = sb.toString();
            }
        }

        //Verify output (:NB: This will only print out the last value)
        System.out.println(modifiedString);
    } //end of constructor

    public static void main(String args[]) throws IOException {
        new App8();
    } //End of Main Method
} // End of Application
Need Java Guru help with IO loop (3 messages)
- Posted by: Web Master
- Posted on: August 27 2001 13:34 EDT
Threaded Messages (3)
- Need Java Guru help with IO loop by Jay Feng on August 27 2001 15:33 EDT
- Need Java Guru help with IO loop by Web Master on September 04 2001 13:25 EDT
- Need Java Guru help with IO loop by Andy Nguyen on August 29 2001 10:40 EDT
Need Java Guru help with IO loop
Try changing the "=" to "+=" in the while loop, i.e.,
- Posted by: Jay Feng
- Posted on: August 27 2001 15:33 EDT
- in response to Web Master
modifiedString += sb.toString();
Feng
I'm no java guru.
Need Java Guru help with IO loop
It works now. Thanks for the help.
- Posted by: Web Master
- Posted on: September 04 2001 13:25 EDT
- in response to Jay Feng
Need Java Guru help with IO loop
I think you should use modifiedString in your inner loop.
- Posted by: Andy Nguyen
- Posted on: August 29 2001 10:40 EDT
- in response to Web Master
Like this:
while ((rawString = br.readLine()) != null) {
    modifiedString = rawString;
    while ((j = modifiedString.indexOf("||")) > 0) {
        StringBuffer sb = new StringBuffer(modifiedString);
        sb.replace(j, j+2, "|null|");
        modifiedString = sb.toString();
    }
    System.out.println("Old line: " + rawString);
    System.out.println("New line: " + modifiedString);
}
Andy | https://www.theserverside.com/discussions/thread/8694.html | CC-MAIN-2021-17 | refinedweb | 400 | 65.62 |
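A side note for anyone landing here later: the whole transformation can also be done without any indexOf bookkeeping by looping String.replace until no empty field is left. The loop matters because runs like "|||" overlap, so a single replace pass misses one of them. (A sketch using the sample line from the original post; it needs a modern JDK, since replace(CharSequence) arrived in Java 5, after this thread was written, and trailing-pipe handling is left out.)

```java
// Loop String.replace until no "||" remains; one pass is not enough
// because "|||" contains overlapping matches.
public class NullFiller {
    static String fillNulls(String line) {
        while (line.contains("||")) {
            line = line.replace("||", "|null|");
        }
        return line;
    }

    public static void main(String[] args) {
        // prints PM|null|null|1|XX|PID|HOT|PLAZA|NO. 4|Main Description|
        System.out.println(fillNulls("PM|||1|XX|PID|HOT|PLAZA|NO. 4|Main Description|"));
    }
}
```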
Sense-arise Automation for Home
Electricity at your Fingertips
Hello geeks, aspirants, tech enthusiasts, students, and learners! We are back with our second project, with different components, in a fruitful aim to save electricity. This might be similar to what we see in hotel rooms or resorts in many areas. Follow our steps carefully in order to make the project work on your end too. We would like to thank the authors whose RFID code we referred to, as we are beginners with RFID.
Arduino Project Hub (our reference):
Step 1: Things Required (Hardware and Software)
Hey guys! These are the things required to make this project. We have given the Amazon link for each product, and a link for the software too.
Hardware
- (1 nos) Arduino UNO (with R3 cable)
- (1 nos) Relay (2x switch)
- (1 nos) Bulb (for prototype)
- (1 nos) Breadboard
- (as required) Wires
- (1 nos) RFID RC522 reader
Software
- Arduino IDE
Step 2: Hardware Connection and Explanation
Hardware Explanation
There are 8 pins on the RFID reader. Each has its own function.
RC522 Chip Explanation:
The RC522 chip is a combination of interconnected modules: an antenna, an analog interface (which handles modulation and demodulation of the analog signals), a contactless UART (which manages the protocol), a FIFO buffer (which manages data transmission on a first-in-first-out basis), serial I/O, and the registers and host interface. Together these let the device emit a field and detect RFID tags instantly.
Pin Explanation
- SDA - Serial data line input and output (also acts as the SPI slave-select, SS, pin)
- SCK - Serial Clock Input
- MOSI - Serial Peripheral Interface (SPI) Master Out Slave In
- IRQ - Interrupt Request Output ( Notifies for every Interrupt event )
- GND - Ground
- MISO - SPI Master In Slave Out
- RST - Reset
- 3.3V - Voltage In
Hardware Connection
Please connect the pins according to the diagrams given.
The first one is Arduino to RFID, and the next one is Arduino to relay.
Step 3: Software Code
Procedure:
- Open the Arduino IDE
- Create a new sketch in the folder where you have saved the MFRC522 library files (get them from GitHub)
- Type in the code
- Connect the components as per the diagram
- Run the program
- Read your tag's UID from the serial output, then replace the UID mentioned in the code with it
Code:
Github Link :
#include <SPI.h>
#include <MFRC522.h>

#define SS_PIN 10   // RC522 SDA pin
#define RST_PIN 9   // RC522 RST pin
MFRC522 mfrc522(SS_PIN, RST_PIN);  // create MFRC522 instance

void setup() {
  Serial.begin(9600);   // start serial communication with the PC
  SPI.begin();          // init the SPI bus
  mfrc522.PCD_Init();   // init the MFRC522 reader
  pinMode(7, OUTPUT);   // pin 7 drives the relay
  Serial.println("Approximate your card to the reader...");
}

void loop() {
  // Wait until a new card is present and can be read
  if (!mfrc522.PICC_IsNewCardPresent()) return;
  if (!mfrc522.PICC_ReadCardSerial()) return;

  // Build the UID as a hex string, e.g. " 49 3B 3A D5"
  String content = "";
  for (byte i = 0; i < mfrc522.uid.size; i++) {
    content.concat(String(mfrc522.uid.uidByte[i] < 0x10 ? " 0" : " "));
    content.concat(String(mfrc522.uid.uidByte[i], HEX));
  }
  content.toUpperCase();

  digitalWrite(7, HIGH);  // relay released by default
  if (content.substring(1) == "49 3B 3A D5") { // change here the UID of the card/cards that you want to give access
    Serial.println("Authorized access");
    digitalWrite(7, LOW); // energise the relay, switching the bulb on
    Serial.println();
    delay(1000);
  } else {
    Serial.println(" Access denied");
    delay(3000);
  }
}
Step 4: Hurray Done !
Congrats, you made it!
This is our working Video
Youtube Link:
Other Links
Github Link:
Like this hack; it will be our motivation!
Thanks for reading our Hack!
Made with Love from India By:
Kishore N
Ajay Karthik K
Wigneshwaran N
Narendra Santhosh N
How to reach us?
For any queries, mail us at:
Narendra <santhoshnarendra@gmail.com>
Ajay <ajaykarthikkasinaathan14@gmail.com>
Kishore <kishore.nedumaran@gmail.com>
Wignesh <wignesh1996@gmail.com> | http://www.instructables.com/id/Sense-arise-Automation-for-Home/ | CC-MAIN-2017-17 | refinedweb | 497 | 64.61 |
On Fri, Jan 05, 2007 at 09:14:30PM +0000, Daniel P. Berrange wrote:
>
> The following series of (2) patches adds a QEMU driver to libvirt. The first patch
> provides a daemon for managing QEMU instances, the second provides a driver letting
> libvirt manage QEMU via the daemon.
>
> Basic architecture
> ------------------
>
> The reason for the daemon architecture is twofold:
>
>  - At this time, there is no (practical) way to enumerate QEMU instances, or
>    reliably connect to the monitor console of an existing process. There is
>    also no way to determine the guest configuration associated with a daemon.

Okay, we admitted that principle in the first round of QEmu patches last year.
The only question I have is about the multiplication of running daemons for
libvirt, as we also have another one already for read-only xen hypervisor
access. We could either decide to keep daemon usages very specific (which also
allows us to easily restrict their privileges) or try to unify them. I guess
from a security POV it's still better to keep them separate, and anyway they
are relatively unlikely to be run at the same time (KVM and Xen on the same
Node).

>  - It is desirable to be able to manage QEMU instances using either an unprivileged
>    local client, or a remote client. The daemon can provide connectivity via UNIX
>    domain sockets, or IPv4 / IPv6, and layer in suitable authentication / encryption
>    via TLS and/or SASL protocols.

C.f. my previous mail: yes, authentication is key. Could you elaborate in some
way on how the remote access and the authentication are set up? See my previous
mail on remote xend access; we should try to unify these and set up a specific
page to document remote accesses.

> Anthony Ligouri is working on patches for QEMU with the goal of addressing the
> first point. For example, an extra command line argument will cause QEMU to save
> a PID file and create a UNIX socket for its monitor at a well-defined path. More
> functionality in the monitor console will allow the guest configuration to be
> reverse engineered from a running guest. Even with those patches, however, it will
> still be desirable to have a daemon to provide more flexible connectivity, and to
> facilitate implementing libvirt APIs which are host (rather than guest) related.
> Thus I expect that over time we can simply enhance the daemon to take advantage of
> newer capabilities in the QEMU monitor, but keep the same basic libvirt driver
> architecture.

Okay, work in progress.

> Considering some of the other hypervisor technologies out there, in particular
> User Mode Linux, and lhype, it may well become possible to let this QEMU daemon
> also provide the management of these guests - allowing re-use of the single driver
> backend in the libvirt client library itself.

Which reopens the question: one multi-featured daemon, or multiple simpler (but
possibly redundant) daemons?

> XML format
> ----------
>
> As discussed in the previous mail thread, the XML format for describing guests
> with the QEMU backend is the same structure as that for Xen guests, with the
> following enhancements:
>
>  - The 'type' attribute on the top level <domain> tag can take one of the
>    values 'qemu', 'kqemu' or 'kvm' instead of 'xen'. This selects between
>    the different virtualization approaches QEMU can provide.
>
>  - The '<type>' attribute within the <os> block of the XML (for now) is
>    still expected to be 'hvm' (indicating full virtualization), although
>    I'm trying to think of a better name, since it's not technically hardware
>    accelerated unless you're using KVM

Yeah, I don't have a good value to suggest except "unknown", because basically
we don't know a priori what the running OS will be.

>  - The '<type>' attribute within the <os> block of the XML can have two
>    optional 'arch' and 'machine' attributes. The former selects the CPU
>    architecture to be emulated; the latter the specific machine to have
>    QEMU emulate (determine those supported by QEMU using 'qemu -M ?').

Okay. I hope we will have enough flexibility in the virNodeInfo model to
express the various combinations; we have a 32-char string for this, which I
guess should be sufficient, but I don't know the best way to express it - I
will see how the patch does it. From my recollection of posts on qemu-devel,
some of the machine names can be a bit long on specific emulated targets. At
least we should be okay for a PC architecture.

>  - The <kernel>, <initrd>, <cmdline> elements can be used to specify
>    an explicit kernel to boot off[1], otherwise it'll do a boot of the
>    cdrom, harddisk / floppy (based on <boot> element). Well, the kernel
>    bits are parsed at least. I've not got around to using them when
>    building the QEMU argv yet.

Okay.

>  - The disk devices are configured in the same way as Xen HVM guests, eg you
>    have to use hda -> hdd, and/or fda -> fdb. Only hdc can be selected
>    as a cdrom device.

Good!

>  - The network configuration is work in progress. QEMU has many ways to
>    set up networking. I use the 'type' attribute to select between the
>    different approaches 'user', 'tap', 'server', 'client', 'mcast', mapping
>    them directly onto QEMU command line arguments. You can specify a
>    MAC address as usual too. I need to implement auto-generation of MAC
>    addresses if omitted. Most of them have extra bits of metadata though
>    which I've not figured out appropriate XML for yet. Thus when building
>    the QEMU argv I currently just hardcode 'user' networking.

Okay, since user is the default in QEmu (assuming I remember correctly :-)

>  - The QEMU binary is determined automatically based on the requested
>    CPU architecture, defaulting to i686 if none specified. It is possible
>    to override the default binary using the <emulator> element within the
>    <devices> section. This is different to previously discussed, because
>    recent work by Anthony merging VMI + KVM to give paravirt guests means
>    that the <loader> element is best kept to refer to the VMI ROM (or other
>    ROM-like files :-) - this is also closer to Xen semantics anyway.

Hum, the ROM - one more parameter. Actually we may at some point need to
provide for multiple of them, if they start mapping non-contiguous areas.

> Connectivity
> ------------
>
> The namespace under which all connection URIs come is 'qemud'. Thereafter
> there are several options. First, two well-known local hypervisor
> connections:
>
>  - qemud:///session
>
>    This is a per-user private hypervisor connection. The libvirt daemon and
>    qemu guest processes just run as whatever UNIX user your client app is
>    running. This lets unprivileged users use the qemu driver without needing
>    any kind of admin rights. Obviously you can't use KQEMU or KVM accelerators
>    unless the /dev/ device node is chmod/chown'd to give you access.
>
>    The communication goes over a UNIX domain socket which is mode 0600, created
>    in the abstract namespace at $HOME/.qemud.d/sock.

Okay, makes sense. Everything runs under the user's privileges and there is no
escalation.

>  - qemud:///system
>
>    This is a system-wide privileged hypervisor connection. There is only one
>    of these on any given machine. The libvirt_qemud daemon would be started
>    ahead of time (by an init script), possibly running as root, or maybe under
>    a dedicated system user account (and the KQEMU/KVM devices chown'd to match).

Would it be hard to allow autostart? That's what we do for the read-only xen
hypervisor access. Avoiding starting up stuff in init.d when we have no
guarantee it will be used, and auto-shutdown when there is no client, is IMHO
generally nicer, but that feature can possibly just be added later; the main
drawback is that it requires an suid binary.

>    The admin would optionally also make it listen on IPv4/6 addrs to allow
>    remote communication. (see next URI example)
>
>    The local communication goes over one of two possible UNIX domain sockets,
>    both in the abstract namespace under the directory /var/run. The first socket,
>    called 'qemud', is mode 0600, so only privileged apps (ie root) can access it,
>    and gives full control capabilities. The other, called 'qemud-ro', is mode 0666
>    and any clients connecting to it will be restricted to only read-only libvirt
>    operations by the server.
>
>  - qemud://hostname:port/
>
>    This lets you connect to a daemon over IPv4 or IPv6. If omitted the port is
>    8123 (will probably change it). This lets you connect to a system daemon
>    on a remote host - assuming it was configured to listen on IPv4/6 interfaces.

Hum, for that the daemon requires to be started statically too.

> Currently there is zero auth or encryption, but I'm planning to make it
> mandatory to use the TLS protocol - using the GNU TLS library. This will give
> encryption, and mutual authentication using either x509 certificates or
> PGP keys & trustdbs, or perhaps both :-) Will probably start off by implementing
> PGP since I understand it better.
>
> So if you wanted to remotely manage a server, you'd copy the server's
> certificate/public key to the client into a well known location. Similarly
> you'd generate a keypair for the client & copy its public key to the
> server. Perhaps I'll allow clients without a key to connect in read-only
> mode. Need to prototype it first and then write up some ideas.

Okay, though there are multiple authentication and encryption libraries, and
picking the Right One may not be possible: there are so many options, and
people may have specific infrastructure in place. Anyway the current state is
no-auth, so anything will be better :-)

> Server architecture
> -------------------
>
> The server is a fairly simple beast. It is single-threaded using non-blocking I/O
> and poll() for all operations. It will listen on multiple sockets for incoming
> connections. The protocol used for client-server comms is a very simple binary
> message format close to the existing libvirt_proxy.

Good, so we keep similar implementations. Any possibility of sharing part of
that code? These are always very sensitive areas, both for security and for
edge cases in the communication.

> Client sends a message, server
> receives it, performs appropriate operation & sends a reply to the client. The
> client (ie libvirt driver) blocks after sending its message until it gets a reply.
> The server does non-blocking reads from the client, buffering until it has a single
> complete message, then processes it and populates the buffer with a reply and does
> non-blocking writes to send it back to the client. It won't try to read a further
> message from the client until it has sent the entire reply back. ie, it is a totally
> synchronous message flow - no batching/pipelining of messages.

Honestly I think that's good enough; I don't see hundreds of QEmu instances
having to be monitored remotely from a single Node. On a monitoring machine
things may be accelerated by multithreading the gathering process to talk to
multiple Nodes in parallel. At least on the server side I prefer to keep
things as straight as possible.

> During the time
> the server is processing a message it is not dealing with any other I/O, but thus
> far all the operations are very fast to implement, so this isn't a serious issue,
> and there are ways to deal with it if there are operations which turn out to take a
> long time. I certainly want to avoid multi-threading in the server at all costs!

Completely agree :-)

> As well as monitoring the client & client sockets, the poll() event loop in the
> server also captures stdout & stderr from the QEMU processes. Currently we just
> dump this to stdout of the daemon, but I expect we can log it somewhere. When we
> start accessing the QEMU monitor there will be another fd in the event loop - ie
> the pseudo-TTY (or UNIX socket) on which we talk to the monitor.

At some point we will need to look at adding a Console dump API; that will be
doable for Xen too, but it's not urgent since nobody requested it yet :-)

> Inactive guests
> ---------------
>
> Guests created using 'virsh create' (or equiv API) are treated as 'transient'
> domains - ie their config files are not saved to disk. This is consistent with
> the behaviour in the Xen backend. Guests created using 'virsh define', however,
> are saved out to disk in $HOME/.qemud.d for the per-user session daemon. The
> system-wide daemon should use /etc/qemud.d, but currently its still /root/.qemud.d

Maybe this should be asked on the qemu-devel list; Fabrice and Co. may have a
preference on where to store config-related stuff for QEmu, even if it's not
directly part of QEmu.

> The config files are simply saved as the libvirt XML blob, ensuring no data
> conversion issues. In any case, QEMU doesn't currently have any config file
> format we can leverage. The list of inactive guests is loaded at startup of the
> daemon. New config files are expected to be created via the API - files manually
> created in the directory after initial startup are not seen. Might like to change
> this later.

Hum, maybe we could use FAM/gamin if found at configure time, but well, it's
just an additional feature; let's just avoid any unneeded timer.

> XML Examples
> ------------
>
> This is a guest using plain qemu, with x86_64 architecture and an ISA-only
> (ie no PCI) machine emulation. I was actually running this on a 32-bit
> host :-) VNC is configured to run on port 5906. QEMU can't automatically
> choose a VNC port, so if one isn't specified we assign one based on the
> domain ID. This should be fixed in QEMU....
>
> <domain type='qemu'>
>   <name>demo1</name>
>   <uuid>4dea23b3-1d52-d8f3-2516-782e98a23fa0</uuid>
>   <memory>131072</memory>
>   <vcpu>1</vcpu>
>   <os>
>     <type arch='x86_64' machine='isapc'>hvm</type>
>   </os>
>   <devices>
>     <disk type='file' device='disk'>
>       <source file='/home/berrange/fedora/diskboot.img'/>
>       <target dev='hda'/>
>     </disk>
>     <interface type='user'>
>       <mac address='24:42:53:21:52:45'/>
>     </interface>
>     <graphics type='vnc' port='5906'/>
>   </devices>
> </domain>
>
> A second example, this time using KVM acceleration. Note how I specify a
> non-default path to QEMU to pick up the KVM build of QEMU. Normally the KVM
> binary will default to /usr/bin/qemu-kvm - this may change depending on
> how distro packaging of KVM turns out - it may even be merged into regular
> QEMU binaries.
>
> <domain type='kvm'>
>   <name>demo2</name>
>   <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid>
>   <memory>131072</memory>
>   <vcpu>1</vcpu>
>   <os>
>     <type>hvm</type>
>   </os>
>   <devices>
>     <emulator>/home/berrange/usr/kvm-devel/bin/qemu-system-x86_64</emulator>
>     <disk type='file' device='disk'>
>       <source file='/home/berrange/fedora/diskboot.img'/>
>       <target dev='hda'/>
>     </disk>
>     <interface type='user'>
>       <mac address='24:42:53:21:52:45'/>
>     </interface>
>     <graphics type='vnc' port='-1'/>
>   </devices>
> </domain>

Okay. I'm nearing completion of a Relax-NG schema allowing validation of XML
instances; I will augment it to allow the changes, but based on last week's
discussion it should not be too hard, and it should still retain good
validation properties.

> Outstanding work
> ----------------
>
>  - TLS support. Need to add TLS encryption & authentication to both the client
>    and server side for IPv4/6 communications. This will obviously add a dependency
>    on libgnutls.so in libvirt & the daemon. I don't consider this a major problem
>    since every non-trivial network app these days uses TLS. The other possible impl
>    of OpenSSL has GPL-compatibility issues, so is not considered.
>
>  - Change the wire format to use fixed size data types (ie, int8, int16, int32, etc)
>    instead of the size-dependent int/long types. At the same time define some rules
>    for the byte ordering. Client must match server ordering? Server must accept
>    client's desired ordering? Everyone must use BE regardless of server/client
>    format? I'm inclined to say client must match server, since it distributes the
>    byte-swapping overhead to all clients and lets the common case of x86->x86 be
>    a no-op.

Hum, on the other hand, if you do the conversion as suggested by IETF rules it
is easier to find the places where the conversion is missing, unless you forgot
to ntoh and hton on both client and server code. Honestly I would not take the
performance hit into consideration at that level, and not now; the RPC is gonna
totally dominate it by orders of magnitude, in my opinion.

>  - Add a protocol version message as first option to let us evolve the protocol
>    at will later while maintaining compat with older libvirt client libraries.

Yeah, this also ensures you get a functioning server on that port!

>  - Improve support for describing the various QEMU network configurations
>
>  - Finish boot options - boot device order & explicit kernel
>
>  - Open & use connection to QEMU monitor which will let us implement pause/resume,
>    suspend/restore drivers, and device hotplug / media changes.
>
>  - Return sensible data for virNodeInfo - will need to have operating system
>    dependent code here - parsing /proc for Linux to determine available RAM & CPU
>    speed. Who knows what for Solaris / BSD ?!? Anyone know of remotely standard
>    ways for doing this? Accurate host memory reporting is the only really critical
>    data item we need.

The GNOME guys tried that; maybe dig up the gst (gnome system tools) code
base :-)

>  - There is a fair bit of duplication in various helper functions between the
>    daemon and various libvirt driver backends. We should probably pull this stuff
>    out into a separate lib/ directory, build it into a static library and then
>    link that into both libvirt, virsh & the qemud daemon as needed.

Yes, definitely!

This all sounds excellent, thanks a lot!!!

Daniel

-- 
Red Hat Virtualization group
Daniel Veillard      | virtualization library
veillard redhat com  | libxml GNOME XML XSLT toolkit
                     | Rpmfind RPM search engine
Hello!

> On my intel p100 when I try to boot the resq1440.bin I get a quick screen
> of pci error messages and then an instant reboot.
>
> The motherboard is Intel-Triton TX. Award bios v.4.51PG

This is a problem with some revisions of TX chipsets.

The solution: in the file
/usr/src/kernel-source-2.0.<version>/drivers/char/keyboard.c
look for the following lines:

/*
 * On non-x86 hardware we do a full keyboard controller
 * initialization, in case the bootup software hasn't done
 * it. On a x86, the BIOS will already have initialized the
 * keyboard.
 */
#ifndef __i386__
#define INIT_KBD
static int initialize_kbd(void);
#endif

and comment out the lines with #ifndef and #endif:

/* #ifndef __i386__ */
#define INIT_KBD
static int initialize_kbd(void);
/* #endif */

Compile the kernel with the features listed in the rescue disk (viewable from
DOS/Win) and follow the instructions to update the loader of the rescue disk.
This will solve your problem.

regards,
Ulisses

-----------------------------------------------------------------------------
"Computers are useless. They can only give answers." Pablo Picasso

-- 
To UNSUBSCRIBE, email to debian-user-request@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
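Expressed as a patch against drivers/char/keyboard.c, the change described above is simply:

```diff
--- a/drivers/char/keyboard.c
+++ b/drivers/char/keyboard.c
@@
-#ifndef __i386__
+/* #ifndef __i386__ */
 #define INIT_KBD
 static int initialize_kbd(void);
-#endif
+/* #endif */
```

This forces INIT_KBD to be defined on x86 as well, so the kernel performs the full keyboard controller initialization instead of trusting the (buggy) BIOS on the affected TX boards.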