Apps are what make Android great but sometimes they can get a bit expensive, especially when you’ve already spent quite a lot on your device. So what is an Android fan to do? Go for the free ones of course!
Here are some tried and true ways to get free and cheap apps on your device.
1. Check out the free versions in the Android Market.
Sometimes you hear or read about a great app and want to get it onto your phone. More often than not, the developer has a free version of that app available alongside the paid one. Just search for the app in the Android Market and click on the app that says INSTALL instead of the one with a price. Alternatively, you can click on the name of the developer right under the app name to see their full roster of apps, including any free versions they may have.
The great thing is that often the free version has all the basic functionality you need. If you feel like you need more or want to reward the dev for such a great piece of software, go ahead and buy the paid one.
2. Install Amazon Appstore Reminder.
The Amazon Appstore is one of the alternative Android app repositories out there. What I like about it is that it releases a paid app for free every day. Some of the great free apps they've offered are Angry Birds Rio, Plants vs. Zombies and Pac-Man. There's also a ton of free and paid apps that are only available through Amazon.
The trouble is remembering to check the Amazon app daily to see what's new and free. Install the $0.99 Amazon Free App Reminder from the Market and get pinged about what's on sale each day. It pays for itself when you get your first free download, and it notifies you which app is free on a set schedule. And for those of you outside the US, read this guide to be able to download apps from the Amazon Appstore.
They're giving away Fieldrunners HD right now, so go on and grab it!
3. Browse Getjar.
Another alternative application marketplace, Getjar offers only free Android apps and often works with carriers to provide free apps to their subscribers. It offers free versions of many apps that are paid in the Android Market. Some of the great finds I've seen are Cut the Rope and Kaspersky Mobile Security.
4. Wait for holiday sales.
Holidays of all types often bring discounted and even free apps to the Market. And even if the holiday is well known but not celebrated in your part of the globe, like Independence Day in the US, some devs run the sale Market-wide anyway. This is what EA and Gameloft did this last Fourth of July: though most apps were not free, the discounts lowered many paid apps to an affordable $0.99. So watch that calendar and you might just get that exceptional game you've always wanted for cheap.
5. Look at free alternatives.
Spending on a paid app sometimes makes little sense if you just want it to do one thing. It will take some search-fu skills, but you really can find lots of free alternatives out there. One site you can use is AlternativeTo, which shows you software similar to the one you want. You can also simply search the Market for the type of app you want (say, "Battery Alarm" or "Sudoku") and click on Free in the Price filter up top.
6. Ask the devs.
Sometimes, it really doesn't hurt to ask the app makers for freebies. They are often grateful that you've found their app worth installing and will reward you with a free version; you only need to ask. Of course, in exchange they might ask for something minor in return (retweet their links, post a short review, like their pages). It's a small price to pay, especially if you love that developer's apps.
7. Stay tuned for one-off price drops.
Sometimes a big milestone for an app dev comes along: one million downloads, an anniversary, getting bought by a huge company, etc. They share the love by giving a limited free or discounted offer to their loyal fans, so it pays to be attentive. Follow your favorite dev's Twitter account, watch their blog and website, and stay tuned to their Facebook page. Good places to start trawling for apps are AndroidZoom's On Sale tab and AppBrain's Price Reduced page.
8. Enter contests.
Some app reviewers get a handful of free stuff to give away to readers who stay tuned to their blog posts. Of course, this doesn't come easy. Often they'll give it away to the first N commenters, raffle it off to Facebook fans or award it to the account that retweeted them the most. Follow sites like Android.Appstorm to get a chance to win that coveted app.
9. Sign up for the beta.
If unstable software doesn't faze you, get on the beta list of the new apps you've heard so much about. Often, apps that are big on other platforms (*ahem* iPhone *ahem*) are expanding into Android but want to work the kinks out first. You get to try a free app that lets you in on some brand spanking new technology. Note, though, that you will be working for it in a way, since you will be reporting bugs and sending usage reports back to the dev's mothership to help them improve the final release.
Eighty percent of the time, users might not need the features of the wizard's third page. The other twenty percent of the time, the third page will come in handy.
Customizations on the page range from something as simple as updating the package name of the generated JAR class files to overriding the type hierarchy specified in the XSD (or the types node in the WSDL) through binding customizations using XPath expressions.
Below is a screenshot of the 3rd page:
Options on the page get more complex from top to bottom.
If the user decides they don't like the default package naming of the generated classes using the target namespace they can type in a valid package name in the text field next to the Package label.
WSDLs are normally copied into the JAR file for ease of reference at runtime. The user can disable this if needed. If the user types a package location (including the filename), the wizard will warn the user if Copy WSDL into Client Jar is not enabled. WSDL Location is optional, though, if Copy WSDL is selected; in that case the WSDL is copied into the JAR under its original name.
A neat feature, and a possible time saver, is when the user is offline yet the WSDL refers to an online XSD document. Using an XML Catalog file (the most typical name for the catalog is jax-ws-catalog.xml), the user can override the online URI location by specifying the local path (making sure to have a local copy) to the same schema so the ClientGen wizard can actually create the JAR file.
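For reference, such a catalog entry might look like the following sketch in the OASIS XML Catalog format; the remote schema URL and local path here are purely illustrative:

```xml
<!-- jax-ws-catalog.xml: redirect a remote schema to a local copy -->
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <system systemId="http://example.com/schemas/billing.xsd"
          uri="schemas/billing.xsd"/>
</catalog>
```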
The XML Catalog entry will be added to the ANT build file when it's run and also added to the JAR file if the user selects Generate Runtime Catalog.
Select Bindings is too complex to cover fully in this blog entry (I may add some more detail next week), but if need be, more advanced users can customize the types using XPath expressions where the XML type hierarchy in the schema might not be to their taste.
Back to: ASP.NET MVC Tutorial For Beginners and Professionals
Views in ASP.NET MVC Application
In this article, I am going to discuss Views in ASP.NET MVC application with examples. Please read our previous article before proceeding to this article where we discussed Controllers in ASP.NET MVC application. As part of this article, we are going to discuss the following pointers.
- What are the Views in ASP.NET MVC?
- Where ASP.NET MVC View Files are Stored?
- How to create Views in ASP.NET MVC Application?
- Understanding Views in MVC with Multiple Examples.
- Advantages of Using Views in MVC
What are the Views in ASP.NET MVC?
In the MVC pattern, the view component contains the logic to present the model data as a user interface with which the end user can interact. Typically, it creates the user interface with the data from the model, which is provided to it by the controller in an MVC application. So you can consider the Views in ASP.NET MVC as HTML templates embedded with Razor syntax which generate HTML content that is sent to the client.
Where ASP.NET MVC View Files are Stored?
In ASP.NET MVC, views have the ".cshtml" extension when we use C# as the programming language with Razor syntax. Usually, each controller will have its own folder in which the controller-specific .cshtml files (views) are saved. The controller-specific folders are created within the Views folder. The most important point that you need to keep in mind is that the view file name and the controller action name are going to be the same.
Example:
Let's say we created an ASP.NET MVC application with two controllers, i.e. StudentController and HomeController. The HomeController that we created has the following three action methods.
- Index()
- AboutUs()
- ContactUs()
Similarly, the StudentController is created with the following four action methods.
- Index()
- Details()
- Edit()
- Delete()
The views are going to created and saved in the following order.
As we have two controllers in our application, two folders are created within the Views folder, one per Controller. The Home folder is going to contain all the view files (i.e. .cshtml files) which are specific to HomeController. Similarly, the Student folder is going to contain all the .cshtml files which are specific to StudentController. This is the reason why the Home folder contains the Index, AboutUs and ContactUs cshtml files. Similarly, the Student folder contains the Index, Details, Edit and Delete view files.
Understanding Views in MVC with Examples:
To understand the views in ASP.NET MVC application, let us first modify the HomeController as shown below.
using System.Web.Mvc;

namespace FirstMVCDemo.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }
    }
}
In the above HomeController, we created an action method that is going to return a view. In order to return a view from an action method in MVC, we need to use the View() method which is provided by the System.Web.Mvc.Controller base class. Now run the application and navigate to the "/Home/Index" URL and you will get the following error.
Let us understand why we got the above error.
As we are returning a view from the Index action method of the Home Controller, by default the MVC Framework will look for a file with the name Index.aspx or Index.ascx within the Home and Shared folders of the application if the view engine is ASPX. If it is not found there, then it will search for a view file with the name Index.cshtml or Index.vbhtml within the Home and Shared folders of the application.
If the requested view file is found in any of the above locations, then the View generates the necessary HTML and sends the generated HTML back to the client who initially made the request. On the other hand, if the requested view file is not found in any of the above locations, then we will get the above error.
Adding Views in ASP.NET MVC
In order to add the Index view, right-click anywhere within the Index() method and then click on the "Add View" option, which will open the Add View dialog window. From the Add View window, provide the name for the view as Index, select Template as Empty, and uncheck the checkboxes for the "create as a partial view" and "use a layout page" options. Finally, click on the Add button as shown below.
Once the Index view is created, then copy and paste the following in it.
@{
    Layout = null;
}

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>Index</title>
</head>
<body>
    <div>
        <h1>Index View Coming From Views/Home Folder</h1>
    </div>
</body>
</html>
That's it. Now run the application and navigate to the "/Home/Index" URL and you will see the output as expected. If you go to the definition of the Controller base class, then you will find eight overloaded versions of the View method which return a view, as shown below.
We will discuss each of these overloaded versions as we progress through this course.
Advantages of Using Views in ASP.NET MVC:
The Views in an ASP.NET MVC application provide separation of concerns: they separate the user interface from the rest of the application, such as the business layer and the data access layer. ASP.NET MVC views use the Razor syntax, which makes it easy to switch between HTML and C# code. The common or repetitive sections of web pages can be easily reused by using layouts and partial views.
In the next article, I am going to discuss the Models in ASP.NET MVC application. Here, in this article, I try to explain the Views in ASP.NET MVC application with examples. I hope this article will help you with your needs.
engfmt 1.1.0
read and write in engineering notation
A light-weight package used to read and write numbers in engineering format. In engineering format a number generally includes the units if available and uses SI scale factors to indicate the magnitude of the number. For example:
1ns
1.4204GHz
A quantity is the pairing of a real number and units, though the units may be empty. This package is designed to convert quantities between the various ways in which they are represented. Those ways are:
- As a tuple:
- For example, 1ns would be represented as (1e-9, ‘s’). Notice that the scale factor is not included in the units. This is always true.
- As a string in conventional formats:
- For example, 1ns would be represented as ‘1e-9 s’ or as ‘0.000000001s’. This form is often difficult to read for people and so engfmt treats it more as a format meant for machines rather than people.
- As a string in engineering format:
- For example, 1ns would be represented as ‘1ns’. This form is often difficult to read for machines and so engfmt treats it more as a human readable format.
The Quantity class is provided for converting between these various forms. It takes one or two arguments. The first is taken to be the value, and the second, if given, is taken to be the units. The value may be given as a float or as a string. The string may be in floating point notation, in scientific notation, or in engineering format and may include the units. By engineering notation, it is meant that the number can use the SI scale factors. For example, any of the following ways can be used to specify 1ns:
>>> from engfmt import Quantity
>>> period = Quantity(1e-9, 's')
>>> print(period)
1ns
>>> period = Quantity('0.000000001 s')
>>> print(period)
1ns
>>> period = Quantity('1e-9s')
>>> print(period)
1ns
>>> period = Quantity('1ns')
>>> print(period)
1ns
In all cases, giving the units is optional.
From a quantity object, you can generate any representation:
>>> h_line = Quantity('1420.405751786 MHz')
>>> h_line.to_tuple()
(1420405751.786, 'Hz')
>>> h_line.to_eng()
'1.4204GHz'
>>> h_line.to_str()
'1420.405751786e6Hz'
You can also access the value without the units:
>>> h_line.to_float()
1420405751.786
>>> h_line.to_unitless_eng()
'1.4204G'
>>> h_line.to_unitless_str()
'1420.405751786e6'
Or you can access just the units:
>>> h_line.units
'Hz'
The output of the to_eng and to_unitless_eng methods is always rounded to the desired precision, which can be specified as an argument. This differs from the to_str and to_unitless_str methods. They attempt to retain the original format of the number if it is specified as a string. In this way it retains its original precision. The underlying assumption behind this difference is that engineering notation is generally used when communicating with people, whereas floating point notation is used when communicating with machines. People benefit from having a limited number of digits in the numbers, whereas machines benefit from having full-precision numbers.
Quantities As Reals
You can use a quantity in the same way that you can use a real number, meaning that you can use it in expressions and it will evaluate to its real value:
>>> period = Quantity('1us')
>>> print(period)
1us
>>> frequency = 1/period
>>> print(frequency)
1000000.0
Notice that when performing arithmetic operations on quantities the units are completely ignored.
Shortcut Functions
Generally one uses the shortcut functions to convert numbers to and from engineering format. All of these functions take the value and units in the same ways that they are specified to Quantity. In particular, the value may be a string or a real number. If it is a string it may be given in traditional format or in engineering format, and it may include the units. For example:
>>> from engfmt import quant_to_tuple
>>> quant_to_tuple('1.4204GHz')
(1420400000.0, 'Hz')
>>> from engfmt import quant_to_eng
>>> quant_to_eng(1420400000.0, 'Hz')
'1.4204GHz'
>>> from engfmt import quant_to_sci
>>> quant_to_sci('1.4204GHz', prec=4)
'1.4204×10⁰⁹Hz'
>>> from engfmt import quant_to_str
>>> quant_to_str(1420400000.0, 'Hz')
'1.4204e+09Hz'
>>> from engfmt import quant_to_float
>>> quant_to_float('1.4204GHz')
1420400000.0
>>> from engfmt import quant_to_unitless_str
>>> quant_to_unitless_str('1.4204GHz')
'1.4204e9'
>>> from engfmt import quant_to_unitless_eng
>>> quant_to_unitless_eng('1.4204e9Hz')
'1.4204G'
>>> from engfmt import quant_strip
>>> quant_strip('1.4204GHz')
'1.4204G'
>>> quant_strip('1.4204e9Hz')
'1.4204e9'
Preferences
You can adjust some of the behavior of these functions on a global basis using set_preferences:
>>> from engfmt import set_preferences, quant_to_eng, quant_to_sci
>>> set_preferences(hprec=2, spacer=' ')
>>> quant_to_eng('1.4204GHz', prec=4)
'1.4204 GHz'
>>> quant_to_sci('1.4204GHz', prec=4)
'1.4204×10⁰⁹ Hz'
Specifying hprec (human precision) to be 4 gives 5 digits of precision (you get one more digit than the number you specify for precision). Thus, the valid range for prec is from 0 to around 12 to 14 for double precision numbers.
Passing None as a value in set_preferences returns that preference to its default value:
>>> set_preferences(hprec=None, spacer=None)
>>> quant_to_eng('1.4204GHz')
'1.4204GHz'
The available preferences are:
- hprec (int):
- Human precision in digits where 0 corresponds to 1 digit, must be nonnegative. This precision is used when generating engineering format.
- mprec (int):
- Machine precision in digits where 0 corresponds to 1 digit. Must be nonnegative. This precision is used when not generating engineering format.
- spacer (str):
- May be ” or ‘ ‘, use the latter if you prefer a space between the number and the units. Generally using ‘ ‘ makes numbers easier to read, particularly with complex units, and using ” is easier to parse.
- unity (str):
- The output scale factor for unity, generally ” or ‘_’.
- output (str):
- Which scale factors to output, generally one would only use familiar scale factors.
- ignore_sf (bool):
- Whether scale factors should be ignored by default.
- assign_fmt (str):
- Format string for an assignment. Will be passed through string format method. Format string takes three possible arguments named n, q, and d for the name, value and description. The default is ‘{n} = {v}’
- assign_rec (str):
Regular expression used to recognize an assignment. Used in add_to_namespace(). Default recognizes the form:
“Temp = 300_K -- Temperature”.
Quantity Class
Though rarely used directly, the Quantity class forms the foundation of the engfmt package. It is more flexible than the shortcut functions:
>>> from engfmt import Quantity
>>> h_line = Quantity('1420.405751786 MHz')
>>> str(h_line)
'1.4204GHz'
>>> float(h_line)
1420405751.786
>>> h_line.to_tuple()
(1420405751.786, 'Hz')
>>> h_line.to_eng(7)
'1.4204058GHz'
>>> h_line.to_sci()
'1.4204×10⁰⁹Hz'
>>> h_line.to_str()
'1420.405751786e6Hz'
>>> h_line.to_float()
1420405751.786
>>> h_line.to_unitless_eng(4)
'1.4204G'
>>> h_line.to_unitless_str()
'1420.405751786e6'
>>> h_line.strip()
'1420.405751786M'
>>> h_line.units
'Hz'
>>> h_line.add_name('Fhy')
>>> h_line.name
'Fhy'
>>> h_line.add_desc('frequency of hydrogen line')
>>> h_line.desc
'frequency of hydrogen line'
>>> h_line.is_infinite()
False
>>> h_line.is_nan()
False
Physical Constants
The Quantity class also supports a small number of physical constants.
Plank’s constant:
>>> plank = Quantity('h')
>>> print(plank)
662.61e-36J-s
Boltzmann’s constant:
>>> boltz = Quantity('k')
>>> print(boltz)
13.806e-24J/K
Elementary charge:
>>> q = Quantity('q')
>>> print(q)
160.22e-21C
Speed of light:
>>> c = Quantity('c')
>>> print(c)
299.79Mm/s
Zero degrees Celsius in Kelvin:
>>> zeroC = Quantity('C0')
>>> print(zeroC)
273.15K
engfmt uses k rather than K to represent kilo so that you can distinguish between kilo and Kelvin.
Permittivity of free space:
>>> eps0 = Quantity('eps0')
>>> print(eps0)
8.8542pF/m
Permeability of free space:
>>> mu0 = Quantity('mu0')
>>> print(mu0)
1.2566uH/m
Characteristic impedance of free space:
>>> Z0 = Quantity('Z0')
>>> print(Z0)
376.73Ohms
You can add additional constants by adding them to the CONSTANTS dictionary:
>>> from engfmt import Quantity, CONSTANTS
>>> CONSTANTS['h_line'] = (1.420405751786e9, 'Hz')
>>> h_line = Quantity('h_line')
>>> print(h_line)
1.4204GHz
String Formatting
Quantities can be passed into the string format method:
>>> print('{}'.format(h_line))
1.4204GHz
You can specify the precision as part of the format specification
>>> print('{:.6}'.format(h_line))
1.420406GHz
The ‘q’ type specifier can be used to explicitly indicate that both the number and the units are desired:
>>> print('{:.6q}'.format(h_line))
1.420406GHz
Alternately, ‘r’ can be used to indicate just the number is desired:
>>> print('{:r}'.format(h_line))
1.4204G
Use ‘u’ to indicate that only the units are desired:
>>> print('{:u}'.format(h_line))
Hz
You can also use the string and floating point format type specifiers:
>>> print('{:f}'.format(h_line))
1420405751.786000
>>> print('{:e}'.format(h_line))
1.420406e+09
>>> print('{:g}'.format(h_line))
1.42041e+09
>>> print('{:s}'.format(h_line))
1.4204GHz
It is also possible to add a name and perhaps a description to the quantity, and access those with special format codes as well:
>>> h_line.add_name('Fhy')
>>> h_line.add_desc('frequency of hydrogen line')
>>> print('{:n}'.format(h_line))
Fhy
>>> print('{:d}'.format(h_line))
frequency of hydrogen line
>>> print('{:Q}'.format(h_line))
Fhy = 1.4204GHz
>>> print('{:R}'.format(h_line))
Fhy = 1.4204G
>>> print('{0:Q} ({0:d})'.format(h_line))
Fhy = 1.4204GHz (frequency of hydrogen line)
Exceptions
A ValueError is raised if engfmt is passed a string it cannot convert into a number:
>>> try:
...     value, units = quant_to_tuple('xxx')
... except ValueError as err:
...     print(err)
xxx: not a valid number.
Text Processing
Two functions are available for converting quantities embedded within text to and from engineering notation:
>>> from engfmt import all_to_eng_fmt, all_from_eng_fmt
>>> all_to_eng_fmt('The frequency of the hydrogen line is 1420405751.786Hz.')
'The frequency of the hydrogen line is 1.4204GHz.'
>>> all_from_eng_fmt('The frequency of the hydrogen line is 1.4204GHz.')
'The frequency of the hydrogen line is 1.4204e9Hz.'
Add to Namespace
It is possible to put a collection of quantities in a text string and then use the add_to_namespace function to parse the quantities and add them to the Python namespace. For example:
>>> from engfmt import add_to_namespace
>>> design_parameters = '''
...     Fref = 156MHz
...     Kdet = 88.3uA
...     Kvco = 9.07GHz/V
... '''
>>> add_to_namespace(design_parameters)
>>> print(Fref, Kdet, Kvco, sep='\n')
156MHz
88.3uA
9.07GHz/V
Any number of quantities may be given, with each quantity given on its own line. The identifier given to the left ‘=’ is the name of the variable in the local namespace that is used to hold the quantity. The text after the ‘–’ is used as a description of the quantity.
Scale Factors and Units
By default, engfmt treats a trailing letter that matches a scale factor as a scale factor, which makes a quantity like '1m' ambiguous: it is read as one milli rather than as 1 meter. To allow you to avoid this ambiguity, engfmt accepts '_' as the unity scale factor. In this way '1_m' is unambiguously 1 meter. You can instruct engfmt to output '_' as the unity scale factor by specifying the unity argument to set_preferences:
>>> from engfmt import set_preferences, Quantity
>>> set_preferences(unity='_')
>>> l = Quantity(1, 'm')
>>> print(l)
1_m
If you need to interpret numbers that have units and are known not to have scale factors, you can specify the ignore_sf preference:
>>> set_preferences(ignore_sf=True, unity='')
>>> l = Quantity('1000m')
>>> l.to_tuple()
(1000.0, 'm')
>>> print(l)
1km
Testing
Run ‘py.test’ to run the tests.
- Author: Ken Kundert
- Download URL:
- Keywords: quantities,engfmt,engineering,notation,SI,scale factors
- License: GPLv3+
- Categories
- Development Status :: 5 - Production/Stable
- Intended Audience :: Developers
- Intended Audience :: Science/Research
- License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
- Natural Language :: English
- Operating System :: POSIX :: Linux
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3.3
- Programming Language :: Python :: 3.4
- Programming Language :: Python :: 3.5
- Topic :: Scientific/Engineering
- Topic :: Utilities
- Package Index Owner: kenkundert
- DOAP record: engfmt-1.1.0.xml
I'm very new to scripting and as a result am not sure how best to merge a series of files. I'm attempting to create a Quality Control script that makes sure a nightly load was properly uploaded to the DB (we've noticed that if there's a lag for some reason, the sync will exclude any donations that came in during said lag).
I have a directory of daily synced files labeled as such:
20161031_donations.txt
20161030_donations.txt
20161029_donations.txt
20161028_donations.txt
etc etc
for i in a.txt b.txt c.txt d.txt
do this
done
Expanding on Alex Hall's answer, you can grab the header from the first file and skip it for the remaining files to do the merge:
from glob import glob
from shutil import copyfileobj

files = sorted(glob('*_donations.txt'))[-7:]
# if you want the most recent file first, do
# files.reverse()

with open("merged_file.txt", "w") as outfile:
    for i, filename in enumerate(files):
        with open(filename) as infile:
            if i:
                next(infile)  # discard the header in all but the first file
            copyfileobj(infile, outfile)  # write the remaining lines
Implementing XPath for Wireless Devices
In this two-part article, I will discuss XPath and its use in Wireless applications. In the first part, we will consider applying XPath to both XML and its Wireless counterpart WBXML (Wireless Binary XML). We will also discuss the design of an XPath processing engine suitable for small wireless devices. This discussion will focus on XPath 1.0, which is a W3C recommendation.
XPath allows us to process XML and search for specific nodes in an XML document. To a certain extent, XPath is to XML what SQL is to relational databases, although the forthcoming W3C XML Query language takes that further. This section gives a brief overview of XPath.
Listing 1 is a Web Services Definition Language (WSDL) file. WSDL is an XML-based grammar to describe Web Service interfaces. The root element in Listing 1 is definitions. It has a name attribute, which indicates the name of the web service that it is defining. In our case it is BillingService.
While processing a WSDL file, we want to extract the name of a web service. The following simple XPath query will extract the name:
./child::node()[1]/attribute::name
You can see the XPath query has three sections, separated by slashes. The first section is only a dot (.). A dot at the start of a query means you want to start your search from the beginning of the XML document.
The second section (child::node()[1]) says: Find the first child node. The combined effect of the first and second portions (./child::node()[1]) is like saying: Find the first element of this XML document.
The third section (attribute::name) says: Find the value of an attribute named name. The combined effect of the whole query is like saying: Find the value of the name attribute of the first element in the XML document. Applying this query to the WSDL file of Listing 1 returns BillingService.
The following query is similar to the one discussed above and finds the target namespace of this WSDL file:
./child::node()[1]/attribute::targetNamespace
Now let's take this idea further and see a more complex example. Have a look at the following XPath query and try to translate it into simple English:
./child::node()[1]/child::service/child::port/child::soap:address/attribute::location
In simple English, it says:
Find the first child node of the XML document, then find a service child element within the first child, then find a port element inside the service element, then look inside the port element for an address element that belongs to the SOAP namespace and return the value of its location attribute.
Location step, context node, node-set and location path:
The parts of an XPath query are evaluated sequentially. Each part is referred to as a location step. The results of the first location step are given to the next location step and so on till you reach the end of the query. The result of each location step is a set of nodes referred to as a node-set. Each node in a node-set is known as a context-node during evaluation.
In this way, every location step produces a node-set, which is handed over to the next location step. Each node in the node-set will be treated as a context-node and another node-set will be evaluated by the next location step and so on. A series of location steps forms the complete location path.
Axes and node-test:
Specifying child after a slash means you want to look for child elements. Specifying attribute after a slash means you are looking for attributes. The child and attribute keywords are called axes. An axis specifies the direction of your search. Other XPath axes include descendant (meaning you are interested not only in immediate child elements but also in grandchildren, their children, and so on), parent (immediate parent), ancestor (parent, grandparent, great-grandparent, and so on), etc.
You will also notice that every axis is followed by a double colon mark (::, referred to as a resolution operator in programming languages), which is followed by the name of the node that we are looking for (targetNamespace in attribute::targetNamespace). The name that appears after the resolution operator is called a node-test (a test that our required nodes will pass).
The query ./child::node()[1]/attribute::name that we considered earlier can also be written as ./child::node()[1]/@name. Here we have abbreviated attribute::name as @name. @ is the short name of the attribute axis.
Another common abbreviation is the omission of the child axis. You don't need to specify the child axis at all because XPath takes child as the default axis. Therefore, the following two queries will produce the same result:
./child::node()[1]/child::service/child::port/child::soap:address/attribute::location
./node()[1]/service/port/soap:address/attribute::location
The previous queries returned values of attributes. XPath can also return XML elements. Let's apply the following XPath query to the WSDL file of Listing 1:
./node()[1]/child::message[@name="MonthNumber"]
This query looks for the first node (the definitions element) in the WSDL file. Then it looks for all children of the definitions element that have the name message. There is a special condition attached to the message elements that the query looks for ([@name="MonthNumber"]). The condition states that the message element should have a name attribute whose value is MonthNumber.
There are two
message elements in the WSDL file of Listing 1. The query
returns the
message element whose
name attribute is
MonthNumber.
What if we want to find a message element with a dynamically evaluated name?
For example, in WSDL processing applications, we often need to look for a
message element, whose
name attribute matches the
value of
message attribute of an
input or
output element.
In Listing 1, we have
one
input element and two
output elements, each of
which has a message attribute. The value of each of the
attributes matches the
name attribute of a
element.
All
input and
output elements of Listing 1 reside inside two
operation elements. Each
operation element has a
name attribute. The WSDL processing scenario provides us with the
name of an operation element and asks us to find the message element associated
with its input or output.
The following query performs this job for us, broken over two lines for clarity:
./node()[1]/child::message[@name=//node()/portType/*[ @name="getBillForMonth"]/child::*/@message]
Compare this query with the last one. You will find that the only difference is that the hard coded value "MonthNumber" has been replaced with another XPath query:
//node()/portType/*[@name="getBillForMonth"]/child::*/@message
This example shows that XPath syntax allows us to combine multiple XPath queries into a single complex query. This helps us in many real life applications where the XML processing task requires finding particular information and then, depending on the information read, jumping to another location in the same XML file.
XML works for wireless applications the same way it works for the wired Web. The WSDL processing requirements inside a wireless client are the same as those inside desktop clients.
However, in order to address the bandwidth limitation issues, WAP Forum has proposed a compressed version of XML called Wireless Binary XML (WBXML), which is in use inside WAP-enabled devices.
As we have already seen, XPath is designed to operate on the tree-like XML structure. WBXML maintains the same abstract XML structure, although its concrete expression differs, so there is no conceptual difference in applying an XPath query to an XML file or its WBXML counterpart.
In order to demonstrate this point, let's examine how an XML document is transformed to WBXML.
Consider the simple XML file of Listing 2. We have kept it very simple in order to cover only the basics of WBXML.
A WBXML file starts with a WBXML version declaration. We are using WBXML version 1.3, which is represented by a single byte 0x03. The full translation is presented in Listing 3)
The next byte represents the DTD. We are working with an unknown DTD, which is represented by 0x01.
The third byte is for the character set. We are using the US ASCII character set, represented by 0x03 (third byte in WBXML stream of Listing 3).
The fourth byte is the length of the string table, a technique used to reuse strings that occur at multiple places in an XML file. String tables can help in better compression by writing a string in the table only once and then referring to it from various places. We do not have any strings occurring more than once in our simple XML file, so we will not use a string table; we record its length as 0x00 (fourth byte of WBXML file).
Next is the operation element. We are using the following bytes (called tag tokens) to represent our XML elements:
WBXML assumes that all concerned parties know the meaning of tag tokens, in the same as all concerned understand (or don't) the meaning of XML tags. So we can directly use the tag token and assume that a tag token can be translated back to the name of an element wherever needed.
Therefore, our next byte is 0x05 (operation element) followed by 0x06 (input element).
Next is the string "My input" that needs to appear inline in a WBXML file. 0x03 marks the start of an inline string. A NULL character (0x00) marks the end of the string.
Notice that the same byte value 0x03 was used earlier to mark the WBXML version and character set declarations. This does not produce any ambiguity, because the first three bytes of a WBXML file are reserved for XML, DTD, and charset declarations. The actual WBXML data payload begins with the fourth byte.
0x01 marks all end tags. The combination of start tags (0x05, 0x06 and 0x07) and end tags (0x01) produce the same nested structure as that of the original XML file.
The rest of the WBXML file follows the same logic. Readers can trace Listing 3 to follow the sequence. This example illustrates that WBXML reduces the size of an XML fileq without changing its structure.
We will now consider the design of a small footprint XPath engine which operates on top of the DOM. DOM implementations are available for major wireless platforms, such as WinCE and Java 2 Micro Edition (J2ME).
Consider the two pseudo-code classes XPathExpression (Listing 4) and LocationStep (Listing 5). These two classes together implement an XPath engine.
XPathExpression's constructor takes two string type arguments: an XML file and an XPath expression. It loads the XML file into a DOM object. It also tokenizes the XPath expression string into an array of smaller strings, where each small string represents an XPath location step. While tokenizing, it also translates abbreviated XPath syntax into normal unabbreviated expressions.
The next step is to execute a loop in which each location step string is
passed to the
LocationPath constructor, one after the other. After
the creation of a
LocationPath object, its
getResult
method is called.
Every call to the
getResult method needs an array of nodes (a
node-set) as an argument. The
getResult method will carry out the
search on each node in the node-set and return another array of nodes (another
node-set) that matches the search requirements of the location path.
The resulting node-set from each location step is fed to the next location step, until all location steps in the XPath query are consumed.
Now let's have a look at what's happening inside the
getResult
method of the
LocationPath class. The constructor
has already resolved the location step into axis and node-test strings. The
getResult method will detect which axis we have to work with and
call the appropriate sequence of operations. We have only implemented the child
and descendant axes as a sample guideline. Parent and ancestor axes can also be
implemented in a similar manner.
In this article I presented various XPath queries and discussed how to apply them to XML and WBXML formats. I also presented a basic design of an XPath engine. Next time, we will take this design further and include support for more XPath features.
XML.com Copyright © 1998-2006 O'Reilly Media, Inc. | http://www.xml.com/lpt/a/978 | CC-MAIN-2014-52 | refinedweb | 2,075 | 63.8 |
In operating systems earlier than Windows Vista, the WMI Service maintains text log files. Starting with Windows Vista, these files no longer exist in the operating system. For more information, see WMI Log Files and Logging WMI Activity.
The Wbemcore.log file contains trace messages generated by the WMI service. For example, it records logon attempts as well as capturing why WMI failed to instantiate a class for a provider.
The following table lists some common messages that can occur and offers possible causes and solutions.
This message indicates which credentials and RPC values were used to connect to WMI. This can be useful when Wbemprox.log contains a connection failure message that was caused by bad credentials.
This message is also logged in the Windows Event log and contains more information that may be useful.
The Wbemess.log file contains all warning and error messages related to the WMI event subsystem. Those errors that require administrator attention are also logged in the Windows NT Event log. Warning messages are only logged in the Wbemess.log file when the logging level has been set to verbose.
The following table lists some common problems that can occur and offers possible causes and solutions.
The Mofcomp.log file contains compilation details from the
MOF compiler.
The compiler writes two messages to the log when it compiles your MOF file. The first entry identifies the MOF file being compiled. The second entry shows that the compiler successfully parsed and committed your MOF entries in the WMI repository, or that a syntax or WMI error occurred. Be aware that the compiler returns on the first error it encounters.
The following log entry shows that an illegal constant value was specified on line 15 of the MOF file.
carpool.mof (15) : error SYNTAX 0X006c:
Illegal constant value.
(Numeric value out of range or strings without quotes)
When the MOF file is syntactically correct, the compiler attempts to commit your MOF entries in the CIM repository. If an error occurs, the compiler logs an entry that contains the WMI HRESULT value and a brief description of the error. The description also contains the line number of the object that failed.
For example, the following log entry shows that object 2 of the MOF file contains a type mismatch.
An error occurred while creating object 2 defined
on lines 9 - 15: 0X80041005:
The value specified for property 'parentnodekey' conflicts
with the declaration of the property.
The Wmiadap.log trace log file contains error messages related to the AutoDiscoveryAutoPurge (ADAP) process. For more information, see Performance Libraries and WMI and ADAP Event Messages Before Vista.
This is a WMI connection error that you receive for one of the following reasons:
Trace information and error messages for the
provider framework and the
Win32 Provider. The Provider Framework classes are obsolete and not recommended. For more information about the preferred ways to write a WMI COM provider or a WMI provider that uses the .NET FrameworkSystem.Management namespaces, see Using WMI.
For example, the following message indicates that the provider framework was unable to set the AdapterCompatibility property. The HRESULT indicates that the AdapterCompatibility property was not found in the
Win32_DesktopMonitor class.
ERROR CInstance(Win32_DesktopMonitor)::SetCHString(AdapterCompatibility) FAILED!error# 80041002
Trace information that is typically not used for diagnostics.
The two most common entries in Winmgmt.log are the following:
The core was successfully shut down and a WMI HRESULT of 0 was returned.
The program was unable to shut down the core because the program was still in use. An HRESULT that is non-zero is returned.
Send comments about this topic to Microsoft
Build date: 6/15/2009 | http://msdn.microsoft.com/en-us/library/aa827355(VS.85).aspx | crawl-002 | refinedweb | 610 | 57.16 |
a timer to implement animated shooting.
The CannonField now has shooting capabilities.
include Math
We() is the slot that moves the shot, called every 5 milliseconds when the Qt::Timer fires.
Its tasks are to compute the new position, update the screen with the shot in the new position, and if necessary, stop the timer.
First we make a Qt::Region that holds the old shotRect(). A Qt::Region,.
def shotRect() gravity = 4.0 time = @timerCount / 20.0 velocity = @shootForce radians = @shootAngle * 3.14159265 / 180.0 velx = velocity * cos(radians) vely = velocity * sin(radians) x0 = (@barrelRect.right() + 5.0) * cos(radians) y0 = (@barrelRect.right() + 5.0) * sin(radians) x = x0 + velx * time y = y0 + vely * time - 0.5 * gravity * time * time result = Qt::Rect.new(0, 0, 6, 6) result.moveCenter(Qt::Point.new(x.round, height() - 1 - y.round)) return result end
This private function calculates the center point of the shot and returns the enclosing rectangle of the shot. It uses the initial cannon force and angle in addition to timerCount, which increases as time passes.
The formula used is the standard Newtonian formula for frictionless movement in a gravity field. For simplicity, we've chosen to disregard any Einsteinian effects.
We calculate the center point in a coordinate system where y coordinates increase upward. After we have calculated the center point, we construct a Qt::Rect with size 6 x 6 and move its center point to the point calculated above. In the same operation we convert the point into the widget's coordinate system (see The Coordinate System)..
Kanuuna voi laukaista, mutta ei ole mitään mitä ampua.
Tee laukauksesta täytetty ympyrä. [Vihje: Qt::Painter::drawEllipse() saattaa auttaa.]
Vaihda kanuunan väriä kun laukaus on ilmassa. | https://techbase.kde.org/index.php?title=Development/Tutorials/Qt4_Ruby_Tutorial/Chapter_11/fi&direction=prev&oldid=73667 | CC-MAIN-2016-18 | refinedweb | 288 | 52.56 |
This is my attempt to program the Mandelbrot set in Python 3.5 using the Pygame module.
import math, pygame
pygame.init()
def mapMandelbrot(c,r,dim,xRange,yRange):
x = (dim-c)/dim
y = (dim-r)/dim
#print([x,y])
x = x*(xRange[1]-xRange[0])
y = y*(yRange[1]-yRange[0])
x = xRange[0] + x
y = yRange[0] + y
return [x,y]
def checkDrawBox(surface):
for i in pygame.event.get():
if i.type == pygame.QUIT:
pygame.quit()
elif i.type == pygame.MOUSEBUTTONDOWN:
startXY = pygame.mouse.get_pos()
boxExit = False
while boxExit == False:
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONUP:
boxExit = True
if boxExit == True:
return [startXY,pygame.mouse.get_pos()]
pygame.draw.rect(surface,[255,0,0],[startXY,[pygame.mouse.get_pos()[0]-startXY[0],pygame.mouse.get_pos()[1]-startXY[1]]],1)
pygame.display.update()
def setup():
dimensions = 500
white = [255,255,255]
black = [0,0,0]
checkIterations = 100
canvas = pygame.display.set_mode([dimensions,dimensions])
canvas.fill(black)
xRange = [-2,2]
yRange = [-2,2]
xRangePrev = [0,0]
yRangePrev = [0,0]
newxRange = [0,0]
newyRange = [0,0]
while True:
if not ([xRange,yRange] == [xRangePrev,yRangePrev]):
draw(dimensions, canvas, xRange, yRange, checkIterations)
pygame.display.update()
xRangePrev = xRange
yRangePrev = yRange
box = checkDrawBox(canvas)
if box != None:
maxX = max([box[0][0],box[1][0]])
maxY = max([box[0][1],box[1][1]])
newxRange[0] = mapMandelbrot(box[0][0],0,dimensions,xRange,yRange)[0]
newxRange[1] = mapMandelbrot(box[1][0],0,dimensions,xRange,yRange)[0]
newyRange[0] = mapMandelbrot(0,box[0][1],dimensions,xRange,yRange)[1]
newyRange[1] = mapMandelbrot(0,box[1][1],dimensions,xRange,yRange)[1]
xRange = newxRange
yRange = newyRange
def draw(dim, surface, xRange, yRange, checkIterations):
for column in range(dim):
for row in range(dim):
greyVal = iteration(0,0,mapMandelbrot(column,row,dim,xRange,yRange),checkIterations,checkIterations)
surface.set_at([dim-column,row],greyVal)
def iteration(a, b, c, iterCount, maxIter):
a = (a*a) - (b*b) + c[0]
b = (2*a*b) + c[1]
iterCount = iterCount - 1
if iterCount == 0:
return [0,0,0]
elif abs(a+b) > 17:
b = (iterCount/maxIter)*255
return [b,b,b]
else:
return iteration(a,b,c,iterCount,maxIter)
setup()
Fascinating bug -- it literally looks like a squashed bug :)
The problem lies in the two lines:
a = (a*a) - (b*b) + c[0] b = (2*a*b) + c[1]
You are changing the meaning of
a in the first line, hence using the wrong
a in the second.
The fix is as simple as:
a, b = (a*a) - (b*b) + c[0], (2*a*b) + c[1]
which will cause the same value of
a to be used in calculating the right hand side.
It would be interesting to work out just what your bug has produced. Even though it isn't the Mandelbrot set, it seems to be an interesting fractal in its own right. In that sense, you had a very lucky bug. 99% percent of the times, bugs lead to garbage, but every now and then they produce something quite interesting, but simply unintended.
On Edit:
The Mandelbrot set is based on iterating the complex polynomial:
f(z) = z^2 + c
The pseudo-Mandelbrot set which this bug has produced is based on iterating the function
f(z) = Re(z^2 + c) + i*[2*Re(z^2 + c)*Im(z) + Im(c)]
where
Re() and
Im() are the operators which extract the real and imaginary parts of a complex number. This isn't a polynomial in
z, though it is easy to see that it is a polynomial in
z,z* (where
z* is the complex conjugate of
z). Since it is a fairly natural bug, it is almost certain that this has appeared somewhere in the literature on the Mandelbrot set, though I don't remember ever seeing it. | https://codedump.io/share/PhdlLUFwP4Um/1/mandelbrot-set-displays-incorrectly | CC-MAIN-2021-17 | refinedweb | 630 | 52.49 |
+++ This bug was initially created as a clone of Bug #33757 +++
This branch adds a Stats interface. It doesn't yet solve Bug #24307 (it doesn't include the actual match rules), but it does include various potentially-interesting numbers.
In the longer term Colin wants such things to be on a secondary /var/run/dbus/system_bus_management_socket or some such, but splitting off interfaces now seems like a good start on that.
Created attachment 43106 [details] [review]
Related to Bug #24307.
Created attachment 43107 [details] [review]
configure.in: add --enable-stats
Created attachment 43108 [details] [review]
[3] Add a stub .Debug.Stats interface if --enable-stats
There are no actual statistics yet, just a count of how many times the
method has been called, and (for the per-connection stats) the unique name.
Created attachment 43109 [details] [review]
[4] DBusMemPool: add usage stats
Created attachment 43110 [details] [review]
[5] DBusList: add usage stats
Created attachment 43111 [details] [review]
[6] BusConnections: add usage stats for well-known names, match rules
Created attachment 43112 [details] [review]
[7] DBusConnection, DBusTransport: add queue statistics
Created attachment 43113 [details] [review]
[8] Add an initial round of stats to the Stats interface
Created attachment 43114 [details] [review]
[9] Also record peak values for queued bytes/fds in connection stats
Created attachment 43115 [details] [review]
[10] Include size of link cache in per-connection statistics
Created attachment 43259 [details] [review]
[9 v2] Also record peak values for queued bytes/fds in connection stats
Now actually initialized to 0...
Created attachment 43674 [details] [review]
Include global peaks for message sizes, recipients counts and per interfaces
Limits:
- some leaks if malloc fails in bus_stats_handle_get_stats()
- is there a way to find the message size without using dbus_message_marshal()?
- we could add cumulative sizes per interface
I used this Python script to generate the following spreadsheet:
Review of attachment 43106 [details] [review]:
Seems ok, not wearing any dev hat.
I'd say that in case "%d" turns out to be an important bit, DBUS_NUM_MESSAGE_TYPES can be used to identify unknown types.
Review of attachment 43108 [details] [review]:
the patch seems OK, without any dev hat, again.
::: bus/Makefile.am
@@ +74,3 @@
signals.h \
+ stats.c \
+ stats.h \
Wouldn't it be better to add only when enabled?
In driver.c it's published only if enabled anyway.
::: bus/driver.c
@@ +31,3 @@
#include "selinux.h"
#include "signals.h"
+#include "stats.h"
Same, include only when enabled?
(In reply to comment #13)
> I'd say that in case "%d" turns out to be an important bit,
> DBUS_NUM_MESSAGE_TYPES can be used to identify unknown types.
What I meant by the commit message is: it's not a regression that we no longer stringify unknown message types via "%d" after this patch, because we don't allow adding match rules with such message types anyway, so that code is never (meant to be) reached.
(In reply to comment #14)
> ::: bus/Makefile.am
> @@ +74,3 @@
> signals.h \
> + stats.c \
> + stats.h \
>
> Wouldn't it be better to add only when enabled?
stats.c is entirely #ifdef DBUS_ENABLE_STATS, so it'll compile to nothing when disabled; I think that's clearer than putting conditionals in the Makefile.am, but if a reviewer disagrees, this would also be fine:
if DBUS_ENABLE_STATS
BUS_SOURCES += stats.c stats.h
endif
(automake is clever enough to distribute stats.[ch] whenever they're needed by any conditional.)
> ::: bus/driver.c
> @@ +31,3 @@
> #include "selinux.h"
> #include "signals.h"
> +#include "stats.h"
>
> Same, include only when enabled?
It's one more #ifdef block for no real benefit (apart from an insignificant reduction in time to compile this file), but if that style would be preferred, it's an easy change.
(In reply to comment #16)
> if a reviewer disagrees, this would also be fine:
>
> if DBUS_ENABLE_STATS
This would also require adding to configure.ac:
AM_CONDITIONAL([DBUS_ENABLE_STATS], [test "x$enable_stats" = xyes])
Again, easy to do if preferred.
The rest of the patches looks good (yet again, no hat).
Review of attachment 43674 [details] [review]:
I'd rather not apply this one right now.
::: bus/connection.c
@@ +31,3 @@
#include "expirelist.h"
#include "selinux.h"
+#include <stdlib.h>
Normally avoided outside sysdeps.[ch]
@@ +85,3 @@
+ * MessagesCount:u}
+ */
+ DBusHashTable *many_recipient_messages;
This would be much simpler as four hash tables, each { string => uint } - then you could use the D-Bus equivalent of GUINT_TO_POINTER for the values.
Not correctly documented, the outer keys are now actually "signal com.example.Badgerable.Badgered" or whatever.
@@ +501,3 @@
+
+ if (connections->many_recipient_messages == NULL)
+ goto failed_6;
failed_6? Seriously? This constructor needs refactoring :-)
I find that a better pattern is:
failed:
if (a != NULL)
thing_free (a);
if (b != NULL)
other_thing_unref (b);
...
@@ +2494,3 @@
+ ret = dbus_message_marshal (message, &buffer, &size);
+ if (ret)
+ free (buffer);
Doesn't dbus_message_marshal return a dbus_malloc'd buffer?
Also, adding _dbus_message_get_size() would be better, IMO - even if it's #ifdef DBUS_ENABLE_STATS.
@@ +2517,3 @@
+ {
+ if (!_dbus_string_append_printf (&iface_member, "%s %s.%s",
+ dbus_message_type_to_string (type), interface, member))
interface could be NULL. member could be NULL if it's a reply. Some platforms' printf implementations crash on NULL strings (coping gracefully with NULL is a nonstandard GNU extension).
@@ +2537,3 @@
+ goto free_string;
+ inner = _dbus_hash_table_new (DBUS_HASH_STRING, NULL,
+ (DBusFreeFunction)dbus_free);
Is dbus_free not a DBusFreeFunction? :'-(
::: bus/stats.c
@@ +180,3 @@
+static dbus_bool_t
+asv_add_asasvu (DBusMessageIter *iter,
I haven't reviewed this in detail at all, because using a{su} would make most of this complexity unnecessary.
Patches 1-10 merged, based on review (+ acknowledgement IRL) from Cosimo.
Retitling this bug for Alban's additional patch.
Gaming bugzilla isn't nice.
Use of freedesktop.org services, including Bugzilla, is subject to our Code of Conduct. | https://bugs.freedesktop.org/show_bug.cgi?format=multiple&id=34040 | CC-MAIN-2017-34 | refinedweb | 933 | 57.47 |
Causality: The relationship between cause and effect [OED].
Supporting a Distributed System can be hard. When the system has many moving parts it is often difficult to piece together exactly what happened, at what time, and within which component or service. Despite advances in programming languages and runtime libraries the humble text format log file is still a mainstream technique for recording significant events within a system whilst it operates. However, logging libraries generally only concern themselves with ensuring that you can piece together the events from a single process; the moment you start to invoke remote services and pass messages around the context is often lost, or has to be manually reconstructed. If the recipient of a remote call is a busy multi-threaded service then you also have to start picking the context of interest out of all the other contexts before you can even start to analyse the remote flow.
This article will show one mechanism for overcoming this problem by borrowing a hidden feature of DCOM and then exposing it using an old design pattern from Neil Harrison.
Manually stitching events together
The humble text log file is still the preferred format for capturing diagnostic information. Although some attempts have been made to try and use richer encodings such as XML, a simple one line per event/fixed width fields format is still favoured by many [Nygard].
For a single-process/single-threaded application you can get away with just a timestamp, perhaps a severity level and the message content, e.g.
2013-01-01 17:23:46 INF Starting something rather important
Once the number of processes starts to rack up, along with the number of threads you need to start including a Process ID (PID) and Thread ID (TID) too, either in the log file name, within the log entries themselves, or maybe even in both, e.g.
2013-01-01 17:23:46 1234 002 INF Starting something rather important
Even if you are mixing single-threaded engine processes with multi-threaded services it is still desirable to maintain a consistent log file format to make searching and parsing easier. For the sake of this article though, which is bound by the constraints of print based publishing, I’m going to drop some of the fields to make the log output shorter. The date, PID and severity are all tangential to most of what I’m going to show and so will be dropped leaving just the time, TID and message, e.g.
17:23:46 002 Starting something rather important
Assuming you can locate the correct log file to start with, you then need to be able to home in on the temporal set of events that you're interested in. One common technique for dealing with this has been to manually annotate log lines with the salient attributes of the task inputs, e.g.
17:23:45 002 Handling request from 192.168.1.1 for oldwoodc
17:23:46 002 Doing other stuff now
17:23:47 002 Now doing even more stuff
. . .
17:23:59 002 Finished handling request from 192.168.1.1
If your process is single-threaded you can probably get away with putting the key context details on just the first and last lines, and then just assume that everything in between belongs to the same context. Alternatively you can try and ‘remember’ to include the salient attributes in every log message you write.
17:23:45 002 Handling request from 192.168.1.1
17:23:46 002 Doing other stuff now (192.168.1.1)
17:23:47 002 [192.168.1.1] Now doing even more stuff
. . .
17:23:59 002 Finished handling from 192.168.1.1
Either way there is too much manual jiggery-pokery going on and as you can see from the last example you have to rely on all developers using a consistent style if you want a fighting chance of filtering the context successfully later.
Diagnostic contexts
The first problem we want to solve is how to ‘tag’ the current context (i.e. a single thread/call stack in the first instance) so that whenever we go to render a log message we can automatically annotate the message with the key details (so we can then grep for them later). More importantly, we’d like to do this in such a way that any code that is not already aware of our higher-level business goals remains blissfully unaware of them too.
In Pattern Languages of Program Design Vol. 3, Neil Harrison presents a number of logging related design patterns [Harrison], one of which is called Diagnostic Context. In it he describes a technique for associating arbitrary data with what he calls a ‘transaction’. The term transaction is often heavily associated with databases these days, but the transactions we are concerned with here are on a much larger scale, e.g. a single user’s ‘session’ on a web site.
A distributed system would therefore have many diagnostic contexts which are related somehow. The connection between these could be viewed as a parent/child relationship (or perhaps global/local). There is no reason why a context couldn’t store different ‘categories’ of tags (such as problem domain and technical domain), in which case the term namespace might be more appropriate. However this article is not so much concerned with the various scopes or namespaces that you might create to partition your contexts but more about how you go about relating them. As you will see later it is a specific subset of the tags that interests us most.
Although you could conceivably maintain one context per task that acquires more and more tags as you traverse each service layer, you would in effect be creating a Big Ball of Mud. Moreover, the more tags you create the more you'll have to marshal and ultimately the more you'll have to write to your log file and then read again when searching. Although the I/O costs should be borne in mind, the readability of your logs is paramount if you're to use them effectively when the time comes. And so multiple smaller contexts are preferred, with thread and service boundaries providing natural limits.
Implementing a simple diagnostic context
The implementation for a Diagnostic Context can be as simple as a map (in C++) or a Dictionary (in C#) which stores a set of string key/value pairs (a tag) that relates to the current operation. The container will almost certainly utilise thread-local storage to allow multiple contexts to exist simultaneously for the multiple threads within the same process.
It should be noted that some 3rd party logging frameworks already have support for diagnostic contexts built-in. However, they may not be usable in the way this article suggests and so you may still need an implementation like the simple one shown below.
At the entry point to our ‘transaction’ processing code we can push the relevant tags into the container for use later. By leveraging the RAII idiom in C++ or the Dispose pattern in C# we can make the attaching and detaching of tags automatic, even in the face of exceptions. For example in C# we could write the code in Listing 1.
Behind the scenes the constructor adds the tag to the underlying container and removes it again in the destructor/Dispose method. The need for the code to be exception safe is important as we don’t want the tags of one context to ‘leak’ by accident and infect the parent context because it would cause unnecessary pollution later when we are searching.
As Harrison’s original pattern suggests we can create contexts-within-contexts by using stack-like push/pop operations instead of just a simple add/remove. However you still want to be careful you don’t overload the meaning of any tag (e.g. ‘ID’) that will be used across related scopes as, once again, it will only create confusion later.
When the time finally comes to render a log message we can extract the set of tags that relate to this thread context, format them up nicely and append them to the caller’s message behind the scenes, as in Listing 2.
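A minimal sketch of what such a Log method might look like (the original Listing 2 is not reproduced here; the exact line format and the use of `Console.WriteLine` as the writer are assumptions, and the method relies on `System` and `System.Threading`):

```csharp
public static void Log(string format, params object[] args)
{
    var message = string.Format(format, args);
    var tags = Context.Format();            // e.g. "[ID=1234]"

    // Only annotate the message when the current thread has tags attached.
    if (tags.Length != 0)
        message = message + " " + tags;

    Console.WriteLine("{0:HH:mm:ss} {1:D3} {2}",
        DateTime.Now, Thread.CurrentThread.ManagedThreadId, message);
}
```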
The example above would generate a log line like this:
17:23:46 002 Doing other stuff now [ID=1234]
The statement Context.Format(); hopefully shows that I've chosen here to implement the diagnostic context as a static Façade. This is the same façade that the constructor and destructor of DiagnosticContextTag would have used earlier to attach and detach the attributes. In C# the diagnostic context could be implemented like Listing 3.
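One possible shape for the façade (a sketch only — the article's actual Listing 3 is not shown here, and the class name `Context` is inferred from the `Context.Format()` call above):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class Context
{
    // One dictionary of tags per thread, so concurrent contexts
    // cannot interfere with one another.
    [ThreadStatic]
    private static Dictionary<string, string> s_tags;

    internal static void Attach(string key, string value)
    {
        if (s_tags == null)
            s_tags = new Dictionary<string, string>();

        s_tags.Add(key, value);
    }

    internal static void Detach(string key)
    {
        if (s_tags != null)
            s_tags.Remove(key);
    }

    public static string Format()
    {
        if (s_tags == null || s_tags.Count == 0)
            return string.Empty;

        var pairs = s_tags.Select(p => p.Key + "=" + p.Value);
        return "[" + string.Join(", ", pairs) + "]";
    }
}
```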
The Attach/Detach methods here have been marked internal to show that tags should only be manipulated via the public DiagnosticContextTag helper class. (See Listing 4.)
Distributed COM/COM+
The second aspect of this mechanism comes from DCOM/COM+. Each call-chain in DCOM is assigned a unique ID (a GUID in this case) called the Causality ID [Ewald]. This plays the role of the Logical Thread ID as the function calls move across threads, outside the process to other local processes and possibly even across the wire to remote hosts (i.e. RPC). In DCOM this unique ID is required to stop a call from deadlocking with itself when the logical call-chain suddenly becomes re-entrant. For example Component A might call Component B (across the wire) which locally calls C which then calls all the way back across the wire into A again. From A’s perspective it might seem like a new function call but via the Causality ID it can determine that it’s actually just an extension of the original one.
This Causality ID is allocated by the COM infrastructure and passed around transparently – the programmer is largely unaware of it.
The primary causality
The Causality mechanism is therefore nothing more than a combination of these two ideas. It is about capturing the primary tags used to describe a task, action, operation, etc. and allowing them to be passed around, both within the same process and across the wire to remote services in such a way that it is mostly transparent to the business logic.
As discussed earlier, the reason for distilling the entire context down into one or more simple values is that it reduces the noise as the processing of the task starts to acquire more and more ancillary tags as you move from service to service. The local diagnostic context will be useful in answering questions within a specific component, but the primary causality will allow you to relate the various distributed contexts to one another and navigate between them.
A GUID may be an obvious choice for a unique causality ID (as DCOM does), and failing any alternatives it might just have to do. But they are not very pleasing to the eye when browsing log data. If the request is tracked within a database via an Identity column then that could provide a succinct integral value, but it’s still not easy to eyeball.
A better choice might be to use some textual data from the request itself, perhaps in combination with an ID, such as the name of the customer/user invoking it. The primary causality could be a single compound tag with a separator, e.g. 1234/Oldwood/192.168.1.1, or it could be stored as separate tags, e.g. ID=1234, LOGIN=Oldwood, IP=192.168.1.1. Ultimately it’s down to grep-ability, but the human visual system is good at spotting patterns too and if it’s possible to utilise that as well it’s a bonus.
Putting all this together so far, along with a static helper, Causality.Attach(), to try and reduce the client-side noise, we could write the single-threaded, multi-step workflow (a top-level request that invokes various sub-tasks) in Listing 5.
This could generate the output in Figure 1.
The decision on whether to extend the primary causality with the TaskId or just let it remain part of the local context will depend on how easy it is for you to piece together your workflow as it crosses the service boundaries.
Marshalling the primary causality across the wire
We’ve removed much of the tedium associated with logging the context for a single-threaded operation, but that leaves the obvious question – how do you pass that across the wire to another service? We don’t usually have the luxury of owning the infrastructure used to implement our choice of transports but there is nothing to stop us differentiating between the logical interface used to make a request and the wire-level interface used to implement it. The wire-level interface may well already be different if we know of a more efficient way to transport the data (e.g. compression) when serializing. If we do separate these two concerns we can place our plumbing right on the boundary inside the proxy where the client can remain unaware of it, just as they are probably already unaware there is an RPC call in the first place.
The logical interface in Listing 6 describes the client side of an example service to request the details of a ‘bug’ in an imaginary bug tracking system.
The client would use it like so:
// somewhere in main()
var service = BugTrackerProxyFactory.Create();
. . .
// somewhere in the client processing
var bug = service.FetchBug(bugId);
What we need to do when passing the request over the wire is to tag our causality data on the end of the existing parameters. To achieve this we have a separate wire-level interface that ‘extends’ the methods of the logical one:
interface IRemoteBugTrackerService
{
    Bug FetchBug(int id, List<Tag> causality);
}
Then, inside the client proxy we can hoist the primary causality out of the diagnostic context container and pass it across the wire to the service’s remote stub (Listing 7).
We then need to do the reverse (inject the primary causality into the new call stack) inside the remote stub on the service side (Listing 8).
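Listings 7 and 8 are not shown here either. In outline — with the helper names on the Causality façade assumed, and leaning on the Bug/Tag types from the article — the proxy and stub bookend the wire call something like this:

```csharp
// Sketch only: hoist the primary causality on the client side and
// re-inject it on the service side. Causality.ExtractPrimary() and
// Causality.Attach() are assumed helpers on the diagnostic context.
using System.Collections.Generic;

public class BugTrackerProxy : IBugTrackerService
{
    private readonly IRemoteBugTrackerService remote;

    public BugTrackerProxy(IRemoteBugTrackerService remote)
    {
        this.remote = remote;
    }

    public Bug FetchBug(int id)
    {
        var causality = Causality.ExtractPrimary();
        return remote.FetchBug(id, causality);   // across the wire
    }
}

public class RemoteBugTrackerService : IRemoteBugTrackerService
{
    private readonly BugTrackerServiceImpl impl = new BugTrackerServiceImpl();

    public Bug FetchBug(int id, List<Tag> causality)
    {
        using (Causality.Attach(causality))      // new call stack, same causality
        {
            return impl.FetchBug(id);
        }
    }
}
```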
In this example the client proxy (BugTrackerProxy) and service stub (RemoteBugTrackerService) classes merely provide the mechanism for dealing with the non-functional data. Neither the caller nor the service implementation class (BugTrackerServiceImpl) is aware of what’s going on behind their backs.
In fact, as a double check that concerns are correctly separated, we should be able to invoke the real service implementation directly instead of the client proxy and still get the same primary causality appearing in our log output:
//var service = BugTrackerClientFactory.Create();
var service = new BugTrackerServiceImpl();
. . .
var bug = service.FetchBug(bugId);
Marshalling the primary causality to other threads
Marshalling the primary causality from one thread to another can be done in a similar manner as the remote case. The main difference is that you’ll likely already be using your language and runtime library in some way to hide some of the grunge, e.g. by using a delegate/lambda. You may need to give this up slightly and provide the proverbial ‘extra level of indirection’ by wrapping the underlying infrastructure so that you can retrieve and inject the causality around the invocation of the business logic. Your calling code should still look fairly similar to before:
Job.Run(() => { backgroundTask.Execute(); });
However, instead of directly using ThreadPool.QueueUserWorkItem we have another static façade (Job) that will marshal the causality behind the delegate’s back (Listing 9).
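Listing 9 itself is omitted from this excerpt; a sketch of what the wrapper might do (with the Causality helper names assumed, as before) is:

```csharp
// Sketch only: capture the primary causality on the submitting thread,
// then re-attach it on the pool thread around the real work.
using System;
using System.Threading;

public static class Job
{
    public static void Run(Action work)
    {
        var causality = Causality.ExtractPrimary(); // on the caller's thread
        ThreadPool.QueueUserWorkItem(_ =>
        {
            using (Causality.Attach(causality))     // on the pool thread
            {
                work();
            }
        });
    }
}
```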
Marshalling via exceptions
In the previous two sections the marshalling was very much one-way because you want to unwind the diagnostic context as you return from each scope. But there is another way to marshal the causality, which is via exceptions. Just as an exception in .Net carries around a call stack for the point of the throw and any inner exceptions, it could also carry the causality too. This allows you to avoid one common (anti) pattern which is the ‘log and re-throw’ (Listing 10).
The only reason the try/catch block exists is to allow you to log some aspect of the current operation, because you know that once the call stack unwinds the context will be gone. However, if the exception captured the causality (or even the entire diagnostic context) in its constructor at the point of being thrown, this wouldn’t be necessary. You won’t have a ‘spurious’ error message either when the caller manages to completely recover from the exception using other means. (See Listing 11.)
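A sketch of the idea — the class name and the Snapshot helper are assumptions, not the article's Listing 11:

```csharp
// Sketch only: a custom exception base class that snapshots the
// causality at the point of throw, before the stack unwinds.
using System;
using System.Collections.Generic;

public class CausalityException : Exception
{
    public IList<Tag> Causality { get; private set; }

    public CausalityException(string message, Exception inner = null)
        : base(message, inner)
    {
        // Capture the tags now - by the time anyone catches this,
        // the diagnostic context will have been unwound.
        Causality = DiagnosticContext.Snapshot();
    }
}
```

The top-level handler can then log ex.Causality alongside the final message instead of every intermediate frame logging and re-throwing.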
Naturally this only works with your own exception classes, and so you might end up catching native exceptions anyway and re-throwing your own custom exception types just to capture the causality. However, you’ve avoided the potentially spurious log message, so it’s still a small net gain.
If the exception flows all the way back to the point where the transaction started you can then log the captured causality with the final exception message. In some cases this might be enough to allow you to diagnose the problem without having to find the local context where the action took place.
Tag types
So far we’ve restricted ourselves to simple string based tags. But there is no reason why you couldn’t store references to the actual business objects and use runtime type identification (RTTI) to acquire an interface for handling causality serialization and formatting. If all you're doing is rendering to a simple log file though this might be adding an extra responsibility to your domain types that you could do without.
This is one area where I've found Extension Methods in C# particularly useful because they only rely on the public state of an object and you can keep them with the infrastructure side of the codebase. The calling code can then look like this:
using (customer.TagCausality())
{
    // Do something with customer
}
The extension method can then hide the ‘magic’ tag key string:
public static class CustomerExtensions
{
    public static Tag TagCausality(this Customer customer)
    {
        return Causality.Attach("Customer", customer.Id);
    }
}
Keeping the noise down
Earlier I suggested that it's worth differentiating between the primary and ancillary tags to keep the noise level down in your logs as you traverse the various layers within the system. This could be achieved either by keeping the primary tags in a separate container that is merged in during formatting, or by marking them with a special flag. The same suggestion applies to your context interface/façade - separate method names or an additional flag, e.g.
using (Causality.AttachPrimary("ID", Id))
versus…
using (Causality.Attach("ID", Id, Causality.Primary))
versus…
using (Causality.Attach("ID", Id)) using (Causality.MarkPrimary("ID"))
Whatever you decide, it will probably be the choice that helps you keep the noise level down in your code too. Just as we wanted to keep the marshalling logic away from our business logic, we might also choose to keep our diagnostic code separate too. If you're using other tangential patterns, such as the Big Outer Try Block [Longshaw], or measuring everything you can afford to [Oldwood], you'll find that weaving this aspect into your code as well might only succeed in helping you to further bury the functional part (see Listing 12).
Most of the boilerplate code can be parcelled up into a helper method that takes a delegate/lambda so that the underlying functionality shines through again, as in Listing 13.
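Listing 13 is not reproduced here, but the shape of such a helper might be as follows — the Causality, Log and DiagnosticContext helpers are all assumed names:

```csharp
// Sketch only: parcel up the causality tagging and the Big Outer
// Try Block so the business logic shines through.
using System;

public static class Instrumented
{
    public static void Do(string taskName, string id, Action body)
    {
        using (Causality.AttachPrimary("ID", id))
        {
            try
            {
                body();    // the functional part
            }
            catch (Exception e)
            {
                // log with the context attached, then let it propagate
                Log.Error(taskName + " failed: " + e.Message + " "
                          + DiagnosticContext.Format());
                throw;
            }
        }
    }
}
```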
Testing causality interactions
Due to the simplistic nature of the way the context is implemented, it is an orthogonal concern to any business logic you might be testing. As the example implementation shows, it is also entirely stateful and so there are no mocking concerns unless you want to explicitly test that the context itself is being correctly manipulated. Given that the modus operandi of the diagnostic context is to allow you to extract the tags for your own use, the public API should already provide everything you need. This assumes of course that the operation you're invoking provides you with a ‘seam’ [Feathers] through which you can observe the effects (for example, see Listing 14).
Summary
This article demonstrated a fairly unobtrusive way to associate arbitrary tags with a logical thread of execution to aid in the diagnosis and support of system issues via log files. It then illustrated a mechanism to pass the primary tags to other threads and remote processes so that multiple distributed scopes could be related to one another. The latter part contained some ideas on ways to reduce the noise in the client code and finished with a brief comment on what effects the mechanism has on unit testing.
Acknowledgements
A large dollop of thanks goes to Tim Barrass for providing me with the impetus to write this up in the first place and for forgoing sleep to review it too. Steve Love also picked up many problems both big and small. Finally the Overload team (Frances Buontempo, Roger Orr and Matthew Jones) helped to add some spit and polish.
References
[Ewald] Tim Ewald, Transactional COM+: Building Scalable Applications
[Feathers] Michael Feathers, Working Effectively with Legacy Code
[Harrison] Pattern Languages of Program Design 3, edited by Robert C. Martin, Dirk Riehle and Frank Buschmann
[Longshaw] Andy Longshaw and Eoin Woods, ‘The Generation, Management and Handling of Errors (Part 2)’, Overload 93
[Nygard] Michael Nygard, Release It!
[OED] Oxford English Dictionary
[Oldwood] Chris Oldwood, ‘Instrument Everything You Can Afford To’
Odoo Help
current login user id
By shafeequemonp on 4/19/14, 9:07 AM • 1,549 views
How can we access the currently logged-in user id in OpenERP?
For an example, look at the code in: ./addons/base_status/base_state.py
def _get_default_user(self, cr, uid, context=None):
    """ Gives current user id
        :param context: if portal not in context returns False
    """
    if context is None:
        context = {}
    if not context or not context.get('portal'):
        return False
    return u

elaborate more where u want user_id
Generified and cached empty arrays
Caching of an empty array is a well-known pattern to improve performance. However, it is difficult to use it in generified classes.
Out of curiosity, I created a custom implementation of the array creation method based on Array.newInstance. To cache empty arrays, I use synchronized WeakHashMap, which maps any given component type to a weak reference to the corresponding empty array. This is not the fastest way, but it does not lead to memory leaks.
import java.lang.ref.Reference;
import java.lang.ref.WeakReference;
import java.lang.reflect.Array;
import java.util.Map;
import java.util.WeakHashMap;

private static final Map<Class<?>, Reference<?>> map =
        new WeakHashMap<Class<?>, Reference<?>>();

@SuppressWarnings("unchecked")
public static <T> T[] newInstance(Class<T> type,
                                  int length,
                                  boolean cache) {
    if (!cache || length != 0) {
        return (T[]) Array.newInstance(type, length);
    }
    synchronized (map) {
        Reference<?> ref = map.get(type);
        Object array = (ref == null) ? null : ref.get();
        if (array == null) {
            array = Array.newInstance(type, length);
            map.put(type, new WeakReference<Object>(array));
        }
        return (T[]) array;
    }
}
Note that the method returns an array of a given type. I used the SuppressWarnings annotation to suppress warnings about unchecked casts.
Consider the results of the test application. The left column shows the number of iterations. During each iteration, an empty array is created for every given component type. The middle column shows the average time of an empty array creation. The right column shows the average time of creating and retrieving the shared empty array. Frankly, I was surprised that the caching of a single array is slower than caching of several ones.
Cached empty arrays can be useful for the JVM too. For example, when calling varargs methods without arguments a new empty array is created every time. I do not think that anyone expects such behavior.
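The varargs point is easy to demonstrate. This snippet is mine, not from the original post: each no-argument call to a varargs method compiles to a fresh `new Object[0]` allocation, so the two results below are empty but distinct.

```java
public class EmptyVarargsDemo {

    // Returns whatever array the varargs machinery hands us.
    static Object[] grab(Object... args) {
        return args;
    }

    public static void main(String[] args) {
        Object[] first = grab();
        Object[] second = grab();
        // Both arrays are empty, yet they are distinct allocations.
        System.out.println(first.length == 0 && second.length == 0); // true
        System.out.println(first != second);                         // true
    }
}
```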
What do you think?
Measuring performance...
by vieiro - 2009-12-07 14:56

... that way is always difficult and error prone. There're rounding errors (dividing long values, for instance) and the garbage collector is possibly kicking in during the test. Try this one out:

Classes: 1
1       112292.17/463086.43 (ns)
10      718.62/2157.6 (ns)
100     636.96/1973.8 (ns)
1000    57.3/431.0 (ns)
10000   42.54/133.34 (ns)
100000  36.65/115.75 (ns)
Classes: 88
1       424.07/1180.59 (ns)
10      102.43/202.93 (ns)
100     24.09/86.4 (ns)
1000    31.45/115.06 (ns)
10000   24.75/109.38 (ns)

private static void test(Class<?>... types) {
    System.out.println("Classes: " + types.length);
    Class<?>[] empty = new Class<?>[types.length];
    int count = 1;
    test(types, count, false);
    map.clear();
    for (int i = 0; i < 6; i++, count *= 10) {
        long common = test(empty, count, false);
        long normal = test(types, count, false) - common;
        long cached = test(types, count, true) - common;
        System.out.println(count + "\t" + (normal/100.0) + "/" + (cached/100.0) + " (ns)");
    }
}

private static long test(Class<?>[] types, int count, boolean cache) {
    long time = -System.nanoTime();
    for (int repeat = 0; repeat < 100; repeat++) {
        for (int i = 0; i < count; i++) {
            for (Class<?> type : types) {
                if (type != null) {
                    newInstance(type, 0, cache);
                }
            }
        }
        time += System.nanoTime();
        time /= (long) count;
        time /= (long) types.length;
    }
    return time;
}

Amazingly, the results are all the way round! As Knuth once said: We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil
optimization
by malenkov - 2009-12-08 08:30
My code is not optimized. I spent more time writing the post in English than creating the code.
Your test code does not show the time needed for a single creation. The repeat cycle calls the newInstance method at least 100 times. My table has the "100 attempts" row for that case.
JIT
by snowbird0 - 2009-12-09 14:24

You should run your routine a number of times before you start measuring the time. Otherwise your first attempt will be before your code is compiled by the JIT - which might explain the performance penalty.
Optimized
by ricon - 2009-12-09 09:09

SoftReferences would be more appropriate here I think, since they are kept as long as there is memory available; WeakReferences will just fade away and imho are seldom appropriate for caches. See e.g.
Did you test also using concurrent threads? Just guessing: the synchronization may eat-up or reverse any performance gained by the caching.
Excuse me, but it's a mistake
by comrade_k - 2009-12-08 06:45

Excuse me, but there's a mistake in the last method. time = -System.nanoTime(); goes before the additional 100-times loop, and time += System.nanoTime(); goes inside it. Let's change it a little bit:
private static long test(Class<?>[] types, int count, boolean cache) {
    long totalTime = 0;
    for (int repeat = 0; repeat < 100; repeat++) {
        long time = -System.nanoTime();
        for (int i = 0; i < count; i++) {
            for (Class<?> type : types) {
                if (type != null) {
                    newInstance(type, 0, cache);
                }
            }
        }
        time += System.nanoTime();
        time /= (long) count;
        time /= (long) types.length;
        totalTime += time;
    }
    return totalTime / 100;
}

We'll get:

Classes: 1
1       290/2419 (ns)
10      616/1038 (ns)
100     243/336 (ns)
1000    199/109 (ns)
10000   182/107 (ns)
100000  180/103 (ns)
Classes: 88
1       183/123 (ns)
10      192/110 (ns)
100     191/111 (ns)
1000    191/109 (ns)
10000   189/109 (ns)
100000  191/111 (ns)

Let's remove the synchronized block in the newInstance method (really, we don't need this - there's rather a low probability of creating 2 different instances of an empty array, so the possible overhead of that is very small), and we'll get:

Classes: 1
1       322/2232 (ns)
10      324/1031 (ns)
100     202/211 (ns)
1000    223/67 (ns)
10000   177/70 (ns)
100000  179/69 (ns)
Classes: 88
1       184/88 (ns)
10      196/73 (ns)
100     193/72 (ns)
1000    195/73 (ns)
10000   193/72 (ns)
100000  193/73 (ns)
before and after
by malenkov - 2009-12-08 08:19

Look at the common variable. It is used to remove the time you mention.
Cached arrays in the JVM
by linuxhippy - 2009-12-07 10:30

Well, usually the arrays created for varargs are tiny - and therefore cheap to allocate. WeakReferences aren't free and add a runtime overhead, and when taking multi-threading into account you have to add synchronization / ThreadLocal storage to the game. Furthermore I don't think your test is fair - uncached arrays are also created by reflection - which usually isn't necessary ;) - Clemens
Soft vs WeakReference
by aberrant - 2009-12-07 10:28
Have you thought about using a SoftReference instead of a WeakReference? I would think you would want to cache empty arrays as long as possible. In my experimentation weakly referenced objects only live until the next garbage collection, while soft references last until the VM decides it really needs that memory for something else. Also, using a WeakHashMap with Class objects seems strange - is this to keep from holding a reference to the classloader?
agree with soft references
by malenkov - 2009-12-08 08:48
I use WeakHashMap for automatic cleaning when a custom class loader is no longer used.
generified and cached empty arrays
by mcnepp - 2009-12-07 08:40

Hello Sergey, 2 things I'd like to comment: You are using a WeakHashMap with Classes as keys. OK, this allows for custom classloaders to unload classes. But why do you also use WeakReferences as the map's values? If you'd simply put the empty arrays directly into the WeakHashMap you'd gain some performance, and the (finite) number of empty array instances would be freed upon JVM termination! Secondly, you're trying to improve 2 things at once: performance and type-safety. Unfortunately, you cannot replace the non-generic "newInstance" method with a generic one because of the lack of support for primitive types. I suggest you rename the method to "newObjectArray" and add an argument check such as
if (type.isPrimitive())
    throw new IllegalArgumentException("cannot create primitive array");

to prevent the user from receiving a surprising ClassCastException! Cheers, Gernot
primitive types are not allowed
by malenkov - 2009-12-08 09:03
Thanks! I forgot about primitives...
But why do you also use WeakReferences as the map's values?
by malenkov - 2009-12-08 01:23
Because WeakHashMap uses weak references only for keys. Values are hard references. Note that an array instance has a hard reference to the component type.
Fix in the core
by opinali - 2009-12-07 07:37
A library is the wrong place for such fix. This could be solved trivially: for each loaded Class (including primitives but not including arrays), allocate a zero-length array of that class and reference it from a private field of the reflection metadata, eg. Class.emptyArray. The memory overhead is relatively tiny (a loaded Class needs a ton of metadata), and the GC could be arranged to ignore the emptyArray field so it doesn't block class-GC but doesn't incur any overhead of using a soft reference (gazillions of soft refs in the heap = hell for the GC). Finally, update the array allocator (the bytecode "newarray" or the equivalent native code produced by the JIT) to just return type.emptyArray when length==0. (That's an extra comparison and branch in the allocator, but this is only necessary for arrays which is a small fraction of all allocations so it's not a sensible hit). The only limitation is for arrays-of-arrays, this can't be handled by the proposed design to avoid an infinite loop (as you load class X it creates a cached X[0], that creates a cached X[0][0], etc.), but that's a minor issue.
Now I know there's a compatibility catch - code relying that new X[0] returns a unique object, not == to any existing object, and with a separate monitor, will fail. I suppose such code should be extremely rare because I can't imagine any non-stupid reason to do that. But perhaps the caching optimization could be optional, enabled by a runtime switch. In the worst case, just create a completely new API, similar to Array.newInstance() but with the caching, and let tuned code (or cleaner languages-for-the-JVM) invoke it. A portable library could use your code in JRE < 7, or just delegate to the new AI in JRE >= 7. Everyone's happy.
create empty array on class loaded
by malenkov - 2009-12-08 08:47
It is a good idea. | https://weblogs.java.net/blog/malenkov/archive/2009/12/07/generified-and-cached-empty-arrays | CC-MAIN-2015-18 | refinedweb | 1,740 | 65.62 |
Creating Custom Task Panes in the 2007 Office System
Summary:
The ICTPFactory interface exposes the CreateCTP method. The syntax for this method is:
Creating a custom task pane is straightforward. First, you create a custom ActiveX control project with a Windows Form and other ActiveX controls. The Windows Form and controls represent the interface your users see.
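The CreateCTP syntax mentioned above is missing from this excerpt. From memory — so treat the exact shape as an assumption and verify against the Office 2007 primary interop assembly documentation — the method takes the ProgID of the ActiveX control, the task pane title and an optional parent window:

```csharp
// Assumed signature and usage; ctpFactory is the Office.ICTPFactory
// instance handed to the add-in by the host application.
CustomTaskPane pane = ctpFactory.CreateCTP(
    "SampleActiveX.UserControl1",   // ProgID of the ActiveX control
    "Sample Task Pane");            // title displayed to the user
pane.Visible = true;
```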
Creating Custom Task Panes
Creating the Windows Control Library Project
Using the steps in this section, create the ActiveX control to insert into the custom task pane.
To create a Windows control library project.
Adding Windows Controls to the Custom Control
In this section, add controls to the control that you created in the previous section.
To add controls to the form designer
Table 2. Windows Forms controls for the custom control
Click the text box, and then, in the Property window, set the Multiline property to True.
Adding Code to the Project
Next, add code that implements the event handlers, to give the controls functionality.
To implement the button click event handler and initialize the UserControl1 class
If the code window is not already displayed, in Solution Explorer, right-click UserControl1.cs and click View Code.
Locate the namespace SampleActiveX statement, and then add the following lines above it:
These statements create aliases for the namespace and import types defined in other namespaces.
After the open brace, just below the namespace SampleActiveX statement, add the following code:
Replace the public UserControl1() statement with the following code:
Next, this code initializes the control and sets up the event handler for the button.
Below the closing brace of the public myControl routine, add the event handler for the button:
Just below the procedure that you added in the previous step, add the following code:
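The article's listings for these steps are not reproduced in this excerpt. A rough sketch of what the control's code-behind might contain is shown below — every name beyond UserControl1 and the SampleActiveX namespace is assumed, so this is an illustration rather than the original MSDN code:

```csharp
// Sketch only: a UserControl with a multiline TextBox and a Button
// whose click handler raises an event the add-in can subscribe to.
using System;
using System.Windows.Forms;

namespace SampleActiveX
{
    public partial class UserControl1 : UserControl
    {
        // Raised when the user clicks the button.
        public event EventHandler InsertTextClicked;

        public UserControl1()
        {
            InitializeComponent();
            myButton.Click += myButton_Click;   // wire up the handler
        }

        // Exposes the text box contents to the hosting add-in.
        public string UserText
        {
            get { return myTextBox.Text; }
        }

        private void myButton_Click(object sender, EventArgs e)
        {
            var handler = InsertTextClicked;
            if (handler != null)
                handler(this, EventArgs.Empty);
        }
    }
}
```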
Creating Shared Add-ins for Custom Task Panes
Adding Functionality to Add-Ins.
To add a reference to the Microsoft Office 12 Object Library and to the primary interop assemblies in Word 2007.
Just below the public class Connect() statement, add the following code to create an object that points to the UserControl1 class, and to create a variable for the primary interop assemblies in Office Word 2007:
In the OnConnection procedure, replace the statements with this code:
The MessageBox statement assures you that the add-in loads successfully in Office Word 2007.
In the OnBeginShutDown procedure, add the following code:
Add the following code, which inserts the text into the document when the user clicks the button:
To build and test the add-in
On the File menu, choose Save All to save the project.
On the Build menu, click Build Solution.
If you are able to compile the project with no build errors, the code automatically adds the add-in to the registry at this location:. | https://msdn.microsoft.com/en-us/library/aa338197.aspx | CC-MAIN-2015-22 | refinedweb | 458 | 59.13 |
Victor Katsande
What am I possibly getting wrong? Help please
here is the body of the question
I want you to try and reproduce it yourself.
First, import the random library.
Then create a function named random_item that takes a single argument, an iterable. Then use random.randint() to get a random number between 0 and the length of the iterable, minus one. Return the iterable member that's at your random number's index.
Check the file for an example.
import random

def random_item(iterable)
    random_number = random.randint(0, len(list(iterable))-1)
    return iterable[random_number]
Steven Parker
Explicit code answers are strongly discouraged by Treehouse
...and comments cannot be upvoted.
Sajan bedi
Try this. You don't need to use the list function. Don't forget to upvote and mark it as resolved.
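For reference, one working version looks like this. The exercise's "iterables" are really sequences, so len() and indexing are safe and the list() call is unnecessary; the function name and argument match the question, everything else here is mine:

```python
import random

def random_item(iterable):
    # randint's upper bound is inclusive, hence the minus one.
    random_number = random.randint(0, len(iterable) - 1)
    return iterable[random_number]

# Always returns a member of the sequence.
print(random_item("treehouse") in "treehouse")  # True
```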
PHP Class Foundation Wanted - Am I in Luck?
I come from a desktop-programming background and am trying to get into web-development. I am probably being a huge noob in many ways, so please bear with me.
It might be that background talking, but I have been a little disappointed (and overwhelmed) regarding looking for a PHP foundation to build my applications on top of.
I have been looking around at various frameworks and so far I haven't found what I am looking for. I am looking less for a system that is largely pre-made and lets you customize, and more for a library full of handy-dandy tools that I can piece together in various ways.
Short version: I am hoping for a well-established set of classes/interfaces in the form of .php source files that are all stored in a directory (or two directories).
Longer Version: Call me spoiled, but one thing I liked about Java / Visual Studio is the large and well-established class structure. If I wanted to use a particular function, I include a particular class (or add a namespace, depending on the circumstances) I just include the file in my code. Or it was also convenient when making a class to be able to extend an existing class or implement an existing interface so that it can have maximum usability with the other items in the collection.
The Plan: I want to have a shared class directory (above web-accessible folders) that I pull components from as desired. I am hoping to maximize code-reuse in this fashion and keep things neat and tidy.
I have recently looked into PEAR, and it seems to be largely what I am looking for, except it is clearly not designed with my plan in mind. As of right now it looks like I would have to retrofit each one to work like so (I am probably being a noob).
I am probably making no sense at all, but maybe I ultimately will make more sense after answering questions form a confounded community.
Thanks in advance.
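For what it's worth, the "shared class directory above the web root" plan needs very little machinery in PHP - register an autoloader once per entry script and classes are pulled in on demand. The path and the one-class-per-file naming convention below are my assumptions, not anything PEAR-specific:

```php
<?php
// bootstrap.php - included by every web-accessible script.
define('SHARED_LIB', '/home/site/lib');

spl_autoload_register(function ($class) {
    // Map Foo_Bar or Foo\Bar to /home/site/lib/Foo/Bar.php
    $path = str_replace(array('_', '\\'), '/', $class);
    $file = SHARED_LIB . '/' . $path . '.php';
    if (is_file($file)) {
        require $file;
    }
});

// Any script can now just use the shared classes:
// require 'bootstrap.php';
// $gallery = new Gallery();   // loads /home/site/lib/Gallery.php
```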
Symfony components or the Zend library.
But you don't need to pick PHP, and based on your background, it may not even be the best option for you. Both Java and .NET are good options too. A few additional factors to consider in that decision are: How many jobs are available; how big is the community; and what kind of workplace do you prefer.
+1
for Symfony2. This is based on beautiful code, it has a pretty darn good ORM and excellent extensibility using bundles.
Pear and Zend are more libraries and less frameworks, although many people will sell you on the idea that Zend is a framework so it is more a matter of my opinion on this one.
Good luck in deciding.
Thank you guys for your help. Your patience is appreciated.
My initial reaction is one of disappointment so far with Zend and Symfony. I am getting more the impression that what I want does not exist as of yet.
Both of these are far more weighty than I was looking for. Now, as far as I can tell they look like decent enough frameworks. I just found myself trying to find their source include directory so I can rip it out and set it aside.
I am playing around with Symfony (since it seems to come highly recommended) to give it an honest shot to change my mind.
Source of my pain? - I suppose one thing that is causing me grief is that the project I am working on is for a school with some difficult security settings. This means I have no telnet access, all PHP database commands are disabled (so no databases for me), and no FTP access (I am supposed to e-mail the site to their web master who then loads it into directories). I want to make my own projects, but I don't want to have a collection of different systems for different circumstances if I can help it, I at least want to be operating under the same basic set of paradigms. So I am looking for a decent foundation that can be used under such draconian security conditions with ease, and not finding anything.
I have a vision of a framework that I would like to exist, but would be a truly massive undertaking that seems horribly not worth it if it is just for me. Particularly because it feels so much like reinventing the wheel.
Once again, I think this ultimately stems from my noobish status and having unreasonable expectations.
Having just replied on your other thread about Galleries, and now having read of your source of pain -- I wouldn't hesitate in using the idea of "aggregated content" which is a term I think I may have overheard or indeed may have made up myself.
This is where very few simple PHP scripts aggregate and display content from other websites.
I decided to push this idea for a local sporting club event once and we used:
Flickr for all the images.
Posterous for all of the article content (and images)
Tumblr for news headlines
Twitter for updates and headlines
So that way your "content authors" can email Posterous with a new article, someone with a Twitter account can be writing all your headlines and deeplinking into your content, mulitple photographers can be adding images - geocoded images can turn up on maps.
They sign up and create their own accounts, you don't have to worry about GUIs or supporting mobile -- all you have to do is orchestrate the content to appear in the right place.
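In that spirit, the glue code really can stay small. Below is a sketch of the aggregation step, written in Python for brevity rather than PHP; the feed shape is standard RSS 2.0, but the sample data and any URLs are invented for illustration.

```python
import xml.etree.ElementTree as ET

def headlines(feed_xml, limit=5):
    """Pull (title, link) pairs out of an RSS 2.0 feed document."""
    root = ET.fromstring(feed_xml)
    items = []
    for item in root.iter("item"):   # RSS items live under channel/item
        title = item.findtext("title", default="(untitled)").strip()
        link = item.findtext("link", default="#").strip()
        items.append((title, link))
    return items[:limit]

# On a live site you would fetch the document first, e.g.:
#   import urllib.request
#   feed_xml = urllib.request.urlopen("https://example.org/club/news.rss").read()
sample = """<rss version="2.0"><channel>
  <item><title>Match report</title><link>http://example.org/1</link></item>
  <item><title>New photos up</title><link>http://example.org/2</link></item>
</channel></rss>"""

print(headlines(sample))
```

From there, "orchestrating the content" is just dropping those tuples into whatever template renders the page.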
I have found all PHP ORM offerings to be lacking in some way or another. Either they rely on the ActiveRecord methodology, with minimal mapping on top for automation, or they rely on the business entities having looser scopes than ideal on their properties. I like my business entities encapsulated. I find CQRS with Event Sourcing a much better solution than an ORM. It only takes a few simple classes to implement, eliminates the need for mapping code, and your entities can have private members.
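To make the "few simple classes" claim concrete, here is a rough sketch of an event-sourced entity (in Python for brevity; the Account entity, its event names, and its fields are all invented for illustration). State stays private and changes only by applying events, which can later be replayed to rebuild the entity:

```python
class Event:
    """A plain, immutable record of something that happened."""
    def __init__(self, name, **data):
        self.name = name
        self.data = data

class Account:
    """Entity whose state is mutated only by applying events."""
    def __init__(self):
        self.__balance = 0   # name-mangled: effectively private
        self.__log = []      # the event stream for this entity

    def deposit(self, amount):
        self._apply(Event("deposited", amount=amount))

    def withdraw(self, amount):
        if amount > self.__balance:
            raise ValueError("insufficient funds")
        self._apply(Event("withdrew", amount=amount))

    def _apply(self, event):
        if event.name == "deposited":
            self.__balance += event.data["amount"]
        elif event.name == "withdrew":
            self.__balance -= event.data["amount"]
        self.__log.append(event)

    @property
    def balance(self):
        return self.__balance

    def events(self):
        return list(self.__log)   # hand this to your persistence layer

    @classmethod
    def replay(cls, events):
        """Rebuild an entity from its stored event stream."""
        fresh = cls()
        for e in events:
            fresh._apply(e)
        return fresh

acct = Account()
acct.deposit(100)
acct.withdraw(30)
print(acct.balance)   # 70
```

Persisting the event stream instead of the current state is what removes the need for ORM-style mapping code.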
Hey,
even though I’m setting a random seed for both numpy and tensorflow as described in your post, I’m unable to reproduce the training of a model consisting of LSTM layers (+ 1 Dense at the end).
Can you tell me if this is simply by the nature of LSTMs or if there is something else I can look into?
Thanks.
No, I believe LSTM results are reproducible if the seed is tied down.
Are you using a tensorflow backend?
Is Keras, TF and your scipy stack up to date?
I’m using the tensorflow backend and yes, everything is up to date. Freshly installed on Arch Linux at home. I stumbled upon the problem at work and want to get it fixed.
I tried the imdb_lstm example of keras with fixed random seeds for numpy and tensorflow just as you described, using one model only which was saved after compiling but before training.
I’m loading this model and training it again with, sadly, different results.
I can see though that through the start of both trainings the accuracies and losses are the same until sample 1440 of 25000 where they begin to differ slightly and stay so until the end, e.g. 0.7468 vs. 0.7482 which I wouldn’t see critical. At the end then the validation accuracy though differs by 0.05 which should not be normal, in this case 0.8056 vs. 0.7496.
I’m only training 1 epoch for this example.
I was also pointed to the fact that LSTMs are deterministic: their equations don’t have any random part, so results should be reproducible.
I guess I will ask the guys of keras about this as it seems to be a deeper issue to me.
If you have another idea, let me know. Great site btw, I often stumbled upon your blog back when I began learning machine learning 🙂
Thanks for you help 🙂
Try an MLP in the same setup and see if it is the problem or data handling (e.g. embedding) introducing the random vars.
Hi DR jason,
I encountered the same problem as Marcel, and after doing research I found out that the problem is in the TensorFlow backend. I switched to the Theano backend, and I get reproducible results.
Excellent. Thanks for sharing.
I have read your responses and there seems to be a problem with the Tensorflow backend. Is there a solution for that because I need to use the tensorflow backend only and not the theano backend. I also have a single LSTM layer network and running on a server with 56 CPU cores and CentOS. Pls help !!!
Did you try the methods for tensorflow listed in this post? Did they work for you?
According to the solutions you gave:
I am running the program on the same machine, so environment is being repeated.
I am also seeding the random number generator for numpy and tensorflow as you have shown in your post.
I am running my program on a server but using CPU only, no GPU.
I am using only Keras and numpy, so explicitly no other third-party software.
My model isn’t that sophisticated either. Just a single layer LSTM and a softmax layer binary classification.
Pls help !! I am stuck !!
All of my best ideas are in the post above.
Double check your experiments.
Repeat of prior posts; I can’t edit it, and HTML trashed it:
Some experiences getting Theano to reproduce, which may
be of value to others.
My setup was Win7 64 bit, Keras 2.0.8,
Theano 0.10.0beta2.dev-c3c477df9439fa466eb50335601d5c854491def8,
Most of the effort was using my GPU, a GEForce 1060,
but I checked at the end that everything worked with
the CPU as well. What I was trying to do was make the
“Nietzsche” LSTM example reproduce exactly. The source
for that is at
General remark: It is harder than it looks to get reproducibility,
but it does work. It’s harder because there are so many little
ways that some nondeterminism can sneak in on you, and because
there is no decent documentation for anything, and the info on
the web is copious, but disorganized, and frequently out of
date or just plain wrong. In the end there was no way to find
the last bug except the laborious process of repeatedly modifying
the code and adding print statements of critical state data to
find the place the divergence began. Crude, but after you’ve
tried to be smart about it and failed enough times, the only way.
Lessons learned:
(1)
It is indeed necessary to create a .theanorc (if it isn’t already
there) and add certain lines to it. That file on windows is found at
C:\Users\yourloginname\.theanorc
If you try to create it in Windows Explorer, Windows will block
you because it doesn't think ".theanorc" is a complete file name
(it's only a suffix). Create it as .theanorc.txt and then rename it
using a command shell.
It is said you need to add these lines to your .theanorc file:
[dnn.conv]
algo_bwd_data=deterministic
algo_bwd_filter=deterministic
optimizer_excluding=conv_dnn
It turned out I didn’t need the last one and I commented it out.
I imagine it is needed if you are using conv nets; I wasn’t.
(2)
You need to seed the random number generator FIRST THING:
import numpy as np
np.random.seed(any-constant-number)
You need to do this absolutely as early as possible.
You can’t do it before a “from future…” import but
do it right after that, in your __main__ i.e. the .py
file you run from the IDE or command line.
Don’t get clever about this and put it in your favorite
utility file and then import that early. I tried and it
wasn’t worth it; there are so many ways something can
get imported and mess with the RNG.
(3)
This was my last bug, and naturally therefore the dumbest:
There are two random number generators (at least) floating
around the Python world, numpy.random and Python’s native
random.py. The code posted at the URL above uses BOTH of
them: now one, now the other. Francois, I love you man, but…
I seeded one, but never noticed that there was another.
The unseeded one naturally caused the program to diverge.
Make sure you’re using one of them only, throughout.
If you are unsure what all your libraries might be doing,
I suppose you could just seed them both (haven’t tried).
(4)
Prepare for the siege: cut your program down before you begin.
Use a tiny dataset, fewer iterations, do whatever you can do
to reduce the runtime. You don’t need to run it for 10 minutes
to see that it’s going to diverge. When it works you can run
it once with the full original settings. Well, okay, twice,
to prove the point.
(5)
GPU and CPU give different results. This is natural.
Each is reproducible, but they are different.
The .theanorc settings and code changes (pinning RNG’s)
to get reproducibility are the same.
(6)
During training, Theano produces lines like these on the console:
76288/200287 [==========>..................] - ETA: 312s - loss: 2.2663
These will not reproduce exactly. This is not a problem; it
is just a real-time progress report, and the point at which
it decides to report can vary slightly if the run goes a little
faster or slower, but the run is not diverging. Often it does
report at the same number (here: 76288) and when that happens,
the reported loss will be the same. The ETA values (estimated time
remaining to finish the epoch) will always vary a little.
(7)
How exact is exact? If you’ve got it right, it’s exact
to the last bloody digit. Close may be good enough for
your purposes, but true reproducibility is exact. If the
final loss value looks pretty close, but doesn’t match exactly,
you have not reproduced the run. If you go back and look at
loss values in the middle of the run, you are apt to find
they were all over the place, and you just got lucky that
the final values ended up close.
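Point (3) above can be demonstrated with nothing but the standard library. Seeding one generator does nothing for an independent one; in this sketch a second random.Random instance stands in for the other library's RNG (this is illustrative, not Keras code):

```python
import random

random.seed(42)                    # seeds the MODULE-level generator only
first_run = [random.randint(0, 99) for _ in range(3)]

other = random.Random()            # an independent generator, NOT affected
                                   # by random.seed() above; numpy.random is
                                   # independent in exactly this way
unseeded = [other.randint(0, 99) for _ in range(3)]

random.seed(42)                    # reseed and repeat the draws
second_run = [random.randint(0, 99) for _ in range(3)]

print(first_run == second_run)     # True: the seeded generator reproduces
```

The same reasoning applies to numpy.random versus Python's built-in random: seed every generator you actually use.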
Really nice write-up Jim, thank you so much for sharing!
I forgot one crucial trick. The best thing to monitor, to see
if it is diverging, is the sequence of loss values during training.
Using the console output that model.train() normally produces has
the issues I mentioned above under point 6. Using LossHistory
callback is way better. See “Example: recording loss history”
at
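For readers without that page handy, the callback Jim mentions is only a few lines. Here is a standalone sketch; in real code it would subclass keras.callbacks.Callback, and the method names below mirror that interface (the loss values are made up, and the manual driving loop only stands in for what model.fit() would do):

```python
class LossHistory:
    """Collects the loss reported after every training batch."""
    def on_train_begin(self, logs=None):
        self.losses = []

    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        self.losses.append(logs.get("loss"))

# Keras would invoke the callback itself, roughly:
#   history = LossHistory()
#   model.fit(X, y, callbacks=[history])
# Here we drive it by hand to show the bookkeeping:
history = LossHistory()
history.on_train_begin()
for i, loss in enumerate([2.30, 1.95, 1.72]):
    history.on_batch_end(i, logs={"loss": loss})
print(history.losses)
```

Comparing two runs' losses lists element by element pinpoints the exact batch where divergence begins.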
Jason, thanks for this blog page, I don’t know how much I have
added to it, but without it I wouldn’t have succeeded.
/jim
Thanks Jim, I really love to hear about your experiments and their results.
Maybe I should start a little community forum for us “boots on the ground practitioners” 🙂
I’ve seen it written that requiring deterministic execution will slow down execution by as much as two times.
When I timed the LSTM setup described above, on GPU, the difference was negligible: 0.07%, or 5 seconds on 6,756.
It may depend on what kind of net you are running, but my example above was unaffected.
That makes it acceptable to use deterministic execution by default. (Just remember to re-fiddle the random number generator seed if you actually want a number of different runs, eg to average metrics.)
/jim
Interesting.
Generally, I think the only place that fixing the random seed has is in debugging code. We should not be fixing the random seed when developing predictive models.
… and I meant to say I was using Python 2.7.13
Hi Jason,
Thank you for this helpful tutorial, but I still have a question!
If we are building a model and we want to test the effect of some changes (changing the input vector, some activation function, the optimiser, etc.) and we want to know if they are really enhancing the model, do you think it makes sense to mix the two ways you mentioned?
In other words, we repeat the execution n times (n ≥ 30), every time generating a random integer as a seed and saving its value.
At the end, we calculate the average accuracy, recover the seed value that generated the score closest to this average, and use this seed value for our next experiments.
Do you agree that in this scenario we get the “most representative” random values, which could be usable and reliable in the tuning phase?
No, I would recommend multiple runs with different random numbers. The idea is to control for the stochastic nature of the algorithm, you need different randomness for this.
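A sketch of that recommendation: run the experiment n times with different seeds and report the mean and spread, rather than hunting for a single "representative" seed. The dummy evaluate() below is a stand-in for a real train-and-score step; its numbers are invented.

```python
import random
import statistics

def evaluate(seed):
    """Stand-in for training and scoring a model with the given seed."""
    rng = random.Random(seed)
    return 0.75 + rng.uniform(-0.05, 0.05)   # accuracy with run-to-run noise

scores = [evaluate(seed) for seed in range(30)]   # 30 repeats, 30 seeds
mean = statistics.mean(scores)
stdev = statistics.stdev(scores)
print(f"accuracy: {mean:.3f} +/- {stdev:.3f} over {len(scores)} runs")
```

Reporting mean plus spread characterizes the model configuration itself, not one lucky draw of the random number generator.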
Hi Jason,
I think if we want to get the best model from repeating the execution n times (n ≥ 30), we need to take the highest accuracy rather than the average accuracy. Am I right?
Finalizing a model is different from evaluating it. You can learn more here:
This one helped me solve the problem with TF as the backend for Keras.
Great, thanks for sharing. Looks new.
I’m trying to reproduce results on an NVIDIA P100 GPU with Keras and Tensorflow as backend.
I’m using CUDA 8.0 with cuDNN. I wanted to know if there is a way to get reproducible results in this setting. I’ve added all the seeds mentioned in the code shown above, but I still get around 1-2% difference in accuracy every time I run the code on the same dataset.
import os
os.environ['PYTHONHASHSEED'] = '0'
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
os.environ["TF_CUDNN_USE_AUTOTUNE"] = "0"
from numpy.random import seed
import random
random.seed(1)
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)
worked for me. I never got the GPU to produce exactly reproducible results. I guess it’s because it is comparing values in different order and then rounding gets in the way. With the CPU this works like a charm.
Nice, thanks for sharing.
sess = tf.Session(config=tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)) also helps if tensorflow is used as a backend to Keras.
I think there is a typo.
Where you say “This misunderstanding may also come in the **for** of questions like”
I think you mean “This misunderstanding may also come in the **form** of questions like”
Thanks, fixed.
Hello sir,
what about Keras using CNTK? How do we fix this problem there?
I have not used CNTK, sorry. | https://machinelearningmastery.com/reproducible-results-neural-networks-keras/ | CC-MAIN-2018-34 | refinedweb | 2,103 | 73.68 |
By now you are chomping at the bit, eager to gallop into XML coding of your own. Let's take a look at how to set up your own XML authoring and processing environment.
The most important item in your XML toolbox is the XML editor. This program lets you read and compose XML, and often comes with services to prevent mistakes and clarify the view of your document. There is a wide spectrum of quality and expense in editors, which makes choosing one that's right for you a little tricky. In this section, I'll take you on a tour of different kinds.
Even the lowliest plain-text editor is sufficient to work with XML. You can use TextEdit on the Mac, NotePad or WordPad on Windows, or vi on Unix. The only limitation is whether it supports the character set used by the document. In most cases, it will be UTF-8. Some of these text editors support an XML "mode" which can highlight markup and assist in inserting tags. Some popular free editors include vim, elvis, and, my personal favorite, emacs.
emacs is a powerful text editor with macros and scripted functions. Lennart Stafflin has written an XML plug-in for it called psgml, available at. It adds menus and commands for inserting tags and showing information about a DTD. It even comes with an XML parser that can detect structural mistakes while you're editing a document. Using psgml and a feature called "font-lock," you can set up xemacs, an X Window version of emacs, to highlight markup in color. Figure 1-5 is a snapshot of xemacs with an XML document open.
Morphon Technologies' XMLEditor is a fine example of a graphical user interface. As you can see in Figure 1-6, the window sports several panes. On the left is an outline view of the book, in which you can quickly zoom in on a particular element, open it, collapse it, and move it around. On the right is a view of the text without markup. And below these panes is an attribute editing pane. The layout is easy to customize and easy to use. Note the formatting in the text view, achieved by applying a CSS stylesheet to the document. Morphon's editor sells for $150 and you can download a 30-day demo at. It's written in Java, so it supports all computer platforms.
Arbortext's Epic Editor is a very polished editor that can be integrated with digital asset management systems and high-end compositing systems. A screenshot is shown in Figure 1-7. Like Morphon's editor, it uses CSS to format the text displayed. There are add-ons to extend functionality such as multiple author collaboration, importing from and exporting to Microsoft Word, formatting for print using a highly detailed stylesheet language called FOSI, and a powerful scripting language. The quality of output using FOSI is good enough for printing books, and you can view how it will look on screen. At around $700 a license, you pay more, but you get your money's worth with Epic.
These are just a few of the many XML editors available. Table 1-1 lists a few more, along with their features and prices.
The features of structure and validity checking can be taken too far. All XML editors will warn you when there are structural errors or improper element placement (validity errors). A few, like Corel's XMetal, prevent you from even temporarily making the document invalid. A user who is cutting and pasting sections around may temporarily have to break the validity rules. The editor rejects this, forcing the user to stop and figure out what is going wrong. It's rather awkward to have your creativity interrupted that way. When choosing an editor, you'll have to weigh the benefits of enforced structure against the interruptions in the creative process.
A high-quality XML authoring environment is configurable. If you have designed a document type, you should be able to customize the editor to enforce the structure, check validity, and present a selection of valid elements to choose from. You should be able to create macros to automate frequent editing steps and map keys on the keyboard to these macros. The interface should be ergonomic and convenient, providing keyboard shortcuts instead of many mouse clicks for every task. The authoring tool should let you define your own display properties, whether you prefer large type with colors or small type with tags displayed.
Configurability is sometimes at odds with another important feature: ease of maintenance. Having an editor that formats content nicely (for example, making titles large and bold) means that someone must write and maintain a stylesheet. Some editors have a reasonably good stylesheet-editing interface that lets you play around with element styles almost as easily as creating a template in a word processor. Structure enforcement can be another headache, since you may have to create a document type definition (DTD) from scratch. Like a stylesheet, the DTD tells the editor how to handle elements and whether they are allowed in various contexts. You may decide that the extra work is worth it if it saves error-checking and complaints from users down the line.
Editors often come with interfaces for specific types of markup. XML Spy includes many such extensions. It will allow you to create and position graphics, write XSLT stylesheets, create electronic forms, create tables in a special table builder, and create XML Schema. Tables, found in XML applications like HTML and DocBook, are complex structures, and believe me when I tell you they are not fun to work with in markup. To have a specialized table editor is a godsend.
Another nicety many editors provide is automatic conversion to terminal formats. FrameMaker, Epic, and others, can all generate PDF from XML. This high-quality formatting is difficult to achieve, however, and you will spend a lot of time tweaking difficult stylesheets to get just the appearance you're looking for. There is a lot of variation among editors in how this is achieved. XML Spy uses XSLT, while Epic uses FOSI, and FrameMaker uses its own proprietary mapping tables. Generating HTML is generally a lot easier than PDF due to the lower standards for documents viewed on computer displays, so you will see more editors that can convert to HTML than to PDF.
Database integration is another feature to consider. In an environment where data comes from many sources, such as multiple authors in collaboration, or records from databases, an editor that can communicate with a database can be a big deal. Databases can be used as a repository for documents, giving the ability to log changes, mark ownership, and store older versions. Databases are also used to store raw data, such as personnel records and inventory, and you may need to import that information into a document, such as a catalog. Editors like Epic and XML Spy support database input and collaboration. They can update documents in many places when data sources have changed, and they can branch document source text into multiple simultaneous versions. There are many exciting possibilities.
Which editor you use will depend a lot on your budget. You can spend nothing and get a very decent editor like emacs. It doesn't have much of a graphical interface, and there is a learning curve, but it's worked quite well for me. Or, for just a couple hundred dollars, you can get a nice editor with a GUI and parsing ability like Morphon XMLEdit. You probably wouldn't need to spend more unless you're in a corporate environment where the needs for high-quality formatting and collaboration justify the cost and maintenance requirements. Then you might buy into a suite of high-end editing systems like Epic or FrameMaker. With XML, there is no shortage of choices.
If the ultimate purpose of your XML is to give someone something to look at, then you may be interested in checking out some document viewers. You've already seen examples of editors displaying XML documents. You can display XML in web browsers too. Of course, all web browsers support XHTML. But Internet Explorer can handle any well-formed XML.
Since version 5.0 on Macintosh and 5.1 on Windows, Internet Explorer has had the ability to read and display XML. It has a built-in validating XML parser. If you specify a DTD in the document, IE will check it for validity. If there are errors, it will tell you so and highlight the affected areas. Viewing a document in IE looks like Figure 1-8.
You may have noticed that the outline view in IE looks a lot like the outline view in Morphon XMLEdit. It works the same way. The whole document is the root of a tree, with branches for elements. Click on one of the minus icons and it will collapse the element, hiding all of its contents. The icon will become a plus symbol which you can click on to open up the element again. It's a very useful tool for navigating a document quickly.
For displaying formatted documents on computer monitors, the best technology is CSS. CSS has a rich set of style attributes for setting colors, typefaces, rules, and margins. How much of the CSS standard is implemented, however, varies considerably across browsers. There are three separate recommendations, with the first being quite widely implemented, the second and more advanced less so, and the third rarely.
IE also contains an XSLT transformation engine. This gives yet another way to format an XML document. The XSLT script pointed to by your document transforms it into XHTML which IE already knows how to display. So you have two ways inside IE to generate decent presentation, making it an invaluable development tool. Since not all browsers implement CSS and XSLT for XML documents, it's risky to serve XML documents and expect end user clients to format them correctly. This may change soon, as more browsers catch up to IE, but at the moment it's safer to do the transformation on the server side and just serve HTML.
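As a taste of what such a script looks like, here is a minimal XSLT stylesheet (a generic sketch, not tied to any particular document type; the element names title and para are invented for illustration) that maps elements onto HTML:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- turn every title element into an HTML heading -->
  <xsl:template match="title">
    <h1><xsl:apply-templates/></h1>
  </xsl:template>
  <!-- and every para element into an HTML paragraph -->
  <xsl:template match="para">
    <p><xsl:apply-templates/></p>
  </xsl:template>
</xsl:stylesheet>
```

The processor walks the source tree, and wherever a template's match pattern fits, its body is emitted with the element's children transformed in place.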
There are a bunch of browsers capable of working with XML in full or limited fashion. The following list describes a few of the more popular and interesting ones. Some technologies, like DOM and CSS, are broken up into three levels representing the relative sophistication of features. Most browsers completely implement the first level of CSS (CSS1), a few CSS2, and hardly any completely support the third tier of CSS.
Amaya is a project by the W3C to demonstrate technologies working together. It's both a browser and an editor with built-in XML parsing and validating. It supports XHTML 1.1, CSS1 and parts of CSS2, MathML, and much of SVG. It is not able to format other kinds of XML, however.
Available for flavors of Unix, this browser has lovely HTML formatting, but only limited support for XML. Its standards set includes HTML 4, most of CSS1 and part of CSS2, DOM1, DOM2, and part of DOM3.
Microsoft has tried very hard to play ball with the open standards community and it shows with this browser. XML parsing is excellent, along with strong support for DTDs, XSLT, CSS1, SVG (with plug-in), and DOM.
Strangely, Internet Explorer is split into two completely different code bases, with versions for Windows and Macintosh independent from each other. This has led to wacky situations such as the Mac version being for a time more advanced than its Windows cousin. The best (and perhaps last) versions available are 6.0 on Windows and 5.1 for Macintosh.
Mozilla is an open source project to develop an excellent free browser that supports all the major standards. At Mozilla's heart is a rendering engine, code-named Gecko, that parses markup and churns out formatted pages. It's also the foundation for Netscape Navigator and IBM's Web Browser for OS/2.
How is it for compliance? Mozilla fully supports XML using James Clark's Expat parser, a free and high-quality tool. Other standards it implements include HTML 4, XHTML, CSS1, CSS2, SVG (with plug-in), XSLT, XLink, XPath, MathML (with plug-in), RDF, and Unicode. Some standards are only partially supported, such as XBase, XLink, and CSS3. For more information about XML in Mozilla, go to the Web at.
As of version 6, Navigator has been based on the Mozilla browser's internal workings. Therefore, it supports all the same standards.
Opera is a fast and efficient browser whose lead designer was one of the codevelopers of CSS1. Standard support varies with platform, the Windows version being strongest. It implements XML parsing, CSS up to level 2, WAP and WML (media for wireless devices such as cell phones), and Unicode. An exciting new joint venture with IBM has formed recently to develop a multimodal browser that will handle speech as well as keyboard/mouse control. Opera's weakest point is DOM, with only limited support.
This fascinating tool describes itself as "an open XML browser for exotic devices." It supports some standards no other browsers have touched yet, including SMIL, XForms, X3D (three-dimensional graphics) and XML Signature. Using a Java-based plug-in, it can parse XML and do transformations with XSLT.
You can see browsers vary considerably in their support for standards. CSS implementation is particularly spotty, as shown in Eric Meyer's Master Compatibility Chart at. People aren't taking the situation lying down, however. The Web Standards Project ( monitors browser vendors and advocates greater compliance with standards.
Things get really interesting when you mix together different XML applications in one document. Example 1-9 is a document that combines three applications in one: XHTML which forms the shell and handles basic text processing, SVG for a vector graphic, and MathML to include an equation at the end. Figure 1-9 shows how it looks in the Amaya browser.
<?xml version="1.0"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>T E L E G R A M</title>
    <!-- CSS stylesheet -->
    <style>
      body      { background-color: tan; font-family: sans-serif; }
      .telegram { border: thick solid wheat; }
      .message  { color: maroon; }
      .head     { color: blue; }
      .name     { font-weight: bold; color: green; }
      .villain  { font-weight: bold; color: red; }
    </style>
  </head>
  <body>
    <div class="telegram">
      <h1>Telegram</h1>
      <h2><span class="head">To:</span> Sarah Bellum</h2>
      <h2><span class="head">From:</span> Colonel Timeslip</h2>
      <h2><span class="head">Subj:</span> Robot-sitting instructions</h2>
      <!-- SVG Picture of Zonky -->
      <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
        <rect x="5" y="5" width="90" height="95" fill="none"
              stroke="black" stroke-
        <rect x="25" y="75" width="50" height="25" fill="gray"
              stroke="black" stroke-
        <rect x="30" y="70" width="40" height="30" fill="blue"
              stroke="black" stroke-
        <circle cx="50" cy="50" r="20" fill="blue"
                stroke="black" stroke-
        <circle cx="43" cy="50" r="5" fill="yellow" stroke="brown"/>
        <circle cx="57" cy="50" r="5" fill="yellow" stroke="brown"/>
        <text x="25%" y="25%" fill="purple" font-Zonky</text>
        <text x="40" y="85" fill="white" font-Z-1</text>
      </svg>
      <!-- Message -->
      <div class="message">
        <p>Thanks for watching my robot pal
        <span class="name">Zonky</span> while I'm away. He needs
        to be recharged <em>twice a day</em> and if he starts to
        get cranky, give him a quart of oil. I'll be back soon,
        after I've tracked down that evil mastermind
        <span class="villain">Dr. Indigo Riceway</span>.</p>
        <p>P.S. Your homework for this week is to prove Newton's
        theory of gravitation:
        <!-- MathML Equation -->
        <math xmlns="http://www.w3.org/1998/Math/MathML">
          <mrow>
            <mi>F</mi>
            <mo>=</mo>
            <mi>G</mi>
            <mo>&#x2062;</mo>
            <mfrac>
              <mrow>
                <mi>M</mi>
                <mo>&#x2062;</mo>
                <mi>m</mi>
              </mrow>
              <mrow>
                <msup>
                  <mi>r</mi>
                  <mn>2</mn>
                </msup>
              </mrow>
            </mfrac>
          </mrow>
        </math>
        </p>
      </div>
    </div>
  </body>
</html>
If you're going to be using XML from error-prone sources such as human authors, you will probably want to have a parser (preferably with good error reporting) in your XML toolkit. XML's strict rules protect programs from unpredictable input that can cause them to crash or produce strange results. So you need to make sure that your data is clean and syntactically correct. Before I talk about these tools, let me first explain how parsing works. With a little theoretical grounding, you'll be in good shape for understanding the need for parsers and knowing how to use them.
Every program that works with XML first has to parse it. Parsing is a process where XML text is collected and broken down into separate, manageable parts. As Figure 1-10 shows, there are several levels. At the lowest level are characters in the input stream. Certain characters are special, like <, >, and &. They tell the parser when it is reading a tag, character data, or some other markup symbol.
The next level of parsing occurs when the tags and symbols have been identified and now affect the internal checking mechanisms of the parser. The well-formedness rules now direct the parser in how to handle tokens. For example, an element start tag tells the parser to store the name of the element in a memory structure called a stack. When an element end tag comes along, it's checked against the name in the stack. If they match, the element is popped out of the stack and parsing resumes. Otherwise, something must have gone wrong, and the parser needs to report the error.
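The push/pop discipline just described is simple enough to sketch. This toy checker handles only bare start and end tags (no attributes, comments, entities, or CDATA, all of which a real parser must also cope with), but it shows how mismatches are caught:

```python
import re

# Toy tokenizer: recognizes only plain <name> and </name> tags.
TAG = re.compile(r"<(/?)([A-Za-z_][\w.-]*)\s*>")

def well_formed(doc):
    """Check element nesting with a stack; return (ok, message)."""
    stack = []
    for match in TAG.finditer(doc):
        closing, name = match.group(1), match.group(2)
        if not closing:
            stack.append(name)          # start tag: push the name
        elif not stack:
            return False, f"end tag </{name}> with no open element"
        elif stack[-1] != name:
            return False, f"tag mismatch: {stack[-1]} and {name}"
        else:
            stack.pop()                 # end tag matches: pop it off
    if stack:
        return False, f"missing end tag for <{stack[-1]}>"
    return True, "ok"

print(well_formed("<a><b>text</b></a>"))            # (True, 'ok')
print(well_formed("<e1>overlap <e2>bad</e1></e2>")) # mismatch reported
```

Overlapping elements fail exactly because the end tag popped off the stack is not the one most recently pushed.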
One kind of symbol, called an entity reference, is a placeholder for content from another source. It may be a single character, or it could be a huge file. The parser looks up the source and pops it into the document for parsing. If there are entity references inside that new piece of content, they have to be resolved too. XML can come from many sources, including files, databases, program output, and places online.
Above this level is the structure checking, also called validation. This is optional, and not all parsers can do it, but it is a very useful capability. Say, for example, you are writing a book in XML and you want to make sure that every section has a title. If the DTD requires a title element at the beginning of the section element, then the parser will expect to find it in the document. If a <section> start tag is followed by something other than a <title> start tag, the parser knows something is wrong and will report it.
Parsers are often used in conjunction with some other processing, feeding a stream of tokens and representative data objects to be further manipulated. At the moment, however, we're interested in parsing tools that check syntax in a document. Instead of passing on digested XML to another program, standalone parsers, also called well-formedness checkers, tell you when markup is good or bad, and usually give you hints about what went wrong. Let's look at an example. In Example 1-10 I've written a test document with a bunch of syntax errors, guaranteed to annoy any XML parser.
<!-- This document is not well-formed
     and will invoke an error condition
     from an XML parser. -->
<testdoc>
<e1>overlapping <e2> elements </e1> here </e2>
<e3>missing end tag
<e4>illegal character (<) </e4>
</testdoc>
Any parser worth its salt should complain noisily about the errors in this example. You should expect to see a stream of error messages something like this:
$ xwf ex4_noparse.xml
ex4_noparse.xml:5: error: Opening and ending tag mismatch: e2 and e1
<e1>overlapping <e2> elements </e1> here </e2>
                                  ^
ex4_noparse.xml:5: error: Opening and ending tag mismatch: e1 and e2
<e1>overlapping <e2> elements </e1> here </e2>
                                             ^
ex4_noparse.xml:7: error: xmlParseStartTag: invalid element name
<e4>illegal character (<) </e4>
                        ^
ex4_noparse.xml:8: error: Opening and ending tag mismatch: e3 and testdoc
</testdoc>
          ^
The tool helpfully points out all the places where it thinks the XML is broken and needs to be fixed, along with a short message indicating what's wrong. "Ending tag mismatch," for example, means that the end tag in question doesn't match the most recently found element start tag, which is a violation. Many parsing tools will also validate if you supply a DTD for them to check against.
Where can you get a tool like this? The easiest way is to get an XML-aware web browser and open up an XML file with it. It will point out well-formedness errors for you (though often just one at a time) and can also validate against a DTD. Figure 1-11 shows the result of trying to load a badly formed XML document in Mozilla.
If you prefer a command-line tool, like I do, you can find one online. James Clark's nsgmls at, originally written for SGML, has served me well for many years.
If you are a developer, it is not hard to use XML parsers in your code. Example 1-11 is a Perl script that uses the XML::LibXML module to create a handy command-line validation tool.
#!/usr/bin/perl use XML::LibXML; # import parser library my $parser = new XML::LibXML; # create a parser object $parser->validation(1); # turn on validation $parser->load_ext_dtd(1); # read the external DTD my $doc = $parser->parse_file( shift @ARGV ); # parse the file if( $@ ) { # test for errors print STDERR "PARSE ERRORS\n", $@; } else { print "The document '$file' is valid.\n"; }
XML::LibXML is an interface to a C library called libxml2, where the real parsing work is done. There are XML libraries for Java, C, Python, and almost any other programming language you can think of.
The parser in the this example goes beyond just well-formedness checking. It's called a validating parser because it checks the grammar (types and order of elements) of the document to make sure it is a valid instance of a document type. For example, I could have it check a document to make sure it conforms to the DTD for XHTML 1.0. To do this, I need to place a line in the XML document that looks like this:
<!DOCTYPE html PUBLIC "..." "...">
This tells the parser that it needs to find a DTD, read its declarations, and then be mindful of element types and usage as it parses the document. The location of the DTD is specified in two ways. The first is the public identifier, inside the first set of quotes above. This is like saying, "go and read the Magna Carta," which is an unambiguous command, but still requires you to hunt around for a copy of it. In this case, the parser has to look up the actual location in a concordance called a catalog. Your program has to tell the parser where to find a catalog. The other way to specify a DTD is to give a system identifier. This is a plain ordinary URL or filesystem path.
The act of changing XML from one form to another is called transformation. This is a very powerful technique, employed in such processes as print formatting and conversion to HTML. Although most often used to add presentation to a document, it can be used to alter a document for special purposes. For example, you can use a transformation to generate a table of contents, construct an excerpt, or tabulate a column of numbers.
Transformation requires two things: the source document and a transformation stylesheet. The stylesheet is a recipe for how to "cook" the XML and arrive at a desired result. The oven in this metaphor is a transformation program that reads the stylesheet and input document and outputs a result document. Several languages have been developed for transformations. The Document Style Semantics and Specification Language (DSSSL) uses the programming language Lisp to describe transformations functionally. However, because DSSSL is rather complex and difficult to work with, a simpler language emerged: the very popular XSLT.
XSLT didn't start off as a general-purpose transformation language. A few years ago, there was a project in the W3C, led by James Clark, to develop a high-quality style description language. The Extensible Style Language (XSL) quickly evolved into two components. The first, XSLT, concentrated on transforming any XML instance into a presentational format. The other component of XSL, XSL-FO (the FO stands for Formatting Objects), describes that format.
It soon became obvious that XSLT was useful in a wider context than just formatting documents. It can be used to turn an XML document into just about any form you can imagine. The language is generic, using rules and templates to describe what to output for various element types. Mr. Clark has expressed surprise that XSLT is being used in so many other applications, but it is testament to the excellent design of this standard that it has worked so well.
XSLT is an application of XML. This means you can easily write XSLT in any non-validating XML editor. You cannot validate XSLT because there is no DTD for it. An XSLT script contains many elements that you define yourself, and DTDs do not provide that kind of flexibility. However, you could use an XML Schema. I usually just check well-formedness and rely on the XSLT processor to tell me when it thinks the grammar is wrong.
An XSLT processor is a program that takes an XML document and an XSLT stylesheet as input and outputs a transformed document. Thanks to the enthusiasm for XSLT, there are many implementations available. Microsoft's MSXML system is probably the most commonly used. Saxon () is the most compliant and advanced. Others include Apache's Xalan () and GNOME's libxslt ().
Besides using a programming library or command-line tool, you could also use a web browser for its built-in XSLT transformer. Simply add a line like this to the XML document to tell the browser to transform it:
<?xml-stylesheet type="text/xsl" href="/path/to/stylesheet"?>
Replace /path/to/stylesheet with a URL for the actual location of the XSLT stylesheet. This method is frequently used to transform more complex XML languages into simpler, presentational HTML.
Technology pundits have been predicting for a long time the coming of the paperless office. All data would be stored on computer, read on monitors and passed around through the network. But the truth is, people use paper now more than ever. For reading a long document, there is still no substitute for paper. Therefore, XML has had to embrace print.
Formatting for print begins with a transformation. Your XML document, which is marked up for describing structure and information, says nothing about the appearance. It needs to be converted into a presentational format that describes how things should look: typefaces, colors, positions on the page, and so on. There are many such formats, like TEX, PostScript and PDF. The trick is how to get from your XML to one of these formats.
You could, theoretically, write a transformation stylesheet to mutate your XML into PostScript, but this will give you nightmares and a head full of gray hairs. PostScript is UGLY. So are most other presentational formats like troff, RTF, and MIF. The fact that they are text (not binary) doesn't make them much easier to understand. Any transformation stylesheet you write will require such intimate knowledge of byzantine rules and obscure syntactic conventions that it will quickly lead to madness. Believe me, I've done it.
Fortunately, somebody has had the brilliant idea to develop a formatting language based on plain English, using XML for its structural markup. XSL-FO, the cousin of XSLT, uses the same terminology as CSS to describe typefaces, inline styles, blocks, margins, and all the concepts you need to create a nice-looking page. You can look at an XSL-FO document and easily see the details for how it will be rendered. You can edit it directly and it won't blow up in your face. Best of all, it works wonderfully with XSLT.
In the XSL process (see Figure 1-4), you give the XSLT transformer a stylesheet and an input document. It spits out an XSL-FO document that contains the data plus style information. A formatter takes the XSL-FO instance and renders that into a terminal format like PDF which you can print or view on a computer screen. Although you could edit the XSL-FO directly, it's unlikely you would want to do that. Much better would be to edit the original XML source or the XSLT stylesheet and treat the XSL-FO as a temporary intermediate file. In fact, some implementations won't even output the XSL-FO unless you request it.
My favorite XSL-FO formatter is called FOP (Formatting Object Processor) and it's a project of the prodigious Apache XML Project (). FOP is written in Java and comes bundled with a Java-based parser (Xerces) and a Java-based XSLT transformer (Xalan). The whole thing runs as a pipeline, very smooth and clean.
As an example, I wrote the XSLT script in Example 1-12. Unlike Example 1-6, which transforms its source into HTML, this transforms into XSL-FO. Notice the use of namespace qualifiers (the element name parts to the left of colons) to distinguish between XSLT instructions and XSL-FO style directives. The elements that start with xsl: are XSLT commands and elements that start with fo: are formatting object tags.
<?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet <xsl:template <fo:root> <fo:layout-master-set> <fo:simple-page-master <fo:region-body <fo:region-before <fo:region-after </fo:simple-page-master> </fo:layout-master-set> <fo:page-sequence <fo:flow <xsl:apply-templates/> </fo:flow> </fo:page-sequence> </fo:root> </xsl:template> <xsl:template <fo:block <xsl:text>TELEGRAM</xsl:text> </fo:block> <xsl:apply-templates/> </xsl:template> <xsl:template <fo:block <xsl:text>To: </xsl:text> <xsl:apply-templates/> </fo:block> </xsl:template> <xsl:template <fo:block <xsl:text>From: </xsl:text> <xsl:apply-templates/> </fo:block> </xsl:template> <xsl:template <fo:block <xsl:text>Subj: </xsl:text> <xsl:apply-templates/> </fo:block> </xsl:template> <xsl:template <fo:block <xsl:apply-templates/> </fo:block> </xsl:template> <xsl:template <fo:inline <xsl:apply-templates/> </fo:inline> </xsl:template> <xsl:template <fo:inline <xsl:apply-templates/> </fo:inline> </xsl:template> <xsl:template <fo:inline <xsl:apply-templates/> </fo:inline> </xsl:template> </xsl:stylesheet>
After running the telegram example through FOP with this stylesheet, Figure 1-12 is the result. FOP outputs PDF by default, but other formats will be available soon. There is work right now to add MIF and PostScript as formats.
When all else fails, you can write a program.
Parsers are the front line for any program that works with XML. There are several strategies available, depending on how you want to use the XML. The "push" technique, where data drives your program, is like a one-way tape drive. The parser reads the XML and calls on your program to handle each new item in the stream, hence the name stream processing. Though fast and efficient, stream processing is limited by the fact that the parser can't stop and go back to retrieve information from earlier in the stream. If you need to access information out of order, you have to save it in memory.
The "pull" technique allows the program to access parts of the document in any order. The parser typically reads in a document and stores it in a data structure. The structure resembles a tree, with the outermost element as the root, and its contents branching out to the innermost text which are like leaves. Tree processing, as we call it, gives you a long-lasting representation of the document's data and markup. It requires more memory and computation, but is often the most convenient way to accomplish a task.
Developers have come up with standard programming interfaces for each of these techniques. The Simple API for XML (SAX) specifies how a parser should interact with a program for stream processing. This allows programs to use interchangeable modules, greatly enhancing flexibility. It's possible to write drivers, programs that simulate parsers but get their input data from databases or non-XML formats, and know that any SAX-enabled program will be able to handle it. This is illustrated in Figure 1-13.
What SAX does for stream processing, the Document Object Model (DOM) does for tree processing. It describes a wide variety of accessor methods for objects containing parts of an XML document. With DOM, you can crawl over all the elements in a document in any order, rearrange them, add or subtract parts, and extract any data you want. Many web browsers have built-in support for DOM, allowing you to select and repackage information from a server using Java or JavaScript.
SOAP is a way for browsers to trade complex data packages with servers. Unlike HTML, which marks up data based on appearance, it describes its contents as data objects with types, names, and values, which is much more handy for computer processing.
Extracting data from deep inside a document is a common task for developers. DOM and SAX are often too complex for a simple query like this. XPath is a shorthand for locating a point inside an XML document. It is used in XPointers and also in places like XSLT and some DOM implementations to provide a quick way to move around a document. XPointer extends XPath to create a way to specify the location of a document anywhere on the Internet, extending the notion of URLs you know from the a element in HTML. | http://etutorials.org/Programming/Learning+xml/Chapter+1.+Introduction/1.4+How+Do+I+Get+Started/ | CC-MAIN-2017-30 | refinedweb | 5,655 | 63.49 |
Talk:OpenPisteMap
Discuss OpenPisteMap here
OpenPisteMap server inaccessible
Dear OpenPisteMap developers,
the server does not seem to be answering at the moment. What's up? Maintenance, or lack of funds?
Gpermant 09:52, 14 December 2011 (UTC)
- The openpistemap server is currently up, but is giving 404 errors for the background images. On the sidebar it says "OpenPisteMap is currently experiencing some problems after migration to a new server. Please be patient while we resolve these issues. ... Please see the Wiki for project status information."
- So it would be useful to have some "project status information" on this page to give people an idea if/when it's going to get back up to full working order. Does anyone know? Eric2 09:23, 19 January 2012 (UTC)
- The status is exactly as given on the side of the main OPM page - OpenPisteMap is currently experiencing some problems after migration to a new server. The I/O demands on the old server were causing problems for my paying customers, as well as an excessive bandwidth bill (largely due to a few abusive pieces of software that pull down huge swathes of the map at a time - sorry, but there is just no excuse for trying to pull all tiles for an entire country at all zoom levels for hours on end!) so the site had to be taken offline until a better solution could be found. OPM has now been moved to a newer, more powerful server, but not all the functionality has been restored yet. Steve Hill 14:41, 7 February 2012 (UTC)
Google Maps Showing Pistes and Lifts for South Lake Tahoe
If you look at South Lake Tahoe, CA on Google Maps you will see a large number of pistes and ski lifts rendered. As far as I know this is the only area where pistes and lifts are rendered on Google Maps so it must be a special project by a Google employee or something. Anyhow, it looks pretty nice so I thought I'd post it here to generate ideas for future Open Piste Map Rendering styles. --Ezekielf 20:16, 21 April 2009 (UTC)
transparent layer
Wouldn't it be a lighter job to render only piste features, and view them as a transparent layer over the existing OSM tiles? Piste features could then be left unrendered on OSM, making it lighter on its side as well... --PhilippeP 13:03, 10 March 2008 (UTC)
- Possibly, but that doesn't allow the existing features to be customised to make them more appropriate for a piste map. Also, contour lines probably won't work well as a transparent layer since they would obscure other features. Steve Hill 21:34, 10 March 2008 (UTC)
- What existing features? As a layer could be switched on/off, nothing would be obscured... And it wouldn't be necessary to also render Osmarender and Mapnik versions... --PhilippeP 08:29, 11 March 2008 (UTC)
- Certain features will probably want to be modified - for example, minor tracks probably want to be rendered in a lighter colour since they will often be obscured by a piste during the winter. Steve Hill 12:56, 11 March 2008 (UTC)
Contours tip
Contours duplicate text - you don't need to worry about that, if you're following my guide for the cycle map. If you want to render the contour names for the 10m and 100m shapefiles, just don't render it for the 100m shapefiles at all! All the heights are in the most detailed shapefile. Gravitystorm 13:36, 11 March 2008 (UTC)
- Thanks for the tip. I've actually solved it in a different way, which I hope will work out neater once I start importing lots of contour data - instead of generating lots of shapefiles (and 3 layers for each), I am loading the whole 10m dataset into PostGIS. I then define 3 layers (10m, 50m, and 100m) and select the appropriate contour intervals out of the database, excluding the overlapping contours from the higher frequency sets. This means that for n shapefiles I only need 3 layers rather than 3n layers, and don't need to adjust the XML whenever I add contours for more areas. I'll add information about this approach to your contours wiki page once I've got it all working. :)
openpistemap.org - IE6 browser troubles
Oh. I just realised that you have actually made some good progress with setting something up at I was suggesting you needed some kind of holding page, because I thought there was nothing there, but using firefox, I see a piste map!
It must be serving a wrong mimetype or something, because IE6 comes up with a download prompt and then doesn't know what to open the file with.
...and before you say it: I'm forced to use IE6 at work. I don't do it voluntarily (but over 50% of web-surfers are still on it). Dunno if IE7 copes better.
-- Harry Wood 18:47, 12 March 2008 (UTC)
- The page is served with the correct mime type (but IE, of course, doesn't support the standard mime type). I will be switching to a text/html mime type at some point anyway because OpenLayers actually uses some functionality that is disallowed in XHTML, but that will require some adjustments to the XHTML itself and I haven't had chance to do it yet. - Steve Hill 21:42, 12 March 2008 (UTC)
- I've now tweaked the page and it is being served as text/html. I have no way to test it in IE though as I have no access to any Windows machines. - Steve Hill
- Yep, that's fixed that problem. Still don't see the slippy map in IE6 though. It looks like the map div is rendering with zero height (I'm seeing a 2px black line along the top of the page). Is it JavaScript which is supposed to be setting the div's height? Not seeing any JavaScript error though. -- Harry Wood 09:58, 13 March 2008 (UTC)
- The div is positioned and sized by CSS (it is positioned absolutely with the top, bottom, left and right properties set appropriately). I don't have access to any Windows machines, so I can't test it in IE to see why it breaks - if someone else can submit a patch then I'll apply it though. (It's a really simple page... I'm not sure how IE can manage to get it so wrong). Steve Hill 10:29, 13 March 2008 (UTC)
- I took a look in a bit more detail. On OpenStreetMap we have
- window.onload = handleResize;
- window.onresize = handleResize;
- ...kicking off a resizing function. Openpistemap does not have this. Maybe IE6 depends upon this, while other browsers manage purely with CSS (although I think all browsers use that function when expanding out the search sidebar?) That's my guess. I'll look into it some more some time. I know it's not going to be easy for you to fix it without access to this shabby old browser :-)
- -- Harry Wood 11:03, 13 March 2008 (UTC)
- Looks possible, although I'd prefer to fix this with (possibly IE-specific) CSS if at all possible. Steve Hill 11:40, 13 March 2008 (UTC)
- IE6 should be spelled IESux ... I also have IE @work but I use FF anyway , and I can confirm the IEFix (A hair on my tongue) bug... In the meantime, maybe adding a message promoting another browser in the left pane ? --PhilippeP 12:22, 13 March 2008 (UTC)
- I've added an [if IE] section to the sidebar that explains the situation and points at this talk page.Steve Hill 13:09, 13 March 2008 (UTC)
- IE8 apparently works ok, so I have changed the conditional comment to only display a warning to people using earlier IE versions. Steve Hill 09:09, 17 February 2009 (UTC)
- So those of us stuck with old versions of IE, through application servers, a bureaucratic IT department, and locked-down firewalled access to the internet, just have to accept that we cannot see these maps, or wait until IE12 is launched so they maybe upgrade to IE8...? --Skippern 01:42, 18 August 2009 (UTC)
Adding piste:type
What about adding the types snowshoe and winter-walks? I think snowshoe is an activity available in many stations, and winter-walks are available in "Les Diablerets" (), and it's useful to know which ways are available in winter.
CU and thanks in advance Sarge 13:21, 27 December 2008 (UTC)
- I've moved this discussion to Talk:Proposed_features/Piste_Maps. - Steve Hill 09:58, 30 December 2008 (UTC)
Add cross country skiing
Please also add cross-country ski runs, as they are quite common here...
I didn't know of your project before... Would you suggest a different tagging? How about relations, as most ski runs use at least some part of a track?
Currently rendered Map
What part do you currently render? At which zoom levels? I'm sad to see that Austria is (mostly) not included. Could that be changed?
- The whole planet is rendered on-demand down to zoom level 17. If you look at a region of the map that hasn't been viewed before and the server is under heavy load it may not render quickly enough to display immediately - try looking again a few minutes later. Contours are only imported for selected regions due to the amount of disk space they would require for the whole planet - if you want contours to be added for a region then email me (webmaster@openpistemap.org). Steve Hill 08:57, 17 February 2009 (UTC)
piste:name?
Hi,
I'd like to suggest a tag like piste:name, which is the name of the piste only. The problem is that standard Mapnik does not render pistes, but renders the name of every road, regardless of the other tags. This would also help when the piste leads over tracks or roads, so that the piste does not need to get its own way. In general I'd suggest that OPM removes all piste: prefixes from tags it doesn't know, and tries to render them as if they were just the tag without the prefix. Doing this would make it clear to everyone who uses the data that the name or ref or whatever applies to the piste, which in most cases is only valid from December to March. --BearT 07:02, 10 February 2009 (UTC)
PS: Would it be possible to render some snow on forests? Skiing through green forests (even on the map) is not that nice. ;-)
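To make the proposal above concrete, a single way shared by a summer track and a winter piste could carry both sets of tags, with the piste: prefix disambiguating ownership (the values here are invented for illustration):

```
highway=track
name=Forstweg 7
piste:type=nordic
piste:name=Panorama Loop
piste:difficulty=easy
```

A renderer like OPM could then show piste:name in its winter style, while the standard Mapnik map keeps using name for the track.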
- I dislike fixing a Mapnik rendering problem by creating a new tag (I'd also dislike fixing Osmarender problems with a new tag ;). The other problem of having two different names for a road and a piste still exists. -- studerap, 16 February 2009
- I understand your concerns about fixing rendering bugs in the data, but still the name tag on a highway=track is neither clearly part of the track nor of the piste at the same location. And most often the piste has a different name than the track, if the track has any. So it's not a rendering bug; it's simply impossible to know whether the name tag applies to the track, to the piste, or to both, unless all piste features are prefixed accordingly. --BearT 22:12, 16 February 2009 (UTC)
- I'd support the idea of allowing namespacing in names for situations where the 2 names are in conflict (e.g. a piste and road that follow the same way but have different names could have both "piste:name" and "highway:name" tags). However, I know that some people are almost violently opposed to namespacing, so I'm sure it would create a lot of arguments.
- The other option is to use relations - i.e. a way could have both a "highway" relation and a "piste" relation at the same time - this seems like a neater solution, although more complex. In the long run I imagine that most tags will end up migrating into relations, leaving the ways themselves untagged; but I'm not sure the editors are there yet.
- I do dislike the way that the OSM Mapnik styles have a habit of rendering the name text for features that it doesn't know how to render, but that is a Mapnik style sheet bug and shouldn't be worked around by changing the tags. Steve Hill 09:06, 17 February 2009 (UTC)
- What I'm talking about is, in my case, no bug, since it's a highway=track with a name=*, which is perfectly valid and rendered as such. It's just that the name and the highway are on the same way but apply to different usages of the way. Of course this bug exists in other places and is very annoying, but that's not the topic here.
- I really do like the idea of relations for every data attached to a way, but as you said it is complex and hard to do (currently) in the editors. So I'd think of namespacing as a valid and easy solution at the moment. I'd also expect the namespaced tags to be relatively easy (automatically) transformable into relations. But I'd be glad to hear the arguments against namespacing, since I do not really see any at the moment. --BearT 23:13, 17 February 2009 (UTC)
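The automatic transformation mentioned above (namespaced tags into grouped, relation-like structures) is indeed mechanical. A minimal sketch in Python, with invented tag values:

```python
def split_namespaces(tags):
    """Group 'prefix:key' tags into per-namespace dicts; plain tags go under ''."""
    grouped = {}
    for key, value in tags.items():
        ns, sep, rest = key.partition(":")
        if sep:  # the key had a 'namespace:' prefix
            grouped.setdefault(ns, {})[rest] = value
        else:    # an un-namespaced tag
            grouped.setdefault("", {})[key] = value
    return grouped

way = {"highway": "track", "name": "Forest Road",
       "piste:type": "nordic", "piste:name": "Panorama Loop"}
# groups into {'': {'highway': ..., 'name': ...},
#              'piste': {'type': 'nordic', 'name': 'Panorama Loop'}}
print(split_namespaces(way))
```

Each namespace group could then become one relation carrying those members.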
- You can read one of the many arguments here: although I'm fast coming to the conclusion that namespaces on attributes are the wrong direction to go in - I'd prefer to switch to a tagging scheme that provides a properly identifiable context and use relations - see (I do still think that the tagging scheme we use on OSM is an almighty mess and needs to be overhauled) Steve Hill 21:24, 19 February 2009 (UTC)
Nordic pistes
Hi,
I have a question: I have problems finding nordic pistes in the OPM. I can see downhill pistes, but not nordic ones. Is that a problem on my side, or are no nordic pistes rendered on OPM at the moment? Maybe somebody can reference an example? Thanks! -- Mathias71 15:49, 24 March 2009
- I think I can answer that question by myself: I just realized that nordic pistes are listed under ToDo. So my next question is: are there any plans for when they will be included? -- Mathias71 14:00, 25 March 2009
Direction rendering
Hi, have you thought about rendering using arrows (like oneway street rendering in osm)? Most pistes are one-way (by the law of gravity, not accounting for the crazy people that climb up the hill), and some of the lifts are too. Norpan 09:51, 13 October 2009 (UTC)
Rendering separately the basemap and the pistes
Hi,
Lonvia [1] renders only an overlay for hiking paths and uses other people's maps as base maps (base from OSM, shading from H&B). Is it possible to separate the rendering so it can be reused on remote sites? You already render two maps, with and without contours. You could use the standard OSM Mapnik map as the base, your contour rendering as one overlay and the pistes as another.
[1]
--FrViPofm 15:36, 31 August 2010 (BST)
Update Interval
Hello, how often do you update the map? There are some pistes missing, which are mapped in OSM. --Efred 07:39, 13 April 2012 (BST) | http://wiki.openstreetmap.org/wiki/Talk:OpenPisteMap | crawl-003 | refinedweb | 2,585 | 67.79 |
One thing I am sure of: as long as programming remains something many people do, there will be debates about static type checking.
Update: To put this in perspective - LtU turned fifteen last month. Wow.
Update 2: Take the poll!
This is a topic I discussed with Edward Yang, more than a year ago, in the more specific context of "functional programming". He planned to do a blog post about it at the time, but I guess he hasn't got around to writing it yet, so someone here may be interested in, or amused by, the list I had prepared.
One thing that I find perplexing is that many of the questions in this list are not *formal* questions (in the sense mathematicians may have formal questions; although many open questions in mathematics are actually informally formulated because we don't know how to make them precise). Some other fields, some even very closely related to functional programming, have lists of rather formal open problems (TLCA list of open problems), but I often feel that they are relatively "small" problems (except maybe "what is a good computational model of Univalence?") -- not talking about "P = NP?", which is not really PL-related.
Some researchers also have a knack for hitting perplexing open problems that are easy to formulate formally, and it often comes from being very demanding about using the most rigorous tool in the most expressive way possible, even though the particular need at hand could possibly be filled by a quicker, less principled (compromise, workaround) solution.
The question about the K axiom has since been solved by Jesper Cockx.
# High-level questions
- Can we make it practical to write programs along with their specifications, and *guarantee* that they match?
- Can we make proof-assistants so usable that all mathematicians could use them?
- Can we extend industrial-strength automated reasoning technology to higher-order logics, instead of encoding our problems into SAT/SMT solvers?
- Can we produce a usable effect-typing system?
- Can we produce a usable separation logic?
- Why are we consistently failing to deliver satisfying module systems to actual programming languages?
- Can we use functional programming technology to produce a good low-level programming language?
- Can our language theories accommodate live code update, variations (architecture-, configuration-, language-version-dependent code), code distribution?
- What *are* the "right" presentation*s* of equality in dependently typed languages?
# More specific questions
- Can we regain principal type inference in realistic programming languages with dependent types? (This may require layering the type system.)
- What is the structure of the subclass of dependent pattern-matching that can be written without using the K axiom?
- Can we make reasoning about parametricity practical?
- Can we design practical *pure* languages, with no ambient non-termination effect?
- What is the right proportion of nominal typing / generativity in a type system for usable programming languages?
# Solved questions
- Can we produce functional programming languages that are competitive in performance with other languages for most tasks?
- Can our language theories accommodate and explain object-oriented programming?
- Can we integrate the main ideas of functional programming into everyday practice of users of mainstream languages?
- Can we design programming languages with no ambient effect (except non-termination), and implement them as (non-derivable) libraries?
- Can our language theories account for incomplete programs with holes?
# Maybe-out-of-scope-maybe-underestimated questions
- Why do some languages/systems see wide-ish adoption while others don't? Besides expressivity, what are the success factors?
- What are good surface syntaxes to accomodate users tastes, varied tooling, convenient library-level/domain-specific abstractions/sublanguages ?
- What are good programming systems to teach programming?
- Can we make developing good tooling easier?
- What are the right user interaction models for collaborative program construction and maintenance? Do programming languages and theories need to be adapted to remain pertinent with respect to new interaction models (mashup, code wiki, etc.)?
Can we extend industrial-strength automated reasoning technology to higher-order logics, instead of encoding our problems into SAT/SMT solvers?
The SAT community is already working on quantifiers from the other end, but these are quantifiers over finite domains. In SMT the issue is that you run into undecidability, but a lot of techniques to handle quantifiers have been developed. I think the best approach is to have a SAT/SMT solver as a tactic, integrated with the language so that if you don't manually specify a proof term then the SMT solver is invoked automatically (so that you have the same streamlined workflow as with a pure SMT approach, but with the ability to drop into manual proof if necessary).
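For intuition on the finite-domain case: a universal quantifier over a finite domain can always be eliminated by brute-force expansion, which is, in crude miniature, what QBF-style techniques systematize. A naive Python sketch:

```python
from itertools import product

def valid(formula, n_vars):
    """Check 'forall x1..xn. formula' by expanding over the domain {False, True}^n."""
    return all(formula(*bits) for bits in product([False, True], repeat=n_vars))

# (x and y) -> x holds for every assignment; (x or y) does not.
print(valid(lambda x, y: (not (x and y)) or x, 2))  # True
print(valid(lambda x, y: x or y, 2))                # False
```

Real solvers avoid the exponential expansion with clever pruning, of course; this only shows why decidability is not in question for finite domains.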
Why are we consistently failing to deliver satisfying module systems to actual programming languages?
I would say it's not entirely a failure: Scala has modules (objects).
Can our language theories accommodate live code update?
I'm not sure if languages can or need to do much here. It's mostly a question of application architecture. Techniques like substituting one code pointer for another in a running program are never going to work well. The application needs to handle this in a sensible way, for example serialize all state, then convert the data representation to version n+1, then start up the new version (at least conceptually, the conversion could be done lazily when the new program accesses pieces of the data to minimize downtime). Even better is an event sourcing architecture so that you could run two versions of the code side by side for a while.
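The serialize-then-convert approach described above can be sketched in a few lines; the schema and field names here are purely illustrative (the "migration" splitting a name field is made up for the example):

```python
import json

# Sketch of the state-migration approach to live update: the old process
# serializes its state, a converter rewrites it into the version-n+1
# representation, and the new code starts from the converted state.

def migrate_v1_to_v2(state):
    # Hypothetical schema change: v2 splits "name" into "first"/"last".
    first, _, last = state["name"].partition(" ")
    return {"version": 2, "first": first, "last": last, "count": state["count"]}

v1_state = {"version": 1, "name": "Ada Lovelace", "count": 3}
blob = json.dumps(v1_state)                    # old process writes this out
v2_state = migrate_v1_to_v2(json.loads(blob))  # new process converts on startup
assert v2_state == {"version": 2, "first": "Ada", "last": "Lovelace", "count": 3}
```

The conversion could equally be done lazily, field by field, as the new version touches the data.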
What *are* the "right" presentation*s* of equality in dependently typed languages? [...] Can we make reasoning about parametricity practical?
The answer to the first question is most likely HoTT. The answer to the second question is most likely internalized parametricity. An interesting question is whether these fit into a common framework. I think they do. Both are about functions that preserve something. In HoTT all functions preserve equality as defined in HoTT, with parametricity all functions preserve relations. They are both of the form if F x y then F (f x) (f y) where F depends on the type. Even in an ordinary dependently typed language you could define a special function space that preserves F:
A ~> B = (f : A -> B, F A x y -> F B (f x) (f y))
i.e. a function paired up with a proof that it preserves F, which is indexed by the type. There are two further tweaks needed:
(1) You want the function space of the second component to be the ~> function space too: F A x y ~> F B (f x) (f y)
(2) You want to support dependent functions (x:A) ~> B x.
So you end up with something like:
B : A ~> Type
(x:A) ~> B x = (f : (x:A) -> B_0 x, (e : F A x y) ~> F (B_1 e) (f x) (f y))
Note that B itself has ~> function space.
You can put a locally cartesian closed structure on this function space, and then HoTT & internalized parametricity overloads normal lambda calculus syntax to work with this function space. Perhaps you can make this overloading parametric in F, i.e. the thing that's being preserved. You might even be able to support multiple such function spaces in the same language.
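For intuition, here is a value-level sketch in Python of the non-dependent case, with the relation F represented as a runtime predicate rather than a type. All names are illustrative; in the real construction the second component is a proof, not a check:

```python
# A ~> B at the value level: a function f bundled with a check that it
# preserves a relation -- if rel_a(x, y) holds then rel_b(f(x), f(y)) holds.
def preserving(rel_a, rel_b, f):
    def check(x, y):
        return (not rel_a(x, y)) or rel_b(f(x), f(y))
    return f, check

# Example: doubling preserves the <= ordering on integers.
leq = lambda x, y: x <= y
double, check = preserving(leq, leq, lambda x: 2 * x)
assert check(3, 5)   # 3 <= 5 and 2*3 <= 2*5
assert check(5, 3)   # holds vacuously: 5 <= 3 is false
```

In a dependently typed language the check becomes a proof term indexed by the types, which is what lets the ~> space be overloaded for equality (HoTT) or relations (parametricity).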
There are other F's that are interesting. HoTT uses equivalences for F on Type, i.e. two types A, B are related if there are f : A -> B, g : B -> A with f (g b) = b and g (f a) = a. This results in a groupoid structure. Maybe if you only require f (g b) = b you get a category structure.
If you leave both out, so you only require the existence of f : A->B, g : B->A, you get exactly the invariant functors. Given a Q : Type ~> Type, and given f : A->B, g : B->A, you would get f' : Q A -> Q B and g': Q B -> Q A. Of course this won't be locally cartesian closed, but it will be cartesian closed.
If you only require f : A->B, then you get the usual functors.
This is even related to type classes. If you have the List type constructor, then this preserves Eq-ness. If we have Eq A then we have Eq (List A).
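The Eq example can be mimicked at the value level in Python, lifting an equality on elements to an equality on lists:

```python
# Eq A -> Eq (List A): the List constructor preserves Eq-ness.
def list_eq(eq_elem):
    def eq(xs, ys):
        return len(xs) == len(ys) and all(eq_elem(x, y) for x, y in zip(xs, ys))
    return eq

# A non-standard element equality (case-insensitive strings) lifts too.
ci = lambda a, b: a.lower() == b.lower()
assert list_eq(ci)(["Foo", "BAR"], ["foo", "bar"])
assert not list_eq(ci)(["foo"], ["bar"])
```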
Lots more things could fit into this framework, like unit systems (functions that preserve certain symmetries), continuity (functions that preserve closeness), differentiability (functions preserve small perturbations), etc.
A more general mechanism to locally overload the syntax could be very useful in other cases too.
What is the right proportion of nominal typing / generativity in a type system for usable programming languages?
Generativity is basically implicit effects in the type system, so just like any other effects it should be (1) used as sparingly as possible (2) explicit in the type. Explicit effects like F : Type -> SomeEffect Type, sure, but implicit ones with F : Type -> Type, no. F x should be equal to F x no matter what.
I was rather careful to formulate my question allowing several important notions of equality.
HoTT might be unsuitable as a sole notion of equality for programming, because the fact that equality conversion may incur arbitrary computations is not something I would be comfortable with, at least if it were implicit (and if all uses of equality must be explicit, well, life is hard). It may be possible to give good guarantees when manipulating h-sets, and find a good language design that helps people ensure that the types they are programming with are indeed h-sets.
It has been suggested to consider a two-level system with a "rich" HoTT equality and a weaker, "strict" equality. In any case, this would be at the level of propositional equality. There remains the question of what a good definitional equality for the system is, and which operations it allows. It might be the case that programming can be done mostly using definitional equality, with propositional equality used in proofs.
But is the number of people churning code going to remain significant? In five years - of course. Ten? Most likely. Fifteen - I am not so sure. Twenty? I have no idea.
Programmers are already increasingly useless. This is inevitable.
To see why, consider agricultural labor:
I have read that in 1800, just shy of 90% of the U.S. working population worked in agriculture.
In 1900, only about 40% worked in agriculture.
In 2000, only 2.6% worked in agriculture.
This shift is the result of the blind process of capitalist competition and nothing more.
Socially, this shift is, in some ways, a disaster. Our food system is petroleum-dependent. Our produce is, on average, of poor nutritional quality. We have decimated the soil. We have emptied ancient aquifers. We have poisoned the Gulf of Mexico. We are killing off our pollinators. We stand just a few crises away from mass starvation.
Perhaps worst of all: there is essentially nothing individuals can do to escape this situation and reverse the problems.
Sure, a few lucky, eccentric people can escape to private paradises and hope nothing intrudes, but the point is this: At every step of the way, our society has not only displaced smaller-scale, often individualistic agriculture -- our society has made community food security impossible for communities to secure.
We've not just stopped taking care of our own subsistence while industrial centralization does its thing. We have given up our freedom and capacity to take care of our own subsistence.
Why would computing be any other way?
At every step of the agricultural disaster, colleges and universities trained new experts to concentrate on ever more esoteric, ever more "sophisticated" problems.
Those professionals had an economic imperative, at every step: get rid of farm workers.
Get more food commodities out of the workers you can't get rid of.
Figure out how to mass produce and mass market industrial food.
Every step of the way the professionals competed to see who could jump the highest when told to jump. That's where the money and the acclaim were. That's what the competition among the professionals required. If you wanted to be a professional but would not help accelerate the agricultural disaster? You would not find work.
In the 1970s there was a brief dream that we might expand the capacity of communities to build computing systems and program for their own needs. This was naive.
The vast resources of industrial capital, and the experts who aspire to it, were deployed to eliminate as much labor as possible from computing, and to sell the greatest possible amount of computing commodities, at the highest profit, using the least amount of labor possible.
Our devices are locked. Source code is unavailable. Computing is increasingly centralized on massive industrial server farms. Socially, we are only a crisis or two away from a complete disaster when these systems fail.
As in agriculture, we are utterly dependent on precarious, ill-conceived, highly centralized production -- and we have sacrificed our liberty and our capacity to get by without those systems.
Just highlighting key points:
Our devices are locked. Source code is unavailable. Computing is increasingly centralized on massive industrial server farms. Socially, we are only a crisis or two away from a complete disaster when these systems fail.
This is such a pessimistic view of the world. It's like saying self-driving cars will fail, when they already exist and are poised to change the world. Where PL fails us, ML is picking up the slack.
We have been on the course of industrialization since the neolithic revolution. Turns out people want to specialize rather than be hunter gatherers; it has worked well for us, and the impending disasters predicted every decade have failed to materialize.
This is such a pessimistic view of the world.
The hell you say. It implies that capitalism must collapse under its own weight as a direct consequence of its internal rules combined with its positive effects of developing production.
Was it pessimistic to say "Hey, the feudal lords are doomed!"?
We have been on the course of industrialization since the neolithic revolution.
Wrong. We have been on a course of technological advancement for all of history.
"Industrialization" is not merely technological advancement. It is technology combined with (1) The invention of governments that impose capitalistic property laws (the revolutionary overthrow of feudalism); (2) The eviction of the peasants from the land from which they drew subsistence (the enclosure of all productive land under the jurisdiction of capitalistic property law); (3) The invention of mandatory wage labor for the propertyless (the assumption of the power of life and death by capitalist governments). "Industrialization" is currently in the process of dying because capitalist competition continuously tries to eliminate factor (3) -- wage labor. (IMO it's a bit of a horse race whether civilization or capital collapses first.)
Turns out people want to specialize rather than be hunter gatherers;
History suggests that hunter gatherers succeeded well enough to destroy their lifestyle by over-exploitation, whereupon they were materially compelled to take up slave-based agriculture and animal husbandry. History begins around that point.
impending disasters predicted every decade have failed to materialize.
You should read more news.
The feudal lords were never doomed. They evolved into an aristocracy, then into capitalists, or in communist countries into high officials.
This is a bit political, but the detachment of land from subsistence was necessary in achieving industrialization and urbanization, which is a lot more efficient than subsistence farming. You only have to look at the countries that have not yet undergone that transformation to see that. Or heck, just look at China, where farmers still have lots of land that is used quite inefficiently...the agri-corps or at least large-scale farmers would be a huge improvement here.
"Industrialization" is currently in the process of dying because capitalist competition continuously tries to eliminate factor (3) -- wage labor. (IMO it's a bit of a horse race whether civilization or capital collapses first.)
Human beings are incredibly versatile and have been adept at avoiding doom for the last 100,000 years or so. Hey, Asia is crowded...well, let's travel over that land bridge to America!
What is happening right now is that automation is eliminating the need for lower skilled jobs, and lessening the need for even high skilled jobs. We are approaching a pivotal moment in our history where work is much less valuable. They are not working to eliminate labor, labor is just dying out naturally. The huge question is what will replace it, which is something that we will have to start tackling in the next decade.
History suggests that hunter gatherers succeeded well enough to destroy their lifestyle by over-exploitation, whereupon they were materially compelled to take up slave-based agriculture and animal husbandry.
Citation? Hunter gatherer populations were naturally kept in check by their inability to exploit the land very well. They were never able to over exploit it, just like squirrels will never eat all the nuts off of trees in your city. It was only with agriculture that a sedentary lifestyle was even possible.
News tends to emphasize the negative because that is what is interesting to read/hear about (makes sense: problems need fixing, so focus on the problems). But the truth is, we are living in an unprecedented period of peace and prosperity.
The feudal lords were never doomed.
That's why there are so many literal serfs in England! Wait....
They evolved into an aristocracy, then into capitalists, or in communist countries into high officials.
As far as I know, the rise of mercantilism compelled the end of the feudal system, which first adapted as an aristocracy and then collapsed (often with violence) into the simplified, two-class society of liberal capitalism. (Titular aristocracy notwithstanding.)
This is a bit political, but the detachment of land from subsistence was necessary in achieving industrialization and urbanization, which is a lot more efficient than subsistence farming.
Well, duh. That is not political, it is tautological. Capital is a revolutionary force that rapidly accelerated the development of society's productive capacity. (Remember, I started off by pointing out a decline of agricultural employment from around a 90% share in 1800 to about a 2.5% share today.)
Or heck, just look at China, where farmers still have lots of land that is used quite inefficiently...the agri-corps or at least large-scale farmers would be a huge improvement here.
By aspiration, if nothing else, the Chinese socialist government is attempting to rapidly progress through capitalist development, including with policies that encourage urban migration and agricultural modernization.
You said this following:
Human beings are incredibly versatile and have been adept at avoiding doom
You said that in response to a statement that the industrial age is ending because capitalism is very far along the way of eliminating wage labor.
I hope you are right that humans will adapt to the death of capitalism. I think it is a horse race.
In any event, just as feudalism was doomed, now capital is doomed.
Hey, Asia is crowded...well, let's travel over that land bridge to America!
There is no frontier we can open up anymore, not even space, that can restore the utility of wage labor.
What is happening right now is that automation is eliminating the need for lower skilled jobs,
All jobs. One of the striking features of high-tech firms is how little labor they consume relative to their revenues.
We are approaching a pivotal moment in our history where work is much less valuable.
We hit that wall in 1929. Then we blew up a very large share of the world's factories and fields and slaughtered tens of millions of workers. Then we rebuilt better for a few decades and hit the same wall again around 1970. Then we stopped using commodity money for trade and emphasized financial capital. Now we have skyrocketing permanent unemployment, mass incarceration, environmental ruin, environmental and economic refugees on a mass scale, ongoing and growing water crises, impending food crises, the collapse of political states at the periphery of the developed world, escalating threats of a hot version of WWIII, ....
They are not working to eliminate labor, labor is just dying out naturally.
I don't know who "they" is supposed to be. Capitalists work to eliminate labor through blind competition. Workers actually do the heavy lifting of making themselves obsolete. Everyone in capitalism, top to bottom, contributes to bringing about the increasing obsolescence of wage labor.
The only real problem is that as wage labor becomes useless, workers become poorer and poorer, even while the capacity for wealth creation goes through the roof.
The huge question is what will replace it, which is something that we will have to start tackling in the next decade.
I hope you are right. The elimination of wage slavery is the elimination of class-based domination and exploitation. They are one and the same. It entails the end of the state and the end of exchange-based production.
Citation? Hunter gatherer populations were naturally kept in check by their inability to exploit the land very well.
That appears to be false.
I think one of the seminal works in this area (not a work I'm personally that familiar with) is "The food crisis in prehistory. Overpopulation and the origins of agriculture" Mark Nathan Cohen, 1977, Yale University Press.
The transition to agriculture is associated with a great decline in quality of life (more, harder work), lifespan, and nutrition. The techniques of agriculture were apparently understood well before it became dominant. From this we can conclude that the transition to agriculture was materially compelled, not a "lifestyle choice". Finally, ancient agriculture could not operate without slavery and general hierarchy of domination -- it both required and enabled class-based society.
They were never able to over exploit it, just like squirrels will never eat all the nuts off of trees in your city.
Animal populations without a balance of predators starve themselves by over-exploitation all the time. Why do you think we moderns hunt deer?
It was only with agriculture that a sedentary lifestyle was even possible.
There was nothing sedentary about it. It was much harder work than the pre-neolithic era. And for a poorer life, at that.
Uh... yeah. I didn't mean "watch more tv". We are not living in an "unprecedented period of peace and prosperity," no matter what Stephen Pinker wants to tell a TED conference.
Uh... yeah. I didn't mean "watch more tv". We are not living in an "unprecedented period of peace and prosperity," no matter what Stephen Pinker wants to tell a TED conference.
You're in a minority then. I can only charitably interpret you to be considering absolute numbers rather than rates for your definition of "peaceful period". Unfortunately, that's not a meaningful metric of peace.
Some decent critique of the claim that we are in a peaceful period is given by John Gray, earlier this year, in the Guardian:
"A new orthodoxy, led by Pinker, holds that war and violence in the developed world are declining. The stats are misleading, argues Gray – and the idea of moral progress is wishful thinking and plain wrong"
John Gray: Steven Pinker is Wrong about Violence and War.
To a lot of people, what this article says is plainly obvious by direct experience.
So Gray criticizes Pinker's explanations of the statistics, discusses a number of irrelevant tangents about incarceration and proxy wars, all to paint a poetic dystopian narrative that doesn't discuss any concrete numbers at all. I'm not sure how this is supposed to be convincing to anyone who's even remotely familiar with crime and war statistics and the economic growth of nations and how this correlates with overall quality of life.
criticizes Pinker's explanations of the statistics,
I believe that is a fundamental misunderstanding. It is true that Gray criticizes Pinker's inferences, but he also faults Pinker's choice of statistics.
Let me try to lay out the form of the critique a little differently. I think I am faithful to Gray here but if I stray, it is only into my own very similar critique.
1. Pinker's argument begins with an exercise in selecting data sets which, he asserts, measure the violence of society. Pinker's selection is cramped and arbitrary. As you noted, for example, Pinker ignores the skyrocketing incarceration rate in the U.S. as evidence of state violence.
Here is another example: Pinker uncritically repeats FBI rape statistics when he presents his thesis. Per FBI reporting, incidences of rape in the U.S. have markedly declined since the 1970s.
Defending his choice of data sets, Pinker writes, in his FAQ:
I had two guidelines. The first was to use data only from sources that had a commitment to objectivity, with no ideological axe to grind, avoiding the junk statistics commonly slung around by advocacy groups and moral entrepreneurs. [....]
Anyone even modestly familiar with the provenance of FBI statistics on crime knows that Pinker is already off the rails. The FBI statistics are anything but objective and are buffeted by ideology at every step. Why?
Well, local, county, and state law enforcement agencies independently collect and report these statistics. The process goes on with no real oversight. The statistics are used in public policy debates to make or break ideological arguments, and to "prove" or "disprove" the success or failure of policing strategies. They are some of the most politicized and uncontrolled data sets out there!
That is why police departments are often found to lie in this reporting. In the case of rape, there is evidence that the supposed precipitous decline in rape since the 1970s (Pinker's graph) is fake and that in fact the fake statistics mask a rape crisis. "How to Lie with Rape Statistics: America's Hidden Rape Crisis" Corey Rayburn Yung, Iowa Law Review V.99, n.1197, 2014
In short, Pinker has simply asserted that his statistics measure human violence, but even a cursory examination calls that assertion into doubt. In the case of rape, for example, Pinker is not using a measure of violence but rather a measure of indirect crime reporting to the FBI -- a critically important difference.
2. Why does Pinker treat the numbers so sloppily? In particular: what is he doing with his avalanche of junk statistics?
Here, Gray points out -- and I agree -- that Pinker is advancing an ideological argument based on pure faith. Gray spends some time getting to the particulars of this argument and pointing out we need to be more critical and sceptical of Pinker's cartoonish "enlightenment". I won't go into that here.
You (naasking) wrote:
I'm not sure how this is supposed to be convincing to anyone who's even remotely familiar with crime and war statistics and the economic growth of nations and how this correlates with overall quality of life.
I am familiar with crime and war statistics. I understand what is popularly called "economic growth" very well.
I do not know what you, personally mean by "quality of life."
I know that in the United States, especially where I live, it is a racist and classist dog whistle that is used to rally people in support of state violence directed against destitute and low-income non-white people.
None of that state violence appears in Pinker's statistics.
Certainly some of the data sets are highly politicized, but Pinker's claims don't depend upon only a single data set, they are trends seen around the world. Even if the rates in a single place increased, that doesn't necessarily significantly affect the overall trend.
State violence wasn't covered by Pinker's analysis, but even here, you seem to have a US-centric view. The US indeed has an incarceration crisis, but again, the rest of the world doesn't. Furthermore, you'd be hard-pressed to convincingly argue that the significant reduction of non-state violence has been matched or exceeded by state violence.
By "quality of life", I mean access to basic needs like shelter, food, education, etc. The US has certainly slid backwards on some of these metrics since the 70s, but not on all of them, and that backwards trend again does not apply to the rest of the world.
The problem is not that Pinker's data sets are "politicized" but that they measure something other than the level of violence in society.
With reference to the modern world, Pinker is studying various administrative categories, not violence itself. With respect to the ancient world, he is studying modern administrative categories projected onto a hypothesized past (without much regard for the human meaning of those modern categories in ancient societies).
Where violence does not take a form that meets the gaze of sovereigns, Pinker defines it away. To his FAQ again: [...]
So far as I know, his list (deprivation, repression of communities, environmental poisoning, etc.) is not simply "metaphorical" violence. These things are not condemned by analogy but because and when they harm or kill.
The Bhopal disaster comes to mind as one example that is, for Pinker, not "violence" but merely a "bad thing" outside of his story. Similarly the invisible-to-law violence and deprivation that characterizes the lives of many migrant farm workers.
Personally, I think Pinker is wrong to even begin to claim he might quantify the degree of violence in society. There is no overcoming these kinds of definitional problems, which he runs into not on one or two data sets but across the board.
If he had confined his conclusion-drawing to something much more literal -- i.e., rapes reported to the FBI have declined a lot since 1970 -- the work would be unassailable and also boring. It wouldn't get him on the NYT best-seller list or invited as a speaker or interview subject.
He has used the data to spin an ideological yarn around some not-too-coherent concept of technocratic enlightenment. A certain audience just eats that kind of implicit reassurance right up.
I don't think that your analogy of farming is relevant to the computer industry. Farming has a meaningful split between the ownership and exploitation of resources. The dominant resource in farming is fertile land: all other relevant resources are fungible. The owner of the land will reap the lion's share of any exploitation, regardless of who carries out the work. Two farmers who are to collaborate need to share the resource: they must be physically co-located on that resource. This creates clear benefits for scaling in size.
Why would the computer industry behave the same way? I can't see any dominant resource that is required for exploitation. Talent is not a commodity, even if the owners of capital wish it to be so. I would ask you to consider the games industry as a more relevant example. This is an industry that has arisen entirely within our lifespans, and look at the evolutionary path that it has followed. In the 80s the Ferrari lifestyle was an aspiration across the programming industry, yet almost all development was carried out by small-scale development teams that struggled financially.
Even when the industrial base was distributed in a pattern close to cottage industry, there was a winner-takes-all dynamic that caused a power-law distribution in financial returns. The result is Electronic Arts and the other conglomerates that dominate the AAA slice of the industry. But their rise has not eradicated the cottage industry, and this is unusual in the development of an industrial sector. In any industry where there are low barriers to entry, talent will dominate capital. The "indie" game market is just the latest name for that constant bottom-up rebirth of the industry.
Something new is happening in the games industry, specifically because it is not dominated by owning slices of a non-expandable resource: there is a small-scale bottom-up industry co-existing with an effective cartel of large-scale top-down conglomerates. That is far more interesting as a pattern of industrial development than agriculture and the serfs moving to the factories. I think it tells more of what we will see in the future of programming. There would seem to be more than one equilibrium in the patterns of activity required to exploit this area. Most interestingly, they seem to exhibit scaling effects: hitting the level of polish required on a $2.5B game title requires the combined efforts of 1000 people and the level of capital to fund their 3-year development cycle. Aiming at a lower level of complexity in the product means that 2 guys in a bedroom can develop more efficiently than a large team (Fez?). This is a genuinely new pattern of industrial development: what other industry has cottage-like efficiency in low-complexity producers competing / co-existing with industrial-scale efficiencies in high-complexity producers?
This is a genuinely new pattern of industrial development: what other industry has cottage-like efficiency in low-complexity producers competing / co-existing with industrial-scale efficiencies in high-complexity producers?
Agriculture. (See, e.g., farmer's markets where the sellers include boutique producers.)
Something new is happening in the games industry, specifically because it is not dominated by owning slices of a non-expandable resource
Anyone can make the big time in Hollywood.
there is a small-scale bottom-up industry co-existing with an effective cartel of large-scale top-down conglomerates. That is far more interesting as a pattern of industrial development than agriculture and the serfs moving to the factories.
There is nothing very unique there.
One symptom of the rising disutility of labor is that capitals substitute stringers for regular employees. The so-called sharing economy shows this most directly. It appears as "zero-hour contracts" in the U.K. It appears as casting calls like app-stores and V.C. goat rodeo pitch shows. Next up: Hunger Games!
Agriculture is a low-skill (commodity labour) market of huge size (absolute number of people employed in the industry). Programming is a high-skill, small market. Why would the effects of capitalism on the former predict the effects of capitalism on the latter with any accuracy?
Can anyone make it big in Hollywood? I don't know the figures, but I used to spend enough time at the Watershed in Bristol to see that indie films were not rolling in money.
You obviously don't have any understanding of agriculture if you think it is a low-skill field. It has both low-skilled and high-skilled aspects. Programming is no different; it has low-skilled and high-skilled aspects.
I have seen many low-skilled workers producing programs for many years. It is why there are those of us who are paid to fix up all those stuff-ups made by low-skilled programmers.
Many of the incidents that make the news media demonstrate the low level of skill being used in the development of programs. Buffer overflows are a simple example of a common problem that should no longer exist but is still a major source of attacks. SQL injection is another, due to a lack of understanding of how to properly design and implement validation code. My latest foray into seeing such problems was contacting a bank through its web-based customer interface, which failed to properly validate and translate a comment field.
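As a minimal illustration of the SQL-injection point, here is a sketch using Python's built-in sqlite3 module; the table and inputs are invented for the example, and parameterized queries are the standard mitigation alongside input validation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, comment TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "hi"), ("bob", "hello")])

user_input = "nobody' OR '1'='1"

# Unsafe: the input is spliced into the SQL text, so the attacker's quote
# characters change the query's meaning and every row matches.
unsafe_rows = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % user_input).fetchall()
assert len(unsafe_rows) == 2

# Safe: a parameterized query passes the value out-of-band; the input is
# compared as an ordinary string literal and matches nothing.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
assert safe_rows == []
```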
Farming is much more than most people have any understanding of. I have my own orchard, stock and gardens, and there is much skill required even for the little I have (that's not to say that I have mastered these skills as yet).
Web developers in an enterprise shop aren't highly skilled but they get the jobs they are assigned done. I've heard the word "code farmer" more often than I feel comfortable with (disclaimer: I live in a developing country where the farmers mostly haven't industrialized yet).
I can accept that I know very little about farming, but tell me: if farming is high-skill, does this mean that only certain people have the natural talent to be farmers, and that we have an education crisis as we attempt to teach the other 60% of the population? Roughly speaking, that is a measure of the skill requirement in programming. So as a comparison, is farming really that high-skill?
I think you are making the common mistake of assuming that high skills are only to be found in high technology. There are those who have a natural talent for some field and there are those who, through much effort, gain the skills for those fields. There are also many who, in their selected skills, become arrogant about their abilities and forget that they are ignorant about many areas.
With regard to farming, we have a situation today where multi-national companies are attempting to totally control what farming is and reduce it to their products, processes and equipment. This is really no different from what is happening in the IT world. When we look at companies like Microsoft, ORACLE, Google and IBM, the game is to try to develop a monoculture environment.
To give you an example of a complex skill in the farming environment (so to speak), take that of butchering a carcass into the many different kinds of cuts. I have a wonderful video of the butchering of a lamb: the butcher describes each cut and then performs it, showing how it is done. I know just how difficult it is to do this. It takes understanding, knowledge, training and the development of skill to do this properly. My efforts with the three sheep I'll be killing later this year will be nowhere near this level. But I want to learn, and one has to do the work to develop the skills.
As far as farming is concerned, it takes a great deal of skill, knowledge and understanding to be successful, whether that is ensuring the right conditions are managed for crops, flocks or herds, or ensuring that when adverse conditions occur you can actually maintain your farm (and the associated crops, flocks or herds). You have to understand feed management, disease control, asset management, infrastructure management, sales, product development, etc.
I suppose what frustrates me about comments such as yours is that all such arguments are, at their core, ignorant of the skills required in fields with which one is not acquainted but which appear easy on the surface.
Try doing a good weld, or plane a surface flat, or sand a surface smooth, paint a wall, propagate seedlings, sweep a floor properly. Each of these tasks actually requires a great deal of skill that takes time to learn properly (unless you are one of those few who have an innate talent for it).
My own experience says that many who are in high technology jobs (including computing) are there only because there is supporting technology that reduces the skills required to do their jobs. I know that in my country, many of the programming jobs are moving off-shore to programming farms that produce less than adequate code simply to make the bottom line in staff costs lower.
Finally, to answer your question, it doesn't take natural talent to do a good job. It can certainly help. But it does take a desire to do the best and a desire to learn. Programming, like most jobs, can be done poorly with some training.
To say we have an education crisis in developing the necessary programming skills is missing the mark completely. We have an education crisis because we don't teach people to think clearly or to look at problems from different angles. There are many in the programming field who, because they are over the age of 40, are no longer considered adequately skilled in modern technologies, yet will run rings around those who are much younger.
Programming can be taught to anyone, just as cooking and farming and cleaning and engineering can be. The first and major requirement is that there is a willingness to learn and develop the skills needed. The crisis comes when our education systems fail to develop our young people with the desire to develop themselves.
Our education systems are the failure because, on the whole, society has given up caring about the future. Instead of being positive, society is all about the negative. We care more about people's rights than we do about people. We care more about our pleasures than we care about the consequences of those pleasures for future generations.
We care more about our position on type-checking in programs than on seeing that each side has some valid points even if we don't agree with their stance and that appropriate use is possible. We care more about dogma than about what is the best tool for a specific problem set.
That's enough for now, I want to go and eat my wife's chocolate pudding. Her skill far surpasses mine and I am privileged to partake of her skills in cooking.
In particular, while we all associate clusters of meaning with particular words it is often the case that they only overlap weakly with other people's clusters. Rereading what I wrote (and also gasche's fair points below) I can see that I've been clumsy in two regards: "natural talent" has overloaded connotations that I was unaware of, and I've mashed together a relative comparison with an absolute one. Both of which are a result of lazy thinking on my part. Let me try again to frame the comparison that I was making.
Referring to an activity as low-skill or high-skill is a gross simplification of many variables. One of these would be the amount of experience necessary to attain a particular level of result. Another variable would be what proportion of people can convert the experience into knowledge and improvement. Even that is a gross simplification, as ability in learning depends on acquiring many previous skills, such as thinking clearly about a problem and exploring it from different angles. Even something labelled as a predisposition, such as curiosity, is really a manifestation of previous training and expectations from other domains.
The relative comparison that I was aiming for is the average rate of return (in advancement of a skill) per unit of effort expended. I would label an activity high-skill when this average is low, yet some individuals show a rate of return far from the average. Whether these individuals are demonstrating an innate talent, or expressing unseen returns in previous learning is speculative at best and downright dogmatic at worst. Programming shows a sharp bimodal distribution in this regard, and many studies have found this distribution remains constant across different populations of students. I won't dig out the references to the literature now as your point that this is missing the mark is fair.
Let me conclude by cutting down my previous post to the accurate part: "I don't know anything about farming", and wishing you the best experience in chocolate pudding eating.
I don't believe much in "natural talent" and I think this convenient idea prevents us from thinking deeper about issues. I am far from a specialist in cognitive sciences, but I have never encountered a real-life situation where skills couldn't be explained by a favourable growth environment and lots of (possibly indirect) practice.
(I would be ready to buy some "natural" determinism for extreme physical activities such as international-level sports, although I wouldn't be surprised if that was over-rated in this area as well. I am strongly sceptical, on the contrary, of anything "natural" about skills in technology, or even about harder scientific fields (eg. maths).)
Also I think that thinking of skills as something acquired, rather than something "natural", gives us a more positive view of people. I certainly couldn't properly teach programming if I believed that it was a "natural talent" -- and I suspect there is some correlation between poor teachers and people holding that belief.
I think you must be good at everything then? I certainly am not. I think people's brains are genetically different, which predisposes them to certain skills. Yes, you can train in your weak areas, but it is harder and takes longer to gain skill there. For example, I could train to be a sprinter, but I am never going to win the Olympics, as I don't have the physique for it (stride length, body proportions etc). To be truly great at something you have to work hard in an area you are naturally talented at. If you are weak in an area, you can work harder than everyone else and only be mediocre. We only have a limited time available to us, so those who stand out work hard in areas where they already have natural (genetic) ability. Performance is a result of both nature and nurture.
For programming I think there is not a single talent, but several different ways of learning it. I definitely see differences in the ways that people approach problems. Some people think visually, others linguistically etc. I think this means if you adapt the teaching many differently talented people can be good at programming, but they can't all be rock-stars. I am sure in teaching you have come across people who get the concepts first time they are explained, and others that have to go over it several times to understand. Given our limited lifespans, the one that picks it up first time will be able to achieve higher levels of excellence, although I am sure a good teacher can get them both to the level needed to pass a qualification (which is usually graded for minimal competence).
I think you must be good at everything then?
I think that I could become good at many things given good teaching and many hours of learning and practice, and so could anyone else. A barrier may be that we are better at learning things during youth (that would make sense to me), meaning that once a certain fast-growth, fast-learning period is over we are slower to learn radically new skills. In particular, I think that physical activity during youth accounts in large part for the development of the "physique" required to become an Olympics-level athlete (although that is the one area where I would believe in some influence of development).
I think people's brains are genetically different which predisposes then to certain skills.
This argument has been brought forward many times in history, and it has never paid for its costs. Many people have made terribly wrong deductions from it, and as far as I know no social value has been generated from ideas inspired by it. I'm certainly interested in scientists researching (as they have continuously been doing for centuries) ways to verify it, but I would find it silly to make lifestyle choices based on it, given that it's mostly a choice of belief so far, whose odds of impact are largely negative.
I definitely see differences in the ways that people approach problems. Some people think visually, others linguistically etc.
All of that can equally be explained by environmentally acquired dispositions. (Maybe your parents used to sing lots of different sounds when you were a child, and you have a good auditory memory.) There are extreme cases (eg. autism) where genetic causes are being considered, but then there are also known extreme cases of cognitive function changes caused by the environment (eg. the psychiatric literature is replete with examples of a patient who, after some trauma, has a particular symptom that would intuitively seem to be explained by physical aspects of brain development).
Finally, I find all this talk of "Olympics", "rockstar programmer" and "excellence" rather boring. Some (usually privileged) people approach life with the goal of becoming "the world's best at X", but I doubt that is how you generate social value or even happiness in life. The most interesting comments in this sub-thread are by Bruce Rennie, explaining how to sweep a floor. It's not about which parental matches we should seek to produce the world leader of floor sweeping.
Autism is a great example. Extreme forms of autism make learning difficult, of course, but mild functional forms of autism (Asperger) seem to be more prevalent in engineering and programming. Separating cause and effect is difficult; it could just be the appeal of limited social interaction. But that is also interesting: that some people might choose to do A because they have some innate limitation in doing B. They are not necessarily good at A; it is just the best option for them to spend their time on.
Twin studies are the classic psychological technique for studying this:
Certainly some studies show enough correlation that it makes being dismissive of the genetic element hard. Recent studies have generally shown genetics contribute to things more than we thought. I find that a balanced view is that we are both nature and nurture.
As for excellence, often what you are interested in and enjoy doing is what you are good at (and achieve success in). People are happy when they do for a job whatever they would do even if they were not getting paid for it. I don't think it has got anything to do with privilege; find something you are good at and enjoy doing and you can make a success of it. I think happy people doing fulfilling jobs is a much better ideal than some synthetic concept of social value.
I am not advocating any kind of parental matching; that's just crazy, and misreads my position to a deliberately unrealistic extreme. I am talking about each individual finding the best path for themselves. Some people might be very good doctors, but terrible programmers. They won't be happy, or get promoted, as a programmer, so why force them to do it? Let them be a great doctor, teacher, or whatever... my view is not that some people are absolutely more talented than others, but that everyone has talents; they just have to find out what they are, and it's society's job to give them enough opportunities in schooling to do that.
With light edits,
People are happy when they do for a job whatever they would do even if they were not getting paid for it. [...] Find something you are good at and enjoy doing and you can make as success of it. I think happy people doing fulfilling jobs is a [good] ideal.
I fully agree, but I don't see why this would require "natural talent" (whether this notion actually has a significant impact on stuff or not). Note that you don't need to be "the best" at what you do for this to work (as in "rock star" or "Olympics"), just to be good enough to be able to make a living out of it. This is an observable quality that is not in any way related to the way people acquired those skills they enjoy and which can sustain them.
We can agree with "everyone has talents" as long as we don't pre-conceive a notion of talent of the kind "either you get it or you don't", but are flexible with the idea that the talents people have may be influenced by their life.
(On the other hand, if someone was thinking of technical education as a job of detecting which students have an innate gift for computer stuff and which don't, believing they are justified in this view by research on the IQ tests of twins, then I would strongly disagree with that person and suspect that they make un-scientific extrapolations from specific results to sustain a subjective belief in how life works.)
Re. privilege: many people live in an environment where they cannot afford to develop their talents and find a job they enjoy to sustain themselves. Having this chance is being privileged -- something, in that circumstance, that should be cherished, but also understood not to be a generality.
Well I think "talent" means something inherent (inherited genetically), which is why I used the word "skill" to differentiate something learned. You can learn new skills, you cannot learn new talents.
See the above paper, interest in a subject may also be inherited, so finding something you enjoy might be more or less the same as finding something you are good at (and enjoying it motivates you to study harder).
I think it is for the student to understand what they are good at and choose subjects accordingly. Some students may deliberately choose to study things they are not good at to get a more rounded education, but when we focus on grades, people don't do that; they choose whichever subjects they think they can get the best grades in.
I don't think giving people the opportunity to develop their talents is a privilege; I think it is something we as a society should strive to give to every student, and it is not right to rest until every student has that opportunity (if they wish to take it). It is not a privilege, and I think that way of thinking allows society to excuse any inadequacies in the education system.
It is both about the hardware we are given and how we learn to use it. It is not as simple as everything is nature or everything is nurture.
You can't teach anything if you don't buy that nurture has a role, a dominant role, to play; otherwise you wouldn't be teaching (unless you just needed the money)! But in reality, there isn't much scientific evidence to guide us, since these kinds of things are so hard to study. What is clear is that nurture plays some role, so teaching is worth doing.
Programming is a practical art, so it really depends on the student's willingness to practice it, more than anything else.
Time spent doing something is a large factor, and that is very much influenced by interest, but you do need a certain level of intelligence. If you think anybody can do college level math you are very much mistaken. I used to think as you do until I taught some classes at a high school, which prompted me to look into the science. It is a well established scientific fact that intelligence is partly or even mostly heritable. The IQ of kids who are adopted at birth is much better predicted by the IQ of their biological parents than the IQ of their adoptive parents. In fact the correlation of the IQ of somebody and their adopted sibling is ~0.0 whereas the correlation of the IQ of somebody and their biological sibling is 0.6 (1.0 would be perfectly correlated, and -1.0 perfectly anti-correlated). The correlation between the IQ of two identical twins raised by separate families (!) is ~0.86. Of course there are many different studies with slightly different results, but overall this is widely accepted.
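For readers unfamiliar with the statistic being quoted: a correlation of 1.0 means two series move together perfectly, 0.0 means no linear relationship. A quick sketch of the Pearson formula, using made-up numbers (not real IQ data), shows why even a constant offset between two series still yields a correlation of 1.0.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation: r = cov(x, y) / (sd(x) * sd(y))."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical score lists, invented purely to illustrate the scale.
identical = pearson([100, 110, 95, 120], [100, 110, 95, 120])
shifted = pearson([100, 110, 95, 120], [105, 115, 100, 125])
print(round(identical, 2), round(shifted, 2))  # prints "1.0 1.0"
```

Note that correlation measures how the deviations from each mean line up, not the absolute values, which is why a uniform +5 shift leaves r at 1.0.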
Yep, but not all programming is highly "mathy", and most programming teaching could be enormously improved*. Better teaching could produce many more programmers.
In other words: programming ability is also limited by intelligence, but I believe the bottleneck is teaching. With current teaching, learning to program requires more intelligence than strictly needed.
*The sheer amount of time and attention that must be poured into a proper course is amazing; see How to Design Programs. And we've had much less experience teaching programming than other subjects.
I don't think heritability is limited to mathy things. IQ tests are after all not very mathy as far as I am aware. The focus should absolutely remain on better teaching, tools, and languages regardless of whether aptitude for programming is heritable. That said, I think it's more important to be actually correct than politically correct. The idea that everyone is born with equal potential for every occupation is harmful when politicians and teachers and parents and children make decisions based on this belief.
I have observed "natural talent" on many occasions. It is the start of the story in some cases and in others there is not much talent just a lot of interest.
but I have never encountered a real-life situation where skills couldn't be explained by a favourable growth environment and lots of (possibly indirect) practice.
Though you may not have, I certainly have, and I have seen examples where the starting apprentice already exceeds the master craftsman. These are rare, certainly, but they do exist.
I myself have been involved in certain activities where under testing my own results were removed from the overall group results because the tester felt that they would significantly skew what they were testing for. Not that I cared, I still got paid for being a test subject.
I have also seen great natural talent wasted because of the choices made by the individual.
I have seen average natural talent blossom into extraordinary talent through sheer doggedness and hard work (practice, practice, practice).
Skills are acquired but in some cases, the person has a natural ability/affinity for the subject. There are more factors involved than just natural talent and if you cannot get the students interested in the subject matter then that is a failure by you as the teacher and completely useless for them.
Also I think that thinking of skills as something acquired, rather than something "natural", gives us a more positive view of people.
If you don't have a positive view of people whether they have "natural" skills or not, then I don't expect you to have a positive view of people anyway. Everyone is different; whether or not they have some skills shouldn't affect that view. I have come to the view that everyone you meet can show you something that you never knew before (that even includes politicians, lawyers and used car salesmen).
I don't think so. Agriculture is not the same as information technology. The application of technology to agriculture, and of course to offices, drastically reduced the number of low-level workers required in both industries (how many managers have a typist these days?). But agriculture is still big business involving a lot of people; it's just that they're not all farm hands. Some are helicopter mechanics (in Australia, at least!)
In programming, few people write sort routines these days. But the challenges continue to grow and there is plenty of room for more and more automation yet. Perhaps when 90% of all the solar energy we can possibly collect on Earth and in near-Earth orbit is used for computing there will finally be a limit. Oh wait, but there is now a serious attempt to get a huge Tokamak fusion reactor running.
So exactly where will the need for software development end? Every part of the process that is done manually and follows a pattern can be automated, but then there will just be new higher level patterns to automate. So long as our gross computing power continues to rise globally there will be not only a need for programmers, but an increasing need.
But agriculture is still big business involving a lot of people; it's just that they're not all farm hands. Some are helicopter mechanics (in Australia, at least!)
The employment statistics count all agricultural job categories. It used to take the vast majority of all available labor power to keep everyone fed -- only 200 years ago. Today, fewer than 3 in 100 people contribute any effort towards feeding us.
So exactly where will the need for software development end?
The human need? I don't think that will end.
The capitalist need? It is collapsing already.
The actual human practice, regardless of human need? It seems on the ropes to me but time will tell.
Do kids program anymore?
They program minecraft. Which is to say, not much at all, but if minecraft was extended with a richer programming experience that added value to the game, I'm sure they would.
How would we be able to tell if kids program anymore? :-) Some high schools have classes. But I found one 2014 article just now, reporting such classes were still a rarity. When I asked my younger son if his high school had programming classes about three years ago, he investigated, but the end result was that only writing html pages was taught (and this was described as programming). Seemed a little odd for a San Jose school in an okay neighborhood.
Kids still compete, and there's plenty of encouragement to engage in cut-throat competition, but how much this translates into programming is hard to say. My sons would write games if that was easy to do, but I told them what it took to achieve results (like those they like in triple-A games) would strike them as fantastically complex. The younger son's girlfriend is in CS at a UC school, but evidently they don't talk about it, as programming is not a fun topic.
When we learned how to program it wasn't 'cause it was easy!! (Boy, do I sound like a grumpy old man...)
I guess that kids that in previous decades might have been into programming are now into arduino and the like.
Maybe my generation is after yours, but programming WAS easy when we learned via BASIC on a TI/99, C64, Adam Coleco, etc... We didn't have very high expectations, however.
99/4A it was... our expectations were low in a sense, sure, but the point wasn't that it was easy or that our expectations were low. Rather it was that making a computer do your bidding was an amazing thrill, and once you got hooked it became increasingly more challenging to realize all the ideas we had.
The Raspberry Pi Foundation folks are trying to keep alive and expand and reinvigorate learning opportunities like what you describe re 99/4a.
p.s.: I think something like a $25 Mozilla phone will also have a huge positive impact.
+1
I agree with that. The problem is that computers are now capable of so much more, so the simple programs of the past are not interesting to kids these days. Our programs have become so sophisticated and advanced that "controlling the computer" has a much higher bar now.
Well, that isn't completely true; see minecraft, which is very simple compared to what the computer can do, but this simplicity enables creativity.
so the simple programs of the past are not interesting to kids these days.
That's become somewhat of a truism. But frankly I don't feel this can be right. I have an iPad sitting on the desk next to me, and if I could just get it to do something not pre-programmed by someone, I'd be delighted. I think we need to think more about how devices became less hackable, and how this relates to warranties and licenses. It seems that to have just a wee bit of fun these days requires jailbreaking an expensive bit of consumer electronics belonging to mom and dad (or, even more often, on lease from some telco). Computers should be cheap enough that you can give one to your kids to play with (I think we are past the days of making them build one themselves from a kit; though, I think the arduino/Pi stuff shows that kids [at heart] will do that too).
Kids really don't care about that kind of hacking, at least the impressionable 9-15 set. They want fun, that is why games are the injection method. And you'll find lots of programming based games on the App Store not to mention games with creation and even programming components like minecraft and little big planet.
Now give them a unique goal that they accomplish by hacking some hardware, and they will be all into it. But blinking lights and hello world don't count. Mindstorms is much more accessible (and satisfying) to a kid in that age range than an arduino/Pi.
So what changed? That's not how I remember myself at that age.
I think we should be careful to distinguish between "kids" in general, and the sort of kids that were drawn to programming in previous decades.
The thing is, computers were mysterious things back then, you were lucky to have one, and they didn't do much, so making them do more was "fun." Why do you think they booted into BASIC in the first place? But now computers are just so common, mundane, and they do enough that you know making them do more will require lots of effort. There is just so much for geek kids to fill their time with these days, they are more likely to start a YouTube channel showing off their minecraft creations.
So you are saying that it's not that computers are less inviting as platforms but rather that they are just so powerful out of the box. Of course to some extent it is both, but I still think the former is no less of an issue. Just think of all the things you can get the computer to do for you. Invent stuff! It doesn't have to be complicated. Just goofy stuff is fine as well. Think of all the puzzles we programmed computers to solve for us; think about Lego Mindstorms and robots and how you can make them do tricks; think of the things you could do with all the libraries available (submit your English report in iambic pentameter!). I dunno, I think the reason can't be just that computers have great graphics and multimedia capabilities out of the box.
But why play with real expensive tangible Legos when I can just go play minecraft with my friends? The computer is actually filling up all their time with distractions, ironically enough, that they don't even think of these kinds of things.
why play with real expensive tangible Legos when I can just go play minecraft with my friends?
Because it's cool?
Lots of cool things to do, only 16 hours in a day to do them.
I learned mid-70's, because my math teacher was an enterprising type who found resources and started his own new class. Probably he would never be a school teacher today, given other options. I was the oddball who wrote more programs than all other students combined, in all assignments, because it was fun. But this was a place and time when there was nothing to do, in the midwest, unless you liked truly awful network television. My father sold desktop calculators on commission to shop keepers for $500 apiece, which plummeted in price to ten bucks in about four years. I sold some TI-99s a few years later when I was between school gigs.
My guess is that being satisfied with minuscule feedback, in the form of trivial amounts of text, is hard for kids to tolerate now, given what they are used to experiencing in immersive UI graphics tech. My older son complains "pointers are weird", and lacks interest in low-level mechanics as being interesting in themselves. A taste for low-feedback gnarly puzzles is a help when learning to code. Presenting code tasks as puzzles may help uptake with some kids, while repelling others. An adventure world where you build a machine is likely a good context to learn programming, if little feedback exists other than machine building.
Hey, pointers are weird! The only place they finally made sense is C. Which, frankly, is a weird language in its own right...
My nephews (16 and 12) both program. The older is building a discrete-chip-based system. The younger quite happily hacks away on his games machines (including in hex). The older is involved with developing a language translation system (German to English) and writes programs for a company that belongs to one of his father's friends.
There are many who write robotic control code and do various hacking of control systems using the various small scale (as of today) systems based on the Raspberry Pi and Arduino, etc.
The computing technology of today is more capable than what was previously available only in the amount of memory and the clock speed. Otherwise, it is essentially no further along than what we had 30, 40 and 50 years ago.
The flip side of this is that the courses available in schools are very limited and are designed (in many cases) for the lowest common denominator. I think the more advanced classes are in things like robotics, etc., which are NOT classified as IT/programming classes.
I had the opportunity some years ago to talk with the senior (grade 12/year 12) programming class at the school that my sons had attended. The attitude of the students was quite varying and (from my perspective) most would have been better off doing some other subject. They were not really into programming. The curriculum for the course (as developed by the State Education Department) was not very interesting. Nor was the teacher trained in programming as a part of his professional development - he did have an interest in it though.
The way that young people get into programming these days is via alternative avenues, in much the same way many of us did when we were younger. Though some of us didn't get access until our undergraduate engineering days.
Edward Snowden at IETF 93 characterized the path from CyberLocalism to CyberTotalism as follows:
“idea of a simple core and smart edges -- that's what we planned for. That's what we wanted. That's what we expected, but what happened in secret, over a very long period of time, was changed to a very dumb edge and a deadly core.”
It's hard to state concrete questions in a field like programming languages. Asking the right concrete question can go a very long way to providing the solution, unlike mathematics where you have questions like "are there infinitely many twin primes?" which are easy to state but incredibly hard to answer. Instead I'll mention the research directions that I think will be important:
#4 and #5 are very general, however (1) where a PL perspective has been tried in those areas it has been quite successful, and there's low hanging fruit left (2) if we apply Hamming's principle then rather than working on e.g. some type system extension of Haskell/ML/Java which will soon be made obsolete by dependent types anyway, why not work on something with lasting impact?
About #2: in particular, is it possible to unify type classes & implicit arguments & implicit type arguments & tactics into a coherent whole. Secondly, is it possible to make this more integrated with the dynamic semantics of the language, instead of the current approach as an extra layer on top of the base language.
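One reason unifying type classes and implicit arguments seems plausible is that type classes already elaborate into ordinary (dictionary) arguments that are merely passed implicitly. A minimal Haskell sketch of that standard elaboration (the `MyShow`/`ShowDict` names are illustrative, not from any library):

```haskell
-- A type class is, operationally, an implicitly passed dictionary.
class MyShow a where
  myShow :: a -> String

instance MyShow Bool where
  myShow b = if b then "true" else "false"

-- Roughly what the compiler elaborates the class into: a record of methods.
data ShowDict a = ShowDict { showIt :: a -> String }

-- The constraint version: the dictionary argument is found by instance search.
greet :: MyShow a => a -> String
greet x = "value: " ++ myShow x

-- The same function with the dictionary made explicit.
greet' :: ShowDict a -> a -> String
greet' d x = "value: " ++ showIt d x
```

The open question in the comment above is whether this implicit-argument mechanism, tactics, and implicit type arguments can all be instances of one resolution mechanism rather than separate layers bolted onto the base language.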
If we apply Hamming's principle then rather than working on e.g. some type system extension of Haskell/ML/Java which will soon be made obsolete by dependent types anyway.
I hear people say that from time to time, and I think it is both antagonistic and wrong. You are in good company, this is what Harper says about GADTs, but it may still be wrong.
I have worked with both dependent systems and what you disparagingly call "some type system extensions of Haskell/ML/Java", and in my experience the latter is not subsumed by dependent types, they tackle problems that dependent type systems have not tackled yet, or are also struggling with.
For example, the idea that ML module systems would be subsumed by dependent records is an oversimplification that omits that the really hard things about module systems (that make them convenient to use) are not realized by dependent records alone. The idea that GADTs are "just dependent elimination" neglects the fact that GADTs at their core are about type equalities, and that the status of equality between types is also one of the big unknowns of dependent type theories (so that would be a case of "clean up your own backyard"). The work on type classes has been massively reused by dependent type systems.
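The point that GADTs are at core about type equalities can be seen in a standard Haskell sketch (a textbook well-typed interpreter; the names are illustrative): each constructor carries an equality about the index, and pattern matching brings it into scope.

```haskell
{-# LANGUAGE GADTs #-}

-- Each constructor records a type equality about `a`:
data Expr a where
  IntE  :: Int  -> Expr Int                        -- carries a ~ Int
  BoolE :: Bool -> Expr Bool                       -- carries a ~ Bool
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Matching on IntE refines `a` to Int in that branch, so each branch
-- may compute at a different concrete type while `eval` stays total in `a`.
eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (If c t e) = if eval c then eval t else eval e
```

It is exactly this local use of `a ~ Int`-style equalities that has no settled analogue in dependent type theories, where the status of type equality is itself an open design question.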
I agree that work on non dependent type systems may often be transferable to dependent type systems. Of course the usefulness of research isn't solely defined by how applicable it is to dependent type systems either. The poll at the top indicates that in 25 years or less we won't be programming any more anyway (very optimistic if you ask me, but certainly qualitatively true), so in the long run almost all research is short term focused.
However, if we assume that usefulness in dependent type systems is the only criterion, which isn't far from the truth if you think that dependent types are around the corner, isn't it then more efficient to investigate the issues directly in a dependently typed setting? E.g. extending dependent records to be powerful enough for modules, or investigating whether inductive families can be encoded in a similar way as GADTs with HoTT's equality?
I don't believe that always picking the most general and difficult setting to conduct one's research is the right way to optimize research resources.
Early in my understanding of the space of type system and programming language designs, I used to be (wrongly) skeptical about the work on meta-programming that added specific type system features to reason about bindings (nominal types, contextual type systems, etc.). In my view, we already had very powerful type system features around the corner (dependent types), the Right Way¹ was obviously to leverage them to provide binding-like types as libraries (which was nicely done in Nicolas Pouillard's thesis, for example).
Later I understood that we need both people trying to (1) express this static reasoning about binders as libraries in an expressive system and people trying to (2) seriously use static disciplines to manipulate syntax with binders and see what worked and what didn't. If task (1) had been completed before (2) started, we could have used said libraries from the start, but (1) was not (and is still not, I'm afraid) usable enough for (2) to be effective in this setting. The work on hard-coding binders in the type system that allowed people to effectively work on (2) in parallel with (1) proved invaluable.
And something else happened that I still find very surprising: it turned out that one line of work that started in the "hardcoding binders" category, namely "nominal X" (for X in types, sets, and type theory), developed into a beautiful theory of its own, drawing fascinating connections with topos theory and, now, cubical type theories.
Bottom line: sometimes taking shortcuts is the right way to move forward in research. And it's hard to predict what outcomes will result from one particular brand of research. If some people are earnestly interested in pursuing a research direction, it's almost always a bad idea (and rather pretentious in any case) to try to stop them.
¹: another possible option would be to find out that dependent type theories could not faithfully encode static reasoning about binders, but that would be problematic. Are our foundational type systems really foundational if they need tweaking already for the first problem-domain we attack? I felt sure at the time that they would not need tweaking -- that was also my opinion when mathematicians started suggesting things (in Munich in 2009) that later became higher inductive types. (They started using our beautiful inductive types a few months/years ago and already want to change them? they should work harder!)
Nominal types don't overlap with dependent types in an immediately clear way, and yet hasn't research on dependent types eaten nominal types, in the sense that nominal types are "just" a higher inductive type that equates alpha-equivalent terms? This turns 'evaluation respects alpha equivalence' from a metatheorem into a theorem inside the language.
I wouldn't want to stop anybody from researching anything, and I am not under the illusion that my idea on what's important is even nearly as likely to be correct as that of an actual PL researcher. This just happened to be the idea that formed in my mind when I read this thread's intro about Hamming's "you and your research".
There has been a singular focus on the "L" in PL, but this has failed to create programming experiences that don't absolutely suck. The big open problem is how to create experiences that don't suck, or rather, admit that this is a problem and focus on it. Programming experiences must be considered holistically.
Another problem: ML is on the rise and will probably supplant programming in 20-40 years (when machines will be better at programming than humans), there is no point in avoiding that. But in the meantime, can we leverage ML into PL to augment how programmers write code? We have a huge corpus of existing code that goes to waste because we cannot figure out how to "reuse" it directly, but this is ideal for training something that can identify when the wheel is being re-invented and guide the programmer to writing a new solution based on the N existing solutions already written. No one knows how to do this yet, though plenty are working on it.
Related to that, PL is currently suffering from a lack of popularity in funding and uptake by new students: they are gravitating towards areas that are "hot" with new technologies that are rapidly progressing (ML...cough). Can we make PL hot again, rather than fixating on the glories of a past that are well...in the past?
So far (~20 replies in), it seems most respondents think programming will remain a widespread human activity for at least 30 years. That's certainly reassuring. Though I think people on LtU are a bit biased on the topic...
Look at programming 25 years ago, and extrapolate that into the future. Machine learning is making progress, but general intelligence is still very very far off, and I think for automatic programming you're better off looking in the automatic theorem proving literature than in the machine learning literature. That area hasn't made an unusual amount of progress. Moore's law is slowing down.
How we are programming will change in 25 years, but we will still be programming if things continue as they have been going.
Question is, will we be programming in much the same way (e.g., text based languages, that anyone who programmed in Fortran/COBOL would recognize as programming languages :-)
I expect the mix of old and new will change, with old coding style hanging on very perniciously for several generations, in some problem categories. A sudden break toward better quality of new coding style seems likely, with the newer end of the spectrum having a large sea change due to some tactic or other making outcomes better, cheaper, and more flexible.
The way old coding style is formulated might change, fitting new tools better. It might look completely different, while being structurally almost the same in terms of semantics. But I expect very slow progress in AI code making judgments about whether an algorithm works, and what mix of tactics will best suit a situation.
It would be much easier to write AI verifying whether rules are followed with utter consistency, as opposed to AI that is clever about making up things from whole cloth. I don't see general intelligence happening in software for a long time, like not in this century. Progress in quality, as opposed to volume of data, seems to be very slow.
We've advanced more in the past 5 years than we did in the last 20. When the rapid progress stops, then we will be able to make better predictions about the future. But if you asked me 10 years ago if self driving cars would be a thing in 2015, I would have said no. Oops.
Automatic theorem proving is the PL mindset's answer to ML, and it is comparatively stagnant. If you believe the future of programs is all logical and well formed by construction, then of course you wouldn't consider ML with their opaque outputs with error margins, and theorem proving would be the only option. Perfect is the enemy of good enough.
How much of the self- in self-driving car is programmed and how much is learned?
The easy parts are programmed, the hard parts are learned. There are just some things that we can't program very well, control systems that have to accommodate arbitrarily complex conditions being one of those things.
Of course, a lot of programming still goes into the learning process.
How do you envision programming via ML? It's clearly an AI-complete problem, unlike self-driving cars, which require almost no intelligence but rather very good perception of the environment around the car. Until we've reached the human-level intelligence AI stage, automatic theorem proving remains the main contender for any significant progress toward automatic programming.
Personally, the thing I like about programming is that the computer does what I tell it to do, rather than it deciding on its own... You know, the reason we prefer them over, well... people.
The computer only does what you tell it to, which is why it is not as effective as...well...people. We increasingly want our software to be smarter than what we can program manually...we want them to be like people in their ability to handle situations that the programmer did not anticipate, which means that more and more systems will have to be machines learned.
Seriously, don't ignore the huge investments that Microsoft, Google, Facebook, Amazon, ... are making in ML right now, and what they are using them for. PL researchers must at least have an idea of what their competition is up to.
I am deeply interested in ML (as I was in AI when I was a kid). I am just not sure if in the present context (future of programming) it is really as relevant as you suggest. I used to think as you do, but this morning I am in a contrarian mood.
Put differently: will next generation learning systems be programmed? And if so, will they be programmed using similar tools as today? I think the answer is Yes to both questions for the time period I am comfortable forecasting.
Does the market for new compilers alone justify graduating thousands of programmers a year?
Not sure I follow.
In the future, just because some programming is still involved doesn't mean programming is really still a thing that more than a few people do. And eventually, even building training tech will rely on....trained systems.
just because some programming is still involved doesn't mean programming is really still a thing that more than a few people do
Sure, that might happen. Question is when, so we can decide how much to invest in programming related technologies.
30 years I hope, I can live with 20 for my own career. But who really knows?
Supposedly, Engelbart's mother of all demos back in '68 was not well received by many academics because they thought real AI was just around the corner...what good is a mouse and graphical display when the computer will just do everything for you! However, they were off by about 100 years.
Until then, I do see a nice future for PL being integrated more closely with ML (super smart code completion), and maybe debugging. There are plenty of midpoints between completely manual and completely automated programming to consider! If I were a new student in PL today, I would look that way, but I've already sunk too much effort into my existing experience.
There are really two questions. One about where to invest time if you are a computer scientist, the other about where to spend money if you are a commercial entity. If you see programming becoming obsolete in fifty years you won't give lots of money to projects that will take that long to reach their goals (not that anyone funds things with that time frame).
Right now, you put your money where you see visible advancement, so that means ML and not PL.
The biggest problem facing PL today is the lack of meaningful progress. We have become the new theory field, and really...we don't want to be there.
However, they were off by about 100 years.
Or 200 years ;-)
p.s. Where's my flying car?
… "The Future" by the Tinklers running through my head. Thanks for that.
Oddly, though, although it covers flying cars, meals in pills, esperanto and living on the moon, it doesn't mention better programming languages. It should.
I am on the pessimistic side. I strongly suspect we'll still be debugging stack corruptions and off-by-one loop errors way into the 22nd century.
"FOR statement considered harmful"
When I was a small and measly child of about 11 years I had to undergo the British tradition of a "career advice day". This involves filling out a wildly pseudo-scientific questionnaire about personal strengths and weaknesses. Character-building stuff. After analysis (by an expert system no less, anyone remember those?) we had an interview with a "career guidance counsellor".
Expert: So the computer suggests the garbage disposal industry.
Me: That's nice, I'm going to be a programmer.
Expert (muffling laughter): Of course you are, what makes you think that you are cut out for that line of work and yet our expert system has missed it?
Me: Because I already know languages X,Y and Z and I've written programs to do...
Expert (with a look of sincere sadness in his eyes): That's great kid, but you may as well forget about all of that now. The computers will be programming themselves by the time you enter the job market. Those things you know are only 4th generation languages, and they're just getting the kinks out of 5th generation languages right now. Sorry kid, do you know much about handling garbage?
Sadly all true, and barely paraphrased at all. The joy of an English comprehensive education, that. So if I am a one trick pony, might be nice to ride it for another 30 years :)
That sounds horrible even by American standards. No wonder we revolted.
Funny (and heart breaking at the same time).
BTW, I always had a "soft spot" for the language generation thing (i.e. I hate it), which seemed like a non-rigorous but meaningful scheme up until the fifth-generation thing with the famous Japanese project etc.
I told a fellow undergrad I meant to specialize in programming language design, and he told me there was no point because in ten or twenty years all computers would be optical computers that program themselves. I countered that if the computers were programming themselves they'd want a really good programming language to do it in. (I think I've mentioned that incident before on LtU; but, well, it's awfully relevant here.)
I suspect programming will change in different ways than we expect it to. Yet another application for the comment attributed to Mark Twain, "History doesn't repeat itself, but it rhymes."
We'll know computers are intelligent when they start programming themselves in functional languages. Surely it will take them a few more decades until they grok monads, but that will be after the singularity for sure.
That not even super-intelligent computers will master category theory...
the super-intelligent computers, being on the other side of the singularity, would be the ones to master category theory. (The relationship between category theory and god-like intelligence puts an interesting twist on the adjunction between mathematics and religion.)
Isn't it well known that category theory mavens are travelers from the future trying to lead us astray?
I think it's a bit arrogant to think that they will need to work with monads. They will be much better at reasoning about state than us, and the "programs" that they write will be completely opaque to us.
Truthfully, I'm skeptical about the whole super-intelligence thing. It's certainly possible to be more intelligent (in one or another way) than the average human, since some humans are; but I think there are some fundamental limitations on intelligence. I don't know quite what they are, of course; but there seem to me to be some actual, bona fide trade-offs involved. Albert Einstein's intelligence was of a different, er, flavor than Leonhard Euler's; both first-rate of their respective kinds, but I don't think either of them could have done what the other did. And at a less elite level, I think there's a functional reason why human short term memory isn't much bigger, or much smaller, than it is. So, maybe they'll be better at reasoning about state than we are; but then again, maybe that could only be achieved at the price of, say, lesser general intelligence. I suspect there's some kind of trade-offs involved, and sooner or later perhaps someone will figure out what those trade-offs are.
It doesn't take a genius to write code. Actually, much of the code written today is sort of routine and done in a mostly automatic (to the human writing it) manner. If it's just a matter of "finding" code that solves some problem through stochastic gradient descent, then (a) the computer will be able to do that without achieving anything close to real "intelligence" and (b) the code that it comes up with will be completely bizarre and non-modular from our point of view; it won't be an elegant Haskell solution.
If it's just a matter of "finding" code that solves some problem...
That's a big "if". I'm not so sure programming is such an easy problem. Various tasks that were expected to be "easy" now look likely to require full sapience (e.g., for translation between natural languages, there seems no substitute for actually knowing what the text means). Questions are harder than answers, so that in this case, knowing what problem to solve is a big part of the problem.
We must somehow make code differentiable, a challenge to be sure, but not an impossible one.
The Skype Translator today is actually quite useful...not as good as having a human translator but good enough for many tasks. It turns out that, no, you really don't have to grok meaning to get 80-90% of the way there, and we might find that we can even get almost 100% of the way there without intelligence.
Translating a human language to another is a very different problem than translating some desiderata to a program that implements them. Two human languages are very similar. The words have to be translated, and the order changed a bit, then you're done. This is not at all the case for programming.
I only mentioned it because parent did, I happen to know a researcher working on skype translator so it gets drilled into me a lot.
Programming is definitely different from machine translation and I hope I didn't appear to suggest otherwise.
My experience of translation is mainly from the context of verifying Wikinews synthesis articles. To verify facts for a news article from reliable sources in another language you need a whole lot more than 80–90% comprehension. You need to know what everything means, especially including nuances. In my experience, working out a good translation for a single sentence can be extremely difficult on those occasions where it's possible, and there are nuances and even first-order meanings you'd never get without advice from a native speaker.
Although superficially I'd of course agree that natlang translation and programming are different, I suspect that ultimately they're akin in that you need full general-purpose intelligence for both. I don't think programming can ever be fully reduced to selecting programs (unless you simply give up on doing things right), because the act of selecting can never be separated from thinking about what you're doing.
DOCTOR: All elephants are pink. Nellie is an elephant, therefore Nellie is pink. Logical?
DAVROS: Perfectly.
DOCTOR: You know what a human would say to that?
DAVROS: What?
TYSSAN: Elephants aren't pink.
Not only are elephants not pink, they don't play chess either. Neither task requires full blown intelligence, because they are much more about search and pattern matching/recognition than we consciously think they are. The logicians are going to lose PL like they lost AI, because elephants don't play chess, and it turns out programmers don't do formal proofs either.
I'm inclined to agree logicians will lose PL, though (as in the quoted exchange) I see it as because the task requires intelligence.
Long ago, as I was learning some of my first few programming languages (think Turbo Pascal 2.0, MBASIC), I used to find it interesting to write programs to play battleship against the user. I didn't try to optimize the computer's tactics because I was more interested in comparing the modularity of the program in different languages; the computer would place its ships randomly (subject only to the constraint they couldn't overlap), and fire randomly in unexplored areas unless it was following up unresolved information from a hit or near miss. Statistically, any human would mostly win against the computer because their tactics would be better than random guessing. Playing against the computer was boring. It didn't take much consideration to see that one could probably program the computer with tactics that would be statistically unbeatable, and it seemed clear that that too would be boring to play against. This led to a major aha moment for me: the thing that makes playing battleship interesting (to the extent that it is) is interacting with the other human player. After that long-ago insight, I was much less interested in later years to hear about a computer beating a human master at chess, or a computer beating human players at Jeopardy! (although that one may be somewhat interesting for what it says about Jeopardy!, in that apparently the appearance of knowledge from doing well in that game is somewhat illusory).
Programming, I suggest, is more like battleship than like chess.
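The computer's battleship tactic described above (follow up near a hit, otherwise fire blindly at an unexplored cell) can be sketched in a few lines. This is a toy reconstruction, not the original Turbo Pascal/MBASIC code; a seed parameter stands in for the random choice, and all names are illustrative:

```haskell
type Cell = (Int, Int)

-- Fire at a cell adjacent to a previous hit if one is still unexplored;
-- otherwise pick a pseudo-random unexplored cell using the seed.
nextShot :: Int -> [Cell] -> [Cell] -> Cell
nextShot seed unexplored hits =
  case [c | h <- hits, c <- neighbours h, c `elem` unexplored] of
    (c:_) -> c                                            -- follow up a hit
    []    -> unexplored !! (seed `mod` length unexplored) -- blind shot

neighbours :: Cell -> [Cell]
neighbours (x, y) = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
```

Even upgraded with statistically optimal targeting, an opponent like this stays boring to play against, which was the point: the interest in battleship comes from the other human.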
Programming is neither like battleship nor chess. It is a lot like medicine, where we regurgitate from a huge amount of knowledge and experience to solve similar problems over and over again. There is no logic or thought out strategy to it, just experience and remembering what incantations were used previously, or if something new, googling up a new solution. To top that off, we spend much of our time debugging code and tweaking it until it is good enough.
Programming is more like Jeopardy.
One does wonder how the different ways people's minds work are likely to affect the way they perceive a complex task, such as programming.
Well, it's good that at least we've reduced our differences of perception about programming to something really fundamental and important, like competitive sports. <is there an emoticon for struggling to keep a straight face?>
[Note: I do find these analogies helpful in understanding where different perspectives are coming from; thanks. Though I'd want a human being involved in diagnosing me.]
Actually, I was talking about this with an ML researcher. He is convinced that we are really just pattern matchers and searchers, that what we see as carefully orchestrated thinking is just lots of fuzzy matching. The only difference is scale; even naive ML methods work when data and compute power are vast enough.
Interesting. I, on the other hand, don't think scale can ever achieve something qualitatively akin to sapience; I see both as useful but for different purposes.
Those who study this don't share that opinion, mostly. But if you hold that belief, then it makes sense why one would think a computer could never program, win at chess, or drive a car by itself.
What's that about chess? I never said that. I said I wasn't particularly interested in a computer's ability to beat a human at chess. I never claimed it couldn't.
Btw, although I'm certainly interested to hear what those in the field believe, I note on the skeptical side that those who choose to specialize in it would naturally self-select to be people enthusiastic about the technology.
I've heard the same from neurobiologists: after a few decades they have not found any magic located in the brain. It is only 4 or 5 simple statistical functions replicated 100 billion times. It is quite sad that the scruffy approach will probably win eventually. Where is the interesting pattern for us to discern in that result?
On the other hand, people who spend a long time trying unsuccessfully to cope with a very hard problem may tend to ascribe the solution to something they don't have access to atm. Such as, truly massive scale. Likely, once they have the scale there'll be a controversy over whether or not they've solved the problem, leading to some researchers — but not others — looking elsewhere for the solution.
It seems like the classic confusion between whether the scale is a necessary or a sufficient condition. I imagine that scale alone will not lead a system to program itself. But it does seem plausible that evolution has done a reasonable job of optimising the software to reduce the hardware resources.
Scale is probably integral to the way the human mind achieves sapience, yeah. May be necessary to all ways of achieving it, though I wouldn't rule out some other means being available. As for nature optimizing, though, I'm not so sure. Optimization within the close neighborhood in search space, sure. But there's been a lot of difficulty over the question of why only homo sapiens developed sapience. It's downright embarrassing to folks trying to be humble about our place in the world by ridiculing nineteenth century thinkers who viewed us as the pinnacle toward which evolution climbed. Emphasis has been placed on the need to find a continuous path of gradual development by which sapience arose, and that makes the uniqueness thing more perplexing. I suspect the explanation is that sapience, while it's favored by evolution once it happens, can only be reached by a small (and perhaps unlikely) step from a certain direction, and it's very uncommon for a species to be situated at the point from which that small step is possible. So, we may be optimized within this little evolutionary pocket we've wandered into, but it was almost impossible to find our way into here, and if there are other greatly different ways to achieve sapience there's a good chance they're not reachable by a small step from anywhere evolution would ever get to on its own.
We are not special. What we call sapience is just the result of better brain capacity and a capability for language. We were first, but other animals can evolve there, and some are quite close already; it's just a matter of a few tens of thousands of years, which seems like an eternity to us but is just a blink in the history of life.
It is the "we are special" mindset that keeps getting destroyed by science. We are not special. We are just capable.
I don't claim we're special, I claim we're the beneficiaries of a wildly unlikely accident.
The attractiveness of the "we are not special" mantra is what makes our uniqueness so embarrassing. The idea that
We were first, but other animals can evolve there, and some are quite close already; it's just a matter of a few tens of thousands of years, which seems like an eternity to us but is just a blink in the history of life.
seems to me to weaken under scrutiny, exactly because a few tens of thousands of years doesn't seem like an eternity to me, rather it seems like (as you aptly put it) a blink in the history of life. You're describing a scenario in which life spends billions of years evolving on this planet without developing a sapient species, then suddenly a bunch of species are all developing it practically simultaneously. (I set aside, for now, the variant scenario that sapience has happened here many times before.) In the alternative scenario I've described, advent of sapience is a wildly unlikely event, requiring in effect that a species follow an evolutionary path that navigates a maze (how complicated a maze is unclear, but sufficient to make the event wildly unlikely); in this scenario, any perception that other species are quite close to achieving sapience is merely a result of underestimating what it takes to evolve sapience. If we were altogether wiped off the face of the planet (which might actually be quite difficult to accomplish as the whole species may be a lot harder to get rid of than our current civilization), I suspect it could be some hundreds of millions of years before another sapient species emerged. Or longer.
We are not more evolved than other animals, let's just get rid of that thought right away. We are adapted for our niche, and that includes a set of features unique to us, but all animals have a set of features unique to them! We are no better than an elephant, dolphin, parrot, and arguably a colony of ants that exhibit intelligence collectively if not individually.
I said other animals could evolve our abilities, not that they will; that is not how evolution works. We didn't evolve a feature called sapience overnight, and it was hardly any more unlikely than any other feature that has evolved. Life goes on around us, and is just different from what we recognize as "sapience", not any less special and unique! What that means is that any one feature is no harder to replicate than another; we just need enough hardware and time to support that feature.
We are not more evolved than other animals, let's just get rid of that thought right away.
You're trying to get rid of something that, not only have I never said, but I've repeatedly refuted. It seems we've a total breakdown of communication here.
I didn't say you said that. I just wanted to eliminate it from the discussion and make my position very clear.
Perhaps a confusing factor here is that, on careful reflection, while I don't consider us "special", there is a sense in which I do consider sapience special. I'd better expand on that. Each species you mentioned has something "unique" about it. But of these uniquenesses, sapience is qualitatively different from all the others.

Reasoning makes it possible to discover strategies and tactics that evolution itself cannot directly access. It's not just a niche, it's a meta-niche; we can reason out new niches for ourselves that don't have to be reachable by gradual migration from anything else viable. I mean, of course, gradual genetic migration; evolution is to some extent able to handle abrupt jumps in phenotype. The phenotypical flexibility of genes has been remarked on, in recent years at least — a small change in genes producing a profound change in phenotype, with at least the seeming of an expanding genetic database of phenotypical tricks that needn't be in use by current species. But the proximate driving force is still random; the blind watchmaker is still blind.

Reasoning has a different shape; search-tree pruning is focused locally rather than globally (sorry, I've never tried to put this distinction into words before). One person can potentially reason out, in what might as well be zero time from an evolutionary perspective, a solution that evolution couldn't ever arrive at. There's no escape by saying "evolution led to sapience, so sapience is just a way for evolution to access those solutions", either, because that doesn't diminish the qualitative uniqueness of sapience unless there's also some other way for evolution to access those solutions. If sapience is the way for evolution to access those solutions, it's still singular.
Sapience is as different as the other traits are different; each of them leverages its hardware budget for different activities and optimizations. We are quite self-centered and think our reasoning capabilities put us above the rest of the animals, but in reality reasoning just frames us with respect to each other. Our technology is great, sure, but ants are still outcompeting us. Other animals build things, use tools, and communicate using language in ways that we just can't comprehend. We are just slightly different apes who can talk.
So, with that being said, if you believe human reasoning was a grand, unusually unlikely accident, then it could be a very long time before computers can emulate and surpass it. If you believe that sapience is as likely as any other feature, then it is very clear we will get there, provided we can reach the points before it that involve less hardware (doing an ant brain, a rat brain, a dog brain puts us on the way).
Still, automatic programming doesn't require sapience, so the differences in belief are quite moot to the question at hand.
Sapience is as different as the others are different
My point was that it's different in a different way than the others; I did then elaborate some on why/how.
We [...] think our reasoning capabilities put us above the rest of the animals
I hope you're not trying to put me in that box. I don't see that "above" is a well-defined concept in this context.
Other animals [...] communicate using language in ways that we just can't comprehend.
There's reason to think this is not so. Have you read Deacon's The Symbolic Species?
Still, automatic programming doesn't require sapience, so the differences in belief are quite moot to the question at hand.
Seems to me these differences of belief bear on whether or not it's true that automatic programming doesn't require sapience. Making them quite salient to the question at hand. It seems that if one believes sapience is just a large-scale application of unremarkable elementary search tactics, then one is likely also to believe it is unimportant for tasks to which it is applied, whereas if one believes there's a trick to it, something that makes it more than the sum of its parts, then one is likely also to believe it brings a qualitatively different perspective to the tasks to which it is applied. (The latter view is more subtle than one technique doing tasks "better" than the other; brute force is likely to excel quantitatively on its favored ground, sapience qualitatively on its favored ground.)
You're describing a scenario in which life spends billions of years evolving on this planet without developing a sapient species, then suddenly a bunch of species are all developing it practically simultaneously.
That's the way evolution works. Major advances don't just happen in one place; the machinery for them to happen generalises to the point where it's not only inevitable that it will happen once, but more than probable that it will happen over, and over again.
I very much doubt we're the only sapient species to have developed. We may be the first where being sapient has been enough of an advantage to help our survival. We may be the last, but that'll be down to our behaviour, not evolution.
That's the way evolution works. Major advances don't just happen in one place
Sometimes; the definition of "major advance" is tricky. Who says sapience is a major advance? A sufficiently specific, idiosyncratic development doesn't have to be likely; and if it's located up a sort of cul-de-sac in evolutionary search space, and its benefits don't accrue from nearby genetic configurations, then that development could in fact be something wildly unlikely, happening only very rarely and by accident.
I very much doubt we're the only sapient species to have developed. We may be the first where being sapient has been enough of an advantage to help our survival. We may be the last, but that'll be down to our behaviour, not evolution.
Heh. Re our behavior, there's nothing distinctive about it; we may be individually morally outraged at some of it, but the ability to take that individual perspective is a luxury of sapience. Standing back from the individual perspective, our collective behavior is what it evolved to be, and I doubt another sapient species would behave significantly differently.
The possibility of sapience having evolved before is rather interesting. There's the suspicion that it's enough of an evolutionary advantage that once one gets into the sapient mode, it's exceedingly likely to lead eventually to a neolithic revolution and subsequent explosive memetic evolution to a high-tech civilization. To make the repeated emergence of sapience work as a theory, one ought to explain why one chooses to reject that suspicion.
[...] and I doubt another sapient species would behave significantly differently.
Neanderthals were, by all accounts, less aggressive than us. Had we not evolved, their behaviour likely would have been pretty different.
I also think your statement is easily falsified given how significantly even human cultures differ. I doubt very much that the Native Americans would have ever developed like the Europeans or the Japanese.
The Europeans ran roughshod over the native Americans. That's not just some random coincidence, it's natural selection at work. The more aggressive culture dominated. The final chapter has not yet been written, of course, but I suggest the brutality in the earlier chapters may at least have been necessary plot development.
(Btw, re how much we actually know about the Neanderthals, have you read David Macaulay's Motel of the Mysteries?)
...probably indicates you said something rather interesting in this sentence:
You're describing a scenario in which life spends billions of years evolving on this planet without developing a sapient species, then suddenly a bunch of species are all developing it practically simultaneously.
Actually I would only find this unlikely if I expected the feedback events in the process to be statistically independent from one another. If I did not know that then I would suspect that there was a catalyst prompting a phase transition from one sort of population to another.
Sure. It could be that this batch of life is just on the verge of blooming into sapience, and we happened to be first. My intuition suggests this probably isn't so, because I'm inclined to doubt some element necessary to sapience is now cropping up broadly across diverse species in the biosphere that hasn't already been around for some time. But I do acknowledge the possibility; mammals are a fairly recent development, after all. In which case, if we disappeared altogether, another sapient species might come along in perhaps a few million years rather than hundreds of times that long. (Keeping in mind that we were probably running the wetware for a couple of million years or more before our memes underwent their neolithic phase-change.)
This isn't an easily tested hypothesis. Even if it were common, any better adapted sapient species would quite possibly (even likely!) kill off any other sapient species, just like we did with the Neanderthals, so you're inferring a probability given evidence subject to significant bias.
Yes, "claim" is an overly strong term. The sort of shorthand terminology one is apt to somewhat inattentively drop into as a timesaver, and then spend more time defusing later than one saved in the first place. I'm suggesting a hypothesis, even one I'm favorably inclined toward; one considers such a thing along with alternative hypotheses, comparing and contrasting, considering differences in implications and predictions, etc.
It is a lot like medicine, where we regurgitate from a huge amount of knowledge and experience to solve similar problems over and over again.
Well, and everything else at some level. Trial, error, experience, replicated success. We see it more purely in fields where we understand less, like medicine.
Math and theory are really exceptional in this regard. It will be a very very long time until a computer is creative enough to come up with new theory and math. It's just that for most programming, that isn't required.
but I'm unimpressed with mathematicians pontificating on what is good programming. Programming is not mathematics.
Sean McDirmid wrote:
There is no logic or thought out strategy to [programming], just experience and remembering what incantations were used previously, or if something new, googling up a new solution. To top that off, we spend much of our time debugging code and tweaking it until it is good enough.
Speak for yourself, pal.
You just think you are in control and not just pattern matching very quickly based on experience and knowledge. Well, that is the general illusion of sapience.
We want computers to do what I mean, not what I say. We don't want them to go beyond what we want. They must always show us in the best light, and when anything arises that causes that image to fracture, they must provide an avenue to direct the blame elsewhere.
Seriously, the only reason these companies are making their investments is to increase their market share, and hence the total number of currency units they control and their control of their environment (political, financial, etc.).
Any benefits that accrue to the rest of us are incidental.
ML != AI; the notion of AI-complete doesn't really apply (if it ever did, even to classical AI).
All we have to do is figure out how to make code differentiable and annotate our corpus correctly with the language we want to describe our problems in, then it's just a matter of using one of many off the shelf algorithms. Now it is true that we are horrible at describing problems, but being able to evaluate a solution to a bad problem description allows the user to iterate on the problem description as well (could you call that programming? Perhaps, but programming is very much a task of just figuring out what you really want!).
We already know how to get character level DNNs to write plausible code without having any goal at all, theorem provers have no such capability to learn from a huge code base...they are similar to constructed AI with a fancy reasoning algorithm. Automatic theorem proving also has the drawback that you have to express your problems formally, which is of course ludicrous since we often aren't even very sure of the problems we are trying to solve.
I guess that's a good test of general AI: translate a requirements document into code that implements it.
But a programmer would translate it with bias, filling in details that weren't specified. Writing the requirements document itself is half the battle! With ML, people will just half-ass the requirements, see what the computer comes up with, refine it, see what the computer comes up with; wash, rinse, repeat. It is the ultimate agile, and why human programmers won't be able to compete at that point.
The point of self driving cars is not to free people from driving, but to optimize the use of expensive infrastructure (roads). People often just think the former and miss the big picture, the real disruption will be that in a city like Beijing, we will actually be able to breathe again and get to work on time. Likewise, the point of automated programming is not to eliminate programmers, but to allow stakeholders to actually get what they wanted rather than what they had to settle for. It won't replace programmers so much as it will fundamentally change our relationship with software.
With ~60 votes, ~70% think we will be programming in much the same way in 30 years. Well, if that's what you think then investing in programming language research makes sense!
Can we take a machine learning system (with deep learning, of course...) and have it learn to behave by observing a prototype or older version, thus eliminating the need to further maintain the original code?
With 100 votes in, 71% think programming will remain the same sort of activity it currently is for at least 30 years. Pretty complacent, if you ask me. This is either totally myopic or entirely realistic. Only time will tell.
An interesting case study in the future of programming is now unfolding on the wikimedia sister projects (the most familiar of them being Wikipedia).
Wikis took off in the first place because wiki markup is easy to use and easy to learn — when you edit a wiki page, if you're doing something very simple you probably either don't need to know any markup techniques, or if you do there's probably an example right there that you can imitate; and as you edit such code routinely, you're exposed to nearby examples of other simple markup techniques, so that by the time you want to use them you probably already know how to do them. There's a rudimentary mechanism provided for extending the markup language: "template" pages, which are naive macros. I disapprove of naive macros, of course; I maintain that not only do they not provide good abstractive power, but they actively interfere with later attempts to improve abstractive power; but one thing that can be said for wiki templates is that they provide a continuous learning curve, following within the pattern of wiki markup generally: users can gradually pick it up by osmosis.
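The naive-macro character of wiki templates can be illustrated with a small sketch. This is a hedged, simplified model in Python, not MediaWiki's actual expansion algorithm; the template names and parameter syntax here are only illustrative. The point is that templates expand by pure textual substitution, repeated until no template calls remain, with no scoping or hygiene of any kind.

```python
import re

# Illustrative "template pages": {{{1}}}, {{{2}}} are positional
# parameters, loosely in the MediaWiki style. Purely hypothetical names.
TEMPLATES = {
    "bold": "'''{{{1}}}'''",
    "link": "[[{{{1}}}|{{{2}}}]]",
}

def expand(text):
    """Naive macro expansion: textual substitution, no scoping."""
    def sub(match):
        name, *args = match.group(1).split("|")
        body = TEMPLATES[name]
        for i, arg in enumerate(args, start=1):
            body = body.replace("{{{%d}}}" % i, arg)
        return body
    # Repeat until no template calls remain, since an expansion may
    # itself produce further template calls.
    while re.search(r"\{\{([^{}]+)\}\}", text):
        text = re.sub(r"\{\{([^{}]+)\}\}", sub, text)
    return text

assert expand("{{bold|hi}}") == "'''hi'''"
assert expand("{{link|Ada Lovelace|Ada}}") == "[[Ada Lovelace|Ada]]"
```

The appeal for wiki users is visible here: a template call looks just like the markup around it, so it can be learned by imitation. So is the weakness: nothing but string surgery is happening, which is why such systems resist later attempts at real abstraction.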
A fundamentally conflicting model of how wikis should work is being pushed, at great expense, by the WMF (Wikimedia Foundation, the nonprofit that operates the servers for Wikipedia and its sister projects). For years they've been laboring to reduce the ability of volunteer contributors to access the levels at which wikis are controlled. They're imposing a pseudo-WYSIWYG interface on the wikis, hiding wiki markup from users and thus depriving them of the constant exposure that has traditionally provided a continuous path to increasing technical expertise. There's now an extension that allows template pages to transform code, under the hood, using Lua code; I acknowledge that Lua is a nifty little language, strikingly easy to use for what it's good for — but in a wiki context, procedural programming has a terrific impedance mismatch with wiki markup, very high administrative overhead alien to wikis, and the effect is to further block ordinary wiki users from any continuous path of self-improvement. All this seems to me to be a classic instance of Conway's law: a centralized organization arranges things to maximize central control of software, strongly favoring restriction of programming to an elite, preferably of people in their employ. Which, of course, fundamentally undermines the core reasons for success of a movement based on grass-roots volunteerism.
There's an alternative model for moving forward, in which facilities are wiki-markup-driven; it's all about empowering the users rather than disempowering them. There are editor designs around (the comments systems on some blogs, iirc) that combine writing markup with instantly seeing its visual consequences, which might allow users to enjoy the sort of editing aids the Foundation wants without losing the continuous learning path of wiki markup. Trickier but perhaps also possible, wiki markup itself could be carefully augmented in a way that lets users do easily within wiki markup, and within the continuous learning path, what the centralized model would shunt off discontinuously into procedural languages such as Lua. It's a big question whether that can happen, though, because the Foundation generally treats wiki markup as an evil to be eliminated. Go figure.
I've put some deep thought into what programming would look like if it were done the way wikis grow, by some form of radically open crowd-sourcing. It's safe to say it would require some deep evolution of the wiki model, and fundamental shifts of emphasis in the programming process. If successful, it might produce something that would startle both the current Wikipedian community and the current professional programming community. And it might be very hard to say whether or not the result was "programming" in the traditional sense.
For years they've been laboring to reduce the ability of volunteer contributors to access the levels at which wikis are controlled.
I disagree somewhat. I read that the WMF pushes WYSIWYG on its users because the wiki ended up attracting only "geeks" and rejecting other users, while the WMF wants Wikipedia to be for all humanity — and I approve of the latter goal. I agree some non-programmers will slowly learn wiki markup, but for many others markup will be a barrier. That's why the WMF wanted WYSIWYG by default (having it only for logged-in users seems a bad idea, for this goal).
I like your ideas, but given how bad humankind is at technical education*, I think WYSIWYG-by-default is more important if you want user diversity. If you can reconcile additional goals with that, great; but I'm skeptical. Of course, one could debate (elsewhere?) how much diversity Wikipedia should actually aim for.
*On technical education: I take the discredited "The camel has two humps" as evidence on how bad programming education often is, together with the amount of work it takes for a proper introduction to programming using, say, How to Design Programs. OTOH, Wiki *is* easier than programming, I just wonder to which extent.
What I always find remarkable is that we consider our arcane icons and conventions intuitive simply because we were indoctrinated to consider them WYSIWYG. Read some Iverson, damn you! (This outburst is not directed at the post above!)
The WMF kid themselves into believing their plan is justified based on a fictional population of potential users. For the success of the projects, the people they actually have to attract are in fact geeks, because those are the people who will go on to become more serious contributors. A wiki cannot actually be supported entirely, nor even mainly, by drive-by contributions, important though drive-bys are to recruitment and breadth of perspective. (It's also relevant to the character of the folks they attract, that students are key to the community, because they have the time to get drawn in and are at an impressionable age so that, once drawn in, some of them will continue contributing after they graduate.) Ease-of-use matters anyway; but, just as I don't think the WMF has a realistic model of their potential contributors, I don't think they really understand geeks, either, nor the nature of the technical activities needed on the projects. (I wasn't sure at first if that was getting off topic, but on reflection I think it's actually still precariously on-topic.)
Re technical education, I note that instruction manuals are a poor way to pass on knowledge (not to belittle the importance of literate versus oral society). Nobody reads them. It's long since become a joke; "when all else fails, read the instructions"; and, on the other side of the expertise threshold, "RTFM". Wikis, such as Wikipedia, have gone in very heavily for instructional documents describing their internal function, for the simple reason that creating documents is what wikis are especially good at. An alternative is to tweak wiki markup, just slightly, by adding a few simple-but-potent primitives for interactivity. Then the wiki community, working entirely within wiki markup, can build interactive "wizards" for performing various tasks on-wiki that require expertise. Crowd-sourcing the development of such "software" is really the only way to do it, because the wiki community are the only ones who know what's needed (the WMF, and the programmers it employs, can never efficiently acquire that knowledge by liaising with the community, just as they can't take over writing the wiki content themselves and merely liaise with the community to learn what content is wanted). At that point, wikis cease to be merely documents and become something like "software", and the process by which the "software" grows is downright alien to a professional programmer, to the point where it's hard even to recognize what is happening as "programming".
Wikimedia Foundation is fiddling with formatting instead of addressing Wikipedia's more fundamental censorship and exclusion issues. For example, Wikipedia Administrators are now busy trying to censor mention of Inconsistency Robustness and exclude its discussion.
Better technology for "programming" Wikipedia could help eliminate censorship and exclusion to make it a more inviting place and thereby expand the community.
You might also be interested in research on end-user programming — in particular the surprise-explain-reward paradigm — see the new forum topic.
This wizard idea sounds interesting, and could enable such interactivity.
On the WMF, you have a self-fulfilling prophecy: let "geeks" build Wikipedia, and only "geeks" will contribute to the resulting community. But lots of evidence suggests (though not conclusively) that geek communities, left to themselves, will end up discouraging people who could give useful and important contributions*. In particular, Wikipedia also needs, say, archaeology geeks, who might not be computer geeks (so that markup is an obstacle) but could still be contributors. How sure are you that such people don't exist?
*For instance, people will flame you to a crisp if you point out (intelligently) that you can't select smart programmers by checking if they are fluent with shell scripting. Here, at least, most will agree there are legitimate complaints against bash. But I still recommend Philip Guo's excellent essay, or even better his The Two Cultures of Computing.
Not all geeks are "computer geeks". But they do have properties in common. I suspect that while it's a losing strategy to try to make wikis appeal to non-geeks, one can make them appeal to a broader class of geek — or, viewing the same process from a different angle, to make computer geekdom appealing to a broader class of geeks. I'm reminded of a report from someone-or-other in recent times, that (iirc) they'd been testing out the WMF's WYSIWYG interface on a bunch of librarians with no wiki experience, and had trouble getting the librarians to use the WYSIWYG interface because the librarians found wiki markup itself so easy to use (to the librarians' surprise because they'd been told wiki markup is hard to use). I do agree that communities tend to close themselves to outsiders; a community really has to be designed (whether by self- or external will) to remain open.
It seems the two-fold distinction in the essay should be closely related to the one Ehud makes below.
users with guis vs. unix with text programs, pipes and byte streams...
I think it's high time we admitted a few display technologies into programming.
People are already gravitating toward xml as a gui description language (all other approaches are disappearing).
We might as well have our source in xml with guis allowing us to embed pictures and controls.
But as they're saying in the feedback thread, having knobs sounds more powerful; something more than static description languages is called for.
the '2 cultures' to me is more an indictment of all our approaches to programming than anything else. We need to be in the Star Trek future. So I would urge everybody to at least profusely apologize to all those new kids in class that our environments are not more consumer-esque. HyperCard, take me away!
Philip Guo's essay on two cultures of computing is a contrast of user culture (i.e. glossy iphone graphics) and programmer culture (i.e. shell command man pages, etc). An older traditional meaning of the phrase "two cultures" is a reference to C.P. Snow's Two Cultures, which was basically an art-vs-tech opposition, characterized by humanities degrees versus science (stem) degrees in colleges. (As a literary topos, it was repeated tediously in the 70's and 80's, until eternal September in the 90's helped wash away any persistent historical perspective in internet venues.) It was originally a "you don't speak my language" meme, caused by the different world views, which Guo is likely re-purposing to contrast graphics vs text in user interfaces.
To get a less text-oriented programming interface only requires pursuing that as an objective, making that "what you want" in the development phase of creating an IDE (integrated development environment). This is an example of what knob tuning is unlikely to manage, because someone must program knobs to encompass the scope of potential graphics layout, behavior, and behavior binding. Of course, once you have some parameterized graphics knobs, you can probably tune them within an expected design range. However, graphical programming languages are not very dense in terms of amount of semantics you can pack within a bit of screen real estate. (I worked on one in '93, running on NeXT stations, but I was struck by how few parse tree nodes you could pack into editable areas.)
It would not be terribly hard to make something like HyperCard run in a browser. But because the number of ways you can do it is so very large, you can expect several competing intermediary design goals to emerge in the form of inter-developer wars of an almost religious nature, about such issues as language, thin-vs-thick, persistence, messaging, OS integration, and processing models like how threads and fibers are used, and even whether sync or async is the default orientation. If you made them all split up into different groups, with a little competition and sharing of components, at least one ought to survive. But not if they waste all their time fighting.
As I see it, the single biggest force preventing such efforts at cooperative invention is a habit and practice of competition, as learned and encouraged in school systems. There's a correlation between folks talented enough to be good programmers and folks who do well in school systems, and cut-throat competition might be instilled at the same time coding is learned. As you move into the business world, it only gets worse, because any product that is not the one you are trying to sell now (even if it's an old product you want to mothball) can only shrink your potential market. So a there-can-be-only-one incentive is definitely present, rewarding any zero-sum tactic to undermine competitors, even your own models from last year.
That said (and I hope this sounds funny), I find xml a horrible kludge that needs to die. Since that's not what I imagined using, it could only get in our way, sapping our precious resources. (Insert Dr. Strangelove reference here.)
On a serious note, you can treat an app like a browser as a dumb terminal, where your programming environment runs in another process (or group of processes). Then you can have a mixture of text and graphics interfaces, depending on what you want. A library of standard stuff can be used to bootstrap newcomers until they learn enough to replace parts that start to constrain them.
Two views:
It easily takes 10-20 years for PL research to go mainstream. Most of the design research today adds types to the same old code, or is more-computer-assisted-yet-still-manual. That makes me sad.
The other is that the number of people coding is increasing. They increase at the top of the stack faster than at the bottom, so the relative weight will shift. That seems good. Whether what they do to achieve 'coding' is what we consider coding.. I don't think so. ML + synthesis + VPLs (ex: Tableau) are amazing, and libraries enable stuff like IFTTT. Ex: if cat then that
It easily takes 10-20 years for PL research to go mainstream.
So far, it has been more like 20-40. (Which leads people with short attention span to mistake PL research as irrelevant.)
Still waiting for those parentheses to go mainstream.
seem like kludges by people who don't know how to program simple things. That's why forth is in reverse polish.
And if lisp macros are so powerful why are they never used to make the language readable?
Perhaps one answer to that is that the idea that you can add anything significant to a pre-existing language is an illusion, because anything you add won't be compatible with other programmers' code, or won't be considered prestigious enough that programming shops will use something non-standard.
Some forms of language extension are self-defeating; rather than cleanly moving things forward, they twist the language into a mess from which the only way to make further progress is to start over from scratch. Macros are like that. TeX provides macros for extension, and when the LaTeX macro package needed further extension, it required a massive project spanning decades. But just because some forms of language extension are like that, doesn't mean all are. Fexprs lack the foundational defects of macros. Seems they may have the potential to support smooth extension. Or at least be a step in the right direction.
In themselves they don't get rid of the parentheses, though. The parentheses are a separate issue: they exist because Lisp has natively no notation at all for control structure; it only has a notation for data structure. Some Lisp dialects try to pretend otherwise, but only make things messy thereby because they're denying the nature of the language. For a Lisp syntactic strategy to stand a chance of success it has to recognize that and work with it, rather than fight it.
McCarthy never intended S-expressions to be the canonical way of writing LISP code; it just sort of wound up like that. It makes me wonder if there might be a better route to homoiconicity that just never got explored.
You can match on structures the way you type them, thus decomposing or substituting; you can also compose them from lists, or decompose them into lists, with =..
Since prolog allows you to specify new operators at run time, =.. works fine with arbitrary operators too. I'm not sure how it handles macros, but you can apply macro phases by hand.
That kind of extensibility lives on in unapply/extensible pattern matching.
where can one find "unapply/extensible pattern matching"?
Maybe check out Burak's paper:
if you weren't aware of this I'd like to make it clearer.
You can match your way through structures in Prolog as easily as you can walk lists in Lisp. Though it may actually require a nonstandard extension in one case.
a(1,2+3) = a(B,C) matches the structure and sets B to 1 and C to 2+3, for instance (note that 2+3 is itself a structure, also written as +(2,3)).
Note you can break things down in a single match: a(1,2+3) = a(B,C+3) matches C to 2.
But a(1,2+3) = A(B,C), where matching happens on the term itself being a variable, is a nonstandard extension, supported, by the way, by the Prolog I linked to above.
In any case it could be done in a standard way with
a(1,2+3) =.. [A,B,C]
Note you can break a term into car/cdr with =.. too
a(1,2+3) =.. [CAR | CDR]
Also to help you talk about it, =.. is called "Univ"
I see no reason for lisp to have lists as the only input format. Prolog demonstrates that there is nothing wrong with parsing to a list rather than requiring the input to be a list. You don't lose any generality, and the transformation is bidirectional and trivial.
Edit: In case you don't know prolog. In prolog upper case letters start variables and lower case letters start atoms.
Also "=" means do unification which is a bidirectional pattern match. If it can't succeed then execution backtracks.
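To make the =.. ("univ") idea concrete for non-Prolog readers, here is a hedged Python sketch of the same decomposition. The Term class is invented purely for illustration and glosses over variables and unification entirely; it only shows the bidirectional term-to-list transformation the thread describes.

```python
class Term:
    """A compound term like a(1, 2+3); functor plus argument list."""
    def __init__(self, functor, *args):
        self.functor = functor
        self.args = list(args)

    def __eq__(self, other):
        return (isinstance(other, Term)
                and self.functor == other.functor
                and self.args == other.args)

def univ(term):
    """Decompose: T =.. [F | Args] in Prolog."""
    return [term.functor] + term.args

def build(parts):
    """Rebuild a term from its univ list; the other direction of =.."""
    return Term(parts[0], *parts[1:])

# a(1, 2+3) -- note 2+3 is itself the structure +(2, 3)
t = Term("a", 1, Term("+", 2, 3))
parts = univ(t)                  # ["a", 1, +(2, 3)]
car, cdr = parts[0], parts[1:]   # the car/cdr split from the thread
assert build(parts) == t         # round-trips losslessly
```

The round-trip assertion is the point being made above: nothing is lost by parsing input to a list instead of requiring the input to be a list.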
I've implemented my fair share of Prolog interpreters so I'm pretty clear about how unification works. Advanced pattern matching brings some of that to the table, with unapply acting as a surrogate for what otherwise occurs through backward chaining.
McCarthy never intended S expressions to be the canonical way of writing LISP code, it just sort of wound up like that.
Yup, in effect they blundered into it. Imo, that outcome was pretty much inevitable, because S-expression Lisp is (up to minor variants) a point of resonance in design space, profoundly more powerful than anything else in its neighborhood. Anyone who hit on it, even accidentally, would create something that would continue to resonate down through the decades (though it's too early yet to say "down through the centuries", however much one might suspect it).
But the best syntax is no almost syntax at all? It is a weird conclusion to make, and it is not found to be very usable by the vast majority of programmers. Of course, such hyper-regularity is great for programmatic manipulation of code, but is it worth the cost?
Not every specific characteristic of S-expression Lisp is necessary to the resonance effect. Some characteristics really do just happen to have fallen out the way they did. Some eventually change. Some stay as they are by tradition. Some stay as they are because those who have tried to do better have discovered the hard way that it's actually remarkably difficult to do better. I'm rather fascinated myself by the syntax issue; from all those failed attempts to reform Lisp syntax, one doesn't have to conclude that it can't be done, but one certainly ought to conclude that it can't be done the way people have been trying to do it.
The "is it worth the cost?" question is well-traveled territory within, as well as without, the Lisp community. Lots of answers have been offered..
Great quote!
To connect this subthread to the original question -- if programming (not the "knob turning" variety) becomes less wide-spread, might we not see less pressure on readability by non-experts and a move toward more expressive, if obscure to the uninitiated, notations? APL anyone?
for sufficiently nunanced values of 'mainstream'.
It was a JOKE... but yeah.
but yeah. ;-)
Looking at the poll results. Just because you don't want something to happen in the future doesn't mean it won't.
I wonder how many selected the last option because it seemed the most fun ....
I would have gone for "longer than 25 years", but I wasn't comfortable with the excess baggage of the last option. Then again, I almost never participate in surveys because I almost always have objections either to the way they're designed or, failing that, to the way they're likely to be interpreted. The last time I recall participating in a nontrivial survey was, let's see, about 25 years ago, and from things said afterward I concluded the people analyzing the results had an agenda that would prevent them from interpreting my answers correctly. (No, this wasn't a political survey; I've never participated in those. It was about how well people understand statistics, with those doing the study having a strong bias to interpret the results as supporting their thesis that people are clueless about statistics. Ah, irony.)
I think this was just meant as fun, not a serious survey. Though I really do wonder what people think the answer is. I think a lot of PL people would think that the need for programming would last forever. It is difficult to not think this way, since it is one of the hardest things we do. On the other hand, computation is something that we have been consistently pushing up the chain since the activity started (a bunch of secretaries sat in a room doing math by hand).
Of course it was for fun. I think the answers do tell a story, and the discussion so far has been great, but the poll? Really? Cold dead hands and everything? I thought the Onion-like headline was a dead giveaway.
Sorry if my serious remark about surveys struck a sour note. Even when having fun I tend to stay firmly tethered to the serious side of things. In my childhood, my mother would sometimes call me "Eeyore". In graduate school, I knew this guy who'd grown up in Ukraine. Interesting perspective, he had. I remember he remarked once he was always impressed by American supermarkets because there was stuff on all the shelves. He also once remarked that I reminded him of a character from a children's story that was popular in Ukraine, though he wasn't sure if we had it in English, the character's name being "Eeyore".
The last option simply means people will be programming longer than our lifetimes (~50 years). That's not so unreasonable.
Heh. I picked the last option because we should resist the centralization of control over computing.
I have a big question related to yours (about how long we'll keep programming in ways we are used to), which delves into phases that compose programming, under a premise that phases will evolve at different rates. I will leave the number of phases as an exercise for the reader. :-) But a couple initial phases are important and perhaps difficult to target with automation to help:
Depending on the problem-solving ideology one subscribes to, this may gloss over a lot of internal steps involved in massaging a problem statement and framing an acceptable solution budget and time-frame. How much of noticing and shaping the problem statement is programming? Actually trying to code up a planned solution is just one part of programming, and perhaps not the principal part of it.
McDirmid notes earlier that math and theory creation are things software addresses very little (and presumably requires strong AI). How much of programming is ideation about theory? About causes and effects? Even statements of fact? If we want automation to turn a crank and generate code for us, presumably we must indicate a result we want, expressing a plan about inputs and outputs in a way capturing an agreeable solution to a problem. We don't expect software to notice the problem by itself and create a plan, do we?
It seems clear that more and more things are being automated, machine learning is improving, systems are becoming harder to tinker with, and so on.
Systems might be easier to tinker-with if actually designed to be easy to alter, instead of merely being the most expedient thing generated by earlier coders' development process. In fact, the growing body of old software is one of the things making it harder to program, because it increases difficulty in perceiving the exact nature of problems, and appropriate means of addressing them.
We can probably automate a lot of the plan-execution phase of coding, but it may be very hard to notice problems, shape plans, and make decisions. The part of programming that amounts to formulating relevant issues may stay with us a long time.
Systems might be easier to tinker-with if actually designed to be easy to alter
My point was that they are now designed to be harder to alter.
The part of programming that amounts to formulating relevant issues may stay with us a long time.
For me, this requires programming just as much as building solutions. Programming is an expressive medium, not just something that comes after understanding requirements. (Come to think of it, this is one reason why waterfall models fail.)
If programming becomes just turning some knobs until you get what you want, is it still programming? Right now, the feedback loops are impossibly long, which agile tries to address at the human engineer level. But imagine if you commanded a team of programmers who was really quick and whose weren't expensive. Then you could be really agile about requirements and just try out many solutions until it was adequate. Is that programming on your part? Or just getting what you want?
I'm okay with rapid feedback, turning knobs, and affordable programmers. My focus on having someone understand doesn't require a long runway (like a waterfall model), just that a cogent idea of current cause and effect in relationships is roughly on target. Spontaneous and changing plans ought to work, if tools are forgiving and fast, if something like a clear plan for system behavior exists at least informally in one or more minds.
I can imagine a team creating a mess when they don't collectively have a common agenda, or at least grasp what one person wants. When they twiddle knobs and you don't get what you want, how do you correct course without knowing what happens now? Maybe that's not a problem if a valid "you are here" description always pops out of the tools. But I expect chaos or limbo is a result of random walks in digital effects.
There's a lot of ceremony and process in current development that's completely unnecessary -- a total waste of time -- sometimes in the name of following a plan ("doing stuff" in order to "provide value"). Tools to short circuit that sounds fine. It's just that computing tech acts like an amplifier much of the time, and it's just as easy to amplify poor objectives as good ones. Faster doesn't need to yield smarter.
People will create a mess when they don't have a common agenda. Agile only partially solves the problem of not having good requirements, you actually still need good requirements, good developers, time for them to work...that code is expensive and you can't just throw it away.
But if a machine is writing the code, then it is cheap and disposable, you can just have it redo the program with refined requirements until you get what you want. It is completely different model than how we build software today. It is not automating programming, it is making software engineering much more fluid and viable. The process would just be totally different from what we recognize today.
Automated programming without a tight feedback loop is stupid: the programming isn't going to magically know what you want, and likely you won't either! But it gives us a chance to iterate: I might not know what I want, but I sure as hell know what I don't want! So it produces something, you tell it what is wrong about it vs. telling it what should be right about it! This is actually how much of the real world works. We are the weak link in the process, and this would allow us to be much more effective.
(Yes, I reply only to clear the acid after-taste of genocide politics, so my questions are largely rhetorical, and don't need answers unless yours are remarkably good ones with useful insights.)
Will there be modules and replaceable parts? Or just a big organic lump of spot-welded monolithic executable? Jules Jacobs recently noted the importance of being able to check whether parts obey contracts at module boundaries. Will interactive knobs also include module boundary identification and characterization?
One problem of interactively tuning (code until you like it) is not exercising the whole state space, so what might have happened with other inputs (or another configuration) may not be visible at all, encouraging an out-of-sight-out-of-mind complacency.
Surprising things can hide in complex code, even if it seems to behave last time you looked. For example, one hard thing to see is internal processing that happens more times than necessary, because it's idempotent and extra repetitions have no effect beyond burning a little more fossil fuel. Doing something twice instead of once is hard to see in profiling, even if a thousand times would show up (if you looked). In the 90's I would only find these walking through code in a debugger, when I was sure I had done it exactly right.
A common sort of bug I have to find these days is when nothing happens. You send a message, or an event, or do whatever it is that trips a reaction, and it's just ignored as if it never happened. Often there are up to dozens of places where input can be rejected as noise, as invalid, as not fitting the config, as being inconvenient just then, or some other kind of stimulus not expected in the current runtime state. A user just sees silent unresponsiveness, which is not conducive to diagnosis and correction.
If getting what you want is limited to observing what is visible in a user interface, undesirable things can lurk under the surface. They can happen (or not happen) without drawing attention, or lurk until a different combination of future conditions is achieved. It's hard to turn empirical observation into an accurate gauge of future potential without a theory about internal mechanisms. So automated machinery ought to have a transparent mode where you look inside at the wheels turning, to see which gears jam and which don't turn at all at surprising moments. Realistically there are too many of them for a human to grasp without some kind of hierarchical breakdown and summary.
Knowledge is already modular. The process of programming is entangling knowledge with other knowledge to implement something that actually does something. So if the computer was programming, the modules would be in its training set, but the output code would be an optimal tangled mess of concerns! Think of the original goals of AOP: to separate concerns via meta-programming. Well, if you have a tool that is generating code from modular higher level descriptions, no point in making the generated code modular...in fact the whole point is to have it tangle knowledge so you don't have to!
I think it was only with great effort and interpretation that the Google researchers were able to identify a "cat neuron" in their image recognition work using deep learning. My colleague who works on SR basically calls the resulting DNNs as pretty opaque, they are the ultimate of black boxes. But think of it as an evolved system: you'll be able to identify subsystems (circularity, nervous, respiratory) that emerge organically.
If there is one thing computers are good at doing, it is iterating methodically through a state space. In fact, this probably has more to do with generating the code in the first place! But at the end of the day, you'll only get a program with N% chance of being correct and optimal, where N is high but not 100. Uhm, but how is that different from the way development is done now?
I doubt just because the computer is generating the code that we will be able to stop testing on users. I mean, that is part of the knob turning process. Oops, nothing happened, something was missed, have the computer do it again with additional liveliness constraints! We could look inside, it would be much like medicine is today, but why bother if we could just rebuild it with additional requirements? We can't do that with life since...well, we are attached to living things, but programs are disposable since they are so easily recreated.
I am skeptical programming can be reduced to knob turning, it is a complex non-linear multi-dimensional activity.
Testing only works remotely well because we can do corner case analysis. With a black box like a neural network everything is a corner case, and the failures are unpredictable. Why one image of a cat is recognised as a cat and another not is unpredictable.
I think a specification for a search to find a program that is accurate enough to define the problem, is itself a program. You just move to writing the specification (a bit like programming in Prolog).
I think the best we can achieve would be something lime a bank of algorithms, where you search for a solution to a problem specification using a Prolog type search, each algorithm having a mathematical specification. The solution to the initial problem is effectively a proof search of the algorithm bank. I don't know how useful it would be, but it seems interesting.
(1) we will never absolute never ever specify programs. The logicians have lost. You will just keep refining some set of vague requirements until you get what you want.
(2) computers are much better at corner cases than humans.
Logicians will win because our artificial neural networks will eventually be smart enough to completely specify programs. Oh you wanted the programmers to be humans?
[insert bemused emoticon here]
The logicians have already lost AI, NN's have nothing to do with ligic. The logicians will also probably lose PL sometime in the future, though that hasn't happened yet.
I suppose I should read the thread more carefully, but there's a huge difference between "proofs about programs" and "programs that rely on provers or logic methods".
Ie, proving that your program is correct is torture. Using logic to run your program is a pleasure.
So I hope that logicians get more say at a direct level in programming.
I don't like languages that force programmers into a narrow paradigm, logic or functional. And I think that what's nicest about logic programming may have nothing to do with logic, it might be a matter of stealing the logicians favorite algorithms and pointing out that they're useful even when they're not working in a logic domain.
NN's don't involve proofs or provers. They are as far from a logical method as one can get! There is only corpus and what is generalized from that corpus. So the unreasonable effectiveness of ML is what got the logicians out of AI...
As for whether the selected code in a corpus is based on logic, I don't think it matters to the computer, at least.
But fully specified programs are useful.
I'm saying that if we make NNs more reliable than humans with better memory and organization then THEY can write fully specified programs for us.
And who knows, maybe humans will make NN AIs and NN AIs can make logic AIs.
NNs require a different sort of architecture and much greater computing power than programs. So we will always need programs to solve problems using less silicon and watts.
But we still cannot (or will not) write specifications, so even if the computer could act on full specifications, it's not a very interesting capability.
NN are statistical machines, and can be reduced to Bayesian decision trees (you can extract the tree from the trained network). This is really just a probabilistic logic (the Bayesian interpretation of probability is an extension of propositional logic).
So the logicians have won, they just don't know it :-)
Does this mean that Google can train some ungodly huge network on every picture in the world or translations from Korean to Urdu, then convert that network into a tree, simplify it to take out the nonsense connection and then convert that tree into a chip more efficiently than as a neural network?
Whilst I am sure you can simplify the chips, I think the parallel implementation is optimal. The Bayesian tree representation is more useful for execution on a sequential computer, and for reasoning about the decision making.
My point was a trained network will have a predictable failure rate say 1/1000, but that is the limit of predictability, we have no way of knowing which cat will fail to be recognised.
With source code, we can see the corner cases (if x < 0 then...), we read the code and find all the decision points, then construct all the branching paths through the source code. We can then construct a small test suite to test all the code paths for correctness. We can thus be reasonably sure the code is correct.
Computers can analyse source code to find the corner cases, but they cannot do so with neural networks. Any simplification of the network becomes an approximation, so it is not predicting the exact failures.
When I write programs I start from requirements. Refining the requirements to a concrete specification is programming. This is an iterative process, and the requirements may change and not be well defined, but the process of coding that specification into a rigid formal language is what exposes all the inconsistencies and problems that need to be dealt with to have a reliable process.
With source code, we can't detect cats.
The computer is taking requirements and refining it into a specification, this just happens to be the program. And as in real life, this translation is where all the problems are exposed; I.e. you not getting what you want because you didn't really ask for the right thing. That doesn't change, just the feedback loop is tighter and the middle man is eliminated.
Anyways this is all futurology, so we really don't know how this will play out. We all have very different ideas about the nature of programming and even intelligence, which effect how we each see the future. I also get now why the PL field is so stagnate, as it seems to be a popular consensus that the future will be like the present and past. | http://lambda-the-ultimate.org/node/5228 | CC-MAIN-2019-51 | refinedweb | 26,699 | 60.85 |
: * Copyright (c) 2004 Matthew Dillon <dillon@backplane.com>: * ---------------------------------------------------------------------------- 27: * "THE BEER-WARE LICENSE" (Revision 42): 28: * <phk@FreeBSD.ORG> wrote this file. As long as you retain this notice you 29: * can do whatever you want with this stuff. If we meet some day, and you think 30: * this stuff is worth it, you can buy me a beer in return. Poul-Henning Kamp 31: * ---------------------------------------------------------------------------- 32: * 33: * $FreeBSD: src/sys/sys/disk.h,v 1.16.2.3 2001/06/20 16:11:01 scottl Exp $ 34: * $DragonFly: src/sys/sys/disk.h,v 1.5 2004/05/19 22:53:02 dillon Exp $ 35: */ 36: 37: #ifndef _SYS_DISK_H_ 38: #define _SYS_DISK_H_ 39: 40: #ifndef _SYS_DISKSLICE_H_ 41: #include <sys/diskslice.h> 42: #endif 43: 44: #ifndef _SYS_DISKLABEL 45: #include <sys/disklabel.h> 46: #endif 47: 48: #ifndef _SYS_DISKLABEL 49: #include <sys/msgport.h> 50: #endif 51: 52: struct disk { 53: struct lwkt_port d_port; /* interception port */ 54: struct cdevsw *d_devsw; /* our device switch */ 55: struct cdevsw *d_rawsw; /* the raw device switch */ 56: u_int d_flags; 57: u_int d_dsflags; 58: dev_t d_rawdev; /* backing raw device */ 59: dev_t d_cdev; /* special whole-disk part */ 60: struct diskslices *d_slice; 61: struct disklabel d_label; 62: LIST_ENTRY(disk) d_list; 63: }; 64: 65: #define DISKFLAG_LOCK 0x1 66: #define DISKFLAG_WANTED 0x2 67: 68: dev_t disk_create (int unit, struct disk *disk, int flags, struct cdevsw *sw); 69: void disk_destroy (struct disk *disk); 70: int disk_dumpcheck (dev_t dev, u_int *count, u_int *blkno, u_int *secsize); 71: struct disk *disk_enumerate (struct disk *disk); 72: void disk_invalidate (struct disk *disk); 73: 74: #endif /* _SYS_DISK_H_ */ | http://www.dragonflybsd.org/cvsweb/src/sys/sys/disk.h?f=h;content-type=text%2Fx-cvsweb-markup;ln=1;rev=1.5 | CC-MAIN-2014-42 | refinedweb | 259 | 67.79 |
Jul 20, 2010, at 2:58 PM, SourceForge.net wrote:
> I know it\'s wierd but 2 always seemed too hard to see and 4 seemed too much.
Please use 4-space indents. This is the standard value used by all major Python projects, and the one described in PEP 8. The minor aesthetic preference you have for 3-space indents will be massively outweighed by every tool (pydev, emacs, vim) fighting you, not to mention the fact that you will need to re-indent any example code that you wish to copy.
The following forum message was posted by at:
I am using RHEL 5.2 and:
Eclipse IDE for C/C++ Developers
Version: Helios Release
Build id: 20100617-1415
The following forum message was posted by at:
As a newbie, my problem is I started out coding with a mixture of space and tab indents and rapidly evolving coding styles. The Eclipse [b]Source->Format Code[/b] is nice but the resulting code still has all my[b] bad indention warnings[/b]. I prefer to use a 3-space indent. I know it\'s wierd but 2 always seemed too hard to see and 4 seemed too much.
I am new to both Eclipse an Python. Pydev certainly makes programming in Python more enjoyable. Thanks for a great tool!
-Ed
The following forum message was posted by at:
Hi Fabio,
Thank you very much for your time!
Firewall and SE Linux are both disabled.
I have 2 network cards installed.
eth0 - Is currently connected to a switch that goes nowhere. Static IP: 172.16.1.2 / Mask: 255.255.254.0
eth1 - Connected to Internet: DHCP / Mask 255.255.255.0
I modified code as follows:
[code]#=======================================================================================================================
# StartClient
#=======================================================================================================================
def StartClient(host, port):
\"\"\" connects to a host/port \"\"\"
PydevdLog(1, \"Connecting to \", host, \":\", str(port))
try:
s = socket(AF_INET, SOCK_STREAM);
s.connect((host, port))
PydevdLog(1, \"Connected.\")
return s
except:
import traceback;traceback.print_exc()
sys.stderr.write(\"server timed out after 10 seconds, could not connect to %s: %s\\n\" % (host, port))
sys.stderr.write(\"Exiting. Bye!\\n\")
sys.exit(1)
[/code]
I now get:
[code]pydev debugger: warning: psyco not available for speedups (the debugger will still work correctly, but a bit slower)
pydev debugger: starting
Traceback (most recent call last):
File \"/home/esutton/setup/eclipse/plugins/org.python.pydev.debug_1.5.9.2010063001/pysrc/pydevd_comm.py\", line 334, in StartClient
s.connect((host, port))
File \"<string>\", line 1, in connect
gaierror: (-2, \'Name or service not known\')
server timed out after 10 seconds, could not connect to localhost: 48145
Exiting. Bye![/code]
Any clues?
eth0 goes nowhere; it is connected to an empty switch. Does the socket connection need loopback enabled or something? Or is there someway to connect to eth1 instead?
Thank you for your help,
-Ed
The following forum message was posted by rekveld at:
hi all,
not too relevant to this list perhaps, but still I\'m happy to share
the cause of the problem I had:
the fstab entry for my volume read like this: rw,exec,auto,users
And it turns out that the \'users\'-option \"automatically implies
noexec, nosuid, nodev unless overridden\" according to wikipedia.
Apparently the options are set sequentially, so that having \'users\' at
the end invisibly sets noexec which was the cause of my problem.
so the following very similar-looking fstab entry works where the
above one doesn\'t: users,rw,exec,auto
what about that !
Took a couple of days to figure out, but I learned a lot.
thanks,
Joost. | https://sourceforge.net/p/pydev/mailman/pydev-users/?viewmonth=201007&viewday=20 | CC-MAIN-2018-05 | refinedweb | 594 | 64.81 |
Type: Posts; User: 2kaud
That explains the use of size_t rather than DWORD and why the cast is required. Note that both size_t and DWORD are unsigned types so care has to be taken with decrement as you can't have a negative...
If nsize = 1, ncount = 4 then
stwritten = 0
sttotal = 4 * 1 = 4
stremain = 4
stwritten < sttotal
dwwritecount = 0
dwwritten = ?? (say 987654)
See
Also try it with This scans the specified file using 49 different virus scanners.
Does this happen for any user-written .exe or just this one - have you tried just a simple 'hello world' program? Is the program being rejected because it is downloaded and isn't signed?
Now fixed - is that a record?
The link to the AUP is broken
Re-build the soultion using Debug mode. Release mode strips away debug info etc and does code optimisation. You should only compile as Release when you're happy with the solution - for final testing...
That 'trick' is also useful if the arguments are of different types:
std::min(1, 2U);
won't compile as the 1st arg is int, the second is unsigned int - so template type deduction reports...
Another way is to specify the type for the template- again so that the **** pre-compiler won't confuse with a macro!
std::min<int>(1, 2);
The problem for 'Joe programmer' is that the committee considers proposals for word changes to the standard to bring new ideas into the language. There's no restriction on who can submit, but the...
template <class U>
static yes_type test(U&, decltype(U(source<U>()))* = 0);
This looks like a function declaration with name test. The first parameter is U& and the second is an optional...
IMO I wouldn't hold your breath on this. There is an ISO working group/committee (WG21 - chaired by Herb Sutter from MS) which deals with the standard (as C++ is an ISO standard). The members are...
For over the past 25 years I've only used c/c++ - so I'm a little Rusty on the languages I used before then so can't comment. :)
Look it up in Wikipedia -
case 1:
fResource = IDR_MYFONT_1FEDRA;
case 2:
fResource = IDR_MYFONT_3FEDRA;
case 3:
fResource = IDR_MYFONT_5FEDRA;
You are missing the break statements after the assignments!
Welcome to codeguru :wave:
Even with 16.6.1, issues regarding the upgrade are still being reported.16.6.1 seems to work OK for some but not for others.
See...
As this thread is nearly 13 years old, I suggest you start a new thread detailing your specific issue.
Try removing constexpr from the #defines.
Note that .insert() within an iterator loop can have a similar - but different- issue and needs to be considered on a case-by-case basis depending upon the container type and location of insertion.
It is returning end(). However:
- for 1 or more elements, end() returns 1 past the last element
- for no elements, end() returns begin()
This is so that the standard iterator for statement:
...
Using very long names within the code makes the code very hard to read and understand! Yes, names should be meaningful but also easy to read. IMO I suggest a limit of about 15.
Using structured...
The way to erase elements from a container is:
include <list>
#include <iostream>
int main()
{
std::list<int> lst {3, 2};
This issue is now supposedly fixed in release 16.6.1
The c/c++ dll should look something like (not tested):
_declspec(dllexport) void __stdcall Mytest(char* buf, int buflen)
{
char szSampleString[] = "Hello. | http://forums.codeguru.com/search.php?s=107d6ff26867579c15ab09af59fd753d&searchid=20765080 | CC-MAIN-2020-29 | refinedweb | 592 | 72.36 |
Red Hat Bugzilla – Bug 75751
switchdesk writes invalid shell script
Last modified: 2007-03-26 23:57:42 EDT
Description of Problem:
'switchdesk' writes two files in the user's home directory: .Xclients and
.Xclients-default.
.Xclients-default is execed from .Xclients; however, the -default file does not
have the initial "#!/bin/sh" line that an executable shell script should have.
The script does work, but it is technically incorrect.
Version-Release number of selected component (if applicable):
3.9.8-9
Steps to Reproduce:
1. run switchdesk
2. examine ~/.Xclients-default
3. observe that it does not begin with #!/bin/sh
Additional Information:
As a separate enhancement request: IMHO ~/.Xclients should start with
'#!/bin/sh' and not '#!/bin/bash' for portability. Only if it were really
dependent on some "bash-isms" should it specify /bin/bash.
Scripts which do not have a shebang are perfectly technically correct.
The OS uses the default shell automatically. However, it isn't a big
deal to add a shebang, so I'll add #!/bin/bash which is our official
system shell (and not /bin/sh)
... so, it makes no difference which is there as the same shell gets
used in either case. Again, bash is the supported system shell, so
that is what we care about being there.
(Clarifying so as to avoid a pointless /bin/sh purist debate which will
end up in me doing what I've decided to do anyway)
mharris wrote:
************************
Yes, I know that sh is linked to bash -- I wasn't filing the report on
'purist' grounds, so no need to worry there. ;)
Thanks for clarifying the other issue regarding lack of an interpreter line.
I'm not a purist, but I'll play one here ;-)
u-pl6:~/BIG/translate$ /bin/bash
u-pl6:~$ set |grep POSIX
u-pl6:~$ exit
u-pl6:~/BIG/translate$ /bin/sh
u-pl6:~/BIG/translate$ set |grep POSIX
POSIXLY_CORRECT=y
u-pl6:~/BIG/translate$ exit
bash changes its behavior depending on whether it was invoked as bash or sh
That's not a surprise. It is documented behaviour and should be present
in the bash manpage. When invoked as /bin/sh or with POSIXLY_CORRECT,
bash will definitely behave differently. I still think it would have
been nice if RMS would have called it POSIX_ME_HARDER like he wanted
to originally though.
Mike, you are not completely right. I'm closing this old bug, as your
could would work anyway. But about shebang's being technically
optional, I'm afraid not. It's the shell that defaults to /bin/sh.
If you use exec set of functions, it would not work without shebang.
Here's an example:
Compile this one as 'a':
#include <unistd.h>
void
main()
{
execl("./x", "x", 0);
}
and write a simple script x:
echo "I'm x".
and try ./a, you see nothing. Now add shebang to x, try ./a, you see
the output.
Would you close then. I can't.
That isn't the proper way to invoke a shell script from a C program.
No surprise there.
Let's please not waste each other's time with a totally pointless
debate in a bug report though.
The purpose of this report ultimately is to request that a shebang
be added to a script because it is missing. I will do that probably
in good time if it isn't done already, but it's a rather low priority
issue. The bug can stay open until someone fixes it. Part of the
reason switchdesk bugs are low priority is because we'd like to
completely remove switchdesk, but for some reason or another it
keeps sticking around. ;o) The switchdesk CVS repo is also
screwed up IIRC.
Just FYI, the proper way to invoke a shell script, is with the
interpreter as the first argument to exec*()
Anyway, all of the information required to fix and close this bug
report is within the bug report, no further comments are needed
from anyone. Thanks for pinging me on it though.
Adding to tracker
it's fixed in 4.0.0, which is available in rawhide. | https://bugzilla.redhat.com/show_bug.cgi?id=75751 | CC-MAIN-2017-30 | refinedweb | 685 | 75.81 |
Why nette always show default 500 error instead of 404 error template?
4 years ago
Hi,
I have an error presenter like here: "":…resenter.php
In bootstrap.php I have this line:
<?php $container->getService('application')->errorPresenter = 'Front:Error'; ?>
Error presenter is located in FrontModule and I have Latte templates for every kind of error: 404, etc.
My problem is that, when a 404 error is thrown, the Nette default 500 error template is shown (of course, in production mode).
Can anybody tell me what am I doing wrong? :)
Thanks in advance!
4 years ago
I would say, that problem can be in your Error presenter. While you process error message (eg. 404), Nette throw some exception and return 500. Try check your log folder, there should be some file(s) with more information.
Almost every time when it happend to me, the problem was with it.
btw. Have your presenter right namespace with FrontModule?
4 years ago
Hi Oli,
Thanks for your reply. This will help me cause I found the problem in logs. A fatal error stops the execution until the error presenter is loading. I'm a beginner so, from now on I know what should I do.
Yes, error presenter has the right namespace.
Thank you! Problem will be solved!
4 years ago
May I disturb you again with a question?
My error comes from a function that gets from database some categories to build the menu of the website. This function is called in beforeRender() of the BasePresenter (all other presenters extend this presenter so I thought is the best way to do it here because will run for all pages).
Can you recommend me where to collect common data (as menu items) that are used among the all pages?
The fatal error was: “Call to a member function table() on a non-object” but is strange that the function that build menu works well when a route is matching the rules.
4 years ago
Can you give here how you inject database to BasePresenter and how you give menu to template? Have you in Your BasePresenter something like this? Or you use constructor injection?
/** @var \Nette\Database\Context @inject */ public $context; protected function beforeRender() { $this->template->menu = $this->context->table('menu')->where(/* ... */); parent::beforeRender(); }
4 years ago
I'm using constructor injection.
So, in BasePresenter.php:
<?php public $database; public function __construct(Context $database) { $this->database = $database; } protected function beforeRender() { .... $this->loadMenu(); } public function loadMenu() { $nodes = $this->database ->table('node_language') ->where('language_id', $this->session->language_id) ->where('node.visibility', 'public') ->where('node.level', '2') ->order('node.lft, node.parent_id') ; $this->template->menu = $nodes; } ?>
4 years ago
Constructor injection is bad choice in BasePresenter. It's bad, because you have to use it in every child presenters. Use injectMethod or ineject annotation insted.
It would solve your problem. I gues, you forgot call
parent::__construct in ErrorPresenter, don't you?
4 years ago
You are right. Error presenter doesn't call parent::__construct. I will try to use inject annotation. Thank you for your help!
4 years ago
Works perfect now! Thanks again! | https://forum.nette.org/en/23344-why-nette-always-show-default-500-error-instead-of-404-error-template | CC-MAIN-2019-47 | refinedweb | 517 | 61.22 |
In this recipe, we will be composing queries out of reusable chunks into larger business specific queries.
We will be using the NuGet package manager to install the Entity Framework 4.1 assemblies.
The package installer can be found at.
We will also be using a database for connecting to and updating data.
Open the Improving Complex Where Clauses solution in the included source code examples.
Carry out the following steps in order to accomplish this recipe.
whereclauses, which needs to be abstracted. Use the following code:
using System; using System.Collections.Generic; using System.Linq; using BusinessLogic; ...
No credit card required | https://www.safaribooksonline.com/library/view/entity-framework-41/9781849684460/ch06s04.html | CC-MAIN-2018-39 | refinedweb | 102 | 53.98 |
.
Note
The reflection-only context is new in the .NET Framework version 2.0..
Note
Starting with the .NET Framework version 2.0, the runtime will not load an assembly that was compiled with a version of the .NET Framework that has a higher version number than the currently loaded runtime. This applies to the combination of the major and minor components of the version number. namespace System; using namespace System::Reflection; public refptr); } }; int main() { Asmload0::Main(); }); } }
Imports System.Reflection Public Class Asmload0 Public Shared Sub Main() ' Use the file name to load the assembly into the current ' application domain. Dim a As Assembly = Assembly.Load("example") ' Get the type to use. Dim myType As Type = a.GetType("Example") ' Get the method to call. Dim myMethod As MethodInfo = myType.GetMethod("MethodA") ' Create an instance. Dim obj As Object = Activator.CreateInstance(myType) ' Execute the method. myMethod.Invoke(obj, Nothing) End Sub End Class | https://docs.microsoft.com/en-us/dotnet/framework/app-domains/how-to-load-assemblies-into-an-application-domain | CC-MAIN-2021-39 | refinedweb | 153 | 55.3 |
Piotr Sarnacki 2016-09-29T12:39:52+00:00 Piotr Sarnacki drogus@gmail.com I'm tired by "Rails should fundamentally change" crowd 2014-01-01T00:00:00+00:00 2014-01-01T00:00:00+00:00 <p>I read <a href="">How can Rails react to rise of the JavaScript applications?</a> by <a href="">Andrzej Krzywda</a> recently. A part of the discussion in the comments derailed into more general question of “Where is Rails heading?”. As <a href="">@solnic</a>.).</p> <p.</p> <a href="">Łukasz Strzałkowski</a> as a part of GSOC 2013. I wanted to extract Action View from Action Pack specifically because I care more and more for APIs which either doesn’t use views at all or use them in a very limited way, where much simpler solutions like <a href="">Tilt</a> could be used. Does DHH care about this? I strongly doubt it. Is it ok? It is totally ok in my opinion.</p> <p href="">a flashcards application with a builtin dictionary</a>.</p> <p>Am I happy with Rails? Yes, in my opinion it gives me a nice balance between flexibility and a set of conventions, which lets me use some of the patterns I like and in the same time it lowers the time to ship.</p> <p.</p> <p>I want to be clear: I’m not saying that majority of people use Rails in “standard way”, but you shouldn't be saying otherwise either if you don’t have any data to back it up other than the number of your friends talking about it.</p> <p>To sum this part up: I’m sorry folks, in my opinion there is a little chance of a fundamental changes in Rails. It’s certainly possible, but not likely, at least not in a near future.</p> <p.</p> <p>First thing is that I saw <em>a lot</em>.</p> <p?</p> <p.</p> <p.</p> Mountable apps tutorial 2010-12-21T00:00:00+00:00 2010-12-21T00:00:00+00:00 <p!</p> <p <a href="">last post on that topic</a>.</p> <p>Although almost all of the changes made to engines were usable shortly after summer, the process of setting up a new engine, suitable for mounting, was really rough. 
Of course, there is a great gem, <a href="">enginex</a>.</p> <p>Let’s start with the tutorial. First things first. As the new APIs work only on rails edge, you need to get rails from github:</p> <figure class="highlight"><pre><code class="language-bash" data-git clone git://github.com/rails/rails.git <span class="nb">cd </span>rails bundle install</code></pre></figure><p>To generate any rails extension in rails 3.1 you can use the <code>rails plugin new</code> command. It generates a directory with a lib directory, gemspec, tests and a dummy application for testing. There are 2 really important things here:</p> <ul> <li>the gemspec is automatically generated, unless you use the <code>--skip-gemspec</code> option – I hope that this will help to move more people to gem plugins</li> <li>the dummy application, which by default lives in test/dummy – this is a standard rails application that you can use to test your plugin</li> </ul> <p>To generate a mountable engine, we can use:</p> <figure class="highlight"><pre><code class="language-bash" data-bundle <span class="nb">exec</span> ./bin/rails plugin new ../blog --edge --mountable <span class="nb">cd</span> ../blog bundle install</code></pre></figure><p>We need to add the <code>--dev</code> or <code>--edge</code> option to ensure that the Gemfile will point to the rails edge version, either from a local clone or github. I used edge here, cause it’s better for most of you – you can get the newest rails version with a simple <code>bundle update</code>.
The <code>--mountable</code> option will add files that will come in handy for developing a mountable engine, such as: integration tests, <code>Blog::ApplicationController</code>, <code>config/routes.rb</code> and so on.</p> <p>Let’s see what the key files look like.</p> <p>Probably the most important file is <code>lib/blog/engine.rb</code>, which keeps the definition of the engine:</p> <figure class="highlight"><pre><code class="language-ruby" data-module Blog class Engine < Rails::Engine isolate_namespace Blog end end</code></pre></figure><p>The thing that differs from what we could see in a Rails 3.0 engine is <code>isolate_namespace</code>, which makes the engine isolated from the host application. If you don’t remember how it works exactly, please check <a href="">the documentation</a> or <a href="">my last post</a>.</p> <p>The next important thing is the <code>config/routes.rb</code> file:</p> <figure class="highlight"><pre><code class="language-ruby" data-Blog::Engine.routes.draw do end</code></pre></figure><p>These are empty routes, but as you can see, they belong to the engine, not to the host application.</p> <p>The last thing that you may not be familiar with is the dummy application located in <code>test/dummy</code>. This is a standard rails application that will be used to test the engine, both with automated tests and manually during development. The nice thing about the new plugin generator is that with the <code>--mountable</code> option, it automatically mounts the engine in <code>test/dummy/config/routes.rb</code>.</p> <p>Ok, let’s write some code! Or… actually don’t write code, I’m too lazy, I’m gonna use the scaffold generator:</p> <figure class="highlight"><pre><code class="language-bash" data-rails g scaffold post title:string body:text</code></pre></figure><p>While the scaffold should behave exactly the same as the one generated in a regular app, there are some differences. Notice that the migration is called <code>create_blog_posts</code> instead of <code>create_posts</code>. Also, almost all of the files are placed in the <code>blog/</code> subdirectory.
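(Why <code>create_blog_posts</code> rather than <code>create_posts</code>? An isolated engine prefixes its table names with the engine name. The following plain-Ruby sketch is only an illustration of that naming rule, not Rails’ actual implementation, and its pluralization is deliberately naive:)

```ruby
# Simplified sketch: derive a prefixed table name from a namespaced
# model class name, roughly what an isolated engine does for Blog::Post.
def engine_table_name(class_name)
  namespace, model = class_name.split("::")
  underscore = ->(s) { s.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase }
  # naive "add an s" pluralization, enough for this example
  "#{underscore.call(namespace)}_#{underscore.call(model)}s"
end

engine_table_name("Blog::Post") # => "blog_posts"
```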
When you open <code>app/models/blog/post.rb</code>, you will see:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="k">module</span> <span class="nn">Blog</span> <span class="k">class</span> <span class="nc">Post</span> <span class="o"><</span> <span class="no">ActiveRecord</span><span class="o">::</span><span class="no">Base</span> <span class="k">end</span> <span class="k">end</span></code></pre></figure><p>Everything is namespaced for a good reason: we want to avoid conflicts between engine and host application.</p> <p>Ok, now we can migrate our database (sqlite3 by default):</p> <figure class="highlight"><pre><code class="language-bash" data-rake db:migrate</code></pre></figure><p>Engine’s Rakefile includes all the tasks that are needed to manage database. Migrations are run from both engine’s and dummy’s application directories to make development and testing easier. Finally, it’s time to check it!</p> <figure class="highlight"><pre><code class="language-bash" data-rails s</code></pre></figure><p>Now point your browser to <a href=""></a>. You should see the standard rails scaffold working – the only difference is that it’s namespaced with <code>/blog</code> path. It was easy, wasn’t it?</p> .</p> Lightweight controllers with rails 3 2010-12-12T00:00:00+00:00 2010-12-12T00:00:00+00:00 <p>Some time ago <a href="">I showed you</a>, how easily you can reuse rails 3 modules. Today I want to present lightweight controllers, which is also really really easy in rails 3.</p> <p>Why would you want controller to be more lightweight? Speed of course. We all know that rails can’t scale and ruby is slow, let’s make it a little bit faster ;)</p> <p>Let’s start with getting familiar with new controller architecture. The base for all the controllers in rails is <a href=""><code>AbstractController::Base</code></a>. If you want to build controllers without support for handling requests, you can start with that as a base. 
What if we want to create something looking more like standard rails controllers? We can check what the implementation of <code>ActionController::Base</code> looks like: <a href=""><code>ActionController::Base</code></a></p> <p>As you can see, it inherits from <code>ActionController::Metal</code> and <a href="">includes a bunch of modules</a>. Looks like Metal will be a good start for our purpose. Let’s say that we want to render a simple <span class="caps">JSON</span> response generated from a model:</p> <figure class="highlight"><pre><code class="language-ruby" data-# app/controllers/api_controller.rb class ApiController < ActionController::Metal include ActionController::Rendering def index render :text => "Good morning!" end end</code></pre></figure><p>As you can see, I created a controller that inherits from <code>ActionController::Metal</code> and included <code>ActionController::Rendering</code>. To render a simple <span class="caps">JSON</span> response, we will not need anything more.</p> <p>How fast is it compared to <code>ActionController::Base</code>?
For comparison I’ve created a controller with exactly the same action:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="c1"># app/controllers/home_controller.rb</span> <span class="k">class</span> <span class="nc">HomeController</span> <span class="o"><</span> <span class="no">ActionController</span><span class="o">::</span><span class="no">Base</span> <span class="k">def</span> <span class="nf">index</span> <span class="n">render</span> <span class="ss">:text</span> <span class="o">=></span> <span class="s2">"Good morning!"</span> <span class="k">end</span> <span class="k">end</span></code></pre></figure><p>I’ve done a few benchmarks, but don’t take them too seriously, these are not scientific, I just wanted to show you approximately how it can differ. Here are the ab results: <a href=""></a>. As you can see, <code>ActionController::Metal</code> is almost 40% faster. This can of course vary, depending on the version of ruby, concurrency (I used 1) and other factors, but clearly it can give you a nice boost.</p> <p>You may ask, <del>why can’t we use <a href="">Rails Metal</a>, which is available for quite some time now</del> <ins>rack middleware</ins> (rails metal was removed in rails 3). We could, but then we lose the possibility to use the router and include other modules into <code>ActionController::Metal</code> (like <code>ActionController::Helpers</code> or other useful things).</p> <p>I hope that this post will help you to experiment with the cool new Rails 3 features! If you want more of that stuff, in better form and with much better explanation and depth, I highly recommend reading José Valim’s book <a href="">Crafting Rails Applications</a>!</p> Mountable engines - RSoC Wrap Up 2010-09-14T00:00:00+00:00 2010-09-14T00:00:00+00:00 <p>As some of you probably know, during the summer I was working on the “Rails mountable applications” project thanks to Ruby Summer of Code.
It was a great experience, I learned a lot of new things and I hope that I delivered something useful for the community. I tweeted that the other day, but I must emphasize: it would not have been possible without great support from my mentors: Carl Lerche, Yehuda Katz and José Valim! They helped me a lot with both ideas and implementation. The biggest internet hug should go to José, who spent an enormous amount of his time on discussions, reviewing my commits and helping me to set the goals. I also want to thank all the sponsors and people that helped to organize Ruby Summer of Code.</p> <p>Getting back to my work. My main task was to extend the capabilities of rails engines. Although at first I wanted to get straight to mountable full rails applications, after discussions with Carl and José, I knew that starting with the smaller target would be the more sane way to go. The biggest problem with implementing mountable applications (that is, running more than one rails application in the same process) is configuration and application initialization in general. Right now config values are shared between all the railties classes, because they’re kept in class variables. The other problem with mountable apps is that they’re a pretty new concept in the rails community and we will need some time to settle standards. With those problems in mind, it is much better to test the concept with engines and implement truly mountable applications later, based on results and feedback. Right now, engines are almost as powerful as applications. The main difference is that an engine can’t be run without an application.
The best thing about such a strategy is that the <span class="caps">API</span> for using more than one application is already here, so if it is needed, converting an engine to an application will be as easy as changing a bunch of configuration files.</p> <p>In this post, I would like to briefly describe my changes.</p> <h2>Extending engines</h2> <p>The first thing needed to allow building entire applications on engines was to add some of the application’s features to engines. That’s why engines got:</p> <ul> <li>their own middleware stack</li> <li>routes</li> <li>plugins support</li> <li>config/environment.rb</li> </ul> <p>With all the capabilities that engines had before, an engine is now almost as powerful as an application.</p> <h2>Mounting engine</h2> <p>Since an engine is now a rack app, you can simply mount it in your application’s routes:</p> <figure class="highlight"><pre><code class="language-ruby" data-mount Blog::Engine => "/blog"</code></pre></figure><p>This will mount <code>Blog::Engine</code> at the <code>/blog</code> path. There are 2 huge benefits of such a method:</p> <ul> <li>you can change the engine’s path, which was much harder before</li> <li>you can use all of the routes magic</li> </ul> <p>Want to mount your engine with a dynamic scope?
No problem:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="n">scope</span> <span class="s2">"/:username"</span><span class="p">,</span> <span class="ss">:username</span> <span class="o">=></span> <span class="s2">"default"</span> <span class="k">do</span> <span class="n">mount</span> <span class="no">Blog</span><span class="o">::</span><span class="no">Engine</span> <span class="o">=></span> <span class="s2">"/blog"</span> <span class="k">end</span></code></pre></figure><p>You can also use devise to force authentication for the engine:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="n">authenticate</span> <span class="ss">:admin</span> <span class="k">do</span> <span class="n">mount</span> <span class="no">Tolk</span><span class="o">::</span><span class="no">Engine</span> <span class="o">=></span> <span class="s2">"/tolk"</span> <span class="k">end</span></code></pre></figure><p>It’s easy, isn’t it?</p> <h2>Cross applications routes <span class="caps">API</span></h2> <p>Since you can mount engine with its own router inside application, there is a possibility of having more than one router in your app. To handle all the routers easily, there are new helpers providing access for each router. Application’s router is always available as <code>main_app</code>. 
That said, you can call any application route from engine just like that:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="n">main_app</span><span class="p">.</span><span class="nf">logout_path</span> <span class="n">main_app</span><span class="p">.</span><span class="nf">root_path</span></code></pre></figure><p>Helpers for other routers are available after mounting an engine:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="n">mount</span> <span class="no">Blog</span><span class="o">::</span><span class="no">Engine</span> <span class="o">=></span> <span class="s2">"/blog"</span> <span class="c1"># default helper for such engine is "blog":</span> <span class="n">blog</span><span class="p">.</span><span class="nf">posts_path</span> <span class="n">blog</span><span class="p">.</span><span class="nf">root_path</span></code></pre></figure><p>If you need to change the helper name, just pass the <code>:as</code> attribute:<">"my_blog"</span> <span class="c1"># now helper is called my_blog:</span> <span class="n">my_blog</span><span class="p">.</span><span class="nf">posts_path</span> <span class="n">my_blog</span><span class="p">.</span><span class="nf">root_path</span></code></pre></figure><p>You can also use those helpers in polymorphic url (which is used, among the other places, in <code>form_for</code>):</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="n">polymorphic_url</span><span class="p">([</span><span class="n">blog</span><span class="p">,</span> <span class="vi">@post</span><span class="p">])</span> <span class="n">form_for</span><span class="p">([</span><span class="n">blog</span><span class="p">,</span> <span class="vi">@post</span><span class="p">])</span></code></pre></figure><p>Note that you have to explicitly add those helpers to <code>polymorphic_url</code> only when you need to call an engine route from your application or from another engine.</p> 
<h2>Namespacing</h2> <p>Having more than one source of controllers, models and helpers can cause conflicts. Imagine you have a <code>Post</code> model in your application and you would like to install a blog engine that has a model with the same name. To avoid that, you can put your engine inside a namespace. Although it will help you with conflicts, it’s not a sufficient level of separation for some engines. There are basically two possible scenarios here. One of them is a shared engine, when you want to share helpers between the engine and the application. A good example of such an engine is <a href="">Devise</a>. The other use case is when you want to make an isolated engine, which will not likely share anything. It could be an engine that provides tools for the application, like <a href="">Tolk</a>, or another app like a blog or <span class="caps">CMS</span>.</p> <p>Engines are shared by default, therefore you must explicitly mark an engine as isolated with the <code>isolate_namespace</code> method:</p> <figure class="highlight"><pre><code class="language-ruby" data-module Blog class Engine < Rails::Engine isolate_namespace Blog end end</code></pre></figure><p>With such an engine definition, only the engine’s helpers and routes will be included inside the engine’s controllers and views.</p> <h2>Migrations</h2> <p>If you are one of the NoSQL (or rather schema less) database users, you can probably skip that point. In other cases you will probably need migrations. We decided that the easiest way to handle an engine’s migrations is to copy them to the application’s db/migrate directory and change their timestamps so as not to break the migrations timeline.
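Conceptually, the copy step just re-stamps each engine migration with a fresh, strictly increasing timestamp. A simplified plain-Ruby sketch of that idea (an illustration only, not the actual rake task):

```ruby
require "fileutils"

# Simplified sketch of copying engine migrations into the application
# with fresh, strictly increasing timestamps (not the actual rake task).
def copy_migrations(engine_dir, app_dir, start_time = Time.now)
  FileUtils.mkdir_p(app_dir)
  Dir[File.join(engine_dir, "*.rb")].sort.each_with_index do |path, i|
    name  = File.basename(path).sub(/\A\d+_/, "")      # drop the old timestamp
    stamp = (start_time + i).utc.strftime("%Y%m%d%H%M%S")
    FileUtils.cp(path, File.join(app_dir, "#{stamp}_#{name}"))
  end
end
```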
It can be done by simply calling a rake task:</p> <figure class="highlight"><pre><code class="language-bash" data-rake railties:copy_migrations <span class="c"># or to copy only selected engines</span> rake railties:copy_migrations <span class="nv">RAILTIES</span><span class="o">=</span>foo,bar</code></pre></figure><p>The nice side effect of copying migrations is that you can easily review them before applying.</p> <h2>Assets</h2> <p>The chances are that you will have some assets in your engine’s public directory. The default way of serving assets in development mode in Rails is the <code>ActionDispatch::Static</code> middleware. If you have mounted any engine, it will automagically serve their assets. In production you have 2 options:</p> <ul> <li>stay with <code>ActionDispatch::Static</code> by turning it on with <code>config.serve_static_assets = true</code> in your <code>environment.rb</code></li> <li>create symlinks to engines’ public directories</li> </ul> <p>You can automatically create symlinks with a rake task:</p> <figure class="highlight"><pre><code class="language-bash" data-rake railties:create_symlinks</code></pre></figure><p>If you want to change the default asset path, you can set it in the <code>Engine</code> definition:</p> <figure class="highlight"><pre><code class="language-ruby" data-module Blog class Engine < Rails::Engine config.asset_path = "/my_blog_assets%s" # note %s at the end end end</code></pre></figure><p>This will change both <code>ActionDispatch::Static</code>’s and the create_symlinks rake task’s behavior.</p> <h2>Why does it matter?</h2> <p>Although engines have been available since Rails 2.3, and even earlier as a plugin, building isolated applications using them can be hard. With the new <span class="caps">API</span>, building entire apps (like a forum, blog or <span class="caps">CMS</span>) that can be reused will be much easier. But hey, big components are evil, right?
I would say that it depends on your needs. There is huge space for engines as tools that help develop your application. If you use i18n, there is <a href="">Tolk</a>. Most applications could probably benefit from <a href="">Rails Admin</a> (it’s also one of the RSoC projects). Also the new Carlhuda’s ;-) work on <a href="">Rails 3.1 assets</a> is based on engines.</p> <p>Another topic is applications. You can argue that making a flexible application that would suit the needs of many developers will be hard and inefficient… and you are probably right. However, there are many situations when you need to quickly add some <span class="caps">CMS</span> or a small forum to your application and you pretty much do not care about features. It’s not your core product, you just need something simple. Of course you can always do it by yourself, but what are the benefits? If you’re not trying to revolutionize <span class="caps">CMS</span> systems and just want an easy way to add a few articles to your site, you can use something generic. If it’s really needed, you can always write your own better suited engine later on and replace the previous one.</p> <h2>What’s next?</h2> <p>My work is already merged to rails master and most of the things are finished, but there are still some places that need more work. If you have any suggestions on engines, feel free to comment, add a ticket on lighthouse or send a pull request with changes.</p> <p>There is also a need for engine creators that will battle test the new <span class="caps">API</span> and new concepts. If you have any questions feel free to contact me on <a href="">twitter</a>, <a href="">github</a> or <a href="mailto:drogus@gmail.com">by email</a>. I will also prepare a guide shortly, so stay tuned!</p> RSoC status: Namespacing engines 2010-09-06T00:00:00+00:00 2010-09-06T00:00:00+00:00 <p.</p> <p>The big news is: my work has been merged to the rails official repository recently!
That means that you can start using the things that I described right away (you just need to use edge version of rails).</p> <p:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="c1"># ENGINE/app/models/blog/comment.rb</span> <span class="k">module</span> <span class="nn">Blog</span> <span class="k">class</span> <span class="nc">Comment</span> <span class="o"><</span> <span class="no">ActiveRecord</span><span class="o">::</span><span class="no">Base</span> <span class="k">end</span> <span class="k">end</span> <span class="c1"># ENGINE/app/controllers/blog/comments_controller.rb</span> <span class="k">module</span> <span class="nn">Blog</span> <span class="c1"># note that ApplicationController here is in fact Blog::ApplicationController</span> <span class="k">class</span> <span class="nc">CommentsController</span> <span class="o"><</span> <span class="no">ApplicationController</span> <span class="k">end</span> <span class="k">end</span></code></pre></figure><p>At first, it may seem that this will not require any changes to Rails, it’s standard Ruby technique, right? To some degree it’s true, but there are some places that made it hard to use namespaced engines before.</p> <p>Let’s look at controllers as an example. In rails 3.0, when you create any class that inherits from <code>ActionController::Base</code>,.</p> <p>There are basically 2 types of engines. Some of the engines are considered to be a part of application, they provide helpers and controllers that will be used within application and there is no need to isolate them (take <a href="">Devise</a> as example of such engine). I will call it “shared engine”. The second use case is engine that should not share anything, like blog engine. 
I will call such constructs "isolated engines".</p> <p>To make an engine isolated, you can use the <code>isolate_namespace()</code> method in the Engine definition:</p> <figure class="highlight"><pre><code class="language-ruby" data-module Blog class Engine < Rails::Engine isolate_namespace(Blog) end end</code></pre></figure><p>With such code, all of the stuff namespaced with <code>Blog</code> will be isolated. There are quite a few effects of that. As I mentioned before, one of the problems is controllers. With an engine explicitly marked as namespaced, controllers will include only helpers from within the same namespace and they will include only the engine’s routes.</p> <p>The next thing is the router itself. When you have every controller inside the module, normally you would have to reflect that in routes:</p> <figure class="highlight"><pre><code class="language-ruby" data-Blog::Engine.routes.draw do scope(:module => :blog) do resources :posts end end</code></pre></figure><p>This code will ensure that the <code>posts</code> resource points to <code>Blog::PostsController</code>. With an isolated engine this is not needed at all. You can omit that scope, cause it will be applied automagically behind the scenes. With an engine marked as isolated you can just do:</p> <figure class="highlight"><pre><code class="language-ruby" data-Blog::Engine.routes.draw do # everything here is namespaced with :blog namespace resources :posts end</code></pre></figure><p>The next thing to change was <code>ActiveModel::Naming</code>. It is used in many places in rails and by default it keeps the namespace, e.g. it would convert <code>Blog::Post</code> to <code>blog_post</code>. While in some places it’s ok, it can be a major <span class="caps">PITA</span> in other ones. The first pain point is <code>polymorphic_url</code>, which is used, among other things, in <code>form_for</code>.
Let’s consider an example with the <code>Blog::Post</code> model:</p> <figure class="highlight"><pre><code class="language-erb" data-# @post = Blog::Post.new <%= form_for(@post) do |f| %> <%= f.text_field :title %> <%= f.submit %> <% end %></code></pre></figure><p>Without any changes in <code>ActiveModel::Naming</code>, such code would try to use the <code>blog_posts_path</code> helper. Using the namespace here is not needed at all, as you can’t have a conflict with helpers from other routers. That’s why in an isolated engine <code>form_for(@post)</code> will use <code>posts_path</code> and thanks to that, we didn’t need to add that prefix to all the routes.</p> <p>The next thing connected with forms and the model name is param names. Normally, the last piece of code would generate the field <code><input type="text" name="blog_post[title]" id="blog_post_title" /></code>. That kind of prefix is also not needed, as it’s highly improbable that you would need <code>params[:post]</code> for some other thing (and even if it is the case for you, you can use <code>:as => :some_other_name</code> to change the default).
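Under the hood, the param key boils down to demodulizing and underscoring the model class name. A rough plain-Ruby sketch of the shared vs isolated difference (a simplified illustration, not the actual <code>ActiveModel::Naming</code> code):

```ruby
# Simplified sketch of deriving a form param key from a model class
# name (not the actual ActiveModel::Naming implementation).
def underscore(name)
  name.gsub("::", "_").gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
end

def param_key(class_name, isolated_namespace = nil)
  if isolated_namespace && class_name.start_with?("#{isolated_namespace}::")
    class_name = class_name.sub("#{isolated_namespace}::", "") # drop "Blog::"
  end
  underscore(class_name)
end

param_key("Blog::Post")         # => "blog_post" (shared engine)
param_key("Blog::Post", "Blog") # => "post"      (isolated engine)
```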
The <code>blog_</code> is also omitted here, and in an isolated engine the field would look just like this: <code><input type="text" name="post[title]" id="post_title"/></code>.</p> <p>All of the other places where <code>ActiveModel::Naming</code> is used, like partials, generating dom elements or i18n, take the namespace into account.</p> <p>As you can see, there were quite a few changes in the Rails code to handle namespaces, but in the end they’re pretty transparent to the developer.</p> <h2>Rails3 modularity (2010-07-31)</h2> <p>Since:</p> <figure class="highlight"><pre><code class="language-ruby"># app/parts/articles_part.rb
class ArticlesPart < Parts::Base
  def index
    @articles = Article.limit(params[:limit] || 10).order("created_at DESC")
  end
end</code></pre></figure><figure class="highlight"><pre><code class="language-erb"># app/parts/views/articles_part/index.html.erb
<ul>
  <% @articles.each do |article| %>
    <li><%= article.title %></li>
  <% end %>
</ul></code></pre></figure><p>This simple part will fetch the last <code>params[:limit]</code> articles, or 10 if the limit is not provided. Let’s use it in our view:</p> <figure class="highlight"><pre><code class="language-erb"><%= part(ArticlesPart => :index, :limit => 5) %></code></pre></figure><p>Such a call will render a list of the last 5 articles.</p> <p>The question is: how many lines of code does it take to implement something like that, with the ability to render views, layouts, <code>:inline</code>, use helpers, filters and much more? Over 100 lines of code, including the <code>part()</code> helper and a railtie (the railtie is used to plug it into Rails) – if you don’t believe me, grab the <a href="">repo</a> and check it yourself.</p> <p>How is it possible?
Let’s look at the implementation of <code>Parts::Base</code>:</p> <figure class="highlight"><pre><code class="language-ruby">require 'parts/default_layout'

module Parts
  class Base < AbstractController::Base
    attr_reader :params

    include AbstractController::Layouts
    include AbstractController::Translation
    include ActionController::Helpers
    include AbstractController::Rendering
    include ActionController::ImplicitRender
    include DefaultLayout
    include AbstractController::Callbacks

    def initialize(controller, params)
      @params = controller.params.dup
      @params.merge!(params) unless params.empty?
      self.formats = controller.formats
    end

    def self.inherited(klass)
      super
      klass.helper :all
    end
  end
end</code></pre></figure><p>That’s all… ? Yes!</p> <p>As you can see, <code>Parts::Base</code> inherits from <code>AbstractController::Base</code>, which gives it really basic functionality. Additionally, a few helpers are included to add a bit more behavior. The only mixin that I needed to create myself is <code>Parts::DefaultLayout</code>, which ensures that a layout with the name of the part is rendered by default, unless <code>:layout => false</code> is given or there is no such layout in the layouts directory.</p> <p>Ok, that all is nice and dandy, but how does it work internally?</p> <p>The implementation of such a pattern can be demonstrated with the following code:</p> <figure class="highlight"><pre><code class="language-ruby">class Foo
  def foo
    puts 'foo'
  end
end

module Bar
  def foo
    puts 'bar'
    super
  end
end

module Baz
  def foo
    puts 'baz'
    super
  end
end

class Omg < Foo
  include Bar
  include Baz

  def foo
    puts 'omg'
    super
  end
end

Omg.new.foo
#=> omg
#   baz
#   bar
#   foo</code></pre></figure><p>…</p> <p>When you instantiate an Omg object with <code>Omg.new</code> and call <code>foo</code>, each <code>super</code> call walks up the ancestor chain: <code>Omg</code> first, then the included modules in reverse inclusion order (<code>Baz</code>, then <code>Bar</code>), and finally the superclass <code>Foo</code>.</p> <p>In <code>Parts::Base</code> I used a few modules from AbstractController, but also a module from ActionController. With an implementation without mixins, I would need to take an all-or-nothing approach.</p> <p>…</p> <h2>RSoC status: routes (a.k.a OMG it's hard) (2010-07-20)</h2> <p>In my last post I briefly described some of the RSoC changes and plans. One of the things that I left out is the router. The topic is much harder and I think it deserves a separate blog post.</p> <p>…</p> <p>For a good start, let’s identify the things to consider:</p> <ul> <li>recognition</li> <li>generation</li> <li>named routes from more than one router (for example <code>posts_path</code>)</li> </ul> <h2>Recognition</h2> <p>At the beginning I would like to explain how recognition works.
Here is a simple example of an engine mounted in an application:</p> <figure class="highlight"><pre><code class="language-ruby"># APP/config/routes.rb
resources :users
mount Blog::Engine => "/blog"</code></pre></figure><p>As you can see, the engine provides a :posts resource and it’s mounted at the /blog path. Let’s study a simple request to our blog:</p> <figure class="highlight"><pre><code class="language-ruby">GET /blog/posts</code></pre></figure><p>The application’s router will match the first part of the path, which is <code>/blog</code>, with the <code>/blog</code> mount point and pass the request to the rack app mounted there (which in that case is <code>Blog::Engine</code>). To allow the engine to recognize the path properly, it needs to be passed <code>/posts</code> as the path. The <code>/blog</code> part (which I will call the prefix later on) is not a part of the mounted engine, but we don’t want to lose that information by simply removing it from <span class="caps">PATH</span>. In that case the <code>/blog</code> prefix is attached to <code>env["SCRIPT_NAME"]</code> (<code>SCRIPT_NAME</code> is part of <a href="">rack’s spec</a>). That way, the engine will get only the <code>/posts</code> part as <span class="caps">PATH</span>, which will allow it to recognize the path properly.</p> <p>The only problem with recognition is connected with rack middlewares.
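<p>The prefix handoff described above can be sketched in plain Ruby. This is a toy sketch, not Rails’ actual code — <code>pass_to_mounted_app</code> is a made-up helper name — but it shows the essential move: the matched mount point migrates from <code>PATH_INFO</code> to <code>SCRIPT_NAME</code> before the mounted app is called:</p>

```ruby
# Toy sketch of the prefix handoff: when the application router matches a
# mount point, it moves the matched prefix from PATH_INFO to SCRIPT_NAME
# before calling the mounted engine, so no information is lost.
def pass_to_mounted_app(env, mount_point)
  path = env["PATH_INFO"]
  return nil unless path.start_with?(mount_point)

  env = env.dup
  env["SCRIPT_NAME"] = env["SCRIPT_NAME"].to_s + mount_point
  env["PATH_INFO"]   = path[mount_point.length..-1]
  env
end

env = { "SCRIPT_NAME" => "", "PATH_INFO" => "/blog/posts" }
engine_env = pass_to_mounted_app(env, "/blog")
engine_env["SCRIPT_NAME"] # => "/blog"  (the prefix, preserved)
engine_env["PATH_INFO"]   # => "/posts" (what the engine's router sees)
```

<p>The engine’s router then recognizes <code>/posts</code> as if it were mounted at the root, while url generation can still read the prefix back from <code>SCRIPT_NAME</code>.</p>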
Consider such an example:</p> <figure class="highlight"><pre><code class="language-ruby"># APP/config/routes.rb
mount Blog::Engine => "/blog"
match "/blog/omg" => "omg#index"

# APP/app/controllers/omg_controller.rb
class OmgController < ApplicationController
  use SomeMiddleware
end

# Blog::Engine definition
module Blog
  class Engine < Rails::Engine
    config.middleware.use SomeMiddleware
  end
end</code></pre></figure><p>What will happen when you request the <code>/blog/omg</code> path? It will call <code>Blog::Engine</code> first, as it has a higher priority than <code>/blog/omg</code>. The request will pass through the Engine’s middleware stack, firing SomeMiddleware, and hit the Engine’s router. If <code>/omg</code> is not a valid route for <code>Blog::Engine</code>, it will return 404 and the Application’s router will try the next route, that is <code>/blog/omg</code> pointing to <code>OmgController</code>. The middleware stack for that controller also includes <code>SomeMiddleware</code>, so it will be fired again, which is probably not something that we want.</p> <p>The solution is to mount apps with a lower priority than other routes.</p> <h2>Generation</h2> <p>The problem with generation is related to the mount point, the place where the engine is mounted.
Consider such an example:</p> <figure class="highlight"><pre><code class="language-ruby"># APP/config/routes.rb
match "/" => "users#index"

scope "/:user", :user => "drogus" do
  mount Blog::Engine => "/blog"
end</code></pre></figure><p><code>Blog::Engine</code> is mounted at <code>"/:user/blog"</code> (let’s just suppose that the blog implements a multi user setup) and it provides <code>posts</code> resources. As you can remember from the recognition part, the <code>/:user/blog</code> part is the engine’s prefix. Additionally, the default user is set to “drogus”, so when :user is not specified it will be set to drogus. Imagine that you need to generate a path to posts. Something that you would normally achieve by calling the <code>posts_path</code> method. With more than one router the situation is not so simple. We can’t simply use named url helpers (like <code>posts_path</code>) in the application’s controllers, so we need some other way to handle that (if you’re wondering why we can’t just use <code>posts_path</code>, it’s explained in the named routes section of this post).</p> <p>The first problem connected with that is the <span class="caps">API</span>. Jeremy Kemper came up with a helper that allows using a mounted engine’s url helpers by simply calling <code>some_engine.posts_path</code>. The name of that helper is taken from the Engine’s name or from the <code>:as</code> option used in the mount method. Because the <code>:as</code> option is not provided in my example, the helper’s name will be <code>blog_engine</code> (based on <code>Blog::Engine</code>).
Using that helper we can generate paths for the Engine:</p> <figure class="highlight"><pre><code class="language-ruby">blog_engine.posts_path
blog_engine.url_for @post
blog_engine.polymorphic_path @post
blog_engine.url_for :controller => "posts", :action => "index"</code></pre></figure><p>We can also generate the application’s urls with the <code>app</code> helper:</p> <figure class="highlight"><pre><code class="language-ruby">app.root_url</code></pre></figure><p>…</p> <h4>Generating engine’s url inside application’s controller</h4> <figure class="highlight"><pre><code class="language-ruby"># APP/app/controllers/foo_controller.rb
class FooController < ApplicationController
  def index
    blog_engine.posts_path #=> "/drogus/blog/posts"
  end
end</code></pre></figure><p>Generating the posts path inside one of the application’s controllers (and consequently views and helpers) should generate the prefix according to the mount point. In that particular situation, the default user will be inserted in place of :user, so the url will be <code>"/drogus/blog/posts"</code>.
You could also do <code>blog_engine.posts_path(:user => "john")</code>, which would generate <code>"/john/blog/posts"</code>.</p> <h4>Generating engine’s url inside the engine</h4> <figure class="highlight"><pre><code class="language-ruby"># Blog/app/controllers/posts_controller.rb
class PostsController < ApplicationController
  def index
    posts_path #=> ??
  end
end</code></pre></figure><p>Why not just use <code>blog_engine.posts_path</code>? In that example it would work, but note that an engine can be mounted with a different <code>:as</code> option. With a mount looking like:</p> <figure class="highlight"><pre><code class="language-ruby">mount Blog::Engine => "/blog", :as => "blog"</code></pre></figure><p>you would have to use <code>blog.posts_path</code> instead of <code>blog_engine.posts_path</code>. Basically, an engine should not need to know how it’s mounted. That said, we don’t have any information about the options used to mount it; all that we know is the request that was used to reach the engine.</p> <p>But what about the generated path? Someone could say that it should also generate the prefix, but that would not work as expected. Imagine that someone requested one of your users’ blogs with the path <code>"/dhh/blog/posts/1"</code>. When you click on a link with a url generated by <code>posts_path</code>, you should stay in the same scope, so the url should depend on your current path. This is achieved by using the <code>env["SCRIPT_NAME"]</code> value. In a request to <code>"/dhh/blog/posts/1"</code>, the script name would be set to <code>"/dhh/blog"</code>, as this is the part of the path that does not belong to the engine. It should be clear now that the example above will generate the <code>"/dhh/blog/posts"</code> path for such a request.</p> <p>The next thing that is worth mentioning is the <code>_routes</code> method. When you call posts_path directly, it must use a routes object. This object is available through the <code>_routes</code> method. This method is defined when you include <code>url_helpers</code>, so it points to the application’s routes by default.
When <code>Blog::Engine.routes.url_helpers</code> are included in PostsController, <code>_routes</code> is changed to use the engine’s routes, and because of that we can use posts_path safely.</p> <h4>Generating application’s url inside the engine</h4> <figure class="highlight"><pre><code class="language-ruby"># Blog/app/controllers/posts_controller.rb
class PostsController < ApplicationController
  def index
    app.root_path
  end
end</code></pre></figure><p>We would like to generate root_path from our application’s router. Obviously it should generate the <code>"/"</code> path, without the Engine’s prefix. The only exception is the situation when the app is hosted in a sub path (eg /myapp). This can be done with Phusion Passenger, using the RailsBaseURI option. The <code>/myapp</code> part would be passed as <code>SCRIPT_NAME</code> in such a case. Nothing complicated, we already know how to use <code>SCRIPT_NAME</code>, right? Not really (wouldn’t it be too simple? ;-). The problem is, <code>SCRIPT_NAME</code> is kept as a string. Let’s see what the request to <code>"/myapp/user/blog/posts"</code> looks like.</p> <p>At first it hits the application. <code>SCRIPT_NAME</code> is set to <code>/myapp</code> by Passenger and the path is <code>/user/blog/posts</code>. Now, the application’s router recognizes that this request should go to the engine, so <code>Blog::Engine</code> is called. As the engine needs only <code>/posts</code> as the path, the prefix (<code>/user/blog</code>) will be attached to <code>env["SCRIPT_NAME"]</code>, resulting in <code>/myapp/user/blog</code>. As this is one string and we need to get just the application’s script name, the solution is not obvious. How do we get the original script name?
Right now our approach is to use whatever is set in <code>Rails.application.routes.default_url_options[:script_name]</code>, so it should be set to “/myapp” in that case.</p> <h4>Generating engine’s url in any other class (including ActionMailer)</h4> <p>In that case, the url should be generated with the prefix (which would be <code>/user/blog</code> in my example). In ActionMailer we are not inside a request, so <code>script_name</code> is not available, and with that in mind we need to generate the full path with <code>"/user/blog/"</code> at the beginning.</p> <h4>Solution</h4> <p>As you can see, we can’t always rely on <code>env["SCRIPT_NAME"]</code> here. How do we check if we should attach the script name? We need to check if the routes used to generate the url are the same as the routes connected with the current request.</p> <p>Right now, to make it possible, the router object is passed via <code>env["action_dispatch.routes"]</code>. When the application is called, it sets it to <code>Rails.application.routes</code>, and then when the engine is called, it sets its own router there. That way, we always know where we currently are.
If the <code>_routes</code> method points to the same routes as <code>env["action_dispatch.routes"]</code>, it means that we try to generate an engine’s url inside the engine and we should use the <code>SCRIPT_NAME</code>.</p> <h2>Named routes</h2> <p>The initial idea was to allow using named routes from 2 routers in one scope, just like that:</p> <figure class="highlight"><pre><code class="language-ruby"># APP/app/controllers/foo_controller.rb
class FooController < ActionController::Base
  # Rails.application.routes.url_helpers are included by default
  include Blog::Engine.routes.url_helpers

  def index
    posts_path #=> "/blog/posts" - path from engine's router
    root_path  #=> "/" - path from application's router
  end
end</code></pre></figure><p>Although it would be handy if you had to use paths from a mounted engine a lot, it is the cause of many issues:</p> <ul> <li>a named route method must know the router that it was generated by, as now we could have multiple routers</li> <li>there is a collision problem if 2 routers have the same helpers (a situation where both routers have, for example, posts_path is not so common, but the problem will exist for root_path for sure)</li> <li>currently, after including url_helpers twice, the <code>_routes</code> method will point to the second routes (<code>Blog::Engine.routes</code> in the last example), which will cause problems with using url helpers directly</li> </ul> <p>That’s why this idea was dropped in favor of explicit helpers (like the <code>blog_engine.posts_path</code> helper). It also means that including url_helpers from an engine will overwrite the current routes, so in the case above, you can’t directly use application routes any more. In such a case you should use the <code>app</code> helper.</p> <p>I hope that this post is a good introduction to the current status of router usage in mountable apps.
As usual: if you have any ideas, feature requests, critique or any other thoughts that could help bring mountable apps to life, you’re more than welcome ;-)</p> <h2>RSoC status: bringing engines closer to application (2010-07-06)</h2> <p>I’ve been working on Ruby Summer of Code for the last 2 weeks and so far it’s great! In this post I will try to sum up the work on engines and outline a couple of problems that are still not solved.</p> <p>The first idea for RSoC was to bring Rails::Engine closer to Rails::Application. One of the long term targets is to allow running more than one Application instance in one process. <a href="">As I described in my last post</a>, an application is a bit more specialized engine, so while moving most of the functionality from Application to Engine, I could identify and solve most of the problems with running several apps in one process.</p> <p>First things first. What can Engine do right now and where is it used in Rails? When you drop anything into the vendor/plugins directory, it will implicitly be declared as an Engine. The features of an engine are:</p> <ul> <li>everything in app/* works as in an application</li> <li>it can load config/routes.rb</li> <li>config/locales/* are automatically picked up by I18n</li> <li>config/initializers works as in an application</li> <li>engines also have initializer blocks inside the Rails::Engine class to customize rails booting</li> <li>you can customize paths like in an application (for example change where controllers are)</li> <li>it can define custom generators</li> <li>rake tasks are loaded from lib/tasks</li> </ul> <p>All these features are great, but we can take it even further. Here is the plan for bringing Rails::Engine closer to Rails::Application.
Rails::Engine should:</p> <ul> <li>be a Rack app</li> <li>have a middleware stack</li> <li>have its own routes</li> <li>allow storing assets in public/</li> <li>be able to run its own migrations</li> <li>be able to load plugins</li> <li>allow doing more configuration</li> <li>allow namespacing models and controllers without problems</li> </ul> <p>A few things from that list are already finished (not in rails master yet, on <a href="">my fork</a> for now). I will describe my changes, but beware, this is code that’s not currently a part of rails and it can be changed before merging it to rails. It’s here mainly for getting feedback.</p> <p>An Engine can now be a rack application by providing a rack endpoint:</p> <figure class="highlight"><pre><code class="language-ruby">module Blog
  class Engine < Rails::Engine
    endpoint AnyRackApp
  end
end</code></pre></figure><p>That code would create an engine with AnyRackApp as the endpoint. Now you can mount it with:</p> <figure class="highlight"><pre><code class="language-ruby">mount Blog::Engine => "/blog"</code></pre></figure><p>The mount method will tell the application router that <code>Blog::Engine</code> is located at the “/blog” path. Let’s investigate a request to “/blog/posts”. At first, it will hit the application and it will be passed through the entire application’s middleware stack. The last middleware in the application is the router. The router will recognize that “/blog” should point to the <code>Blog::Engine</code> app, so it will pass the request to <code>Blog::Engine</code>. Then it will be passed through the Engine’s middleware stack and finally it will hit the Engine’s rack endpoint.</p> <p>An engine can also have its own middleware stack:</p> <figure class="highlight"><pre><code class="language-ruby">module Blog
  class Engine < Rails::Engine
    middleware.use Rack::Subdomain
  end
end</code></pre></figure><p>By default the endpoint is set to <code>routes</code>, which is probably the more common scenario.</p> <p>…</p> <p>With such a setup, <code>Blog::Engine</code> needs to handle both <span class="caps">URL</span> recognition and generation.
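<p>The request flow just described (application middleware → application router → engine middleware → engine endpoint) can be sketched with plain Rack calling conventions. A toy sketch — every name here is made up for illustration, and each step is just a <code>call(env)</code> returning <code>[status, headers, body]</code>:</p>

```ruby
# The engine's rack endpoint: the final stop for a matched request.
engine_endpoint = ->(env) {
  [200, {}, ["engine handled #{env['PATH_INFO']}"]]
}

# The engine's middleware stack wraps its endpoint.
engine = ->(env) {
  env["trace"] << "engine middleware"
  engine_endpoint.call(env)
}

# The application's router forwards /blog/* to the engine,
# moving the matched prefix into SCRIPT_NAME.
app_router = ->(env) {
  if env["PATH_INFO"].start_with?("/blog")
    env["SCRIPT_NAME"] = "/blog"
    env["PATH_INFO"]   = env["PATH_INFO"].sub(/\A\/blog/, "")
    engine.call(env)
  else
    [404, {}, ["not found"]]
  end
}

# The application's middleware runs first; the router is the last "middleware".
application = ->(env) {
  env["trace"] = ["app middleware"]
  app_router.call(env)
}

status, _headers, body = application.call("PATH_INFO" => "/blog/posts")
# status => 200, body => ["engine handled /posts"]
```

<p>Note how the engine endpoint only ever sees <code>/posts</code>: by the time the request reaches it, the router has already consumed the <code>/blog</code> prefix.</p>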
Currently, if you use <code>posts_path</code>, it will generate <code>/posts</code>. The problem is, if you’re in the application you should prepend the prefix for the mounted app, so it would be “/company/blog/posts”.</p> <p>…</p> <p>Engines can now also load plugins, so an engine can have its own <code>vendor/plugins</code> full of plugins.</p> <p>One of the things that has not been implemented yet is migration support. There is a quite long <a href="">discussion</a> on it on lighthouse and a <a href="">blog post with a few solutions described</a>, but there is still no consensus on that one. If you have any thoughts on that topic, you can add a comment on lighthouseapp.</p> <p>The last thing is namespacing. When you namespace a controller like this:</p> <figure class="highlight"><pre><code class="language-ruby">class Blog::PostsController < Blog::ApplicationController
end</code></pre></figure><p>it will affect the application in many places. Using that controller in routes will also require namespacing it with “blog/posts#index”, which is probably not something that we want for all the routes. In the case of models, most ORMs will name the table for <code>class Blog::Post</code> “blog_posts”. It could actually be ok, as it will help to avoid name conflicts, but it’s not always the desired behavior.</p> <p>I will appreciate any ideas and thoughts on that topic. Anything, like <span class="caps">API</span> ideas, feature requests or solutions to some problems, will be welcome and will help me with delivering better mountable apps. Stay tuned for the next post on RSoC status: routes.
I promise you will not have to wait for it for the next 2 weeks ;-)</p> <h2>Rails internals: Railties (2010-06-18)</h2> <p>…</p> <p>…</p> <p>… <a href="">Railtie in docs</a>:</p> <blockquote> <p>Registers filename to be loaded (using Kernel::require) the first time that module (which may be a String or a symbol) is accessed.</p> </blockquote> <figure class="highlight"><pre><code class="language-ruby">autoload(module, filename) => nil

autoload(:MyModule, "/usr/local/lib/modules/my_module.rb")</code></pre></figure><p>… (<a href="">code</a>)</p> <p><ins>I need to rectify one thing. Although autoload is very nice, it’s also known to be thread unsafe and according to Carl Lerche it will be removed from Rails. Use with consideration ;-) </ins></p> <p>Getting back to Rails itself. The next class in the hierarchy is <a href="">Engine</a>. …</p> <p>Now it’s time for Application. Application is a subclass of Engine and it’s capable of booting the Rails app. As you can <a href="">read in the docs</a>, Application is a singleton and that’s why 2 Rails apps can’t be run in a single process. So what exactly does a Rails application do?</p> <ul> <li>it loads the default middleware stack (it’s amazing how many things can be handled by rack middlewares, allowing to keep the Rails code simpler)</li> <li>it loads plugins</li> <li>it sets a bunch of other things: database, logging, sessions, environment config, cache</li> <li>it loads activesupport (interesting thing: you can set config.active_support.bare to not load ‘active_support/all’)</li> </ul> <p>It’s also interesting how Rails uses the OO model to make things simpler.
After generating a Rails 3 app you can notice that the frameworks are loaded by just requiring their railties:</p> <figure class="highlight"><pre><code class="language-ruby">require "action_mailer/railtie"
require "active_resource/railtie"</code></pre></figure><p>A note on <code>subclasses</code>: it’s not a ruby method; subclasses are gathered using the <code>inherited</code> method (<a href="">code</a>).</p> <p>Now let’s look at how all that is used to boot a rails application. As you can see in a Rails 3 application, the config.ru file looks like:</p> <figure class="highlight"><pre><code class="language-ruby">require ::File.expand_path('../config/environment', __FILE__)
run MyApplication::Application</code></pre></figure><p>How can the Application class itself be run as a rack app? Calls to the class are delegated to its instance:</p> <figure class="highlight"><pre><code class="language-ruby">def method_missing(*args, &block)
  instance.send(*args, &block)
end</code></pre></figure><p>What is instance? As Application is a singleton, a call to the instance method returns the instantiated application, or instantiates it and then returns the instance.
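<p>The delegation pattern just described — a singleton class whose class-level calls are forwarded to its single instance, so the class itself can be handed to <code>run</code> as a Rack app — can be sketched in plain Ruby. All names here are hypothetical, not Rails’ actual code:</p>

```ruby
# Toy sketch of the singleton + method_missing delegation pattern:
# calling a method on the class transparently calls it on the one instance.
class Application
  def self.instance
    @instance ||= new   # instantiate lazily, always return the same object
  end

  def self.method_missing(*args, &block)
    instance.send(*args, &block)
  end

  def self.respond_to_missing?(name, include_private = false)
    instance.respond_to?(name, include_private) || super
  end

  # Instance-level rack endpoint; reachable as Application.call(env).
  def call(env)
    [200, {}, ["hello from #{env['PATH_INFO']}"]]
  end
end

Application.call("PATH_INFO" => "/")              # forwarded to the instance
Application.instance.equal?(Application.instance) # => true, same object every time
```

<p>This is why <code>run MyApplication::Application</code> works in config.ru: Rack calls <code>call</code> on the class, and <code>method_missing</code> routes it to the booted instance.</p>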
The Application instance is built on initialization; such behavior is triggered in the <a href="">Finisher</a> module.</p> <p>…</p> <p>After reading this blog post you should have a pretty good overview of the core classes of Rails. I strongly encourage you to dive into the Rails code yourself; it’s a good way to learn new things about design and object oriented programming in ruby, and to pick up some cool patterns.</p> <p>…</p> <h2>New address (2010-05-15)</h2> <p>…</p> <p>Finally I’ve merged my blogs and moved them to the current address: <a href="">piotrsarnacki.com</a>.</p> <p>The blog is <a href="">hosted on github</a> with the help of <a href="">jekyll</a>. Jekyll is great so far. Its simplicity is impressive and I don’t have to worry about any server related issues. The layout is borrowed from <a href="">Tom Preston-Werner’s site</a> (the layout is <span class="caps">MIT</span> licensed). It helped me start within a few minutes, but I will try to change it in a few days.</p> <p>Although the old <span class="caps">RSS</span> feed still works, I recommend using the new one: <a href="">feeds.feedburner.com/piotrsarnacki</a>. All the feeds available for this blog are:</p> <ul> <li><a href="">feeds.feedburner.com/piotrsarnacki</a> – English content</li> <li><a href="">feeds.feedburner.com/piotrsarnacki-pl</a> – Polish content (available on <a href="">piotrsarnacki.com/pl</a>)</li> <li><a href="">feeds.feedburner.com/piotrsarnacki-all</a> – All posts</li> </ul> <p>I deleted some of the posts and marked some <a href="">as deprecated</a>. I will try not to abandon this blog this time and to post some new stuff.</p> <p>On a side note: I have managed to be one of 20 students working on <a href="">Ruby Summer of Code</a> projects. I will work in a team with <a href="">Bogdan Gaza</a> on Rails 3 mountable apps and a Rails admin panel (I will focus more on the mountable apps part and Bogdan will work on the rails admin panel).
<a href="">Carl Lerche</a> and <a href="">Erik Michaels-Ober</a> will be our primary mentors.</p> <p>I will try to post progress updates on this blog. Stay tuned!</p> Cucumber and Celerity - testing unobtrusive javascript 2009-06-16T00:00:00+00:00 2009-06-16T00:00:00+00:00 <p.</p> <p.</p> <p>I will use <a href="">Cucumber</a>, <a href="">celerity</a> (which is <a href="">HtmlUnit</a> wrapper with <span class="caps">API</span> compatible with watir) and <a href="">culerity</a>. Culerity is a proxy between celerity and your app. Celerity requires JRuby and probably your app need <span class="caps">MRI</span> or <span class="caps">REE</span> – culerity resolves this problem.</p> <p>Let’s get started.</p> <p>You need to <a href="">install JRuby</a> in order to run celerity. After installing and adding jruby to your <span class="caps">PATH</span> install celerity gem (probably as a root):</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="n">jruby</span> <span class="o">-</span><span class="no">S</span> <span class="n">gem</span> <span class="n">install</span> <span class="n">celerity</span></code></pre></figure><p>Now you can create rails app and configure the environment:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="n">rails</span> <span class="n">culerity</span><span class="o">-</span><span class="n">example</span> <span class="n">sudo</span> <span class="n">gem</span> <span class="n">install</span> <span class="n">cucumber</span> <span class="n">rspec</span> <span class="n">rspec</span><span class="o">-</span><span class="n">rails</span> <span class="n">haml</span> <span class="c1"># add config.gem "haml" to environment.rb</span> <span class="n">gem</span> <span class="n">install</span> <span class="n">langalex</span><span class="o">-</span><span class="n">culerity</span> <span class="o">--</span><span class="n">source</span> <span class="n">http</span><span class="ss">:/</span><span 
class="o">/</span><span class="n">gems</span><span class="p">.</span><span class="nf">github</span><span class="p">.</span><span class="nf">com</span> <span class="n">cd</span> <span class="n">culerity</span><span class="o">-</span><span class="n">example</span> <span class="p">.</span><span class="nf">/</span><span class="n">script</span><span class="o">/</span><span class="n">generate</span> <span class="n">cucumber</span> <span class="p">.</span><span class="nf">/</span><span class="n">script</span><span class="o">/</span><span class="n">generate</span> <span class="n">rspec</span> <span class="c1"># now edit database.yml and set database options</span> <span class="n">rake</span> <span class="n">db</span><span class="ss">:migrate</span> <span class="n">rake</span> <span class="n">features</span> <span class="n">rm</span> <span class="n">features</span><span class="o">/</span><span class="n">step_definitions</span><span class="o">/</span><span class="n">webrat_steps</span><span class="p">.</span><span class="nf">rb</span> <span class="c1"># cause we will be using celerity</span></code></pre></figure><p>At this point you should have cucumber configured and you should be able to run “rake features” with output similar to:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="mi">0</span> <span class="n">scenarios</span> <span class="mi">0</span> <span class="n">steps</span> <span class="mi">0</span><span class="n">m0</span><span class="o">.</span><span class="mo">000</span><span class="n">s</span></code></pre></figure><p>Let’s add some tests! You will need step definitions and hooks. 
Culerity provides some basic step definitions and hooks which you can generate with “./script/generate culerity” but I’ve changed them a bit for my needs, so you can find them on <a href="">this example repository</a>.</p> <ul> <li><a href="">features/step_definitions/common_celerity.rb</a></li> <li><a href="">features/support/hooks.rb</a></li> </ul> <p>Copy those files to your app.</p> <p>The first file is just rewrite of webrat steps and the second file adds hooks for firing celerity server and browser. Let me explain the hooks.rb file:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="nb">require</span> <span class="s1">'culerity'</span> <span class="sb">`mongrel_rails start -e cucumber -p 3001 -d`</span> <span class="no">Before</span> <span class="k">do</span> <span class="vg">$server</span> <span class="o">||=</span> <span class="no">Culerity</span><span class="o">::</span><span class="n">run_server</span> <span class="vg">$browser</span> <span class="o">=</span> <span class="no">Culerity</span><span class="o">::</span><span class="no">RemoteBrowserProxy</span><span class="p">.</span><span class="nf">new</span> <span class="vg">$server</span><span class="p">,</span> <span class="p">{</span><span class="ss">:browser</span> <span class="o">=></span> <span class="ss">:firefox</span><span class="p">}</span> <span class="vg">$browser</span><span class="p">.</span><span class="nf">webclient</span><span class="p">.</span><span class="nf">setJavaScriptEnabled</span><span class="p">(</span><span class="kp">false</span><span class="p">)</span> <span class="vi">@host</span> <span class="o">=</span> <span class="s1">''</span> <span class="k">end</span> <span class="no">Before</span><span class="p">(</span><span class="s2">"@js"</span><span class="p">)</span> <span class="k">do</span> <span class="o">|</span><span class="n">scenario</span><span class="o">|</span> <span class="vg">$browser</span><span class="p">.</span><span 
class="nf">webclient</span><span class="p">.</span><span class="nf">setJavaScriptEnabled</span><span class="p">(</span><span class="kp">true</span><span class="p">)</span> <span class="k">end</span> <span class="nb">at_exit</span> <span class="k">do</span> <span class="vg">$browser</span><span class="p">.</span><span class="nf">exit</span> <span class="vg">$server</span><span class="p">.</span><span class="nf">close</span> <span class="sb">`mongrel_rails stop`</span> <span class="k">end</span></code></pre></figure><p.</p> <p>Second Before hook is fired only for scenarios tagged with @js tag. It will be useful for explicitly saying which scenarios should be tested with javascript.</p> <p><ins>I’ve also added lines to start mongrel before the tests and stop it at exit. It’s handy if you don’t want to run and restart mongrel manually</ins></p> <p>Now it is time to write some scenarios. File <a href="">features/javascript.feature</a></p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="no">Feature</span><span class="p">:</span> <span class="no">Javascript</span> <span class="no">In</span> <span class="n">order</span> <span class="n">to</span> <span class="nb">test</span> <span class="n">javascript</span> <span class="no">As</span> <span class="n">a</span> <span class="n">developer</span> <span class="no">I</span> <span class="n">need</span> <span class="n">a</span> <span class="n">way</span> <span class="n">to</span> <span class="n">run</span> <span class="nb">test</span> <span class="n">scenarios</span> <span class="n">with</span> <span class="n">javascript</span> <span class="n">enabled</span> <span class="n">or</span> <span class="n">disabled</span> <span class="vi">@js</span> <span class="no">Scenario</span><span class="p">:</span> <span class="no">With<">"Javascript rocks!"</span> <span class="no">Scenario</span><span class="p">:</span> <span class="no">Without<">"I am also working without 
javascript!"</span></code></pre></figure><p>Both scenerios rely on “Click me!” link but have different expectations. <del>To run those tests start mongrel (or any other web server):</del><br /> <del></p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="n">mongrel_rails</span> <span class="n">start</span> <span class="o">-</span><span class="n">e</span> <span class="n">cucumber</span> <span class="o">-</span><span class="nb">p</span> <span class="mi">3001</span> <span class="o">-</span><span class="n">d</span></code></pre></figure><p></del><br /> <del>This will fire mongrel in background on port 3001.</del><br /> <ins>It is not necessary as mongrel control commands are in hooks.rb file</ins></p> <p.</p> <p>Let’s run tests: <pre>rake features</pre></p> <p>Of course both tests should fail with <pre>Unable to locate Link, using :text and /Click me!/</pre> and rails error in log file: <pre>ActionController::RoutingError (No route matches “/” with {:method=>:get}):</pre></p> <p>Now we can fix it. 
We need some controller to show the link: <br /> <pre>script/generate rspec_controller home</pre></p> <p>Add <pre>map.root :action => “show”, :controller => “home”</pre> to routes file.<br /> Next copy <a href="">app/views/layouts/main.html.haml</a> (it just yields action and includes jquery.js and application.js) and <a href="">jquery.js</a></p> <p>You need to also set layout for home controller: <pre>layout ‘main’</pre></p> <p>And here comes html and javascript.<br /> app/views/home/show.html.haml</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="o">=</span> <span class="n">link_to</span> <span class="s2">"Click me!"</span><span class="p">,</span> <span class="s2">"?clicked=1"</span><span class="p">,</span> <span class="ss">:id</span> <span class="o">=></span> <span class="s2">"click_me"</span> <span class="c1">#text</span> <span class="o">-</span> <span class="k">if</span> <span class="n">params</span><span class="p">[</span><span class="ss">:clicked</span><span class="p">]</span> <span class="no">I</span> <span class="n">am</span> <span class="n">also</span> <span class="n">working</span> <span class="n">without</span> <span class="n">javascript!</span></code></pre></figure><p>It displays a link and a text if params[:clicked] is present. So after clicking on that link page will be reloaded with parameter clicked=1 and the text will be displayed.</p> <p>Let’s check if it’s passing:<br /> <pre>mongrel_rails restart; rake features</pre><br /> We need to restart mongrel before “rake features” in order to load changes because of “cache_classes = true”. It’s one of the drawbacks of using this method, but I’m sure that someone will find out better way to do that.</p> <p>If you did everything properly the second scenario should pass now. We’re green! 
:D</p> <p>Add javascript code to your application in order to make the second scenario pass:</p> <figure class="highlight"><pre><code class="language-ruby" data-<span class="n">jQuery</span><span class="p">(</span><span class="n">function</span><span class="p">()</span> <span class="p">{</span> <span class="err">$</span><span class="p">(</span><span class="s2">"#click_me"</span><span class="p">).</span><span class="nf">click</span><span class="p">(</span><span class="n">function</span><span class="p">()</span> <span class="p">{</span> <span class="err">$</span><span class="p">(</span><span class="s2">"#text"</span><span class="p">).</span><span class="nf">html</span><span class="p">(</span><span class="s2">"Javascript rocks!"</span><span class="p">);</span> <span class="k">return</span> <span class="kp">false</span><span class="p">;</span> <span class="p">});</span> <span class="p">})</span></code></pre></figure><p>Now both scenarios should pass.</p> <p>To sum up, now you should be able to:</p> <ul> <li>configure rails app with cucumber and celerity</li> <li>specify how your tests should be run by placing @js tag on top of javascript scenarios</li> <li>test unobtrusive javascript with ease</li> </ul> <p>TODOs:</p> <ul> <li><del>figure out better way to reload rails app while testing</del></li> <li>provide better error explanations in cucumber with celurity to test without tailing logs</li> </ul> <p>Cheers!</p> <h3>Additional resources</h3> <ul> <li><a href="">Celerity ajax examples by Alvin Schur</a></li> </ul> Fixing safe_erb with memcached improved 2009-05-12T00:00:00+00:00 2009-05-12T00:00:00+00:00 <p>Ruby.</p> <p.</p> <p>One of examples is <a href="">fixing memcached and safe_erb duet with alias_method_chain</a> I don’t know if this solution worked in older versions of rails, but in 2.3 I got stack level too deep error. 
After some thinking I’ve just created subclass of MemCacheStore:</p> <script src=""></script><p>And then you can simply set it as cache store: config.cache_store = :my_mem_cache_store</p> <p>And that’s it, no alias method chain :)</p> My git workflow 2009-04-26T00:00:00+00:00 2009-04-26T00:00:00+00:00 <p>I know that this post may be obvious for experienced git users, but it may be useful for some of you.</p> <p>Github has <a href="">added default branch picking</a> recently. It’s great, as you don’t have to use master branch as your main branch. I usually have three “main” branches in my repo – production, staging and development.</p> <p.</p> <p:</p> <pre><code class="console"> git checkout -b bug345 production </code></pre> <p>It will create new branch bug345 with production as a parent branch. Now:</p> <pre><code class="console"> # fix the bug git commit -m "Fixed the nasty bug #345" </code></pre> <p>Sometimes client doesn’t need reviewing the changes but when he does you should have ability to show it and merge it to the production branch later.</p> <pre><code class="console"> git checkout staging git merge bug345 # deploy staging </code></pre> <p>After the client approval of your solution you can merge the bugfix to the production branch:</p> <pre><code class="console"> git checkout production git merge bug345 # deploy production # you can remove bug345 branch now git branch -d bug345 </code></pre> <p.</p> <p>Hope it will help :)</p> JSON-P support for Apache upload progress 2009-02-10T00:00:00+00:00 2009-02-10T00:00:00+00:00 <p>Just a quick note about new feature added to apache upload progress and jquery upload progress libs.</p> <p><a href="">Ron Evans aka deadprogrammer</a> has added support for <span class="caps">JSON</span>-P in <a href="">this commit</a>. What does it mean? 
Cross domain requests are now possible, so if you need such a functionality pull the newest changes.</p> <p>Here is Ron’s description from <span class="caps">README</span>:<br /> <blockquote><br /> - <span class="caps">JSON</span>-P Support</p> <p>You can also request progress updates by using <span class="caps">JSON</span>-P, if you are uploading the file from a different domain or subdomain than the web server that is handling your original request. Adding a “callback=yourCallbackFunction” parameter to your request to the progress server will activate this functionality.</p> <p>For example, a request like:<br /></p> <p>Would return the <span class="caps">JSON</span>-P function: <br /> jsonp123(new Object({ ‘state’ : ‘uploading’, ‘received’ : 35587, ‘size’ : 716595, ‘speed’ : 35587 }));</p> <p>The normal <span class="caps">JSON</span> request:<br /></p> <p>Would return the <span class="caps">JSON</span> data: <br /> new Object({ ‘state’ : ‘uploading’, ‘received’ : 35587, ‘size’ : 716595, ‘speed’ : 35587 })</p> </blockquote> <p>Remember to update <a href="">jquery upload progress</a> also, to use jsonp.</p> <p>Enjoy :)</p> Tweaking Rails app with jQuery, part I 2008-07-03T00:00:00+00:00 2008-07-03T00:00:00+00:00 <div class="quick-links"> <p>Quick links:</p> <ul> <li><a href="">completed demo</a> (demo is reseted every few hours)</li> <li><a href="">sources of application on github</a> (<a href="">or download tarball</a>)</li> </ul> </div> <p>I’m in the train from <a href="">Zgorzelec to Warsaw</a> returning from my girlfriend’s place. Polish trains are like turtles, so I will have pretty much time for writing ;-)</p> <p>I’ve wrote (or maybe it’s better to say copy&paste) little rails app like in <a href="">Mike Clark’s tutorial for attachment_fu</a>. A few months ago there was <a href="">Mugshots exhibition in Yours Gallery</a> in Warsaw based on <a href="">work of Peter Doyle</a>. I saw it with Kathleene, she took some pictures. Great! 
I have material to fill my new app, what else could I possibly dream of?! (yeah… macbook, but it’s obvious ;-).</p> <p>Now you can admire my hard work: <a href="">mugshots.drogomir.com/js/no-javascript/mugshots/</a></p> <p>But wait… It’s not so cool… where are all those shiny javascript effects? Don’t worry. I will show you how to spice this dish.</p> <p>We will need:</p> <ul> <li><a href="">jQuery</a></li> <li><a href="">jQuery form plugin</a></li> <li><a href="">jQuery livequery plugin</a></li> <li><a href="">jQuery upload progress</a></li> <li><a href="" title="used by multifile">jquery blockUI</a></li> <li><a href="">jquery mutli file</a></li> <li><a href="">jquery lightBoxFu</a></li> <li><a href="">lightbox</a></li> </ul> <p>I’ve pushed application to github, so you can see entire code. Clone it or <a href="">grab the tarball</a></p> <p>There is one thing that is not straight forward. @main_js variable in app/views/layouts/main.rhtml:</p> <pre><code class="ruby"> <%= javascript_include_tag @main_js %> </code></pre> <p).</p> <p>Lets begin.</p> <p>What to do first? It’s all about uploading files, so I would add upload progress bar to form in <a href="">mugshots.drogomir.com/mugshots/new</a>. To implement it you will need some kind of server module:</p> <ul> <li><a href="">Apache upload progress module</a></li> <li><a href="">Nginx upload progress module</a></li> <li><a href="">Lighttpd upload progress</a></li> </ul> <p>You have to install and enable one of the above modules to make progress bar work.</p> <p>Then add some javascript to applications.js. This example is using “LightBoxFu”: – little script that I wrote to show progress bar as an overlay. It’s based on <a href="">Riddle’s work</a> – all positioning is in <span class="caps">CSS</span> (except expressions for IE) so it’s really light and fast. Ideal for such a task. 
If you don’t like lightBoxFu you can use any other form of displaying message (you can use some other lightbox with displaying code function or even blockUI plugin).</p> <pre><code class="javascript"> //"> <"] }); }); </code></pre> <p>And some styling for progress bar:<br /> <pre><code class="css"> #progress { margin: 8px; width: 220px; height: 19px; }</p> <p>#progressbar {<br /> background: url(‘/images/ajax-loader.gif’) no-repeat;<br /> width: 0px;<br /> height: 19px;<br /> }<br /> </code></pre></p> <p>That’s it, just add “progress” class to your form and progress bar is working:</p> <pre><code class="ruby"> <% form_for(:mugshot, :url => mugshots_path, :html => { :multipart => true, :class => "progress" }) do |f| -%> </code></pre> <p>Uploading files looks much better right now, check it here: <a href=""></a></p> <p.</p> <p>Javascript enabling mutlifile:</p> <pre><code class="javascript"> jQuery(function($) { $('.multi-file').each(function() { // change name of element before applying MultiFile // so array of files can be send to server with mugshot[uploaded_data][] $(this).attr('name', $(this).attr('name') + '[]'); }).MultiFile(); }); </code></pre> <p>We must also add “multi-file” class to file field:</p> <pre><code class="ruby"> <%= f.file_field :uploaded_data, :class => 'multi-file' %> </code></pre> <p>From javascript point of view that’s all. Let’s see how uploaded photos are handled by rails app:</p> <pre><code class="ruby"> @mugshot = Mugshot.new(params[:mugshot]) </code></pre> <p>So mugshot.uploaded_data is filled with data from <code>params[:mugshot][:uploaded_data]</code>. Good for one file. But with array of files we should create Mugshot for each file. I would add a method in model:</p> <pre><code class="ruby"> </code></pre> <p>and slightly change controller code:</p> <pre><code class="ruby"> def create @mugshots = Mugshot.handle_upload(params[:mugshot]) # if @mugshots is empty there are no errors if @mugshots.blank? 
flash[:notice] = 'Mugshot was successfully created.' redirect_to mugshots_url else render :action => :new end end </code></pre> <p>Only one problem left. Validation.</p> <p>Easiest way is to change error_messages_for:</p> <pre><code class="ruby"> <%= error_messages_for :object => @mugshots %> </code></pre> <p>It works. But suppose you are uploading 3 files and 2 of them are too big. You will end with:</p> <ul> <li>Size is not included in the list</li> <li>Size is not included in the list</li> </ul> <p>Which one was added? Some lottery here…</p> <p>I would tweak attachment_fu error messages a bit. By default it uses validates_as_attachment method which simply adds:</p> <pre><code class="ruby"> validates_presence_of :size, :content_type, :filename validate :attachment_attributes_valid? </code></pre> <p>Instead validates_as_attachment we can isert our new code:</p> <pre><code class="ruby"> </code></pre> <p>Now it’s a lot more readable:</p> <ul> <li>Size is not included in the list (filename.jpg)</li> <li>Size is not included in the list (filename1.jpg)</li> </ul> <p>Submit form looks better now, but viewing files is still ugly. Maybe we could add some lightbox? No problem:</p> <pre><code class="javascript"> $('#mugshots li a').lightBox(); </code></pre> <p>I used <a href="">that lightbox</a> cause I had it configured for my previous rails apps, but pick your favourite one, as there are gazilions of them.</p> <p>This is first step of tweaking our app. Javascript is in step1.js file: <a href="">mugshots.drogomir.com/js/step1/mugshots/new</a></p> <p>What now? User can upload many files at one submit and see progress bar. What else do we need? Ajax! My preciousssss…</p> <p>As all children know, XMLHttpRequest can’t upload files. What a shame… our new tweaked mugshots app is all about uploading files. Although you can’t do it with <span class="caps">XHR</span>, there is a way to imitate it. 
It is obtained by creating an iframe and uploading files to it.</p> <p>Luckily Mike Malsup has done hard work for us writing <a href="">jQuery form plugin</a>.</p> <p>First, we need our form. I would place it instead “New mugshot” link. Link has id=“new_mugshot_link”, so this piece of code will replace it with form:</p> <pre><code class="javascript"> /*); </code></pre> <p>Our form has to be send to an iframe, so we have to apply ajaxForm to it. After replacing link with form we can’t figure out when form is actually appended to <span class="caps">DOM</span>. To be sure that form is there, we can use livequery. It will fire callback function when ‘form.ajax’ will be available:</p> <pre><code class="javascript"> $((); } }); }}); }); </code></pre> <p>When new form tag with class “ajax” will be available callback function will be run. iframe option tells form plugin to add hidden iframe (it will handle file upload).</p> <p>The above code has ajax call to “/mugshots” url which will run index.js.erb (<span class="caps">RJS</span>), so we will need one:</p> <p>app/views/mugshots/index.js.erb<br /> <pre><code class="ruby"> jQuery('#mugshots').html(<%= js render(:partial => 'mugshot', :collection => @mugshots) %>); </code></pre></p> <p>to handle it we need to use respond_to:</p> <pre><code class="ruby"> respond_to do |format| format.html # layout => false is here beaceuse without it rails are looking # for layouts/index.js.erb format.js { render :layout => false } end </code></pre> <p>Normally I try not to use <span class="caps">RJS</span> to keep all my javascript (and ajax) logic in javascript files, but in case of images it isn’t so esay. I will write about it and about javascript templating systems in one of the next posts.</p> <p>Take a look at: <a href="">mugshots.drogomir.com/js/step2/mugshots</a> Doesn’t it look nice?</p> <p. 
:)</p> Upload progress script with safari support 2008-06-30T00:00:00+00:00 2008-06-30T00:00:00+00:00 <p>Quick links:</p> <ul> <li>Source on github for jquery version: <a href=""></a></li> <li>Source on github for prototype version: <a href=""></a></li> <li>jQuery demo: <a href=""></a></li> <li>Prototype demo: <a href=""></a></li> <li><a href="">Installing apache upload progress</a></li> </ul> <p>Recently I’ve wrote about <a href="">apache upload progress module</a>. I work mainly on linux and I haven’t check my scripts on safari. It was working even on IE, what possibly could be harder to obtain? ;-) Some people reported that demo is not working on safari and Michele <a href="">resolved the problem</a> (thanks Michele :).</p> <p>Solved! The only thing to do was to open WinXP on <a href="">VirtualBox</a>.</p> <p>With Safari? No, not really. I’ve wrote it in a few minuttes and checked in firefox. It worked, great, now safari. Nope….</p> <p>Although Safari have great <span class="caps">CSS</span> support, it is really terrible with Javascript. <span class="caps">WYSIWYG</span>, javascript history, ajax issues, now the upload progress and iframes. In edge case libraries I often see hacks for IE and safari mainly.</p> <p!</p> <p><a href="">Check the commit on github</a>. Lines 18-22 especially. And 28-line issue with safari not waiting to load previous script.</p> <p…</p> <p><del>I will add prototype version</del> and possibly some usage page shortly (for know look at the <a href="">demo code</a>).</p> Upload progress bar with mod_passenger and apache 2008-06-18T00:00:00+00:00 2008-06-18T00:00:00+00:00 <p><strong><span class="caps">UPDATE</span></strong>: I found 2 bugs in upload progress module. If you have already installed. update to at least 0.1 version: <a href=""></a></p> <p>I’ve installed <a href="">mod passenger</a> on my server recently. It’s really great software. 
Now I don’t have to worry about monitoring, nginx proxy, load balancing, big file uploads… and it’s fast! With <a href="">Ruby Enterprise Edition</a> it’s even faster.</p> <p>Personally I don’t care about people saying that phusion wants to <a href="">promote themselves on <span class="caps">REE</span></a> as long as it gives faster ruby with lower memory use (but yes, I know, <span class="caps">REE</span> is not best choice for a name :).</p> <p>After installing I’ve realised that my shiny upload progress bar (thanks to <a href="">Upload Progress Module for nginx</a>) was not working. Oh gods! What a tragedy!</p> <p>But fear not. I’ve written <a href="">apache upload progress module</a> :)</p> .</p> <p>So you want to be cool and have your own sexy progress bar in your app? Keep reading ;)</p> <p>To install module you must download it using git:</p> <figure class="highlight"><pre><code class="language-bash" data-git clone git://github.com/drogus/apache-upload-progress-module.git</code></pre></figure><p>or get the package: <a href=""></a></p> <p>To compile/install/activate you have to use apxs2:</p> <figure class="highlight"><pre><code class="language-apache" data-apxs2 -c -i -a mod_upload_progress.c</code></pre></figure><ul> <li>-c is for compiling</li> <li>-i is for installing (copy mod_upload_progress.so to apache library dir)</li> <li>-a is for activating (add LoadModule option into your apache conf file)</li> </ul> <p>If you want to install and activate run this command as a root. 
Otherwise you can just compile and add LoadModule to apache conf:</p> <figure class="highlight"><pre><code class="language-apache" data-<span class="nc">LoadModule</span> upload_progress_module path/to/apache-upload-progress-module/.libs/mod_upload_progress.so</code></pre></figure><p>Currently there is only one global option:</p> <figure class="highlight"><pre><code class="language-apache" data-UploadProgressSharedMemorySize 1024000</code></pre></figure><p>This sets shared memory size to 1M. By default it’s 100kB.</p> <p>To add tracking and reporting upload for a virtual host in apache you will need to add:</p> <figure class="highlight"><pre><code class="language-apache" data-<span class="p"><</span><span class="nl">Location</span><span class="sr"> /</span><span class="p">> </span> <span class="c"># enable tracking uploads in /</span> TrackUploads On <span class="p"></</span><span class="nl">Location</span><span class="p">> </span> <span class="p"><</span><span class="nl">Location</span><span class="sr"> /progress</span><span class="p">> </span> <span class="c"># enable upload progress reports in /progress</span> ReportUploads On <span class="p"></</span><span class="nl">Location</span><span class="p">></span></code></pre></figure><p>Now all uploads will be tracked and reports are under /progress</p> <p>Format of the report is <span class="caps">JSON</span>. 
From nginx wiki:</p> <blockquote> <p>The returned document is a <span class="caps">JSON</span> text with the possible 4 results:</p> <ul> <li>the upload request hasn’t been registered yet or is unknown:</li> </ul> <p>new Object({ ‘state’ : ‘starting’ })</p> <ul> <li>the upload request has ended:</li> </ul> <p>new Object({ ‘state’ : ‘done’ })</p> <ul> <li>the upload request generated an <span class="caps">HTTP</span> error:</li> </ul> <p>new Object({ ‘state’ : ‘error’, ‘status’ : <error code> })</p> <p>One error code that is interesting to track for clients is <span class="caps">HTTP</span> error 413 (Request entity too large)</p> <ul> <li>the upload request is in progress:</li> </ul> <p>new Object({ ‘state’ : ‘uploading’, ‘received’ : <size_received>, ‘size’ : <total_size>})</p> <p>The <span class="caps">HTTP</span> request to this location must have either an X-Progress-ID parameter or X-Progress-ID <span class="caps">HTTP</span> header containing the unique identifier as specified in your upload/<span class="caps">POST</span> request to the relevant tracked zone. If you are using the X-Progress-ID as a query-string parameter, ensure it is the <span class="caps">LAST</span> argument in the <span class="caps">URL</span>.</blockquote></p> <p>Now the last thing to do is to implement progress bar. I don’t like repeating others and <a href="">there is great tutorial on setting up upload progress bar with nginx and merb</a></p> <p><strong><span class="caps">UPDATE</span></strong>: I released jquery upload progress library with Safari 3 support. More info <a href="">here</a>.<br /> <strong>UPDATE2</strong>: I’ve upgraded prototype version to work in Safari.</p> <p.</p> <p>If you’re using prototype I’ve rewritten script and made <a href="">a demo</a>. <a href="">You can also grab files</a></p> <p>I hope you enjoy this article. 
Progress bar is in my opinion one of the most useful technics – there is nothing more annoying than large file uploading without any info on state of an upload.</p> | http://feeds.feedburner.com/piotrsarnacki | CC-MAIN-2018-09 | refinedweb | 15,519 | 54.42 |
Higher-kinded data in Scala
by Chris Birchall • January 19, 2021 • scala, functional programming, functional • 9 minutes to read
The other day I came across a nice use case for a concept known as “higher-kinded data” that I thought was worth sharing. Higher-kinded data is a reasonably well-known concept in the Haskell world, but I haven’t seen many people talking about it in Scala. I’ll use a real-world example to explain the concept and show how it can be useful.
Batch job configuration
At my client, we have a batch job implemented in Scala and Spark. It takes a couple of command line arguments, but most of the configuration is done using a file that the job downloads from S3 at startup.
The config file is parsed and decoded to a case class instance using PureConfig. The corresponding case class looks something like this:
```scala
final case class JobConfig(
  inputs: InputConfig,
  outputs: OutputConfig,
  filters: List[Filter],
  timeRange: TimeRange,
  tags: List[String]
)
```
And the meat of the job looks like this:
```scala
def doStuff(config: JobConfig): Unit
```
The part of the configuration we are most interested in is the `TimeRange`, which specifies the time period of the data the job should process:
```scala
final case class TimeRange(
  min: LocalDateTime,
  max: LocalDateTime
)
```
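For reference, the relevant slice of the config file might look something like this. This is a hypothetical sketch — the post doesn’t show the real file. PureConfig’s default field mapping is kebab-case, so the `timeRange` field would typically map to a `time-range` key, but treat the key names and values here as assumptions:

```hocon
# hypothetical HOCON corresponding to the case classes above
time-range {
  min = "2021-01-01T00:00:00"
  max = "2021-01-02T00:00:00"
}
tags = ["daily"]
```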
There is a new requirement to change the way this time period is passed to the job. In some cases, we want to continue specifying it in the config file, but we also want the ability to pass the `min` and `max` values via CLI arguments.
For example, when using a job scheduler such as Airflow to run a job every day
against the previous day’s data, the scheduler needs a way to pass the
appropriate datetime values to the job.
The required behavior is as follows:
You can optionally pass
--timeRangeMinand
--timeRangeMaxCLI arguments to the job. These arguments come as a set: if you pass one, you must also pass the other.
You can continue to specify the time range in the config file, as before.
If the time range is specified both via CLI arguments and via the config file, the CLI arguments take precedence.
If you don’t specify a time range, either using CLI arguments or in the config file, the job will throw an error at startup.
Updating the JobConfig class
With these new requirements, the
timeRange field in the config file becomes
optional, where previously it was required. So we need to update the
corresponding
JobConfig case class, so it doesn’t fail when we try to decode a
config file with that field missing:
final case class JobConfig( inputs: InputConfig, outputs: OutputConfig, filters: List[Filter], timeRange: Option[TimeRange], // this becomes an Option tags: List[String] )
The entrypoint to the job would now look something like:
Look for the optional
--timeRangeMinand
--timeRangeMaxCLI arguments.
Decode the config file, using the updated case class definition.
If the time range was passed via CLI args, replace the
timeRangefield with those values. Otherwise, if the time range was not specified in the config file, fail with a useful error message.
Call
doStuff(config)
But hold on, something isn’t quite right here!
If we’ve reached step 4, we know that the time range has been correctly
specified, either via the CLI or the config file. But the
timeRange field in
the
JobConfig is an
Option[TimeRange]. In other words, we know the value
will always be present, but the compiler doesn’t. We haven’t encoded our
knowledge properly in the types.
We’ll have to change a load of code inside the
doStuff method to handle the
Option, and we’ll probably end up writing something like this:
val min = config.timeRange.get.min // this won't blow up, trust me :)
Can we do better than this?
Enter higher-kinded data
The issue is that we want to re-use our
JobConfig class for two different
purposes: decoding the contents of a config file, and passing a fully validated
and populated job configuration to the core of the job.
We could make two different (but very similar) case classes and implement the
conversion from one to the other, but that is quite laborious, potentially
error-prone, and not very DRY. An alternative approach is to use a trick called
“higher-kinded data,” whereby we make
JobConfig polymorphic with a
higher-kinded type parameter:
final case class JobConfig[F[_]]( inputs: InputConfig, outputs: OutputConfig, filters: List[Filter], timeRange: F[TimeRange], // this is now polymorphic tags: List[String] )
Introducing this abstraction means that we can now teach the compiler what we
already knew: when decoding the config file, the
timeRange field may or may
not be present, but by the time we call
doStuff(config), the field is
definitely there.
Decoding the config file
The change to the PureConfig code is very minor, from:
val config: JobConfig = configSource.loadOrThrow[JobConfig]
to:
val config: JobConfig[Option] = configSource.loadOrThrow[JobConfig[Option]]
Here, we set the
F[_] type parameter to
Option, because the
timeRange field
might be missing from the config file.
Requiring the time range to be present
In the
doStuff method, we expect the time range to be set, and we don’t want
to be dealing with
Option. So we can update the method signature from:
def doStuff(config: JobConfig): Unit
to:
def doStuff(config: JobConfig[Id]): Unit
Here, we’re using the
Id type
(short for “Identity”) from Cats. This is nothing more than a type alias, so you
could define it yourself if you prefer:
type Id[A] = A
In other words, the type
Id[TimeRange] is precisely
TimeRange, but it has the
shape we need to match our
F[_] type parameter. Within the
doStuff method, we
can refer to the
timeRange field directly without any unwrapping, just like we
did before:
val min = config.timeRange.min
Ensuring the time range is set
The entrypoint to the job now looks something like this:
val timeRangeFromCLI: Option[TimeRange] = parseCLIArgs() val configFromFile: JobConfig[Option] = loadConfigFile() val configWithTimeRange: JobConfig[Id] = setTimeRange(configFromFile, timeRangeFromCLI) doStuff(configWithTimeRange)
where the
setTimeRange method is defined as follows:
def setTimeRange( config: JobConfig[Option], timeRangeFromCLI: Option[TimeRange] ): JobConfig[Id] = { val timeRange = timeRangeFromCLI .orElse(config.timeRange) .getOrElse( throw new Exception( "Time range must be specified either as CLI args or in the config file" ) ) config.copy[Id](timeRange = timeRange) }
Parsing the CLI arguments
There was one more requirement we haven’t covered: it shouldn’t be possible to set only one CLI argument and not the other. As a slight aside, let’s look at how to implement that.
We use the Decline library for CLI argument parsing. It makes it really easy to compose the two arguments in the way we need:
private val timeRangeMinOpt: Opts[LocalDateTime] = Opts.option[LocalDateTime]("timeRangeMin", help = "...") private val timeRangeMaxOpt: Opts[LocalDateTime] = Opts.option[LocalDateTime]("timeRangeMax", help = "...") val timeRangeOpt: Opts[Option[TimeRange]] = (timeRangeMinOpt, timeRangeMaxOpt).mapN(TimeRange).orNone
This will behave as required: you can pass no arguments, or both, but not only one.
Conclusion
Higher-kinded polymorphism lets you re-use your case classes in multiple situations, increasing the coherence of your model and reducing code duplication. | https://www.47deg.com/blog/higher-kinded-data-in-scala/ | CC-MAIN-2022-27 | refinedweb | 1,193 | 56.08 |
Learn Java in a day
Learn Java in a day
....
Create first program
in Java
In this section, you will learn to
write a simple Java program. You just need
My first Java Program
My first Java Program I have been caught with a practical exam to do the following:
Write a program that takes input from a user through the command line.
The user must be prompt to enter a sentence (anything). The sentence
Find the Day of the Week
the day number within a current year.
e.g. The first day of the year has value 1...
Find the Day of the Week
This example finds the specified date of an year and
a day
Day for the given Date in Java
Day for the given Date in Java How can i get the day for the user input data ?
Hello Friend,
You can use the following code:
import...");
String day=f.format(date);
System.out.println(day
Find Day of Month
Find Day of Month
This example explores how to find
the day of a month and the day of a week This example sets the year as 2007
and the day as 181. The example
learn
learn how to input value in java
learn
learn i need info about where i type the java's applet and awt programs,and how to compile and run them.please give me answer
How to create LineDraw In Java
How to create LineDraw In Java
Introduction
This is a simple java program . In this section, you
will learn how to create Line Drawing. This program implements a line
Hello world (First java program)
Hello world (First java program)
...
and running. Hello world
program is the first step of java programming language... to develop the robust application. Java application program is
platform
to learn java
to learn java I am b.com graduate. Can l able to learn java platform without knowing any basics software language.
Learn Java from the following link:
Java Tutorials
Here you will get several java tutorials
JDBC Training, Learn JDBC yourself
:
Learn Java in a Day and
Master... java program to connect java application and execute
sql query like create table... from given table through java code. Java code create connection between
program
Learn java
Learn java Hi,
I am absolute beginner in Java programming Language. Can anyone tell me how I can learn:
a) Basics of Java
b) Advance Java
c) Java frameworks
and anything which is important.
Thanks
Java Program
Java Program How to Write a Java program which read in a number... out in the standard output. The program should be started like this:
java... to be read. In this example, the program should create three threads for reading
How to Create Text Area In Java
How to Create Text Area In Java
In this section, you will learn how to create Text Area
in Java. This section provides you a complete code of the program for
illustrating
Getting Previous, Current and Next Day Date
Getting Previous, Current and Next Day Date
In this section, you will learn how to get previous,
current and next date in java. The java util package provides
Java Program
Java Program Problem Statement
You are required to write a program... given, then your program must create the expression: 8 - 5 * 2 = -2 Here..., thus expression 2-2+2 evaluates to 2 and not -2
Input Specification
First line
First Step towards JDBC!
that enables the java program to manipulate data stored into the
database. Here... in Java
In this section, you will learn how to connect the MySQL database... that helps you
to create a database table in a database through the java file
Depth-first Polymorphism - tutorial
Depth-first Polymorphism
2001-02-15 The Java Specialists' Newsletter [Issue 009] - Depth-first Polymorphism
Author:
Dr. Heinz M. Kabutz
If you... of Common Lisp."
And Java?
Depth-first Polymorphism (or Customised
Conditions In Java Script
Conditions In Java Script
In this article you learn the basics of JavaScript and
create your first JavaScript program.
About JavaScript
Java
Java EE or Java
should first learn Java and then JEE.
Tutorials to Learn Java
Java Index...What to learn - Java EE or Java?
As a beginner if you are looking... Java correctly. So, let's
first understand about different distribution
Create a Desktop Pane Container in Java
Create a Desktop Pane Container in Java
In this section, you will learn how to create a desktop
pane container in Java. The desktop pane container is a container, which
Java Create Directory - Java Tutorial
Java Create Directory - Java Tutorial
In the section of Java Tutorial you will learn how to
create directory using java program. This program also explains
Java Script With Links and Images
Java Script With Links and Images
In this article you learn the basics of JavaScript and
create your first JavaScript program.
JavaScript Images
Java program? - Java Beginners
Java program? In order for an object to escape planet's.... The escape velocity varies from planet to planet.Create a Java program which calculates the escape velocity for the planet. Your program should first prompt
Hello world (First java program)
Hello world (First java program)
.... Hello world program is the first step of java programming
language... to develop the robust application. Java application program is
platform independent
Looping In Java Script
Looping In Java Script
In this article you learn the basics of JavaScript and
create your first JavaScript program.
What is JavaScript loop?
The JavaScript loops used to execute the same block or code a specified number
java program
java program write a program to create text area and display the various mouse handling events
Java get Next Day
Java get Next Day
In this section, you will study how to get the next day in java...()
provide the string of days of week. To get the current day, we have used
Navigation with Combo box and Java Script
Navigation with Combo box and Java
Script
In this article you learn the basics of JavaScript and
create your first JavaScript program.
What is JavaScript program
java program write a program to create server and client such that server receives data from client using BuuferedReader and sends reply to client using PrintStream Layout Components in a Grid in Java
Create Layout Components in a Grid in Java
In this section, you will learn how to create layout components
with the help of grid in Java Swing. The grid layout provides
Create a JRadioButton Component in Java
Create a JRadioButton Component in Java
In this section, you will learn how to create a radio
button in java swing. Radio Button is like check box. Differences between check
Create a Frame in Java
Create a Frame in Java
Introduction
This program shows you how to create a frame in java AWT package. The frame in java works like the main window where your
java program
java program Problem 1
Write a javaScript program that would input Employee Name, rate per hour, No. of hours worked and will compute the daily wage... I received already the answer last day thanks for the help...Plz answer
java program
java program Create a washing machine class with methods as switchOn, acceptClothes, acceptDetergent, switchOff. acceptClothes accepts the noofClothes as argument & returns the noofClothes
Java Program MY NAME
Java Program MY NAME Write a class that displays your first name vertically down the screen where each letter uses up to 5 rows by 5 columns...() { }, Then, method main should create an object of your class, then call the methods
Java Program - Java Beginners
Java Program Hi I have this program I cant figure out.
Write a program called DayGui.java that creates a GUI having the following properties...
Mnemonic B
Add individual event handlers to the your program so that when
How to create a class in java
How to create a class in java I am a beginner in programming and tried to learn how to do programming in Java. Friends please explain how can I create a class in Java
java program
java program . Create Product having following attributes: Product ID, Name, Category ID and UnitPrice. Create ElectricalProduct having the following additional attributes: VoltageRange and Wattage. Add a behavior to change
java program
java program Create an Employee class which has methods netSalary which would accept salary & tax as arguments & returns the netSalary which is tax deducted from the salary. Also it has a method grade which would accept
Sum of first n numbers
Sum of first n numbers i want a simple java program which will show the sum of first
n numbers....
import java.util.*;
public class SumOfNumbers
{
public static void main(String[]args){
Scanner input=new
Java Program
Java Program A Simple program of JSP to Display message, with steps to execute that program
Hi Friend,
Follow these steps:
1)Go...:\apache-tomcat-5.5.
2)Like that create another variable classpath and put the path
Java program - Java Beginners
Java program Dear maam/Sir,
I am a 2nd year Computer programming student and we are given an assignment, a program that will show... not that familiar with java environment.. My friend told me that you can help me with my
java program
java program You need to keep record of under- graduate, PhD students, faculty and staffs
of your institute.
Write an abstract class Person... the
classes.
Create a TA class which is derived from PhD_ student and Faculty
java program
java program . Create a Bank class with methods deposit & withdraw. The deposit method would accept attributes amount & balance & returns the new balance which is the sum of amount & balance. Similarly
How to Java Program
for the respective operating system.
Java Program for Beginner
Our first...
How to Java Program
If you are beginner in
java , want to learn and make career in the Java
Java Program
Java Program I want the answer for this question.
Question:
Write a program to calculate and print the sum of odd numbers and the sum of even numbers for the first n natural numbers, where n is given as input from the user
First Hibernate Application
First Hibernate Application
In this tutorial you will learn about how to create an application of Hibernate 4.
Here I am giving an example which... : At first I have created a table named person in
MySQL.
CREATE TABLE `person I have this program I cant figure out.
Write a program called DayGui.java that creates a GUI having the following properties...
Setting- cmdBad, Bad, B
Add individual event handlers to the your program so
Java Program - Java Beginners
Java Program Write a java program to find out the sum of a given number by the user? Hi Friend,
Try the following code:
import...){
System.out.print("Enter first number:");
Scanner input=new Scanner(System.in Program
Java Program Problem Statement
You are required to play a word-game... is a bull and which one is a cow! Bulls are counted first and then cows... and the number of cows and bulls for each word, your program should be able to work out
OOP Tutorial [first draft]
Java: OOP Tutorial [first draft]
Table of contents
Introduction....
Using the constructor
Here is the first program rewritten to use the above class....
These notes are about programming and Java language features necessary
for OOP, and do
java
a program easily from one computer system to another.
* Java works on distributed... in program during the execution of respective program code.
* Java supports... tasks simultaneously within a program. The java come with the concept
program help - Java Beginners
program help In the following code, I want to modify class figure...,
Abstract Class
In java programming language, abstract classes are those... are not instantiated directly. First extend the base class and then instantiate
Create a ToolBar in Java
Create a ToolBar in Java
In this section, you will learn how to create toolbar in java... been arranged
horizontal in this program but, if you
want to make it vertically
Java Program
Java Program Problem Statement
A game is made around the basic concept of a man pushing a box in a maze, from its source to its destination... destination cell.
Input Specification
The first line contains two integers R
Java Program - Java Beginners
Java Program Code an application called Week Four Program... for an option
If the user selects the first menu option... the third menu option, the application exits.
Program should be coded
What is JavaScript? - Definition
the basics of JavaScript and
create your first JavaScript program.
What... JavaScript Program
In the first lesson we will create very simple
JavaScript program...;head>
<title>First Java Script</title>
<script
Java Thread Priority
Java Threads run with some priority
There are Three types of Java Thread priority
Normal, Maximum and Minimum
Priority are set by setPriority() method.
Java Thread Priority Example
public class priority implements
java program - Java Beginners
java program "Helo man&sir can you share or gave me a java code hope... of "a" to its previous value.
III. Assign the variable "first" the value returned by method ONE with the parameters 6 and 8
IV. Update the value of "first Program - Java Beginners
java Program 1. Write a java program to accept 10 numbers in an array and compute the sum of the square of these number.
2. Wrtie a java program...)
{
Scanner input=new Scanner(System.in);
System.out.print("Enter First String
Java Program - Java Beginners
Java Program Code an application using a GUI called Week Four Program that provides 3 menu options:
1: List Entries
2: Add Entries
3: Exit
Prompt for an option
If the user selects the first menu option
java program - Java Beginners
java program m postin a program...hv 2 create a class,wid 6 data membr havin name,no,grade,n marks n 3 sub..........dere vil b 3 membr function.d 1st taks input,2nd displays outut,3rd assign grades......2 obj wil b creatd
Learn Features of Spring 3.0
. The Spring 3.0 Framework is released with the support of
Java 5. So, you can use all the latest features of Java 5 with Spring 3
framework.
The first... and released to simplify the development of
Enterprise Java applications
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/92448 | CC-MAIN-2013-20 | refinedweb | 2,418 | 62.07 |
Maintaining a growing software product can be daunting. You go from a two bedroom apartment to an office full of people and along the way, teams are formed, deployments are slow, and the new data science guy only codes in R.
Everyone is producing code and lots of it, but where do you put it all?
At LogRocket, we have thousands of files that keep our services looking nice and running smoothly. From frontend to backend, every line of code is stored in a single, gloriously fat git repository.
This approach is known as a monorepo.
Sounds like a mess
Surprisingly, it really isn’t. A common alternative approach is to have one repository per service.
This is clearly appealing.
It keeps services focused and avoids coupling of code. Unfortunately it just never stays this contained. If you deploy two microservices in the same language, odds are they’ll share a lot of boilerplate code. And if they also want to communicate, they should probably share an RPC schema.
The end result is a bunch of random shared repositories that only exist to serve as the glue between real services. It may look cleaner, but it really isn’t any less of a mess:
Repositories also add a lot of developer overhead. They need to be created, hooked up to CI, and cloned by everyone.
And that’s just to get started.
Keeping them up-to-date is hard, changes to more than one involve tracking multiple PRs at the same time. And git subrepos are rarely the answer if you want any sort of consistency. If most of your developers have the same set of repositories cloned into the same places, there must be some clear benefit to that separation.
The taming of the mess
Separation of code, you say. Of course, a monorepo can also backfire in similar ways. Keeping code together is enticing; having it grow into a seamless ball of mud is not. But separation is not the problem, repositories are the problem. Every project should still have a module structure to keep code separated. Luckily, this is easily solved by a bit of tooling.
In JavaScript, local module management is most easily done with one of two tools:
- Lerna — Sits on top of npm and manually symlinks local packages into your node_modules folder. Provides a lot of tooling for publishing individual sub-packages.
- Yarn Workspaces — A fairly new yarn feature, similar to lerna, but with a leaner feature set and a focus on performance
Both of these essentially turn your entire repository into a collection of private npm packages. Set up with yarn workspaces, the multi-repository project becomes:
Since it avoids the overhead associated with creating and managing git repositories, a lot of things start to break out more clearly. The penguin base repository here has turned into separate server, logging, and errors packages.
Other benefits
Development
Sharing a single directory tree is surprisingly handy. You can set up a file that imports all the services used by your app, and start them with a single command:
import Server from 'server'; import Queue from 'queueWorker'; import Resizer from 'fileResizer'; Server.create({ port: 5000 }); Queue.create({ port: 5001 }); Resizer.create({ port: 5002 });
This is much simpler than having to remember to start everything, or taking the extra steps to recreate your production environment on a local docker installation.
Testing
Taking this idea of importing other packages further, end-to-end tests become much more manageable. Imagine for example, that you’re testing the processing pipeline for your instaphoto startup. You can simply mock out the parts you don’t want in any service of the pipeline. This is how you get truly fast end-to-end tests:
import Server from 'server'; import Logger from 'logger'; import Slack from 'slackNotifier'; import sinon from 'sinon'; it('should log startup errors and send them to slack', () => { sinon.spy(Logger, 'logException'); Slack.notify = sinon.spy(() => {}); Server.create({ port: 5000 }); Server.create({ port: 5000 }); // port already taken expect(Slack.notify).to.be.called(); expect(Logger.logException).to.be.called(); });
This setup allows for much simpler development than having to recreate your production environment on a local docker installation.
Code review
In a monorepo, all code changes for an improvement or a new feature can be contained in a single pull request. So you can, at a glance, see the full scope of the change. Code review can also be done in one place and discussions are tied to the feature, not the individual parts of whatever teams are involved. That’s true collaboration.
Deploy, roll back, deploy again!
Merging a pull request like this means deployment to all involved systems can happen at the same time.
There is some work required to build an individual package when using lerna or yarn workspaces. At LogRocket we’ve settled on roughly this:
- Create a new build folder containing just the global package.json
- Go through all local packages required for the service
- Copy them into the build folder and add their external dependencies
- Run npm install
And since there’s nothing like production traffic to find edge-cases, rolling back buggy code is as easy as reverting a single commit. Something that is easily done, even at 3am on a Sunday.
Public packages
At LogRocket we share code across our entire stack: backend, frontend, and even with our public SDK. To keep our wire format in sync, the SDK is published with some of the same packages used by the backend services that process data. They’re never out of sync, because they can’t be out of sync.
Final thoughts
There are still cases where you will still need separate repositories. If you want to open source some of your code, or if you do client work, you may wish to keep some things separate.
Do you have a better way? Let us know here or on Twitter.. | https://blog.logrocket.com/the-monorepo-putting-code-in-its-place-e073b3eb6295/ | CC-MAIN-2019-43 | refinedweb | 987 | 64.2 |
In the previous article, we created a game pad service. Here, we will create another service that will provide graphical feedback about the pad. Even for this article, remember that this is not a CCR or a DSS tutorial, and a good knowledge of these technologies will be very useful. In this article, I assume you know what messages, queues and services are. I will also skip all the actions that are similar to the previous service: service creation, message definition and message handling.
The base idea is this: the GUI service can receive some kinds of messages and in the corresponding message handlers, the UI forms elements are modified.
I decided to create a service rather than modify the pad service form to avoid tight coupling modules. A GUI service is always more reusable than a single form. It can also be used to handle different hardwares such as a joystick. Few words about messages: in this, services are defined as ALL the Pad service messages so that we have a 1 to 1 relationship between the two services messages. I don't think this is a coupling to "Pad Service" because the message definition is quite abstract. Remember that this service assumes that stick ranges are in [-1,1]. Remember that Service Messages have the same class name but are defined inside two different namespaces.
[-1,1]
The Solution has two projects:
Pad.Controls
PadGui
The main form has two stick controls and 4 labels which will be red when a button is pressed. We also have an About box. Inside the main form, several properties are declared to handle pad state. Inside the StickGauge class, two properties are defined to handle x and y stick position. Whenever the values change, an Invalidate() is performed and the corresponding OnPaint code is as follows:
StickGauge
x
y
Invalidate()
OnPaint
private void panelPainter_Paint(object sender, PaintEventArgs e)
{
e.Graphics.DrawEllipse(
new Pen(Brushes.Red), 5, 5, panelPainter.Width - 10,
panelPainter.Height - 10);
e.Graphics.FillEllipse(
Brushes.Red,
panelPainter.Width / 2 - 5, panelPainter.Height / 2 - 5, 10f, 10f);
int x = Convert.ToInt32((_x + 1) * panelPainter.Width / 2);
int y = Convert.ToInt32((_y + 1) * panelPainter.Height / 2);
e.Graphics.FillEllipse(Brushes.Blue, x - 5, y - 5, 10, 10);
e.Graphics.DrawCurve(new Pen(Brushes.Blue),
new Point[]{
new Point(panelPainter.Width/2,0),
new Point(x,y),
new Point(panelPainter.Width/2,panelPainter.Height),
});
e.Graphics.DrawCurve(new Pen(Brushes.Blue),
new Point[]{
new Point(0,panelPainter.Height/2),
new Point(x,y),
new Point(panelPainter.Width,panelPainter.Height/2),
});
txtXY.Text = string.Format("X({0})
Y({1})",Math.Round(_x,4),Math.Round(_y,4));
}
This is not a really amazing graphic experience, but it is perfect to have device feedback.
Inside the service startup phase, I use this snippet to start a Window. Remember to add a reference to the assembly Ccr.Adapters.WinForms.dll:
using Microsoft.Ccr.Adapters.WinForms;
...
//Create Form
PadInfoForm _form = new PadInfoForm();
WinFormsServicePort.Post(new RunForm(
delegate()
{
return _form;
}
));
And whenever a message is received, we perform the following actions:
[ServiceHandler(ServiceHandlerBehavior.Exclusive)]
public IEnumerator<ITask> RightStickUpdateHandler(RightStickUpdate rsUpd)
{
_form.RX = rsUpd.Body.Stick.StickValueX;
_form.RY = rsUpd.Body.Stick.StickValueY;
rsUpd.ResponsePort.Post(DefaultUpdateResponseType.Instance);
yield break;
}
The assignment to form property causes the stick update.
If you are asking who notifies messages to the GUI Service, you are on the right path. Right here, we created an Active Service that sends notifications to other services. This service instead is something like a "Passive Service". It receives only messages so that we can compile and run this service but we won't get any feedback as long as we don't send messages to this service. This is the aim of the next article that I'll write. I will use VPL to connect the pad service and the pad GUI service. The next improvement for this service will be a WPF Interface. | http://www.codeproject.com/Articles/20407/Microsoft-Robotics-Pad-GUI-Service | CC-MAIN-2015-48 | refinedweb | 658 | 51.44 |
DuplicateMnemonicKiller class automatically assigns accelerators to controls
Environment: VC5, NT4 SP5
Here's how you use it:
1). Put the file DuplicateMnemonicKiller.cpp in your project's directory. (There's no need to actually add it to your MSVC project, though.)
2). Add this line to the CPP source code file of the dialog box class or property page class that you want to work on. This needs to go after the #ifdef _DEBUG stuff that ClassWizard inserts at the top of the file.
#include "DuplicateMnemonicKiller.cpp"
3). Add this line to either (a) OnInitDialog(), or (b) OnInitialUpdate(), depending on whether you're working on a dialog box, property page, CFormView, etc., after any call to the base class method. The "sReserved" parameter is for letters you don't want to be assigned a accelerator key; top-level menu items, for example. Setting "bUsePunctuation" allows the class to use characters like periods and parenthesis as accelerators.
The "nUseAnyUnique" parameter causes the class to take a "shortcut"; the values start at zero, and the larger the value, the faster the class can find a solution. What's the catch? Well, the larger the value of "nUseAnyUnique", the more likely it is that the accelertor keys will be at the end of the controls' text, and most people don't like that. Generally, you should start with the parameter at zero, and increment it by one if the class can't find a solution quickly.
DuplicateMnemonicKiller dmk(this, /*sReserved*/ "FEVH", /*bUsePunctuation*/ false, /*nUseAnyUnique*/ 0);
4). Make sure that all the controls you want accelerators for in your dialog box (or whatever) already have an "&" in them. This is how DuplicateMnemonicKiller knows which controls to try to assign accelerators for, and which not to.
5). Build and run your program.
That's it! Progress reports are written to your application's title bar every few iterations. The final results are both TRACE'd and AfxMessageBox()'d. It is important to recognize that above about 17 controls, it can take a long, long time for DuplicateMnemonicKiller to find a solution. For most normal dialogs, however, it's quick enough; if it gets too slow on a particular dialog box, try building in Release Mode. Personally, if it takes more that a minute or two to find a solution, I increase "nUseAnyUnique" by one and try it again.
I created this class using MSVC 5, but it should work in MSVC 6. Please let me know of any bugs or enhancement ideas.
There are no comments yet. Be the first to comment! | http://www.codeguru.com/cpp/controls/controls/article.php/c2321/DuplicateMnemonicKiller-class-automatically-assigns-accelerators-to-controls.htm | CC-MAIN-2014-10 | refinedweb | 427 | 64.51 |
So, nothing extraordinary here, but I’ve finally taken the dive into LINQ. I’ve got a great book that helping me a lot that I recently review called Pro LINQ by Joseph Rattz (his humor is a little odd, but the book is great). At anyrate, moving right a long, all I did was right click on my web projects app_code directory, choose LINQ to SQL classes,
Then, choose some tables from my database. Then, in my code, all I had to write was the class in my c# code (part of my database layer classes). Here is what the code looks like (sorry about the jpg, I’m still trying to figure out how to get Copy SourceAsHTML working in vs2008)
And, without LINQ, this is what I would have had to do. No type safety and took 3 times as long to write.
You can tell what I’ll be doing for now on (climbing the curve!)
Thank you for the explanation…
return (int)SqlHelper.ExecuteScalar(connectionString, “select count(*) from content where IsDirectory = 1 and id = @id”) == 0;
That’s a single line.
And does the LINQ implementation send a full query or just a count query? That can be a big waste, especially if this is called often.
I would probably be caching this item as well. | http://peterkellner.net/2007/12/24/linqcountfirsttime/ | CC-MAIN-2017-39 | refinedweb | 221 | 78.79 |
fsetpos (3p)
PROLOGThis manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
NAMEfsetpos — set current file position
SYNOPSIS
#include <stdio.h>
int fsetpos(FILE * stream, const fpos_t *pos);
DESCRIPTIONThe functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard.. The fsetpos() function shall not change the setting of errno if successful.
RETURN VALUEThe fsetpos() function shall return 0 if it succeeds; otherwise, it shall return a non-zero value and set errno to indicate the error.
ERRORSThe fsetpos() function shall fail if, either the stream is unbuffered or the stream's buffer needed to be flushed, and the call to fsetpos().
- ENXIO
- A request was made of a nonexistent device, or the request was outside the capabilities of the device. | https://readtheman.io/pages/3p/fsetpos | CC-MAIN-2019-18 | refinedweb | 178 | 55.74 |
Wikipedia:Wikiquette alerts
Wikiquette alerts are an informal, streamlined way to request perspective and help with difficult communications with other editors, so this can be a good place to start if you are not sure where else to go. It is hoped that assistance from uninvolved editors can help to resolve conflicts before they escalate. That's usually the best solution for everyone involved. If problems continue, there are further options as described in this dispute resolution process link and the info-box to the right. For more information, click any of those links, or start with this article: Wikipedia:Etiquette.
IMPORTANT: Read the instructions before posting your alert. Review the section titled Procedure for this page, immediately following the table of contents below.
Please help to respond to Wikiquette alerts. This page is run by regular editors just like you, and needs more editors to help with the alerts. Anyone is welcome to help out, and in particular if you have been helped by this page, please return the favor by offering your advice on other incident reports. That's not a requirement for posting your report, but it's a good idea, and your help would be much appreciated. Responding to alerts is also a good way to learn more about Wikipedia policies and even more, about how to work with other users to calm situations without resorting to formal procedures. Wikiquette in a way is the basis of what allows the community to work smoothly together, so those are valuable skills to develop for those who like to edit Wikipedia. Thanks!
Are you in the right place?
- To report obvious vandalism please use administrator intervention against vandalism.
- Violations of the three revert rule should be reported at Wikipedia:Administrators' noticeboard/3RR.
- If you feel an article needs protecting against rampant vandalism or a persistent edit war, please use requests for page protection.
- If you believe that one or more editors are sock puppets, please use Wikipedia:Suspected sock puppets.
- See also Wikipedia:Dispute resolution, and helpful advice is at Wikipedia:Disruptive editing#Dealing with disruptive editors
- For all other incidents that may require administrator intervention, please use Wikipedia:Administrators' noticeboard/Incidents.
- If two or more editors have warned another editor about his/her behavior, and the problem has continued, an alternative to posting to this page is to post at Wikipedia:Requests for comment/User conduct.
- If you would like to get one-to-one advice, feedback, and counseling from another editor, you can use Wikipedia:Editor assistance.
Procedure for this page
Instructions for users posting alerts
This page is not formally monitored; all helpers here are volunteers. It may take some time to receive a response. If the problem is continuing and you have not received any results from your posting here, return to this page and post an update to your original statement of the problem.
If you have not received help and the problem becomes urgent or is escalating, refer to the list above and post your report to one of the Administrator Notice boards instead. In that case, please edit your post on this page to inform us that you have reported it elsewhere.
- Post your alert to the bottom of the page — add a new sub-heading with a short description of the issue:
- A single posting per alert is sufficient. Avoid an extensive discussion of the problem or issue on this page and instead supply a simple direct explanation of the problem, along with the user-ids of the Users involved and a link or two to the page where the problem is happening. A concise way to do this is to include diffs that show the problem. (A guide to creating diffs is here).
- Describe the problem or issues as neutrally as possible. Avoid emotional content that could cause the problem to escalate.
- Sign your report by using ~~~~ or the signature button in the edit bar.
- Notify the reported user(s). Place a polite short statement on the user(s) talk page, or on the talk page of the article if several users are involved, to notify them that you have filed an alert here.
- Do not continue your discussion in detail here. Instead, continue discussing it wherever you originally were—editors responding to posts here will review the discussions where they are occurring. They may post notes there; however, in situations that involve multiple pages, reviewing editors will post their comments here, so watch this page and refer back to your entry when you notice that a comment has been posted.
If your specific issue is already being discussed elsewhere, please do not file a WQA. It is much easier for other users to help you when your dispute is being handled in one forum, not ten. If an issue is already serious enough to have gone to WP:ANI or WP:RFC, there's not much we can do to help.
If you're filing a report to complain about a WQA editor who responded to a previous WQA alert, please stop now, and think. If you were contacted by a WQA volunteer based on a previously filed alert, they were acting as a neutral third party and probably have no interest in personally entering into a dispute with you. Asking you to respect WP:CIVIL or telling you not to make personal attacks does not itself constitute any sort of incivility or personal attack.
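Putting the posting steps above together, a hypothetical new alert might look like the following in wiki markup. The username, page, and diff link here are placeholders, not real cases:

```wikitext
== Incivility at Talk:Example ==
[[User:ExampleUser]] has repeatedly used insulting language during a content
dispute at [[Talk:Example]] (see [http://en.wikipedia.org/... this diff]).
I have tried discussing the matter on their talk page without success.
Perspective from uninvolved editors would be appreciated. ~~~~
```

Note the neutral wording and the closing `~~~~`, which the software expands into a signature and timestamp; the timestamp is also what the archiving bot uses to age the thread.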
Instructions for editors responding to alerts
All editors are invited to assist resolving reports entered on this page. Please strive for neutrality and focus first on calming tempers where discussions have become heated.
To help with this page, place it on your watchlist so you can see when new alerts are posted. Alternatively, you can quickly scan the page for items that are either marked with the WQA "Work in progress" message-box or have no status indicator at all. If an alert does not have any kind of response yet, it is open and needs attention.
- Review open items and when you find one that you feel you can help with, visit the relevant pages and observe the situation. When you are ready, enter your helpful comments and strive to improve the situation, at the disputed page if appropriate, or on this page if that seems better.
- Enter a note on this page in the relevant section to indicate that you have joined the process. Add the {{WQA in progress}} template (as displayed above) to the top of the report item's sub-section.
- If there is no action needed, or after action has produced results, enter a note on this page to describe the results, including constructive comments about any Wikiquette breaches you may have seen.
- When the issue is ready to be closed, follow the steps below.
Closing the reports:
- If you believe the situation is resolved with consensus or at very least grudging acceptance of the involved parties, close the item by entering the {{resolved}} template at the top of the item's sub-section.
- If, on the other hand, after you have done your best you find that the problems are continuing and there is nothing further you can do, consider what the best next step for the parties would be. State on the relevant talk page, and also in a comment here, which dispute resolution process you recommend, and include a link for them to follow. To close the item here, enter the {{stuck}} template at the top of the item's sub-section.
- For items that remain open but have no additional comments added after several days, close the item by entering the {{stale}} template at the top of the item's sub-section.
- For items that should not have been posted to WQA at all -- blatant vandalism, accusations of sock puppetry, a request for adminship, etc., please refer the original poster to the proper forum, and place the {{NWQA}} template under the section heading.
The five templates may optionally include your signature and additional summary comments by including them after a pipe, for example, {{resolved|User has been reminded about [[WP:NPA]] ~~~~}}, or {{stuck|Referred to [[WP:ANI]].}} This automatically formats the included text as small text displayed next to the template. For detailed instructions on using the templates, click any of the template name links above. Please note that using a timestamped signature with one of these templates will delay the automatic bot archival of that alert to five days from when you sign the template.
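As a concrete illustration, closing an alert as resolved or stuck might look like this at the top of the item's sub-section. The summary text is illustrative, not a real case:

```wikitext
{{resolved|Both editors have agreed to continue the discussion on the article talk page. ~~~~}}

{{stuck|No progress after several rounds of discussion; referred the parties to [[WP:RFC]].}}
```

Remember that including a timestamped signature inside the template, as in the first example, delays bot archival to five days from that signature.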
Archiving alerts
Reports are considered closed and automatically archived by bot (whether tagged or not using any of the templates described above) five days after the last timestamped signature in the discussion. Reports marked resolved, stuck, or NWQA (as above) may be manually archived sooner than this. Links to the archive pages can be found in the Archive Box next to the Table of Contents above.
Active alerts
User:Thegreyanomaly
RobJ is trouble
User:B'er Rabbit
This user is upset about an AfD he feels is incorrect (which is of course acceptable), but responds to this with personal attacks. If this was a one-off incident, I wouldn't really care, but he has done this to multiple users[7], and looking at his recent contribs, also a lot worse on completely unrelated pages: "anal-retentive utter douchebags". A warning on his talk page was met with even worse PAs (IP, but looking at the contribs and language, it looks very likely that it's the same user): "You are a straight up ass hole", "you major pig-headed DOUCHE of the world". User claims to have retired. Input from uninvolved people may be helpful here. Fram (talk) 11:16, 25 June 2009 (UTC)
- ...already at WP:ANI for hopefully an enforced retirement, and as it requires verification that the IP editor was indeed the user. (talk→ BWilkins ←track) 11:27, 25 June 2009 (UTC)
- User was given a 2-week break in the briar patch on June 27 [8] which will hopefully chill him down sufficiently. Baseball Bugs What's up, Doc? carrots 03:53, 1 July 2009 (UTC)
ChyranandChloe
ChyranandChloe has dominated Health effects of tobacco and the associated Talk page, and there are huge WP:Ownership issues which are preventing the article being edited in the usual way and have led to the article becoming POV. See Talk:Health effects of tobacco#Ownership and POV. Johnfos (talk) 05:27, 26 June 2009 (UTC)
- This is more of a content dispute than a civility issue. I would recommend initiating a request for comment on the articles talk page in order to involve more users and form a consensus on what form the article should take. I do see the problem you have identified though, Chloe does appear to want to retain control of the article, which is not acceptable. I'm going to leave a note on her talk page to that effect. Beeblebrox (talk) 20:47, 26 June 2009 (UTC)
- Johnfos's comment is a personal attack. My intentions are simple: improve the article. Do I believe the actions conducted by Johnfos improve the article? No. The statements in the discussion explain how. Do I believe my comment was well written? No; if I could take it back, I would. Does this imply that the comment is without merit? No. Now, before templating, ask how. Are you saying, Beeblebrox, that because I "want to control the article", I should be dismissed, without reasonable discussion, and without an objective assessment? I believe this article, and those who work on it, should receive an objective assessment that does not attack character first and content second. ChyranandChloe (talk) 19:49, 27 June 2009 (UTC)
- Please read my above remark, in which I clearly acknowledge that the core problem here is a content dispute and I suggest appropriate remedies. I don't see any blatant incivility or personal attacks or anything like that coming from either of you, but you do seem to want to keep "your" version of the article. That's all I'm trying to indicate, and I think the simplest way out is to initiate an rfc, both of you state your case, then let others chime in until a rough consensus is reached on the article's structure. Beeblebrox (talk) 21:30, 27 June 2009 (UTC)
- There was no discussion over content between me and Johnfos over the issue of Sections "Prognosis" and "Cancer". The previous was ambiguously short, and quickly resolved itself. It feels like we're skipping steps. A RFC entails that we've actually made our statements about content, so far its been about character. Now, you have stated that I "want" to retain my version of the article, which is "not acceptable". First, support your claim. How does one revert, and one comment, both of which are applicable under WP:BRD, become unacceptable? I understand the dispute resolution process, Beeblebrox. Content is discussed on the article's talk page. Wikiquette is about behavior, and that is the central point I am discussing on this talk page. ChyranandChloe (talk) 22:10, 27 June 2009 (UTC)
I'm not unhappy with the way the article is at present, but if someone wishes to start an RfC that's fine. I feel there is an opportunity here to discuss the ownership issue, as it does fundamentally relate to wikipedia etiquette. I would like to know why ChyranandChloe made all these "I" statements (See Talk:Health effects of tobacco#Ownership and POV) as if she owned the article:
- "I chose the section titled "Prognosis" for a reason."
- "I chose "Prognosis" ..."
- "I layed out the article ..."
- "I actually decided not to have a "Cancer" section ..."
- "I chose to break it down ..."
-- Johnfos (talk) 22:23, 27 June 2009 (UTC)
- Two parts. The first: to write a comment to be more objective, such as stating the points without acknowledging who is making those points, would have been possible. However, writing in such a manner is unnatural and hard to reply to. The intent was never to create a sense of ownership. This appears to be a misunderstanding leading to the accusation. The intent was to prod Johnfos to discuss the issue while plainly making clear (1) what the disagreement was about and (2) who he was going to be disagreeing with. The second part was accuracy, alluding to the "2008-2009 Copyedit" proposal.[9] Not using "I" felt like shirking responsibility: I was the one who laid out the article you are dissatisfied with in the past—and without that acknowledgement, it would seem offensive.
Now that you have a person to disagree with and the background to why that person is disagreeing with you, discussion seems a lot more natural—unlike the previous one, where you've only minced a sentence's worth of words without exploring in any detail what you or I were trying to accomplish and what we could do together to improve it.[10] 23:07, 27 June 2009 (UTC)
- In terms of number of edits on the article and Talk page, you have dominated for a long time. Have you ever really considered that perhaps you've used all those "I" statements because you have a huge personal investment in this article and want to retain control over it? It remains to be seen whether you can step back a little and encourage other editors on the page instead of pushing your own agenda. I would particularly like to see User:FocalPoint being encouraged, as I feel he keeps a close watch on what is going on, and has had some good ideas in the past which haven't been taken up. And please bear in mind that if someone does a handful of edits and provides good edit summaries, there often is no need for discussion on the Talk page. It is not as if people have to OK their edits with you, is it. Johnfos (talk) 00:06, 28 June 2009 (UTC)
- You are continuing the presumption that I dominate the article. I completely disagree. I have considered the use of the word "I", which was explicitly stated two comments ago "Do I believe my comment was well written? No[...]"; and from which I have further described the "misunderstanding" in my previous comment.
Now your second assertion that: I "have dominated for a long time". I have the most edits. To specify an exact date, my first edit was in August 2008,[11] but I didn't seriously work on it until December of 2008 with the "2008-2009 Copyedit" proposal.[12] A long time? No, at least not in the context that this article was created in 2006.[13] Now for the second part, do I dominate the article? No, I ask about content: this is always the first question I always ask. The answer to that question, judged objectively, is the answer I go by. Personal investment is a misunderstanding, I go by what improves the article.
So far you have answered little about content and much about foul play. Now before you continue to what amounts to whining: are you saying that because you have done none, asked none, you deserve that this comment be an apology? That is, a consensus resolve is an apology. When this is about character, the objectivism I offer in content disputes—how I gave Focalpoint an RFC to his liking,[14] and how I offered MastCell much-needed work[15]—are off. ChyranandChloe (talk) 06:10, 28 June 2009 (UTC)
- You have dominated the article and its Talk page for a long time, and you are continuing to take a domineering and aggressive attitude here and now. Instead of taking a mature "tell me more" approach to the ownership issue I am raising you have resorted to immature name calling about "personal attacks", "foul play", "whining" etc. Please try to understand and accommodate what others are saying more and this will help you grow as an editor. Johnfos (talk) 06:40, 28 June 2009 (UTC)
- I would like to point out the civil attitude I have received from User:ChyranandChloe in the call made by the user for Wikipedia:Dispute resolution. I hope that this attitude will continue here.
- More, I see that User:Johnfos has made no personal remarks here.
- On the issue discussed here, I will not devote a lot of time searching the history of the article, I will just deposit my personal experience: With this edit I understand that ChyranandChloe proposes to me that I work on sandboxes for the article in the user's namespace (User:ChyranandChloe/Workshop 17 and User:ChyranandChloe/Workshop 15). These sandboxes may be appropriate for ChyranandChloe preparing a text, but they are not the proper way for other users to edit the article. If someone else would edit there, it would mean that the edits would be vetted by ChyranandChloe instead of being in full view of all the people who are interested in the article. I assume good faith, but I also see that the claim for "WP:Ownership issues which are preventing the article being edited in the usual way" is reasonable. I believe that ChyranandChloe should consider User:Beeblebrox's suggestions.--FocalPoint (talk) 16:36, 28 June 2009 (UTC)
PeeJay2K3
I have been involved in discussions with User:PeeJay2K3 at Wikipedia talk:WikiProject Football#National team yearly articles?, and the user seems to be WP:HOUNDING. I have civilly given enough evidence to support my opinion on the matter and am ready to just leave it as is. My concern, however, is that this particular user has a vendetta to prove. Brudder Andrusha (talk) 20:53, 26 June 2009 (UTC)
- This is something we say again and again here. Two editors find themselves in an argument, and go around and around without stopping to seek outside input, until finally they begin resorting to insults and so forth. If a content dispute goes two or three "rounds" without anyone changing their position, it is time to seek more input as opposed to continuing with endless circular debate. In cases where only two editors are involved a third opinion can be quite helpful. In more involved disputes you can initiate a request for comment. Both of you seem to be intelligent and helpful editors who just didn't know when to step back and let it go for a minute. Beeblebrox (talk) 21:17, 26 June 2009 (UTC)
- In my observation of two of the rows between the two, I would suggest that the complainant here is more in the wrong than the "defendant", both in terms of factual accuracy of the position they are arguing here, the proper use of an encyclopaedia here, and willingness to turn a discussion of facts or principles into a personal issue (first of the above), and also seems happy to undermine a proposal for a unified, encyclopaedic approach with sarcastic parody, as here. Kevin McE (talk) 07:28, 27 June 2009 (UTC)
- Agree with User:Kevin McE, it is PeeJay2K3 who is in the right on this occasion. - fchd (talk) 19:18, 27 June 2009 (UTC)
User:Balkanian`s word
Recently, User:Balkanian`s word embarked on a campaign to refer to me exclusively as "The reverter" [16] [17], which I find highly disparaging as well as an attempt to discredit me. After I reported him here, he agreed not to do it anymore [18] [19]. But now, he has started again, using this section heading to refer to me [20]. I really do not appreciate this, especially after he said he wouldn't do it anymore. --Athenean (talk) 00:36, 27 June 2009 (UTC)
- I have apologized to the editor on User talk:Athenean, but he is not willing to stop reverting every page I edit. In this case I added some sources, and Athenean reverted them saying that "there is no inline", while he could just request inlines without reverting, or put a {{dubious}} or {{inline}} template there. His attitude is quite non-wiki, trying only to remove everything which mentions "Albania" or "Albanian".Balkanian`s word (talk) 12:27, 27 June 2009 (UTC)
User:Logger9 dumping off-topic material in various solid-state-physics articles
While working on glass transition, I stumbled over long sections that are clearly off-topic, dealing with the physics of glasses but having no relation to the glass-liquid transition. It quickly turned out that these sections were contributions by User:Logger9. I invited him to discuss the contents section by section, but instead he just reverts my edits.
Looking deeper into the links and into his other contributions, I found that over the past months he has been pasting entire sections of text almost indiscriminately into quite a number of different articles with only a loose connection to their subjects. Just one example: a section about transparent ceramics has been inserted into the articles Nd:YAG laser, transparent alumina, and Aluminium oxynitride. In each case, the insertion featured a micrograph from a 1983 PhD thesis (obviously his own) that has only a very, very remote connection to the subject.
- another example: redundant and idiosyncratic material in the biography John W. Cahn.
Afterwards, I spent an entire evening cleaning up. In my opinion, this case reveals a severe problem with quality control. I think people who saw Logger9's contributions were just so impressed by their scientific appearance (tons of references! reading lists reaching back to Fourier's theory of heat!) that they did not feel competent and confident enough to protest that the material, in the context into which it was pasted, was bordering on blatant nonsense.
Vandals have never been a serious threat to WP. But how does one deal with a user who does 20% good, 80% harm?
-- Paula Pilcher (talk) 07:45, 27 June 2009 (UTC)
- This seems to be an ordinary difference of opinion about the content of articles which ought to be resolved by the usual processes of dispute resolution such as RFC. User:Paula Pilcher seems to be escalating to inappropriate places such as WT:AFD#Too complicated and bringing the matter here seems inappropriate too. Colonel Warden (talk) 09:10, 27 June 2009 (UTC)
- Forget about WT:AFD#Too complicated, please. That was meant as a more general comment on bureaucratic procedures that increase the assymetry between those who bring nonsense in and those who try to keep it out. -- Paula Pilcher (talk) 09:50, 27 June 2009 (UTC)
- Please see above where it is explained, "What WQA CANNOT do: Intervene in content disputes ... Mediate longterm, ongoing conflicts between two users". Colonel Warden (talk) 09:57, 27 June 2009 (UTC)
- It's not just between two users. It's pretty clear that before me several other editors have tried to convert Logger9 into a productive contributor, and they have all given up. And so will I, if upon this alert the community does not prove capable of dealing with this special kind of trolling. -- Paula Pilcher (talk) 13:35, 27 June 2009 (UTC)
- Maybe, but something has to be done. Many other editors have had similar problems with Logger9 in the past, see two very long threads on Talk:Glass. We ended up compromising by allowing Logger9 to create the article Physics of glass. However this article is extremely technical and pushes the POV of Logger9, rejecting the views of everybody else. In general Logger9 will not accept ANY removal whatsoever of their content and will simply revert the efforts of other users who attempt to disagree with their POV (And will NOT engage in any form of discussion about it). The reason why I tolerated the creation of Physics of glass was because I simply wanted to stop the edit warring and conflict caused by Logger9 at the Glass article, in particular to stop the insistent copy and pasting of huge portions of the same text into numerous other articles, which took us a lot of time to put right. Although evidently a lot of work has gone into Logger9's contributions, so far none of us have been able to sufficiently understand the content to be able to comment on its factual accuracy. However the key POV that Logger9 is pushing is that glass behaves as a solid and as a liquid. To be honest this idea is an established fantasy, yet Logger9 is attempting to push their beliefs in Physics of glass and will not compromise. You are absolutely correct in that we do not feel competent enough to protest the material, however this just goes to show how unencyclopedic the material actually is, when editors who work in the field of glass cannot even understand it! Jdrewitt (talk) 09:14, 27 June 2009 (UTC)
- Same situation for Phase transformations in solids: seems to be left to him as a playground for stuff he cannot land in Phase transition. Idem kinetic theory of solids, now blocked because of copyright violation. -- Paula Pilcher (talk) 09:38, 27 June 2009 (UTC)
- Glassy state was another dump ground for his blunder. Nobody else took note, let alone care of. I replaced it by a redirect to glass. -- Paula Pilcher (talk) 15:13, 27 June 2009 (UTC)
- If you look in the history of that article you will see that we have had problems with this article and its content previously. The major issue here is that Logger9 will not engage in any conversation with us about Wikipedia policy and how to write an article that is accessible to all readers. In general Logger9 does act in good faith and has contributed a lot to Wikipedia. They are not a malicious user and their conduct is not really the issue here, edit warring being the only real issue conduct-wise. The main issue is that Logger9 simply doesn't agree with us that there is anything unencyclopedic about their contributions, and it is difficult to engage in discussion with them about how to make the articles adhere to Wikipedia policy. Jdrewitt (talk) 17:42, 27 June 2009 (UTC)
Colonel Warden says I should transfer this discussion to RFC. RFC says Request_comment_on_users is the appropriate section. Wikipedia:RFC#Request_comment_on_users says that before starting the procedure two users should contact the user in question on his talk page. So I did: User_talk:Logger9#Reverts_and_deletions. Now I am waiting for a second person to admonish Logger9, and then we will see whether further action is needed. -- Paula Pilcher (talk) 17:53, 27 June 2009 (UTC)
- An RfC for the specific articles involved would be more appropriate, e.g. there are issues raised at talk:Physics of glass which have still not been addressed and it would be very helpful if the issues raised could receive wider attention. Jdrewitt (talk) 18:36, 27 June 2009 (UTC)
- I am both completely uninvolved and ignorant of the subject matter, but I was invited to look into the matter at Glass transition, and full protected the article as a result of edit warring. Logger9 was on the verge of a 3RR violation today, and Paula Pilcher and Colonel Warden both reverted as well for a total of about 7 reverts today (June 27) alone. Very few editors are using edit summaries on that page, and when they are, they tend to be unhelpful. Several of Logger9's comments on the talk page are uncivil, and in general I see little effort to discuss changes and gain consensus. Unfortunately, the most recent version, which is the one I protected (per policy), is Logger9's preferred version. Nevertheless, hopefully the protection will force all those involved to make an effort to discuss the changes they'd like to see to the article. I've given them 7 days to try. Exploding Boy (talk) 19:24, 27 June 2009 (UTC)
- The facts of the matter may be seen at edit history of Glass transition. Logger9 reverted 3 times but Paula Pilcher reverted 5 times. Paula Pilcher thus broke 3RR after being warned specifically about this. The article has been protected upon the version that she was warring for, not Logger9's version as Exploding Boy states. So far as civility is concerned - our main purpose here - Paula Pilcher seems quite uncivil, being overly given to ad hominem attacks. Logger9's may seem uncommunicative but perhaps someone should tell him of these proceedings. Colonel Warden (talk) 19:46, 27 June 2009 (UTC)
- You're right. I protected immediately after I saw your most recent revert, Colonel Warden, but it seems Paula managed to slip in another revert as I was protecting the page. Exploding Boy (talk) 19:57, 27 June 2009 (UTC)
- And having reviewed her contributions, I have temporarily blocked Paula for edit warring, although she has indicated she is planning to take a break for a few days anyway. Exploding Boy (talk) 20:05, 27 June 2009 (UTC)
- You wrote: "Unfortunately, the most recent version, which is the one I protected (per policy) is Logger9's preferred version."
- Nothing could be further from the truth. All of my original work has been removed. So be it! -- logger9 (talk) 13:15, 28 June 2009 (UTC)
- If you read a little further down you'd have seen that I began protecting the article while your preferred version was live, but it was reverted one more time before I was done. However, protection isn't about whose version is preserved; it's about preventing an ongoing edit war and giving everyone a chance to discuss and build consensus about the article. As to your "original work," if you are referring to original research, that's not what we do at Wikipedia. In any case, every version is stored in the article history and can be easily retrieved. So please: go to the Talk:Glass transition page and explain to your fellow editors exactly what changes you would like to see on that article, build consensus, and work together instead of edit warring. Exploding Boy (talk) 15:23, 28 June 2009 (UTC)
- It's NOT original research (though I figured that you would come up with that). As I stated clearly, it is my original work on the article. And you have crushed it in its entirety. If you want to replace it, then do so. If not, then don't. I am here to place the work in front of you (which by the way has taken a lifetime to assemble). If you choose not to use it, that is your issue.
- The new editor in question is completely irrational about it. Of course you have kept her version. She muscles herself in, and has you all on some kind of crazy witchhunt. She is sequentially attacking every single thing I have ever done for Wikipedia. I have worked VERY hard for this organization. And this is the result ?
- No thank you. There is no point in me spending days on end in some emotionally overheated unending discussion. I told folks in the beginning: If you don't like my work I'm outtahere. I was asked specifically to stay and produce. And that is exactly what I have done. Now if you want to let some crazy lady waltz in and waste it all step by step, feel free. As far as my "fellow editors" go, it is quite clear what she thinks of it. She has performed unwarranted and unjustified blanket deletions of every single byte of it repeatedly. While I have accepted ALL of hers. And you openly accuse me of being just as bad as her ? I don't see any comparison at all, in terms of our repeated reverts. I am merely trying to keep an animal level contributor from voraciously eliminating every single thing that I contributed to that article. She curses me openly and calls me names. And I have absolutely NO idea why. There seems to be some sort of inappropriate (and highly unprofessional) personal vendetta here. And she(?) clearly has a massive agenda.
- Do what you want with it. My work is yours if you want it, and you (and she) have currently chosen not to use it -- thus rewarding her handsomely for her final unwarranted revert. It's quite a game you have going there.
- The work is all I can offer. The work is WHAT I DO.
- What she does is.....well....you know......-- logger9 (talk) 17:20, 28 June 2009 (UTC)
(unindent) The best place to handle something like this, in my experience, is a WikiProject such as WikiProject Physics, because there you get a pool of experts in the topic area. If multiple editors come to an agreement that the contributions of an individual editor are disruptive, they can combine to revert edits that don't "play ball" without themselves reverting more than once per day. In situations like that, admins will almost always side with the group rather than the "rogue" editor. One-vs-one edit-warring is always a losing proposition for both sides, regardless of who is right. (Disclaimer: I am not an admin, and this approach is not officially sanctioned, but in my experience it is the only approach that really works.) Looie496 (talk) 17:34, 28 June 2009 (UTC)
- I think so too. That is why I have expressed my concerns there (WP:Glass) over all three articles she is attempting to waste. We have already agreed to do some rebuilding of what she has managed to lay complete waste to. But regarding this particular issue, I think that they are all having too much fun watching what happens here before they put anything on record. No one is saying ANYTHING about this one. They are just watching my trying to stay afloat by my own individual effort, time and energy.... -- logger9 (talk) 17:51, 28 June 2009 (UTC)
- Can you provide links to these discussions? On Wikiproject:Glass I see one comment posted by you and answered by no one. Exploding Boy (talk) 17:56, 28 June 2009 (UTC)
- (ec)Nobody has crushed anything, Logger9. The article was protected per the protection policy: without my making a determination as to whose version might be "right." The point of protection, as has been explained to you numerous times, is simply to put a stop to edit warring. The protection I have placed on the article is temporary, as was also explained, and has eliminated nothing, as all previous versions can be accessed via the page history.
- As I have also explained to you, Wikipedia is a collaborative, consensus-built, discussion-driven project where, irritating as it may be, things will not always go our way. If you don't want to work in such an environment, or if you prefer not to have your writing ruthlessly edited, then write a book or create your own website about the things you care about.
- To be frank, you're not helping your case. I'm making no judgment on the article content, but while Paula's reverting was problematic enough to earn her a block, your own behaviour is also problematic. Since the article was protected I've seen no effort on your behalf to discuss the problems you see with Glass transition. On a project like Wikipedia you cannot simply make it known that you see no point in engaging in discussion and threaten to leave if people don't like your work. Exploding Boy (talk) 17:46, 28 June 2009 (UTC)
- (ec)I have placed a notice on the Wikiproject:Physics talk page, as well as on the RFC articles talk page and on the Wikiproject:Science talk page. In regards to the above post, if multiple editors come to an agreement that is called consensus; that is what we do here, and what Logger9 needs to do on the article in question. Exploding Boy (talk) 17:46, 28 June 2009 (UTC)
- Regarding the statement about your "original work," I suggest you read Wikipedia:Ownership of articles. Exploding Boy (talk) 17:51, 28 June 2009 (UTC)
- I own nothing here, and I make no claim to. As I said, the work is all yours to do with as you see fit. I mean, it's not like I can take it back, right ? Use it if you choose to. I sincerely hope that you choose to use it for educational purpose :-) And if not, no hard feelings. I have a very full life in the academic arena. I am just trying to help here. -- logger9 (talk) 18:00, 28 June 2009 (UTC)
- How? By blocking her? Exploding Boy (talk) 17:56, 28 June 2009 (UTC)
I'm not here to make a determination about which of you is right, only to prevent the edit war that had taken over that article. What I am telling you is that you can end this edit war and prevent future ones by establishing a consensus about the article's content and presentation on its talk page. Nothing is just accepted: consensus is always required. Whatever consensus is reached may not satisfy you completely; you may feel it's unjust or unnecessary to discuss these issues; but that's how it goes. That a version of the article you dislike happened to be the one that was protected is only coincidence. I urge you to move on. Take your concerns about the article content to its talk page. Build consensus. Exploding Boy (talk) 18:20, 28 June 2009 (UTC)
- I hear you -- but my defense of that work is already there ........Massively. -- logger9 (talk) 18:27, 28 June 2009 (UTC)
- Good. If there is clear consensus, then that should be clear on the article's talk page. Is it? If it's not clear, then it's time to make it clear. Go to the talk page, start a new section about the relevant issue(s), and begin the conversation. Exploding Boy (talk) 18:32, 28 June 2009 (UTC)
I have reacted to an invitation by User:Exploding Boy at Talk:Glass to help mediating this conflict. Upon reading this discussion, I understand the conflict has spread over several pages, but I shall start with Glass transition indicated as the hottest spot. Let me introduce myself first.
- As a trained materials scientist I have worked in several areas, including a few years in a glass research lab. While observing their activities, I was not involved in that research and have never had personal interest in that topic.
- On wikipedia, I have had experience with scientific disputes and their resolution.
- To the best of my knowledge, I have not collaborated with any editor involved here.
- I have not read any of the articles being discussed (except maybe for quick technical cleanup of transparent alumina).
- I am not an administrator and would like to ask User:Exploding Boy to help when administrative advice or action is needed.
- From what I have read about this dispute, I see excessive amount of personal attacks and reverting actions. First thing I propose is to stop that, by all parties and all means, and focus on content discussion at talk pages first. Would anyone who opposes that (e.g. "I'm fed up with talks and will fight anyone" or "Who is this guy to teach me what to do") please speak up. On my side, I pledge to be as objective as I can, trying to improve the content of the discussed pages. Best regards. Materialscientist (talk) 08:20, 29 June 2009 (UTC)
[edit] User:Sikh-history
Civility issues due to the Sunny Leone article. Tabercil, an administrator, had originally removed an assertion that violated WP:BLP.[21] Mr. Sikh-history responded with a revert and a warning of ownership against Tabercil.[22] I revert back and warn Sikh history twice about assuming good faith.[23] He sends me two ownership warnings.[24] What to do... what to do? Morbidthoughts (talk) 20:40, 27 June 2009 (UTC)
- There's no real civility issues - what the three of you have is a content dispute, and at least 2 of you are handling it poorly. Open an WP:RFC for the article in order to get a 3rd opinion. (talk→ BWilkins ←track) 20:55, 27 June 2009 (UTC)
[edit] Halfacanyon accusing me of POV-pushing, lying, etc...
User:Halfacanyon has been assuming bad faith right from the beginning. He is taking my edits very personally and in spite of my long responses to his accusations he continues to be hostile. talk page. dispute at Israeli settlements. Wikifan12345 (talk) 08:40, 29 June 2009 (UTC)
- First, you have not advised the other user of this WQA filing for their response... please do ASAP. Second, this appears to be both a content dispute AND WP:RS dispute that has led to wholesale tit-for-tat "you're a liar, no you're a liar" dispute. (talk→ BWilkins ←track) 11:25, 29 June 2009 (UTC)
- I just wanted to make sure people were aware of Half's accusations before it got out of hand. Half was extremely hostile from the very beginning and as I explained in the ANI, he claimed ALL my edits violated wiki policy. He said I deleted references, removed cited material and then when I explained why that wasn't the case he accused me of POV-pushing, trolling, and demanding I "take a break" because I am incapable of editing fairly. I disagree with the tit-for-tat scenario, though I do see your point. Thanks. Wikifan12345 (talk) 19:55, 29 June 2009 (UTC)
[edit] Talk:Greek love
Some quiet words from third parties on user talk pages is possibly required, here. For more information, see what I wrote at Wikipedia:Administrators' noticeboard/Incidents#Greek love. Uncle G (talk) 14:51, 29 June 2009 (UTC)
- If it's already at AN/I, then there's not really much we can do. Wikiquette alerts is an informal mediation process, but its already up for the admins to take a look at, there's no need for two separate discussions. Fingerz 14:05, 30 June 2009 (UTC)
[edit] User:VitasV
I don't normally do this, rather I try to focus on editing and ignore the bullshit but this user just struck a nerve.
User has an extensive history of warnings and notices on his talk page dating back to 2007 which he promptly removes without reply and no consideration of his actions showing he has not learned anything. So much so that he decided to have a banner at the top of his talk page stating anything he finds "annoying" will be removed. I understand that it's not against policy to blank his talkpage but this user is causing a clear disruption to the project, and still is violating WP:CIVIL. He's already been given a long and civil welcome and plenty of help by a thoughtful user looking to mentor him, which of course he deleted [25]. Given his immature age I can understand him being cut some slack for his actions but this has gone on too long. If he doesn't learn anything and just blanks warnings then goes on his merry way repeating his mistakes then what good is he to the project. I don't have a recommendation on what action to take here, I just thought it prudent that there be some kind of log of his disruption outside of his own talk page. Most of the disruption was done in 2007 but now it seems this user has returned and clearly hasn't learned a thing judging by his recent edits.
Just a few examples of disruption:
Previous Wikiquette alert: [26]
Previous blocks for 3RR violation: [27]
Violating WP:OR and WP:V [28] [29] [30]
Clear trolling: (most recent) [31] [32] [33]
Ownership of articles: [35] [36]
Removing speedy deletion tags: [38]
Reckless editing/Removal of references: [39] [40] [41]
Personal attacks against users: [42] [43]
Stubborn refusal to work with other editors: [46] [47]
Blanking talkpage full of nothing but a whole slew of warnings: [48] [49]
All this plus dozens of uploads of non-free media. -- Ϫ 10:29, 30 June 2009 (UTC)
- Ok, so the "most recent" diff's that you provide are from March. As you probably already know, removal of warnings is tacit acceptance of that warning. I'm not sure what you're looking for from WQA? If you think this is long-term activity, it needs WP:RFC/U. (talk→ BWilkins ←track) 10:35, 30 June 2009 (UTC)
[edit] User:Tfz (Purple Arrow / Purple), User:HighKing and User:Dunlavin Green
I'm reporting personal attacks against me and lack of etiquette from the above-mentioned users with regard to their comments on [[Talk:Military history of Britain|this talk page]]. This is somewhat surprising considering Tfz's stated opinion of "trolling", "respect" and "posturing" (see his talk page).
Diffs:
- "his typical pov-motivated editing" (by User:Tfz)
- "page move was performed by User:Setanta747 who has previous history" (by User:HighKing)
- "According to his homepage he's a British unionist in the northeast of Ireland so that gives us all perspective on the politics behind both changes" (by User:Dunlavin Green)
- "And when you look at the edit history of that editor, Setanta747, you start to understand that the motives were inspired by his anti-Irish/pro-British sentiment and nothing more, and has since left the project" (by User:HighKing)
I have no idea how many more personal attacks against me might have come into being in Wikipedia over the last few months, but I would like to see them, and inability to assume good faith, eradicated.
A note to HighKing: I couldn't possibly be anti-Irish, as I am Irish myself. Just because you may be prejudiced, there is no reason to tar everyone else with your own brush. You clearly know nothing of my "motives". --Setanta 16:38, 30 June 2009 (UTC)
- Until I see both an apology and a full retraction of "Just because you may be prejudiced", I won't even start looking any further. Supposed incivility should not be reported with incivility of its own. (talk→ BWilkins ←track) 20:52, 30 June 2009 (UTC)
[edit] User:Ottava Rima - Incivility and disruptive threats
- Ottava Rima (talk · contribs · deleted contribs · page moves · block user · block log)
- Wikipedia:Featured picture candidates/The Hunting of the Snark[[50]]
- User talk:Ottava Rima#Civility
I'd like to raise Ottava Rima's incivility and threats based on bad faith here in hoping that the user can have an objective evaluation of his inappropriate conduct toward me. The user is known for incivility and the fact that he has been even blocked from Wikipedia Review for his trolling and harassment against editors there. The user has accused me of destroying his FP format and being disruptive because I tried to help him on formatting. That is absurd. I've participated in reviewing images on FPC pages, and I've never seen Ottava Rima active there.
Today, I saw a group of images of The Hunting of the Snark is nominated as a Featured Picture Set, and it had received no review yet. I thought the images take too much space compared to other nominations, so that is why nobody seemed to comment on the images themselves. I suggested him to rearrange it for viewers[51], and one admin agreed with my suggestion.[52] He said if I can rearrange it, do it[53], so I did with time.[54] I even created a new sandbox just for the occasion.[55] Obviously, he did not like my formatting, and kept insisting that my computer is error[56] because he suspected my computer may make the image arrangement look irregular.[57] I said it is not because I bought my computer just a week ago[58], and suggested him to check his own.[59] then he increased his rudeness and attacked me of being disruptive and destroying the format and his nom.[60] Instead of becoming uncivil, he could've just said "I'll revert your edit because that is not what I intended". Then, I would be okay with it since he seems to be too stressed by seemingly his first FP nomination and the fact that none has commented for his nomination.
However, he denounced my intention and action "disruptive" over and over and threaten me to report me if I would not remove our discussion to elsewhere.[61] and said "untruth" that with my computer, the 5 images are shown in horizontal lines.[62][63] I visited his talk page for resolving the issue, and reminded him to be civil, but no fruit. What I can not bear with his serial and false accusations[64] is that he does not assume good faith on my trial to help him. I've got a FP and have participated in reviewing images on here and Commons, so I really do not understand his hostility and accusations. I moved our discussion to its talk page per his request[65], but he continued his incivility and harassment (he said he will seek for me to be blocked if I do not remove my discussion, and his such threats fit for him behavior), so I ask your opinion on this. Thanks.--Caspian blue 02:26, 1 July 2009 (UTC)
- The above user is declaring things as incivil or personal attacks which are clearly not. The above user is also causing problems and has only caused disruption. The above is just more bullying and will only be stopped by Caspian being blocked. Ottava Rima (talk) 03:02, 1 July 2009 (UTC)
- Again, Ottava Rima's another harassment and threats. If you do not retract the absurd personal attacks, well, your bullying and harassing behavior should earn "block".--Caspian blue 03:15, 1 July 2009 (UTC)
- Per NPA, your false accusation of me making a personal attack is a personal attack and a violation of NPA. This is the third time tonight you have violated policy. Ottava Rima (talk) 03:18, 1 July 2009 (UTC)
- The basic cause of this disagreement seems to be the fact that Ottava doesn't really understand what the word "row" means, and confuses it with "column". These sorts of misunderstandings can be very annoying, but taking them to the level of a WQA is also annoying. Instead of pursuing this, both editors should now go away and come back tomorrow, after having a chance to calm down. Looie496 (talk) 03:21, 1 July 2009 (UTC)
- Not even close, Looie. The basic problem was that it was formatted fine and fit in WP:FPC and his formatting caused a massive disruption. He then caused disturbance because he chose to rather bicker about something that had nothing to do with reviewing pictures and then started attacking me on multiple pages to further the disruption. And calm down? I am completely calm, so your comments are absurd. Ottava Rima (talk) 03:24, 1 July 2009 (UTC)
- A good faith edit should never be described as a "massive disruption." I agree with Looie, this shouldn't have made it to WQA. Soxwon (talk) 03:26, 1 July 2009 (UTC)
- Disruption is a result. It has nothing to do with intentions. You can accidentally delete the main page and it will still be a massive disruption. Ottava Rima (talk) 03:31, 1 July 2009 (UTC)
- You're being disruptive, so you can not accept good faith helping of mine and I think the format looks more tidy than yours that unnecessarily takes too much space.--Caspian blue 03:48, 1 July 2009 (UTC)
- I strongly disagree with the absurd allegation of "disruption" against me. I've helped out formatting nominations with multiple images on FPC and they appreciated my help unlike Ottava Rima's bad faith. You said I can rearrange them, and the poem is "non-sense poem", so I did it just for you. I clearly did under your permission. However, since Ottava Rima has threaten me to be blocked repeatedly for his unreasonable disliking of my formatting, I have no hope that he behaves "civil" to me. For DR, I've brought the problem of Ottava Rima to the right place. If I demand a block for his threats and harassment, well I'd have gone to AN/I instead of reporting here. Ottava Rima's bad faith accusation is already pointed out by others here.---Caspian blue 03:35, 1 July 2009 (UTC)
- It is 100% obvious to anyone who looks at WP:FPC while Caspian blue's three column edit was in place that the page was disrupted because of the formatting. And you can be called disruptive without AGF at all, as disruption is a result of action and has nothing to do with intent. Just like NPA and CIVIL, you have misstated AGF. These are serious problems. Ottava Rima (talk) 03:39, 1 July 2009 (UTC)
- That is your "wishful thinking". You're indeed continuing to violate NPA and CIVIL and AGF. Your first contribution to FPC is just like this, I believe you would repeat this seriously rude behavior to reviewers.--Caspian blue 03:44, 1 July 2009 (UTC)
- Enough, could you both please just take a few hours to do just cool down? This is really much ado about nothing. Soxwon (talk) 03:45, 1 July 2009 (UTC)
- Ottava Rima's threats are not worthy to report?--Caspian blue 03:48, 1 July 2009 (UTC)
- I'm thinking of selling tickets to watch this little row. :) Baseball Bugs What's up, Doc? carrots 03:50, 1 July 2009 (UTC)
- C'mon bugs...As for Blue, why should you care if his threats are unjustified? I don't really see anything worth taking offense over from either of you until the discussion had degenerated to the point that it was nothing but slinging accusations back and forth. Soxwon (talk) 03:54, 1 July 2009 (UTC)
- Bugs, you love me so much, but didn't I say what allergy I have? :) At Soxwon, if he clearly had said "he dislikes my format", then I'd be more than fine, but he insisted on my computer wrong, so my viewership and edits being "disruptive". I'd recommend you to be familiar with FPC more because providing "better presentation" is also a responsibility of nominators. However, since Ottava Rima started the disruptive behavior and continued so, the report is warranted. Also on DYK areas, I've seen the "same behavior" of Ottava, so I would more give a credit to others' assessment on Ottava Rima.--04:05, 1 July 2009 (UTC)
- This row seems to be about the layout of the illustrations. Under OR's arrangement, I'm seeing 2 per row, and they fit my screen, which is 1024 x 768. Under CB's arrangement, I have to scroll to the right. That would indicate that OR's arrangement is better. Baseball Bugs What's up, Doc? carrots 04:07, 1 July 2009 (UTC)
Yea this does come down to a misunderstanding (row ≠ column), and is quite petty. Not really a problem if you both can agree to just walk away from it, which is what I would suggest (for the betterment of the project?). One user tried to "fix" a problem, and it didn't really work out well. Good faith says thanks but no thanks. Oh well, shit happens. I still think a good compromise would be to use <gallery>...</gallery>; it would keep the size of the nom reasonable and easy to follow. Now I would suggest everyone goes along and does something productive. Buenos noches. wadester16 04:17, 1 July 2009 (UTC)
- The gallery approach would seem to be better, seeing as how this involves 10 large illustrations. If they want to show them larger, though, OR's approach is better. Baseball Bugs What's up, Doc? carrots 04:33, 1 July 2009 (UTC)
- At wadester, thank you for the input and thoughtful meditation. I assumed Ottava Rima would not prefer using <gallery> over his own format and image reducing, so I tried to let the image size as it is and to make a flow by using a table and complex hidden tags. The literary work is non-sense poem, so I thought "strict numbering of the images" is not really demanded. My computer screens are at default fixed in 1280 x 800 pixel, and I asked him a screen size, but I rather got uncivil responses in return. If Ottava's computer is fixed in smaller size, his viewing would be different than mine. Anyway, I rather would like to choose "disengagement" since our mentality is so different.--Caspian blue 04:43, 1 July 2009 (UTC)
[edit] User:Snowded
This user has been of concern on both Wikipedia's article for Anglophobia and the article's talk page. In enforcing his own personal opinions, he has violated Wikipedia's WP:NPA policy as well as Wikipedia's WP:NPOV policy. He has taken it upon himself to try and make the Anglophobia article his own personal soapbox for relations between England and Wales, and has frequently clashed with the main editor of the Anglophobia article, BillMasen.
Both I and BillMasen have attempted to placate Snowded, but to no avail, and his attitude has irritated me to the point where I now feel that action must be taken to put him back in line. Crablogger (talk) 05:10, 1 July 2009 (UTC)
- "Put him back in line"? You're looking for punishment? (talk→ BWilkins ←track) 13:32, 2 July 2009 (UTC)
- personal attack diffs please? we can't do anything about content disputes, if this rooted in a dispute about neutrality try a rfc or third comment and then mediation if necessary. --neon white talk 14:08, 2 July 2009 (UTC)
[edit] The Sceptical Chymist
I'm under the impression that User:The Sceptical Chymist has been previously reported here (or perhaps to ANI) for uncivil and unwelcoming comments much like these recent ones:
- "Please return to RL to your NIH or wherever you work and boss people around over there."
- "Here we go again denying evident facts and obscuring them by WP:TLDR."
These particular comments are in the context of the content dispute at Benzodiazepine, but discouraging participation by other editors and insulting editors does not really help us resolve the dispute. I'm not convinced that the behavior is pervasive enough to justify an RfC/User, but I would be happy to hear other opinions on the matter. WhatamIdoing (talk) 17:30, 1 July 2009 (UTC)
- I think that WhatamIdoing cannot claim moral high ground here. Her participation in the dispute at benzodiazepine has not been productive. She limited herself mostly to criticizing other editors. In response to my suggestion to help with content [66] she refused to help and refused to even appear fair.[67] In the same comment she described me, without any provocation, as playing "a childish "even-Steven" games, chided me for not "doing a stellar job of working on content" (remember, while refusing to help) and for behaving "worse when you don't feel like a "parent" or "teacher" is looking over your shoulders to make sure that you're doing your work". The Sceptical Chymist (talk) 23:49, 1 July 2009 (UTC)
- Actually, when one uses the handy-dandy search box at the top of either WP:WQA or WP:ANI, one can find out that they were never brought here to WQA, and were the actual filer of an incident at ANI. Please have a look at the results of that ANI, and both parties should continue having learned. (talk→ BWilkins ←track) 12:25, 3 July 2009 (UTC)
[edit] User:NRen2k5 is a bully
[edit] Abuse of template?
Java While Loops, Do-While Loops, and Infinite Loops
So you’ve just started learning Java, you’ve built your first Hello World program, and you’re feeling like a pro. But if you want your programs to do more and be more, you have to learn how to use loops.
There are three kinds of loop statements in Java, each with its own benefits – the while loop, the do-while loop, and the for loop. Loop mechanisms are useful for repeatedly executing blocks of code while a boolean condition remains true, a process with a vast number of applications across all types of software programming.
To understand the distinct uses of each loop statement, let's take a look at the simple while loop. If you want a more in-depth, beginner-friendly guide to learning Java, check out this tutorial for Java programming basics.
Syntax of the While Loop
The syntax of a while loop is as follows:
while (BooleanExpressionHere) {
    YourStatementHere
}
The while statement will evaluate the boolean expression within the parentheses, and continue to execute the statement(s) within the curly braces as long as the expression is true.
This is the most important characteristic to note: the entire loop body will be skipped if the expression is not true when it is first evaluated.
Keep this in mind for later when we examine the do-while loop. For now, let’s check out the while loop in action.
Example of a While loop
Let’s say you want to create a program that will count from 1 to 10, using a while loop.
public class WhileLoopExample {
    public static void main(String args[]) {
        int num = 0;
        System.out.println("Let's count to 10!");
        while (num < 10) {
            num = num + 1;
            System.out.println("Number: " + num);
        }
        System.out.println("We have counted to 10! Hurray!");
    }
}
This is what your program will look like, and this is what it will return:
Let's count to 10!
Number: 1
Number: 2
Number: 3
Number: 4
Number: 5
Number: 6
Number: 7
Number: 8
Number: 9
Number: 10
We have counted to 10! Hurray!
What’s Going on Here?
Before we even open the loop, we need a variable for the boolean expression to test. Because we want to count to 10, we create an int – named num in this example – and set its initial value to 0. We then have the program print the string, "Let's count to 10!"
int num = 0;
System.out.println("Let's count to 10!");
From here, we open our while loop using the syntax we talked about before. Our goal is to increase the value of num to 10, one number at a time, before closing the loop. To do this, we set our boolean expression to num < 10. As long as the value of num is less than 10, it will continue executing the statements within the loop.
Those statements are num = num + 1, and a string that prints the word “Number:” followed by the current value of num after each execution.
while(num < 10) { num = num + 1; System.out.println("Number: " + num); }
What the program is doing is repeatedly checking the current value of num, adding 1, and printing its new value, before starting the process over again, and finally ending once the value of num is 10.
Once the loop is closed, it moves on to the next statement, which is a string that reads, “We have counted to 10! Hurray!” The program moves onto this next line because the boolean expression in the while loop above is no longer true, and so the while loop has closed.
Some While Loops Never Run
It’s possible for the loop body to never run at all, if the conditions are so that the boolean was either never true, or instantly untrue.
For instance, if the initial value of num is 0 and our boolean expression is num > 10, instead of num < 10, it is already false from the start, because 0 will never be greater than 10.
public class WhileLoopExample { public static void main(String args[]) { int num = 0; System.out.println("Let's count to 10!"); while(num > 10) { num = num + 1; System.out.println("Number: " + num); } System.out.println("We have counted to 10! Hurray! "); } }
Our while loop will evalute the boolean expression, num > 10, find that it is untrue, and print:
Let's count to 10! We have counted to 10! Hurray!
The Do-While Loop
The syntax of a do-while loop is very similar to the while loop, with one significant difference – the boolean expression is located at the end of the loop, rather than at the beginning. This means the statements in the loop body will execute one time, before the boolean expression is evaluated.
public class DoWhileLoopExample { public static void main(String args[]){ int num = 0; do{ System.out.println("Number: " + num ); num = num + 1; }while( num < 10 ); } }
What this returns is:
Number: 0 Number: 1 Number: 2 Number: 3 Number: 4 Number: 5 Number: 6 Number: 7 Number: 8 Number: 9
The main noticeable difference between what our first while loop returned and what this do-while loop returns is that our do-while loop counts from 0. This is because our do-while statement prints the initial value of num once before adding to it, evaluating the boolean, and then starting over.
This is also why it stops at 9, instead of 10, like our first while loop – once the value of num is 9 at the beginning of the loop, the statement num = num + 1 makes it 10, rendering the boolean expression num < 10 untrue, thus closing the loop before it can print the next value.
You can read a more in-depth guide on how do-while loops work here.
Getting Stuck in an Infinite Loop
One of the most common errors you can run into working with while loops is the dreaded infinite loop. You risk getting trapped in an infinite while loop if the statements within the loop body never render the boolean eventually untrue. Let’s return to our first example.
Before, our statement num = num + 1 continually increased the value of num until it was no longer less than 10, rendering our boolean expression num < 10 untrue, and closing the loop – great success!
However, what if we had accidentally written num = num – 1 within the while loop?
public class WhileLoopExample { public static void main(String args[]) { int num = 0; System.out.println("Let's count to 10!"); while(num < 10) { num = num - 1; System.out.println("Number: " + num); } System.out.println("We have counted to 10! Hurray! "); } }
This would continue subtracting 1 from num, down into the negative numbers, keeping its value less than 10, forever. This is an infinite loop because our boolean will always remain true, meaning our program will continue to run it with no end in sight, unless we fix it.
This has been a basic tutorial on while loops in Java to help you get started. If you’re starting to envision yourself in a long and fruitful career coding in Java, check out this guide to Java-based interviews and their most common questions. If you still have a lot to learn, dive in with the ultimate Java tutorial for beginners.
Recommended Articles
How to Convert Integers to Strings in Java
Understanding Keyword Super: Java Tutorial
Java Threads : Fundamentals of Multithreading
Java Game Programming for Fun and Profit
Top courses in Java
Java students also learn
Empower your team. Lead the industry.
Get a subscription to a library of online courses and digital learning tools for your organization with Udemy for Business. | https://blog.udemy.com/java-while-loop/ | CC-MAIN-2022-33 | refinedweb | 1,286 | 70.33 |
Note
This tutorial builds on topics covered in part 1, part 2, and part 3. It is recommended that you begin there.
This part of the tutorial will show how to use salt's
file_roots
to set up a workflow in which states can be "promoted" from dev, to QA, to
production.
Salt's fileserver allows for more than one root directory per environment, like in the below example, which uses both a local directory and a secondary location shared to the salt master via NFS:
# In the master config file (/etc/salt/master) file_roots: base: - /srv/salt - /mnt/salt-nfs/base
Salt's fileserver collapses the list of root directories into a single virtual
environment containing all files from each root. If the same file exists at the
same relative path in more than one root, then the top-most match "wins". For
example, if
/srv/salt/foo.txt and
/mnt/salt-nfs/base/foo.txt both
exist, then
salt://foo.txt will point to
/srv/salt/foo.txt.
Note
When using multiple fileserver backends, the order in which they are listed
in the
fileserver_backend parameter also matters. If both
roots and
git backends contain a file with the same relative path,
and
roots appears before
git in the
fileserver_backend list, then the file in
roots will
"win", and the file in gitfs will be ignored.
A more thorough explanation of how Salt's modular fileserver works can be found here. We recommend reading this.
Configure a multiple-environment setup like so:
file_roots: base: - /srv/salt/prod qa: - /srv/salt/qa - /srv/salt/prod dev: - /srv/salt/dev - /srv/salt/qa - /srv/salt/prod
Given the path inheritance described above, files within
/srv/salt/prod
would be available in all environments. Files within
/srv/salt/qa would be
available in both
qa, and
dev. Finally, the files within
/srv/salt/dev would only be available within the
dev environment.
Based on the order in which the roots are defined, new files/states can be
placed within
/srv/salt/dev, and pushed out to the dev hosts for testing.
Those files/states can then be moved to the same relative path within
/srv/salt/qa, and they are now available only in the
dev and
qa
environments, allowing them to be pushed to QA hosts and tested.
Finally, if moved to the same relative path within
/srv/salt/prod, the
files are now available in all three environments.
See here for documentation on how to request files from specific environments.
As an example, consider a simple website, installed to
/var/www/foobarcom.
Below is a top.sls that can be used to deploy the website:
/srv/salt/prod/top.sls:
base: 'web*prod*': - webserver.foobarcom qa: 'web*qa*': - webserver.foobarcom dev: 'web*dev*': - webserver.foobarcom
Using pillar, roles can be assigned to the hosts:
/srv/pillar/top.sls:
base: 'web*prod*': - webserver.prod 'web*qa*': - webserver.qa 'web*dev*': - webserver.dev
/srv/pillar/webserver/prod.sls:
webserver_role: prod
/srv/pillar/webserver/qa.sls:
webserver_role: qa
/srv/pillar/webserver/dev.sls:
webserver_role: dev
And finally, the SLS to deploy the website:
/srv/salt/prod/webserver/foobarcom.sls:
{% if pillar.get('webserver_role', '') %} /var/www/foobarcom: file.recurse: - source: salt://webserver/src/foobarcom - env: {{ pillar['webserver_role'] }} - user: www - group: www - dir_mode: 755 - file_mode: 644 {% endif %}
Given the above SLS, the source for the website should initially be placed in
/srv/salt/dev/webserver/src/foobarcom.
First, let's deploy to dev. Given the configuration in the top file, this can
be done using
state.apply:
salt --pillar 'webserver_role:dev' state.apply
However, in the event that it is not desirable to apply all states configured
in the top file (which could be likely in more complex setups), it is possible
to apply just the states for the
foobarcom website, by invoking
state.apply with the desired SLS target
as an argument:
salt --pillar 'webserver_role:dev' state.apply webserver.foobarcom
Once the site has been tested in dev, then the files can be moved from
/srv/salt/dev/webserver/src/foobarcom to
/srv/salt/qa/webserver/src/foobarcom, and deployed using the following:
salt --pillar 'webserver_role:qa' state.apply webserver.foobarcom
Finally, once the site has been tested in qa, then the files can be moved from
/srv/salt/qa/webserver/src/foobarcom to
/srv/salt/prod/webserver/src/foobarcom, and deployed using the following:
salt --pillar 'webserver_role:prod' state.apply webserver.foobarcom
Thanks to Salt's fileserver inheritance, even though the files have been moved
to within
/srv/salt/prod, they are still available from the same
salt:// URI in both the qa and dev environments.
The best way to continue learning about Salt States is to read through the reference documentation and to look through examples of existing state trees. Many pre-configured state trees can be found on GitHub in the saltstack-formulas collection of repositories.
If you have any questions, suggestions, or just want to chat with other people who are using Salt, we have a very active community and we'd love to hear from you.
In addition, by continuing to the Orchestrate Runner docs, you can learn about the powerful orchestration of which Salt is capable. | https://docs.saltstack.com/en/latest/topics/tutorials/states_pt4.html | CC-MAIN-2017-22 | refinedweb | 868 | 56.15 |
A year ago i was watching a presentation by Dave Kennedy (ReL1k) and Josh Kelly called: “PowerShell…omfg” the presentation shows multiple techniques that are very very useful during a pentest. After viewing the video I realized i could make a small addition to a phishing attack I use the pretext is simple: I e-mail a client an office document containing very important data they should NOT have received. People are very curious and tend to open files containing data they should not have received. The office document is protected using my encryption preparation utility (Patent pending).
In order to decrypt the data macro’s should be enabled. I think you are getting the point right around here, the macro downloads my own backdoor to %TEMP% and executes the file. There is a BIG downside to that approach: most anti-virus alert or block on files being executed from the %TEMP% folder. This does not mean it doesn’t work it just means there is a chance of failure. And with phishing you get one chance so it needs to count!
Watching the presentation I got excited hearing that PowerShell is able to import functions from any DLL, this allows you to use functions like VirtualAlloc, memset and CreateThread. This will allow you to allocate executable memory, fill it with your program and execute it. This means that we can use a simple PowerShell script to inject any backdoor we like into memory just like syringe.c. The big difference is no compiled code, every Windows version above Vista has PowerShell built in. But there is more! PowerShell is usually allowed to execute because the administrators use it for their scripts so no worries about application-white listing. And best of all i haven’t had any anti-virus alarming my victims, ruining my phishing attack.
TrustedSec released a script called unicorn.py which does all of the heavy lifting for you it uses MetaSploit to generate a Meterpreter and prepares a PowerShell script that injects and executes the Meterpreter. It even generates a Metasploit script for you which sets up the payload handler. While this is very nice a user is not likely to trust a PowerShell script they are fare more accustomed to Office documents. VBA allows access to the windows shell via the Shell() function, from here we can call PowerShell in order to execute our Meterpreter. There are some things to work around though, one of the most important things is that VBScript has a maximum line length. I have modified the original unicorn.py script in order to output an entire macro.
The full script is available from my github repository. If you are using this script against targets in remote environments I would advise on using the windows/meterpreter/reverse_https this makes detecting/inspecting your connection much harder. I am now using the script during my phishing expeditions 🙂 It works allot better than just dropping an executable.
Stealing Credentials
Another great example of the power of PowerShell was displayed by enigma0x3, he created a script that asks the user for his username and password. The great thing is it looks like the actual login screen Winddows uses (probably because it is). This attack can be used in the scenario discussed above but is also great in post exploitation Wesley Neelen created a Metasploit module for this. The first time I saw the script I thought this would make a nice addition to my meterpreter injection, because I have a shell let’s have the credentials aswell!
I modified the script by engima0x3 and added a function to send the credentials to my own server, this makes the script usable for my macro. The modified script is available from my GitHub repo.
In order to get this into an easily handle-able format you can use the following one-liner to convert your PowerShell script to a base64 encoded string in order to exectue our script as a one-liner.
import base64, sys; print base64.b64encode(open(sys.argv[1]).read().encode('utf_16_le'))
The output can be executed using the PowerShell command:
powershell -win hidden -enc base64encodedscript
This all adds up to the following macro:
Now let’s add this to my Excel macro and we are ready for some phishing!
That’s all folks! If you have any questions, find me on Twitter: Follow @rikvduijn | https://d.uijn.nl/2015/02/15/powershell-pentesting/ | CC-MAIN-2019-43 | refinedweb | 732 | 61.16 |
Any chance to get that update as an 2.5.x binding update?
I updated to OH 3.1.0.M3 today.
And now the 3.1.0-20210208.jar is not working any more.
2021-04-02 10:41:32.254 [WARN ] [org.apache.felix.fileinstall ] - Error while starting bundle: file:/openhab/addons/org.openhab.binding.unifi-3.1.0-20210208.jar org.osgi.framework.BundleException: Could not resolve module: org.openhab.binding.unifi [208] Unresolved requirement: Import-Package: org.apache.commons.lang; version="[2.6.0,3.0.0)" at org.eclipse.osgi.container.Module.start(Module.java:444) ~[org.eclipse.osgi-3.12.100.jar:?] at org.eclipse.osgi.internal.framework.EquinoxBundle.start(EquinoxBundle.java:383) ~[org.eclipse.osgi-3.12.100.jar:?] at org.apache.felix.fileinstall.internal.DirectoryWatcher.startBundle(DirectoryWatcher.java:1260) [bundleFile:3.6.4] at org.apache.felix.fileinstall.internal.DirectoryWatcher.startBundles(DirectoryWatcher.java:1233) [bundleFile:3.6.4] at org.apache.felix.fileinstall.internal.DirectoryWatcher.doProcess(DirectoryWatcher.java:520) [bundleFile:3.6.4] at org.apache.felix.fileinstall.internal.DirectoryWatcher.process(DirectoryWatcher.java:365) [bundleFile:3.6.4] at org.apache.felix.fileinstall.internal.DirectoryWatcher.run(DirectoryWatcher.java:316) [bundleFile:3.6.4]
With OH 3.1.0.M2 it works fine.
Is there anybody working on this plugin? I have some issues during the setup of the binding:
I am using OH2.5 and have a very strange bug, as soon as I restart OH the connection to unifi is some how lost. OH states that the connection to the controler and the wifi client is “online” but OH does not get updates from the controler (i.e. if a wifi client is online it is not reported in OH). However, if I create a new wifi client thing, with the same mac number it works, and OH correctly reports that the client is online - so I have two things, with identical mac numbers, reporting different things. This results in that after a restart of OH I have to delete all the wifi clients and thereafter recreate them, then all clients are correctly reported in OH. Not very convinent.
Can I please get some clarification if anyone has the Unifi Binding working with a UDM Pro?
I see the GitHub Issue here was last discussed in January with no further update and above, while people mention it, I cannot find anything to say that they actually have it working.
@mgbowman, from what I can tell, is the main author but hasn’t mentioned this since his return in February and @pagm seems to have done some work on it but I can’t determine whether this was simply cloudkey configuration or actually relates to the UDM Pro
If you have it working can you please post what version of OH and the binding you are using?
Any info greatly appreciated since I broke my presence detection yesterday with the new UDM Pro install and now the coffee machine doesn’t turn on
Cheers, Tim
I don’t have a UDM but I have been watching for future development of this binding. Have you seen this thread around UDM? UniFi Protect Binding (Cloudkey gen2+, Dream Machine Pro, NVR)
Unfortunately @swamiller, your link is to a totally different binding aimed at UniFi Protect video camera monitoring rather than this binding which provides UniFi Network device presence.
The confusion probably lies around the fact that both the Network and Protect modules run on a UDM Pro and these bindings can co-exist in OH to provide different functionality from them.
Definitely not working in OH3.0
UDM Pro and the UniFi Binding
I really, really want this binding to work with a UDM Pro so that I can get my phone presence detection back but I literally know nothing about Java development so fishing around in the binding code is beyond me.
But I have hacked together both a Powershell script and a Python script below that both work and return what I need to know - so I’m hoping that someone can take this and utilise it to add a UDM flag to the binding. Perhaps?
Powershell:
$udm = "" $staticURL = "/proxy/network/api/s/default" $epLogin = $udm + "/api/auth/login" $auth = @{ "username"= "myUsername" "password"= "myStrongPassword" } $authResponse = Invoke-WebRequest -Uri $epLogin -Method Post -Body $auth -SessionVariable ubnt -SkipCertificateCheck If ($authResponse.StatusCode -ne 200) { Write-Host "Authorization Failure" Exit } $authHeaders = New-Object "System.Collections.Generic.Dictionary[[String],[String]]" $csrfToken = $authResponse.Headers.'X-CSRF-Token' $authHeaders.Add("X-CSRF-Token", $csrfToken) $authHeaders.Add("Content-Type","application/json") # Endpoints from # List of all _active_ clients on the site $statusURI = $udm + $staticURL + "/stat/sta" (Invoke-RestMethod -Method get -header $authHeaders -uri $statusURI -WebSession $UBNT -SkipCertificateCheck).data | Select hostname, latest_assoc_time, mac, ip
Python3:
import requests import json import time import urllib3 #Disable warning of self-signed SSL Cert urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) udm = "" staticURL = "/proxy/network/api/s/default" epLogin = udm + "/api/auth/login" auth = {'username':'myUsername', 'password':'myStrongPassword'} s = requests.Session() resp = s.post(epLogin, data=auth, verify=False) if resp.status_code != 200: exit() statusURI = udm + staticURL + "/stat/sta" clients = json.loads(s.get(statusURI).text) for client in clients["data"]: if 'hostname' in client: hostname = client["hostname"] else: hostname = "hostname unknown" lastseen = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(client["last_seen"])) mac = client["mac"] ip = client["ip"] print(hostname + ", " + ip + ", " + mac + ", " + lastseen)
Cheers, Tim
After upgrade to OH3 I am missing “wired clients” and “site” support in this binding.
Is there any plans to get wired clients support in OH3?
Thanks,
From what I can tell, it kinda looks like both wired clients and site support were part of a branch that was never merged to main. This update added support for the latest authentication method for Unifi OS, but didn’t include the wired clients or site support in that feature branch. It also appears this is not being actively worked on so unless you want to hack it together like I’m doing, you’re SOL.
I don’t know if this is the right place to ask!
OpenHAB 3.1.0.M3
Unifi contoler 6.0.28 (on a Linux VM)
UniFi Binding latest
Has anyone managed to get this binding up and running for the use of presence detection?
Mark, thanks for info. I am not programmer, so I can’t hack this binding…
I have 3.1.0 M3 with the UniFi binding to match (not sure if by latest you mean a CI / snapshot version) hitting Unifi controller 6.0.45, doing wireless presence detection and providing data just fine.
I remember it stopped working when I went to M1 but I can’t remember what I did to fix it.
I do also have 3.1.0 M3 with the UniFi binding.
I connect to an Unifi controller 6.0.45 that is running on docker.
I’m also doing presence detection and it’s working fine.
My current setting looks like this:
Bridge unifi:controller:home "UniFi Controller" [ host="XXX.XXX.X.X", port=8443, username="abcdef", password="dontsayit", refresh=10, unifios="false" ] { Thing wirelessClient simonsPhone "Simon's Phone" [ cid="c2:e0:b5:9a:35:a0", site="", considerHome=180 ] Thing wirelessClient katrinsPhone "Katrin's Phone" [ cid="66:6a:a9:2f:d7:c1", site="", considerHome=180 ] Thing wirelessClient miasPhone "Mia's Phone" [ cid="8c:f5:a3:d2:7d:7d", site="", considerHome=180 ] Thing wirelessClient jurisPhone "Juri's Phone" [ cid="50:2e:5c:d0:be:50", site="", considerHome=180 ] Thing wirelessClient mariannesPhone "Marianne's Phone" [ cid="88:11:96:a3:fa:0d", site="", considerHome=180 ] }
That’s good to know. Then it is not for nothing that I am trying to find out why it does not work for me yet.
How good of you to share a working configuration
I just updated openHAB from 3.0.1 to 3.1.0.M3 with UniFi controller 6.0.43 (on a CKG2+) and the binding is also working for me. I don’t know when they added this, but in the UniFi controller thing configuration there is a UniFi OS toggle. Once this option is enabled, the binding works smoothly (I think I restarted the binding once but it may just be me doing something else weird).
You may also want to double check that the toggle wasn’t overlooked like I did.
I thought UniFi OS was for Cloud Key or Dream Machine type devices?
I have the toggle switched off and all is working. I configured it through the UI so have no config to share though.
Correct me if I’m wrong here, but I believe UniFi OS is on all the newer firmware versions. The way I distinguished the UniFi OS on my system was the controller local address changed from 192.168.xxx.xxx:8443 to 192.168.xxx.xxx/network which broke the UniFi binding back in OH2.
The UniFi Protect binding documentation has a short description of the UnFi OS:
Supported hardware
- UniFi Protect Cloud Key Gen2+ Firmware >= 2.0.18
- UniFi Dream Machine / Pro
- Any UniFi Protect Camera G3 and G4
- UniFi NVR
- UniFi Doorbell. | https://community.openhab.org/t/ubiquiti-unifi-binding-feature-discussion/14520/1156 | CC-MAIN-2022-40 | refinedweb | 1,525 | 56.55 |
A "protected-private" keyword?
One of the features requested in the J2SE 6.0 (Mustang) forum was a modifier that would allow a member variable to be inherited as if it were protected, but not to be accessed by the other classes from the package. This would be useful when a protected member variable must always be accessed by its getter and setter, because these have side-effects, and one wants to avoid accidentally accessing it directly.
Are we going to get such a keyword, whatever it would be named? Most likely not. Adding a keyword creates way too many problems, the most important one being that it breaks existing code that uses the new keyword as an identifier. But we can achieve the same goal with an aspect.
Let the concerned class be:
package com.foo;
public class SomeClass
{
/* don't access var directly, must call getter and setter! */
protected Type var;
public Type getVar()
{
... side effects here ...
return var;
}
public void setVar(Type v)
{
var = v;
... side effects here ...
}
}
Firstly, we would like to prevent calls like
someClass.var = ...from classes like
com.foo.SomeOtherClass. Here's how to achieve this:
package com.foo;
public aspect Protection
{
pointcut directSet(SomeClass t, Type a) :
set(Type SomeClass.var) && within(com.foo.*) && !within(com.foo.SomeClass)
&& target(t) && args(a);
void around(SomeClass t, Type a) : directSet(t, a)
{
t.setVar(a);
}
}
The Pointcut
First we declare a pointcut with the name
directSetand the parameters
t(of type
SomeClass, for the target) and
a(of type
Type, for the argument). It is to pick the joinpoints where
SomeClass.varis modified directly from inside the package, but we still want to allow
SomeClassitself to modify
var- otherwise we would get an infinite loop within
setVar()! The pointcut designator
set([modifiers] type x)stands for the direct setting of
x, i.e. any asignment in the form
x = ...;. We could also use * instead of Type in this case, because the member variable name is unique within one class of course. The pointcut designator
within(AClass)is probably clear.
What about
target()and
args()? Are these limiting the selected set of joinpoints? They could be, but they are not in this case. If the target where not already defined to be
SomeClass(e.g. if we had used a wildcard in the
set()joinpoint, like
set(com.foo.*.var)), this would limit the choices to only those joinpoints where the target is of type
SomeClass. Also the argument could be limited like this, but of course we want to catch all assignments of
varto an object of type
Type, not only to some specific subtype of it, and everything else would not make sense anyway. But this is not why we are using
target()and
args()here,
target(SomeClass)and
args(Type)would suffice for this. We need
target()and
args()to bind the formal parameters of the pointcut. And these parameters we need to expose so that we can use them in the advice later on.
The Advice
There are three types of advice: before, after and around advice. Around advice is executed instead of the pointcut. If our pointcould selected a method, the around advice would need the same return type and declared exceptions as the method, while before and after advice do not have return types. For a setter, the return type is
void, and for the getter it would obviously be
Type. With
proceed(theTarget, arg0, arg1, ...)one could execute the original joinpoint (like method, constructor, get or set), which is not what we want in this case evidently.
Can you figure out the pointcut and advice you need to add for catching the direct get, too? You can check for the solution below.
package com.foo;
public aspect Protection
{
pointcut directSet(SomeClass t, Type a) :
set(Type SomeClass.var) && within(com.foo.*) && !within(com.foo.SomeClass)
&& target(t) && args(a);
pointcut directGet(SomeClass t) :
get(Type SomeClass.var) && within(com.foo.*) && !within(com.foo.SomeClass)
&& target(t);
void around(SomeClass t, Type a) : directSet(t, a)
{
t.setVar(a);
}
Type around(SomeClass t) : directGet(t)
{
return t.getVar();
}
}
- Login or register to post comments
- Printer-friendly version
- monika_krug's blog
- 1928 reads | https://weblogs.java.net/blog/monika_krug/archive/2005/01/a_protectedpriv.html | CC-MAIN-2015-32 | refinedweb | 704 | 58.48 |
So far in this chapter, we've been dealing with C extension modules: flat function libraries. To implement multiple-instance objects in C, you need to code a C extension type, not a module. Like Python classes, C types generate multiple-instance objects and can overload (i.e., intercept and implement) Python expression operators and type operations. In recent Python releases, C types can also support subclassing just like Python classes.
One of the biggest drawbacks of types, though, is their size. Example 22-16 presents a C string stack type implementation, but with the bodies of all its functions stripped out. For the complete implementation, see this file in the book's examples distribution.
This C type roughly implements the same interface as the stack classes we met in Chapter 20, but it imposes a few limits on the stack itself. The stripped parts use the same algorithms as the C module in Example 22-15, but they operate on the passed-in self object, which now refers to the particular type instance object being processed, just as the first argument does in class methods. In types, self is a pointer to an allocated C struct that represents a type instance object.
Please note that the C API is prone to frequent changes, especially for C extension types. Although the code of this book's stack type example has been updated and retested for each edition, it may in fact not completely reflect current practice by the time you read these words.
Even as is, although it works as shown, this example does not support new, advanced C type concepts such as support for subclassing. Because this is such a volatile topic, the example was almost cut from this edition completely, but was retained in abbreviated form just to give you a sampling of the general flavor of C types. To code types of your own, you will want to explore additional resources.
For more up-to-date details on C types, consult Python's now thorough Extending and Embedding manual. And for more complete examples, see the Objects directory in the Python source distribution tree: all of Python's own datatypes are merely precoded C extension types that utilize the same interfaces and demonstrate best-practice usage better than the static nature of books allows.
Of special interest, see Python 2.4's Objects/xxmodule.c for example C type code. Type descriptor layouts, described shortly, are perhaps the most prone to change over time; consult the file Include/object.h in the Python distribution for an up-to-date list of fields. Some new Python releases may also require that C types written to work with earlier releases be recompiled to pick up descriptor changes.
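As one concrete illustration of that drift, more recent CPython releases typically declare the type descriptor with C99 designated initializers, so only the slots actually used need to be spelled out. The fragment below is a hedged sketch, not compiled here; it assumes Python.h and the names from Example 22-16, and exposes the method table through the tp_methods slot rather than a handwritten tp_getattr:

```c
/* Modern-style descriptor sketch (assumes Python.h and Example 22-16's
   stackobject, stack_dealloc, stack_as_sequence, stack_methods);
   unlisted slots default to 0, and PyVarObject_HEAD_INIT replaces the
   older PyObject_HEAD_INIT + ob_size pair. Illustrative only. */
static PyTypeObject Stacktype = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "stacktype.stack",
    .tp_basicsize = sizeof(stackobject),
    .tp_dealloc = (destructor) stack_dealloc,
    .tp_as_sequence = &stack_as_sequence,
    .tp_flags = Py_TPFLAGS_DEFAULT,
    .tp_methods = stack_methods,
};
```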
Finally, if it seems like C types are complex, transitory, and error prone, it's because they are. Because many developers will find higher-level tools such as SWIG to be more reasonable alternatives to handcoded C types anyhow, this section is not designed to be complete.
Having said all that, the C extension type in Example 22-16 does work, and it demonstrates the basics of the model. Let's take a quick look.
/****************************************************
 * stacktyp.c: a character-string stack datatype;
 * a C extension type, for use in Python programs;
 * stacktype module clients can make multiple stacks;
 * similar to stackmod, but 'self' is the instance,
 * and we can overload sequence operators here;
 ****************************************************/

#include "Python.h"

static PyObject *ErrorObject;            /* local exception */
#define onError(message) \
        { PyErr_SetString(ErrorObject, message); return NULL; }

/*****************************************************************************
 * STACK-TYPE INFORMATION
 *****************************************************************************/

#define MAXCHARS 2048
#define MAXSTACK MAXCHARS

typedef struct {                 /* stack instance object format */
    PyObject_HEAD                /* Python header: ref-count + &typeobject */
    int top, len;
    char *stack[MAXSTACK];       /* per-instance state info */
    char strings[MAXCHARS];      /* same as stackmod, but multiple copies */
} stackobject;

/*****************************************************************************
 * INSTANCE METHODS
 *****************************************************************************/

static PyObject *                /* on "instance.push(arg)" */
stack_push(self, args)           /* 'self' is the stack instance object */
    stackobject *self;           /* 'args' are args passed to self.push method */
    PyObject *args;
{
    ...
}

static PyObject *
stack_pop(self, args)
    stackobject *self;
    PyObject *args;              /* on "instance.pop( )" */
{
    ...
}

static PyObject *
stack_top(self, args)
    stackobject *self;
    PyObject *args;
{
    ...
}

static PyObject *
stack_empty(self, args)
    stackobject *self;
    PyObject *args;
{
    ...
}

static struct PyMethodDef stack_methods[] = {    /* instance methods */
    {"push",   stack_push,   1},                 /* name/address table */
    {"pop",    stack_pop,    1},                 /* like list append,sort */
    {"top",    stack_top,    1},
    {"empty",  stack_empty,  1},                 /* extra ops besides optrs */
    {NULL,     NULL}                             /* end, for getattr here */
};

/*****************************************************************************
 * BASIC TYPE-OPERATIONS
 *****************************************************************************/

static stackobject *             /* on "x = stacktype.Stack( )" */
newstackobject( )                /* instance constructor function */
{
    ...                          /* these don't get an 'args' input */
}

static void                      /* instance destructor function */
stack_dealloc(self)              /* when reference-count reaches zero */
    stackobject *self;
{
    ...                          /* do cleanup activity */
}

static int
stack_print(self, fp, flags)
    stackobject *self;
    FILE *fp;
    int flags;                   /* print self to file */
{
    ...
}

static PyObject *
stack_getattr(self, name)        /* on "instance.attr" reference */
    stackobject *self;           /* make a bound-method or member */
    char *name;
{
    ...
}

static int
stack_compare(v, w)              /* on all comparisons */
    stackobject *v, *w;
{
    ...
}

/*****************************************************************************
 * SEQUENCE TYPE-OPERATIONS
 *****************************************************************************/

static int
stack_length(self)
    stackobject *self;           /* called on "len(instance)" */
{
    ...
}

static PyObject *
stack_concat(self, other)
    stackobject *self;           /* on "instance + other" */
    PyObject *other;             /* 'self' is the instance */
{
    ...
}

static PyObject *
stack_repeat(self, n)            /* on "instance * N" */
    stackobject *self;           /* new stack = repeat self n times */
    int n;
{
    ...
}

static PyObject *
stack_item(self, index)          /* on "instance[offset]", "in/for" */
    stackobject *self;           /* return the i-th item of self */
    int index;                   /* negative index pre-adjusted */
{
    ...
}

static PyObject *
stack_slice(self, ilow, ihigh)
    stackobject *self;           /* on "instance[ilow:ihigh]" */
    int ilow, ihigh;             /* negative-adjusted, not scaled */
{
    ...
}

/*****************************************************************************
 * TYPE DESCRIPTORS
 *****************************************************************************/

static PySequenceMethods stack_as_sequence = {  /* sequence supplement */
    (inquiry)          stack_length,   /* sq_length    "len(x)"   */
    (binaryfunc)       stack_concat,   /* sq_concat    "x + y"    */
    (intargfunc)       stack_repeat,   /* sq_repeat    "x * n"    */
    (intargfunc)       stack_item,     /* sq_item      "x[i], in" */
    (intintargfunc)    stack_slice,    /* sq_slice     "x[i:j]"   */
    (intobjargproc)    0,              /* sq_ass_item  "x[i] = v" */
    (intintobjargproc) 0,              /* sq_ass_slice "x[i:j]=v" */
};

/* The ob_type field must be initialized in the module init function
   to be portable to Windows without using C++. */

static PyTypeObject Stacktype = {      /* main Python type-descriptor */
  /* type header */                    /* shared by all instances */
      PyObject_HEAD_INIT(NULL)         /* was PyObject_HEAD_INIT(&PyType_Type)*/
      0,                               /* ob_size */
      "stack",                         /* tp_name */
      sizeof(stackobject),             /* tp_basicsize */
      0,                               /* tp_itemsize */

  /* standard methods */
      (destructor)  stack_dealloc,     /* tp_dealloc  ref-count==0 */
      (printfunc)   stack_print,       /* tp_print    "print x"    */
      (getattrfunc) stack_getattr,     /* tp_getattr  "x.attr"     */
      (setattrfunc) 0,                 /* tp_setattr  "x.attr=v"   */
      (cmpfunc)     stack_compare,     /* tp_compare  "x > y"      */
      (reprfunc)    0,                 /* tp_repr     'x',repr,print */

  /* type categories */
      0,                               /* tp_as_number   +,-,*,/,%,&,>>,... */
      &stack_as_sequence,              /* tp_as_sequence +,[i],[i:j],len,... */
      0,                               /* tp_as_mapping  [key], len, ...    */

  /* more methods */
      (hashfunc)    0,                 /* tp_hash   "dict[x]" */
      (ternaryfunc) 0,                 /* tp_call   "x( )"    */
      (reprfunc)    0,                 /* tp_str    "str(x)"  */

};  /* plus others: see Python's Include/object.h, Modules/xxmodule.c */

/*****************************************************************************
 * MODULE LOGIC
*****************************************************************************/ static PyObject * stacktype_new(self, args) /* on "x = stacktype.Stack( )" */ PyObject *self; /* self not used */ PyObject *args; /* constructor args */ { if (!PyArg_ParseTuple(args, "")) /* Module-method function */ return NULL; return (PyObject *)newstackobject( ); /* make a new type-instance object */ } /* the hook from module to type... */ static struct PyMethodDef stacktype_methods[] = { {"Stack", stacktype_new, 1}, /* one function: make a stack */ {NULL, NULL} /* end marker, for initmodule */ }; void initstacktype( ) /* on first "import stacktype" */ { PyObject *m, *d; /* finalize type object, setting type of new type object here for portability to Windows without requiring C++ */ if (PyType_Ready(&Stacktype) < 0) return; m = Py_InitModule("stacktype", stacktype_methods); /* make the module, */ d = PyModule_GetDict(m); /* with 'Stack' func */ ErrorObject = Py_BuildValue("s", "stacktype.error"); PyDict_SetItemString(d, "error", ErrorObject); /* export exception */ if (PyErr_Occurred( )) Py_FatalError("can't initialize module stacktype"); }
Although most of the file stacktyp.c is missing, there is enough here to illustrate the global structure common to C type implementations:
Instance struct
The file starts off by defining a C struct called stackobject that will be used to hold per-instance state informationeach generated instance object gets a newly malloc'd copy of the struct. It serves the same function as class instance attribute dictionaries, and it contains data that was saved in global variables by the C stack module of the preceding section (Example 22-15).
Instance methods
As in the module, a set of instance methods follows next; they implement method calls such as push and pop. But here, method functions process the implied instance object, passed in to the self argument. This is similar in spirit to class methods. Type instance methods are looked up in the registration table of the code listing (Example 22-16) when accessed.
Basic type operations
Next, the file defines functions to handle basic operations common to all types: creation, printing, qualification, and so on. These functions have more specific type signatures than instance method handlers. The object creation handler allocates a new stack struct and initializes its header fields; the reference count is set to 1, and its type object pointer is set to the Stacktype type descriptor that appears later in the file.
Sequence operations
Functions for handling sequence type operations come next. Stacks respond to most sequence operators: len, +, *, and [i]. Much like the _ _getitem_ _ class method, the stack_item indexing handler performs indexing, but also in membership tests and for iterator loops. These latter two work by indexing an object until an IndexError exception is caught by Python.
Type descriptors
The type descriptor tables (really, structs) that appear near the end of the file are the crux of the matter for typesPython uses these tables to dispatch an operation performed on an instance object to the corresponding C handler function in this file. In fact, everything is routed through these tables; even method attribute lookups start by running a C stack_getattr function listed in the table (which in turn looks up the attribute name in a name/function-pointer table). The main Stacktype table includes a link to the supplemental stack_as_sequence table in which sequence operation handlers are registered; types can provide such tables to register handlers for mapping, number, and sequence operation sets.
See Python's integer and dictionary objects ' source code for number and mapping examples; they are analogous to the sequence type here, but their operation tables vary. Descriptor layouts, like most C API tools, are prone to change over time, and you should always consult Include/object.h in the Python distribution for an up-to-date list of fields.
Constructor module
Besides defining a C type, this file also creates a simple C module at the end that exports a stacktype.Stack constructor function, which Python scripts call to generate new stack instance objects. The initialization function for this module is the only C name in this file that is not static (local to the file); everything else is reached by following pointersfrom instance to type descriptor to C handler function.
Again, see this book's examples distribution for the full C stack type implementation. But to give you the general flavor of C type methods, here is what the C type's pop function looks like; compare this with the C module's pop function to see how the self argument is used to access per-instance information in types:
static PyObject * stack_pop(self, args) stackobject *self; PyObject *args; /* on "instance.pop( )" */ { PyObject *pstr; if (!PyArg_ParseTuple(args, "")) /* verify no args passed */ return NULL; if (self->top == 0) onError("stack underflow") /* return NULL = raise */ else { pstr = Py_BuildValue("s", self->stack[--self->top]); self->len -= (strlen(self->stack[self->top]) + 1); return pstr; } }
This C extension file is compiled and dynamically or statically linked like previous examples; the file makefile.stack in the book's examples distribution handles the build like this:
PYLIB = /usr/bin PYINC = /usr/include/python2.4 stacktype.dll: stacktyp.c gcc stacktyp.c -g -I$(PYINC) -shared -L$(PYLIB) -lpython2.4 -o $@
Once compiled, you can import the C module and make and use instances of the C type that it defines much as if it were a Python class. You would normally do this from a Python script, but the interactive prompt is a convenient place to test the basics:
.../PP3E/Integrate/Extend/Stacks$ python >>> import stacktype # import C constructor module >>> x = stacktype.Stack( ) # make C type instance object >>> x.push('new') # call C type methods >>> x # call C type print handler [Stack: 0: 'new' ] >>> x[0] # call C type index handler 'new' >>> y = stacktype.Stack( ) # make another type instance >>> for c in 'SPAM': y.push(c) # a distinct stack object ... >>> y [Stack: 3: 'M' 2: 'A' 1: 'P' 0: 'S' ] >>> z = x + y # call C type concat handler >>> z [Stack: 4: 'M' 3: 'A' 2: 'P' 1: 'S' 0: 'new' ] >>> y.pop( ) 'M' >>> len(z), z[0], z[-1] # for loops work too (indexing) (5, 'new', 'M') >>> dir( stacktype) ['Stack', '_ _doc_ _', '_ _file_ _', '_ _name_ _', 'error'] >>> stacktype._ _file_ _ 'stacktype.dll'
So how did we do on the optimization front this time? Let's resurrect that timer module we wrote back in Example 20-6 to compare the C stack module and type of this chapter to the Python stack module and classes we coded in Chapter 20. Example 22-17 calculates the system time in seconds that it takes to run tests on all of this book's stack implementations.
#!/usr/local/bin/python # time the C stack module and type extensions # versus the object chapter's Python stack implementations from PP3E.Dstruct.Basic.timer import test # second count function from PP3E.Dstruct.Basic import stack1 # Python stack module from PP3E.Dstruct.Basic import stack2 # Python stack class: +/slice from PP3E.Dstruct.Basic import stack3 # Python stack class: tuples from PP3E.Dstruct.Basic import stack4 # Python stack class: append/pop import stackmod, stacktype # C extension type, module from sys import argv rept, pushes, pops, items = 200, 200, 200, 200 # default: 200 * (600 ops) try: [rept, pushes, pops, items] = map(int, argv[1:]) except: pass print 'reps=%d * [push=%d+pop=%d+fetch=%d]' % (rept, pushes, pops, items) def moduleops(mod): for i in range(pushes): mod.push('hello') # strings only for C for i in range(items): t = mod.item(i) for i in range(pops): mod.pop( ) def objectops(Maker): # type has no init args x = Maker( ) # type or class instance for i in range(pushes): x.push('hello') # strings only for C for i in range(items): t = x[i] for i in range(pops): x.pop( ) # test modules: python/c print "Python module:", test(rept, moduleops, stack1) print "C ext module: ", test(rept, moduleops, stackmod), '\n' # test objects: class/type print "Python simple Stack:", test(rept, objectops, stack2.Stack) print "Python tuple Stack:", test(rept, objectops, stack3.Stack) print "Python append Stack:", test(rept, objectops, stack4.Stack) print "C ext type Stack: ", test(rept, objectops, stacktype.Stack)
Running this script under Cygwin on Windows produces the following results (as usual, these are prone to change over time; these tests were run under Python 2.4 on a 1.2 GHz machine). As we saw before, the Python tuple stack is slightly better than the Python in-place append stack in typical use (when the stack is only pushed and popped), but it is slower when indexed. The first test here runs 200 repetitions of 200 stack pushes and pops, or 80,000 stack operations (200 x 400); times listed are test duration seconds:
.../PP3E/Integrate/Extend/Stacks$ python exttime.py 200 200 200 0 reps=200 * [push=200+pop=200+fetch=0] Python module: 0.35 C ext module: 0.07 Python simple Stack: 0.381 Python tuple Stack: 0.11 Python append Stack: 0.13 C ext type Stack: 0.07 .../PP3E/Integrate/Extend/Stacks$ python exttime.py 100 300 300 0 reps=100 * [push=300+pop=300+fetch=0] Python module: 0.33 C ext module: 0.06 Python simple Stack: 0.321 Python tuple Stack: 0.08 Python append Stack: 0.09 C ext type Stack: 0.06
At least when there are no indexing operations on the stack, as in these two tests (just pushes and pops), the C type is only slightly faster than the best Python stack (tuples). In fact, the difference seems trivial; it's not exactly the kind of performance issue that would generate a bug report.
The C module comes in at roughly five times faster than the Python module, but these results are flawed. The stack1 Python module tested here uses the same slow stack implementation as the Python "simple" stack (stack2). If it was recoded to use the tuple stack representation used in Chapter 20, its speed would be similar to the "tuple" figures listed here and almost identical to the speed of the C module in the first two tests:
.../PP3E/Integrate/Extend/Stacks$ python exttime.py 200 200 200 50 reps=200 * [push=200+pop=200+fetch=50] Python module: 0.36 C ext module: 0.08 Python simple Stack: 0.401 Python tuple Stack: 0.24 Python append Stack: 0.15 C ext type Stack: 0.08 .../PP3E/Integrate/Extend/Stacks$ python exttime.py reps=200 * [push=200+pop=200+fetch=200] Python module: 0.44 C ext module: 0.12 Python simple Stack: 0.431 Python tuple Stack: 1.983 Python append Stack: 0.19 C ext type Stack: 0.1
But under the different usage patterns simulated in these two tests, the C type wins the race. It is about twice as fast as the best Python stack (append) when indexing is added to the test mix, as illustrated by the two preceding test runs that ran with a nonzero fetch count. Similarly, the C module would be twice as fast as the best Python module coding in this case as well.
In other words, the fastest Python stacks are essentially as good as the C stacks if you stick to pushes and pops, but the C stacks are roughly twice as fast if any indexing is performed. Moreover, since you have to pick one representation, if indexing is possible at all you would likely pick the Python append stack; assuming they represent the best case, C stacks would always be twice as fast.
Of course, the measured time differences are so small that in many applications you won't care. Even at one million iterations, the best Python stack is still less than half a second slower than the C stack type:
.../PP3E/Integrate/Extend/Stacks$$ python exttime.py 2000 250 250 0 reps=2000 * [push=250+pop=250+fetch=0] Python module: 4.686 C ext module: 0.952 Python simple Stack: 4.987 Python tuple Stack: 1.352 Python append Stack: 1.572 C ext type Stack: 0.941
Further, in many ways, this is not quite an apples-to-apples comparison. The C stacks are much more difficult to program, and they achieve their speed by imposing substantial functional limits (as coded, the C module and type overflow at 342 pushes: 342 * 6 > 2048). But as a rule of thumb, C extensions can not only integrate existing components for use in Python scripts, they can also optimize time-critical components of pure Python programs. In other scenarios, migration to C might yield an even larger speedup.
On the other hand, C extensions should generally be used only as a last resort. As we learned earlier, algorithms and data structures are often bigger influences on program performance than implementation language. The fact that Python-coded tuple stacks are very nearly as fast as the C stacks under common usage patterns speaks volumes about the importance of data structure representation. Installing the Psyco just-in-time compiler for Python code might erase the remaining difference completely, but we'll leave this as a suggested exercise.
Interestingly, Python grew much faster between this book's first and second editions, relative to C. In the first edition, the C type was still almost three times faster than the best Python stack (tuples), even when no indexing was performed. Today, as in the second edition, it's almost a draw. One might infer from this that C migrations have become one-third as important as they once were.
For comparison, here were the results of this script in the second edition of this book, run on a 650 MHz machine under Python 1.5.2 and Linux. The results were relatively similar, though typically six or more times slowerowing likely to both Python and machine speedups:
.../PP3E/Integrate/Extend/Stacks$ python exttime.py 200 200 200 0 reps=200 * [push=200+pop=200+fetch=0] Python module: 2.09 C ext module: 0.68 Python simple Stack: 2.15 Python tuple Stack: 0.68 Python append Stack: 1.16 C ext type Stack: 0.5 .../PP3E/Integrate/Extend/Stacks$ python exttime.py 100 300 300 0 reps=100 * [push=300+pop=300+fetch=0] Python module: 1.86 C ext module: 0.52 Python simple Stack: 1.91 Python tuple Stack: 0.51 Python append Stack: 0.87 C ext type Stack: 0.38 .../PP3E/Integrate/Extend/Stacks$ python exttime.py 200 200 200 50 reps=200 * [push=200+pop=200+fetch=50] Python module: 2.17 C ext module: 0.79 Python simple Stack: 2.24 Python tuple Stack: 1.94 Python append Stack: 1.25 C ext type Stack: 0.52 .../PP3E/Integrate/Extend/Stacks$ python exttime.py reps=200 * [push=200+pop=200+fetch=200] Python module: 2.42 C ext module: 1.1 Python simple Stack: 2.54 Python tuple Stack: 19.09 Python append Stack: 1.54 C ext type Stack: 0.63
You can code C types manually like thisand in some applications, this approach may make sense. But you don't necessarily have tobecause SWIG knows how to generate glue code for C++ classes, you can instead automatically generate all the C extension and wrapper class code required to integrate such a stack object, simply by running SWIG over an appropriate class declaration. The wrapped C++ class provides a multiple-instance datatype much like the C extension type presented in this section, but SWIG handles language integration details. The next section shows how. | https://flylib.com/books/en/2.726.1.218/1/ | CC-MAIN-2019-43 | refinedweb | 3,606 | 56.05 |
Rick Muller, Sandia National Laboratories
version 0.6
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Python is the programming language of choice for many scientists to a large degree because it offers a great deal of power to analyze and model scientific data with relatively little overhead in terms of learning, installation, or development time. It is a language you can pick up in a weekend and use for the rest of your life.
The Python Tutorial is a great place to start getting a feel for the language. To complement this material, I taught a Python Short Course years ago to a group of computational chemists, during a time when I was worried the field was moving too much in the direction of using canned software rather than developing one's own methods. I wanted to focus on what working scientists needed to be more productive: parsing the output of other programs, building simple models, experimenting with object oriented programming, extending the language with C, and simple GUIs.
I'm trying to do something very similar here: to cut to the chase and focus on what scientists need. In the last year or so, the IPython Project has put together a notebook interface that I have found incredibly valuable. A large number of people have released very good IPython Notebooks that I have taken a huge amount of pleasure reading through. Some that I particularly like include:
I find IPython notebooks an easy way both to get important work done in my everyday job and to communicate what I've done, how I've done it, and why it matters to my coworkers. I find myself endlessly scanning the IPython subreddit hoping someone will post a new notebook. In the interest of putting more notebooks out into the wild for other people to use and enjoy, I thought I would try to recreate some of what I was trying to get across in the original Python Short Course, updated by 15 years of Python, Numpy, Scipy, Matplotlib, and IPython development, as well as my own experience in using Python almost every day over this time.
There are two branches of current releases in Python: the older-syntax Python 2, and the newer-syntax Python 3. This schizophrenia is largely intentional: when it became clear that some non-backwards-compatible changes to the language were necessary, the Python dev team decided to go through a five-year (or so) transition, during which the new language features would be introduced while the old language was still actively maintained, to make the transition as easy as possible. We're now (2013) past the halfway point, and, IMHO, at the point where I'm considering making the change to Python 3 for the first time.
Nonetheless, I'm going to write these notes with Python 2 in mind, since this is the version of the language that I use in my day-to-day job, and am most comfortable with. If these notes prove valuable to people, I'll be happy to rewrite them using Python 3.
With this in mind, these notes assume you have a Python distribution that includes:
A good, easy-to-install option that supports Mac, Windows, and Linux, and that has all of these packages (and much more), is the Enthought Python Distribution, also known as EPD, which appears to be changing its name to Enthought Canopy. Enthought is a commercial company that supports a lot of very good work in scientific Python development and application. You can either purchase a license to use EPD, or there is also a free version that you can download and install.
Here are some other alternatives, should you not want to use EPD:
Linux Most distributions have an installation manager. Redhat has yum, Ubuntu has apt-get. To my knowledge, all of these packages should be available through those installers.
Mac I use Macports, which has up-to-date versions of all of these packages.
Windows The PythonXY package has everything you need: install the package, then go to Start > PythonXY > Command Prompts > IPython notebook server.
Cloud This notebook is currently not running on the IPython notebook viewer, but will be shortly, which will allow the notebook to be viewed, though not run interactively. I'm keeping an eye on Wakari, from Continuum Analytics, which is a cloud-based IPython notebook. Wakari appears to support free accounts as well. Continuum is a company started by some of the core Enthought Numpy/Scipy people, focusing on big data.
Continuum also supports a bundled, multiplatform Python package called Anaconda that I'll also keep an eye on.
This is a quick introduction to Python. There are lots of other places to learn the language more thoroughly. I have collected a list of useful links, including ones to other learning resources, at the end of this notebook. If you want a little more depth, Python Tutorial is a great place to start, as is Zed Shaw's Learn Python the Hard Way.
The lessons that follow make use of the IPython notebooks. There's a good introduction to notebooks in the IPython notebook documentation that even has a nice video on how to use the notebooks. You should probably also flip through the IPython tutorial in your copious free time.
Briefly, notebooks have code cells (that are generally followed by result cells) and text cells. The text cells are the stuff that you're reading now. The code cells start with "In []:" with some number generally in the brackets. If you put your cursor in the code cell and hit Shift-Enter, the code will run in the Python interpreter and the result will print out in the output cell. You can then change things around and see whether you understand what's going on. If you need to know more, see the IPython notebook documentation or the IPython tutorial.
Many of the things I used to use a calculator for, I now use Python for:
2+2
4
(50-5*6)/4
5
(If you're typing this into an IPython notebook, or otherwise using a notebook file, you hit Shift-Enter to evaluate a cell.)
There are some gotchas compared to using a normal calculator.
7/3
2
Python integer division, like C or Fortran integer division, truncates the remainder and returns an integer. At least it does in version 2. In version 3, Python returns a floating point number. You can get a sneak preview of this feature in Python 2 by importing the module from the future features:
from __future__ import division
Alternatively, you can convert one of the integers to a floating point number, in which case the division function returns another floating point number.
7/3.
2.3333333333333335
7/float(3)
2.3333333333333335
In the last few lines, we have sped by a lot of things that we should stop for a moment and explore a little more fully. We've seen, however briefly, two different data types: integers, also known as whole numbers to the non-programming world, and floating point numbers, also known (incorrectly) as decimal numbers to the rest of the world.
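You can ask Python what type a given object has with the built-in type() function. (The exact wording of the output differs slightly between Python 2 and 3, so it is omitted here.)

```python
print(type(2))      # an integer
print(type(2.0))    # a floating point number
print(type(7/3.))   # dividing by a float produces a float
```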
We've also seen the first instance of an import statement. Python has a huge number of libraries included with the distribution. To keep things simple, most of these variables and functions are not accessible from a normal Python interactive session. Instead, you have to import the name. For example, there is a math module containing many useful functions. To access, say, the square root function, you can either first
from math import sqrt
and then
sqrt(81)
9.0
or you can simply import the math library itself
import math
math.sqrt(81)
9.0
You can define variables using the equals (=) sign:
width = 20
length = 30
area = length*width
area
600
If you try to access a variable that you haven't yet defined, you get an error:
volume
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-9-0c7fc58f9268> in <module>()
----> 1 volume

NameError: name 'volume' is not defined
and you need to define it:
depth = 10
volume = area*depth
volume
6000
You can name a variable almost anything you want. It needs to start with an alphabetical character or an underscore ("_"), and can contain alphanumeric characters plus underscores. Certain words, however, are reserved for the language:
and, as, assert, break, class, continue, def, del, elif, else, except, exec, finally, for, from, global, if, import, in, is, lambda, not, or, pass, print, raise, return, try, while, with, yield
Trying to define a variable using one of these will result in a syntax error:
return = 0
  File "<ipython-input-10-2b99136d4ec6>", line 1
    return = 0
           ^
SyntaxError: invalid syntax
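If you ever need the full list of reserved words from within Python itself, the standard library's keyword module provides it:

```python
import keyword

print(keyword.kwlist)               # the complete list of reserved words
print(keyword.iskeyword("return"))  # True: can't be used as a variable name
print(keyword.iskeyword("area"))    # False: fine as a variable name
```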
The Python Tutorial has more on using Python as an interactive shell. The IPython tutorial makes a nice complement to this, since IPython has a much more sophisticated interactive shell.
Strings can be defined using either single quotes
'Hello, World!'
'Hello, World!'
or double quotes
"Hello, World!"
'Hello, World!'
But not both at the same time, unless you want one of the symbols to be part of the string.
"He's a Rebel"
"He's a Rebel"
'She asked, "How are you today?"'
'She asked, "How are you today?"'
Just like the other two data objects we're familiar with (ints and floats), you can assign a string to a variable
greeting = "Hello, World!"
The print statement is often used for printing character strings:
print greeting
Hello, World!
But it can also print data types other than strings:
print "The area is ",area
The area is 600
In the above snippet, the number 600 (stored in the variable "area") is converted into a string before being printed out.
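You can also do the conversion yourself with the built-in str() function, or use the % formatting operator for finer control; both are common idioms:

```python
area = 600
print("The area is " + str(area))            # explicit conversion, then concatenation
print("The area is %d square units" % area)  # %d formats an integer
print("Pi is roughly %.3f" % 3.14159265)     # %.3f rounds to three decimal places
```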
You can use the + operator to concatenate strings together:
statement = "Hello," + "World!"
print statement
Hello,World!
Don't forget the space between the strings, if you want one there.
statement = "Hello, " + "World!"
print statement
Hello, World!
You can use + to concatenate multiple strings in a single statement:
print "This " + "is " + "a " + "longer " + "statement."
This is a longer statement.
If you have a lot of words to concatenate together, there are other, more efficient ways to do this. But this is fine for linking a few strings together.
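One of those more efficient ways is the string join() method, which glues a list of strings together with a chosen separator (lists are introduced just below):

```python
words = ["This", "is", "a", "longer", "statement."]
print(" ".join(words))   # join with a single space between each word
print("-".join(words))   # any separator string works
```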
days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
You can access members of the list using the index of that item:
days_of_the_week[2]
'Tuesday'
Python lists, like those in C but unlike those in Fortran, use 0 as the index of the first element of a list. Thus, in this example, element 0 is "Sunday", element 1 is "Monday", and so on. If you need to access the nth element from the end of the list, you can use a negative index. For example, the -1 element of a list is the last element:
days_of_the_week[-1]
'Saturday'
You can add additional items to the list using the .append() command:
languages = ["Fortran","C","C++"]
languages.append("Python")
print languages
['Fortran', 'C', 'C++', 'Python']
The range() command is a convenient way to make sequential lists of numbers:
range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Note that range(n) starts at 0 and gives the sequential list of integers less than n. If you want to start at a different number, use range(start,stop)
range(2,8)
[2, 3, 4, 5, 6, 7]
The lists created above with range have a step of 1 between elements. You can also give a fixed step size via a third argument:
evens = range(0,20,2)
evens
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
evens[3]
6
Lists do not have to hold the same data type. For example,
["Today",7,99.3,""]
['Today', 7, 99.3, '']
However, it's good (but not essential) to use lists for similar objects that are somehow logically connected. If you want to group different data types together into a composite data object, it's best to use tuples, which we will learn about below.
You can find out how long a list is using the len() command:
help(len)
Help on built-in function len in module __builtin__: len(...) len(object) -> integer Return the number of items of a sequence or collection.
len(evens)
10
for day in days_of_the_week:
    print day
Sunday Monday Tuesday Wednesday Thursday Friday Saturday
This code snippet goes through each element of the list called days_of_the_week and assigns it to the variable day. It then executes everything in the indented block (in this case only one line of code, the print statement) using that variable assignment. When the program has gone through every element of the list, it exits the block.
(Almost) every programming language defines blocks of code in some way. In Fortran, one uses END statements (ENDDO, ENDIF, etc.) to define code blocks. In C, C++, and Perl, one uses curly braces {} to define these blocks.
Python uses a colon (":"), followed by indentation, to define code blocks. Everything at a higher level of indentation is taken to be in the same block. In the above example the block was only a single line, but we could have had longer blocks as well:
for day in days_of_the_week:
    statement = "Today is " + day
    print statement
Today is Sunday Today is Monday Today is Tuesday Today is Wednesday Today is Thursday Today is Friday Today is Saturday
The range() command is particularly useful with the for statement to execute loops of a specified length:
for i in range(20):
    print "The square of ",i," is ",i*i
The square of 0 is 0
The square of 1 is 1
The square of 2 is 4
The square of 3 is 9
The square of 4 is 16
The square of 5 is 25
The square of 6 is 36
The square of 7 is 49
The square of 8 is 64
The square of 9 is 81
The square of 10 is 100
The square of 11 is 121
The square of 12 is 144
The square of 13 is 169
The square of 14 is 196
The square of 15 is 225
The square of 16 is 256
The square of 17 is 289
The square of 18 is 324
The square of 19 is 361
The for loop can iterate over any sequence, not just a list. For example, it can iterate over the characters of a string:

for letter in "Sunday":
    print letter
S
u
n
d
a
y
This is only occasionally useful. Slightly more useful is the slicing operation, which you can also use on any sequence. We already know that we can use indexing to get the first element of a list:
days_of_the_week[0]
'Sunday'
If we want the list containing the first two elements of a list, we can do this via
days_of_the_week[0:2]
['Sunday', 'Monday']
or simply
days_of_the_week[:2]
['Sunday', 'Monday']
If we want the last items of the list, we can do this with negative slicing:
days_of_the_week[-2:]
['Friday', 'Saturday']
which is somewhat logically consistent with negative indices accessing the last elements of the list.
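For example, a single negative index counts backwards from the end of the list (the list is redefined here so the snippet stands alone):

```python
days_of_the_week = ["Sunday", "Monday", "Tuesday", "Wednesday",
                    "Thursday", "Friday", "Saturday"]

last = days_of_the_week[-1]            # the last element: 'Saturday'
second_to_last = days_of_the_week[-2]  # one before it: 'Friday'
```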
You can also take a slice out of the middle of a list:

workdays = days_of_the_week[1:6]
print workdays
['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
Since strings are sequences, you can also do this to them:
day = "Sunday"
abbreviation = day[:3]
print abbreviation
Sun
If we really want to get fancy, we can pass a third element into the slice, which specifies a step length (just like a third argument to the range() function specifies the step):
numbers = range(0,40)
evens = numbers[2::2]
evens
[2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38]
Note that in this example I was even able to omit the second argument, so that the slice started at 2, went to the end of the list, and took every second element, to generate the list of even numbers less than 40.
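The step may also be negative; slicing with [::-1] is a common idiom for reversing a sequence. (This sketch uses list(range(...)) so it behaves the same under Python 2 and 3.)

```python
numbers = list(range(10))          # [0, 1, 2, ..., 9]

reversed_numbers = numbers[::-1]   # a step of -1 walks the list backwards
odds = numbers[1::2]               # start at index 1, take every second element
```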
We have now learned a few data types: integers and floating point numbers, strings, and lists, a container that can hold objects of any data type. We have learned to print things out, and to iterate over items in lists. We will now learn about boolean variables that can be either True or False.
We invariably need some concept of conditions in programming to control branching behavior, to allow a program to react differently to different situations. If it's Monday, I'll go to work, but if it's Sunday, I'll sleep in. To do this in Python, we use a combination of boolean variables, which evaluate to either True or False, and if statements, that control branching based on boolean values.
For example:
if day == "Sunday":
    print "Sleep in"
else:
    print "Go to work"
Sleep in
(Quick quiz: why did the snippet print "Sleep in" here? What must the variable "day" be set to?)
Let's take the snippet apart to see what happened. First, note the statement
day == "Sunday"
True
If we evaluate it by itself, as we just did, we see that it returns a boolean value, True. The "==" operator performs equality testing. If the two items are equal, it returns True; otherwise it returns False. In this case, it is comparing two values: the string "Sunday", and whatever is stored in the variable "day", which, in this case, is also the string "Sunday". Since the two strings are equal, the truth test returns True.
The if statement that contains the truth test is followed by a code block (a colon followed by an indented block of code). If the boolean is True, it executes the code in that block. Since it is True in the above example, we see the "Sleep in" line executed.
The first block of code is followed by an else statement, whose block is executed only if the if test is False. Since the test was True here, the else block is skipped, which is why we see "Sleep in" rather than "Go to work".
You can compare any data types in Python:
1 == 2
False
50 == 2*25
True
3 < 3.14159
True
1 == 1.0
True
1 != 0
True
1 <= 2
True
1 >= 1
True
We see a few other comparison operators here, all of which should be self-explanatory: less than, equality, non-equality, and so on.
Particularly interesting is the 1 == 1.0 test, which is true, since even though the two objects are different data types (integer and floating point number), they have the same value. There is another boolean operator is, that tests whether two objects are the same object:
1 is 1.0
False
We can do boolean tests on lists as well:
[1,2,3] == [1,2,4]
False
[1,2,3] < [1,2,4]
True
Finally, note that you can also string multiple comparisons together, which can result in very intuitive tests:
hours = 5
0 < hours < 24
True
If statements can have elif parts ("else if"), in addition to if/else parts. For example:
if day == "Sunday":
    print "Sleep in"
elif day == "Saturday":
    print "Do chores"
else:
    print "Go to work"
Sleep in
Of course we can combine if statements with for loops, to make a snippet that is almost interesting:
for day in days_of_the_week:
    statement = "Today is " + day
    print statement
    if day == "Sunday":
        print " Sleep in"
    elif day == "Saturday":
        print " Do chores"
    else:
        print " Go to work"
Today is Sunday
 Sleep in
Today is Monday
 Go to work
Today is Tuesday
 Go to work
Today is Wednesday
 Go to work
Today is Thursday
 Go to work
Today is Friday
 Go to work
Today is Saturday
 Do chores
This is something of an advanced topic, but ordinary data types have boolean values associated with them, and, indeed, in early versions of Python there was not a separate boolean object. Essentially, anything that was a 0 value (the integer or floating point 0, an empty string "", or an empty list []) was False, and everything else was True. You can see the boolean value of any data object using the bool() function.
bool(1)
True
bool(0)
False
bool(["This "," is "," a "," list"])
True
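The empty containers mentioned above are likewise False, while any non-empty container is True, even a list whose only element is a zero:

```python
# bool() of "empty" values: the empty string, empty list, and zero
empty_checks = [bool(""), bool([]), bool(0.0)]       # all False

# Non-empty values are True, even a list containing only 0
nonempty_checks = [bool("hi"), bool([0]), bool(-1)]  # all True
```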
The Fibonacci sequence is a sequence in math that starts with 0 and 1, and then each successive entry is the sum of the previous two. Thus, the sequence goes 0,1,1,2,3,5,8,13,21,34,55,89,...
A very common exercise in programming books is to compute the Fibonacci sequence up to some number n. First I'll show the code, then I'll discuss what it is doing.
n = 10
sequence = [0,1]
for i in range(2,n): # This is going to be a problem if we ever set n <= 2!
    sequence.append(sequence[i-1]+sequence[i-2])
print sequence
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
Let's go through this line by line. First, we define the variable n, and set it to the integer 10. n is the length of the sequence we're going to form, and should probably have a better variable name. We then create a variable called sequence, and initialize it to the list with the integers 0 and 1 in it, the first two elements of the Fibonacci sequence. We have to create these elements "by hand", since the iterative part of the sequence requires two previous elements.
We then have a for loop over the list of integers from 2 (the next element of the list) to n (the length of the sequence). After the colon, we see a hash mark "#", and then a comment that if we had set n to some number less than 2 we would have a problem. Comments in Python start with #, and are good ways to make notes to yourself or to a user of your code explaining why you did what you did. Better than the comment here would be to test to make sure the value of n is valid, and to complain if it isn't; we'll try this later.
In the body of the loop, we append to the list an integer equal to the sum of the two previous elements of the list.
After exiting the loop (ending the indentation) we then print out the whole list. That's it!
def fibonacci(sequence_length):
    "Return the Fibonacci sequence of length *sequence_length*"
    sequence = [0,1]
    if sequence_length < 1:
        print "Fibonacci sequence only defined for length 1 or greater"
        return
    if 0 < sequence_length < 3:
        return sequence[:sequence_length]
    for i in range(2,sequence_length):
        sequence.append(sequence[i-1]+sequence[i-2])
    return sequence
We can now call fibonacci() for different sequence_lengths:
fibonacci(2)
[0, 1]
fibonacci(12)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
We've introduced several new features here. First, note that the function itself is defined as a code block (a colon followed by an indented block). This is the standard way that Python delimits things. Next, note that the first line of the function is a single string. This is called a docstring, and is a special kind of comment that is often available to people using the function through the python command line:
help(fibonacci)
Help on function fibonacci in module __main__:

fibonacci(sequence_length)
    Return the Fibonacci sequence of length *sequence_length*
If you define a docstring for all of your functions, it makes it easier for other people to use them, since they can get help on the arguments and return values of the function.
Next, note that rather than putting a comment in about what input values lead to errors, we have some testing of these values, followed by a warning if the value is invalid, and some conditional code to handle special cases.
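An alternative to printing a warning is to raise an exception, which is generally considered more idiomatic for invalid input. A sketch of how the same function might look with a ValueError (this variant is illustrative, not the version defined above):

```python
def fibonacci(sequence_length):
    "Return the Fibonacci sequence of length *sequence_length*"
    if sequence_length < 1:
        # Raising an exception stops the caller immediately, instead of
        # silently returning None after printing a warning.
        raise ValueError("sequence length must be 1 or greater")
    sequence = [0, 1]
    if sequence_length < 3:
        return sequence[:sequence_length]
    for i in range(2, sequence_length):
        sequence.append(sequence[i-1] + sequence[i-2])
    return sequence
```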
Functions can also call themselves, something that is often called recursion. We're going to experiment with recursion by computing the factorial function. The factorial is defined for a positive integer n as $$ n! = n(n-1)(n-2)\cdots 1 $$
First, note that we don't need to write a function at all, since this is a function built into the standard math library. Let's use the help function to find out about it:
from math import factorial help(factorial)
Help on built-in function factorial in module math:

factorial(...)
    factorial(x) -> Integral

    Find x!. Raise a ValueError if x is negative or non-integral.
This is clearly what we want.
factorial(20)
2432902008176640000
However, if we did want to write a function ourselves, we could do so recursively by noting that $$ n! = n(n-1)! $$
The program then looks something like:
def fact(n):
    if n <= 0:
        return 1
    return n*fact(n-1)
fact(20)
2432902008176640000
Recursion can be very elegant, and can lead to very simple programs.
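For contrast, the Fibonacci numbers can also be written recursively. The result is elegant, but note that this naive sketch recomputes the same values many times, so it is far slower than the loop-based version above for large n:

```python
def fib(n):
    "Return the n-th Fibonacci number, with fib(0) = 0 and fib(1) = 1."
    if n < 2:
        return n                    # base cases: fib(0) = 0, fib(1) = 1
    return fib(n-1) + fib(n-2)      # each call spawns two more calls
```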
Before we end the Python overview, I wanted to touch on two more data structures that are very useful (and thus very common) in Python programs.
A tuple is a sequence object like a list or a string. It's constructed by grouping a sequence of objects together with commas, either without brackets, or with parentheses:
t = (1,2,'hi',9.0)
t
(1, 2, 'hi', 9.0)
Tuples are like lists, in that you can access the elements using indices:
t[1]
2
However, tuples are immutable: you can't append to them or change their elements:
t.append(7)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-72-50c7062b1d5f> in <module>()
----> 1 t.append(7)

AttributeError: 'tuple' object has no attribute 'append'
t[1]=77
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-73-03cc8ba9c07d> in <module>()
----> 1 t[1]=77

TypeError: 'tuple' object does not support item assignment
Tuples are useful anytime you want to group different pieces of data together in an object, but don't want to create a full-fledged class (see below) for them. For example, let's say you want the Cartesian coordinates of some objects in your program. Tuples are a good way to do this:
('Bob',0.0,21.0)
('Bob', 0.0, 21.0)
Again, it's not a necessary distinction, but one way to distinguish tuples and lists is that tuples are a collection of different things, here a name, and x and y coordinates, whereas a list is a collection of similar things, like if we wanted a list of those coordinates:
positions = [
    ('Bob',0.0,21.0),
    ('Cat',2.5,13.1),
    ('Dog',33.0,1.2)
]
Tuples can be used when functions return more than one value. Say we wanted to compute the smallest x- and y-coordinates of the above list of objects. We could write:
def minmax(objects):
    minx = 1e20 # These are set to really big numbers
    miny = 1e20
    for obj in objects:
        name,x,y = obj
        if x < minx:
            minx = x
        if y < miny:
            miny = y
    return minx,miny

x,y = minmax(positions)
print x,y
0.0 1.2
Here we did two things with tuples you haven't seen before. First, we unpacked an object into a set of named variables using tuple assignment:
>>> name,x,y = obj
We also returned multiple values (minx,miny), which were then assigned to two other variables (x,y), again by tuple assignment. This makes what would have been complicated code in C++ rather simple.
Tuple assignment is also a convenient way to swap variables:
x,y = 1,2
y,x = x,y
x,y
(2, 1)
Dictionaries are objects called "mappings" or "associative arrays" in other languages. Whereas a list associates an integer index with a set of objects:
mylist = [1,2,9,21]
The index in a dictionary is called the key, and the corresponding dictionary entry is the value. A dictionary can use (almost) anything as the key. Whereas lists are formed with square brackets [], dictionaries use curly brackets {}:
ages = {"Rick": 46, "Bob": 86, "Fred": 21}
print "Rick's age is ",ages["Rick"]
Rick's age is 46
There's also a convenient way to create dictionaries without having to quote the keys.
dict(Rick=46,Bob=86,Fred=20)
{'Bob': 86, 'Fred': 20, 'Rick': 46}
The len() command works on both tuples and dictionaries:
len(t)
4
len(ages)
3
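Unlike tuples, dictionaries are mutable: plain assignment adds or overwrites an entry, and the in operator tests whether a key is present:

```python
ages = {"Rick": 46, "Bob": 86, "Fred": 21}

ages["Ann"] = 30          # add a new key/value pair
ages["Rick"] = 47         # overwrite an existing value
has_bob = "Bob" in ages   # membership testing looks at keys, not values
```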
We can generally understand trends in data by using a plotting program to chart it. Python has a wonderful plotting library called Matplotlib. The IPython notebook interface we are using for these notes has that functionality built in.
As an example, we have looked at two different functions, the Fibonacci function, and the factorial function, both of which grow faster than polynomially. Which one grows the fastest? Let's plot them. First, let's generate the Fibonacci sequence of length 10:
fibs = fibonacci(10)
Next lets generate the factorials.
facts = []
for i in range(10):
    facts.append(factorial(i))
Now we use the Matplotlib function plot to compare the two.
figsize(8,6)
plot(facts,label="factorial")
plot(fibs,label="Fibonacci")
xlabel("n")
legend()
<matplotlib.legend.Legend at 0x10d1b4890>
The factorial function grows much faster. In fact, you can't even see the Fibonacci sequence. It's not entirely surprising: a function where we multiply by n each iteration is bound to grow faster than one where we add (roughly) n each iteration.
Let's plot these on a semilog plot so we can see them both a little more clearly:
semilogy(facts,label="factorial")
semilogy(fibs,label="Fibonacci")
xlabel("n")
legend()
<matplotlib.legend.Legend at 0x10d2bee90>
There are many more things you can do with Matplotlib. We'll be looking at some of them in the sections to come. In the meantime, if you want an idea of the different things you can do, look at the Matplotlib Gallery. Rob Johansson's IPython notebook Introduction to Matplotlib is also particularly good.
There is, of course, much more to the language than I've covered here. I've tried to keep this brief enough so that you can jump in and start using Python to simplify your life and work. My own experience in learning new things is that the information doesn't "stick" unless you try and use it for something in real life.
You will no doubt need to learn more as you go. I've listed several other good references, including the Python Tutorial and Learn Python the Hard Way. Additionally, now is a good time to start familiarizing yourself with the Python Documentation, and, in particular, the Python Language Reference.
Tim Peters, one of the earliest and most prolific Python contributors, wrote the "Zen of Python", which can be accessed via the "import this" command.
No matter how experienced a programmer you are, these are words to meditate on.
Numpy contains core routines for doing fast vector, matrix, and linear algebra-type operations in Python. Scipy contains additional routines for optimization, special functions, and so on. Both contain modules written in C and Fortran so that they're as fast as possible. Together, they give Python roughly the same capability that the Matlab program offers. (In fact, if you're an experienced Matlab user, there is a guide to Numpy for Matlab users just for you.)
Fundamental to both Numpy and Scipy is the ability to work with vectors and matrices. You can create vectors from lists using the array command:
array([1,2,3,4,5,6])
array([1, 2, 3, 4, 5, 6])
You can pass in a second argument to array that gives the numeric type. There are a number of types listed here that your matrix can be. Some of these are aliased to single character codes. The most common ones are 'd' (double precision floating point number), 'D' (double precision complex number), and 'i' (int32). Thus,
array([1,2,3,4,5,6],'d')
array([ 1., 2., 3., 4., 5., 6.])
array([1,2,3,4,5,6],'D')
array([ 1.+0.j, 2.+0.j, 3.+0.j, 4.+0.j, 5.+0.j, 6.+0.j])
array([1,2,3,4,5,6],'i')
array([1, 2, 3, 4, 5, 6], dtype=int32)
To build matrices, you can use the array command with lists of lists:
array([[0,1],[1,0]],'d')
array([[ 0., 1.], [ 1., 0.]])
You can also form empty (zero) matrices of arbitrary shape (including vectors) using the zeros command:
zeros((3,3),'d')
array([[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]])
The first argument is a tuple containing the shape of the matrix, and the second is the data type argument, which follows the same conventions as in the array command. Thus, you can make row vectors:
zeros(3,'d')
array([ 0., 0., 0.])
zeros((1,3),'d')
array([[ 0., 0., 0.]])
or column vectors:
zeros((3,1),'d')
array([[ 0.], [ 0.], [ 0.]])
There's also an identity command that behaves as you'd expect:
identity(4,'d')
array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]])
as well as a ones command.
There is also a linspace command that makes a linear array of points from a starting to an ending value. If you provide a third argument, it takes that as the number of points in the space. If you don't provide the argument, it gives a length 50 linear space.
linspace(0,1,11)
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ])
linspace is an easy way to make coordinates for plotting. Functions in the numpy library (all of which are imported into IPython notebook) can act on an entire vector (or even a matrix) of points at once. Thus,
x = linspace(0,2*pi)
sin(x)
array([ 0.00000000e+00, 1.27877162e-01, 2.53654584e-01, 3.75267005e-01, 4.90717552e-01, 5.98110530e-01, 6.95682551e-01, 7.81831482e-01, 8.55142763e-01, 9.14412623e-01, 9.58667853e-01, 9.87181783e-01, 9.99486216e-01, 9.95379113e-01, 9.74927912e-01, 9.38468422e-01, 8.86599306e-01, 8.20172255e-01, 7.40277997e-01, 6.48228395e-01, 5.45534901e-01, 4.33883739e-01, 3.15108218e-01, 1.91158629e-01, 6.40702200e-02, -6.40702200e-02, -1.91158629e-01, -3.15108218e-01, -4.33883739e-01, -5.45534901e-01, -6.48228395e-01, -7.40277997e-01, -8.20172255e-01, -8.86599306e-01, -9.38468422e-01, -9.74927912e-01, -9.95379113e-01, -9.99486216e-01, -9.87181783e-01, -9.58667853e-01, -9.14412623e-01, -8.55142763e-01, -7.81831482e-01, -6.95682551e-01, -5.98110530e-01, -4.90717552e-01, -3.75267005e-01, -2.53654584e-01, -1.27877162e-01, -2.44929360e-16])
In conjunction with matplotlib, this is a nice way to plot things:
plot(x,sin(x))
[<matplotlib.lines.Line2D at 0x10d49f510>]
Matrices behave sensibly when multiplied by a scalar:

0.125*identity(3,'d')
array([[ 0.125, 0. , 0. ], [ 0. , 0.125, 0. ], [ 0. , 0. , 0.125]])
as well as when you add two matrices together. (However, the matrices have to be the same shape.)
identity(2,'d') + array([[1,1],[1,2]])
array([[ 2., 1.], [ 1., 3.]])
Something that confuses Matlab users is that the times (*) operator gives element-wise multiplication rather than matrix multiplication:
identity(2)*ones((2,2))
array([[ 1., 0.], [ 0., 1.]])
To get matrix multiplication, you need the dot command:
dot(identity(2),ones((2,2)))
array([[ 1., 1.], [ 1., 1.]])
dot can also do dot products (duh!):
v = array([3,4],'d')
sqrt(dot(v,v))
5.0
as well as matrix-vector products.
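For instance, a matrix-vector product also goes through dot (written here with explicit numpy imports so the snippet stands alone outside the notebook):

```python
from numpy import array, dot

A = array([[0.0, 1.0],
           [1.0, 0.0]])   # a 2x2 permutation matrix
v = array([3.0, 4.0])

w = dot(A, v)             # matrix-vector product: swaps the components
```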
There are determinant, inverse, and transpose functions that act as you would suppose. Transpose can be abbreviated with ".T" at the end of a matrix object:
m = array([[1,2],[3,4]])
m.T
array([[1, 3], [2, 4]])
There's also a diag() function that takes a list or a vector and puts it along the diagonal of a square matrix.
diag([1,2,3,4,5])
array([[1, 0, 0, 0, 0], [0, 2, 0, 0, 0], [0, 0, 3, 0, 0], [0, 0, 0, 4, 0], [0, 0, 0, 0, 5]])
We'll find this useful later on.
The solve command solves systems of linear equations $Ax = b$:

A = array([[1,1,1],[0,2,5],[2,5,-1]])
b = array([6,-4,27])
solve(A,b)
array([ 5., 3., -2.])
There are a number of routines to compute eigenvalues and eigenvectors:
A = array([[13,-4],[-4,7]],'d')
eigvalsh(A)
array([ 5., 15.])
eigh(A)
(array([ 5., 15.]), array([[-0.4472136 , -0.89442719], [-0.89442719, 0.4472136 ]]))
Now that we have these tools in our toolbox, we can start to do some cool stuff with it. Many of the equations we want to solve in Physics involve differential equations. We want to be able to compute the derivative of functions: $$ y' = \lim_{h\to 0}\frac{y(x+h)-y(x)}{h} $$
We do this by discretizing the function $y(x)$ on an evenly spaced set of points $x_0, x_1, \dots, x_n$, yielding $y_0, y_1, \dots, y_n$. Using the discretization, we can approximate the derivative by $$ y_i' \approx \frac{y_{i+1}-y_{i-1}}{x_{i+1}-x_{i-1}} $$
We can write a derivative function in Python via
def nderiv(y,x):
    "Finite difference derivative of the function f"
    n = len(y)
    d = zeros(n,'d') # assume double
    # Use centered differences for the interior points, one-sided differences for the ends
    for i in range(1,n-1):
        d[i] = (y[i+1]-y[i-1])/(x[i+1]-x[i-1])
    d[0] = (y[1]-y[0])/(x[1]-x[0])
    d[n-1] = (y[n-1]-y[n-2])/(x[n-1]-x[n-2])
    return d
Let's see whether this works for our sin example from above:
x = linspace(0,2*pi)
dsin = nderiv(sin(x),x)
plot(x,dsin,label='numerical')
plot(x,cos(x),label='analytical')
title("Comparison of numerical and analytical derivatives of sin(x)")
legend()
<matplotlib.legend.Legend at 0x110b39c10>
Pretty close!
Now that we've convinced ourselves that finite differences aren't a terrible approximation, let's see if we can use this to solve the one-dimensional harmonic oscillator.
We want to solve the time-independent Schrodinger equation $$ -\frac{\hbar^2}{2m}\frac{\partial^2\psi(x)}{\partial x^2} + V(x)\psi(x) = E\psi(x)$$
for $\psi(x)$ when $V(x)=\frac{1}{2}m\omega^2x^2$ is the harmonic oscillator potential. We're going to use the standard trick to transform the differential equation into a matrix equation by multiplying both sides by $\psi^*(x)$ and integrating over $x$. This yields $$ -\frac{\hbar^2}{2m}\int\psi(x)\frac{\partial^2}{\partial x^2}\psi(x)dx + \int\psi(x)V(x)\psi(x)dx = E$$
We will again use the finite difference approximation. The finite difference formula for the second derivative is $$ y_i'' \approx \frac{y_{i+1}-2y_i+y_{i-1}}{h^2} $$ where $h = x_{i+1}-x_i$ is the (uniform) grid spacing.
We can think of the first term in the Schrodinger equation as the overlap of the wave function $\psi(x)$ with the second derivative of the wave function $\frac{\partial^2}{\partial x^2}\psi(x)$. Given the above expression for the second derivative, we can see if we take the overlap of the states $y_1,\dots,y_n$ with the second derivative, we will only have three points where the overlap is nonzero, at $y_{i-1}$, $y_i$, and $y_{i+1}$. In matrix form, this leads to the tridiagonal Laplacian matrix, which has -2's along the diagonals, and 1's along the diagonals above and below the main diagonal.
The second term leads to a diagonal matrix with $V(x_i)$ on the diagonal elements. Putting all of these pieces together, we get:
def Laplacian(x):
    h = x[1]-x[0] # assume uniformly spaced points
    n = len(x)
    M = -2*identity(n,'d')
    for i in range(1,n):
        M[i,i-1] = M[i-1,i] = 1
    return M/h**2
x = linspace(-3,3)
m = 1.0
ohm = 1.0
T = (-0.5/m)*Laplacian(x)
V = 0.5*(ohm**2)*(x**2)
H = T + diag(V)
E,U = eigh(H)
h = x[1]-x[0]

# Plot the Harmonic potential
plot(x,V,color='k')

for i in range(4):
    # plot the eigenfunction, offset vertically by its energy level
    plot(x,-U[:,i]/sqrt(h)+E[i])
title("Eigenfunctions of the Quantum Harmonic Oscillator")
xlabel("Displacement (bohr)")
ylabel("Energy (hartree)")
<matplotlib.text.Text at 0x10d7b9650>
We've made a couple of hacks here to get the orbitals the way we want them. First, I inserted a -1 factor before the wave functions, to fix the phase of the lowest state. The phase (sign) of a quantum wave function doesn't hold any information, only the square of the wave function does, so this doesn't really change anything.
But the eigenfunctions as we generate them aren't properly normalized. The reason is that finite difference isn't a real basis in the quantum mechanical sense. It's a basis of Dirac δ functions at each point; we interpret the space between the points as being "filled" by the wave function, but the finite difference basis only has the solution being at the points themselves. We can fix this by dividing the eigenfunctions of our finite difference Hamiltonian by the square root of the spacing, and this gives properly normalized functions.
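A quick sanity check of that sqrt(h) rescaling. Numpy's eigh returns eigenvector columns with unit Euclidean norm (sum of squares equal to one), so dividing by sqrt(h) makes the grid-quadrature norm equal one; the matrix here is an arbitrary symmetric matrix chosen just for the check:

```python
import numpy as np

x = np.linspace(-3, 3, 50)
h = x[1] - x[0]              # uniform grid spacing

H = np.diag(x**2)            # any symmetric matrix works for this check
E, U = np.linalg.eigh(H)     # columns of U have sum-of-squares equal to 1

psi = U[:, 0] / np.sqrt(h)   # rescaled "wave function"
norm = np.sum(psi**2) * h    # grid approximation to the integral of psi^2
```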
The solutions to the Harmonic Oscillator are supposed to be Hermite polynomials. The Wikipedia page has the HO states given by $$\psi_n(x) = \frac{1}{\sqrt{2^n n!}} \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} \exp\left(-\frac{m\omega x^2}{2\hbar}\right) H_n\left(\sqrt{\frac{m\omega}{\hbar}}x\right)$$
Let's see whether they look like those. There are some special functions in the Numpy library, and some more in Scipy. Hermite Polynomials are in Numpy:
from numpy.polynomial.hermite import Hermite

def ho_evec(x,n,m,ohm):
    vec = [0]*9
    vec[n] = 1
    Hn = Hermite(vec)
    return (1/sqrt(2**n*factorial(n)))*pow(m*ohm/pi,0.25)*exp(-0.5*m*ohm*x**2)*Hn(x*sqrt(m*ohm))
Let's compare the first function to our solution.
plot(x,ho_evec(x,0,1,1),label="Analytic")
plot(x,-U[:,0]/sqrt(h),label="Numeric")
xlabel('x (bohr)')
ylabel(r'$\psi(x)$')
title("Comparison of numeric and analytic solutions to the Harmonic Oscillator")
legend()
<matplotlib.legend.Legend at 0x10da6b950>
The agreement is almost exact.
We can use the subplot command to put multiple comparisons in different panes on a single plot:
phase_correction = [-1,1,1,-1,-1,1]
for i in range(6):
    subplot(2,3,i+1)
    plot(x,ho_evec(x,i,1,1),label="Analytic")
    plot(x,phase_correction[i]*U[:,i]/sqrt(h),label="Numeric")
Other than phase errors (which I've corrected with a little hack: can you find it?), the agreement is pretty good, although it gets worse the higher in energy we get, in part because we used only 50 points.
The Scipy module has many more special functions:
from scipy.special import airy,jn,eval_chebyt,eval_legendre

subplot(2,2,1)
x = linspace(-1,1)
Ai,Aip,Bi,Bip = airy(x)
plot(x,Ai)
plot(x,Aip)
plot(x,Bi)
plot(x,Bip)
title("Airy functions")

subplot(2,2,2)
x = linspace(0,10)
for i in range(4):
    plot(x,jn(i,x))
title("Bessel functions")

subplot(2,2,3)
x = linspace(-1,1)
for i in range(6):
    plot(x,eval_chebyt(i,x))
title("Chebyshev polynomials of the first kind")

subplot(2,2,4)
x = linspace(-1,1)
for i in range(6):
    plot(x,eval_legendre(i,x))
title("Legendre polynomials")
<matplotlib.text.Text at 0x10e809c50>
As well as Jacobi, Laguerre, Hermite polynomials, Hypergeometric functions, and many others. There's a full listing at the Scipy Special Functions Page.
raw_data = """\
3.1905781584582433,0.028208609537968457
4.346895074946466,0.007160804747670053
5.374732334047101,0.0046962988461934805
8.201284796573875,0.0004614473299618756
10.899357601713055,0.00005038370219939726
16.295503211991434,4.377451812785309e-7
21.82012847965739,3.0799922117601088e-9
32.48394004282656,1.524776208284536e-13
43.53319057815846,5.5012073588707224e-18"""
There's a section below on parsing CSV data. We'll steal the parser from that. For an explanation, skip ahead to that section. Otherwise, just assume that this is a way to parse that text into a numpy array that we can plot and do other analyses with.
data = []
for line in raw_data.splitlines():
    words = line.split(',')
    data.append(map(float,words))
data = array(data)
title("Raw Data")
xlabel("Distance")
plot(data[:,0],data[:,1],'bo')
[<matplotlib.lines.Line2D at 0x10e70dc90>]
Since we expect the data to have an exponential decay, we can plot it using a semi-log plot.
title("Raw Data")
xlabel("Distance")
semilogy(data[:,0],data[:,1],'bo')
[<matplotlib.lines.Line2D at 0x10dfc6fd0>]
For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function $$ y = Ae^{-ax} $$ $$ \log(y) = \log(A) - ax$$ Thus, if we fit the log of the data versus x, we should get a straight line with slope $a$, and an intercept that gives the constant $A$.
There's a numpy function called polyfit that will fit data to a polynomial form. We'll use this to fit to a straight line (a polynomial of order 1):
params = polyfit(data[:,0],log(data[:,1]),1)
a = params[0]
A = exp(params[1])
Let's see whether this curve fits the data.
x = linspace(1,45)
title("Raw Data")
xlabel("Distance")
semilogy(data[:,0],data[:,1],'bo')
semilogy(x,A*exp(a*x),'b-')
[<matplotlib.lines.Line2D at 0x10dcdffd0>]
If we have more complicated functions, we may not be able to get away with fitting to a simple polynomial. Consider the following data:
gauss_data = """\
-0.9902286902286903,1.4065274110372852e-19
-0.7566104566104566,2.2504438576596563e-18
-0.5117810117810118,1.9459459459459454
-0.31887271887271884,10.621621621621626
-0.250997150997151,15.891891891891893
-0.1463309463309464,23.756756756756754
-0.07267267267267263,28.135135135135133
-0.04426734426734419,29.02702702702703
-0.0015939015939017698,29.675675675675677
0.04689304689304685,29.10810810810811
0.0840994840994842,27.324324324324326
0.1700546700546699,22.216216216216214
0.370878570878571,7.540540540540545
0.5338338338338338,1.621621621621618
0.722014322014322,0.08108108108108068
0.9926849926849926,-0.08108108108108646"""

data = []
for line in gauss_data.splitlines():
    words = line.split(',')
    data.append(map(float,words))
data = array(data)

plot(data[:,0],data[:,1],'bo')
[<matplotlib.lines.Line2D at 0x10ddecc90>]
This data looks more Gaussian than exponential. If we wanted to, we could use polyfit for this as well, but let's use the curve_fit function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit).
First define a general Gaussian function to fit to.
def gauss(x,A,a):
    return A*exp(a*x**2)
Now fit to it using curve_fit:
from scipy.optimize import curve_fit

params,conv = curve_fit(gauss,data[:,0],data[:,1])
x = linspace(-1,1)
plot(data[:,0],data[:,1],'bo')
A,a = params
plot(x,gauss(x,A,a),'b-')
[<matplotlib.lines.Line2D at 0x10f86a110>]
The curve_fit routine we just used is built on top of a very good general minimization capability in Scipy. You can learn more at the scipy documentation pages.
Many methods in scientific computing rely on Monte Carlo integration, where a sequence of (pseudo) random numbers are used to approximate the integral of a function. Python has good random number generators in the standard library. The random() function gives pseudorandom numbers uniformly distributed between 0 and 1:
from random import random

rands = []
for i in range(100):
    rands.append(random())
plot(rands)
[<matplotlib.lines.Line2D at 0x10f9bf210>]
random() uses the Mersenne Twister algorithm, which is a highly regarded pseudorandom number generator. There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution:
from random import gauss

grands = []
for i in range(100):
    grands.append(gauss(0,1))
plot(grands)
[<matplotlib.lines.Line2D at 0x10fb08910>]
It is generally more efficient to generate a list of random numbers all at once, particularly if you're drawing from a non-uniform distribution. Numpy has functions to generate vectors and matrices of particular types of random distributions.
plot(rand(100))
[<matplotlib.lines.Line2D at 0x10fc69a50>]
One of the first programs I ever wrote was a program to compute $\pi$ by taking random numbers as x and y coordinates, and counting how many of them were in the unit circle. For example:
npts = 5000
xs = 2*rand(npts)-1
ys = 2*rand(npts)-1
r = xs**2+ys**2
ninside = (r<1).sum()
figsize(6,6) # make the figure square
title("Approximation to pi = %f" % (4*ninside/float(npts)))
plot(xs[r<1],ys[r<1],'b.')
plot(xs[r>1],ys[r>1],'r.')
figsize(8,6) # change the figsize back to 8x6 for the rest of the notebook
The idea behind the program is that the ratio of the area of the unit circle to the square that inscribes it is $\pi/4$, so by counting the fraction of the random points in the square that are inside the circle, we get increasingly good estimates to $\pi$.
The above code uses some higher level Numpy tricks to compute the radius of each point in a single line, to count how many radii are below one in a single line, and to filter the x,y points based on their radii. To be honest, I rarely write code like this: I find some of these Numpy tricks a little too cute to remember them, and I'm more likely to use a list comprehension (see below) to filter the points I want, since I can remember that.
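If the Numpy tricks feel too clever, the same estimate can be written in plain Python with only the standard library. This is a hedged sketch (the seed and point count are arbitrary choices of mine), sampling the quarter circle inside the unit square:

```python
import random

random.seed(1)          # arbitrary seed, just to make the run reproducible
npts = 200000
inside = 0
for _ in range(npts):
    x, y = random.random(), random.random()   # a point in the unit square [0,1) x [0,1)
    if x*x + y*y < 1.0:                       # inside the quarter circle of radius 1
        inside += 1
pi_est = 4.0 * inside / npts
print(pi_est)
```

The quarter circle has area $\pi/4$, so multiplying the inside fraction by 4 gives the same estimator as the vectorized version above.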
As methods of computing $\pi$ go, this is among the worst. A much better method is to use Leibniz's expansion of arctan(1):$$\frac{\pi}{4} = \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}$$
n = 100
total = 0
for k in range(n):
    total += pow(-1,k)/(2*k+1.0)
print 4*total
3.13159290356
If you're interested in a great method, check out Ramanujan's method. It converges so fast that you really need arbitrary precision math to display enough decimal places. You can do this with the Python decimal module, if you're interested.
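As a hedged sketch of how that can look, here is Ramanujan's 1914 series implemented with the decimal module; the function name and the choice of three terms are mine, not part of any library:

```python
from decimal import Decimal, getcontext
from math import factorial

def ramanujan_pi(terms=3, digits=25):
    # Each term of Ramanujan's series adds roughly 8 correct digits.
    getcontext().prec = digits + 10
    total = Decimal(0)
    for k in range(terms):
        num = Decimal(factorial(4 * k)) * (1103 + 26390 * k)
        den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        total += num / den
    prefactor = (2 * Decimal(2).sqrt()) / 9801
    return 1 / (prefactor * total)

print(str(ramanujan_pi())[:17])
```

With only three terms this already agrees with $\pi$ to more than 20 decimal places.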
from numpy import exp

def f(x):
    return exp(-x)

x = linspace(0,10)
plot(x,exp(-x))
[<matplotlib.lines.Line2D at 0x10ff00490>]
Scipy has a numerical integration routine quad (since sometimes numerical integration is called quadrature), that we can use for this:
from scipy.integrate import quad

quad(f,0,inf)
(1.0000000000000002, 5.842606742906004e-11)
Very often we want to use FFT techniques to help obtain the signal from noisy data. Scipy has several different options for this.
from scipy.fftpack import fft,fftfreq

npts = 4000
nplot = npts/10
t = linspace(0,120,npts)

def acc(t):
    return 10*sin(2*pi*2.0*t) + 5*sin(2*pi*8.0*t) + 2*rand(npts)

signal = acc(t)

FFT = abs(fft(signal))
freqs = fftfreq(npts, t[1]-t[0])

subplot(211)
plot(t[:nplot], signal[:nplot])
subplot(212)
plot(freqs,20*log10(FFT),',')
show()
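Here is a hedged, self-contained variant using numpy's real-input FFT; the two-tone signal and its frequencies are invented for illustration, but it shows how the dominant frequency can be read straight off the spectrum:

```python
import numpy as np

# Build a two-tone signal and recover its dominant frequency.
t = np.linspace(0, 4, 1024, endpoint=False)            # 4 seconds, 256 Hz sampling
sig = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)   # 5 Hz strong, 12 Hz weaker
F = np.fft.rfft(sig)                                   # real input: positive freqs only
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
peak = freqs[np.argmax(np.abs(F))]
print(peak)   # 5.0
```

Because the record is 4 seconds long, the frequency resolution is 0.25 Hz, and both tones fall exactly on FFT bins.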
As more and more of our day-to-day work is done on and through computers, we increasingly have output that one program writes, often to a text file, that we need to analyze in one way or another, and potentially feed into another program.
Suppose we have the following output:
myoutput = """\ @"""
This output actually came from a geometry optimization of a Silicon cluster using the NWChem quantum chemistry suite. At every step the program computes the energy of the molecular geometry, and then changes the geometry to minimize the computed forces, until the energy converges. I obtained this output via the unix command
% grep @ nwchem.out
since NWChem is nice enough to precede the lines that you need to monitor job progress with the '@' symbol.
We could do the entire analysis in Python; I'll show how to do this later on, but first let's focus on turning this output into a usable Python object that we can plot.
First, note that the data is entered into a multi-line string. When Python sees three quote marks """ or ''' it treats everything following as part of a single string, including newlines, tabs, and anything else, until it sees the same three quote marks (""" has to be followed by another """, and ''' has to be followed by another ''') again. This is a convenient way to quickly dump data into Python, and it also reinforces the important idea that you don't have to open a file and deal with it one line at a time. You can read everything in, and deal with it as one big chunk.
The first thing we'll do, though, is to split the big string into a list of strings, since each line corresponds to a separate piece of data. We will use the splitlines() function on the big myoutput string to break it into a new element every time it sees a newline (\n) character:
lines = myoutput.splitlines()
lines
['@']
Splitting is a big concept in text processing. We used splitlines() here, and we will use the more general split() function below to split each line into whitespace-delimited words.
We now want to do three things:

- Get rid of the lines that don't carry any information
- Break apart each line that does carry information and grab the pieces we want
- Turn the resulting data into something that we can plot
For this data, we really only want the Energy column, the Gmax column (which contains the maximum gradient at each step), and perhaps the Walltime column.
Since the data is now in a list of lines, we can iterate over it:
for line in lines[2:]:
    # do something with each line
    words = line.split()
Let's examine what we just did: first, we used a for loop to iterate over each line. However, we skipped the first two (the lines[2:] only takes the lines starting from index 2), since lines[0] contained the title information, and lines[1] contained underscores.
We then split each line into chunks (which we're calling "words", even though in most cases they're numbers) using the string split() command. Here's what split does:
import string
help(string.split)
Help on function split in module string:

split(s, sep=None, maxsplit=-1)
    split(s [,sep [,maxsplit]]) -> list of strings

    Return a list of the words in the string s, using sep as the
    delimiter string.  If maxsplit is given, splits at no more than
    maxsplit places (resulting in at most maxsplit+1 words).  If sep
    is not specified or is None, any whitespace string is a separator.

    (split and splitfields are synonymous)
Here we're implicitly passing in the first argument (s, in the docstring) by calling the method .split() on a string object. In this instance, we're not passing in a sep character, which means the function splits on whitespace. Let's see what that does to one of our lines:
lines[2].split()
['@', '0', '-6095.12544083', '0.0D+00', '0.03686', '0.00936', '0.00000', '0.00000', '1391.5']
This is almost exactly what we want. We just have to now pick the fields we want:
for line in lines[2:]:
    # do something with each line
    words = line.split()
    energy = words[2]
    gmax = words[4]
    time = words[8]
    print energy,gmax,time
This is fine for printing things out, but if we want to do something with the data, either make a calculation with it or pass it into a plotting function, we need to convert the strings into regular floating point numbers. We can use the float() command for this. We also need to save the results in some form. I'll do this as follows:
data = []
for line in lines[2:]:
    # do something with each line
    words = line.split()
    energy = float(words[2])
    gmax = float(words[4])
    time = float(words[8])
    data.append((energy,gmax,time))
data = array(data)
We now have our data in a numpy array, so we can choose columns to print:
plot(data[:,0])
xlabel('step')
ylabel('Energy (hartrees)')
title('Convergence of NWChem geometry optimization for Si cluster')
<matplotlib.text.Text at 0x110335650>
I would write the code a little more succinctly if I were doing this for myself, but this is essentially a snippet I use repeatedly.
Suppose our data were in CSV (comma separated values) format, a format popularized by spreadsheet programs like Microsoft Excel and increasingly used as a data interchange format in big data applications. How would we parse that?
csv = """\ """
We can do much the same as before:
data = []
for line in csv.splitlines():
    words = line.split(',')
    data.append(map(float,words))
data = array(data)
There are two significant changes over what we did earlier. First, I'm passing the comma character ',' into the split function, so that it starts a new word every time it sees a comma. Second, to simplify things a bit, I'm using the map() command to apply a single function (float()) to every item of a list, and to return the output as a list.
help(map)
Help on built-in function map in module __builtin__:

map(...)
    map(function, sequence[, sequence, ...]) -> list

    Return a list of the results of applying the function to the items of
    the argument sequence(s).  If more than one sequence is given, the
    function is called with an argument list consisting of the corresponding
    item of each sequence, substituting None for missing values when not all
    sequences have the same length.  If the function is None, return a list of
    the items of the sequence (or a list of tuples if more than one sequence).
Despite the differences, the resulting plot should be the same:
plot(data[:,0])
xlabel('step')
ylabel('Energy (hartrees)')
title('Convergence of NWChem geometry optimization for Si cluster')
<matplotlib.text.Text at 0x10fcc7d50>
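As an aside: for CSV files with quoted fields or embedded commas, the standard-library csv module is more robust than splitting on commas by hand. A minimal sketch, with made-up data:

```python
import csv
import io

text = "1.0,2.0,3.0\n4.0,5.0,6.0\n"          # hypothetical CSV data
reader = csv.reader(io.StringIO(text))        # treat the string as a file
data = [[float(field) for field in row] for row in reader]
print(data)   # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
```

For a real file you would pass an open file object to csv.reader instead of the io.StringIO wrapper.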
Hartrees (what most quantum chemistry programs use by default) are really stupid units. We really want this in kcal/mol or eV or something we use. So let's quickly replot this in terms of eV above the minimum energy, which will give us a much more useful plot:
energies = data[:,0]
minE = min(energies)
energies_eV = 27.211*(energies-minE)
plot(energies_eV)
xlabel('step')
ylabel('Energy (eV)')
title('Convergence of NWChem geometry optimization for Si cluster')
<matplotlib.text.Text at 0x110222350>
This gives us the output in a form that we can think about: 4 eV is a fairly substantial energy change (chemical bonds are roughly this magnitude of energy), and most of the energy decrease was obtained in the first geometry iteration.
We mentioned earlier that we don't have to rely on grep to pull out the relevant lines for us. The string module has a lot of useful functions we can use for this. Among them is the startswith function. For example:
lines = """\ ---------------------------------------- | WALL | 0.45 | 443.61 | ---------------------------------------- @ Step Energy Delta E Gmax Grms Xrms Xmax Walltime @ ---- ---------------- -------- -------- -------- -------- -------- -------- @ 0 -6095.12544083 0.0D+00 0.03686 0.00936 0.00000 0.00000 1391.5 ok ok Z-matrix (autoz) -------- """.splitlines() for line in lines: if line.startswith('@'): print line
@ Step       Energy      Delta E   Gmax     Grms     Xrms     Xmax   Walltime
@ ---- ---------------- -------- -------- -------- -------- -------- --------
@    0   -6095.12544083  0.0D+00  0.03686  0.00936  0.00000  0.00000   1391.5
and we've successfully grabbed all of the lines that begin with the @ symbol.
The real value in a language like Python is that it makes it easy to take additional steps to analyze data in this fashion, which means you are thinking more about your data, and are more likely to see important patterns.
Strings are a big deal in most modern languages, and hopefully the previous sections helped underscore how versatile Python's string processing techniques are. We will continue this topic in this chapter.
We can print out lines in Python using the print command.
print "I have 3 errands to run"
I have 3 errands to run
In IPython we don't even need the print command, since it will display the last expression not assigned to a variable.
"I have 3 errands to run"
'I have 3 errands to run'
print even converts some arguments to strings for us:
a,b,c = 1,2,3
print "The variables are ",1,2,3
The variables are 1 2 3
As versatile as this is, you typically need more control over how your data is printed. For example, what if we want to print a bunch of numbers to exactly 4 decimal places? We can do this using formatted strings.
Formatted strings share a syntax with the C printf statement. We make a string that has some funny format characters in it, and then pass a bunch of variables into the string that fill out those characters in different ways.
For example,
print "Pi as a decimal = %d" % pi print "Pi as a float = %f" % pi print "Pi with 4 decimal places = %.4f" % pi print "Pi with overall fixed length of 10 spaces, with 6 decimal places = %10.6f" % pi print "Pi as in exponential format = %e" % pi
Pi as a decimal = 3
Pi as a float = 3.141593
Pi with 4 decimal places = 3.1416
Pi with overall fixed length of 10 spaces, with 6 decimal places =   3.141593
Pi as in exponential format = 3.141593e+00
We use the percent sign in two different ways here. First, the format specifier itself starts with a percent sign: %d or %i are for integers, %f is for floats, and %e is for numbers in exponential format. All of these can take a number immediately after the percent sign to specify the total width used to print the value. Formats with a decimal part can take an additional number after a dot . to specify the number of decimal places to print.
The other use of the percent sign is after the string, to pipe a set of variables in. You can pass in multiple variables (if your formatting string supports it) by putting a tuple after the percent. Thus,
print "The variables specified earlier are %d, %d, and %d" % (a,b,c)
The variables specified earlier are 1, 2, and 3
This is a simple formatting structure that will satisfy most of your string formatting needs. More information on different format symbols is available in the string formatting part of the standard docs.
It's worth noting that more complicated string formatting methods are in development, but I prefer this system due to its simplicity and its similarity to C formatting strings.
Recall we discussed multiline strings. We can put format characters in these as well, and fill them with the percent sign as before.
form_letter = """\ %s Dear %s, We regret to inform you that your product did not ship today due to %s. We hope to remedy this as soon as possible. From, Your Supplier """ print form_letter % ("July 1, 2013","Valued Customer Bob","alien attack")
July 1, 2013

Dear Valued Customer Bob,

We regret to inform you that your product did not ship
today due to alien attack.

We hope to remedy this as soon as possible.

From,
Your Supplier
The problem with a long block of text like this is that it's often hard to keep track of what all of the variables are supposed to stand for. There's an alternate format where you can pass a dictionary into the formatted string, and give a little bit more information to the formatted string itself. This method looks like:
form_letter = """\ %(date)s Dear %(customer)s, We regret to inform you that your product did not ship today due to %(lame_excuse)s. We hope to remedy this as soon as possible. From, Your Supplier """ print form_letter % {"date" : "July 1, 2013","customer":"Valued Customer Bob","lame_excuse":"alien attack"}
July 1, 2013

Dear Valued Customer Bob,

We regret to inform you that your product did not ship
today due to alien attack.

We hope to remedy this as soon as possible.

From,
Your Supplier
By providing a little bit more information, you're less likely to make mistakes, like referring to your customer as "alien attack".
As a scientist, you're less likely to be sending bulk mailings to a bunch of customers. But these are great methods for generating and submitting lots of similar runs, say scanning a bunch of different structures to find the optimal configuration for something.
For example, you can use the following template for NWChem input files:
nwchem_format = """ start %(jobname)s title "%(thetitle)s" charge %(charge)d geometry units angstroms print xyz autosym %(geometry)s end basis * library 6-31G** end dft xc %(dft_functional)s mult %(multiplicity)d end task dft %(jobtype)s """
If you want to submit a sequence of runs to a computer somewhere, it's pretty easy to put together a little script, maybe even with some more string formatting in it:
oxygen_xy_coords = [(0,0),(0,0.1),(0.1,0),(0.1,0.1)]
charge = 0
multiplicity = 1
dft_functional = "b3lyp"
jobtype = "optimize"

geometry_template = """\
O %f %f 0.0
H  0.0 1.0 0.0
H  1.0 0.0 0.0"""

for i,xy in enumerate(oxygen_xy_coords):
    thetitle = "Water run #%d" % i
    jobname = "h2o-%d" % i
    geometry = geometry_template % xy
    print "---------"
    print nwchem_format % dict(thetitle=thetitle,charge=charge,jobname=jobname,jobtype=jobtype,
                               geometry=geometry,dft_functional=dft_functional,multiplicity=multiplicity)
---------

start h2o-0

title "Water run #0"
charge 0

geometry units angstroms print xyz autosym
O 0.000000 0.000000 0.0
H  0.0 1.0 0.0
H  1.0 0.0 0.0
end

basis
  * library 6-31G**
end

dft
  xc b3lyp
  mult 1
end

task dft optimize

---------

start h2o-1

title "Water run #1"
charge 0

geometry units angstroms print xyz autosym
O 0.000000 0.100000 0.0
H  0.0 1.0 0.0
H  1.0 0.0 0.0
end

basis
  * library 6-31G**
end

dft
  xc b3lyp
  mult 1
end

task dft optimize

---------

start h2o-2

title "Water run #2"
charge 0

geometry units angstroms print xyz autosym
O 0.100000 0.000000 0.0
H  0.0 1.0 0.0
H  1.0 0.0 0.0
end

basis
  * library 6-31G**
end

dft
  xc b3lyp
  mult 1
end

task dft optimize

---------

start h2o-3

title "Water run #3"
charge 0

geometry units angstroms print xyz autosym
O 0.100000 0.100000 0.0
H  0.0 1.0 0.0
H  1.0 0.0 0.0
end

basis
  * library 6-31G**
end

dft
  xc b3lyp
  mult 1
end

task dft optimize
This is a very bad geometry for a water molecule, and it would be silly to run so many geometry optimizations of structures that are guaranteed to converge to the same single geometry, but you get the idea of how you can run vast numbers of simulations with a technique like this.
We used the enumerate function to loop over both the indices and the items of a sequence, which is valuable when you want a clean way of getting both. enumerate is roughly equivalent to:
def my_enumerate(seq):
    l = []
    for i in range(len(seq)):
        l.append((i,seq[i]))
    return l

my_enumerate(oxygen_xy_coords)
[(0, (0, 0)), (1, (0, 0.1)), (2, (0.1, 0)), (3, (0.1, 0.1))]
Although enumerate uses generators (see below), so it doesn't have to create a big list; this makes it faster for really long sequences.
linspace can also take three arguments: the starting point, the ending point, and the number of points:
linspace(0,1,5)
array([ 0. , 0.25, 0.5 , 0.75, 1. ])
You can also pass in keywords to exclude the endpoint:
linspace(0,1,5,endpoint=False)
array([ 0. , 0.2, 0.4, 0.6, 0.8])
Right now, we only know how to specify functions that have a fixed number of arguments. We'll learn how to do the more general cases here.
If we're defining a simple version of linspace, we would start with:
def my_linspace(start,end):
    npoints = 50
    v = []
    d = (end-start)/float(npoints-1)
    for i in range(npoints):
        v.append(start + i*d)
    return v
We can add an optional argument by specifying a default value in the argument list:
def my_linspace(start,end,npoints = 50):
    v = []
    d = (end-start)/float(npoints-1)
    for i in range(npoints):
        v.append(start + i*d)
    return v
This gives exactly the same result if we don't specify anything:
But it also lets us override the default value with a third argument:
my_linspace(0,1,5)
[0.0, 0.25, 0.5, 0.75, 1.0]
We can add arbitrary keyword arguments to the function definition by putting a keyword argument **kwargs handle in:
def my_linspace(start,end,npoints=50,**kwargs):
    endpoint = kwargs.get('endpoint',True)
    v = []
    if endpoint:
        d = (end-start)/float(npoints-1)
    else:
        d = (end-start)/float(npoints)
    for i in range(npoints):
        v.append(start + i*d)
    return v

my_linspace(0,1,5,endpoint=False)
[0.0, 0.2, 0.4, 0.6000000000000001, 0.8]
What the keyword argument construction does is to take any additional keyword arguments (i.e. arguments specified by name, like "endpoint=False"), and stick them into a dictionary called "kwargs" (you can call it anything you like, but it has to be preceded by two stars). You can then grab items out of the dictionary using the get command, which also lets you specify a default value. I realize it takes a little getting used to, but it is a common construction in Python code, and you should be able to recognize it.
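A tiny sketch (the names here are my own) makes the mechanics visible: whatever keywords the caller passes show up inside the function as an ordinary dictionary:

```python
def describe(**kwargs):
    # kwargs is a plain dict of whatever keyword arguments were passed in
    return sorted(kwargs.items())

print(describe(npoints=5, endpoint=False))   # [('endpoint', False), ('npoints', 5)]
```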
There's an analogous *args that dumps any additional arguments into a list called "args". Think about the range function: it can take one (the endpoint), two (starting and ending points), or three (starting, ending, and step) arguments. How would we define this?
def my_range(*args):
    start = 0
    step = 1
    if len(args) == 1:
        end = args[0]
    elif len(args) == 2:
        start,end = args
    elif len(args) == 3:
        start,end,step = args
    else:
        raise Exception("Unable to parse arguments")
    v = []
    value = start
    while True:
        v.append(value)
        value += step
        if value > end:
            break
    return v
Note that we have used a few new things here: a break statement, which lets us exit a loop when some condition is met, and a raise statement, which causes the interpreter to exit with an error message. For example:
my_range()
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-170-0e8004dab150> in <module>()
----> 1 my_range()

<ipython-input-169-c34e09da2551> in my_range(*args)
      9         start,end,step = args
     10     else:
---> 11         raise Exception("Unable to parse arguments")
     12     v = []
     13     value = start

Exception: Unable to parse arguments
Python has a special syntax, the list comprehension, for building a list in a single line. For example, the first 10 even numbers:

evens1 = [2*i for i in range(10)]
print evens1
You can also put some boolean testing into the construct:
odds = [i for i in range(20) if i%2==1]
odds
Here i%2 is the remainder when i is divided by 2, so that i%2==1 is true if the number is odd. Even though this is a relatively new addition to the language, it is now fairly common since it's so convenient.
Iterators are a way of making virtual sequence objects. Consider what happens if we have the nested loop structure:
for i in range(1000000):
    for j in range(1000000):
Inside the main loop, we make a list of 1,000,000 integers, just to loop over them one at a time. We don't need any of the additional things that a list gives us, like slicing or random access; we just need to go through the numbers one at a time. And we're making 1,000,000 of them.
Iterators are a way around this. For example, the xrange function is the iterator version of range. This simply makes a counter that is looped through in sequence, so that the analogous loop structure would look like:
for i in xrange(1000000):
    for j in xrange(1000000):
Even though we've only added two characters, we've dramatically sped up the code, because we're not making 1,000,000 big lists.
We can define our own iterators using the yield statement:
def evens_below(n):
    for i in xrange(n):
        if i%2 == 0:
            yield i
    return

for i in evens_below(9):
    print i
0
2
4
6
8
We can always turn an iterator into a list using the list command:
list(evens_below(9))
[0, 2, 4, 6, 8]
There's a special syntax called a generator expression that looks a lot like a list comprehension:
evens_gen = (i for i in xrange(9) if i%2==0)
for i in evens_gen:
    print i
0
2
4
6
8
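Generator expressions are handy because functions like sum can consume them directly, without ever building an intermediate list; a small sketch:

```python
# sum of the squares of the even numbers below 10: 0 + 4 + 16 + 36 + 64
total = sum(i*i for i in range(10) if i % 2 == 0)
print(total)   # 120
```

When a generator expression is the sole argument to a function, the extra parentheses can be dropped, as above.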
A factory function is a function that returns a function. They have the fancy name lexical closure, which makes you sound really intelligent in front of your CS friends. But, despite the arcane names, factory functions can play a very practical role.
Suppose you want the Gaussian function centered at 0.5, with height 99 and width 1.0. You could write a general function.
def gauss(x,A,a,x0):
    return A*exp(-a*(x-x0)**2)
But what if you need a function with only one argument, like f(x) rather than f(x,y,z,...)? You can do this with Factory Functions:
def gauss_maker(A,a,x0):
    def f(x):
        return A*exp(-a*(x-x0)**2)
    return f
x = linspace(0,1)
g = gauss_maker(99.0,1.0,0.5)
plot(x,g(x))
[<matplotlib.lines.Line2D at 0x110726990>]
Everything in Python is an object, including functions. This means that functions can be returned by other functions. (They can also be passed into other functions, which is also useful, but a topic for another discussion.) In the gauss_maker example, the g function that is output "remembers" the A, a, x0 values it was constructed with, since they're all stored in the local memory space (this is what the lexical closure really refers to) of that function.
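A more minimal sketch of the same idea (the names here are invented): each function returned by the factory carries its own remembered value of n:

```python
def make_adder(n):
    def add(x):
        return x + n        # n is remembered from the enclosing call
    return add

add5 = make_adder(5)
add10 = make_adder(10)
print(add5(3), add10(3))    # 8 13
```

The two closures are independent: each captured a different n at the time it was created.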
Factories are one of the more important of the Software Design Patterns, which are a set of guidelines to follow to make high-quality, portable, readable, stable software. It's beyond the scope of the current work to go more into either factories or design patterns, but I thought I would mention them for people interested in software design.
Serialization refers to the process of outputting data (and occasionally functions) to a database or a regular file, for the purpose of using it later on. In the very early days of programming languages, this was normally done in regular text files. Python is excellent at text processing, and you probably already know enough to get started with this.
When accessing large amounts of data became important, people developed database software based around the Structured Query Language (SQL) standard. I'm not going to cover SQL here, but, if you're interested, I recommend using the sqlite3 module in the Python standard library.
As data interchange became important, the eXtensible Markup Language (XML) has emerged. XML makes data formats that are easy to write parsers for, greatly simplifying the ambiguity that sometimes arises in the process. Again, I'm not going to cover XML here, but if you're interested in learning more, look into Element Trees, now part of the Python standard library.
Python has a very general serialization format called pickle that can turn any Python object, even a function or a class, into a representation that can be written to a file and read in later. But, again, I'm not going to talk about this, since I rarely use it myself. Again, the standard library documentation for pickle is the place to go.
What I am going to talk about is a relatively recent format called JavaScript Object Notation (JSON) that has become very popular over the past few years. There's a module in the standard library for encoding and decoding JSON formats. The reason I like JSON so much is that it looks almost like Python, so that, unlike the other options, you can look at your data and edit it, use it in another program, etc.
Here's a little example:
# Data in a json format:
json_data = """\
{
    "a": [1,2,3],
    "b": [4,5,6],
    "greeting" : "Hello"
}"""

import json
json.loads(json_data)
{u'a': [1, 2, 3], u'b': [4, 5, 6], u'greeting': u'Hello'}
Ignore the little u's before the strings; these just mean the strings are Unicode. Your data sits in something that looks like a Python dictionary, and in a single line of code you can load it into a Python dictionary for use later.
In the same way, you can, with a single line of code, put a bunch of variables into a dictionary, and then output to a file using json:
json.dumps({"a":[1,2,3],"b":[9,10,11],"greeting":"Hola"})
'{"a": [1, 2, 3], "b": [9, 10, 11], "greeting": "Hola"}'
Functional programming is a very broad subject. The idea is to have a series of functions, each of which generates a new data structure from an input, without changing the input structure at all. By not modifying the input structure (something that is called not having side effects), many guarantees can be made about how independent the processes are, which can help parallelization and guarantees of program accuracy. There is a Python Functional Programming HOWTO in the standard docs that goes into more details on functional programming. I just wanted to touch on a few of the most important ideas here.
There is an operator module that has function versions of most of the Python operators. For example:
from operator import add, mul
add(1,2)
3
mul(3,4)
12
These are useful building blocks for functional programming.
The lambda operator allows us to build anonymous functions, which are simply functions that aren't defined by a normal def statement with a name. For example, a function that doubles the input is:
def doubler(x):
    return 2*x

doubler(17)
34
We could also write this as:
lambda x: 2*x
<function __main__.<lambda>>
And assign it to a variable separately:
another_doubler = lambda x: 2*x
another_doubler(19)
38
lambda is particularly convenient (as we'll see below) in passing simple functions as arguments to other functions.
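For example, sorted accepts a key function, and a lambda saves us from defining a throwaway named function just to sort by word length:

```python
words = ["Washington", "pie", "book", "banana"]
by_length = sorted(words, key=lambda w: len(w))   # sort by word length
print(by_length)   # ['pie', 'book', 'banana', 'Washington']
```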
map is a way to repeatedly apply a function to a list:
map(float,'1 2 3 4 5'.split())
[1.0, 2.0, 3.0, 4.0, 5.0]
reduce is a way to repeatedly apply a function to the first two items of the list. There already is a sum function in Python that is a reduction:
sum([1,2,3,4,5])
15
We can use reduce to define an analogous prod function:
def prod(l):
    return reduce(mul,l)

prod([1,2,3,4,5])
120
We've seen a lot of examples of objects in Python. We create a string object with quote marks:
mystring = "Hi there"
and we have a bunch of methods we can use on the object:
mystring.split()
['Hi', 'there']
mystring.startswith('Hi')
True
len(mystring)
8
Object oriented programming simply gives you the tools to define objects and methods for yourself. It's useful anytime you want to keep some data (like the characters in the string) tightly coupled to the functions that act on the data (length, split, startswith, etc.).
As an example, we're going to bundle the functions we wrote to make the 1d harmonic oscillator eigenfunctions so that they work with arbitrary potentials: we can pass in a function defining that potential, plus some additional specifications, and get back an object that can plot the orbitals, as well as do other things with them, if desired.
class Schrod1d: """\ Schrod1d: Solver for the one-dimensional Schrodinger equation. """ def __init__(self,V,start=0,end=1,npts=50,**kwargs): m = kwargs.get('m',1.0) self.x = linspace(start,end,npts) self.Vx = V(self.x) self.H = (-0.5/m)*self.laplacian() + diag(self.Vx) return def plot(self,*args,**kwargs): titlestring = kwargs.get('titlestring',"Eigenfunctions of the 1d Potential") xstring = kwargs.get('xstring',"Displacement (bohr)") ystring = kwargs.get('ystring',"Energy (hartree)") if not args: args = [3] x = self.x E,U = eigh(self.H) h = x[1]-x[0] # Plot the Potential plot(x,self.Vx,color='k') for i in range(*args): #(titlestring) xlabel(xstring) ylabel(ystring) return def laplacian(self): x = self.x h = x[1]-x[0] # assume uniformly spaced points n = len(x) M = -2*identity(n,'d') for i in range(1,n): M[i,i-1] = M[i-1,i] = 1 return M/h**2
The __init__() function specifies what happens when the object is created. The self argument is the object itself, and we don't pass it in explicitly. The only required argument is the function that defines the QM potential. We can also specify additional arguments that define the numerical grid we're going to use for the calculation.
For example, to do an infinite square well potential, we have a function that is 0 everywhere. We don't have to specify the barriers, since we'll only define the potential in the well, which means that it can't be defined anywhere else.
square_well = Schrod1d(lambda x: 0*x,m=10)
square_well.plot(4,titlestring="Square Well Potential")
We can similarly redefine the Harmonic Oscillator potential.
ho = Schrod1d(lambda x: x**2,start=-3,end=3)
ho.plot(6,titlestring="Harmonic Oscillator")
[ This is the text of a talk given on Tue Jan 25 2000, as the keynote for the Zope track of the 8th International Python Conference. ]
I was reading an online discussion somewhere the other day, and somebody asked: "What's the best platform for building collaborative Web-based software?"
Somebody else answered: "There are really only two choices, Domino and Zope."
Five years ago, this would have been unthinkable. There would have been no way that a small team of script-language programmers could have put together an application-development platform that anybody would even think of comparing to Lotus Notes.
But the world has changed, and it's changed in precisely the ways that level the playing field for Zope. The key factors are: Internet standards; nearly-universal Web dialtone; object-oriented, network-aware scripting languages like Python; and most of all, a culture of open-source software development that thrives not only because the tools are freely exchanged, but also the knowledge of how and why to use them.
I think that what makes Zope so interesting to so many people -- me included -- is that it grew up in this environment. Zope didn't have to be retrofitted with Web support. It was built from the ground up to use -- and extend -- the Web.
Central to Zope's mission is its various kinds of support for XML, and that's the focus of my talk today. I'll admit that when Paul Everitt asked me to come and speak here, I was reluctant. After all, I'm a guy who's done almost all his Web programming in Perl and Java, and only recently begun to explore the world of Python and Zope. But after Paul and I talked for a while, I realized that we share the same vision for the future of software, and the future of the Web. It's a vision that looks beyond the parochial rivalries of our time: Windows vs Linux, Perl vs Python, Microsoft vs everybody else. When you focus on the big picture, I think there are just three things that matter:
Zope's architecture addresses all three of these points, and in each case, XML can play a crucial role.
Let's start by unpacking the notion of a network-services architecture, as it relates to the Web.
My favorite example of this is a thing I call the Web mindshare calculator. It's a script that ranks the sites listed in a Yahoo category according to the number of links in the AltaVista index that point to those sites. For example, in the Yahoo category that includes Zope there were 250 sites listed when I ran this script the other day. The top-ranked site was Sausage.com (that's the company that make the HotDog HTML editor). IBM alphaWorks was ninth. Zope came in pretty strongly at 30th. Vignette.com was 37th, and midgard-project.com, had it been included in that Yahoo category, would have ranked 200th.
To do this analysis, my script regards Yahoo as a service that offers a kind of namespace traversal API -- but really, it has to create that API itself, by recursing on nodes of the Yahoo directory as they're expressed in the HTML pages that Yahoo produces. There's no formal API that you can call directly to ask for the list of sites underneath some node of the directory.
What about the Netscape Open Directory project? It's true that you can download the whole thing as an XML file, but there's no API for retrieving just parts of the tree, or unrolling a subtree. Zope's support for XML-RPC, and its forthcoming support for SOAP (Microsoft's Simple Object Access Protocol), is the kind of thing that's going to blow this game wide open.
Since the birth of the Web, it's been the case that every Web application automatically exposes all its callable methods to HTTP clients. Zope isn't unique in this regard, though it is unusual in the richness of the set of such methods that it exposes in this way. Everything has a URL in Zope, right down to the individual elements of a parsed XML document.
I call this model the first-generation object Web. Scripts call URLs, which invoke actions, which return HTML pages, which can be processed by scripts, which can then call other URLs.
But Zope is helping to create what I call the second generation object Web. I got a taste of what this will be like when I built a Zope-based affiliate that received news feeds from Dave Winer's UserLand site. At the time, Dave was using XML-RPC to send these feeds to affiliates. Every hour, he gathered up his new stories into a Frontier data structure, turned that into an XML-RPC packet, and called a Zope method on my server, passing that data as an argument to the method. On my end, the data automatically appeared as a Python list-of-lists, which I unpacked and stuffed into a database. Distributed computing doesn't get any easier than this! The plumbing is literally invisible, and you can spend all your time doing the real work -- namely, putting the data to use.
There's no question in my mind that this general approach to distributed computing is going to be wildly popular. But you don't have to take my word for it. Ask Dun and Bradstreet. Last year, they.
So where's Zope in all this? Today, despite its very cool XML-RPC support, Zope is largely designed to emit HTML to browsers. It's clear where things are headed, though. In a great article for XML.com, Amos Latteier showed how a wxPython client can remotely manage a Zope server using XML-RPC. Admittedly, the vision's a bit ahead of the reality here, since you can't pass Zope objects over XML-RPC, and since Zope's management interface is still heavily browser-oriented.
For example, when I call manage_addFolder, what comes back isn't an XML packet containing a status, along with perhaps a reference to the new folder object that was added. What I get back instead, no matter whether I make that call using HTTP or XML-RPC, is the HTML text of the management page to which Zope redirects the browser that requests this action.
In his article, Amos acknowledges these limitations. He suggests that SOAP will enable richer communication of Zope objects across network channels -- in effect becoming a kind of RMI for Zope. But more interestingly, from my perspective, he mentions that Zope's API will be retooled to be less HTML-centric.
Personally, I'm fine with XML-RPC for the near future. It's dead simple, and that's just the way I like it. The things that SOAP might enable aren't the things I need to get my job done. In particular, I'm not too interested in recreating RMI for Zope, though I'm sure something like that would be useful to some people. The problem with RMI is that it presumes Java everywhere, and I don't think we're going to have Java everywhere, or Zope everywhere, or anything everywhere. What I hope we're going to continue to have is a diverse set of Internet platforms, each with special strengths, and all able to converse with one another using simple, easily-scriptable Internet protocols. What matters most to me is Amos' second point -- that Zope needs to grow an API that's fully XML-centric rather than HTML-centric.. Today, the programs that consume these XML interfaces are typically other back-end services. But just in the last few weeks, there have been two dramatic demonstrations of what can happen when the user's client becomes XML-aware. One is the new ZopeStudio product, which enables Mozilla to run a Zope management interface that's driven by XML rather than HTML. The other is Amos' wxPython-based Zope client, which is elegantly simple, surprisingly fast, and immediately useful.
The general notion of XML as an interface description language isn't rocket science, and there's huge leverage still to be gotten out of even something as simple as XML-RPC, never mind SOAP. We can all imagine how to build network services this way, and I'm sure there are people in this room who are already doing it. The challenge is to embed this mindset directly in a toolkit, so that everything you build with it is not only a trivial first-generation Web object, but also a more powerful second-generation Web object that delivers its services equally well to people and to other Web objects. XML-RPC and eventually SOAP will be key enablers, but what's also needed is a vision of how and why to integrate those things into a platform. I think Zope has the right set of ideas, and I'm eager to see where it takes them.
So we have a bunch of network services, isolated from humans and from software by some kind of XML layer. How are people going to use those services? What the first-generation Web taught us is that documents aren't just things we read, they're the interfaces through which we interact with network services. We still call these things "pages," but on the Web, a page isn't just a place where I read, it's a place where I maintain my calendar, or order books, or manage my investments, or exchange messages with other people.
What we used to call software applications, we now increasingly call Web pages. One reason for this, clearly, is the power of the Web's thin-client model of computing. Why should I install software, when I can just bookmark it?
But the thin-client benefit isn't the whole story. Web applications are just different than conventional GUI applications. On the Web, a document has no boundaries. It can connect to anything, and aggregate anything, that's available on the Web. It's now possible to surround the user interface of a software application -- the checkboxes and the pushbuttons that I use to order a book, or manage my investments -- with a wealth of contextual information. The book page links me to reviews of the book, or to other books in the same category. The investment page links me to analyst reports, realtime data, historical charts.
The challenge here becomes organizing all that contextual information, and presenting it in useful ways. "Information architecture" is the buzzword that's used to describe this emerging discipline. Zope, from the beginning, was a power tool for the information architect. It encourages you to build a site whose structure maps to the structure of your content, and when you do that you get all sorts of powerful inheritance effects (or should I say, acquisition effects?). Equally important, it encourages an object-oriented approach to your data. Documents such as news items, or Confera messages, are strongly typed. They can extend the behaviors of their ancestral documents, and they can interact intelligently with ZCatalog. In the world of electronic publishing, where I spend a lot of my time, this object-oriented approach to documents is not yet well understood. But there's intense competitive pressure to build document interfaces on the Web that are highly personalized, that aggregate data from multiple sources, and that summarize complex relationships among data components. Object publishing is the way to get these things done, and of course that's what Zope is about.
So where does XML fit into this picture? Well, a common misconception I hear all the time is that XML will enable "smart searching." That's kind of silly. What enables smart searching is effective information architecture. You can achieve that without any special tools or languages, if you know what you're doing, just like you can write object-oriented code in C or BASIC if you know what you're doing. Or you can use XML, and screw it up. But XML can make it easier to do effective information architecture, just like Python or Java can make it easier to do object-oriented programming.
The real significance of XML is that it extends the object system down into the document. The document's no longer just a blob of data with some metadata attributes that hook it into the containing object system. It's a mini object system in its own right, with a well-defined structure right down to the tiniest element, and scriptable methods that can access and transform and rearrange those elements. You can see this quite dramatically in Zope when you create a document of type 'XML Document' and then inspect the resulting tree of parsed elements.
What's this good for? Well it's true that for a while to come, we'll continue to need to transform this stuff into HTML for delivery into browsers. But as we create and store more of our documents in XML, we vastly improve our ability to add value to the resulting HTML pages. It's tempting to think that the limitations of HTML are what's keeping us from making richer and smarter document interfaces, and that as soon as XSL and SVG become pervasive, all our problems will be solved. I'm not holding my breath waiting for that to happen, though, because I can't help noticing that CSS -- to cite one painful example -- has been widely deployed since 1966, and still isn't universally acceptable on the Web.
The real bottleneck, I think, is lack of granular control over content. That's why I use XML. For example, one of my clients is a magazine, and I store all of its online content as XML, for delivery as HTML. It's very simple XML, just XHTML really, but the point is that I can reliably parse the content, transform it into richly-interconnected sets of Web pages, and vary that transformation at will to meet all sorts of rapidly-changing requirements.
An XML Document isn't just a header, containing structured metadata, and a body containing random stuff. The body has structure too. Most Web content managers haven't thought much about this kind of micro-structure. When they think in terms of information architecture, they consider the macro-structure of a site -- its functional areas, and how classes of documents relate to those areas. But documents do have internal structure, and you can leverage it to powerful effect. For example, the lead paragraph in a news item can also function as a teaser that appears in a summary view -- if you've tagged the lead paragraph so that you can later find and reuse it.
Or consider the collection of How-To's on the Zope.org site. As a member of Zope.org, I can create a new How-To, and categorize it by level of difficulty and topic. ZCatalog can search the resulting document in fulltext mode, or by way of the attributes I've assigned it. This is a great idea, but it only goes so far. Really, this is still the old model in which a document combines a structured header with an unstructured body. Now suppose there were a DTD for writing How-To's, along with an authoring environment that made it easy to instantiate that DTD. Suppose there's an element of the DTD called DtmlCodeFragment. Suppose that I can query the How-To collection for these DtmlCodeFragment elements, in conjunction with some specific piece of DTML syntax, say the <DTML-TREE>tag. The result would be an extremely powerful view that would draw together, on a single page, many different examples of uses of the tree tag. Imagine what that search results page would look like, then go to the zope.org site, search for "tree tag," and observe what you get -- a list of titles which reveals nothing about which of the underlying documents contains useful examples of the DTML tree tag.
Programmers learn by example, but usually, no single example will suffice. We need to collect all the variations on a theme, and consider them side by side, in order to build up the most complete mental model of the system. I think that most Zope newbies would agree that it's the difficulty of getting that mental model right that's the biggest obstacle to becoming productive in Zope. So information micro-architecture, if I can coin that term, is strategic for the Zope community, as well as for most of the problem domains in which we're inclined to use Zope.
Reusable content, at a fine level of granularity, is a game that we're all still learning how to play. Zope's 'XML Document' feature invites content managers to join that game. It's not a panacea, I'll admit. The embedded XML parser, expat, won't validate your content, and although you can arrange for ZCatalog to index and search your XML data, there's no support yet for XQL querying. But the immediate challenge is simply to get people over the XML activation threshold, by making it easy to start creating, and using, XML documents. I think Zope is well-positioned to do that. Since I'm a big believer in the "eat your own dogfood" principle, I'd suggest it might help to start managing Zope's own documentation in a more granular way -- including How-To's, Tips, and a lot of the message traffic that's currently happening outside of Zope on various lists.
My third theme is the data stores that support network services. Like every Web application server, Zope certainly knows how to play nicely with SQL engines. But unlike most, Zope comes with a native, object-oriented data store: ZODB. So when you're building a Zope-based service -- you can store the data in a relational database, or an object database, or some combination of the two.
It's worth noting that ZODB isn't the only imaginable object database that Zope and its applications could sit on top of. There are several commercial object databases, such as ObjectStore and POET, which come with what are called "bindings" to object-oriented programming languages, typically C++ and Java. In the case of Java, what this means is that your Java Hashtable, or Vector of Vectors, can persist in one of these databases.
Last year, ObjectDesign noticed that XML's structures map nicely to the object data its engine can store, query, and manage. And, like other ODB vendor, it repositioned its product as an "XML data server."
The version of ObjectStore that works with XML is called eXcelon. Once eXcelon has parsed and stored a chunk of XML, you can do some really powerful things with it. For example, you can query with XQL -- leveraging all of its powerful syntax for dealing with tree-structured data. There's also an experimental update language, analogous to SQL UPDATE, that you can use to declaratively alter nodes in place, create new nodes, and remove nodes.
This is a far cry from XML-RPC. Does it makes sense to regard XML not only as an interchange medium, but as a storage discipline? And if so, are XML and object databases really the natural partners that they might appear to be?
I don't think anybody really knows whether XML should evolve into a full-fledged data-management discipline, or if so, how. But it does seem clear that there's going to be a ton of XML content in the world, and that relational databases are not as naturally suited to the storage and management of that content as are object databases.
So where does Zope fit in here? I've said that XML is a way to extend the object system down into your documents. Zope's 'XML Document' feature already does that. It would be cool to see Zope offering some of the advanced features now available in commercial object databases -- like XQL querying and declarative updating, of XML content. But it would also be cool if Zope could hook directly into those other object databases, just as today it can hook into Oracle or Sybase. I'm not sure just how that would work, but if anybody can figure it out, I'll bet Jim Fulton can. Why might this matter? Because network services increasingly want to use complex data that's painful to shoehorn into relational stores. Persistent object storage is one of the most wonderful features of Zope, and there ought to be options ranging from ZODB to Berkeley DB all the way up to industrial-strength commercial object databases.
To wrap this up, I'd like to thank Paul Everitt for inviting me -- a mere Zope newbie -- to give this talk. I'll admit that I've had a kind of a love/hate relationship with the product as I've gotten to know it over the last few months, but Zope and I are having more good days than bad days lately, and there's no denying I've caught the Zope fever.
Here's how somebody described the experience to me:
"It starts as a benign web server with online content management... and turns into an insidious learning curve that will destroy your every waking hour. That is, until you uncover the secret that lets you do what you wanted it to in the first place."
It's true that Zope has its secrets, but it's also true that Zope takes the 'open' in open source very seriously. XML is an important part of the story. It can help ensure that Zope's network services, document interfaces, and datastores remain open, and in the right ways. The whole story hasn't been written yet, but I'm enjoying it so far, and I can't wait to read the next chapter. | http://www.oreillynet.com/lpt/a/52 | crawl-003 | refinedweb | 3,683 | 60.55 |
This page describes with the help of an example how to implement a new light-weight expression type in Eigen. This consists of three parts: the expression type itself, a traits class containing compile-time information about the expression, and the evaluator class which is used to evaluate the expression to a matrix.
TO DO: Write a page explaining the design, with details on vectorization etc., and refer to that page here.
A circulant matrix is a matrix where each column is the same as the column to the left, except that it is cyclically shifted downwards. For example, here is a 4-by-4 circulant matrix:
\[ \begin{bmatrix} 1 & 8 & 4 & 2 \\ 2 & 1 & 8 & 4 \\ 4 & 2 & 1 & 8 \\ 8 & 4 & 2 & 1 \end{bmatrix} \]
A circulant matrix is uniquely determined by its first column. We wish to write a function
makeCirculant which, given the first column, returns an expression representing the circulant matrix.
For simplicity, we restrict the
makeCirculant function to dense matrices. It may make sense to also allow arrays, or sparse matrices, but we will not do so here. We also do not want to support vectorization.
We will present the file implementing the
makeCirculant function part by part. We start by including the appropriate header files and forward declaring the expression class, which we will call
Circulant. The
makeCirculant function will return an object of this type. The class
Circulant is in fact a class template; the template argument
ArgType refers to the type of the vector passed to the
makeCirculant function.
For every expression class
X, there should be a traits class
Traits<X> in the
Eigen::internal namespace containing information about
X known as compile time.
As explained in The setting, we designed the
Circulant expression class to refer to dense matrices. The entries of the circulant matrix have the same type as the entries of the vector passed to the
makeCirculant function. The type used to index the entries is also the same. Again for simplicity, we will only return column-major matrices. Finally, the circulant matrix is a square matrix (number of rows equals number of columns), and the number of rows equals the number of rows of the column vector passed to the
makeCirculant function. If this is a dynamic-size vector, then the size of the circulant matrix is not known at compile-time.
This leads to the following code:
The next step is to define the expression class itself. In our case, we want to inherit from
MatrixBase in order to expose the interface for dense matrices. In the constructor, we check that we are passed a column vector (see Assertions) and we store the vector from which we are going to build the circulant matrix in the member variable
m_arg. Finally, the expression class should compute the size of the corresponding circulant matrix. As explained above, this is a square matrix with as many columns as the vector used to construct the matrix.
TO DO: What about the
Nested typedef? It seems to be necessary; is this only temporary?
The last big fragment implements the evaluator for the
Circulant expression. The evaluator computes the entries of the circulant matrix; this is done in the
.coeff() member function. The entries are computed by finding the corresponding entry of the vector from which the circulant matrix is constructed. Getting this entry may actually be non-trivial when the circulant matrix is constructed from a vector which is given by a complicated expression, so we use the evaluator which corresponds to the vector.
The
CoeffReadCost constant records the cost of computing an entry of the circulant matrix; we ignore the index computation and say that this is the same as the cost of computing an entry of the vector from which the circulant matrix is constructed.
In the constructor, we save the evaluator for the column vector which defined the circulant matrix. We also save the size of that vector; remember that we can query an expression object to find the size but not the evaluator.
After all this, the
makeCirculant function is very simple. It simply creates an expression object and returns it.
Finally, a short
main function that shows how the
makeCirculant function can be called.
If all the fragments are combined, the following output is produced, showing that the program works as expected: | http://eigen.tuxfamily.org/dox-devel/TopicNewExpressionType.html | CC-MAIN-2022-27 | refinedweb | 727 | 53.31 |
Command Objects
What is a command object ?
Command object is a non persistent class used for data-validation. The main advantage of using the Command objects is that we can validate those forms that do not map to the domain class.
Where to define command object ?
You can define command classes in the grails-app/controllers directory or even in the same file as a controller; or you may also define it in grails-app/src/groovy.
How to define command object ?
In one of my recent projects I did it like this :
public class RegisterUserCommand { String username String password String firstName String lastName String email String age static constraints={ username blank:false,size:30 password blank:false cardNo creditCard:true firstName matches/[A-Za-z ]/ lastName matches /[A-Aa-z ]/ email blank:false,email:true age range:18..60 } }
How to define custom validators ?
Lets us assume we ‘ve one more field as verifyPassword and we want to match it to the password field. We ‘ll use custom validator as :
verifyPassword blank:false,validator:{val,obj -> if(!(val==obj.password)) return ['field.match.message','Cofirm Password ','Password'] }
Note :Here we ‘ve returned our own customized message instead of the default error message, along with the parameters.The default error message is implicitly returned when we use default validations.
How to use our command object now ?
When a request comes in, Grails will automatically create a new instance of the command object, bind the incoming request parameters to the properties of the instance, and pass it to you as the argument.
Thus,you can create or modify your controller action as :
def registerUser = { RegisterUserCommand cmd -> if(!cmd.hasErrors()) { // your code does something } else{ // your code does something } } }
How do I display error messages on my gsp page ?
To do so you first need to pass the command object to your gsp page as :
render (view:'myErrorPage', model:[myCmd:cmd])
Then in the myErrorPage.gsp you can use g:renderErrors tag to display the errors(if any). But before displaying errors u need to make sure there are errors by making use of g:hasErrors tag :
But it shows default error messages.How can I define my own error messages ?
To do so you need to modify the /grails-app/i18n/message.properties_ file by creating your own messages or replacing the default messages by custom messages :
field.blank.message={0} cannot be blank #my message for blank fields fields.does.not.match.message={3} and {4} do not match # my verifyPassword error message field.number.notallowed.message={3} must have alphabets only # u can guess this
I can’t follow the parameterized messages..can u please ellaborate.
The error message is defined as:
default.doesnt.match.message=Property [{0}] of class [{1}] with value [{2}] does not match the required pattern [{3}]
[{0}] = property
[{1}] = class
[{2}] = value
[{3}] = constraint
The above example is a default message provided in the message.properties file.One can easily modify the message to suit to the requirement.
Or you choose to define your own message.eg., for the example given in the begining , the error message for the verifyPassword validator can be defined as :
fields.does.not.match.message={3} and {4} does not match
So in the validator the parameters and the message are returned as :
return ['field.match.message','Cofirm Password ','Password']
Hope you find this useful.
Imran Mir
imran@intelligrape.com
I wish it is possible to register own validator and make it reusable.
Geez, so that’s how its done. Awesome info. though.
Great post.
I dont know what to say. This weblog is fantastic. Thats not really a actually massive statement, but its all I could come up with right after reading this. You know so a lot about this subject. So much to ensure that you produced me want to discover much more about it. Your blog is my stepping stone, my friend. Thanks for the heads up on this subject.
Thanks a lot mate.
It was just awesome though i was aware of this in parts but finding everything at one place in form your post is very helpful for grails new bee like me.
Cheers:)
Thanks for sharing, see you then. Are there any forums that you recommend I join ?
This is amazing!
Great post | https://www.tothenew.com/blog/command-objects/ | CC-MAIN-2020-40 | refinedweb | 720 | 58.18 |
Home >>C++ Tutorial >C++ Functions
In order to perform or execute any task, the functions are used in the C++ programming. Hence, it is also known as the procedure in various programming languages. The functions have many benefits like it can be called various times provide the reusability of the written code.
Here are the advantages that the programmer gets by the functions in C++
There are generally two types of the functions in the C++ language:
Here is the syntax that is used to declare any function in the C++ programming:
return_type function_name(data_type parameter...) { //code that is to be executed }
Here is an example of the simple C++ function that will help you understand the topic better:
#include <iostream> using namespace std; void demo() { int i=10; cout<<"i=" << i<<"\n"; } int main() { demo(); demo(); demo(); } | http://www.phptpoint.com/cpp-functions/ | CC-MAIN-2021-10 | refinedweb | 138 | 52.73 |
By rick
via chaosinmotion.com
Published: May 30 2008 / 09:46
I still remember the brouhaha raised over Java on the Macintosh, and pronouncements by many of the Macintosh Technorati that Java sucks. (I believe Gruber’s words were Cross-platform crippity-crap Java apps.)
cbegin replied ago:
Agreed, Objective C sucks too.
bf44704 replied ago:
BS. Objective-C is miles ahead of Java. It's a damn shame that the original creators of Java, who explicitly credited Objective-C as an inspiration, didn't adopt more of the language's features, such as the more dynamic runtime features and the much more readable Smalltalk-like syntax. And creating UIs in COCOA using Interface Builder is still, after all these years, an order of magnitude easier and faster than doing the equivalent in Swing. Sad but true.
wp73875 replied ago:
Objective-C is definitely awesome, but that doesn't mean Java sucks. Java is excellent in many ways. However, when it comes to creating an application that looks and feels like it belongs on the platform (any platform), the current implementation of Java is not ideal.
As an aside, I think language wars are pretty silly (unless we're talking BASIC or C# which should be universally hated ::ducking::) :-) I have noticed, though, that a lot of the people who talk trash about Objective-C and Cocoa haven't ever actually implemented anything with them.
cbegin replied ago:
Cocoa != Objective-C.
You can code for Cocoa using Python too ( ). The Objective C language reads like crap. It looks like a bunch of stuff bolted onto C whereby the syntax was chosen solely to make it easy to parse and distinguish from the interlaced C code.
The lack of namespaces also creates a horrible naming convention that is not sustainable in the future. Two characters is not enough to distinguish frameworks, and it reads like crap too. I could go on, but there's little point.
Objective-C is and always will be the best choice for building OS X applications, because that's what Apple chose. But unfortunately, that's all Objective-C will ever be. As a language, it is simply not as universally relevant or applicable as Java or C# (and perhaps not even Ruby or Python). It's a niche language for a specific platform.
Clinton
Tantalus replied ago:
It doesn't matter if it's not better than Java. The reason mac developers sweat the details is because Mac users are picky and if you don't have a nice looking native aqua interface your chances of selling your app go way way down. That means Obj-C.
Voters For This Link (18)
Voters Against This Link (5) | http://www.dzone.com/links/rss/java_sucks_and_objectivec_is_great_puuuhhhllleeee.html | crawl-002 | refinedweb | 452 | 64.51 |
#include <genesis/tree/mass_tree/squash_clustering.hpp>
Definition at line 62 of file squash_clustering.hpp.
Is this cluster active, i.e., is it not yet part of a larger cluster?
Only active clusters are considered for merging.
Definition at line 85 of file squash_clustering.hpp.
How many end points (Samples) does this cluster represent?
We need this information for calculating the weighted average of the Sample masses when merging two clusters.
Definition at line 78 of file squash_clustering.hpp.
Distances from this cluster to all clusters with a lower index in the
clusters vector.
We don't store the distances in a global distance matix, but in a vector for each cluster instead, as this makes it trivial to keep track of the data when merging clusters. No need to keep track of which row belongs to which cluster etc.
Definition at line 95 of file squash_clustering.hpp.
The MassTree that this cluster represetns.
In the beginning of the algorithm, those are simply the MassTrees of the Samples. Those are then successively merged to form bigger clusters.
Definition at line 70 of file squash_clustering.hpp. | http://doc.genesis-lib.org/structgenesis_1_1tree_1_1_squash_clustering_1_1_cluster.html | CC-MAIN-2018-17 | refinedweb | 184 | 60.51 |
This GPCL (General Purpose Code Library) contains some general C# code functions that I use quite often. Please feel free to use the source code and implementation samples as freely as you like, and do not hesitate to tell me what you think of it.
Currently, it contains the following:

- Query XML (filter and sort data read from an XML file)
- LDAP (Active Directory) authentication
- Debug.WriteLine
Use the source code from GPCL_Source.zip as you like, for instance, by copying the functions into your own general purpose code library, or simply add the GPCL.dll as a reference to your project and start using its handy and easy (I hope!) functions in your code.
The source code as well as the demo project is thoroughly commented and explained within the code itself, so I'll just list two pieces of code as bait:
//
// Remember to add a reference to the library
// in any applications you wish to use the library:
//
using WernerReyneke.GPCL;
//
// Query XML - implementation example:
//
//Create an instance of the DataPlus class
//in the WernerReyneke.GPCL namespace
WernerReyneke.GPCL.DataPlus xml_q = new DataPlus();
//Set a DataRow[] object equal to the QueryXML function
//The QueryXML function returns a DataRow[] object(set of data rows)
//By passing sCatalog= B as the value of
//the where_criteria and * as the value of the sorting
//parameters,the data is filtered to return
//only rows where sCatalog= B and is ordered
//Ascending according to theprimary key column
System.Data.DataRow[] drows =
xml_q.QueryXML("SampleXMLFile.XML","sCatalog='B'",
"sShortText DESC","ipkPrimaryID");
//Loop through each row of data
foreach( DataRow r in drows )
{
//Display the value of the sShortText column
//(see sample XML file: SampleXMLFile.XML
//in the \bin\Debug directory)
MessageBox.Show(r["sShortText"].ToString());
}
The above code will query the XML file you provide, according to the where criteria and sorting order specified. The returned data is read back from a regular DataRow[] object.
DataRow[]
//
// LDAP Authentication - implementation example:
//
WernerReyneke.GPCL.LDAPPlus Auth = new LDAPPlus();
//provide the Active Directory Server Name,
//NT username (your regular NT username
//and password you use to log on to your Windows Network
string name = Auth.Authenticate_Against_LDAP("LDAP (Active" +
" Directory Server) Name","NT username","password");
// The name variable will contain either failure info, or the valid
//name of the AD Server you're querying.
MessageBox.Show(name);
The above code will authenticate the credentials you pass against LDAP (Active Directory), and either return a failure message, or, in case of success, the valid name of the domain controller on AD. I recently developed a Web application for a large international petroleum company's Human Resources department, in ASP.NET, where the security/logon requirement was to integrate with Active directory (LDAP), so that no extra database of separate user accounts was necessary, but that the application use the existing users on Active Directory.
That's just two nice functions. Enjoy it, and all feedback is welcome.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here | http://www.codeproject.com/Articles/12729/XML-Querying-Text-Streaming-Debug-Capture-and-LDAP | CC-MAIN-2014-35 | refinedweb | 519 | 53.31 |
Ron is a Member of Technical Staff at Los Alamos National Laboratory. He can be contacted at rminnich@lanl.gov.
I magine waking up one day and discovering that programming has suddenly regressed you can only use global variables. Furthermore, imagine that all the names and values of all the variables of all the programs on your system are shared. Without question, this would be a nightmare. However, it's also how operating systems currently manage files. All of my files, for instance, can be made visible to you, and all of yours made visible to me and there is nothing we can do about it, at least with most operating systems. Fortunately, the concept of private namespaces provides an alternative. Private namespaces let groups of processes construct their own namespace, pass that namespace to other processes on other computers, and keep the namespace invisible to processes outside the group. In this article, I will discuss how I went about implementing private namespaces for Linux to solve problems in both distributed and cluster computing. (The complete source code that implements this technique is available electronically; see "Resource Center," page 5.)
Distributed computing is the use of many computers for one application. As processes run on remote nodes, they often access their home filesystem via the network filesystem (NFS). Figure 1, for instance, illustrates the result of a remote process from machine B running on machine A, where the remote process on machine A can access B's disks with the prefix /net/B.
Now consider a network of 400 machines where, in most cases, the set of distributed programs accesses every filesystem that may be accessed by any user at any time. Because the schedulers are trying to use as many machines as possible, these programs access all these filesystems from every desktop. In the worst case, the network of machines ends up with "spaghetti mounts," with many workstations mounting each other and even circular dependencies. The end result is that the entire collection of machines can become vulnerable to a single desktop machine going down.
You might expect that clusters would not have problems with NFS spaghetti mounts. In practice, however, the problem can sometimes be worse on a cluster. Because the cluster connectivity is very good and data sets are typically much larger than one disk, files are usually striped across the cluster nodes, and the nodes act as servers. If an NFS file server is a client of some other server, cascading outages of servers begin to occur, and whole parts of the cluster can become hung. The delays also exhibit self-synchronizing behavior, which can result in nodes all hanging synchronously, causing even more delay and failure. (The types of problems that can occur were briefly mentioned by the group that rendered the Titanic movie graphics.)
Remote file access is a common problem for both distributed and cluster computing. When a program accesses a remote filesystem, it makes changes to the global state of the operating system, and these changes can affect the operating system and all the processes running on it. Referring to Figure 1, you can see that once the process from machine B is running on machine A, the following happens:
- The process accesses B's filesystems, and they are mounted by the automounter. If B goes down, machine A can hang and may need to be rebooted. NFS only has a few ways of dealing with server outages: hanging the client until the server comes back (hard mounts); losing data on reads and writes while the server is down (soft mounts); or hanging on writes and losing data on reads (BSD spongy mounts).
- Any process on machine A can now peruse machine B's filesystem at will. B cannot distinguish malicious browsing from A from normal filesystem access from the remote process running on A.
- B's process can peruse A's filesystem at will. In fact, the problem is even worse if A and B are in different administrative domains because the process from B has now gained access to all of the servers that A can access. This single security problem alone makes it impossible to convince different organizations to share compute resources, because administrators are unwilling to leave their filesystem environment open to other organizations.
The type of namespace supported by UNIX is known as a "global namespace." In global namespaces, the filesystem namespace is common to all processes and traversable by any process on the machine. New filesystems become accessible to all processes as they are mounted, promoting "mutual insecurity."
The combination of global namespaces and distributed and cluster computing creates problems that cannot be solved by manipulations of directory permissions, automount tables, or judicious use of the chroot system call.
Again referring to Figure 1, the process running on B might construct a private namespace consisting solely of (for instance) /bdisk1 from B, as in Figure 2. When the process moves to A, it restarts with only /bdisk1 in its namespace and has no access to any of A's disks. No process on A can access /bdisk1 without communicating with B and mounting the disk; A's processes cannot access /bdisk1 just because a process from B happens to be running on A. If B goes down, then the only process that hangs or fails is the remote process started from B. The failure of B has no impact on any other process on A, or on A's operating system. In short, private namespaces remedy problems like this that I've experienced in large-scale distributed computing environments.
Private Namespaces for UNIX
Working from publicly available documents, I've built an implementation of the Plan 9 filesystem protocol and tested its user-mode components on FreeBSD, Solaris, SunOS, and Linux. I have also written a kernel-mode virtual filesystem (VFS) that runs on Linux. (For more information on Plan 9, see and "Designing Plan 9," by Rob Pike, Dave Presotto, Ken Thompson, and Howard Trickey, DDJ, January 1991.)
As Figure 3 shows, the current components of this system provide several different layers of functionality. There is a user-mode library for the client side, a kernel VFS for the client side, and a set of user-mode servers.
The Plan 9 Protocol
The Plan 9 protocol uses stream-oriented communications services to connect clients to servers. Communications are via T messages (requests) and R messages (replies). The structure of T and R messages is very regular, with fixed-size data components and commonality between different message types.
Each T message has a unique 16-bit tag, a TID. Once a TID has been used for a T message, the tag may not be used again until an R message with the same tag has been sent, which retires that TID. The tags allow concurrency in the message transmission, processing, and reply. Requests may be aborted via a TFLUSH message. To gain access to remote filesystems, a process must:
- Create one or more sessions with one or more remote file servers. Part of this process involves authentication. Once a session is created, it is assigned a session ID.
- For each session, the process may mount one or more remote filesystems. A mount is known as an "attach" in Plan 9 terminology. Several attaches may be rooted at the same point in the processes' private namespace, in which case that part of the namespace represents a union of the attaches ("union mount"). Mount points and files (opened or not) are referred to by File IDs (FIDs), but unlike NFS or other remote filesystem protocols, FIDs are specified by the client, not the server. A FID for an attach may be cloned (similar to a UNIX dup), and from that point on the cloned FID may be used for further operations, such as walking the filesystem tree.
- Using the cloned FID, the process can then traverse the remote filesystem tree via messages that walk to a place in the remote filesystem or create a new file. Walking does not change the FID.
- Unlike NFS, Plan 9 has an open operator. Files may be opened after the appropriate set of walk operations. The argument to an open is a FID.
- Once a file has been opened or created, it can be read or written via TREAD or TWRITE messages. Since each TREAD or TWRITE uses a unique TID, many concurrent reads or writes can be posted, allowing for read-ahead or write-behind to be supported.
- When a client is done with a file, it sends a TCLUNK message for the FID to indicate that the server can forget about the file. Once a close succeeds, the FID may be reused. A clunk for an attach point is equivalent to an unmount.
Status for a file is accessed via a TSTAT message, and the status may be changed via a TWSTAT message. TWSTAT is used to implement chmod, rename, and other file state management operations.
This overview should provide a flavor of the nature of the Plan 9 protocol. Based on my experience with NFS (see my paper "Mether-NFS: A Modified NFS which supports Virtual Shared Memory,"), I feel the protocol compares favorably to NFS. In contrast to the many variable-length fields in NFS, the fields in the Plan 9 messages are fixed size and the same fields are always at the same offset. The individual elements are defined in a simple but easily converted machine-independent format. Furthermore, the file status structure is similarly straightforward. The protocol is connection oriented, circumventing problems that have plagued UDP-based versions of the NFS protocol. Finally, the user-mode mount and file I/O protocol provide for increased security, since there are no privileged programs running to support those protocols.
Changes to the Plan 9 Protocol
To accommodate the differences between Plan 9 and UNIX, I made several changes and additions to the Plan 9 protocol. For one thing, I added support for symbolic links by adding two new messages to the protocol to support reading and creation of symbolic links.
The original Plan 9 protocol for reading directories returned not only directory entries, but all the information about each directory entry (for example, file sizes, access times, ownership information, and so on). This extra information is useful if used. However, it does slow directory reads by about a factor of 10, since most UNIX filesystems keep directory entries (the file names in the directory) and the information about those files in separate places. Since all UNIX systems extant do not use this information when a directory is read, I only return the directory entries, and not the information. David Butler has recently proposed adding this same type of limited directory reading operation to the Plan 9 protocol.
The client-side user-mode components let unmodified programs access private namespace semantics. The clib_libc.so is a stub library that overrides functions in the Standard C Library so that private namespace functions can be supported, as well as global namespace functions. Access to the global namespace may be disabled if desired. Programs that use this library do not need to use a kernel VFS. This library is used on systems that do not have a VFS or systems that cannot load the VFS.
User-mode components not shown in Figure 3 include the dump/restore functions that support private namespace inheritance transparently (in the C library they are integrated into fork()), as well as additional programs for testing the libraries and building the namespace that is inherited by unmodified UNIX programs.
The kernel VFS consists of several layers. The top layer consists of local directory structures. Every process has a private version of these structures, rooted at "private" in the VFS. For example, if the VFS is mounted at /v9fs, then a process referencing /v9fs/private sees a private copy of a directory tree, just as the name "current" refers to the current process in the /proc VFS. The next layer is a union mount layer, which sits on top of an actual mount layer. From the mount layer, VFS operations are performed over the network to the server. Processes using only the private name space perform a chroot to /v9fs/private, at which point the root of their namespace is the private namespace and the global name space is no longer accessible.
Servers consist of the communications layer that sends and receives packets; a packet decode layer that determines what functions to call, based on the packet type; and a filesystem-type dependent layer, which consists of the set of functions that implement a given filesystem. Most of the code is common, save for this top layer. Building a new filesystem involves writing a new set of functions and linking them with a server library.
I currently support two types of servers: an interface to the filesystem, to support remote file access; and a simple memory-based filesystem. The memory-based filesystem provides a network RAM disk. The memory-based filesystem can be used to support temporary files that might be shared between several processes on different nodes. Server processes can use the memory server to store information about servers, and thus the memory server can be used as a directory of servers.
Private Namespaces in Distributed Computing
The main application that I've found to date for distributed computing is the support of remote file access that does not require automounters or NFS access across organizational boundaries, or special privileges of any kind. I make heavy use of the user-mode client-side support described here, since not all the systems I run on are Linux, and not all Linux systems have VFS installed. UNIX programs (including Emacs, gcc, and all the shells) run without problems.
Another interesting use of the private namespace is a process-family-private /proc filesystem. Users may have the ability to distribute programs to remote nodes but may not be given permission to run (for example) a process status program such as ps, or to peruse the /proc filesystem on the remote nodes. The processes running on the remote nodes can open a session to a memory-based server and write status into a file at intervals. The processes that use this instance of the server are part of a distributed process family. Users can start a session to the server and examine the process status with ordinary ls and cat commands, as in Figure 4. The process that actually writes the file can be the remote computation or a control program that manages the remote computation and reports on its status.
Private Namespaces in Cluster Computing
For clustering, I have found two main uses for the private name spaces to date replacing NFS and building clustered /proc namespaces. In each case I use the filesystem servers, not the memory servers.
On clusters, I use the VFS on Linux nodes. The VFS is SMP-safe and 64-bit clean. Because the mounts are process-to-process, special root access is not required to mount a remote filesystem. Users can mount a remote filesystem without requiring system administrator support or the root password. At the same time, the remote mounts do not compromise security since they occur in the context of the user's private namespace. We also use the private namespace to build cluster /proc filesystems for families of processes. As a single parent process starts up processes on other cluster nodes, the parent process can mount /proc from those other nodes so as to monitor process status. Once those other /proc directories are mounted and accessible, the cluster /proc lets users easily monitor a family of processes. In Figure 5, for instance, users have mounted /proc from localhost and four nodes into /proc in the private namespace. All of the process status is easily viewed via this cluster /proc. A significant advantage of this type of /proc is that the user sees /proc only on nodes of interest. Instead of having to deal with process status from 160 nodes, users need only see status for the nodes in use by the process family. A remaining step is to modify ps to use paths with /proc/<host-name> in them instead of just /proc.
Common Uses of Private Namespaces
A final application of the private namespace common to both cluster and distributed computing is a directory of servers. Directories of servers let remote clients locate servers by name, without knowing a server's port number. The client contacts a directory of servers and looks up the desired server by name, opens the file for that name, and reads the information needed to contact the server, such as port number and the type of authentication required by that server.
To support the directory of servers application, I have applied for and been granted a registered service name (fsportmap) and port number, 4349/tcp, from the Internet Assigned Numbers Authority. This port number is used as follows: A process contacts a host on this port, establishes a session with the server and attaches the root of the directory of servers server. The process can then look up an entry for a server of interest.
Using a directory of servers (Figure 6) in this manner is much less complex than NFS. To find and connect to a filesystem server, NFS requires three separate types of daemons and RPC protocols. In contrast, using my directory of servers, I use one protocol and two different types of servers for that same protocol. The server that is used for the directory of servers is not special purpose in any sense; rather, it is a general-purpose server used for a specific application. You can eliminate two special RPC protocols and two special daemons using the directory of servers approach.
Conclusion
Private namespaces are an essential component of any large-scale distributed or cluster computing environment. I have developed a first implementation of private namespaces for UNIX, including a kernel VFS implementation for Linux. The performance of this system is comparable to NFS. Nonprivileged processes can construct their own filesystem namespaces, and these namespaces are not visible to external processes, greatly enhancing security in distributed systems. The private namespace also makes the construction of cluster /proc filesystems quite simple. Also, the memory server can be used to provide a directory of servers, eliminating the need for distinct RPC protocols for locating and mounting remote filesystems.
Acknowledgment
This research was supported by DARPA Contract #F30602-96-C-0297.
DDJ | http://www.drdobbs.com/mobile/private-namespaces-for-linux/184404878 | CC-MAIN-2014-35 | refinedweb | 3,088 | 59.74 |
Google Groups
Re: [sympy] Re: Changing the default sorting for Add and Mul
Aaron Meurer
May 29, 2012 3:33 PM
Posted in group:
sympy
It's a rule that's not enforced. Actually, I think what you're doing
is OK. As long as the hash (and equality comparison) of the object
does not depend on the cached state, it shouldn't be an issue.
The main thing is that if you change the hash of an object during its
lifetime, or change what is used to compute the hash, bad things will
happen. For example:
class Bad(object):
def __init__(self, arg):
self.arg = arg
def __hash__(self):
return hash(self.arg)
In [26]: Bad(1)
Out[26]: <__main__.Bad object at 0x106ca7fd0>
In [27]: hash(Bad(1))
Out[27]: 1
In [28]: a = Bad(1)
In [29]: b = {a:1}
In [30]: a.arg = 2
In [31]: hash(a)
Out[31]: 2
In [32]: b[a]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-32-9f1bfc533672> in <module>()
----> 1 b[a]
KeyError: <__main__.Bad object at 0x106ca7f50>
Here, we implicitly changed the hash of a by mutating the argument of
a that is used to compute the hash. Note that Python doesn't prevent
me from doing that. I could make it if I want, by making __setattr__
raise AttributeError, but even then I could forcibly change it by
modifying the object's __dict__. This is what is meant when we say
that Python is a "consenting adults" language.
Bad things also happen if we change the object, but keep the hash the same:
class Bad2(object):
def __init__(self, arg):
self.arg = arg
self.hash_cache = None
def __hash__(self):
if self.hash_cache is None:
self.hash_cache = hash(self.arg)
return self.hash_cache
def __eq__(self, other):
if not isinstance(other, Bad2):
return False
return self.arg == other.arg
(this is the pattern used by Basic, by the way)
Here we see that finding the key in a dictionary still works:
In [47]: a = Bad2(1)
In [48]: b = Bad2(2)
In [50]: d = {b:2}
In [51]: b.arg = 1
In [52]: hash(b)
Out[52]: 2
In [53]: d[b]
Out[53]: 2
but it no longer keeps the invariant that sets (and dicts) keep unique
objects (keys):
In [68]: a = Bad2(1)
In [69]: b = Bad2(2)
In [70]: hash(a)
Out[70]: 1
In [71]: hash(b)
Out[71]: 2
In [72]: b.arg = 1
In [73]: a == b
Out[73]: True
In [74]: hash(a) == hash(b)
Out[74]: False
In [75]: set([a, b])
Out[75]: set([<__main__.Bad2 object at 0x10692b3d0>, <__main__.Bad2
object at 0x10692b590>])
and this can lead to very subtle issues. For example, look what
happens when I don't compute the hash before changing the object:
In [56]: a = Bad2(1)
In [59]: b = Bad2(2)
In [60]: b.arg = 1
In [61]: a == b
Out[61]: True
In [62]: hash(a) == hash(b)
Out[62]: True
In [63]: set([a, b])
Out[63]: set([<__main__.Bad2 object at 0x10692b910>])
This time, the set contains one element!
So, yes, Python won't enforce it, but if you don't follow the rule
that hash(object) remains the same for the lifetime of object and that
a == b implies hash(a) == hash(b), then you will run into very subtle
issues, because very basic Python operations expect these invariants.
Aaron Meurer
On Tue, May 29, 2012 at 4:11 PM,
krastano...@gmail.com
<
krastano...@gmail.com
> wrote:
> On 30 May 2012 00:04, Aaron Meurer <
asme...@gmail.com
> wrote:
>> On Tue, May 29, 2012 at 11:57 AM, Joachim Durchholz <
j...@durchholz.org
> wrote:
>>> Am 29.05.2012 07:10, schrieb Aaron Meurer:
>>>
>>>> _hashable_content generally contains
>>>> other Basic objects. So what I really want is the "fully denested"
>>>> _hashable_content, where each Basic object is recursively replaced
>>>> with its _hashable_content. I've no idea how expensive that would be
>>>> to compute.
>>>
>>>
>>> The behavior will be quadratic with number of levels of sets. This could
>>> start to impact us on deeply nested expressions - not with anything a human
>>> would enter, but if somebody wants to use SymPy from a script, this might be
>>> different. (I'm talking of thousands of nesting levels.)
>>> One way to set that might be to simplify x+0+0+...+0 (1000 terms, then retry
>>> with 100,000 ;-) ).
>>>
>>> The downside of caching is that each mutator must invalidate the cached
>>> hash. (Unless mutators are disallowed once an object is entered into a dict.
>>> Or the mutable objects are really copy-on-write ones. I don't know whether
>>> any of these paths is available already, or feasible if unavailable - we're
>>> deep into Systems Programming Land here.)
>>
>> In Python, only immutable objects are hashable, and in SymPy, all
>> Basic subclasses are immutable, so this isn't an issue.
>>
> One can easily cheat this rule:
>
> class A(object):
> def __init__(self, immut1, immut2, mut):
> ...
> def __hash__(self):
> return hash((self.immut1, self.immut2))
>
> For instance this can be a Manifold, that must be "immutable" as
> mathematical object, however still be able to learn about new
> coordinate systems defined on it.
>
> I am mentioning it because I am not really sure what does this mean:
>> In Python, only immutable objects are hashable
> I suppose I can get some clarification that way.
>
> --
> You received this message because you are subscribed to the Google Groups "sympy" group.
> To post to this group, send email to
sy...@googlegroups.com
.
> To unsubscribe from this group, send email to
sympy+un...@googlegroups.com
.
> For more options, visit this group at
.
>
Previous post
Next post | https://groups.google.com/forum/?_escaped_fragment_=msg/sympy/pJ2jg2csKgU/0nn21xqZEmwJ | CC-MAIN-2015-27 | refinedweb | 942 | 72.97 |
Trigger Drag drop event with onclick
Trigger Drag drop event with onclick
I'm changing an existing ExtJS application to work on the iPad... I'm quite a ExtJS n00b..
Currently (with a drag drop) you can drag from a tree and drop it in the list. Which triggers an event. This works fine.
Since drag-drop doesn't work on iPad I want a click event to trigger the drop event.
var ns = Ext.namespace("HUB.Plugins.Event");
ns.ScheduleListManager = Ext.extend(Object, {
...
onEventDrop : function(dz, target, data) {
}
I thought I could trigger it with:
var ns = Ext.namespace("HUB.Plugins.Event");
ns.ScheduleListManager.onEventDrop(dz, target, data);
But this doesn't work...
How can I achieve this?
Have a look at fireEvent()
Regards,
Scott. | http://www.sencha.com/forum/showthread.php?199901-Trigger-Drag-drop-event-with-onclick&p=792872 | CC-MAIN-2015-11 | refinedweb | 126 | 70.19 |
Stubborn Wifi name... want the defaults...
So I have a LoPy4 that I've used with Pybytes a few times, and now I want it to go back entirely to factory defaults and default AP. So I've removed everything from the file system, including the pybytes JSON, removed the device from the pybytes web interface, removed the Wifi network from the web interface AND re-flashed the board with the "stable" firmware, yet somehow, from somewhere, the old Wifi name and credentials are being preserved on this seemingly blank LoPy, only as hot-spot details for the LoPy, rather than as the network it was directed at long ago.
All I'm trying to achieve is to get back to a default LoPy4, with the defauly wifi AP... Help?!
Just got around to getting back to this issue - thanks so much Xykon, problem solved!
robert-hh: I did do erase during update yep, a few times, but eventually was solved with advice from the post above :)
- Xykon administrators last edited by
@dan0h Please use the following code:
import pycom pycom.wifi_ssid('') pycom.wifi_pwd('')
After a reset you should have the original SSID/Password back.
@dan0h Using the updater, did you try the erase during update option? You have to check the advanced options to get that.
Otherwise, you can do a full flash erase with esptool.py | https://forum.pycom.io/topic/3734/stubborn-wifi-name-want-the-defaults/4 | CC-MAIN-2020-24 | refinedweb | 228 | 68.3 |
Living Clojure
(reading notes)
- Chapter 1. The Structure of Clojure
- Chapter 2. Flow and Functional Transformations
- Chapter 3. State and Concurrency
- Chapter 4. Java Interop and Polymorphism
- Chapter 5. How to Use Clojure Projects and Libraries
- Chapter 6. Communication with core.async
- Chapter 7. Creating Web Applications with Clojure
- Chapter 8. The Power of Macros
- Chapter 9. Joining the Clojure Community
- Chapter 10. Weekly Living Clojure Training Plan
- Chapter 11. Further Adventures
Preparation
- install java
- install Leiningen
- create a new project
lein new wonderland
- entering REPL:
cd wonderland; lein repl
Chapter 1. The Structure of Clojure
simple values or literals simply evaluate to themselves, for example:
- integer:
12
- decimal:
12.43
- ratio:
1/3
- character:
\j
- keyword:
:jam
- string:
"jam"
- boolean:
true,
false
- null:
nil
expressions
(+ 1 (+ 8 3))
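expressions evaluate inside-out: the innermost form is reduced first, and its result becomes an argument to the enclosing form. for example:

```clojure
;; (+ 8 3) evaluates to 11 first,
;; then (+ 1 11) evaluates to 12
(+ 1 (+ 8 3)) ;; -> 12
```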
collections
- lists
- vectors
- maps
- sets
all collections are immutable and persistent.
Immutable means the value of the collection does not change. A function to change a collection gives you back a new version of the collection.
Persistent means collections will do smart creations of new versions of themselves by using structural sharing.
Lists
Lists are collections of data what you want to access from the top of the list.
to create a list, simply put a quote in front of the parents:
'(1 2 "jam" :marmalade-jar)
you can mix and match values in a collections.
this is also valid:
'(1, 2, "jam", :bee)
commas are ignored and treated like whitespace. (but just use spaces)
some basic functions:
(first '(:rabbit :pocket-watch :marmalade :door)) ;; -> :rabbit
(rest '(:rabbit :pocket-watch :marmalade :door)) ;; -> (:pocket-watch :marmalade :door)
(cons 5 '()) ;; same as (cons 5 nil) ;; -> (5)
(cons 4 (cons 5 nil)) ;; -> (4 5)
'(1 2 3 4 5) ;; -> (1 2 3 4 5)
(list 1 2 3 4 5) ;; -> (1 2 3 4 5)
Vectors
Vectors are for collections of data that you want to access anywhere by position.
to create vector, use square brackets
[:jar1 1 2 3 :jar2]
(first [:jar1 1 2 3 :jar2]) ;; -> :jar1
(rest [:jar1 1 2 3 :jar2]) ;; -> (1 2 3 :jar2) ;; returns a list
in vectors, you have fast index access to the elements:
(nth [:jar1 1 2 3 :jar2] 0) ;; -> :jar1
(nth [:jar1 1 2 3 :jar2] 2) ;; -> 2
(last [:rabbit :pocket-watch :marmalade]) ;; -> :marmalade
(last '(:rabbit :pocket-watch :marmalade)) ;; -> :marmalade
vectors have better index access performance than lists; if you need to access the elements of a collection by index, use a vector.
more common functions
(count [1 2 3 4]) ;; -> 4
(conj [:toast :butter] :jam :honey) ;; -> [:toast :butter :jam :honey]
(conj '(:toast :butter) :jam :honey) ;; -> (:honey :jam :toast :butter)
conj adds to a collection in the most natural way for the data structure. for lists, it adds to the beginning. for vectors, it adds to the end.
Maps
Maps are for key-value pairs.
to create maps, use curly braces:
{:jam1 "strawberry" :jam2 "blackberry"}
maps are the one place that it can be idiomatic to leave the commas in for readability:
{:jam1 "strawberry", :jam2 "blackberry"}
get a value from maps
(get {:jam1 "strawberry", :jam2 "blackberry"} :jam2) ;; -> "blackberry"
;; get with a default value, put it as the last argument
(get {:jam1 "strawberry", :jam2 "blackberry"} :jam3 "not found") ;; -> "not found"
a more idiomatic way to get a value from a map:
(:jam2 {:jam1 "strawberry" :jam2 "blackberry" :jam3 "marmalade"}) ;; -> "blackberry"
when using a keyword as a key, the keyword itself can be used as a function to look itself up.
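The lookup also works the other way around: maps themselves are functions of their keys, so either side of the pair can be the function.

```clojure
;; keyword as the function
(:jam2 {:jam1 "strawberry" :jam2 "blackberry"}) ;; -> "blackberry"
;; map as the function
({:jam1 "strawberry" :jam2 "blackberry"} :jam2) ;; -> "blackberry"
```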
get keys and values:
(keys {:jam1 "strawberry" :jam2 "blackberry" :jam3 "marmalade"}) ;; -> (:jam3 :jam2 :jam1)
(vals {:jam1 "strawberry" :jam2 "blackberry" :jam3 "marmalade"}) ;; -> ("marmalade" "blackberry" "strawberry")
(remember, collections are immutable; the functions below actually return a new version of the collection)
to update value:
(assoc {:jam1 "red" :jam2 "black"} :jam1 "orange") ;; -> {:jam2 "black", :jam1 "orange"}
to remove value:
(dissoc {:jam1 "strawberry" :jam2 "blackberry"} :jam1) ;; -> {:jam2 "blackberry"}
merge:
(merge {:jam1 "red" :jam2 "black"} {:jam1 "orange" :jam3 "red"} {:jam4 "blue"}) ;; -> {:jam4 "blue", :jam3 "red", :jam2 "black", :jam1 "orange"}
Sets
Sets are for collections of unique elements.
create a set with
#{}
#{:red :blue :white :pink}
;; duplicates are not allowed
#{:red :blue :white :pink :pink} ;; -> IllegalArgumentException Duplicate key: :pink
common set functions in clojure.set:
;; union combines all elements
(clojure.set/union #{:r :b :w} #{:w :p :y}) ;; -> #{:y :r :w :b :p}
;; difference takes elements away from the first set
(clojure.set/difference #{:r :b :w} #{:w :p :y}) ;; -> #{:r :b}
;; intersection returns only shared elements
(clojure.set/intersection #{:r :b :w} #{:w :p :y}) ;; -> #{:w}
convert other collections to sets:
(set [:rabbit :rabbit :watch :door]) ;; -> #{:door :watch :rabbit}
(set {:a 1 :b 2 :c 3}) ;; -> #{[:c 3] [:b 2] [:a 1]}
get element from a set:
(get #{:rabbit :door :watch} :rabbit) ;; -> :rabbit
(get #{:rabbit :door :watch} :jar) ;; -> nil
;; use a keyword as a function
(:rabbit #{:rabbit :door :watch}) ;; -> :rabbit
;; the set itself can be used as a function to do the same thing
(#{:rabbit :door :watch} :rabbit) ;; -> :rabbit
check if element exists:
(contains? #{:rabbit :door :watch} :rabbit) ;; -> true
(contains? #{:rabbit :door :watch} :jam) ;; -> false
add element to a set:
(conj #{:rabbit :door} :jam) ;; -> #{:door :rabbit :jam}
remove element from a set:
(disj #{:rabbit :door} :door) ;; -> #{:rabbit}
Symbols
Clojure symbols refer to values. when a symbol is evaluated, it returns the thing it refers to.
def gives something a name, so we can refer to it.
(def developer "Alice") ;; -> #'user/developer
def created a var object in the default namespace (user) for the symbol developer.
developer ;; -> "Alice"
user/developer ;; -> "Alice"
when you want to have a temporary var, use let. let allows us to have bindings to symbols that are only available within the context of the let. What happens in a let, stays in the let.
(def developer "Alice") ;; -> #'user/developer
(let [developer "Alice in Wonderland"] developer) ;; -> "Alice in Wonderland"
the bindings of let are in a vector form. it expects pairs of symbols and values.
(let [developer "Alice in Wonderland" rabbit "White Rabbit"] [developer rabbit]) ;; -> ["Alice in Wonderland" "White Rabbit"]
- use def to create global vars.
- use let to create temporary bindings.
Creating a function
defn takes the following arguments:
- name of the function
- a vector of parameters
- body of the function
(defn follow-the-rabbit [] "Off we go!") ;; -> #'user/follow-the-rabbit
(follow-the-rabbit) ;; -> "Off we go!"
(defn shop-for-jam [jam1 jam2] {:name "jam-basket" :jam1 jam1 :jam2 jam2}) ;; -> #'user/shop-for-jam
(shop-for-jam "strawberry" "marmalade") ;; -> {:name "jam-basket", :jam1 "strawberry", :jam2 "marmalade"}
anonymous functions
;; returns a function
(fn [] (str "Off we go" "!"))
;; invoke it directly
((fn [] (str "Off we go" "!"))) ;; -> "Off we go!"
;; shorthand: use # in front of the parens
(#(str "Off we go" "!")) ;; -> "Off we go!"
;; for parameters:
;; if there is only 1 parameter, just use %
(#(str "Off we go" "!" " - " %) "again") ;; -> "Off we go! - again"
;; with more than 1, use %1, %2 ...
(#(str "Off we go" "!" " - " %1 %2) "again" "?") ;; -> "Off we go! - again?"
defn is just the same as using def and binding the name to an anonymous function:
(def follow-again (fn [] (str "Off we go" "!"))) ;; -> #'user/follow-again
(follow-again) ;; -> "Off we go!"
Namespaces
create a new namespace:
(ns alice.favfoods)
check current namespace:
*ns*
3 main ways of using libs in your namespace with require:
- use a require expression with the namespace as the argument.
(require 'clojure.set)
- use a require expression with an alias using :as.
(require '[alice.favfoods :as af])
;; it's also common to see it nested within the ns form:
(ns wonderland (:require [alice.favfoods :as af]))
- use require with the namespace and the :refer :all option.
(ns wonderland (:require [alice.favfoods :refer :all]))
however, the last way may create conflicts and makes the code harder to read.
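A middle ground between an alias and :refer :all is to :refer only the specific symbols you need, which keeps the code readable and avoids conflicts:

```clojure
;; bring in only union and intersection from clojure.set
(require '[clojure.set :refer [union intersection]])
(union #{1 2} #{2 3})        ;; -> #{1 2 3}
(intersection #{1 2} #{2 3}) ;; -> #{2}
```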
there's also a use expression that is the same as require with :refer :all.
to sum up:
(ns wonderland (:require [clojure.set :as s]))
(defn common-fav-foods [foods1 foods2] (let [food-set1 (set foods1) food-set2 (set foods2) common-foods (s/intersection food-set1 food-set2)] (str "Common Foods: " common-foods)))
(common-fav-foods [:jam :brownies :toast] [:lettuce :carrots :jam]) ;; -> "Common Foods: #{:jam}"
Chapter 2. Flow and Functional Transformations
expressions and forms
an expression is code that can be evaluated for a result.
a form is the term for a valid expression that can be evaluated.
(first)
is an expression but not a valid form.
Controlling the Flow with Logic
(class true) ;; -> java.lang.Boolean
(true? true) ;; -> true
(true? false) ;; -> false
(false? false) ;; -> true
(false? true) ;; -> false
(nil? nil) ;; -> true
(nil? 1) ;; -> false
(not true) ;; -> false
(not false) ;; -> true
(not nil) ;; -> true
remember, nil is logically false in tests.
(= :drinkme :drinkme) ;; -> true
;; collection equality is special:
(= '(:drinkme :bottle) [:drinkme :bottle]) ;; -> true
;; not= is a shortcut for (not (= x y))
(not= :drinkme :4) ;; -> true
Logic Tests on Collections
(empty? [:table :door :key]) ;; -> false
(empty? []) ;; -> true
(empty? {}) ;; -> true
(empty? '()) ;; -> true
if we look at the definition of empty?, it uses seq:
(defn empty? [coll] (not (seq coll)))
In Clojure, there are the collection and sequence abstractions.
The collections are simply a collection of elements.
the seq function turns a collection into a sequence. a sequence is a walkable list abstraction for the collection data structure.
(empty? []) ;; -> true
(seq []) ;; -> nil
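A non-empty collection gives back a truthy seq, which is what makes the idiom below work:

```clojure
(seq [1 2 3]) ;; -> (1 2 3), which is truthy
(seq [])      ;; -> nil, which is falsey
(if (seq []) "has elements" "empty") ;; -> "empty"
```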
remember, use seq to check for not-empty instead of (not (empty? x)). this works because nil is treated as logically false in tests.
;; every? takes a predicate and a collection as arguments
(every? odd? [1 3 5]) ;; -> true
(every? odd? [1 2 3 4 5]) ;; -> false
A predicate is a function that returns a value used in a logic test.
create a predicate:
(defn drinkable? [x] (= x :drinkme))
(every? drinkable? [:drinkme :drinkme]) ;; -> true
;; use an anonymous function
(every? (fn [x] (= x :drinkme)) [:drinkme :drinkme]) ;; -> true
;; or
(every? #(= % :drinkme) [:drinkme :drinkme]) ;; -> true
other logical test functions:
(not-any? #(= % :drinkme) [:drinkme :poison]) ;; -> false
(not-any? #(= % :drinkme) [:poison :poison]) ;; -> true
(some #(> % 3) [1 2 3 4 5]) ;; -> true
the some function can be used with a set as the predicate to return the first matching element of the sequence.
;; a set is a function of its own members
(#{1 2 3 4 5} 3) ;; -> 3
(some #{3} [1 2 3 4 5]) ;; -> 3
(some #{4 5} [1 2 3 4 5]) ;; -> 4
Harnessing the Power of Flow Control
if takes 3 parameters:
- the expression that is the logical test.
- the expression to evaluate if the test is true.
- the expression to evaluate if the test is false.
(if true "it is true" "it is false") ;; -> "it is true"
(if false "it is true" "it is false") ;; -> "it is false"
(if nil "it is true" "it is false") ;; -> "it is false"
(if (= :drinkme :drinkme) "Try it" "Don't try it") ;; -> "Try it"
combining let and if:
(let [need-to-grow-small (> 5 3)] (if need-to-grow-small "drink bottle" "don't drink bottle")) ;; -> "drink bottle"
or just use if-let:
(if-let [need-to-grow-small (> 5 1)] "drink bottle" "don't drink bottle") ;; -> "drink bottle"
if you only want to do one thing when the test is true, use when. when takes a predicate; if it is logically true, it evaluates the body; otherwise, it returns nil.
(defn drink [need-to-grow-small] (when need-to-grow-small "drink bottle"))
(drink true) ;; -> "drink bottle"
(drink false) ;; -> nil
there is also a when-let:
(when-let [need-to-grow-small true] "drink bottle") ;; -> "drink bottle"
(when-let [need-to-grow-small false] "drink bottle") ;; -> nil
when we want to test for multiple things, use cond:
(let [bottle "drinkme"] (cond (= bottle "poison") "don't touch" (= bottle "drinkme") "grow smaller" (= bottle "empty") "all gone")) ;; -> "grow smaller"
in a cond clause, once a logical test returns true and its expression is evaluated, none of the other test clauses are tried.
(let [x 5] (cond (> x 10) "bigger than 10" (> x 4) "bigger than 4" (> x 3) "bigger than 3")) ;; -> "bigger than 4"
to return a default value instead of nil, use :else:
(let [bottle "mystery"] (cond (= bottle "poison") "don't touch" (= bottle "drinkme") "grow smaller" (= bottle "empty") "all gone" :else "unknown")) ;; -> "unknown"
case is a shortcut for cond where there is only one test value and it can be compared with =:
(let [bottle "drinkme"] (case bottle "poison" "don't touch" "drinkme" "grow smaller" "empty" "all gone")) ;; -> "grow smaller"
however, case throws an exception when no test matches. simply add an extra value at the end as the default:
(let [bottle "mystery"] (case bottle "poison" "don't touch" "drinkme" "grow smaller" "empty" "all gone" "unknown")) ;; -> "unknown"
Functions Creating Functions and Other Neat Expressions
partial is a way of currying in Clojure. Currying is a way to generate a new function with an argument partially applied.
(defn adder [x y] (+ x y ))
(adder 3 4) ;; -> 7
(def adder-5 (partial adder 5))
(adder-5 10) ;; -> 15
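partial works just as well with functions of more arguments; it fills them in from the left (a small sketch, the shout function is made up):

```clojure
(defn shout [greeting name punctuation]
  (str greeting " " name punctuation))

;; fix the first argument, leave the other two open
((partial shout "Hello") "Alice" "!") ;; -> "Hello Alice!"
```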
comp creates a new function that combines other functions. It takes any number of functions as its parameters and returns the composition of those functions going from right to left.
(defn toggle-grow [direction] (if (= direction :small) :big :small))
(defn oh-my [direction] (str "Oh My! You are growing " direction))
(oh-my (toggle-grow :small)) ;; -> "Oh My! You are growing :big"
;; or use comp
(defn surprise [direction] ((comp oh-my toggle-grow) direction))
Destructuring
destructuring allows you to assign named bindings for the elements in things like vectors and maps.
(let [[color size] ["blue" "small"]] (str "The " color " door is " size)) ;; -> "The blue door is small"
destructuring knows what values to bind by the placement of the symbols in the binding expression.
;; without destructuring:
(let [x ["blue" "small"] color (first x) size (last x)] (str "The " color " door is " size)) ;; -> "The blue door is small"
to keep the initial data structure as a binding, use the :as keyword:
(let [[color [size] :as original] ["blue" ["small"]]] {:color color :size size :original original}) ;; -> {:color "blue", :size "small", :original ["blue" ["small"]]}
destructuring can also be done with maps:
(let [{flower1 :flower1 flower2 :flower2} {:flower1 "red" :flower2 "blue"}] (str "The flowers are " flower1 " and " flower2)) ;; -> "The flowers are red and blue"
specify default values with :or:
(let [{flower1 :flower1 flower2 :flower2 :or {flower2 "missing"}} {:flower1 "red"}] (str "The flowers are " flower1 " and " flower2)) ;; -> "The flowers are red and missing"
:as works in maps too:
(let [{flower1 :flower1 :as all-flowers} {:flower1 "red"}] [flower1 all-flowers]) ;; -> ["red" {:flower1 "red"}]
most of the time, you will want to give the binding the same name as the key; use the :keys directive:
(let [{:keys [flower1 flower2]} {:flower1 "red" :flower2 "blue"}] (str "The flowers are " flower1 " and " flower2)) ;; -> "The flowers are red and blue"
destructuring is also available on parameters when defining functions with defn:
(defn flower-colors [{:keys [flower1 flower2]}] (str "The flowers are " flower1 " and " flower2))
(flower-colors {:flower1 "red" :flower2 "blue"}) ;; -> "The flowers are red and blue"
The Power of Laziness
Clojure can also work with infinite lists.
(take 5 (range)) ;; -> (0 1 2 3 4)
calling range with no arguments returns an infinite lazy sequence. you can specify an end for the range by passing it a parameter:
(range 5) ;; -> (0 1 2 3 4)
(class (range 5)) ;; -> clojure.lang.LazySeq
but when you don't specify an end, the default is infinity.
repeat can be used to generate an infinite sequence of repeated items:
(repeat 3 "rabbit") ;; -> ("rabbit" "rabbit" "rabbit")
(class (repeat 3 "rabbit")) ;; -> clojure.lang.LazySeq
repeatedly takes a function that will be executed over and over again:
(repeat 5 (rand-int 10)) ;; -> (7 7 7 7 7) ;; rand-int is evaluated only once
(repeatedly 5 #(rand-int 10)) ;; -> e.g. (1 5 8 4 3) ;; rand-int is evaluated each time
(take 10 (repeatedly #(rand-int 10))) ;; -> 10 random ints
cycle takes a collection as an argument and returns a lazy sequence of the items in the collection repeated infinitely.
(take 3 (cycle ["big" "small"])) ;; -> ("big" "small" "big")
rest will return a lazy sequence when it operates on a lazy sequence:
(take 3 (rest (cycle ["big" "small"]))) ;; -> ("small" "big" "small")
Recursion
(def adjs ["normal" "too small" "too big" "is swimming"])
(defn alice-is [in out] (if (empty? in) out (alice-is (rest in) (conj out (str "Alice is " (first in))))))
(alice-is adjs [])
you can also do it with loop:
(defn alice-is [input] (loop [in input out []] (if (empty? in) out (recur (rest in) (conj out (str "Alice is " (first in)))))))
(alice-is adjs)
recur jumps back to the recursion point, which is the beginning of the loop, and rebinds with new parameters.
using recur also has another very important advantage: it provides a way of not "consuming the stack" for recursive calls:
(defn countdown [n] (if (= n 0) n (countdown (- n 1))))
(countdown 3) ;; -> 0
(countdown 100000) ;; -> StackOverflowError
in the plain recursive call, a new frame is added to the stack for every function call; recur needs only one stack frame at a time.
(defn countdown [n] (if (= n 0) n (recur (- n 1))))
(countdown 100000) ;; -> 0
In general, always use recur when you are doing recursive calls.
The Functional Shape of Data Transformations
map
(def animals [:mouse :duck :dodo :lory :eaglet])
(map #(str %) animals) ;; -> (":mouse" ":duck" ":dodo" ":lory" ":eaglet")
(class (map #(str %) animals)) ;; -> clojure.lang.LazySeq
map returns a lazy sequence.
doall forces evaluation of the side effects:
(def animal-print (doall (map #(println %) animals))) ;; prints each animal
animal-print ;; -> (nil nil nil nil nil), no println this time
map can also take more than one collection to map against. It uses each collection as a parameter to the function.
the map function terminates when the shortest collection ends. because of this, we can even use an infinite list with it:
(def animals ["mouse" "duck" "dodo" "lory" "eaglet"])
(defn gen-animal-string [animal color] (str color "-" animal)) ;; helper used below
(map gen-animal-string animals (cycle ["brown" "black"])) ;; -> ("brown-mouse" "black-duck" "brown-dodo" "black-lory" "brown-eaglet")
reduce differs from map in that you can change the shape of the result.
(reduce + [1 2 3 4 5]) ;; -> 15
(reduce (fn [r x] (+ r (* x x))) [1 2 3]) ;; -> 14
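reduce can also take an initial value before the collection, which seeds the accumulator:

```clojure
;; start the sum at 100
(reduce + 100 [1 2 3 4 5]) ;; -> 115
;; build a vector of squares, starting from an empty vector
(reduce (fn [acc x] (conj acc (* x x))) [] [1 2 3]) ;; -> [1 4 9]
```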
unlike map, you cannot reduce an infinite sequence (e.g., an endless range); it would never terminate.
filter
takes a predicate and a collection as an argument.
(filter (complement nil?) [:mouse nil :duck nil]) ;; -> (:mouse :duck)
the complement function takes a function and returns a new function that takes the same arguments but returns the opposite truth value.
((complement nil?) nil) ;; -> false
remove
takes a predicate and a collection.
(remove nil? [:mouse nil :duck nil]) ;; -> (:mouse :duck)
for
(for [animal [:mouse :duck :lory]] (str (name animal))) ;; -> ("mouse" "duck" "lory")
the result is a lazy sequence.
if more than one collection is specified in the for, it iterates over them in a nested fashion.
(for [animal [:mouse :duck :lory] color [:red :blue]] (str (name color) (name animal)))
:let modifier:
(for [animal [:mouse :duck :lory] color [:red :blue] :let [animal-str (str "animal-" (name animal)) color-str (str "color-" (name color)) display-str (str animal-str color-str)]] display-str)
:when modifier:
(for [animal [:mouse :duck :lory] color [:red :blue] :let [animal-str (str "animal-" (name animal)) color-str (str "color-" (name color)) display-str (str animal-str color-str)] :when (= color :blue)] display-str)
flatten
takes any nested collection and returns the contents in a single flattened sequence:
(flatten [[:duck [:mouse] [[:lory]]]]) ;; -> (:duck :mouse :lory)
into
takes a target collection and a source collection, and returns the target with all items of the source conj-ed onto it:
(into [] '(1 2 3)) ;; -> [1 2 3]
(into (sorted-map) {:b 2 :a 1 :z 3}) ;; -> {:a 1, :b 2, :z 3}
;; vectors into maps
(into {} [[:a 1] [:b 2] [:c 3]]) ;; -> {:a 1, :b 2, :c 3}
;; maps into vectors
(into [] {:a 1, :b 2, :c 3}) ;; -> [[:a 1] [:b 2] [:c 3]]
partition
partition is useful for dividing up collections:
(partition 3 [1 2 3 4 5 6 7 8 9 10]) ;; -> ((1 2 3) (4 5 6) (7 8 9))
(partition-all 3 [1 2 3 4 5 6 7 8 9 10]) ;; -> ((1 2 3) (4 5 6) (7 8 9) (10))
partition-by takes a function and applies it to every element in the collection. it creates a new partition every time the result changes:
(partition-by #(= 6 %) [1 2 3 4 5 6 7 8 9 10]) ;; -> ((1 2 3 4 5) (6) (7 8 9 10))
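partition also accepts an optional step argument between the size and the collection, which gives overlapping (sliding-window) partitions:

```clojure
;; windows of 2, advancing 1 element at a time
(partition 2 1 [1 2 3 4]) ;; -> ((1 2) (2 3) (3 4))
```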
Chapter 3. State and Concurrency
Using Atoms for independent Items
Atoms are designed to store the state of something that is independent, meaning we can change the value of it independently of changing any other state.
(def who-atom (atom :caterpillar))
to see the value of the atom, we need to dereference it with a preceding @:
who-atom ;; -> #<Atom@....: :caterpillar>
@who-atom ;; -> :caterpillar
changes to an atom are always made synchronously.
the first way is reset!, which replaces the old value with the new value and returns the new value:
(reset! who-atom :chrysalis)
@who-atom ;; -> :chrysalis
the other way is swap!, which applies a function to the old value and sets the result as the new value:
(def who-atom (atom :caterpillar)) ;; start from :caterpillar again so the outputs below match
(defn change [state] (case state :caterpillar :chrysalis :chrysalis :butterfly :butterfly))
(swap! who-atom change)
@who-atom ;; -> :chrysalis
(swap! who-atom change)
@who-atom ;; -> :butterfly
when using swap!, the function used must be free of side effects.
the swap! operation reads the value of the atom and applies the function to it, then compares the current value of the atom again to make sure another thread hasn't changed it in the meantime. if the value has changed, it retries.
this means any side effects in functions might be executed multiple times.
(def counter (atom 0))
@counter ;; -> 0
;; the underscore `_` is a name for a value we don't care about
(dotimes [_ 5] (swap! counter inc))
@counter ;; -> 5
to use multiple threads on this, use the future form. future takes a body and executes it in another thread.
(let [n 5] (future (dotimes [_ n] (swap! counter inc))) (future (dotimes [_ n] (swap! counter inc))) (future (dotimes [_ n] (swap! counter inc))))
@counter ;; -> 15
if we use a function with a side effect:
(defn inc-print [val] (println val) (inc val))
(let [n 2] (future (dotimes [_ n] (swap! counter inc-print))) (future (dotimes [_ n] (swap! counter inc-print))) (future (dotimes [_ n] (swap! counter inc-print))))
you can see some values printed multiple times.
Using Refs for Coordinated Changes
What if we have more than one thing that needs to change in a coordinated fashion?
refs allow coordinated shared state. what makes them different from atoms is that you need to change their values within a transaction. Clojure uses software transactional memory (STM) to accomplish this.
All actions on refs within the transaction are:
- Atomic: updates occur to all the refs; if something goes wrong, none of them are updated.
- Consistent: an optional validator function can be used to check values before the transaction commits.
- Isolated: a transaction has its own isolated view of the world; it will not see the effects of other transactions.
(def alice-height (ref 3))
(def right-hand-bites (ref 10))
like atoms, use a preceding @ to dereference refs:
@alice-height ;; -> 3
the alter form takes a ref and a function to apply to the current value (similar to swap!):
(defn eat-from-right-hand [] (when (pos? @right-hand-bites) (alter right-hand-bites dec) (alter alice-height #(+ % 24))))
(eat-from-right-hand) ;; -> IllegalStateException No transaction running
we need to run this in a transaction; we do that with a dosync form, which coordinates any state changes within it in a transaction.
(dosync (eat-from-right-hand)) ;; -> 27
@alice-height ;; -> 27
@right-hand-bites ;; -> 9
try it with multiple threads:
(let [n 2] (future (dotimes [_ n] (eat-from-right-hand))) (future (dotimes [_ n] (eat-from-right-hand))) (future (dotimes [_ n] (eat-from-right-hand))))
@alice-height ;; -> 147
@right-hand-bites ;; -> 4
the function given to alter must be side-effect free too, for the same reason: there can be retries.
there is another function called commute that we can use instead of alter; it also must be called in a transaction.
the difference is that commute will not retry during the transaction. instead, it uses an in-transaction value in the meantime, finally setting the ref value at the commit point of the transaction.
this feature is very nice for speed and limiting retries.
on the other hand, the function commute applies must be commutative (execution order does not matter, like addition), or have a last-one-in-wins behavior.
the example above could use commute instead of alter.
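A minimal sketch of what that looks like (the `bites-counter` name is made up); like alter, commute must run inside a dosync:

```clojure
(def bites-counter (ref 0))
;; commute applies inc commutatively, avoiding retries
(dosync (commute bites-counter inc))
@bites-counter ;; -> 1
```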
Transactions that involve time-consuming computations and a large number of refs are most likely to be retried. If you are looking to limit retries, this is a reason you might prefer an atom with a map of state over many refs.
when you have one ref that is defined in relation to another ref, use ref-set instead of alter:
(def x (ref 1))
(def y (ref 1))
(defn new-values [] (dosync (alter x inc) (ref-set y (+ 2 @x)))) ;; use ref-set
(let [n 2] (future (dotimes [_ n] (new-values))) (future (dotimes [_ n] (new-values))))
@x ;; -> 5
@y ;; -> 7
Using Agents to Manage Changes on Their Own
atoms and refs are synchronous; agents are for independent and asynchronous changes.
if you don't need the result right away, you can pass it to an agent for processing.
(def who-agent (agent :caterpillar))
@who-agent ;; -> :caterpillar
we can change the state of an agent by using send.
(defn change [state] (case state :caterpillar :chrysalis :chrysalis :butterfly :butterfly))
(send who-agent change)
@who-agent ;; -> :chrysalis
unlike swap! and alter, send returns immediately.
there is another way to dispatch an action to the agent: send-off. the difference is that send-off should be used for potentially I/O-blocking actions.
send uses a fixed thread pool, good for CPU-bound operations. send-off uses an expandable thread pool, which is necessary to keep I/O-blocking actions from exhausting the pool.
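At the call site the two look the same; only the thread pool behind them differs:

```clojure
;; dispatch the same action, but on the expandable (I/O-friendly) pool;
;; like send, this returns the agent immediately
(send-off who-agent change)
```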
agents can also participate in transactions: we can change refs within an agent action, or send actions that are dispatched only if the transaction commits.
when there's an Exception:
(def who-agent (agent :caterpillar))
(defn change [state] (case state :caterpillar :chrysalis :chrysalis :butterfly :butterfly))
(defn change-error [state] (throw (Exception. "Boom!")))
(send who-agent change-error) ;; -> failed
@who-agent ;; -> :caterpillar
the agent's state did not change, and the agent itself is now in a failed state. the exception is cached, and the next time an action is dispatched, the agent's error will be thrown:
(send-off who-agent change) ;; -> Exception Boom!
you can inspect the agent's errors with agent-errors:
(agent-errors who-agent) ;; -> Exception Boom!
the agent stays in this failed state until it is restarted with restart-agent, which clears its errors and resets its state:
(restart-agent who-agent :caterpillar) ;; -> :caterpillar
(send who-agent change)
@who-agent ;; -> :chrysalis
to control how the agent responds to errors, use set-error-mode!; it can be set to either :fail or :continue:
(set-error-mode! who-agent :continue)
If it is set to :continue and we assign an error handler with the set-error-handler! function, the handler will be called on an agent exception, and the agent will continue on without needing a restart:
(defn err-handler-fn [a ex] (println "error " ex " value is " @a))
(set-error-mode! who-agent :continue)
(set-error-handler! who-agent err-handler-fn)
(send who-agent change-error) ;; -> print out
@who-agent ;; -> :caterpillar
;; the agent continues on without a restart for the next call
(send who-agent change)
@who-agent ;; -> :chrysalis
to sum up:
| Type  | Communication | Coordination  |
|-------|---------------|---------------|
| Atom  | Synchronous   | Uncoordinated |
| Ref   | Synchronous   | Coordinated   |
| Agent | Asynchronous  | Uncoordinated |
Chapter 4. Java Interop and Polymorphism
Clojure uses the new and . special forms to interact with Java classes.
String cString = new String("caterpillar");
cString.toUpperCase();
clojure's equivalent:
(. "caterpillar" toUpperCase)
or
(.toUpperCase "caterpillar")
if the Java method takes arguments, they are included after the object.
String c1String = new String("caterpillar");
String c2String = new String("pillar");
c1String.indexOf(c2String);
clojure's equivalent:
(.indexOf "caterpillar" "pillar")
in Clojure, the first argument is the object we want to call the method on, and the rest are the method's arguments.
use new to create an instance of a Java object:
(new String "Hi!!")
Another way to create an instance of a Java class is to use a shorthand form by using a dot right after the class name:
(String. "Hi!!")
to import Java classes, use :import in the namespace form:
(ns caterpillar.network (:import (java.net InetAddress)))
to execute static methods on a Java class, use a forward slash (/):
(InetAddress/getByName "localhost")
to get a property off of an object, use the dot notation:
(.getHostName (InetAddress/getByName "localhost"))
we can also use fully qualified names without importing:
(java.net.InetAddress/getByName "localhost")
there is also a doto macro, which lets us take a Java object and act on it with a list of operations:
(def sb (doto (StringBuffer. "Who ") (.append "are ") (.append "you?")))
(.toString sb) ;; -> "Who are you?"
without doto:
(def sb (.append (.append (StringBuffer. "Who ") "are ") "you?"))
Practical Polymorphism
(defn who-are-you [input] (cond (= java.lang.String (class input)) "String - Who are you?" (= clojure.lang.Keyword (class input)) "Keyword - Who are you?" (= java.lang.Long (class input)) "Number - Who are you?"))
(who-are-you :alice) ;; -> Keyword
(who-are-you "alice") ;; -> String
(who-are-you 123) ;; -> Number
(who-are-you true) ;; -> nil
we can express this polymorphism in Clojure with multimethods.
defmulti first specifies how dispatch happens, that is, how it decides which of the following methods to use:
(defmulti who-are-you class)
(defmethod who-are-you java.lang.String [input] (str "String - Who are you? " input))
(defmethod who-are-you clojure.lang.Keyword [input] (str "Keyword - Who are you? " input))
(defmethod who-are-you java.lang.Long [input] (str "Number - Who are you? " input))
(who-are-you :alice) ;; -> Keyword
(who-are-you "alice") ;; -> String
(who-are-you 123) ;; -> Number
(who-are-you true) ;; Exception
we can also provide a default dispatch method using the
:default keyword:
(defmethod who-are-you :default [input] (str "I don't know - who are you?" input))
(who-are-you true) ;; -> I don't know ...
any function can be given to dispatch on. we can even inspect the value of a map as input:
(defmulti eat-mushroom (fn [height] (if (< height 3) :grow :shrink)))
(defmethod eat-mushroom :grow [_] "Eat the right side to grow.")
(defmethod eat-mushroom :shrink [_] "Eat the left side to shrink.")
(eat-mushroom 1) ;; -> ... grow
(eat-mushroom 9) ;; -> ... shrink
Another way to use polymorphism in clojure is to use protocols.
protocols can handle polymorphism elegantly with groups of functions.
(defprotocol BigMushroom (eat-mushroom [this]))
the parameter
this is the thing that we are going to perform the function on:
(extend-protocol BigMushroom java.lang.String (eat-mushroom [this] (str (.toUpperCase this) " mmm tasty!"))
clojure.lang.Keyword (eat-mushroom [this] (case this :grow "Eat the right side!" :shrink "Eat the left side!"))
java.lang.Long (eat-mushroom [this] (if (< this 3) "Eat the right side to grow!" "Eat the left side to shrink!")))
(eat-mushroom "Big Mushroom") ;; -> "BIG MUSHROOM mmm tasty!"
(eat-mushroom :grow) ;; -> "Eat the right side!"
(eat-mushroom 1) ;; -> "Eat the right side to grow!"
we are using protocols to add methods to existing data structures. what if we want to add our own data structure?
Clojure's answer to this is data types.
there are two solutions:
- if you need structured data, use defrecord, which creates a class with a new type.
- if you just need an object with a type (to save memory), use deftype.
defrecord
the defrecord form defines the fields that the class will hold:
(defrecord Mushroom [color height]) ;; -> caterpillar.network.Mushroom
now we can create a new object with dot notation:
(def regular-mushroom (Mushroom. "white and blue polka dots" "2 inches"))
(class regular-mushroom) ;; -> caterpillar.network.Mushroom
we can get the values with the dot-dash syntax (.-field), which is preferred over the plain dot prefix for accessing fields:
(.-color regular-mushroom) ;; -> "white ..."
(.-height regular-mushroom) ;; -> "2 inches"
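Records also behave like maps, so keyword lookup and assoc work on them too:

```clojure
;; keyword lookup on a record field
(:color regular-mushroom) ;; -> "white and blue polka dots"
;; assoc returns a NEW record with the updated field
(assoc regular-mushroom :height "3 inches")
```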
we can combine structured data types with protocols to implement interfaces:
(defprotocol Edible (bite-right-side [this]) (bite-left-side [this]))
implement the protocol:
(defrecord WonderlandMushroom [color height] Edible (bite-right-side [this] (str "The " color " bite makes you grow bigger")) (bite-left-side [this] (str "The " color " bite makes you grow smaller")))
then another data type implements Edible:
(defrecord RegularMushroom [color height] Edible (bite-right-side [this] (str "The " color " bite tastes bad")) (bite-left-side [this] (str "The " color " bite tastes bad too")))
construct our mushroom objects:
(def alice-mushroom (WonderlandMushroom. "blue dots" "3 inches"))
(def reg-mushroom (RegularMushroom. "brown" "1 inch"))
(bite-right-side alice-mushroom) ;; -> "The blue dots bite makes you grow bigger"
(bite-right-side reg-mushroom) ;; -> "The brown bite tastes bad"
A real-world example of protocols is implementing different types of persistence: for example, one data type that writes to a database and another that writes to an S3 bucket, both implementing the same protocol.
deftype
sometimes we don't care about the structure or the map-lookup features provided by defrecord; we just need an object with a type to save memory. in that case, reach for deftype:
(defprotocol Edible (bite-right-side [this]) (bite-left-side [this]))
(deftype WonderlandMushroom [] Edible (bite-right-side [this] (str "The bite makes you grow bigger")) (bite-left-side [this] (str "The bite makes you grow smaller")))
(deftype RegularMushroom [] Edible (bite-right-side [this] (str "The bite tastes bad")) (bite-left-side [this] (str "The bite tastes bad too")))
(def alice-mushroom (WonderlandMushroom.))
(def reg-mushroom (RegularMushroom.))
(bite-right-side alice-mushroom) ;; -> "The bite makes you grow bigger"
(bite-right-side reg-mushroom) ;; -> "The bite tastes bad"
the main difference between using protocols with defrecord and deftype is how you want your data organized. if you want structured data, choose defrecord; otherwise, use deftype.
Caution
think before you use protocols. sometimes a simpler solution without protocols will do:
(defn bite-right-side [mushroom]
  (if (= (:type mushroom) "wonderland")
    "The bite makes you grow bigger"
    "The bite tastes bad"))
(bite-right-side {:type "wonderland"}) ;; -> "The bite makes you grow bigger"
(bite-right-side {:type "regular"}) ;; -> "The bite tastes bad"
Chapter 5. How to Use Clojure Projects and Libraries
to create a new project skeleton:
$ lein new serpent-talk
for namespaces and filenames: dashes are not valid in java class names,
so always use underscores for directories and filenames, and use dashes for namespaces.
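for example (an illustrative mapping following that convention), the serpent-talk.talk namespace used below lives in a file whose path uses underscores:

```
src/serpent_talk/talk.clj      <- directories and filename use underscores

(ns serpent-talk.talk)         <- the namespace inside it uses dashes
```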
show dependencies as tree:
$ lein deps :tree
run main function of a namespace from the command line using
lein run -m:
$ lein run -m serpent-talk.talk "Hello pigeon"
you can also add this to
project.clj file to specify the main function:
:main serpent-talk.talk
then can run with:
$ lein run "Hello pigeon"
Chapter 6. Communication with core.async
create a new project with
core.async
$ lein new async-tea-party
add dependency to
project.clj:
:dependencies [[org.clojure/clojure "1.6.0"]
               [org.clojure/core.async "xxx"]]
include
core.async in the namespace (
src/async_tea_party/core.clj file):
(ns async-tea-party.core
  (:require [clojure.core.async :as async]))
Basics of core.async Channels
create a channel:
(def tea-channel (async/chan))
there are two main ways that you get things on and off channels: synchronously and asynchronously.
>!!: a blocking put, puts data on the channel synchronously.
<!!: a blocking take, takes data off the channel synchronously.
>!: an async put, puts data on the channel asynchronously, needs to be used with a
go block.
<!: an async take, takes data off the channel asynchronously, needs to be used with a
go block.
so when you see
!! it means a blocking call.
the
tea-channel created above is an unbuffered channel: a blocking put would block the main thread until the value got taken off. (it will also lock up the REPL)
to create a buffered channel:
(def tea-channel (async/chan 10))
now test with the blocking put:
(async/>!! tea-channel :cup-of-tea) ;; -> true
get it off with a blocking take:
(async/<!! tea-channel) ;; -> :cup-of-tea
to close a channel:
(async/close! tea-channel)
this closes the channel to new inputs, however, if there are still values on it, they can be taken off. when the channel is finally empty, it will return a
nil.
(async/>!! tea-channel :cup-of-tea-2)
(async/>!! tea-channel :cup-of-tea-3)
(async/>!! tea-channel :cup-of-tea-4)
(async/close! tea-channel) ;; -> nil
;; new puts will fail
(async/>!! tea-channel :cup-of-tea-5) ;; -> false
;; but existing values can be taken
(async/<!! tea-channel) ;; -> :cup-of-tea-2
(async/<!! tea-channel) ;; -> :cup-of-tea-3
(async/<!! tea-channel) ;; -> :cup-of-tea-4
;; until it's empty
(async/<!! tea-channel) ;; -> nil
also
nil is special, you can not put it on a channel:
(async/>!! tea-channel nil) ;; Exception
so
nil lets us know that the channel is empty.
now try async:
(let [tea-channel (async/chan)]
  (async/go (async/>! tea-channel :cup-of-tea-1))
  (async/go (println "Thanks for the " (async/<! tea-channel))))
;; Will print to stdout:
;; Thanks for the  :cup-of-tea-1
SWFLoader that unfortunately sets Stage.scaleMode
chrisisme, Nov 11, 2009 11:43 AM
In a Flex project I have been working on, I've received a new SWF asset to incorporate. No problem usually. However, this asset is setting the stage's scaleMode to showAll. OUCH. The effect is this: the Flex app loads and looks fine; then SWFLoader is called, the asset is loaded as the SWFLoader's source, and the entire stage is scaled up to at least 3x its size. On the complete event for the loader I go ahead and set the scaleMode back to noScale, but then the application jumps back to looking correct. The problem is the jumping up then back again.
I am not able to have the asset rebuilt, so I am stuck with what I have. How can I work around this from Flex. Since the stage is shared, this SWF keep dorking everything up.
1. Re: SWFLoader that unfortunately sets Stage.scaleMode
Flex harUI, Nov 11, 2009 11:52 AM (in response to chrisisme)
1 person found this helpful
Depends on what the asset really is. If you only need a symbol from it, you can try embedding just those symbols and that might skip the code that calls showAll.
If you need the whole SWF because you want the timeline but don't care about what it actually does, you can deploy it on a different domain and then it won't have access to the stage. It will throw a security error but only folks with debugger players will see it.
Another option is to find out where the code that changes scaleMode is. If it is in a particular AS3 class in the SWF, you can try creating a custom version of that class in your main app. Your version of the class will supersede the one in the SWF.
Another option is to find out when the code changes scaleMode. It might be that you can double-check on enterFrame or render events and set it back before the player actually draws it in the wrong scale mode.
Alex Harui
Flex SDK Developer
Adobe Systems Inc.
2. Re: SWFLoader that unfortunately sets Stage.scaleMode
chrisisme, Nov 11, 2009 12:12 PM (in response to Flex harUI)
Wow, great answer, by far one of the most helpful answers I have received on any development forum. Thank you.
Follow-up question. I do need the full SWF file. It is actually an Object2VR generated SWF. I've decompiled it, and have the .as files. I did not realize that I can create a main application version of that class and it would supersede it. That is amazingly helpful since I have had similar issues with other assets. My follow-up question relates to where this custom version should go in the project so this works. I am building in a Flex Library Project, so this would actually need to supersede from there. I'm just not sure I understand where this class should go so that the SWF file will use my version instead of its built in one.
EDIT: The classes used in the original SWF are all in the default package {}. An issue I ran into is that the one object I need extends another which imports several others, and so forth and so on. Without creating local versions of all of these, I just don't see how to get this approach to work, though it really would be the most powerful.
3. Re: SWFLoader that unfortunately sets Stage.scaleMode
msakrejda, Nov 11, 2009 12:19 PM (in response to chrisisme)
Ah, the joys of monkey patching. You only need to ensure that it is referenced in both your Flex Library project and the eventual Application (I believe), and Flash Player will take care of the rest. I generally have an array of dummy references somewhere in the main Application class (or somewhere else I know will be referened) when I need to do something like this. E.g.,
var dummyRefs:Array = [ MonkeyPatchedClass, OtherMonkeyPatchedClass ];
(with the appropriate imports, obviously--although you won't need imports if the classes you're monkey-patching are in the top-level package). Note that you need to use exactly the same names as the original assets (i.e., same package structure and same class name).
You can add the swf as an externally linked library during compilation so you don't pull in the entire dependency chain.
4. Re: SWFLoader that unfortunately sets Stage.scaleMode
chrisisme, Nov 11, 2009 12:33 PM (in response to msakrejda)
Okay, I am sort of following that, but I am starting to feel totally green again.
I have one class in the SWF that I need to supersede. It is VrObject.as. It is in the default package, and it extends GgViewer. This is where everything goes into dependency hell. If I don't also include the GgViewer class, then all the "override protected function" calls in the VrObject toss up compile problems. But as soon as I add a custom version of the GgViewer class it goes south. Is there a way that I am not seeing to use the custom VrObject.as without the compiler worrying about it not seeing the dependencies since they are all in the SWF which won't be loaded until load time.
5. Re: SWFLoader that unfortunately sets Stage.scaleMode
Flex harUI, Nov 11, 2009 9:27 PM (in response to chrisisme)
Unfortunately, you will need the VrObject.as and all of its dependencies like GgViewer and whatever else it needs. Hopefully you can get your hands on the actual GgViewer. You will want to use the actual copies of every AS file except VrObject or whatever class is accessing the stage.
Note also that if you've decompiled the SWF and know when it sets scaleMode, it might be possible to find out which frame that is going to happen in and when it will happen in that frame and reset scaleMode right after instead of trying to do this trick.
Finally, our pattern for class inclusion is:
import foo.bar.SomeClass;
SomeClass;
Alex Harui
Flex SDK Developer
Adobe Systems Inc. | https://forums.adobe.com/thread/522129 | CC-MAIN-2017-51 | refinedweb | 1,029 | 71.95 |
I have a nasty issue with 'cin' in loops. The first time through it works fine (I want to accept only an int for a function call in my program, and ignore chars that would mess it up) but subsequent 'cin' calls are ignored and the program loops indefinitely. Here's a short program to demonstrate:
Code:
#include <stdlib.h>
#include <iostream>
using namespace std;

int main()
{
    int input = 0;
    while(true)
    {
        cout << "Write something then press enter: \n";
        cin >> input; // <- Doesn't get called after the first time
        cin.ignore();
        if(input == 5)
            break;
        else
            cout << "Invalid input.\n";
        system("PAUSE");
    }
    system("PAUSE");
    return EXIT_SUCCESS;
}

I've put in an extra system("PAUSE") call to stop it looping forever.
Any suggestions on how to fix it? | https://cboard.cprogramming.com/cplusplus-programming/91414-cin-loop-problem.html | CC-MAIN-2017-13 | refinedweb | 141 | 70.53 |
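For reference, the usual cause (an editor's sketch, not from the original thread): when `cin >> input` meets non-numeric input, the stream's failbit is set, and until it is cleared every later extraction fails instantly, which is why the loop spins. The fix is to clear the error state and discard the offending characters before retrying. The idea is shown below with a stringstream standing in for cin, so it is self-contained:

```cpp
#include <iostream>
#include <limits>
#include <sstream>   // only needed for the stringstream self-check below
#include <cassert>

// Try to read an int from 'in'. When extraction fails (e.g. the user typed
// letters), the stream's failbit is set; until it is cleared, every later
// extraction fails instantly. So we clear the error state and throw away
// the rest of the bad line. Returns true when an int was stored in 'out'.
bool read_int(std::istream& in, int& out)
{
    if (in >> out)
        return true;                         // got a valid int
    in.clear();                              // reset failbit so the stream works again
    in.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // discard bad input
    return false;
}

// Self-check using a stringstream standing in for cin:
inline bool demo()
{
    std::istringstream fake_cin("abc\n5\n"); // simulate typing "abc", then "5"
    int v = 0;
    bool first_failed = !read_int(fake_cin, v); // "abc" is rejected, stream reset
    bool then_worked  =  read_int(fake_cin, v); // now "5" is read successfully
    return first_failed && then_worked && v == 5;
}
```

In the original program, calling cin.clear() before the cin.ignore() whenever the extraction fails would stop the endless loop.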
Porting a Windows Runtime 8.x project to a UWP project
You have two options when you begin the porting process. One is to edit a copy of your existing project files, including the app package manifest (for that option, see the info about updating your project files in Migrate apps to the Universal Windows Platform (UWP)). The other option is to create a new Windows 10 project in Visual Studio and copy your files into it. The first section in this topic describes that second option, but the rest of the topic has additional info applicable to both options. You can also choose to keep your new Windows 10 project in the same solution as your existing projects and share source code files using a shared project. Or, you can keep the new project in a solution of its own and share source code files using the linked files feature in Visual Studio.
Create the project and copy files to it
These steps focus on the option to create a new Windows 10 project in Visual Studio and copy your files into it. Some of the specifics around how many projects you create, and which files you copy over, will depend on the factors and decisions described in If you have a Universal 8.1 app and the sections that follow it. These steps assume the simplest case.
- Launch Microsoft Visual Studio 2015 and create a new Blank Application (Windows Universal) project. For more info, see Jumpstart your Windows Runtime 8.x app using templates (C#, C++, Visual Basic). Your new project builds an app package (an appx file) that will run on all device families.
- In your Universal 8.1 app project, identify all the source code files and visual asset files that you want to reuse. Using File Explorer, copy data models, view models, visual assets, Resource Dictionaries, folder structure, and anything else that you wish to re-use, to your new project. Copy or create sub-folders on disk as necessary.
- Copy views (for example, MainPage.xaml and MainPage.xaml.cs) into the new project, too. Again, create new sub-folders as necessary, and remove the existing views from the project. But, before you over-write or remove a view that Visual Studio generated, keep a copy because it may be useful to refer to it later. The first phase of porting a Universal 8.1 app focuses on getting it to look good and work well on one device family. Later, you'll turn your attention to making sure the views adapt themselves well to all form factors, and optionally to adding any adaptive code to get the most from a particular device family.
- In Solution Explorer, make sure Show All Files is toggled on. Select the files that you copied, right-click them, and click Include In Project. This will automatically include their containing folders. You can then toggle Show All Files off if you like. An alternative workflow, if you prefer, is to use the Add Existing Item command, having created any necessary sub-folders in the Visual Studio Solution Explorer. Double-check that your visual assets have Build Action set to Content and Copy to Output Directory set to Do not copy.
- You are likely to see some build errors at this stage. But, if you know what you need to change, then you can use Visual Studio's Find and Replace command to make bulk changes to your source code; and in the imperative code editor in Visual Studio, use the Resolve and Organize Usings commands on the context menu for more targeted changes.
Maximizing markup and code reuse
You will find that refactoring a little, and/or adding adaptive code (which is explained below), will allow you to maximize the markup and code that works across all device families. Here are more details.
- Files that are common to all device families need no special consideration. Those files will be used by the app on all the device families that it runs on. This includes XAML markup files, imperative source code files, and asset files.
- It is possible for your app to detect the device family that it is running on and navigate to a view that has been designed specifically for that device family. For more details, see Detecting the platform your app is running on.
- A similar technique that you may find useful if there is no alternative is to give a markup file or ResourceDictionary file (or the folder that contains the file) a special name such that it is automatically loaded at runtime only when your app runs on a particular device family. This technique is illustrated in the Bookstore1 case study.
- You should be able to remove a lot of the conditional compilation directives in your Universal 8.1 app's source code if you only need to support Windows 10. See Conditional compilation and adaptive code in this topic.
- To use features that are not available on all device families (for example, printers, scanners, or the camera button), you can write adaptive code. See the third example in Conditional compilation and adaptive code in this topic.
- If you want to support Windows 8.1, Windows Phone 8.1, and Windows 10, then you can keep three projects in the same solution and share code with a Shared project. Alternatively, you can share source code files between projects. Here's how: in Visual Studio, right-click the project in Solution Explorer, select Add Existing Item, select the files to share, and then click Add As Link. Store your source code files in a common folder on the file system where the projects that link to them can see them. And don't forget to add them to source control.
- For reuse at the binary level, rather than the source code level, see Creating Windows Runtime Components in C# and Visual Basic. There are also Portable Class Libraries, which support the subset of .NET APIs that are available in the .NET Framework for Windows 8.1, Windows Phone 8.1, and Windows 10 apps (.NET Core), and the full .NET Framework. Portable Class Library assemblies are binary compatible with all these platforms. Use Visual Studio to create a project that targets a Portable Class Library. See Cross-Platform Development with the Portable Class Library.
Extension SDKs
Most of the Windows Runtime APIs your Universal 8.1 app already calls are implemented in the set of APIs known as the universal device family. But, some are implemented in extension SDKs, and Visual Studio only recognizes APIs that are implemented by your app's target device family or by any extension SDKs that you have referenced.
If you get compile errors about namespaces or types or members that could not be found, then this is likely to be the cause. Open the API's topic in the API reference documentation and navigate to the Requirements section: that will tell you what the implementing device family is. If that's not your target device family, then to make the API available to your project, you will need a reference to the extension SDK for that device family.
Click Project > Add Reference > Windows Universal > Extensions and select the appropriate extension SDK. For example, if the APIs you want to call are available only in the mobile device family, and they were introduced in version 10.0.x.y, then select Windows Mobile Extensions for the UWP.
That will add the following reference to your project file:
<ItemGroup>
  <SDKReference Include="WindowsMobile, Version=10.0.x.y">
    <Name>Windows Mobile Extensions for the UWP</Name>
  </SDKReference>
</ItemGroup>
The name and version number match the folders in the installed location of your SDK. For example, the above information matches this folder name:
\Program Files (x86)\Windows Kits\10\Extension SDKs\WindowsMobile\10.0.x.y
Unless your app targets the device family that implements the API, you'll need to use the ApiInformation class to test for the presence of the API before you call it (this is called adaptive code). This condition will then be evaluated wherever your app runs, but it will only evaluate to true on devices where the API is present and therefore available to call. Only use extension SDKs and adaptive code after first checking whether a universal API exists. Some examples are given in the section below.
Also, see App package manifest.
Conditional compilation and adaptive code
If you're using conditional compilation (with C# preprocessor directives) so that your code files work on both Windows 8.1 and Windows Phone 8.1, then you can now review that conditional compilation in light of the convergence work done in Windows 10. Convergence means that, in your Windows 10 app, some conditions can be removed altogether. Others change to run-time checks, as demonstrated in the examples below.
Note If you want to support Windows 8.1, Windows Phone 8.1, and Windows 10 in a single code file, then you can do that too. If you look in your Windows 10 project at the project properties pages, you'll see that the project defines WINDOWS_UAP as a conditional compilation symbol. So, you can use that in combination with WINDOWS_APP and WINDOWS_PHONE_APP. These examples show the simpler case of removing the conditional compilation from a Universal 8.1 app and substituting the equivalent code for a Windows 10 app.
This first example shows the usage pattern for the PickSingleFileAsync API (which applies only to Windows 8.1) and the PickSingleFileAndContinue API (which applies only to Windows Phone 8.1).
#if WINDOWS_APP
// Use Windows.Storage.Pickers.FileOpenPicker.PickSingleFileAsync
#else
// Use Windows.Storage.Pickers.FileOpenPicker.PickSingleFileAndContinue
#endif // WINDOWS_APP
Windows 10 converges on the PickSingleFileAsync API, so your code simplifies to this:
// Use Windows.Storage.Pickers.FileOpenPicker.PickSingleFileAsync
In this example, we handle the hardware back button—but only on Windows Phone.
#if WINDOWS_PHONE_APP
    Windows.Phone.UI.Input.HardwareButtons.BackPressed += this.HardwareButtons_BackPressed;
#endif // WINDOWS_PHONE_APP

...

#if WINDOWS_PHONE_APP
void HardwareButtons_BackPressed(object sender, Windows.Phone.UI.Input.BackPressedEventArgs e)
{
    // Handle the event.
}
#endif // WINDOWS_PHONE_APP
In Windows 10, the back button event is a universal concept. Back buttons implemented in hardware or in software will all raise the BackRequested event, so that's the one to handle.
Windows.UI.Core.SystemNavigationManager.GetForCurrentView().BackRequested += this.ViewModelLocator_BackRequested;

...

private void ViewModelLocator_BackRequested(object sender, Windows.UI.Core.BackRequestedEventArgs e)
{
    // Handle the event.
}
This final example is similar to the previous one. Here, we handle the hardware camera button—but again, only in the code compiled into the Windows Phone app package.
#if WINDOWS_PHONE_APP
    Windows.Phone.UI.Input.HardwareButtons.CameraPressed += this.HardwareButtons_CameraPressed;
#endif // WINDOWS_PHONE_APP

...

#if WINDOWS_PHONE_APP
void HardwareButtons_CameraPressed(object sender, Windows.Phone.UI.Input.CameraEventArgs e)
{
    // Handle the event.
}
#endif // WINDOWS_PHONE_APP
In Windows 10, the hardware camera button is a concept particular to the mobile device family. Because one app package will be running on all devices, we change our compile-time condition into a run-time condition using what is known as adaptive code. To do that, we use the ApiInformation class to query at run-time for the presence of the HardwareButtons class. HardwareButtons is defined in the mobile extension SDK, so we'll need to add a reference to that SDK to our project for this code to compile. Note, though, that the handler will only be executed on a device that implements the types defined in the mobile extension SDK, and that's the mobile device family. So, this code is morally equivalent to the Universal 8.1 code in that it is careful only to use features that are present, although it achieves that in a different way.
// Note: Cache the value instead of querying it more than once.
bool isHardwareButtonsAPIPresent = Windows.Foundation.Metadata.ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons");

if (isHardwareButtonsAPIPresent)
{
    Windows.Phone.UI.Input.HardwareButtons.CameraPressed += this.HardwareButtons_CameraPressed;
}

...

private void HardwareButtons_CameraPressed(object sender, Windows.Phone.UI.Input.CameraEventArgs e)
{
    // Handle the event.
}
Also, see Detecting the platform your app is running on.
App package manifest
The What's changed in Windows 10 topic lists changes to the package manifest schema reference for Windows 10, including elements that have been added, removed, and changed. For reference info on all elements, attributes, and types in the schema, see Element Hierarchy. If you're porting a Windows Phone Store app, or if your app is an update to an app from the Windows Phone Store, ensure that the mp:PhoneIdentity element matches what's in the app manifest of your previous app (use the same GUIDs that were assigned to the app by the Store). This will ensure that users of your app who are upgrading to Windows 10 will receive your new app as an update, not a duplicate. See the mp:PhoneIdentity reference topic for more details.
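For illustration, here is a sketch of roughly what that manifest element looks like (an editor's addition: the GUIDs are placeholders, and the element is conventionally written with the mp prefix bound to the phone-manifest namespace):

```xml
<!-- on the Package element: xmlns:mp="http://schemas.microsoft.com/appx/2014/phone/manifest" -->
<mp:PhoneIdentity
    PhoneProductId="11111111-2222-3333-4444-555555555555"
    PhonePublisherId="00000000-0000-0000-0000-000000000000" />
```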
The settings in your project (including any extension SDKs references) determine the API surface area that your app can call. But, your app package manifest is what determines the actual set of devices that your customers can install your app onto from the Store. For more info, see examples in TargetDeviceFamily.
You can edit the app package manifest to set.
The next topic is Troubleshooting. | https://docs.microsoft.com/en-us/windows/uwp/porting/w8x-to-uwp-porting-to-a-uwp-project | CC-MAIN-2019-04 | refinedweb | 2,166 | 56.15 |
AWT TextEvent Class
The object of this class represents a text event. A TextEvent is generated when a character is entered in a text field or text area. The TextEvent instance does not include the characters currently in the text component that generated the event; instead, we are provided with other methods to retrieve that information.
Class declaration
Following is the declaration for java.awt.event.TextEvent class:
public class TextEvent extends AWTEvent
Field
Following are the fields for java.awt.event.TextEvent class:
static int TEXT_FIRST --The first number in the range of ids used for text events.
static int TEXT_LAST --The last number in the range of ids used for text events.
static int TEXT_VALUE_CHANGED --This event id indicates that object's text changed.
Class constructors

TextEvent(Object source, int id) --Constructs a TextEvent object with the specified source object and event type id.
Class methods

String paramString() --Returns a parameter string identifying this text event.
Methods inherited
This class inherits methods from the following classes:
java.awt.AWTEvent
java.util.EventObject
java.lang.Object | http://www.tutorialspoint.com/awt/awt_text_event.htm | CC-MAIN-2014-52 | refinedweb | 148 | 56.35 |
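A minimal example (an editor's sketch, not part of the original reference page): it fires a synthetic TextEvent with a plain Object as the source, so it runs without a GUI; with a real TextField you would instead register the listener via addTextListener.

```java
import java.awt.event.TextEvent;
import java.awt.event.TextListener;

public class TextEventDemo {
    public static void main(String[] args) {
        // With a real text component you would register the listener:
        //   textField.addTextListener(listener);
        TextListener listener =
            e -> System.out.println("Text changed: " + e.paramString());

        // For illustration we fire a synthetic event by hand. Any Object can
        // act as the source, so no GUI (and no TextField) is required here.
        TextEvent event = new TextEvent("dummy source", TextEvent.TEXT_VALUE_CHANGED);
        listener.textValueChanged(event);  // prints "Text changed: TEXT_VALUE_CHANGED"
    }
}
```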
#include <solver_gmres.h>
Class to hold temporary vectors. This class automatically allocates a new vector, once it is needed.
A future version should also be able to shift through vectors automatically, avoiding restart.
Definition at line 55 of file solver_gmres.h.
Constructor. Prepares an array of
VECTOR of length
max_size.
Delete all allocated vectors.
Get vector number
i. If this vector was unused before, an error occurs.
Get vector number
i. Allocate it if necessary.
If a vector must be allocated,
temp is used to reinit it to the proper dimensions.
Pool were vectors are obtained from.
Definition at line 88 of file solver_gmres.h.
Field for storing the vectors.
Definition at line 93 of file solver_gmres.h.
Offset of the first vector. This is for later when vector rotation will be implemented.
Definition at line 99 of file solver_gmres.h. | http://www.dealii.org/developer/doxygen/deal.II/classinternal_1_1SolverGMRES_1_1TmpVectors.html | CC-MAIN-2014-15 | refinedweb | 141 | 63.76 |
I am getting the below error when i try to set a hash value to the parent url from iframe which contains another domain url:
Unsafe JavaScript attempt to access frame with URL "URL1" from frame with URL "URL2". Domains, protocols and ports must match.
How can I fix this problem?
From a child document of different origin you are not allowed access to the top window's
location.hash property, but you are allowed to set the
location property itself.
This means that given that the top windows location is, instead of doing
parent.location.hash = "#foobar";
you do need to know the parent's location and do
parent.location = "";
Since the resource is not navigated this will work as expected, only changing the hash part of the url.
If you are using this for cross-domain communication, then I would recommend using easyXDM instead.
Crossframe-Scripting is not possible when the two frames have different domains -> Security.
See this:
Now to answer your question: there is no solution or work around, you simply should check your website-design why there must be two frames from different domains that changes the url of the other one.
A solution could be to use a local file which retrieves the remote content
remoteInclude.php
<?php
$url = $_GET['url'];
$contents = file_get_contents($url);
echo $contents;
The HTML
<iframe frameborder="1" id="frametest" src="/remoteInclude.php?url=REMOTE_URL_HERE"></iframe>
<script>
$("#frametest").load(function () {
    var contents = $("#frametest").contents();
});
</script>
The problem is that even if you create a proxy or load the content and inject it as if it's local, any scripts that the content defines will be loaded from the other domain and cause cross-domain problems.
I was getting the same error message when I tried to chamge the domain for iframe.src.
For me, the answer was to change the iframe.src to a url on the SAME domain, but which was actually an html re-direct page to the desired domain. The other domain then showed up in my iframe without any errors.
Worked like a charm. :)
I found that using the XFBML version of the Facebook like button instead of the HTML5 version fixed this problem. Add the below code where you want the button to appear:
<fb:like></fb:like>
Then add this to your HTML tag:
xmlns:fb=""
Read something about Javascript access security here:
Specifically about implementing Vimeo and JavaScript unsafe access, I found this on a discussion on the Vimeo forums:
If you're using a webkit browser (Safari or Chrome) that error is actually coming from the Webkit Inspector trying to access the iframe (the Webkit Inspector is actually written in HTML and Javascript).
The thing to make sure is that you can't call any of the api or addEvent methods on the iframe until the player has finished loading. As per the example, you need to add the "onLoad" event first and then execute your code inside of that handler.
And second, I checked the link you provided, and it loads fast and fine for me, so it is definitely not the reason why this page keeps loading for a long time.
Chrome doesn't allow you to do that. Code from an iframe cannot access the parent's code.
First, check that both pages are being referenced using the SRC attr of the iframe in exactly the same way, I mean:
beacause it is possible, that even if they are running in the same machine, one of those iFrames is called like:
and the other:
or any other combination.
You can check it by inspecting the code with the Developer tools in all modern browsers.
Chrome allows you to call parent's function if they are in the same domain (I do it everyday), so there shouldnt be any problem.
I use this function to acces children iFrames:
var getIframe = function (id) {
    if ($.browser.mozilla)
        return document.getElementById(id).contentWindow;
    return window.frames[id];
};
and just "parent.yourParentFunction()" when you want to access a function in the parent.
Check that the parent's function is assigned to the Window object (global namespace)
Good luck with that :-)
The parent frame may have set a document.domain value different from "samedomain". Update the document.domain property in the iframe js to the value set in the parent frame and it should work. | https://javascriptinfo.com/view/47762/unsafe-javascript-attempt-to-access-frame-with-url | CC-MAIN-2021-17 | refinedweb | 722 | 62.27 |
Note
From Django 1.3, function-based generic views have been deprecated in favor of a class-based approach, described in the class-based views.
For example, here’s a simple URLconf you could use to present a static “about” page:
from django.conf.urls import patterns, url, include
from django.views.generic.simple import direct_to_template

urlpatterns = patterns('',
    ('^about/$', direct_to_template, { 'template': 'about.html' }),
)
Though this might seem a bit “magical” at first glance – look, a view with no code! – the direct_to_template view simply grabs information from the extra-parameters dictionary and uses that information when rendering the view.

Because this generic view is a regular view function like any other, we can reuse it inside our own views. As an example, we can map URLs of the form /about/<whatever>/ to statically rendered about/<whatever>.html by pointing the URLconf at a small wrapper view:

from django.conf.urls import patterns, url, include
from django.views.generic.simple import direct_to_template
from books.views import about_pages

urlpatterns = patterns('',
    ('^about/$', direct_to_template, { 'template': 'about.html' }),
    ('^about/(\w+)/$', about_pages),
)

Only word characters can reach the view this way; any other characters (dots and slashes, here) will be rejected by the URL resolver before they reach the view itself.

Providing a useful template name is always a good idea. Your coworkers who design templates will thank you.
Often you simply need to present some extra information beyond that provided by the generic view. For example, think of showing a list of all the books on each publisher detail page. Generic views take a queryset argument that says which objects to display (see Making queries for more information about QuerySet objects, and see the generic views reference for the complete details).
To pick a simple example, we might want to order a list of books by publication date, with the most recent first:
book_info = {
    "queryset": Book.objects.order_by("-publication_date"),
}
acme_books = {
    "queryset": Book.objects.filter(publisher__name="Acme Publishing"),
    "template_name": "books/acme_list.html",
}

urlpatterns = patterns('',
    (r'^publishers/$', list_detail.object_list, publisher_info),
    (r'^books/acme/$', list_detail.object_list, acme_books),
)
Note
This code won’t actually work unless. | https://djbook.ru/rel3.0/topics/generic-views.html | CC-MAIN-2020-50 | refinedweb | 241 | 52.87 |
svg.charts - Package for generating SVG Charts in Python
Contents
Status and License
svg.charts is a pure-python library for generating charts and graphs in SVG, originally based on the SVG::Graph Ruby package by Sean E. Russel.
svg.charts supersedes svg_charts 1.1 and 1.2.
svg.charts is written by Jason R. Coombs. It is licensed under an MIT-style permissive license.
You can install it with easy_install svg.charts, or from the mercurial repository source with easy_install svg.charts==dev.
Acknowledgements
svg.charts depends heavily on lxml and cssutils. Thanks to the contributors of those projects for stable, performant, standards-based packages.
Sean E. Russel for creating the SVG::Graph Ruby package from which this Python port was originally derived.
Leo Lapworth for creating the SVG::TT::Graph package which the Ruby port was based on.
Stephen Morgan for creating the TT template and SVG.
Getting Started
svg.charts has some examples (taken directly from the reference implementation) in tests/testing.
Upgrade Notes
Upgrading from 1.x to 2.0
I suggest removing SVG 1.0 from the python installation. This involves removing the SVG directory (or svg_chart*) from site-packages.
Change import statements to import from the new namespace.
from SVG import Bar
Bar.VerticalBar(...)

becomes

from svg.charts.bar import VerticalBar
VerticalBar(...)
May 14, 2011 03:44 PM | gerrylowry (résumé):
-- programming (c#, .NET 4, vs2010/vs2012, T-SQL, wpf, wcf, msmq, asp.net, mvc, webforms, Windows services, web services)
-- test driven development (xUnit.net)
-- design
-- project management
-- documentation
-- debugging
Aug 24, 2011 11:52 AM | guhanasp
Thanks gerrylowry. I have one more doubt: I have been seeing the error "Visual Web Developer was unable to determine the encoding of this file" when I run a web site application on .NET 3.5 (VS 2008). Please clarify this.
Aug 24, 2011 04:36 PM | gerrylowry
you're welcome ... it would have been better had you posted your reply here:
since the second part of your reply is a question that may not be directly related to your original namespace question, please start a new question and provide more details as to what actions you took that caused you to get your error message.
thank you ... g.
Configuring & Reducing Gaps in Number Range for IT Service Management
Number ranges are among the most critical master data in IT Service Management, and we configure them during the initial setup of any IT Service Management process.
For example, if we are configuring Incident/Problem Management or Change Request Management (ChaRM), we need to configure them in addition to other settings like BP creation.
The transaction for configuring them is SNRO. In this transaction, enter the object as CRM_SERVIC (i.e. for SMCR or Request for Change Transaction Type) and click on Number Ranges tab.
Now we have the option of checking the existing number range intervals for the above object or creating a new one.
Let us say we go to change-interval mode by clicking the Change Intervals tab above. The screen below displays the existing interval.
Now we would like to create a new one, which we can assign to our Y*/Z* transaction types, like ZMCR (a copy of SMCR), for example.
To do the same, click on the +Interval tab and, in the pop-up, enter the required start and end of the number range, as in the example shown.
Click the save button and the pop-up below appears. Press Enter.
Thus, we are ready with our own number range and would like to assign it now to our custom transaction type (ZMCR etc.). Navigate to the highlighted path in the SPRO transaction and click on the “Define Transaction Types” activity.
Now choose ZMCR (for example) and assign the number range to our custom Change Request (Request for Change) transaction type.
Please note: it is not recommended to modify any standard SAP-delivered transaction type; copy it into your custom namespace before configuring it to your requirements.
Now, whenever we post a new ZMCR (or other custom transaction type), it will be created with our own number range, as shown below.
In addition to the above, we can observe that before a Request for Change (RfC) is even posted, the number is already generated and consumed by the system, which can give rise to many gaps and unnecessarily consumes the number range.
To save the number range, again visit the same SPRO path and, in the Transaction Numbering block, remove the tick from Early Number Assignment.
Now the number is only generated when the RfC is saved; until then it will be blank, as shown below.
Also, please note that the ID (number) field is not editable, and therefore no one can edit it.
All the Best and keep smiling…. Cheers!!!
Hi Prakhar,
Thank you for sharing this article, which is very helpful for my project, as the customer has exactly this requirement: they don't want to create the number immediately when the Request for Change is opened; the number should only be populated when they save the request.
Keep doing ...
Thanks
Regards
Chithra Natarajan
Thanx Chithra..happy to help
Hi Prakhar,
I created a new number range interval (Z2) for change requests and turned early number assignement off.
Now I've got following problem: the current number isn't correct.
For rfc 200000-200010 current number 200010 was displayed. Now 2000011 is reached and current number is 200020. It seems like there's an internal puffering about 10 numbers.
The problem is if there's a downtime of the system the next rfc will take the current number.
For example last rfc was 2000013 -> downtime -> next rfc 2000020.
Maybe you can help me with this matter?
Thanks a lot,
Martin.
Hello Martin,
I think your issue is with number range buffering (transaction SM56 to check).
You can remove the buffering using SNRO, object CRM_SERVIC, No. Range Object: Change, and then the menu Edit → Set-Up Buffering → ...
It's all explained here:
Regards,
Roger
Hello Prakhar Saxena
Is it possible to customize the service ID as per the number range, from 7000000000 to IM7000000000?
Frequently Asked Questions¶
Why the name jug?¶
The cluster I was using when I first started developing jug was called “juggernaut”. That is too long and there is a Unix tradition of 3-character programme names, so I abbreviated it to jug.
How to work with multiple computers?¶
The typical setting is that all the computers have access to a networked filesystem like NFS. This means that they all “see” the same files. In this case, the default file-based backend will work nicely.
You need to start separate
jug execute processes on each node.
See also the answer to the next question if you are using a batch system or the bash utilities page if you are not.
Will jug work on batch cluster systems (like SGE/LFS/PBS)?¶
Yes, it was built for it.
The simplest way to do it is to use a job array.
On LFS, it would be run like this:
bsub -o output.txt -J "jug[1-100]" jug execute myscript.py
For SGE, you often need to write a script. For example:
cat >>jug1.sh <<EOF
#!/bin/bash
exec jug execute myscript.py
EOF
chmod +x jug1.sh
Now, you can run a job array:
qsub -t 1-100 ./jug1.sh
Alternatively, depending on your set up, you can pass in the script on STDIN:
echo jug execute myscript.py | qsub -t 1-100
In any case, 100 jobs would start running with jug synchronizing their outputs.
Given that jobs can join the computation at any time and all of the communication is through the backend (file system by default), jug is especially suited for these environments.
The project gridjug integrates jug with gridmap to help run jug on SGE clusters (this is an external project).
How do I clean up locks if jug processes are killed?¶
Jug will attempt to clean up when exiting, including if it receives a SIGTERM signal on Unix. However, there is nothing it can do if it receives a SIGKILL (or if the computer crashes).
The solution is to run jug cleanup to remove all the locks.
In some cases, you can avoid the problem in the first place by making sure that SIGTERM is being properly delivered to the jug process.
For example, if you are executing a script that only runs jug (like in the previous question), then use exec to replace the script by the jug process.
Alternatively, in bash you can set a trap to catch and propagate the SIGTERM:
#!/bin/bash
N_JOBS=10
pids=""
for i in $(seq $N_JOBS); do
    jug execute &
    pids="$! $pids"
done
trap "kill -TERM $pids; exit 1" TERM
wait
It doesn’t work with random input!¶
Normally the problem boils down to the following:
from jug import Task
from random import random

def f(x):
    return x*2

result = Task(f, random())
Now, if you check jug status, you will see that you have one task, an f task. If you run jug execute, jug will execute your one task. But, now, if you check jug status again, there is still one task that needs to be run!
While this may be surprising, it is actually correct. Every time you run the script, you build a task that consists of calling f with a different number (because it's a randomly generated number). Given that tasks are defined as the combination of a Python function and its arguments, every time you run jug, you build a different task (unless you, by chance, draw twice the same number).
My solution is typically to set the random seed at the start of the computation explicitly:
from jug import Task
from random import random, seed

def f(x):
    return x*2

seed(123)  # <- set the random seed
result = Task(f, random())
Now, everything will work as expected.
(As an aside: given that jug was developed in a context where it is important to be able to reproduce your results, it is generally a good idea that if your computation depends on pseudo-random numbers, you be explicit about the seeds. So, this is a feature not a bug.)
Why does jug not check for code changes?¶
1) It is very hard to get this right. You can easily check Python code (with dependencies), but checking into compiled C is harder. If the system runs any command line programmes you need to check for them (including libraries) as well as any configuration/datafiles they touch.
You can do this by monitoring the programmes, but it is no longer portable (I could probably figure out how to do it on Linux, but not other operating systems) and it is a lot of work.
It would also slow things down. Even if it checked only the Python code: it would need to check the function code & all dependencies + global variables at the time of task generation.
I believe sumatra accomplishes this. Consider using it if you desire all this functionality.
2) I was also afraid that this would make people wary of refactoring their code. If improving your code even in ways which would not change the results (refactoring) makes jug recompute 2 hours of results, then you don’t do it.
3) Jug supports explicit invalidation with jug invalidate. This checks your dependencies. It is not automatic, but often you need a person to understand the code changes in any case.
Can jug handle non-pickle() objects?¶
Short answer: No.
Long answer: Yes, with a little bit of special code. If you have another way to get them from one machine to another, you could write a special backend for that. Right now, only numpy arrays are treated as a special case (they are not pickled, but rather saved in their native format), but you could extend this. Ask on the mailing list if you want to learn more.
Is jug based on a background server?¶
No. Jug processes do not need a server running. They need a shared backend. This may be the filesystem or a redis database. But jug does not need any sort of jug server.
Can I pass command line arguments to a Jugfile?¶
Yes. They will be available using sys.argv as usual.
If you need to pass arguments starting with a dash, you can use -- (double dash) to terminate option processing. For example, if your jugfile contains:
import sys

print(sys.argv)
Now you can call it as:
# Argv[0] is the name of the script
$ jug execute
['jugfile.py']
$ jug execute jugfile.py
['jugfile.py']
# Using a jug option does not change ``sys.argv``
$ jug execute --verbose=info jugfile.py
['jugfile.py']
$ jug execute --verbose=info jugfile.py argument
['jugfile.py', 'argument']
# Use -- to terminate argument processing
$ jug execute --verbose=info jugfile.py argument -- --arg --arg2=yes
['jugfile.py', 'argument', '--arg', '--arg2=yes']
It is considered good practice to rate limit an API to allow for a better flow of data and to increase security by mitigating attacks such as DDoS. Rate limiting will restrict the number of requests that can be made from a unique IP address during a designated period of time.
Import the library
from ratelimit import limits
Apply the decorator
from flask import Flask

app = Flask(__name__)

@app.route('/endpoint/', methods=['GET'])
@limits(calls=1, period=1)  # max 1 call per second
def respond():
    ...  # API code
If the limit is exceeded, the following exception will be raised.
raise RateLimitException('too many calls', period_remaining)
And that’s all. Just as developers are taught to code around SQL injection, rate limiting is another necessary measure that should be implemented with any API.
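If you are curious what such a limiter does under the hood (or want to avoid a dependency), the core idea fits in a few lines of plain Python. The sketch below is a hypothetical fixed-window counter keyed by client IP — it is not the ratelimit package's actual implementation:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `calls` requests per `period` seconds, per client key."""

    def __init__(self, calls, period, clock=time.monotonic):
        self.calls = calls
        self.period = period
        self.clock = clock  # injectable, so the limiter is easy to test
        self.windows = defaultdict(lambda: (0.0, 0))  # key -> (window_start, count)

    def allow(self, key):
        now = self.clock()
        start, count = self.windows[key]
        if now - start >= self.period:
            start, count = now, 0  # the previous window expired; open a new one
        if count >= self.calls:
            return False  # over the limit; the view should answer 429
        self.windows[key] = (start, count + 1)
        return True

limiter = FixedWindowLimiter(calls=1, period=1)
print(limiter.allow("203.0.113.7"))  # True: first request in the window
print(limiter.allow("203.0.113.7"))  # False: second request in the same second
```

In a Flask view you would call allow() with request.remote_addr and return a 429 response when it says no.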
Discussion (3)
pip install? It would be almost import antigravity.
For requests, or Flask?
Hi, thx
I have some questions
What's a better flow of data? Are requests queued if the limit is hit? Can we configure a timeout? Is the rate based on when requests are sent or when responses are received?
This is all configurable with the ratelimit API; this post is just showing how easy it is to get started. In this situation it is the IP address.
Introducing Arborist, the Tree Editor for Elm
Peter Szerzo
・5 min read
Two years ago, I published the first version of elm-arborist, a package for editing generic tree structures in Elm. I didn't advertise it too much at the time so it quietly went through 2 major re-writes and
elm bump-ed all the way up to version 8.2.0. All the while, it kept supporting more and more sophisticated needs in the NLX Studio app, where I use it to visually define conversational logic for chatbots.
Whether you have a complex use-case or not (after all, tree structures are everywhere), the goal of the library is to make editing trees smooth, contained and flexible:
- Arborist takes care of editing the structure (adding new nodes, rearranging subtrees etc.).
- you take care of editing the internals of each node.
- divide and conquer any use-case by combining the two together with minimal glue code. And since I can't help it: #functor.
We will look into how this works in this post. But first, a quick origin story.
Where It All Began
In a previous career in the architecture world, I spent a lot of time working with a tool called Grasshopper, a visual programming extension for the 3d modeling program Rhino:
In this environment, variables are represented as sliders, connected to boxes that either did transformation on their values or used them as coordinates to draw shapes. The resulting cable salads - found in countless other tools such as cables.gl - were beautiful and weird and full of possibilities. Finally, the structure of the 'code' was visual in a way a static folder structure could never truthfully represent.
When NLX Studio, my current startup, developed a need for a tool to model conversation logic, we thought of something similar, yet simpler and easier to organize. Instead of Grasshopper's directed graph, how about a (non-binary) tree? This is what Arborist wound up supporting: utilities to create a tree holding a generic data structure, a fully flexible way to render it, along with an event handling machinery allowing users to expand and alter the structure visually.
Let's see how it all works.
Creating an Editable Tree
tldr; this section runs through a simple Arborist example. Feel free to just read the example on its own.
We start by defining a node type, and build up a starting tree using the Arborist.Tree module.
import Arborist
import Arborist.Tree as Tree

type alias MyNode =
    { question : String
    , answer : String
    }

startingTree : Tree.Tree MyNode
startingTree =
    Tree.Node (MyNode "How are you?" "Fine, thanks")
        [ Tree.Node (MyNode "Great. Would you like coffee?" "Absolutely") []
        ]
We can define each tree with the Tree.Node constructor, which takes the root node and a list of child trees of the same recursive structure.
Creating trees in Elm this way is nothing new or special. It is, in fact, a very common example when talking about the language's type system.
Next, we lay out a Model that contains this tree, as well as some internal state that Arborist will need:
type alias Model =
    { tree : Tree.Tree MyNode
    , arboristState : Arborist.State
    }

init : Model
init =
    { tree = startingTree
    , arboristState = Arborist.init
    }
The editor will need some initial settings:
import Arborist.Settings as Settings

arboristSettings : List (Arborist.Setting MyNode)
arboristSettings =
    [ Settings.keyboardNavigation True
    , Settings.defaultNode (MyNode "A" "Q")
    , Settings.nodeWidth 100
    , Settings.level 80
    , Settings.gutter 20
    ]
Arborist will send a single message that updates the model like so:
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Arborist updater ->
            let
                ( newState, newTree ) =
                    updater model.arboristState model.tree
            in
            ( { model
                | arboristState = newState
                , tree = newTree
              }
            , Cmd.none
            )
Instead of new state and tree values, Arborist's message comes with a function that computes them based on the current values. It is set up this way to prevent frequently fired mouse events from undoing each other's changes.
Finally, the view:
view : Model -> Html.Html Msg
view model =
    Arborist.view []
        { state = model.arboristState
        , tree = model.tree
        , nodeView = nodeView
        , settings = arboristSettings
        , toMsg = Arborist
        }
See the full example to see how these pieces fit together exactly.
Towards a Full-featured Editor
The simple example is a few steps behind the full-featured Arborist home page example. Here are the most important methods from the library that will get us there:
- activeNode and setActiveNode: when a tree node is focused, you can retrieve it using activeNode and use it to render a piece of UI responsible for editing its internals. Use setActiveNode in the update method to persist changes in the tree.
- subscriptions: like the updater from the update method above, Arborist's subscriptions take a bit of type-fu to set up. In the end, though, they won't take up that much space, and they add a lot of goodies like animations and the keyboard navigation outlined in the remainder of this post.
See the full example for details.
The UX Features that Count
The first few versions of the package focused on making things work and making sure that the tree could be completely separated from the
Node data type it is used with. With
v8.0.0, it was UX time: making complicated trees intuitive to edit.
Keyboard Navigation
Instead of laboriously panning and padding around to find nodes, trees can now be traversed using arrow keys:
Clustering
What I really loved in Grasshopper is that you could take a portion of a cable salad and pull it together in a single box with the appropriate number of inputs and outputs, the visual equivalent of factoring out a pure function. Arborist can do this for subtrees:
To implement this feature, simply add the isNodeClustered logic in settings, 'teaching' the layout algorithm whether the subtree under a node should be hidden. Edit the corresponding flag for any given node with setActiveNode.
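A rough sketch of that glue code follows. Note that this is an illustration only: the clustered field on the node and the exact shape of the Settings.isNodeClustered setting are assumptions on my part — consult the elm-arborist documentation for the real signatures.

```elm
-- Hypothetical node type carrying a clustering flag
type alias MyNode =
    { question : String
    , answer : String
    , clustered : Bool
    }

-- Teach the layout algorithm which subtrees to hide
arboristSettings : List (Arborist.Setting MyNode)
arboristSettings =
    [ Settings.isNodeClustered (\node -> node.clustered)
    ]

-- Toggle the flag on the focused node and persist it
-- (inside update, given the currently active node):
-- Arborist.setActiveNode { node | clustered = not node.clustered }
```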
The Minimap
The clustering feature, effective for grouping a logically coherent subtree, is not effective for making large trees easy to navigate. To cover that front, Arborist makes it easy to create synced minimaps like the ones we are used to in our IDE's:
To create a minimap in Arborist, simply re-use the Arborist internal state on a new piece of Arborist.view with new settings for a smaller geometry and new views for nodes. The design of this internal state makes sure that when you interact with your tree in either view, the viewport stays in sync:
Adding these goodies to the library has been very exciting - and hopefully, there is a lot more to come.
Next up for Arborist
The next steps for the library will likely focus on performance for very large trees. My workplace needs up to this point did not include trees larger than 50 nodes, but I would be curious to look for windows of optimization in the library's codebase to become a whole lot more ambitious than that.
Do you have other ideas? Let me know by opening an issue or reaching out to me under @peterszerzo on Elm Slack or Twitter. Until then!
I love what the library does. It makes me want to think up a project just to use it. Thank you. :)
Thanks Dirk, look forward to seeing what you come up with :) | https://dev.to/peterszerzo/introducing-arborist-the-tree-editor-for-elm-49po | CC-MAIN-2019-47 | refinedweb | 1,227 | 63.59 |
Making Fluid 3D Creatures with JOGL
What has your GPU done for you today? Most modern computers are equipped with a ridiculously fast chip that's dedicated to graphics processing, but Java programs rarely get the chance to make it sing. With the JOGL API, we now have a way to let the GPU take over the lion's share of the math, which suddenly makes Java the language of choice for a lot of 3D applications. If we put all of those design patterns and sophisticated coding techniques we've been learning and refining together with bare-metal access to the graphics processor chip, we should be able to create surprisingly robust 3D apps that are also as fast as can be. This article describes my JOGL adventures while building a virtual universe that I call "Fluidiom" (fluid + idiom), based on push and pull forces.
Figure 1 shows Fluidiom in action.
Figure 1. Fluidiom main window
If you have a recent JVM, installed properly, and a 3D-enabled computer, you should be able to run my Web Start application with one click on the screen shot above.
You will have to approve of my self-made certificate for signing .jars. The creatures you'll see there have learned to run by survival-of-the-fittest evolution!
Push and Pull?
Things look alive when they move around sort of autonomously, and even more so if their structures are not rigid like crystal. Most of the 3D graphics you see these days, with the exception of the pre-rendered Pixar miracles, seems to be made of solid blocks, and if you're lucky, they appear to articulate, but mostly on hinges or ball-joints like Transformer toys. 3D animators masterfully encode object movements to look as natural as possible, but that takes the keen perception of an artist to fool you.
At the beginning of 1996, I had an unstoppable urge to find out what this new Java language was all about, and at the same time I had become fascinated with a piece of outdoor art that I had seen in the east of Holland. I wanted to recreate this tower in the computer. Soon, I became addicted to the wobbly tension-based structures that were appearing. I built the model on the basis of springs (things that push or pull), which I later called elastic intervals.
Why elastic intervals? Simplicity. There just aren't many ideas simpler than two dots connected with a line that has a preferred length. The fun starts when you connect the dots; when you connect up four dots with six lines that all prefer the same length, you're already launched into 3D! Connect a bunch more together, and they start to show all sorts of sometimes-surprising behavior, such as when a structure's elastic intervals are all under stress, but the structure as a whole is completely stable.
I started to think about what other things could be easily done. It seemed that the intervals could just as well be like muscles if their preferred lengths varied over time. Then, with structures writhing about methodically, you get to wonder if they could learn to walk. That's what I'm working on these days, but there's a mailing list for this kind of banter. Time to get into JOGL.
The Need for Speed!
My first renderings of the structures used java.awt.Graphics from the Java 1.0 Virtual Machine to make wireframe images. I was sold on Java immediately when I saw how easy it was to put together a GUI with lots of controls and still see a fairly reasonable frame rate on the wireframe graphics. There were a lot of calculations going on, but the JVM seemed to perform well enough, and that improved vastly when JIT compilers arrived.
Then Java 1.2 arrived, with its graphics layer completely rewritten. The new java.awt.Graphics2D was able to do all of the wild and wonderful things you can do with PostScript using Java2D, such as making partially transparent lines that are 1.432133 pixels thick, and other stuff. That was cool, but my animation frame rate collapsed, and my heart sank. Java was no longer good at animation, but had chosen to be good at desktop publishing instead. I'm sure there were a million justifications, and one of them was perhaps that there was a 3D API coming out. Major changes would have to be made, so that put the Fluidiom project on ice.
Eventually I got back to the project (during train rides) and started to program using the maturing Java3D API. It was definitely interesting and educational to understand how scene graphs in Java3D worked, and it made sense to use Java to attach behaviors to 3D objects. The 3D acceleration helped a lot, and bigger springy structures appeared with shading and all, but larger structures were still bringing the frame rate back to one frame per second.
A friend of mine looked over my shoulder one day and shook his head at how slow it was, and that got me mad. He had worked in OpenGL using C, and so I went hunting and found JOGL. The following day turned into a coding frenzy; by the end of the day I had rewritten the entire program to use JOGL instead of Java3D. The previously one-frame-per-second structures suddenly flowed smoothly enough on my screen that there was no need to talk about frame rate at all! Suffice it to say that I celebrated that night.
Nobody should have the illusion that any Java3D program can be rewritten in one day, as I had done, because I had some very specific advantages. First of all, my structures made of springy things connected together was already a kind of scene graph, so I had actually been mapping it to the Java3D scene graph. My project also doesn't need terribly sophisticated interaction with the mouse or other devices. More importantly, the quantum performance leap was because Fluidiom required that each and every object in the scene move all the time, which is fairly rare in 3D and precludes a whole bunch of optimizations that can be applied to static things. Java3D is set up to optimize groups of graphics objects that together form a big object, but it doesn't help when every little object in the whole scene is jiggling around.
What's on Display?
I had it easy. There are only two kinds of things that need to appear in Fluidiom graphics: intervals and joints. Each interval is a springy thing connecting two joints, and a joint brings together one end of a bunch of intervals. The intervals and joints are gathered together into something called a fabric, and that's all there is. A joint contains the 3D coordinates and a vector describing where it's moving. An interval holds its preferred length (span), and its two joints. The fabric is just a collection of joints and intervals.
The essence of these classes is this:
// a moving dot in space that brings together
// a bunch of interval ends
public class Joint {
public Point3f locus = new Point3f();
public Interval [] interval;
public Vector3f moment = new Vector3f();
}
// a connection between two joints, with
// a preferred length
public class Interval {
public Joint alpha;
public Joint omega;
public float span;
public float stress;
}
// collect joints and intervals together
public class Fabric {
public Joint [] joint;
public Interval [] interval;
}
In the real code (available in the Resources section below) there are some other things involved. The reason the fields are public, for example, is to make it easy to store the objects in XML using reflection (but that's another story). It also allows the code to be quite a bit more concise, avoiding the getXxx() and setXxx() methods for the most commonly accessed things. I chose to make the fabric of joints and intervals publicly accessible and deal with them through a special singleton called FabricShell that lets you change the fabric, and at the same time broadcasts FabricEvent messages to any observers. Ha, two design patterns already!
Note that I'm using the Point3f and Vector3f classes, which was one nice thing inherited from Java3D: vecmath.jar. Originally, I had created some classes for doing the vector calculations, but when I started with Java3D, I discovered that it had all been done quite well, and there were matrix operations in there that I didn't really want to write.
Probably the most important observer of the Fabric is the JoglFabric, since it is responsible for maintaining a parallel data structure that deals with all things JOGL. Every change in the Fabric immediately changes what you see when the JoglFabric reacts to the change event. More about that in a minute.
Add the queen of all design patterns, the visitor, and suddenly the code becomes loosely coupled and you can freely and easily add new chunks of functionality that can be plugged in or out whenever you want. It also greatly simplifies the FabricShell, since you can do all sorts of different things by simply calling one overloaded method:
public class FabricShell {
private Fabric fabric;
public void admitVisitor(JointVisitor theVisitor) {
// have the visitor visit all the joints
}
public void admitVisitor(IntervalVisitor theVisitor) {
// have the visitor visit all the intervals
}
}
public interface JointVisitor {
void visit(Joint theJoint);
}
public interface IntervalVisitor {
void visit(Interval theInterval);
}
There are visitors for all sorts of things in Fluidiom, but there are two main visitors responsible for breathing life into the fabric by activating the pushes and pulls. The physics of pushing and pulling is nothing more than basic vector math, with its little arrows and dots. The fabric is given life by a straightforward brute-force iteration process. First, the Exerter, an IntervalVisitor, looks at the difference between preferred and actual span and exerts vector forces on the moment (a vector for where the joint is moving) of its two joints, either pushing or pulling them. Then, the Mover, a JointVisitor, is sent in to change the locations of all of the joints based on the accumulated moment vectors. Just keep repeating these steps, and the whole thing wobbles like jelly.
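To make that concrete, here is a stripped-down, self-contained sketch of one Exerter/Mover step. The tiny Vec3 class stands in for vecmath's Point3f/Vector3f, and the stiffness and damping constants are invented for illustration — Fluidiom's real numbers and class shapes differ:

```java
// Minimal stand-ins for the article's classes, just enough to show the physics.
class Vec3 {
    double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    void add(Vec3 o) { x += o.x; y += o.y; z += o.z; }
    void scale(double s) { x *= s; y *= s; z *= s; }
    double length() { return Math.sqrt(x * x + y * y + z * z); }
}

class Joint {
    Vec3 locus;
    Vec3 moment = new Vec3(0, 0, 0);
    Joint(double x, double y, double z) { locus = new Vec3(x, y, z); }
}

class Interval {
    Joint alpha, omega;
    double span; // preferred length
    Interval(Joint a, Joint o, double span) { alpha = a; omega = o; this.span = span; }
}

public class SpringStep {
    static final double STIFFNESS = 0.05; // made-up constant

    // "Exerter": push or pull the two joints toward the preferred span.
    static void exert(Interval iv) {
        Vec3 d = new Vec3(iv.omega.locus.x - iv.alpha.locus.x,
                          iv.omega.locus.y - iv.alpha.locus.y,
                          iv.omega.locus.z - iv.alpha.locus.z);
        double actual = d.length();
        if (actual == 0) return;
        // positive stress = stretched, so the ends get pulled together
        double stress = STIFFNESS * (actual - iv.span) / actual;
        Vec3 pull = new Vec3(d.x * stress, d.y * stress, d.z * stress);
        iv.alpha.moment.add(pull);
        pull.scale(-1);
        iv.omega.moment.add(pull);
    }

    // "Mover": apply the accumulated moment, then damp it.
    static void move(Joint j) {
        j.locus.add(j.moment);
        j.moment.scale(0.9); // made-up damping factor
    }

    public static void main(String[] args) {
        Joint a = new Joint(0, 0, 0), b = new Joint(2, 0, 0);
        Interval iv = new Interval(a, b, 1.0); // stretched: it wants length 1
        for (int i = 0; i < 50; i++) { exert(iv); move(a); move(b); }
        // the ends drift toward each other, oscillating around span 1
        System.out.println(a.locus.x + " .. " + b.locus.x);
    }
}
```

Run it and the two joints drift from x = 0 and x = 2 toward a separation near the preferred span of 1.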
Synchronization
When I teach Java to C/C++ programmers, I always tell them that threads and synchronization are good for nostalgia. With those other languages, we created single-threaded code with tiny bugs that produce unpredictable results, whereas in Java, you get a slap on the wrist in the form of an exception. Multithreading can give you that feeling of unpredictability again in Java, so savor it. Ask a good Swing programmer about threading, and they'll tell you about the importance of keeping things simple.
The best way to synchronize is to not need synchronization at all, if possible. When you write using JOGL, there's a part of your program that has a mind (or better, a thread) of its own, because when you create a
GLCanvas you want it to take care of its own updates in its own time with an
Animator. What you get in return are callbacks to any
GLEventListener that you attach to the canvas. Your code should be modifying the visible objects in between frames only, so it's in the callbacks that all the work has to be done.
I need the iterations to happen at a certain speed, regardless of the frame rate available for a particular machine, so in the display callback I check if it's time to do the number crunching with the FabricShell.get().iterate() method:
public void display(GLDrawable theDrawable) {
    // .... do all the displaying first ...
    // then decide if the fabric needs
    // an iteration yet
    if (System.currentTimeMillis() - lastIteration
            > ITERATION_DELAY) {
        FabricShell.get().iterate();
        lastIteration = System.currentTimeMillis();
    }
}
JOGL Basics
In the beginning, there was a
GLCanvas, with an
Animator and a
GLEventListener:
public class JoglFabricView extends JFrame
    implements FabricListener, GLEventListener
{
    private GLCanvas canvas;
    private JoglFabric joglFabric = new JoglFabric();
    private Animator animator;

    // construct the parts in the frame
    public JoglFabricView() {
        super("Fluidiom Fabric");
        canvas = GLDrawableFactory.getFactory()
            .createGLCanvas(new GLCapabilities());
        canvas.addGLEventListener(this);
        getContentPane().add(canvas, BorderLayout.CENTER);
        animator = new Animator(canvas);
    }

    // get things ready for display
    public void init(GLDrawable theDrawable) {
    }

    // draw the scene
    public void display(GLDrawable theDrawable) {
    }

    // not sure what this is for so left it empty
    public void displayChanged(
        GLDrawable theDrawable,
        boolean theModeChangedFlag,
        boolean theDeviceChangedFlag
    ) {
    }

    // set up the matrices
    public void reshape(
        GLDrawable theDrawable,
        int theX, int theY,
        int theWidth, int theHeight
    ) {
    }
}
The first callback you get is to initialize things. I didn't dig too deeply here, but rather just snagged some hints from code in the JOGL demos and from the various OpenGL tutorials you can find on the net. That's the thing with JOGL: most of the information you need can be found in the form of C or C++ code examples, since the API that JOGL presents to you is almost the same as the one that all OpenGL programmers get. Code snippets are surprisingly easy to translate into Java.
The code that I found helped me set up the
init() method, which sets up lighting and sets some basic rendering attributes, and start the
display() method, which clears the screen and then draws everything. Both of these methods actually stayed quite small, because I had them delegate work to similar methods in the
JoglFabric class, which was responsible for setting up and showing the fabric rendering. The
display() method and the cascade of calls inside of it do most of their work in matrix operations, which I'll get to shortly.
The Frustum
There's one more thing to work out before it can all get going, and that's the frustum, which is just a funny-shaped box in space that represents what the viewer can see. The frustum has six flat planes: front, back, top, bottom, left, and right. The front and back planes are parallel, but the back plane is bigger than the front one. Objects inside of the frustum appear on screen, and the rest do not. It makes no sense for the graphics card to calculate all of the shading and such for objects that are not going to be painted on the screen; the frustum gives the GPU the ability to ignore all irrelevant polygons. If an object is too close, too far away, or beyond the sides, its vertices can be discarded without any further calculation. This is cool, because this allows you to fly inside of complicated objects with your "camera"!
The frustum is something you set up inside of the
reshape() method, because its actual shape depends on the aspect ratio of the canvas you're using. There is a call to
reshape() before anything is shown, so you get a chance to set up the frustum first.
The
GL_PROJECTION matrix is the one that embodies the frustum, while most of the work is done with the
GL_MODELVIEW matrix. There's a lot of documentation on the Web to explain, in
excruciating detail,
how these matrices are used.
Picking
Click your mouse on the display window to select a 3D object. It sounds easy, but imagine trying to make it work. Your mouse click happens at some integer
(x,y) coordinate on the screen, but the object is being projected through a bunch of floating-point matrix operations to its spot on the screen. You have to think of your mouse click shooting an arrow into the "scene" to see which object it hits. Down deep inside, somebody has to figure out which polygon in the scene intersects with the arrow you shot, and then determine which "object" owns that polygon. Java3D has a solution for this, but JOGL leaves it up to you.
I must confess that laziness overwhelmed me here, so I took advantage of the simplicity of my model. I sidestepped the idea of finding a polygon intersection point, because I had no idea how to do it. Rather, I just shoot the mouse-click arrow into the scene looking for the closest midpoints of intervals (just 3D points, not polygons!) and then favoring the closer ones over the further ones. Even my lazy approach wasn't trivial, however! In my code, the picking trick is in
JoglFabricView.display(), using special matrix calculations.
The Matrix
Just like Neo in
the movie,
you have to master the matrix if you want to make it work for you. A matrix is like a lens that transforms one universe of coordinates to another one in very specific ways. It can translate things from one position to another, rotate things around the origin, and scale things wider or narrower in different directions. OpenGL lets you perform these operations individually, with each one actually affecting the current matrix in the graphics card.
In Fluidiom, I display hundreds of intervals, all moving at the same time from frame to frame, and this is all done with hundreds of different matrix operations performed on what is effectively a solitary graphics object. One graphical object is displayed in many different ways, or viewed through different matrix lenses, to make up one image. It's a very economical way to work, since single objects are reused, and this kind of activity is behind most 3D stuff you see.
Here's the code that displays an interval, for example:
// preserve the current matrix for later
theGL.glPushMatrix();
// throw the object out into space
theGL.glTranslatef(
intervalLocation.x,
intervalLocation.y,
intervalLocation.z
);
// twirl it around to the right angle
theGL.glRotatef(
RADIANS_TO_DEGREES*
(float)Math.acos(intervalDirection.z),
-intervalDirection.y, intervalDirection.x, 0
);
// stretch it out along the z-axis
theGL.glScalef(
theIntervalRadius,
theIntervalRadius,
span/2
);
// paint the object
theGL.glCallList(shape);
// restore the preserved matrix
theGL.glPopMatrix();
Pay close attention to the ordering, because it might trick you. You translate first, then rotate, then scale. What you're actually doing is setting up a single new matrix, and the operations are actually happening in the opposite order. First, we scale the object (here, it gets oblong, because it's supposed to represent a springy interval) then rotate it (trigonometry anyone?), and finally, flick it out into space at the desired location.
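This reversed ordering is easy to verify with plain 2D math, no OpenGL involved. The sketch below shows that rotating and translating a vertex in different orders lands it in different places, which is exactly why the call order above matters:

```java
// Demonstrates that the order of transforms matters. OpenGL applies
// the *last* call issued to the vertex first, which is why the code
// above translates, then rotates, then scales.
class Transforms {
    // rotate (x, y) by 90 degrees around the origin
    static double[] rotate90(double[] p) {
        return new double[] { -p[1], p[0] };
    }

    // translate (x, y) by (tx, ty)
    static double[] translate(double[] p, double tx, double ty) {
        return new double[] { p[0] + tx, p[1] + ty };
    }
}
```

Rotating (1, 0) and then translating by (5, 0) gives (5, 1); translating first and then rotating gives (0, 6) — two different places for the same vertex.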
You can see that here, too, you are responsible for telling the graphics card to save its current matrix before you mangle it, and then telling it to restore the previous one when you're finished. The only non-matrix call here is the
glCallList(), which brings us to the next topic.
OpenGL Lists and OOP
OpenGL is not an object-oriented universe (it's really just a flat static C API), and JOGL doesn't do a lot more than represent it as some colossal Java classes with kerjillions of static (native) method calls. (The classes are so large, by the way, that they hang my IDE if I ask for code completion!) That is, of course, not to say that you can't write tidy object-oriented code against this API. You just have to imagine a bridge between the CPU and the GPU, and imagine your classes as having part of their state living "over there."
In OpenGL, you can create what is called a list, which effectively represents a long series of OpenGL commands, or rather, the result of these commands: a graphical object. All you get back to identify the graphical object is an integer, so you use that integer to reach across the CPU-GPU bridge and paint the object with
glCallList(shape). In your tidy object-oriented code, the Java-side proxy for the graphical object contains the
shape integer as part of its state.
For one Fluidiom rendering I wanted blimp-like oblong ellipsoids, so I created a list that contains a sphere. The matrix operations in the above code snippet stretch it, twirl it, and toss it out into space before anybody ever sees the sphere.
In true object-oriented tradition, I needed to be able to plug in different shape objects to render intervals on the fly, so I abstracted the interface needed for this:
public interface IntervalShape
{
float RADIANS_TO_DEGREES = 180f/(float)Math.PI;
void init(GL theGL, float theRadius);
void display(
GL theGL,
Interval theInterval,
float theIntervalRadius,
FabricOccupant theOccupant
);
}
The
init() method gives the implementation a chance to create its OpenGL lists in the GPU and hang on to the integers that represent them, and the
display() method does the matrix magic and then calls the list using these shape integers. An
IntervalShape is one of these bridging objects that has part of its state (the list) living in the GPU.
There are some strange extra things in the
display method that help in setting up the matrix, and the
FabricOccupant (which represents you, the viewer) is included for a very special reason. When you want an object to appear smooth in OpenGL, it should have lots of polygons, but too many polygons is inefficient. Long ago, graphics coders figured out the idea of level of detail, which lets you display things in more detail if the viewer is closer, and use simpler versions of the objects if the viewer is far away. The distance from an interval to the
FabricOccupant is used for this.
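A level-of-detail choice like the one described boils down to a distance threshold. The thresholds and level count in this sketch are made-up values, not Fluidiom's:

```java
// Picks a display-list detail level from the viewer's distance.
// Closer viewers get finer geometry; the thresholds are illustrative.
class LevelOfDetail {
    static final double[] THRESHOLDS = { 10.0, 30.0, 90.0 };

    // returns 0 (finest) .. THRESHOLDS.length (coarsest)
    static int levelFor(double distanceToViewer) {
        for (int level = 0; level < THRESHOLDS.length; level++) {
            if (distanceToViewer < THRESHOLDS[level]) {
                return level;
            }
        }
        return THRESHOLDS.length;
    }
}
```

An IntervalShape implementation would keep one GPU list per level and call the one matching the chosen level.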
Fluidiom intervals are also supposed to show the viewer how much stress they are under, so there is a list stored for each of the different colors to represent stress (from red=push to blue=pull). The plot thickens.
Putting It All Together
So far, you have an idea as to what is to be displayed (the intervals) and the JOGL code needed for creating the efficient GPU-side lists that are eventually painted (the various
IntervalShape implementations). The only important thing missing is the
JoglFabric.
The job of the
JoglFabric is to observe the fabric and represent it to JOGL through delegated calls to the various
IntervalShape implementations. Each visible thing in the fabric is mirrored by an inner class bridge object in the
JoglFabric. For example, there is an inner class called
IntervalBridge:
private class IntervalBridge {
    protected Interval interval;

    public IntervalBridge(Interval theInterval) {
        interval = theInterval;
    }

    public Interval getInterval() {
        return(interval);
    }

    public void display(GL theGL) {
        intervalShape.display(
            theGL,
            interval,
            intervalRadius,
            fabricOccupant
        );
    }
}
This is an inner class, because that makes it easy to deal with the
IntervalShape, which happens to be global, as well as making other global variables available: the
intervalRadius and the
fabricOccupant. At the same time, the call to
display is given the bridge's own
interval reference.
At the
JoglFabric level, callbacks from observing the fabric are received, and they result in additions and removals to the mirrored inventory of
IntervalBridge objects, which are stored in a
java.util.Map. When a change happens at interval level, we can quickly look up its proxy here using the map and reflect the change.
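The mirrored inventory is ultimately just map bookkeeping. Here is a hedged sketch of what the observer callbacks might do; the method names are assumptions for illustration, not the real JoglFabric API:

```java
import java.util.HashMap;
import java.util.Map;

// Keeps one bridge object per fabric interval, mirroring additions
// and removals reported by the fabric's observer callbacks.
class BridgeInventory<K, B> {
    private final Map<K, B> bridges = new HashMap<>();

    void intervalAdded(K interval, B bridge) {
        bridges.put(interval, bridge);
    }

    void intervalRemoved(K interval) {
        bridges.remove(interval);
    }

    // quick proxy lookup when a change event arrives for an interval
    B bridgeFor(K interval) {
        return bridges.get(interval);
    }

    int size() {
        return bridges.size();
    }
}
```

When the fabric reports a change to one interval, the map lets the view find and update just that interval's bridge instead of rebuilding everything.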
Conclusions
Obviously, things get a little dirty when you have to program against a procedural API from within an object-oriented language, but there's nothing like heating up the transistors on your graphics card a bit with Java code. JOGL lays OpenGL out as flat as can be, and leaves it up to you to make your code as tidy and object-oriented as you please. All you have to do is imagine that part of your object's state is over there on the graphics card instead of in your VM.
It's a little annoying that the JOGL classes have enough methods to actually hang your development environment if they try code completion, but you get used to avoiding that problem. It also takes a little while to learn how to interpret the vast quantity of OpenGL tutorials and code snippets out there on the Web and put them into JOGL form, but you get better at it.
Java3D was doing far too much calculation before talking to the graphics card. The jump to JOGL is all that I needed to get the kind of everything-in-motion performance that I needed to display the evolved running
Fluidiom creatures, and Java Web Start was there to make deployment a breeze.
Resources
- Fluidiom Project at SourceForge.
- Download the code or get it from
CVS.
- Fluidiom Code mailing list where you can meet the people involved.
- JOGL at java.net.
- OpenGL.org: the OpenGL community.
- NEHE Productions has some popular tutorials.
- Game Tutorials has some more on OpenGL.
The
load function runs before the component is created, on both the client and the server, allowing you to fetch and manipulate data before the page renders. This means you won't need to show a loading state as you would when fetching data in
onMount. Let's look at an example.
Back in our
[name].svelte file, we are fetching data from our endpoint. Since the endpoint has the same file name as our page, this page receives the data returned from this endpoint directly as props. But what if we were to change our endpoint's name to
[handle].json.js? Now, we no longer receive data from this endpoint as a prop. This endpoint is no longer a page endpoint, it is now a standalone endpoint, so what we should do instead is fetch our data using the load function.
In order to use the load function, the first thing we need to do is add a script tag with module context. Anything defined in the module scope will be evaluated only once, when the module first evaluates, rather than for each component instance.
<script context="module"></script>
Now from this script tag, we can export the load function like this.
<script context="module"> export async function load() {} </script>
Load gives us a bunch of arguments that will help us fetch data. In this example we'll want access to this
params object, which contains any dynamic params that were part of our request, and also a fetch function to actually fetch our data in both the client and the sever.
<script context="module"> export async function load({ params, fetch }) {} </script>
Now, since the load function is reactive, it will re-run anytime one of its parameters changes if that param is being used in the function. In our example, since we will be using
params, our load function will re-run anytime our dynamic param,
name, changes. It's also important to use the SvelteKit-provided fetch wrapper rather than using the native
fetch within our load function to avoid issues that may occur when calling the native fetch on the server.
Now, we can use our dynamic param,
name, to fetch the appropriate data from our endpoint like this.
<script context="module">
  export async function load({ params, fetch }) {
    const response = await fetch(`/product/${params.handle}.json`);
    const product = await response.json();
  }
</script>
This alone will throw an error because the
load function must return something. We can return a
props object, which is an object that will pass data from the module script to the component as properties. In this example, it will return our
product data.
<script context="module">
  export async function load({ params, fetch }) {
    const response = await fetch(`/product/${params.handle}.json`);
    const product = await response.json();

    return {
      props: {
        product: product.product
      }
    };
  }
</script>
Now our load function is returning the data fetched from our endpoint, but the page still won't load. Since we changed our endpoint's dynamic param from
name to
handle in the beginning of this module, we'll have to update this in our endpoint as well. Within our
[handle].json.js endpoint, update the code to use
handle anywhere
name is being used.
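For reference, a standalone [handle].json.js endpoint might look something like the sketch below. The in-memory products object is a hypothetical stand-in for however you actually store product data, and in SvelteKit the get function would be exported (`export async function get(...)`); the export keyword is omitted here only so the sketch stands alone:

```javascript
// Sketch of a standalone endpoint for /product/[handle].json.
// The data store is made up for illustration; a real app would
// query a database or an external API instead.
const products = {
  shirt: { name: 'shirt', price: 20 },
  cup: { name: 'cup', price: 8 }
};

async function get({ params }) {
  const product = products[params.handle];

  if (!product) {
    return { status: 404, body: { message: 'Not found' } };
  }

  // Shape mirrors what the load function expects: product.product
  return { status: 200, body: { product } };
}
```

The `{ status, body }` return shape is what SvelteKit turns into the HTTP response that our load function fetches.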
All of the code within our
load function will run on both the client and the server, before the component is created, and we will now have
product available to us in our component as a prop. If you watched my previous video on components, you may remember that in order to access props in a component we need to export that prop to make it available for assignment. It's important to remember to export our prop in our normal script tag like this.
<script context="module"> export async function load({ params, fetch }) { const response = await fetch(`/product/${params.handle}.json`); const product = await response.json(); return { props: { product: product.product, }, }; } </script> <script> export let product; </script>
Now, our product's data is successfully being returned and will be displayed in our page. It's also important to point out that the
load function will run anytime a parameter being used within it changes. This means if our product's handle changes,
load() will run again before re-mounting the component.
Now, let's dive into how exactly this function works. This load function will be called before the component is created, and it will fetch our product list data from
/product/[handle].json, which is our endpoint.
load function is run on the server where it renders the HTML with our data. But this code also runs on the client. To prove this we can log
product in our console. If we refresh the page, we will see
product logged in the browser's console. This is because the code is running on the client. However, we will also see product logged in the terminal. That's because the
load function is being run on the server as well. What's interesting is if we go to a different route and click back to
/product/shirt, we will see that
product is once again being logged in our browser's console, but not in the terminal. Why is this?
To best explain what's happening, let's take a look at a diagram.
When the page initially loads, the client requests
/product/shirt from the server. The server then calls
load, which fetches
/product/shirt.json from the endpoint. However, the data does not need to be served from your app. It can just as easily be fetched from an external API.
The server then uses the returned data to render the HTML. In this particular example, the
load function is calling
fetch. It's important to note, that if you call
fetch in
load, the resulting data is inlined into the HTML.
When the browser hydrates the HTML,
load is immediately called again, this time from the browser. However, this time
fetch won't hit the network, because it reads the inlined data instead. Only calling
fetch once guarantees consistency between server and client, and is a more efficient use of bandwidth.
This explains why we see product logged in both the browser and terminal when we refresh the page (because it is getting called twice, once on the server and once on the client). But you may be wondering, why call it twice? Why not just call it once and serialize the output of load? By calling
load on both sides, we are not restricted as to what we can load. For example, you can conditionally 'load' different components, even though they couldn't be serialized.
Now we have a decent understanding of how
load works when we first load the page. But if you remember, on subsequent page loads, we only see product being logged in the browser's console. It no longer is logged in the terminal. We can test this out by routing to a different page and then clicking back into
/product/cup.
Why is this happening? Shouldn't load be getting called twice? Once on the client and on the server? Well, after the initial page load, any subsequent page loads will call
load directly from the client. And this time, any
fetch calls will hit the network as depicted in this diagram.
Now is also a good time to point out that
load() is very different than one of Svelte's lifecycle functions,
onMount. Every component has a lifecycle that starts when it is created, and ends when it is destroyed.
load() will run before the component is rendered to the DOM, and the component will not be rendered until we receive a response from
load(). This means that we do not need to display any sort of loading indicator.
onMount() is different because it runs as soon as the component is mounted to the DOM. This means that the user will see the component before we receive a response from
onMount().
load() should typically be used when you are fetching data, but there are use cases where you may need to use
onMount() over
load(), so it is important to understand the difference.
In our example, we use
load() to fetch the product data. Now, what if we want to display a modal component alerting the user that the item is on sale, but only after they have been on the page for five seconds? We can do this using the
onMount function. Here, I've already added our modal component to this page which will be displayed if
showModal is true. First we need to import
onMount from Svelte. Now, let's create a new variable,
seconds, which will start off as zero. In our
onMount function we will set an interval that will increment
seconds by one every second. Within this interval let's set
showModal to true if
seconds is 5. I'll also log the value of
seconds.
<script context="module">
  export async function load({ params, fetch }) {
    const response = await fetch(`/product/${params.handle}.json`);
    const product = await response.json();

    return {
      props: {
        product: product.product
      }
    };
  }
</script>

<script>
  import { onMount } from 'svelte';
  import Modal from '$lib/Modal.svelte'; // adjust the path to wherever your Modal component lives

  export let product;

  let showModal = false;
  let seconds = 0;

  onMount(() => {
    const interval = setInterval(() => {
      seconds += 1;
      console.log(seconds);
      if (seconds === 5) {
        showModal = true;
      }
    }, 1000);
  });
</script>

{#if showModal}
  <Modal on:click={() => (showModal = false)}>
    <span slot="header">
      <div class="text-center font-bold uppercase mb-4">This item is on sale!</div>
    </span>
    <span slot="body">
      <div class="mb-4">{product.name} is on sale! It is only {product.price}</div>
    </span>
    <span slot="button">
      <button on:click={() => (showModal = false)}>Awesome!</button>
    </span>
  </Modal>
{/if}
Now, if we were to run this in the browser, we would see that every second our
seconds variable is increased by one, and once it hits 5 we see our modal. Now, we should also clear our interval once our modal appears as well as when the component is unmounted. We can do this by returning a function like this.
onMount(() => {
  const interval = setInterval(() => {
    seconds += 1;
    console.log(seconds);
    if (seconds === 5) {
      showModal = true;
      clearInterval(interval);
    }
  }, 1000);

  return () => {
    clearInterval(interval);
  };
});
Anytime a function is returned from
onMount, it will be called when the component is unmounted. It's also worth mentioning that unlike the load function,
onMount can be used in all components. Not just pages.
Now that we know the correct way to load data into our app using the
load function, in the next module we will learn how to prefetch our data to make our app feel extra snappy. | https://vercel.com/docs/beginner-sveltekit/loading | CC-MAIN-2022-33 | refinedweb | 1,761 | 72.97 |
#include <OnixS/CME/MDH/SocketFeedEngine.h>
Represents a collection of settings affecting the behavior of the multi-threaded feed engine while working with network-related services.
Definition at line 32 of file SocketFeedEngine.h.
Initializes the given instance of the network settings with the default values.
Definition at line 44 of file SocketFeedEngine.h.
Cleans everything up.
Definition at line 53 of file SocketFeedEngine.h.
Defines the amount of time the Feed Engine spends waiting for I/O on a socket while running the master processing loop.
Time is measured in milliseconds.
Definition at line 121 of file SocketFeedEngine.h.
Sets dataWaitTime.
Definition at line 128 of file SocketFeedEngine.h.
Max size for network packet transmitted by MDP.
Definition at line 60 of file SocketFeedEngine.h.
Max size for network packet transmitted by MDP.
Definition at line 67 of file SocketFeedEngine.h.
Defines the size, in bytes, of the receive buffer for sockets.
Definition at line 98 of file SocketFeedEngine.h.
Sets socketBufferSize.
Definition at line 105 of file SocketFeedEngine.h.
Watch service to be used by Feed Engine.
Watch is used by Feed Engine to assign time points to packets received from the feeds.
Definition at line 79 of file SocketFeedEngine.h.
Watch service to be used by Feed Engine.
If no instance associated, UTC watch is used.
Definition at line 88 of file SocketFeedEngine.h. | https://ref.onixs.biz/cpp-cme-mdp3-handler-guide/classOnixS_1_1CME_1_1MDH_1_1SocketFeedEngineSettings.html | CC-MAIN-2022-33 | refinedweb | 221 | 54.08 |
I need to prompt the user to open a file for reading, then take the words
ending in "ed" and print them back to the file.
My problem is I understand opening a file, and comparing the strings but
putting them together so that I am writing the new items to the file
just boggles my poor soul.
So a few issues here: can't actually open the text file, and also not sure how to go about reading the array into it.
I appreciate all of your help, and please do not give me short replies so as to assume I know exactly what you mean because I am a sincere virgin to c++. thanks
#include <stdio.h>
#include <string.h>

int main( void )
{
    int i;                        /* loop counter */
    int length;                   /* length of current string */
    char array[ 5 ][ 20 ] = {0};  /* 5 strings from user */
    char filename [100];
    FILE* fileptr;

    printf ("Please enter the file to open:");
    scanf_s ("%s", filename);

    fileptr = fopen (filename, "r");
    if (fileptr == NULL)
    {
        printf ("Unable to open '%s' for input. \n", filename);
    }
    else
    {
        while (!feof(fileptr))
            /* read in 5 strings from user */
            for ( i = 0; i <= 4; i++ )
            {
                printf( "\nThe strings ending with \"ED\" are:\n" );
            }

        /* loop through 5 strings */
        for ( i = 0; i <= 4; i++ )
        {
            /* find length of current string */
            length = strlen( &array[ i ][ 0 ] );

            /* print string if it ends with "ED" */
            if ( strcmp( &array[ i ][ length - 2 ], "ED" ) == 0 )
            {
                printf( "%s\n", &array[ i ][ 0 ] );
                fclose (fileptr);
            } /* end if */
        } //end for //
    } //end else//

    return 0;
} //end main//
The camera eats first - it's the motto that fuels a lifestyle for some people, especially those on Instagram. Nowadays, people make separate social media accounts dedicated to the food they eat - whether it's homemade, fine dining, or simply snacking. Many people often seek a huge following on their food-dedicated Instagram accounts too because that gives them the opportunity to gain potential sponsorships, shoutouts, and just the satisfaction of knowing that people are interested in sharing the love for food.
However, there's often pressure of crafting the perfect post. Social media influencers go through great lengths to capture the most appealing photo or to come up the wittiest and most eye-catching caption to go with the photo. There's a lot of creative effort that goes into maintaining a social media presence. Wouldn't it be nice if you had someone else write the captions for you so that you can focus on eating delicious food?
In this article, we'll walk through how you can develop a functional Python program that generates Instagram-worthy captions to post along with your picture using OpenAI's GPT-3 engine. The app will take pictures sent to the Twilio API for WhatsApp and classify them using the Clarifai API to determine what kind of caption to give each picture.
- A Clarifai account. Sign up for a free account to generate an API key.
Configuration

If you are following this tutorial on a Mac or Unix computer, open a terminal window and enter the following commands to create a project directory, set up a virtual environment, and install the required packages:

$ mkdir foodiecaptioner
$ cd foodiecaptioner
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install openai twilio flask python-dotenv clarifai
For those of you following the tutorial on Windows, enter the following commands in a command prompt window:
$ md foodiecaptioner
$ cd foodiecaptioner
$ python -m venv venv
$ venv\Scripts\activate
(venv) $ pip install openai twilio flask python-dotenv clarifai
The last command uses
pip, the Python package installer, to install the five packages that we are going to use in this project, which are:
- The OpenAI Python client library, to send requests to the OpenAI GPT-3 engine.
- The Twilio Python Helper library, to work with SMS messages.
- The Flask framework, to create the web application.
- The python-dotenv package, to read a configuration file.
- The Clarifai Python library, to interact with the Clarifai API for image recognition.

Your OpenAI API key can be found in the Authentication tab of OpenAI's documentation.
The Python application will need to have access to this key, so we are going to create a .env file where the API key will be safely stored. The application we write will be able to import the key as an environment variable later.
Create a .env file in your project directory (note the leading dot) and enter a single line of text containing the following:
OPENAI_KEY=<YOUR-OPENAI-KEY>
Make sure that the
OPENAI_KEY is safe and that you do not expose the .env file in a public location.
Configure the Twilio WhatsApp Sandbox

Twilio provides a WhatsApp sandbox where you can develop and test your application. Once your app is ready for production, you can request production access for your Twilio phone number.
Use your smartphone to send a WhatsApp message with your sandbox's join phrase to your assigned WhatsApp number. If you are successful, you should receive a message as shown below.
Authenticate against Twilio and Clarifai Services
Next, we need to safely store some important credentials that will be used to authenticate against the Twilio and Clarifai services.
Create a file named .env in your working directory and paste the following text:
TWILIO_ACCOUNT_SID=<your Twilio account SID> TWILIO_AUTH_TOKEN=<your Twilio auth token> CLARIFAI_API_KEY=<your Clarifai API Key>
Find your Account SID and Auth Token on the Twilio Console and add their values to the TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN variables in the .env file.
To use the Clarifai API, you need to make an account and create an application in order to generate an API key for your project. Once your account is created, add the API key to the .env file as well.
Since this is a tutorial to create a WhatsApp chat bot, we will need to use a webhook (web callback) to allow real-time data to be delivered to our application by Twilio.
Open up another terminal window, navigate to the "foodiecaptioner" project directory, and start an ngrok tunnel to the local port that the Flask application will listen on (Flask's default is port 5000):

$ ngrok http 5000
The URL from ngrok in my example is “” but again, yours will be different.
Before you click on the “Save” button at the very bottom of the page, make sure that the request method is set to
HTTP POST.
Integrate Clarifai API to your application
This project is a fun opportunity to test out the Clarifai API and see how it works against the user inputs. Using computer vision and artificial intelligence, the Clarifai API is able to scrape and analyze the image to return the tags or "concepts" associated with the image. This API will be used to help our app identify what's going on in the picture so that we can generate a relevant Instagram caption for it.
With that said, let’s create a new Python file. I created image_classifier.py to store the code that uses Clarifai’s API. Copy the following code into the file you just created:
import os
from dotenv import load_dotenv
from clarifai.rest import ClarifaiApp

load_dotenv()
CLARIFAI_API_KEY = os.environ.get('CLARIFAI_API_KEY')

app = ClarifaiApp(api_key=CLARIFAI_API_KEY)


def get_picture_tags(image_url):
    response_data = app.tag_urls([image_url])
    relevant_tags = {}
    for concept in response_data['outputs'][0]['data']['concepts']:
        relevant_tags[concept['name']] = 1
    return relevant_tags.keys()
The
get_picture_tags is a function that will make a request to the Clarifai API so that the picture sent in through WhatsApp can be analyzed. The
response_data is parsed so that only the tags for the picture are saved in the
relevant_tags dictionary. Each descriptive tag is stored as a key with a value of 1; the value itself doesn't really matter. The important part is the keys, which will be passed into another file that handles the requests made to OpenAI's GPT-3 engine so that a caption can be generated from the picture. You could use another data structure to store the tags, but a dictionary makes it easy to expand on the project later, especially if you need to look up a particular word quickly.
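To give a concrete picture of how those tags can feed the caption request, here is a hypothetical helper that turns the tag list into a prompt string for GPT-3. The prompt wording and the number of tags kept are assumptions to tune against the OpenAI playground; only the tags themselves come from Clarifai:

```python
def build_caption_prompt(tags, max_tags=5):
    """Turn Clarifai tags into a GPT-3 prompt for an Instagram caption.

    The prompt template below is illustrative; adjust it until the
    playground produces captions in the style you want.
    """
    # Clarifai returns tags ordered by confidence, so keep the first few
    keywords = ', '.join(list(tags)[:max_tags])
    return (
        'Write a fun Instagram caption for a food photo '
        f'featuring: {keywords}.\nCaption:'
    )
```

The resulting string is what would be sent as the prompt in the request to the GPT-3 completion endpoint.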
Let's try out the Clarifai code really quick so you can see how impressive it is! Start the Python shell directly in your terminal and find a URL of a picture on the Internet that you would like to test the API on. Copy all the code above in the image_classifier.py file but call the
get_picture_tags function at the end, outside of the definition.
Here's an example of an image I used and what the Python shell looks like:
>>> import os >>> from dotenv import load_dotenv >>> from clarifai.rest import ClarifaiApp >>> >>> load_dotenv() True >>> CLARIFAI_API_KEY = os.environ.get('CLARIFAI_API_KEY') >>> app = ClarifaiApp(api_key=CLARIFAI_API_KEY) >>> >>> def get_picture_tags(image_url): ... response_data = app.tag_urls([image_url]) ... relevant_tags = {} #dictionary data structure for faster lookup time ... for concept in response_data['outputs'][0]['data']['concepts']: ... relevant_tags[concept['name']] = 1 ... return relevant_tags.keys() ... >>> get_picture_tags("") dict_keys(['dinner', 'soup', 'food', 'no person', 'lunch', 'meat', 'curry', 'pork', 'vegetable', 'dish', 'meal', 'bowl', 'hot', 'chicken', 'rice', 'chili', 'delicious', 'broth', 'parsley', 'cooking'])
Impressive classification huh? The Clarifai API was able to recognize the image of bun rieu from Pho Saigon's Yelp page and even included the tag "delicious" in the list of relevant tags which is definitely true.
Write captions with OpenAI GPT-3's engine
The OpenAI playground allows users to explore GPT-3 (Generative Pre-trained Transformer 3), a highly advanced language model that is capable of generating written text that sounds like an actual human worked on it. This powerful model can also read a user's input and learn about the context of the prompt to determine how it should generate more text in the same writing style.
How the GPT-3 engine will work in this case is that we will have to provide the AI with some material to work with. This is the time for the foodies and social media experts to pull up your favorite posts on Instagram. Look for popular hashtags and pictures of influencers and feed the captions you really like into the engine.
Here is an example of a string of text that will be passed to the engine. A variable named
session_prompt will provide an instructional sentence before showing the format of how you, the user, will interact with the app.
session_prompt=""" Below are some witty fun descriptions for Instagram pictures based on the tags describing the pictures. The tags for this picture are: { } Fun description: There is no picture. The tags for this picture are: {'food', 'sweet', 'chocolate', 'sugar', 'cake', 'milk', 'delicious', 'cup', 'candy', 'no person', 'breakfast', 'baking', 'party', 'cream', 'vacation', 'Christmas', 'coffee', 'table', 'color', 'cookie'} Fun description: Loving this delicious Christmas dessert platter this year! Happy Holidays everyone! The tags for this picture are: {'food', 'sweet', 'chocolate', 'sugar', 'cake', 'milk', 'delicious', 'cup', 'candy', 'no person', 'breakfast', 'baking', 'party', 'cream', 'vacation'} Fun description: Took myself on vacation to enjoy some fancy chocolate. A girl's best friend! The tags for this picture are: {'food', 'sweet', 'chocolate', 'sugar', 'cake', 'milk', 'delicious', 'breakfast', 'baking', 'party', 'cream', 'vacation'} Fun description: A perfectly small cake that I baked for my friends birthday! The tags for this picture are: {'hot', 'cheetos', 'snack', 'yummy', 'junk', 'food', 'delicious', 'vacation'} Fun description: I was so delighted when I found these cheetos that tasted exactly like pepperoni pizza! """
As you can see, the input is presented on the line that says "The tags for this picture are:". Since the format of the tags created from the Clarifai API are stored in list format with curly brackets and delimiters, we have to show the engine some examples of tags in the same exact format. Feel free to create random tags based on your favorite food picture, or a nice picture that you saw on social media. Be sure to also write an example of how you want the "Fun description" should say below so that OpenAI GPT-3's engine can have an idea of what future captions should look like based on the particular tags provided.
Great! It's time to put this information into code. Create a file named caption_generator.py and copy and paste the following code:
from dotenv import load_dotenv import os from random import choice import openai from flask import Flask, request load_dotenv() openai.api_key = os.environ.get('OPENAI_KEY') completion = openai.Completion() start_sequence = "\nFun description:" restart_sequence = "\n\nThe tags for this picture are:" session_prompt=<INSERT_YOUR_OWN> def generate_caption(picture_tags): prompt_text = f'{session_prompt}{restart_sequence}: {picture_tags}{start_sequence}:' response = openai.Completion.create( engine="davinci", prompt=prompt_text, temperature=0.7, max_tokens=64, top_p=1, frequency_penalty=0, presence_penalty=0.3, stop=["\n"], ) caption = response['choices'][0]['text'] return str(caption)
Be sure to replace
session_prompt with the one provided earlier or make up your own. It is essential that you keep the format of the example
session_prompt.
Notice that the values for the variables
start_sequence and
restart_sequence match up with the ones in the
session_prompt. As mentioned, this is how you maintain interaction with the OpenAI GPT-3 engine. You, the user, will send in a picture through WhatsApp. That picture will be sent to the Clarifai API to describe a list of tags related to the picture. Those tags are sent to the
generate_caption function and will follow the conventions of
prompt_text which essentially puts all the variables together in order to generate content.
After setting the value for
session_prompt, this function calls the
openai.Completion.create() method on the OpenAI client and passes to it a series of arguments that customize the engine’s response, including the new prompt. The
max_tokens variable, which stands for either a word or punctuation mark, was set to 64 so that the length of the caption will be appropriate - not too long, not too short. You can read more about the GPT-3 customization options in the Ultimate Guide to OpenAI-GPT3 Language Model or explore the OpenAI Playground for yourself.
If you're curious to see how the code works, you can start the Python shell in the terminal and copy and paste the entire code from caption_generator.py there. You should also take the tags from the previous section with Clarifai and pass them into the
generate_caption function. I won't copy the entire OpenAI GPT-3 code since it's above, but this an example of what should be at the bottom of the code, and a sample output that was generated:
>>> pic_tags = ['dinner', 'soup', 'food', 'no person', 'lunch', 'meat', 'curry', 'pork', 'vegetable', 'dish', 'meal', 'bowl', 'hot', 'chicken', 'rice', 'chili', 'delicious', 'broth', 'parsley', 'cooking'] >>> generate_caption(pic_tags) ' I am fascinated by the meticulous process that goes into making such a simple dish like chicken soup.'
Seems like OpenAI GPT-3's engine thinks this is a chicken soup instead of a Vietnamese soup dish, but at least the computer created a caption for us! If you want to train OpenAI to understand cultural foods, make sure to add more information to the `session_prompt` variable.
Now this brings us to the last part - connecting all these files together.
Build the main Instagram caption generator app
At this point we have two files - caption_generator.py and image_classifier.py - that define very important functions for the app to work. In order to call the functions in our main file, we will need to import them over.
Create a file named app.py and copy and paste the following code in order to import the functions and necessary modules to run the Flask app:
import os from dotenv import load_dotenv from flask import Flask, request from twilio.twiml.messaging_response import MessagingResponse from twilio.rest import Client from image_classifier import get_picture_tags from caption_generator import generate_caption load_dotenv() app = Flask(__name__) client = Client()
It's time to write out the functions and webhook to make this application come to life. Here's the code that should be added under the
client object:
def respond(message): response = MessagingResponse() response.message(message) return str(response) @app.route('/webhook', methods=['POST']) def reply(): sender = request.form.get('From') media_msg = request.form.get('NumMedia') message = request.form.get('Body').lower() if media_msg == '1': pic_url = request.form.get('MediaUrl0') relevant_tags = get_picture_tags(pic_url) caption = generate_caption(relevant_tags) return respond(caption) else: return respond(f'Please send in a picture.')
As you can see, a new function
respond() is created and called throughout the project. This function sends a response to the user. By calling this function, it also helps our app return the output to the user.
The webhook is short - the user will text in a picture that they want to generate a caption for. The
pic_url is passed to the
get_picture_tags function defined from the image_classifier.py file. The results from that function are stored in
relevant_tags which are then passed to the
generate_caption function defined in the caption_generator.py file and finally returned to the user over WhatsApp.
Run the WhatsApp Picture Sharing App
It’s time to wrap things up and start generating captions for your social media account! You can check out my GitHub repository to make sure you have the full project.
Make sure you have one tab running
flask and one tab running
ngrok. If you closed it for any reason, start it again now with the following commands in their respective tabs.
(venv) $ flask run
And in the second tab:
$ ngrok http 5000
Furthermore, make sure that your
ngrok webhook URL is updated inside the Twilio Sandbox for WhatsApp. Each time you restart ngrok, the URL changes, so you will have to replace the URL. Remember to add the
/webhook at the end of the ngrok forward URL.
And now the fun begins! Get your WhatsApp enabled mobile devices and text your WhatsApp number. Be careful, the OpenAI GPT-3 engine might not be the best and it might generate a caption that you don't like. If that happens, you can submit the same picture again until you find an idea for a caption that is Instagram-worthy!
Here's an example of my dinner - think this will get 2,000 likes?
If you liked seeing the tags that the Clarifai API had to offer, you can parse the strings from the
relevant_tags and append them to a
hashtags array so that they are included in your social media posts. Here's an example of what your app can also do:
Conclusion: Building a Caption Generator Application
Congratulations on finishing this caption generator application, and most importantly, good luck on your journey to 1,000+ likes! Who knows, maybe the captions from the app aren't the best but it can inspire you to create your own creative captions.
This simple WhatsApp tutorial is just one of the many fun projects you can do using Twilio API, Clarifai, OpenAI GPT-3, and of course, Python and Flask tools.
Perhaps you can take this project a step further by playing around with the Clarifai API and make sure to detect and reject NSFW photos or make your application work for only food pictures instead of anything else.
What’s next for OpenAI GPT-3 projects?
If you're hungry to build more, try out these ideas:
- Build a Halloween story generator with OpenAI's GPT-3 engine and Twilio WhatsApp
- Customize a WhatsApp chatbot with OpenAI's GPT-3 engine and Twilio WhatsApp
- Build an Image Recognition App on WhatsApp using Twilio MMS, Clarifai API, Python, and Flask
Let me know what's cooking in your kitchen or on your computer by reaching out to. | https://www.twilio.com/blog/instagram-food-whatsapp-python-openai-gpt3-clarifai | CC-MAIN-2021-10 | refinedweb | 2,911 | 52.39 |
I did some basic cleaning of the code, turned it into a Python package, and pushed it to Launchpad. I also added some minor changes, such as introducing a
definefunction to define new tables instead of automatically creating one when an insert was executed. Automatically constructing a table from values seems neat, but in reality it is quite difficult to ensure that it has the right types for the application. Here is a small code example demonstrating how to use the
definefunction together with some other operations.
import mysql.api.simple as api server = api.Server(host="example.com") server.test_api.tbl.define( { 'name': 'more', 'type': int }, { 'name': 'magic', 'type': str }, ) items = [ {'more': 3, 'magic': 'just a test'}, {'more': 3, 'magic': 'just another test'}, {'more': 4, 'magic': 'quadrant'}, {'more': 5, 'magic': 'even more magic'}, ] for item in items: server.test_api.tbl.insert(item)The table is defined by providing a dictionary for each row that you want in the table. The two most important fields in the dictionary is name and type. The name field is used to supply a name for the field, and the type field is used to provide a type of the column. The type is denoted using a basic Python type constructor, which then maps internally to a SQL type. So, for example,
intmap to the SQL
INTtype, and
boolmap to the SQL type
BIT(1). This choice of deciding to use Python types are simply because it is more natural for a Python programmer to define the tables from the data that the programmer want to store in the database. I this case, I would be less concerned with how the types are mapped, just assuming that it is mapped in a way that works. It is currently not possible to register your own mappings, but that is easy to add.So, why provide the type object and not just a string with the type name? The idea I had here is that since Python has introspection (it is a dynamic language after all), it would be possible to add code that read the provided type objects and do things with them, such as figuring out what fields there are in the type. It's not that I plan to implement this, but even though this is intended to be a simple database interface, there is no reason to tie ones hands from start, so this simple approach will provide some flexibility if needed in the future.
LinksSome additional links that you might find useful:
- Connector/Python
- You need to have Connector/Python installed to be able to use this package.
- Sequalize
- This is a JavaScript library that provide a similar interface to a database. It claims to be an ORM layer, but is not really. It is more similar to what I have written above.
- Roland's MQL to SQL and Presentation on SlideShare is also some inspiration for alternatives.
- | http://mysqlmusings.blogspot.se/2012/02/pythonic-database-api-now-with.html | CC-MAIN-2015-32 | refinedweb | 487 | 66.88 |
On Mon, Jun 20, 2011 at 11:25, Andrew W. Nosenko <address@hidden> wrote: > On Sun, Jun 19, 2011 at 22:03, Andy Wingo <address@hidden> wrote: >> Hi, >> >>. > > You are right. I marked the underlying char as volatile instead of > the pointer itself. > The proper version seems > static char* volatile addr; > but ATM I have no compiler around for verify that. > The static char* volatile addr; doesn't help also. Gcc continues to think that it has rights to inline the inner find_stack_direction() and messes the check as consequence. Solution: make Gcc unable to inline the function call. One of possible ways to achieve it: call the inner through volatile pointer. typedef int (*func_t)(); int find_stack_direction (); volatile func_t f = &find_stack_direction; int find_stack_direction () { static char *addr = 0; auto char dummy; if (addr == 0) { addr = &dummy; return f(); } else return (&dummy > addr) ? 1 : -1; } int main () { int r = find_stack_direction (); return r < 0; } It's not the single way, but just first that come to minds. -- Andrew W. Nosenko <address@hidden> | http://lists.gnu.org/archive/html/bug-guile/2011-06/msg00051.html | CC-MAIN-2015-40 | refinedweb | 168 | 65.12 |
using Sys.require i do this:
Sys.require( [Sys.components.dataView, Sys.components.dataContext, Sys.components.watermark, Sys.scripts.WebServices, Sys.scripts.Globalization] );however if i need to call a wcf service aka Sys.scripts.WebServices namespace.I need to include the svc file javascript proxy after this call: .
Sys.scripts.WebServices namespace is down before retrieving my wcf proxy classes?I would like to not rely on "inclusion" sequence .. hoping api already exists.
Thanks
Rama
hi,.
I apologize for leaving this question here, but nothing else seems even close to the appropriate place. I'm troubleshooting a performance problem where a query to a database installed on SQL 2008 is running slower than it did on 2005 running on
slower/older hardware. I noticed in the network traffic that the server is sending an 8000 byte TDL response packet (using several network packets) but it stops immediately and waits until it get's an ACK back from the client This appears to be slowing the
response dramatically.
The client and server are configured to use up to a 64K TCP window, but there is never more than 8K in flight at any time because of the delays waiting for ACKs.
Is this normal for SQL Server; a part of the design, or what? I'm not so concerned with the size of the response packets, I just don't think the server should be so squimish about sending out data, if it's there to send, and if the network isn't fully utilized.
If you know how to fix this, I would appreciate your help very much. Thanks!
<
<
Table | http://www.dotnetspark.com/links/717-sysrequire--wcf-svcjsdebug.aspx | CC-MAIN-2018-47 | refinedweb | 271 | 62.07 |
10 May 2013 22:23 [Source: ICIS news]
MEDELLIN, ?xml:namespace>
The incident at the
According to the company, a new Topping C unit will be operational within a few weeks, while a new coking facility currently under construction is expected to come online in 2015.
“Technical evaluations are being carried out with the objective to determine both the date of commissioning of Topping C as well as the reduced operational capacity, as Coke A will not be recovered,” said YPF chief financial officer Daniel Gonzalez.
“Our preliminary estimate is that processing capacity will be approximately 150,000 bbl/day with Coke A not in operation,” said Gonzalez.
According to YPF estimates, disruption to the refinery’s operations will reduce output this year by approximately 1.4m cubic metres (mcm).
“This will be composed by 70% of high value products, 20% of light products and 10% of heavy products,” Gonzalez said.
The executive said the company might have to import some of these products to make up for reduced production and to satisfy domestic demand.
“But we may also be able to export some crude oil surpluses from time to time,” he added.
YPF reported late on Thursday that its net income in the first quarter fell by 2.8% year on year to Argentinian pesos (Ps) 1.26bn ($241.4m, €185.9m), mainly due to the impact of deferred income tax of Ps481m.
Revenues in the January-to-March period were up by about 25% to Ps18.63bn from Ps14.85bn a year ago, driven by increased domestic sales of liquid fuels and higher prices for gasoline and diesel.
However, sales of chemical products fell both in the domestic and external markets, decreasing by 19.5% and 9.1%, respectively, compared to the prior-year quarter.
Meanwhile, crude oil production in the first quarter of 2013 slipped 0.7% from a year earlier, while gas output fell by 3.7%, the company said.
Despite the dip in output, Gonzalez said that the company stands by its goal of boosting oil and natural gas production by 4% and 1%, respectively, during 2013.
YPF has vowed to increase output after the company was renationalised by
($1 = €0.77; $1 = Ps | http://www.icis.com/Articles/2013/05/10/9667555/argentina-ypf-refinery-to-operate-at-reduced-capacity-until.html | CC-MAIN-2015-22 | refinedweb | 369 | 56.76 |
This article explains the advantages of the NetBeans IDE. A Celsius To Fahrenheit application is used as a sample.
Introduction
This article explains the advantages of the NetBeans IDE. A "Celsius To Fahrenheit" application is used as a sample.
First we create this app simply in a text editor like Notepad++.
Creating a CelToFarConverter GUI
We need to use the following procedure to create this app.
Step 1
We need to import several packages like:
import javax.swing.*;
import java.awt.*;
Swing is used to design the layout, in other words create a user interface and AWT is used to handle button event.
Step 2
Define a class to extend JFrame and that implements an ActionListener. Now we create one button object, two label components and two text fields as shown below.
public class CelToFarConverter extends JFrame implements ActionListener
{
JButton btn;
JLabel lb1, lb2;
JTextField tf1,tf2;
Step 3
Now create a constructor to display the button, label and text field and set their property as shown below.
Public CelToFarConverter()
{
add(lb1=new JLabel("Celsius"));
add(tf1=new JTextField(20));
add(lb2=new JLabel("Fahrenheit"));
add(tf2=new JTextField(20));
add(btn=new JButton("Click To Convert"));
setVisible(true);
tf2.setEditable(false);
setSize(600,300);
setTitle("Celsius To Fahrenheit Converter");
setLayout(new FlowLayout());
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
btn.addActionListener(this);
}
Step 4
Now create a method of ActionListener to generate an action in which it gets text from the user input and then converts it using a formula and displays it in another textfield as shown below.
public void actionPerformed(ActionEvent e)
float tempFahr = (float)((Double.parseDouble(tf1.getText()))
* 1.8 + 32);
tf2.setText(tempFahr + " F");
Step 5
Create a main method to start executing our app. As we know the compiler starts exection through main so create an object of our class to start construtor exection. As shown below.
public static void main(String args[])
new CelToFarConverter();
Complete Coding
import java.awt.event.*;
Public CelToFarConverter()
public void actionPerformed(ActionEvent e)
public static void main(String args[])
Output
Compile the program.
Press Enter to see the application.
Now we provide input in Celsius.
Now click on the button ("Click To Convert") and the results are shown below.
Using the NetBeans IDE
We will now create this app using the NetBeans IDE.
We create the same app in the NetBeans IDE. The purpose of this is to understand the advantages of using the NetBeans IDE. The NetBeans IDE provides drag and drop capability to relieve us of the necessity to write code to display a button, textfield, label, and so on. Let's proceed to create this app.
Open the NetBeans IDE then click on the "File" Menu then choose "New Project". A window is generated similar to the following figure.
Now choose "Java" -> "Java Application" -> click on "Next". Provide your project name. I used CelciusToFahrenheitCoverter as shown below.
Now click on "Finish". Your project is generated; now right-click on the project and choose "New" -> "JFrame Form" as shown below.
Now provide your class name. I used "CelToFarConverterGUI" as shown below.
Now click on "Finish". A JFrame is added by the NetBeans IDE as shown below.
Step 6
Now you see the advantages of using the NetBeans IDE. In this application we did not need to write code to display various components. NetBeans provides Drag and Drop capability to add several components of a frame.
Now first we change the title of our frame.
In the Navigator window Right-click on "JFrame" then choose "Properties" as shown below.
Step 7
You see the "title" property in the properties window. In the right corner you see there is an icon bar; click on that icon. A new window is generated. Type your Frame title there as shown below.
Step 8
Click on "OK" -> "Close" to close the properties window. Now the title will be changed. Now start a Drag and Drop in the frame.
Open the Palette by pressing ("CTRL+SHIFT+8"). The Palette window is shown below.
Step 9
Start drag and drop; we need to drag one button, two labels, and two text fields. First add a label then textfield. Similarly, the next label and a textfield and finally add a button. For drag you need to click on the label and then click on the frame where you need to add this as shown below.
Similarly add another field and now your frame is similar to the following frame.
Step 10
Change the default content written inside it. Right-click on them one by one. First right-click on "Label1" and choose "Edit Text" as shown below.
Step 11
Now change the name of each component according to your use. We use the following name.
Step 12
Now resize the component and then resize the frame. For resizing, click on each component separately and resize it. Now the frame is as shown below.
You will be surprised if you see its coding by clicking on "Source Part", that's code generated by the NetBeans IDE automatically. Take a look at the following.
Note: You don't need to make changes in the code so leave it as is.
Step 13
Now go to the "Navigator Window" and change the name of the component.
Right-click on each component and choose "Change Variable Name" and provide the following name.
Step 14
Now add ActionListener to the button to generate the result.
Right-click on the button then choose "Event" -> "Action" -> "Action Performed" as shown below.
Step 15
Write the following code in the actionPerformed methd.
float tempFahr = (float)((Double.parseDouble(tf1.getText()))* 1.8 + 32);tf2.setText(tempFahr + " F");
Now your project is ready to run.
Step 16
Right-click on the project and choose "Run". The following output is generated.
Celsius column
Now we need to provide only one input for the Celsius column only.
Provide a nuber and click on the button; you will see the following output:
Click on the button.
Summary
The main purpose of this article is to illustrate the advantages of Netbeans. As you can see, we need to write code only for ActionListener. It's so easy to develop a GUI application in the NetBeans IDE. Thanks for reading.
View All
View All | https://www.c-sharpcorner.com/UploadFile/fd0172/how-to-create-celsius-to-fahrenheit-converter-app-and-showin/ | CC-MAIN-2021-43 | refinedweb | 1,040 | 68.36 |
A class can inherit from another class to extend or customize the original class. Inheriting from a class allows you to reuse the functionality in that class instead of building it from scratch. A class can inherit from only a single class, but can itself be inherited by many classes, thus forming a class hierarchy. A well-designed class hierarchy is one that reasonably generalizes the nouns in a problem space. For example, there is a class called Image in the System.Drawing namespace, which the Bitmap, Icon, and Metafile classes inherit from. All classes are ultimately part of a single giant class hierarchy, of which the root is the Object class. All classes implicitly inherit from it.
In this example, we start by defining a class called Location. This class is very basic, and provides a location with a name property and a way to display itself to the console window:
class Location { // Implicitly inherits from object
   string name;
   // The constructor that initializes Location
   public Location(string n) {
      name = n;
   }
   public string Name {get {return name;}}
   public void Display( ) {
      System.Console.WriteLine(Name);
   }
}
Next, we define a class called URL, which inherits from Location. The URL class has all the same members as Location, as well as a new member, Navigate. Inheriting from a class requires specifying the base class in the class declaration, using the C++-style colon notation:
class URL : Location { // Inherit from Location
   public void Navigate( ) {
      System.Console.WriteLine("Navigating to "+Name);
   }
   // The constructor for URL, which calls Location's constructor
   public URL(string name) : base(name) { }
}
Now, we instantiate a URL, then invoke both the Display method (which is defined in Location) and the Navigate method (which is defined in URL):
class Test {
   static void Main( ) {
      URL u = new URL("");
      u.Display( );
      u.Navigate( );
   }
}
A class D may be implicitly upcast to the class B that it derives from, and a class B may be explicitly downcast to the class D that derives from it. For instance:
URL u = new URL(""); // URL's only constructor takes a name
Location l = u;      // upcast
u = (URL)l;          // downcast
If the downcast fails, an InvalidCastException is thrown.
The as operator performs a downcast that evaluates to null if the downcast fails:
u = l as URL;
The is operator tests whether an object is or derives from a specified class (or implements an interface). It is often used to perform a test before a downcast:
if (l is URL) ((URL)l).Navigate( );
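The two operators combine naturally: test with is before an explicit cast, or cast with as and check for null. The sketch below is a minimal, self-contained illustration that reuses the chapter's Location and URL names; the constructor argument is a placeholder.

```csharp
using System;

class Location {
   public string Name;
   public Location(string n) { Name = n; }
}

class URL : Location {
   public URL(string n) : base(n) { }
   public void Navigate( ) { Console.WriteLine("Navigating to " + Name); }
}

class Test {
   static void Main( ) {
      Location l = new URL("placeholder");  // implicit upcast

      // Style 1: test with is, then cast explicitly
      if (l is URL)
         ((URL)l).Navigate( );

      // Style 2: cast with as, then null-check
      URL u = l as URL;
      if (u != null)
         u.Navigate( );
   }
}
```

Either style avoids the InvalidCastException that a bare (URL)l would throw if l actually referenced some other kind of Location.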
Polymorphism is the ability to perform the same operations on many types, as long as each type shares a common subset of characteristics. C# custom types exhibit polymorphism by inheriting classes and implementing interfaces (see Section 3.5 later in this chapter).
To continue with our inheritance example, the Show method can perform the operation Display on both a URL and a LocalFile, because both types inherit a Location's set of characteristics:
class LocalFile : Location {
   public void Execute( ) {
      System.Console.WriteLine("Executing "+Name);
   }
   // The constructor for LocalFile, which calls Location's constructor
   public LocalFile(string name) : base(name) { }
}
class Test {
   static void Main( ) {
      URL u = new URL("");
      LocalFile l = new LocalFile("c:\\readme.txt");
      Show(u);
      Show(l);
   }
   public static void Show(Location loc) {
      System.Console.Write("Location is: ");
      loc.Display( );
   }
}
A key aspect of polymorphism is the ability for each type to exhibit a shared characteristic in its own way. A base class may have virtual function members, which enable a derived class to provide its own implementation for that function member (also see Section 3.5 later in this chapter):
class Location {
   public virtual void Display( ) {
      Console.WriteLine(Name);
   }
   ...
}
class URL : Location {
   // chop off the "http://" (7 characters) at the start
   public override void Display( ) {
      Console.WriteLine(Name.Substring(7));
   }
   ...
}
URL now has a custom way of displaying itself. The Show method in the Test class in the previous section now calls the new implementation of Display. The signatures of the overridden method and the virtual method must be identical, and, unlike in Java and C++, the override keyword is also required.
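To watch the dispatch happen, here is a self-contained variation of the example in which Describe returns a string rather than printing (a testability tweak, not part of the chapter's code); the call goes to URL's override even when made through a Location reference.

```csharp
using System;

class Location {
   protected string name;
   public Location(string n) { name = n; }
   public virtual string Describe( ) { return name; }
}

class URL : Location {
   public URL(string n) : base(n) { }
   // chop off the leading "http://" (7 characters)
   public override string Describe( ) { return name.Substring(7); }
}

class Test {
   static void Main( ) {
      Location loc = new URL("http://example.com");
      // Dispatched to URL's override despite the static type being Location
      Console.WriteLine(loc.Describe( ));  // prints "example.com"
   }
}
```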
A class may be declared abstract. An abstract class may have abstract members. These are function members without implementation that are implicitly virtual. In our earlier examples, we had a Navigate method for the URL and an Execute method for the LocalFile. Instead, Location could be an abstract class with an abstract method called Launch:
abstract class Location {
   public abstract void Launch( );
}
class URL : Location {
   public override void Launch( ) {
      Console.WriteLine("Run Internet Explorer...");
   }
}
class LocalFile : Location {
   public override void Launch( ) {
      Console.WriteLine("Run Win32 Program...");
   }
}
A derived class must override all its inherited abstract members, or must itself be declared abstract. An abstract class cannot be instantiated. For instance, if LocalFile does not override Launch, then LocalFile itself must be declared abstract, perhaps so that Shortcut and PhysicalFile can derive from it.
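The payoff is that every concrete Location can be driven through the abstract member. Below is a sketch in which Launch returns its message instead of printing it (so the result is checkable); the parameterless constructors are an assumption of this sketch, and the commented-out line shows that instantiating the abstract class itself is a compile-time error.

```csharp
using System;

abstract class Location {
   public abstract string Launch( );
}

class URL : Location {
   public override string Launch( ) { return "Run Internet Explorer..."; }
}

class LocalFile : Location {
   public override string Launch( ) { return "Run Win32 Program..."; }
}

class Test {
   static void Main( ) {
      // Location loc = new Location( );  // error: cannot create an instance of an abstract class
      Location[] locations = { new URL( ), new LocalFile( ) };
      foreach (Location l in locations)
         Console.WriteLine(l.Launch( ));
   }
}
```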
A class may prevent other classes from inheriting from it by specifying the sealed modifier in the class declaration:
sealed class Math { ... }
The most common scenario for sealing a class is when a class is composed of only static members, which is the case with the Math class. Another effect of sealing a class is that it enables the compiler to turn all virtual method invocations made on that class into faster nonvirtual method invocations.
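Here is a sketch of the all-static scenario (MathUtil is a hypothetical name, chosen to avoid colliding with the framework's System.Math): any attempt to derive from the sealed class is a compile-time error. In later versions of C#, the static class modifier expresses the same intent even more directly.

```csharp
using System;

sealed class MathUtil {
   public static int Square(int x) { return x * x; }
}

// class FancyMath : MathUtil { }  // error: cannot derive from sealed type 'MathUtil'

class Test {
   static void Main( ) {
      Console.WriteLine(MathUtil.Square(12));  // prints 144
   }
}
```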
Aside from calling a constructor, the new keyword can also be used to hide the base class data members, function members, and type members of a class. Overriding a virtual method with the new keyword hides, rather than overrides, the base class implementation of the method:
class B { public virtual void Foo( ) { } } class D : B { public override void Foo( ) { } } class N : D { public new void Foo( ) { } // hides D's Foo } N n = new N( ); n.Foo( ); // calls N's Foo ((D)n).Foo( ); // calls D's Foo ((B)n).Foo( ); // calls D's Foo
A method declaration with the same signature as its base class must explicitly state whether it overrides or hides the inherited member.
This section deals with the subtleties of the virtual function member calling mechanism.
In C#, a method is compiled with a flag that is true if it overrides a virtual method. This flag is important for versioning. Suppose you write a class that derives from a base class in the .NET Framework, and deploy your application to a client computer. Later, the client upgrades the .NET Framework, and that base class now has a virtual method that matches the signature of one of your methods in the derived class:
class B { // written by the library people virtual void Foo( ) {...} // added in latest update } class D : B { // written by you void Foo( ) {...} }
In most object-oriented languages such as Java, methods are not compiled with this flag, so a derived class's method with the same signature is assumed to override the base class's virtual method. This means a virtual call is made to D's Foo, even though D's Foo was unlikely to have been made according to the specification intended by the author of B. This could easily break your application. In C#, the flag for D's Foo is false, so the runtime knows to treat D's Foo as new, which ensures that your application functions as it was originally intended. When you get the chance to recompile with the latest framework, you can add the new modifier to Foo, or perhaps rename Foo to something else.
An overridden function member may seal its implementation, so that it cannot be overridden. In our earlier virtual function member example, we could have sealed the URL's implementation of Display. This prevents a class that derives from URL from overriding Display, which provides a guarantee on the behavior of a URL.
class Location { public virtual void Display( ) { Console.WriteLine(Name); } ... } class URL : Location { public sealed override void Display( ) { Console.WriteLine(Name.Substring(6)); } ... }
Very occasionally, it is useful to override a virtual function member with an abstract one. In this example, the abstract Editor class overrides the Text property to ensure that concrete classes deriving from Editor provide an implementation for Text.
class Control { public virtual string Text { get { return null; } } } abstract class Editor : Control { abstract override string Text {get;} }
The base keyword is similar to the this keyword, except that it accesses an overridden or hidden base class function member. The base keyword is also used to call a base class constructor (see the next section) or access a base class indexer (by using base instead of this). Calling base accesses the next most derived class that defines that member. The following builds upon our example from the section on the this keyword:
class Hermit : Dude { public void new Introduce(Dude a) { base.Introduce(a); Console.WriteLine("Nice Talking To You"); } }
Initialization code can execute for each class in an inheritance chain, in constructors as well as field initializers. There are two rules. The first is field initialization occurs before any constructor in the inheritance chain is called. The second is the base class executes its initialization code before the derived class does.
A constructor must call its base class constructors first. In a case in which the base class has a parameterless constructor, the constructor is implicitly called. In a case in which the base class provides only constructors that require parameters, the derived class constructor must explicitly call one of the base class constructors with the base keyword. A constructor may also call an overloaded constructor (which calls base for it):
class B { public int x ; public B(int a) { x = a; } public B(int a, int b) { x = a * b; } // Notice how all of B's constructors need parameters } class D : B { public D( ) : this(7) { } // call an overloaded constructor public D(int a) : base(a) { } // call a base class constructor }
Consistent with instance constructors, static constructors respect the inheritance chain, so each static constructor from the least derived to the most derived is called. | http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+I+Programming+with+C/Chapter+3.+Creating+Types+in+C/3.2+Inheritance/ | CC-MAIN-2018-51 | refinedweb | 1,655 | 52.19 |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
Hi,
On my character rig, I have a line like this:
hips_con = doc.SearchObject('hips_con)
hips_con = doc.SearchObject('hips_con)
It works on the source document but once I reference the file it doesn't work since the file will now have a name of
character_name::hips_con
character_name::hips_con
I can't delete the namespace entirely since there will be several character that have the same rig set-up in a single scene.
So, is there a way to limit the doc.SearchObject() on the source document and not on the reference document?
Thank you for looking at the problem.
Hi @bentraje could you be more specific or link a scene file example.
You say namespace, but so far only Xref used namespace. You also talk about referencing the file which makes me thinking of an Xref.
But since you speak about the character object, which also uses some kind of virtual document.
I'm not sure.
Cheers,
Maxime.
Hi @m_adam
Thanks for the response.
Sorry for the confusion.
Yes, I am talking with the Xref object when I mentioned "referenced file"
and No, I'm not talking about character object. I just mentioned character. As in my character in general.
Please see the illustration file below:
In the source file, it prints the BaseObject since it found the object
In the reference file, it should print also the BaseObject but it finds none because of the namespace.
That is why I am looking to limit the doc.SearchObject function on the source document and not on the file it is referenced.
Is there a way around this?
Thank you.
Hi @bentraje, you actually have to handle it yourself.
Here is an example
import c4d
def main():
nameToSearch = "Cube"
# Retrieves the parent object of the host object
parent = op.GetObject().GetUp()
xref = None
# Iterates overs each parent
while parent:
# if it's an xref, break and store it
if parent.CheckType(1025766):
xref = parent
break
parent = parent.GetUp()
# If an xref is defined then retrieve the namespace
if xref is not None:
namespace = xref[c4d.ID_CA_XREF_NAMESPACE]
nameToSearch = namespace + "::" + nameToSearch
# Search the object
Cube = doc.SearchObject(nameToSearch)
print Cube
Remember if you simply want to access the host object you can call BaseTag.GetObject from the python tag.
If you have any question, please let me know.
Cheers,
Maxime.
@m_adam
I have a handful of doc.SearchObject() so I was looking for an easier route.
I guess there is no way around it except concatenating strings.
doc.SearchObject()
Anyhow, thanks for the code and confirmation.
Will tag this thread as closed.
Have a great day ahead! | https://plugincafe.maxon.net/topic/11624/limit-the-doc-searchobject-on-the-source-document/? | CC-MAIN-2021-43 | refinedweb | 477 | 67.96 |
bing_jeen363189
30-10-2019
Hi,
I'm currently having issues with populating video metadata in AEM specifically for videos exported from Adobe Media Encoder. When I upload the source video to AEM, the metadata populates without any issues. I tried different export settings in Adobe Media Encoder and in After Effects but have not found a solution. I noticed the metadata in the exported video files has "pantry" attached to them. From what I've gathered in the forums, this is to mark exported video files as derivatives and to maintain integrity. I've attached a screenshot below for reference.
Please advise. Thanks!
kautuk_sahni
Community Manager
bhaskar.kumar chirags8739021hamidk92094312 hamidk92094312 SonDang BrijeshY rachanam15474013 aemmarc JaideepBrar berliant Any inputs on this one?
aemmarc
Employee
31-10-2019
Check the Namespace is defined in AEM. The XMP Metadata Writeback process will skip namespaces that aren't defined and not write that metadata to the file.
Hi aemmarc,
Just checked back with my team. For the source videos, the namespace is defined as what's shown from the screenshot I've provided (RetouchingWorkflow, Setlocation, etc.) and the metadata appears as we intended (Video, NYS1, etc).
After editing and exporting the videos through Media Encoder or After Effects, it changes from what was defined to "pantry <namespace>" (Pantry Retouchingworkflow, Pantry, Setlocation, etc) and the metadata from the source videos are still intact.
Here's a screenshot of comparing the results I get between a source video and edited video uploaded to AEM.
It appears that extra word "pantry" is preventing the metadata from populating in AEM. Is it possible to export videos without "pantry" attached to the <namespace>? The goal is to populate the AEM page with the metadata from the edited video. I forgot to clarify, the first screenshot I provided shows the namespaces and metadata from both source video (left) and exported video (right) read from exiftool. | https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager/video-metadata-not-populating-in-aem/m-p/324974/highlight/true | CC-MAIN-2020-50 | refinedweb | 315 | 63.8 |
.
Understanding Render()
In the last part of this series, the following code was added at the end of the “index.js” file:
ReactDOM.render( <h1>Hello, world!</h1>, document.getElementById('root') );
The render() function as part of ReactDOM object is central to how React works when used to create HTML content. It “renders” all of the code passed to it.
Renders?
The two arguments passed to the RectDOM object in this example are (1) the HTML content and (2) where it should put this content. (By default, the root will be an element with the id of ‘root’.)
The HTML code passed to the render() function may seem strange, as it was not enclosed in strings or otherwise escaped somehow. This is one of the most powerful parts of how React works: HTML can be used directly in the JavaScript code!
By passing it to the render() function, it is “rendered” into code that the underlining system understands and passed off to be processed. In the example, the simple use of a single tag is not overly complex, but it demonstrates the basis of how React works: everything is “rendered” into HTML by its code.
All Components Render()
When working with React, individual parts are called components. Think of them as sections of the interface.
Every component has the same function as the ReactDOM object: render().
With this in mind, it becomes easier to see how to use components. Because all components also render content, they can also use HTML in their code. Different parts of an interface could be broken up into sections (components) and then combined together. And each would have access to the same ability to render its contents!
Inheriting from React.Component
To use the React.Component class, the keyword extends can be used to “extend” a base object into an existing one as part of JavaScript ES6.
class Example extends React.Component { render() { return ( <h1>Hello, world!</h1> ); } }
Writing a new class Example, then, means using the extends keyword and having its parent be the React.Component class. This makes a new object a React component!
As mentioned before, all components render().
Thus, adding a render() function to the class allows it to render its own content. Add in a return statement helps it to pass HTML code. Like with the ReactDOM.render() function, it returns code that is ultimately rendered.
Using React.Component Classes with ReactDOM
React understands JavaScript classes a little differently than normal. In order to have them rendered as part of the normal ReactDOM.render() function, they must be included as if they too were HTML.
An existing Example class that extends the React.Component object, then, becomes the following:
<Example />
Notice that it has an ending slash, /. This signals, like with HTML, that the tag is self-closing.
Putting It All Together
The following code makes a new class called Example that inherits from React.Component and has its own render() function. This new class is included as part of the ReactDOM.render() process by using it as if it was a HTML tag.
import React from 'react'; import ReactDOM from 'react-dom'; import './index.css'; class Example extends React.Component { render() { return ( <h1>Hello, world!</h1> ); } } ReactDOM.render( <Example />, document.getElementById('root') );
While the final result did not change, the code that produced it did. Now, a class based on a React Component is passing off its rendering to the ReactDOM.render() function. | https://videlais.com/2019/05/25/learning-react-part-2-understanding-render/ | CC-MAIN-2020-45 | refinedweb | 574 | 59.4 |
Tutorial
How To Deploy a Gatsby Application to DigitalOcean App Platform
The author selected /dev/color to receive a donation as part of the Write for DOnations program.
Introduction.
In this tutorial, you will create a sample Gatsby app on your local machine, push your code to GitHub, then deploy to App Platform.
Prerequisites
On your local machine, you will need a development environment running Node.js; this tutorial was tested on Node.js version 14.16.0 and npm version 6.14.11. onto your local machine. You can follow the tutorial Contributing to Open Source: Getting Started with Git to install and set up Git on your computer.
An account on GitHub, which you can create by going to the Create your Account page.
-
The Gatsby CLI tool downloaded onto your local machine. You can learn how to do this in Step 1 of the How To Set Up Your First Gatsby Website tutorial.
An understanding of JavaScript will be useful. You can learn more about JavaScript at our How To Code in JavaScript series. You don’t need to know React in order to get started, but it would be helpful to be familiar with the basic concepts. You can learn React with this series.
Step 1 — Creating a Gatsby Project
In this section, you are going to create a sample Gatsby application, which you will later deploy to App Platform.
First, clone the default Gatsby starter from GitHub. You can do that with the following command in your terminal:
- git clone
The Gatsby starter site provides you with the boilerplate code you need to start coding your application. For more information on creating a Gatsby app, check out How To Set Up Your First Gatsby Website.
When you are finished with cloning the repo,
cd into the
gatsby-starter-default directory:
- cd gatsby-starter-default
Then install the Node dependencies:
- npm install
After you’ve downloaded the app and installed the dependencies, open the following file in a text editor:
- nano gatsby-config.js
You have just opened Gatsby’s config file. Here you can change metadata about your site.
Go to the
title key and change
Gatsby Default Starter to
Save the Whales, as shown in the following highlighted line:
module.exports = { siteMetadata: { title: `Save the Whales`,`, ], }
Close and save the file. Now open the index file in your favorite text editor:
- nano src/pages/index.js
To continue with the “Save the Whales” theme, replace
Hi people with
Adopt a whale today, change
Welcome to your new Gatsby site. to
Whales are our friends., and delete the last
<p> tag:
import React from "react" import { Link } from "gatsby" import { StaticImage } from "gatsby-plugin-image" import Layout from "../components/layout" import SEO from "../components/seo" const IndexPage = () => ( <Layout> <SEO title="Home" /> <h1>Adopt a whale today</h1> <p>Whales are our friends.</p> <StaticImage src="../images/gatsby-astronaut.png" width={300} quality={95} formats={["AUTO", "WEBP", "AVIF"]}Go to page 2</Link> <br /> <Link to="/using-typescript/">Go to "Using TypeScript"</Link> </Layout> ) export default IndexPage
Save and close the file. You are going to swap out the Gatsby astronaut image with a GIF of a whale. Before you add the GIF, you will first need to create a GIF directory and download it.
Go to the
src directory and create a
gifs file:
- cd src/
- mkdir gifs
Now navigate into your newly created
gifs folder:
- cd gifs
Download a whales GIF from Giphy:
- wget
Wget is a utilty that allows you to download files from the internet. Giphy is a website that hosts GIFs.
Next, change the name from
giphy.gif to
whales.gif:
- mv giphy.gif whales.gif
After you have changed the name of the GIF, move back to the root folder of the project and open up the index file again:
- cd ../..
- nano src/pages/index.js
Now you will add the GIF to your site’s homepage. Delete the
StaticImage import and element, then replace with the following highlighted lines:
import React from "react" import { Link } from "gatsby" import whaleGIF from "../gifs/whales.gif" import Layout from "../components/layout" import SEO from "../components/seo" const IndexPage = () => ( <Layout> <SEO title="Home" /> <h1>Adopt a whale today</h1> <p>Whales are our friends.</p> <div style={{ maxWidth: `300px`, marginBottom: `1.45rem` }}> <img src={whaleGIF} </div> <Link to="/page-2/">Go to page 2</Link> <br /> <Link to="/using-typescript/">Go to "Using TypeScript"</Link> </Layout>
Here you imported the whales GIF and included it in an image tag between the
<div> element. The
alt tag informs the reader where the GIF originated.
Close and save the index file.
Now you will run your site locally to make sure it works. From the root of your project, run the development server:
- gatsby develop
After your site has finished building, put
localhost:8000 into your browser’s search bar. You will find the following rendered in your browser:
In this section, you created a sample Gatsby app. In the next section, you are going to push your code to GitHub so that it is accessible to App Platform.
Step 2 — Pushing Your Code to GitHub
In this section of the tutorial, you are going to commit your code to
git and push it up to GitHub. From there, DigitalOcean’s App Platform will be able to access the code for your website.
Go to the root of your project and create a new git repository:
- git init
Next, add any modified files to git:
- git add .
Finally, commit all of your changes to git with the following command:
- git commit -m "Initial Commit"
This will commit this version of your app to git version control. The
-m takes a string argument and uses it as a message about the commit.
Note: If you have not set up git before on this machine, you may receive the following output:
*** Please tell me who you are. Run git config --global user.email "you@example.com" git config --global user.name "Your Name" to set your account's default identity. Omit --global to set the identity only in this repository.
Run the two
git config commands to provide this information before moving on. If you would like to learn more about git, check out our How To Contribute to Open Source: Getting Started with Git tutorial.
You will receive output like the following:
Output[master 1e3317b] Initial Commit 3 files changed, 7 insertions(+), 13 deletions(-) create mode 100644 src/gifs/whales.gif
Once you have committed the file, go to GitHub and log in. After you log in, create a new repository called gatsby-digital-ocean-app-platform. You can make the repository either private or public:
After you’ve created a new repo, go back to the command line and add the remote repo address:
- git remote set-url origin
Make sure to change
your_name to your username on GitHub.
Next, declare that you want to push to the
main branch with the following:
- git branch -M main
Finally, push your code to your newly created repo:
- git push -u origin main
Once you enter your credentials, you will receive output similar to the following:
OutputCounting objects: 3466, done. Compressing objects: 100% (1501/1501), done. Writing objects: 100% (3466/3466), 28.22 MiB | 32.25 MiB/s, done. Total 3466 (delta 1939), reused 3445 (delta 1926) remote: Resolving deltas: 100% (1939/1939), done. To * [new branch] main -> main Branch 'main' set up to track remote branch 'main' from 'origin'.
You will now be able to access your code in your GitHub account.
In this section you pushed your code to a remote GitHub repository. In the next section, you will deploy your Gatsby app from GitHub to App Platform.
Step 3 — Deploying your Gatsby App on DigitalOcean App Platform
In this step, you are going to deploy your app onto DigitalOcean App Platform. If you haven’t done so already, create a DigitalOcean account.
Open your DigitalOcean control panel, select the Create button at the top of the screen, then select Apps from the dropdown menu:
After you have selected Apps, you are going to retrieve your repository from GitHub. Click on the GitHub icon and give DigitalOcean permission to access your repositories. It is a best practice to only select the repository that you want deployed.
You’ll be redirected back to DigitalOcean. Go to the Repository field and select the project and branch you want to deploy, then click Next:
Note: Below Branch there is a pre-checked box that says Autodeploy code changes. This means if you push any changes to your GitHub repository, DigitalOcean will automatically deploy those changes.
On the next page you’ll be asked to configure your app. In your case, all of the presets are correct, so you can click on Next:
When you’ve finished configuring your app, give it a name like save-the-whales:
Once you select your name and click Next, you will go to the payment plan page. Since your app is a static site, you can choose the Starter plan, which is free:
Now click the Launch Starter App button. After waiting a couple of minutes, your app will be deployed.
Navigate to the URL listed beneath the title of your app. You will find your Gatsby app successfully deployed.
Conclusion
In this tutorial, you created a Gatsby site with GIFs and deployed the site onto DigitalOcean App Platform. DigitalOcean App Platform is a convenient way to deploy and share your Gatsby projects. If you would like to learn more about this product, check out the official documentation for App Platform. | https://www.digitalocean.com/community/tutorials/how-to-deploy-a-gatsby-application-to-digitalocean-app-platform | CC-MAIN-2021-17 | refinedweb | 1,609 | 64 |
Problem Statement
Design a stack that supports push, pop, top, and retrieving the minimum element in constant time.
push(x) — Push element x onto stack.
pop() — Removes the element on top of the stack.
top() — Get the top element.
getMin() — Retrieve the minimum element in the stack.
Note: Methods pop, top and getMin operations will always be called on non-empty stacks.
Example
push(-2); push(0); push(-3); getMin(); pop(); top(); getMin();
-3 0 -2
Explanation:
MinStack minStack = new MinStack();
minStack.push(-2);
minStack.push(0);
minStack.push(-3);
minStack.getMin(); // return -3
minStack.pop();
minStack.top(); // return 0
minStack.getMin(); // return -2
Approach
We know a stack is a data structure in which we can easily push an element at the end and also easily access or pop the last inserted element. These operations happens in O(1) time in a stack. But in this problem we have to make an extra function getMin that should be able to retrive the minimum element in the stack and that also in O(1) time.
So to design this data structure we will obviously use stack as main data structure but we have to add some extra algorithm to it so that we are able to get minimum element in constant time.
For this, lets see an example. Let suppose we have to insert these elements in stack in order :
5 6 4 7 5 3 , and then start popping.
We can observe that when we pop the element which is also current minimum then the new minimum is the same as it was while before pushing it. So we must have the knowledge of all the previous minimum elements till now so that we can retrieve the minimum after removal of current minimum element in O(1) time.
For this we can use another stack which will store only the minimum element(represented with green colour in fig) of the main stack . So the top element of the minimum stack will tell the current minimum element.
During insertion or removal of an element we will update the min stack as below:
- If the new pushed element is less than or equal to current minimum then we push this element into min stack also to update the current minimum.
- If the popped element is equal to the current minimum then then we remove this element from the min stack also to update the current minimum.
Implementation
C++ Program for Min Stack Leetcode Solution
#include <bits/stdc++.h> using namespace std; class MinStack { public: stack<int> st,mn; void push(int x) { if(st.empty() || x<=mn.top()) mn.push(x); st.push(x); } void pop() { if(st.top()==mn.top()) mn.pop(); st.pop(); } int top() { return st.top(); } int getMin() { return mn.top(); } }; int main() { MinStack* minStack = new MinStack(); minStack->push(-2); minStack->push(0); minStack->push(-3); int param1 = minStack->getMin(); minStack->pop(); int param2 = minStack->top(); int param3 = minStack->getMin(); cout<<param1<<endl; cout<<param2<<endl; cout<<param3<<endl; return 0; }
-3 0 -2
Java Program for Min Stack Leetcode Solution
import java.util.*; class MinStack { Stack<Integer> st=new Stack<>(); Stack<Integer> mn=new Stack<>(); public void push(int x) { if(st.empty() || x<=mn.peek()) mn.push(x); st.push(x); } public void pop() { if(st.peek().equals(mn.peek())) mn.pop(); st.pop(); } public int top() { return st.peek(); } public int getMin() { return mn.peek(); } } class Rextester{ public static void main(String args[]) { MinStack minStack = new MinStack(); minStack.push(-2); minStack.push(0); minStack.push(-3); int param1 = minStack.getMin(); minStack.pop(); int param2 = minStack.top(); int param3 = minStack.getMin(); System.out.println(param1); System.out.println(param2); System.out.println(param3); } }
-3 0 -2
Complexity Analysis for Min Stack Leetcode Solution
Time Complexity
O(1) : O(1) for all operations. As we know stack takes constant time for push, pop, and top. For getMin we have used another stack which makes this function also to work in constant time.
Space Complexity
O(n) : In worst case all operation is push, hence space complexity is O(n). | https://www.tutorialcup.com/leetcode-solutions/min-stack-leetcode-solution.htm | CC-MAIN-2021-25 | refinedweb | 683 | 56.35 |
Welcome to the updated web development in Python with the Django web framework tutorial series. In these tutorials, we will be covering everything you should need to get started and become familiar with the Django web framework. To do this, we're going to create a PythonProgramming.net-like website, which allows us to cover topics like:
...and much more!
Django is a very "high-level" (handles many things for you) and fully-featured web framework aimed at allowing you to achieve rapid development while also preparing you from the start to create websites that will scale in time, both in terms of speed and code bloat as well.
A "web framework" is anything that provides some scaffolding to help you make a web application.
Before you begin
I suggest you first become familiar with the Python 3 basics.
In this tutorial, I will be using Python 3.7 and Django 2.1.4. To get Django, you just do:
pip install django
Make sure you're pointing to pip for Python 3. Throughout this tutorial, I am going to mainly be using the syntax that you'll find useful on Linux. For example, to reference Python 3 on Linux, you would say python3. On Windows, you would instead do something like py -3.7, which would reference Python 3.7 specifically. Eventually, you are almost certain to launch your website on a Linux server. For this reason, I am going to use the Linux syntax, but it's highly common to develop locally first, and you can use any operating system for this.
You may find that, if you're viewing this tutorial long after I have covered it, things have changed with Django. You can either check the comments section of the videos, or grab the same version of Django that I am using, by doing:
pip install django==2.1.4
I would only do that for learning purposes, however. You should always try to use the latest version since it will include important changes, like security fixes.
I will be using sublime-text 3 as my editor, but you can use whatever editor you want. You can also use whatever operating system you want. I have developed with Django on Windows, Mac, and Linux. They all work just fine.
Alright, assuming you have Django installed, let's get started! With your installation of Django, you should now have a command line keyword: django-admin. You can use this to start a new project. Django considers all websites to be a collection of apps. Consider a website that has a forum, a shop, and a blog. Each of those three things would be considered its own "app." A collection of these apps is your project. So, let's start a project. In your terminal/cmd.exe, do:
django-admin startproject mysite
You can call the mysite bit whatever you want to call your project, but it seems to be a pretty consistent convention to call your project mysite, so I will stick with that.
The startproject command will create a new directory called whatever you called your project. In our case, that's called mysite.
Your project's directory is called mysite and your primary app is also called mysite. The only real role of this "primary app," as I am going to call it, is to link your other apps. You shouldn't really be doing much in there besides managing settings and urls, mostly.
Django is meant to be highly modular. When you develop an app like some forums for one website, you should be able to take your forum app to another website seamlessly and implement it nearly immediately.
Okay, so with this in mind, let's add our first actual app to our project. To do this, we will use manage.py. This is a helper python script that will allow you to do things within your project. First, we'll use it to create another app. In your terminal/command line, navigate to the dir where this manage.py file is, then do:
python3 manage.py startapp main
You should see that a new directory has been created called main. So now the top levels of your project's structure are something like:
mysite
-main (directory, this is your new app)
-mysite (directory, your "primary" app)
-manage.py (helper python script)
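One small aside (my addition here, not strictly required yet): for Django to fully recognize a new app later on — for things like models and templates — the app typically gets registered in the INSTALLED_APPS list inside mysite/settings.py. A minimal sketch of what that registration looks like, assuming Django 2.1's default settings file:

```python
# mysite/settings.py (sketch) -- register our new "main" app
# alongside Django's default apps so Django can discover its
# models, templates, etc. later on.
INSTALLED_APPS = [
    'main',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
```

If you skip this for now, the development server will still run; registration only starts to matter once the app defines models, templates, and so on.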
Okay great. Let's go ahead and run our server now. We will do this with manage.py. You should probably open a separate terminal/command prompt to run the server within. We will keep the server running through much of the development.
To run the server, do:
python3 manage.py runserver
You should see something like:

Performing system checks...

System check identified no issues (0 silenced).

You have 15 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.

January 10, 2019 - 18:50:09
Django version 2.1.5, using settings 'mysite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
For now, we can ignore the migrations thing; we'll talk about that later on. We can see that our development server is now running at http://127.0.0.1:8000. Open a browser and head to that address. You should see something like:
This is just the default "working" message that you will see if you're not yet handling the home page yourself. Let's replace this with something of our own! I will leave the server running in a separate terminal from now on.
Django uses what's called a Model View Controller paradigm. Every page you visit on a Django application is likely using these three things to serve you data.
While we call it an MVC (model, view, controller), you can imagine it working more or less in reverse. A user will visit a URL; your controller (urls.py) will point to a specific view (views.py). That view can then (it doesn't actually HAVE to) interface with your models.
We can simplify this slightly, however, and just have the controller point to your view, and the view can just return a string, just so we can get a feel for how things connect.
When first starting out with something like Django, all these connections and things we need to do in order to do seemingly simple things can seem daunting, and like it's not worth it. With other frameworks, displaying some simple text can be done in moments, and doesn't require all this fiddling about. Where Django shines is not with simple websites. It shines when you've got a website that has grown in time, and you keep wanting to add features...etc, but then you've got a huge mess of code. Django helps you to keep things organized, and forces you to immediately follow good practices which will scale over time. This is annoying at first, but worth it long term. I like to share the following comic:
We always start projects with the best of intentions, but, we tend to have poor foresight. The best thing Django can do for you is the very thing that you may find annoying initially: Abstraction. Over time if you wanted to add more attributes to your users? It would be super simple with Django. Say you wanted to add more functionality like a more advanced nav bar? How about search bar? How about a full redesign? Through Django, this would all be trivial.
Okay, so let's display some simple text at the homepage. To do this, we don't need a model, but we do need the view (which will dictate the text we want to display) and the controller to point to the view, based on the URL. Because I think of things in the order of the user, I would first start with the controller. So we need to tell Django that homepage should return some view. Django begins in your "primary" app when looking for URLs. So first, let's go to:
mysite/mysite/urls.py:
from django.contrib import admin from django.urls import path urlpatterns = [ path('admin/', admin.site.urls), ]
So we've only got one URL here, and that's apparently to some administration page, which isn't our focus right now. So, what we need to do is point this URL to a view. That said, Django sees websites as a collection of apps. So actually the
urls.py file inside your "primary" app is usually just going to point to your apps. How do we point to an app? We just point to that app's
urls.py file! So let's just add that:
from django.contrib import admin from django.urls import path, include urlpatterns = [ path("", include('main.urls')), path('admin/', admin.site.urls), ]
Aside from adding the extra path, don't forget to also import "include" from
django.urls.
Okay, so now django knows that, when the path is basically empty (the homepage), to look inside of the
main app at its
urls.py file to see if that file (the controller) points to a
view, which would be some function inside of that app's
views.py.
We have yet to modify our
main app's
urls.py file, and we also have not made any view. Let's do both! Let's navigate into our
mysite/main app. We can see that there are already various things here...but there's no
urls.py! Let's add that now:
mysite/main/urls.py
from django.urls import path from . import views app_name = 'main' # here for namespacing of urls. urlpatterns = [ path("", views.homepage, name="homepage"), ]
We import path the same as before. We locally import our view. We specify the app name (this isn't required, but you might as well just get in the habit of doing it. Later it will become very useful when we want to dynamically reference URLs). Like I said, Django is highly modular, and even if you wanted to change URL paths, or use someone else's application, but maybe not the same URL paths, you can easily do it. This is also why we give a
name as a parameter of the
path. Okay, great! We've got the controller all set. When someone visits the homepage, Django looks first at the
mysite/mysite/urls.py, seeing that it points to
mysite/main/urls.py, which then points to views.homepage (so, a function called homepage inside of
views.py). Do we have that? Nope. Let's do it.
views.py does already exist here, however, so open that up to edit it. It should look like:
mysite/main/views.py
from django.shortcuts import render # Create your views here.
Normally, these views will render some HTML template and pass some variables, but we're just going to make it super simple and return a straight HTTP response. To do this, we need to import it:
from django.http import HttpResponse
We told
urls.py to look for a
homepage function, so let's define that:
def homepage(request): return HttpResponse("pythonprogramming.net homepage! Wow so #amaze.")
Note that we pass
request. You will always pass this to your views.
Your full
views.py should now be:
mysite/main/views.py
from django.shortcuts import render from django.http import HttpResponse # Create your views here. def homepage(request): return HttpResponse("pythonprogramming.net homepage! Wow so #amaze.")
Now, go back to your browser and refresh. If you fiddled at all up to this point, you probably hit an error, and you will need to re-run the server since you will have stopped on an error most likely. Now, at the homepage, you should see:
We'll stop here, and what I suggest you do is make sure you can do all of the above, without the use of a tutorial. So, start a fresh project, add an app, configure the url.py files to point the homepage to a view, and have that view return a simple string. Things will only become more complicated from here, so it's essential you understand these basics up to this point. If you're fuzzy on anything, I strongly advise you to ask questions or Google it until you're solid on everything up to this point.
As we progress, I would suggest that you continue doing this. At the end of each tutorial, save the project. Then at the next tutorial, make a copy of the project, follow the tutorial, then try to do it to the copy without needing to consult the tutorial. You might forget import paths or function names, that's fine. What you want to make sure is that you actually understand what connections need to be made.
Alternatively, you could also work on a side-project that isn't the same as what we're doing here. As you follow along here, apply it to your own project. The most important thing is that you don't gloss over some concept that you don't understand. Join us on Discord and ask for help in the #help channel if you can't figure something out. | https://pythonprogramming.net/django-web-development-python-tutorial/ | CC-MAIN-2022-40 | refinedweb | 2,175 | 76.11 |
AccordionView, storing groups in separate modules
Next with AV, I'm trying to store separately the several groups to avoid a long long script and make easier maintaining. Is this possible? If yes, I'm doing something wrong. I attach both the launcher script and a module with a group.
Dank u
you can only add vanilla like objects in an
AccordionView, in your example the class
aGroupis not a vanilla like object
use:
from vanilla import * class aGroup(Group): def __init__(self): super(aGroup, self).__init__((0, 0, -0, -0)) self.PopUpButton = PopUpButton((10, 10, -10, -10), ['PopUpButton'], sizeStyle='small')
good luck!
Still not working with the launcher code :(
Sorry for worst formatting ever.
from mojo.UI import AccordionView from vanilla import * from storegroup import aGroup class launcher: def __init__(self): self.w = FloatingWindow((200, 600), title='accordionView') self.agroup = aGroup() descriptions = [ dict(label="aGroup", view=self.agroup, size=117, collapsed=False, canResize=False) ] self.w.accordionView = AccordionView((0, 0, -0, -0), descriptions) self.w.open() launcher()
BTW, is there a guide somewhere to correctly format in posts?
euh?
an
AccordionViewonly takes a vanilla like object, best is to subclass a
vanilla.Groupand add UI elements in there.
good luck
(a working version attached) | https://forum.robofont.com/topic/237/accordionview-storing-groups-in-separate-modules | CC-MAIN-2021-49 | refinedweb | 205 | 52.46 |
XmListReplaceItemsPos man page
XmListReplaceItemsPos — A List function that replaces the specified elements in the list
Synopsis
#include <Xm/List.h> void XmListReplaceItemsPos( Widget widget, XmString *new_items, int item_count, int position);
Description.
- widget
Specifies the ID of the List widget.
- new_items
Specifies the replacement items.
- item_count
Specifies the number of items in new_items and the number of items in the list to replace. This number must be nonnegative.
- elements from new_items. That is, the item at position is replaced with the first element of new_items; the item after position is replaced with the second element of new_items; and so on, until item_count is reached.
For a complete definition of List and its associated resources, see XmList(3).
Related
XmList(3).
Referenced By
XmList(3). | https://www.mankier.com/3/XmListReplaceItemsPos | CC-MAIN-2017-47 | refinedweb | 123 | 51.04 |
Help:Formatting
Revision as of 10:18, 9 April 2009
Here are some typical formatting elements. Hit the edit button above to see what you need to type to get each effect.
The Contents box you see below is generated automatically and responds to heading lines you add to the text:
Level 1 heading
Don't use level one headings if you can help it. Use level 2 headings for anything which needs a clear dividing line, and level 3 for most other things.
Level 2 heading
Level 3 heading
Level 4 heading
Level 5 heading
Level 6 heading
Use multiple single quotes for italics and bold instead of the equivalent HTML tags.
If you want to leave a paragraph break, use two carriage returns. If you only use one, it'll automatically join the second sentence onto the first.
You can add a horizontal divider, like the one above, by typing four dashes.
In talk pages, you can type --~~~~ (two dashes and four tildes) to sign your comments with your name and a datestamp. This makes it easier to tell who is writing. The signature looks like this: --NCarter 23:11, 28 April 2006 (GMT)
Lists
You can do lists by prefixing each item with asterisks:
- One
- Two
- Three
Numbered lists are done in the same way, but with hash signs:
- One
- Two
- Three
You can nest lists by adding extra symbols, and you can even mix numbered and non-numbered lists:
- One
- Two
- One
- Two
- One
- Two
- Three
- Three
Internal links
Don't use complete URLs for internal links. Just use the bit which appears after 'index.php?title=' in the URL. The following are examples of how to make various kinds of links:
Main Page - no need to use underscores for spaces... this is handled automatically.
Link with a different title - use a bar | character to separate the link from the title.
Category:MonoBehaviour - an unformatted link to a category. For links to other namespaces, it's necessary to use a leading colon to cancel out the other colon between the namespace and the page name.
A link to a category with a different title - Again, use a bar character as a separator.
External links
Unity - external link. Note that the link title is separated from the URL by a space. You can't use a bar character for this purpose. | https://wiki.unity3d.com/index.php?title=Help:Formatting&diff=prev&oldid=5930 | CC-MAIN-2021-39 | refinedweb | 394 | 70.73 |
TLDR;
Rector – The Power of Automated Refactoring is now 100% completed
Tomas Votruba and I first met a couple of years ago at one of my favorite conferences; the Dutch PHP Conference in Amsterdam (so actually, we’re very close to our anniversary, Tomas!). He presented Rector there and it was really inspiring. A year later I was working on a legacy migration problem: our team wanted to migrate from Doctrine ORM to “ORM-less”, with handwritten mapping code, etc. I first tried Laminas Code, a code generation tool, but it lacked many features, and also the precision that I needed. Suddenly I recalled Rector, and decided to give it a try. After some experimenting, everything worked and I learned that this tool really is amazingly powerful!
Thank you, Tomas
I asked Tomas if he would like to write a book together, about Rector, combining the perspective of a developer who needs to learn how to use and extend Rector, with the perspective of the creator who has a vision for the project. This turned out to be a very fruitful collaboration. To be extremely honest, in the beginning of the project I was really annoyed by Tomas’ contributions. As an example, this guy put all of the value object classes in a namespace called ValueObjects. I had never encountered anything like that. He also added all kinds of PHPStan rules that would throw errors in my face whenever I tried to commit anything. At first I was like: that is not how writing a book works, Tomas. You’re treating it as a software project. I want to have freedom. I want to treat it like art.
In the end, I realized we were optimizing for different things. He focused on:
Achieving the simplest possible setup for service container configuration.
Never having to remember any special rule: a failing build should remind you of your mistakes.
Maximum maintainability in the long run. This book needs to be useful not only this year, but during the life span of the project itself.
These are very valuable principles, and from this place I’d like to thank Tomas for leading by example here. I’ll never forget this, and will make it part of every future project (books and software projects alike). Thanks to this approach, every code sample is analyzed for issues, automatically formatted, and every code sample can be automatically refactored with Rector. When we need to, we can even upgrade the code base to a new PHP version. In fact, we could even decide to downgrade it!
Rector is better because of this book
While writing I often encountered a weird problem. Sometimes it turned out to be bug in Rector, sometimes an edge case that would be solved by a feature that was already on the roadmap (like migrating Rector to use static reflection). In all cases, Tomas was able to improve Rector, making the learning experience for news users much smoother, and more enjoyable.
You’ll be a better developer because of this book
Rector is a fascinating tool, but before you can effectively extend it for your own code transformation needs, you have to learn about some other related topics, like tokenizing, parsing, the Abstract Syntax Tree, node visitors, and so on. This book aims to provide a good introduction to all these topics, making you a better developer anyway, even if you’d never actually use Rector.
Conclusion
In conclusion: buy this book. It’s now 100% complete. It’ll teach you a lot about PHP as a language, how to improve legacy projects without wasting development time, and even about test-driven development.
| https://online-code-generator.com/release-of-the-rector-book/ | CC-MAIN-2021-43 | refinedweb | 610 | 62.07 |
- Access
- Rolling out changes
- Cleaning up
Feature flag controls
Access
To be able to turn on/off features behind feature flags in any of the GitLab Inc. provided environments such as staging and production, you need to have access to the ChatOps bot. The ChatOps bot is currently running on the ops instance, which is different from or.
Follow the ChatOps document to request access.
After you are added to the project test if your access propagated, run:
/chatops run feature --help
Rolling out changes
When the changes are deployed to the environments it is time to start rolling out the feature to our users. The exact procedure of rolling out a change is unspecified, as this can vary from change to change. However, in general we recommend rolling out changes incrementally, instead of enabling them for everybody right away. We also recommend you to not enable a feature before the code is being deployed. This allows you to separate rolling out a feature from a deploy, making it easier to measure the impact of both separately.
The GitLab feature library (using Flipper, and covered in the Feature Flags process guide) supports rolling out changes to a percentage of time to users. This in turn can be controlled using GitLab ChatOps.
For an up to date list of feature flag commands please see the source
code.
Note that all the examples in that file must be preceded by
/chatops run.
If you get an error “Whoops! This action is not allowed. This incident will be reported.” that means your Slack account is not allowed to change feature flags or you do not have access.
Enabling a feature for pre-production testing
As a first step in a feature rollout, you should enable the feature on and.
These two environments have different scopes.
dev.gitlab.org is a production CE environment that has internal GitLab Inc.
traffic and is used for some development and other related work.
staging.gitlab.com has a smaller subset of GitLab.com database and repositories
and does not have regular traffic. Staging is an EE instance and can give you
a (very) rough estimate of how your feature will look/behave on GitLab.com.
Both of these instances are connected to Sentry so make sure you check the projects
there for any exceptions while testing your feature after enabling the feature flag.
For these pre-production environments, the commands should be run in a
Slack channel for the stage the feature is relevant to. For example, use the
#s_monitor channel for features developed by the Monitor stage, Health
group.
To enable a feature for 25% of all users, run the following in Slack:
/chatops run feature set new_navigation_bar 25 --dev /chatops run feature set new_navigation_bar 25 --staging
Enabling a feature for GitLab.com
When a feature has successfully been enabled on a pre-production environment and verified as safe and working, you can roll out the change to GitLab.com (production).
Communicate the change
Some feature flag changes on GitLab.com should be communicated with parts of the company. The developer responsible needs to determine whether this is necessary and the appropriate level of communication. This depends on the feature and what sort of impact it might have.
Guidelines:
- If the feature meets the requirements for creating a Change Management issue, create a Change Management issue per criticality guidelines.
- For simple, low-risk, easily reverted features, proceed and enable the feature in
#production.
- For features that impact the user experience, consider notifying
#support_gitlab-combeforehand.
Process
Before toggling any feature flag, check that there are no ongoing
significant incidents on GitLab.com. You can do this by checking the
#production and
#incident-management Slack channels, or looking for
open incident issues
(although check the dates and times).
We do not want to introduce changes during an incident, as it can make diagnosis and resolution of the incident much harder to achieve, and also will largely invalidate your rollout process as you will be unable to assess whether the rollout was without problems or not.
If there is any doubt, ask in
#production.
The following
/chatops commands should be performed in the Slack
#production channel.
When you begin to enable the feature, please link to the relevant
Feature Flag Rollout Issue within a Slack thread of the first
/chatops
command you make so people can understand the change if they need to.
To enable a feature for 25% of the time, run the following in Slack:
/chatops run feature set new_navigation_bar 25
This sets a feature flag to
true based on the following formula:
feature_flag_state = rand < (25 / 100.0)
This will enable the feature for GitLab.com, with
new_navigation_bar being the
name of the feature.
This command does not enable the feature for 25% of the total users.
Instead, when the feature is checked with
enabled?, it will return
true 25% of the time.
To enable a feature for 25% of actors such as users, projects, or groups, run the following in Slack:
/chatops run feature set some_feature 25 --actors
This sets a feature flag to
true based on the following formula:
feature_flag_state = Zlib.crc32("some_feature<Actor>:#{actor.id}") % (100 * 1_000) < 25 * 1_000 # where <Actor>: is a `User`, `Group`, `Project` and actor is an instance
During development, based on the nature of the feature, an actor choice should be made.
For user focused features:
Feature.enabled?(:feature_cool_avatars, current_user)
For group or namespace level features:
Feature.enabled?(:feature_cooler_groups, group)
For project level features:
Feature.enabled?(:feature_ice_cold_projects, project)
If you are not certain what percentages to use, simply use the following steps:
- 25%
- 50%
- 75%
- 100%
Between every step you’ll want to wait a little while and monitor the appropriate graphs on. The exact time to wait may differ. For some features a few minutes is enough, while for others you may want to wait several hours or even days. This is entirely up to you, just make sure it is clearly communicated to your team, and the Production team if you anticipate any potential problems.
Feature gates can also be actor based, for example a feature could first be
enabled for only the
gitlab project. The project is passed by supplying a
--project flag:
/chatops run feature set --project=gitlab-org/gitlab some_feature true
For groups the
--group flag is available:
/chatops run feature set --group=gitlab-org some_feature true
Note that actor-based gates are applied before percentages. For example, considering the
group/project as
gitlab-org/gitlab and a given example feature as
some_feature, if
you run these 2 commands:
/chatops run feature set --project=gitlab-org/gitlab some_feature true /chatops run feature set some_feature 25 --actors
Then
some_feature will be enabled for both 25% of actors and always when interacting with
gitlab-org/gitlab. This is a good idea if the feature flag development makes use of group
actors.
Feature.enabled?(:some_feature, group)
Percentage of time rollout is not a good idea if what you want is to make sure a feature is always on or off to the users. In that case, Percentage of actors rollout is a better method.
Lastly, to verify that the feature is deemed stable in as many cases as possible, you should fully roll out the feature by enabling the flag globally by running:
/chatops run feature set some_feature true
This changes the feature flag state to be enabled always, which overrides the
existing gates (e.g.
--group=gitlab-org) in the above processes.
Disabling feature flags
To disable a feature flag that has been globally enabled you can run:
/chatops run feature set some_feature false
To disable a feature flag that has been enabled for a specific project you can run:
/chatops run feature set --group=gitlab-org some_feature false
You cannot selectively disable feature flags for a specific project/group/user without applying a specific method of implementing the feature flags.
Feature flag change logging
ChatOps level
Any feature flag change that affects GitLab.com (production) via ChatOps is automatically logged in an issue.
The issue is created in the gl-infra/feature-flag-log project, and it will at minimum log the Slack handle of person enabling a feature flag, the time, and the name of the flag being changed.
The issue is then also posted to the GitLab internal Grafana dashboard as an annotation marker to make the change even more visible.
Changes to the issue format can be submitted in the ChatOps project.
Instance level
Any feature flag change that affects any GitLab instance is automatically logged in features_json.log. You can search the change history in Kibana. You can also access the feature flag change history for GitLab.com in Kibana.
Cleaning up
A feature flag should be removed as soon as it is no longer needed. Each additional feature flag in the codebase increases the complexity of the application and reduces confidence in our testing suite covering all possible combinations. Additionally, a feature flag overwritten in some of the environments can result in undefined and untested system behavior.
To remove a feature flag, open one merge request to make the changes. In the MR:
- Add the ~”feature flag” label so release managers are aware the changes are hidden behind a feature flag.
- If the merge request has to be picked into a stable branch, add the appropriate
~"Pick into X.Y"label, for example
~"Pick into 13.0". See the feature flag process for further details.
- Remove all references to the feature flag from the codebase, including tests.
- Remove the YAML definition for the feature from the repository.
Once the above MR has been merged, you should:
- Clean up the feature flag from all environments with
/chatops run feature delete some_feature.
- Close the rollout issue for the feature flag after the feature flag is removed from the codebase.
Cleanup ChatOps
When a feature gate has been removed from the codebase, the feature record still exists in the database that the flag was deployed too. The record can be deleted once the MR is deployed to each environment:
/chatops run feature delete some_feature --dev /chatops run feature delete some_feature --staging
Then, you can delete it from production after the MR is deployed to prod:
/chatops run feature delete some_feature | https://docs.gitlab.com/ee/development/feature_flags/controls.html | CC-MAIN-2021-21 | refinedweb | 1,706 | 61.26 |
On 05/06/2010 09:44 PM, Brian Mulholland wrote: > This is a second asking, so sorry if I am being impatient, but I was > hoping to see a response to this. > > I've got a combo box with the list in a List of string arrays (code > and decode). The bean has the currently selected code. I created a > DropDownChoice with a custom ChoceRenderer as below. The CR interface > is invoked for both the acquisition of the bean value and for each row > of the list, which is why the below code checks the type of object > coming in. > > This works great when displaying, but when the value comes back to the > server, it is loaded back into the bean as > "[Ljava.lang.String;@3c6f3c6f". It looks like the Object.toString(). > What am I doing wrong here? >
The ChoiceRenderer is only used for the display of the options on the page, not for saving. The IModel's setObject() is called then, which in that case is the PropertyModel calling setId() on the bean, with the current Object as argument. So you need to implement an IModel that's a little more intelligent and basically does what the renderer does: pick the first element if it's an array, the whole String otherwise. >From a design point of view it looks like this logic should maybe be in the bean, or at least in a helper class so you can use it from both the Model and the ChoiceRenderer. -- Thomas > DropDownChoice ddc = new DropDownChoice(id, new PropertyModel(bean, > id), listOfStringArrays, new IChoiceRenderer(){ > @Override > public Object getDisplayValue(Object array) { > if(array instanceof String) > return (String) array; > else if(array.getClass().isArray()){ > String[] result = (String[]) array; > return result[1]; > } > else > throw new RuntimeException("Huh?"); > } > > @Override > public String getIdValue(Object array, int arg1) { > if(array instanceof String) > return (String) array; > else if(array.getClass().isArray()) > { > String[] result = (String[]) array; > return result[0]; > } > else > throw new RuntimeException("Huh?"); > } > }); > > -- ------------------------------------------------------------------- Thomas Kappler thomas.kapp...@isb-sib.ch Swiss Institute of Bioinformatics Tel: +41 22 379 51 89 CMU, rue Michel Servet 1 1211 Geneve 4 Switzerland ------------------------------------------------------------------- --------------------------------------------------------------------- To unsubscribe, e-mail: users-unsubscr...@wicket.apache.org For additional commands, e-mail: users-h...@wicket.apache.org | https://www.mail-archive.com/users@wicket.apache.org/msg51445.html | CC-MAIN-2021-39 | refinedweb | 369 | 53.92 |
A quick demonstration of automatic forms
Let’s start by showing how this works, before getting into the details. To do that, we’ll add a project model to our application. A project can have any number of lists associated with it, so that related to-do lists can be grouped together. For now, let’s consider the project model by itself. Add the following lines to the app.py file, just after the Todo application class definition. We’ll worry later about how this fits into the application as a whole.
class IProject(interface.Interface):
    name = schema.TextLine(title=u'Name', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description')

class AddProject(grok.Form):
    grok.context(Todo)
    form_fields = grok.AutoFields(IProject)
We’ll also need to add a couple of imports at the top of the file:
from zope import interface
from zope import schema
Save the file, restart the server, and go to the URL. The result should be similar to the following screenshot:
OK, where did the HTML for the form come from? We know that AddProject is some kind of view, because we used the grok.context class annotation to set its context. Also, as in previous view examples, the lowercased name of the class was used in the URL.
The important new thing is how the form fields were created and used. First, a class named IProject was defined. The interface defines the fields on the form, and the grok.AutoFields method assigns them to the Form view class. That’s how the view knows which HTML form controls to generate when the form is rendered.
We have three fields: name, description, and kind. Later in the code, the grok.AutoFields line takes this IProject class and turns these fields into form fields.
That’s it. There’s no need for a template or a render method. The grok.Form view takes care of generating the HTML required to present the form, taking the information from the value of the form_fields attribute that the grok.AutoFields call generated.
Interfaces
The I in the class name stands for Interface. We imported the zope.interface package at the top of the file, and the Interface class that we have used as a base class for IProject comes from this package.
Example of an interface
An interface is an object that is used to specify and describe the external behavior of objects. In a sense, the interface is like a contract. A class is said to implement an interface when it includes all of the methods and attributes defined in an interface class. Let’s see a simple example:
from zope import interface

class ICaveman(interface.Interface):
    weapon = interface.Attribute('weapon')

    def hunt(animal):
        """Hunt an animal to get food"""

    def eat(animal):
        """Eat a hunted animal"""

    def sleep():
        """Rest before getting up to hunt again"""
Here, we are describing how cavemen behave. A caveman will have a weapon, and he can hunt, eat, and sleep. Notice that the weapon is an attribute—something that belongs to the object, whereas hunt, eat, and sleep are methods.
Once the interface is defined, we can create classes that implement it. These classes are committed to include all of the attributes and methods of their interface class. Thus, if we say:
class Caveman(object):
    interface.implements(ICaveman)
Then we are promising that the Caveman class will implement the methods and attributes described in the ICaveman interface:
    weapon = 'ax'

    def hunt(self, animal):
        find(animal)
        hit(animal, self.weapon)

    def eat(self, animal):
        cut(animal)
        bite()

    def sleep(self):
        snore()
        rest()
Note that although our example class implements all of the interface's methods, the Python interpreter does not enforce this in any way. We could define a class that includes none of the declared methods or attributes, and it would still work.
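Here is a plain-Python illustration of that lack of enforcement. The helper below is a toy version of the kind of check that zope.interface provides separately (via its verify module); it is not real zope.interface code, just a sketch of the idea.

```python
# A contract described only by convention: nothing stops an
# incomplete class from being defined and instantiated.

REQUIRED_METHODS = {'hunt', 'eat', 'sleep'}

class LazyCaveman(object):
    """Claims to be a caveman but only knows how to sleep."""
    def sleep(self):
        return 'zzz'

# Python happily creates the class and its instances:
lazy = LazyCaveman()

def verify(cls, required=REQUIRED_METHODS):
    """Toy verifier: report which promised methods are missing."""
    return sorted(name for name in required
                  if not callable(getattr(cls, name, None)))

print(verify(LazyCaveman))   # ['eat', 'hunt']
```

The interpreter raises no error at class creation or instantiation time; you only find out about the missing methods if you check explicitly (or when a call fails at runtime).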
Interfaces in Grok
In Grok, a model can implement an interface by using the grok.implements method. For example, if we decided to add a project model, it could implement the IProject interface as follows:
class Project(grok.Container):
    grok.implements(IProject)
Due to their descriptive nature, interfaces can be used for documentation. They can also be used for enabling component architectures, but we’ll see about that later on. What is of more interest to us right now is that they can be used for generating forms automatically.
Schemas
The way to define the form fields is to use the zope.schema package. This package includes many kinds of field definitions that can be used to populate a form.
Basically, a schema permits detailed descriptions of class attributes that are using fields. In terms of a form—which is what is of interest to us here—a schema represents the data that will be passed to the server when the user submits the form. Each field in the form corresponds to a field in the schema.
Let’s take a closer look at the schema we defined in the last section:
class IProject(interface.Interface):
    name = schema.TextLine(title=u'Name', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         required=False,
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description',
                              required=False)
The schema that we are defining for IProject has three fields. There are several kinds of fields, which are listed in the following table. In our example, we have defined a name field, which will be a required field, and will have the label Name beside it. We also have a kind field, which is a list of options from which the user must pick one. Note that the default value for required is True, but it’s usually best to specify it explicitly, to avoid confusion. You can see how the list of possible values is passed statically by using the values parameter. Finally, description is a text field, which means it will have multiple lines of text.
Available schema attributes and field types
In addition to title, values, and required, each schema field can have a number of properties, as detailed in the following table:
In addition to the field attributes described in the preceding table, some field types provide additional attributes. In the previous example, we saw that there are various field types, such as Text, TextLine, and Choice. There are several other field types available, as shown in the following table. We can create very sophisticated forms just by defining a schema in this way, and letting Grok generate them.
Form fields and widgets
Schema fields are perfect for defining data structures, but when dealing with forms sometimes they are not enough. In fact, once you generate a form using a schema as a base, Grok turns the schema fields into form fields. A form field is like a schema field but has an extended set of methods and attributes. It also has a default associated widget that is responsible for the appearance of the field inside the form.
Rendering forms requires more than the fields and their types. A form field needs to have a user interface, and that is what a widget provides. A Choice field, for example, could be rendered as a <select> box on the form, but it could also use a collection of checkboxes, or perhaps radio buttons. Sometimes, a field may not need to be displayed on a form, or a writable field may need to be displayed as text instead of allowing users to set the field’s value.
Form components
Grok offers four different components that automatically generate forms. We have already worked with the first one of these, grok.Form. The other three are specializations of this one:
- grok.AddForm is used to add new model instances.
- grok.EditForm is used for editing an already existing instance.
- grok.DisplayForm simply displays the values of the fields.
A Grok form is itself a specialization of a grok.View, which means that it gets the same methods as those that are available to a view. It also means that a model does not actually need a view assignment if it already has a form. In fact, simple applications can get away by using a form as a view for their objects. Of course, there are times when a more complex view template is needed, or even when fields from multiple forms need to be shown in the same view. Grok can handle these cases as well, which we will see later on.
Adding a project container at the root of the site
To get to know Grok’s form components, let’s properly integrate our project model into our to-do list application. We’ll have to restructure the code a little bit, as currently the to-do list container is the root object of the application. We need to have a project container as the root object, and then add a to-do list container to it.
To begin, let’s modify the top of app.py, immediately before the TodoList class definition, to look like this:
import grok
from zope import interface, schema

class Todo(grok.Application, grok.Container):
    def __init__(self):
        super(Todo, self).__init__()
        self.title = 'To-Do list manager'
        self.next_id = 0

    def deleteProject(self, project):
        del self[project]
First, we import zope.interface and zope.schema. Notice how we keep the Todo class as the root application class, but now it can contain projects instead of lists. We also omitted the addProject method, because the grok.AddForm instance is going to take care of that. Other than that, the Todo class is almost the same.
class IProject(interface.Interface):
    title = schema.TextLine(title=u'Title', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description', required=False)
    next_id = schema.Int(title=u'Next id', default=0)
We then have the interface definition for IProject, where we add the title, kind, description, and next_id fields. These were the fields that we previously added during the call to the __init__ method at the time of product initialization.
class Project(grok.Container):
    grok.implements(IProject)

    def addList(self, title, description):
        id = str(self.next_id)
        self.next_id = self.next_id + 1
        self[id] = TodoList(title, description)

    def deleteList(self, list):
        del self[list]
The key thing to notice in the Project class definition is that we use the grok.implements class declaration to state that this class will implement the schema that we have just defined.
class AddProjectForm(grok.AddForm):
    grok.context(Todo)
    grok.name('index')
    form_fields = grok.AutoFields(Project)
    label = "To begin, add a new project"

    @grok.action('Add project')
    def add(self, **data):
        project = Project()
        self.applyData(project, **data)
        id = str(self.context.next_id)
        self.context.next_id = self.context.next_id + 1
        self.context[id] = project
        return self.redirect(self.url(self.context[id]))
The actual form view is defined after that, by using grok.AddForm as a base class. We assign this view to the main Todo container by using the grok.context annotation. The name index is used for now, so that the default page for the application will be the ‘add form’ itself.
Next, we create the form fields by calling the grok.AutoFields method. Notice that this time the argument to this method call is the Project class directly, rather than the interface. This is possible because the Project class was associated with the correct interface when we previously used grok.implements.
After we have assigned the fields, we set the label attribute of the form to the text: To begin, add a new project. This is the title that will be shown on the form.
In addition to this new code, all occurrences of grok.context(Todo) in the rest of the file need to be changed to grok.context(Project), as the to-do lists and their views will now belong to a project and not to the main Todo application. For details, take a look at the source code for Grok 1.0 Web Development, Chapter 5.
What does the /ALTERNATENAME linker switch do?
Raymond
There’s an undocumented switch for the Microsoft Visual Studio linker known as /ALTERNATENAME. Despite being undocumented, people use it a lot. So what is it?
This is effectively a command line switch version of the OLDNAMES.LIB library. When you say /ALTERNATENAME:X=Y, this tells the linker that if it is looking for a symbol named X and can’t find it, then before giving up, it should redirect it to the symbol Y and try again.
The C runtime library uses this mechanism for various sneaky purposes. For example, there’s a part that goes
BOOL (WINAPI * const _pDefaultRawDllMain)(HANDLE, DWORD, LPVOID) = NULL;
#if defined (_M_IX86)
#pragma comment(linker, "/alternatename:__pRawDllMain=__pDefaultRawDllMain")
#elif defined (_M_IA64) || defined (_M_AMD64)
#pragma comment(linker, "/alternatename:_pRawDllMain=_pDefaultRawDllMain")
#else  /* defined (_M_IA64) || defined (_M_AMD64) */
#error Unsupported platform
#endif  /* defined (_M_IA64) || defined (_M_AMD64) */
What this does is say, “If you need a symbol called _pRawDllMain, but you can’t find it, then try again with _pDefaultRawDllMain.” If an object file defines _pRawDllMain, then that definition will be used. Otherwise _pDefaultRawDllMain will be used.
Note that /ALTERNATENAME is a linker feature and consequently operates on decorated names, since the linker doesn’t understand compiler-specific name-decoration algorithms. This means that you typically have to use different versions of the /ALTERNATENAME switch, depending on what architecture you are targeting. In the above example, the C runtime library knows that __cdecl decoration prepends an underscore on x86, but not on any other platform.
This use of /ALTERNATENAME here is a way for the compiler to generate hooks into the DLL startup process based on the code being compiled. If there is no _pRawDllMain defined by an object file, then _pDefaultRawDllMain will be used instead, and that version is just a null pointer, which means, “Don’t do anything special.”
This pattern of using the /ALTERNATENAME switch lets you provide a default value for a function or variable, which others can override if they choose. For example, you might do something like this:
void default_error_log() { /* do nothing */ }

// For expository simplification: assume x86 cdecl
#pragma comment(linker, "/alternatename:_error_log=_default_error_log")
If nobody defines a custom error_log function, then all references to error_log are redirected to default_error_log, and the default error log function does nothing.¹
The C++/WinRT library uses /ALTERNATENAME for a different purpose. The C++/WinRT library wants to support being used both with and without windows.h, so it contains its own declarations for the Windows functions and structures that it needs.
But now there’s a problem: If it is used with windows.h, then there are structure definition errors. Therefore, C++/WinRT needs to give its equivalent declarations of Windows structures some other name, to avoid redefinition errors.
But this in turn means that the function prototypes in the C++/WinRT library need to use the renamed structures, rather than the original Windows structures, in case the C++/WinRT library is used without windows.h. This declaration will in turn create a conflict if the C++/WinRT library is used with windows.h when the real declarations are encountered in windows.h.
The solution is to rename the C++/WinRT version of Windows functions, too. C++/WinRT gives them a WINRT_IMPL_ prefix, so that there is no function declaration collision.
We now have two parallel universes. There’s the windows.h universe, and the C++/WinRT universe, each with their own structures and functions. The two parallel universes are unified by the /ALTERNATENAME directive, which tells the linker, “If you find yourself looking for the function WINRT_IMPL_GetLastError, try again with GetLastError.” Since nobody defines WINRT_IMPL_GetLastError, the “try again” kicks in, and all of the calls to WINRT_IMPL_GetLastError end up redirected to the operating system GetLastError function, which is what we wanted in the first place.
¹ The more traditional way of doing this (that doesn’t rely on undocumented vendor-specific linker features) is to take advantage of the classical model for linking, specifically the part where you can let an OBJ override a LIB: What you do is define _pRawDllMain in a separate OBJ file that defines nothing except that one variable, and put that OBJ in the C runtime LIB. If the module provides its own definition of _pRawDllMain in an OBJ file, then that definition is used. Otherwise, the linker will search through the LIBs, and eventually it will find the one in the C runtime LIB and use that one.
So why does /ALTERNATENAME exist if you could already get this effect via LIBs, and in a way that all linkers support, not just the Microsoft C linker?
C++/WinRT is a header-only library. It has no LIB in which to put these default definitions. It therefore has to use the “command line switch version of a LIB”.
So, to summarise: one team in Microsoft has added a universal hack so that at least two other teams who surely could’ve settled all this over beer and pizza now don’t have to talk to each other. If you scale this scenario up sufficiently, you can imagine the development process that led to Teams.
I’m curious, how would you have addressed the use-case of a header-only library, that needs to use symbols that may or may not be declared outside the library?
Hm, it seems it looks, swims and quacks like GCC’s weak symbols + aliases. See:
How does C++/WinRT, being part of the compilation unit, issue a pragma for the linker if the linker may be invoked separately from the compiler?
Looking at the coff .obj format, there is a “.drectve” section that can contain linker directives (from pragmas or whatever)… or maybe rather “Lnker” directives, snce ‘s seem to be at a premum.
This WinRT library may only be headers, but there should still be object files created by project compilation.
The question I have is why C++/WinRT needs to support being used without windows.h in the first place…
What’s the use case for that?
windows.h is a huge header file, and UWP apps can get a lot done using only the classes provided by the Windows Runtime. It also allows C++/WinRT to be compiled by a wider range of compilers, and it permits C++/WinRT to be more easily ported to other platforms. | https://devblogs.microsoft.com/oldnewthing/20200731-00/?p=104024/ | CC-MAIN-2020-50 | refinedweb | 1,066 | 52.09 |
use DBIx::Class::Candy -perl5 => v10;
I love the new features in Perl 5.10 and 5.12, so I felt that it would be nice to remove the boilerplate of doing use feature ':5.10' and add it to my sugar importer. Feel free not to use this.
Most of the imported subroutines are the same as what you get when you use the normal interface for result definition: they have the same names and take the same arguments. In general, write the code the way you normally would, leaving out the __PACKAGE__-> part. The following are methods that are exported with the same name and arguments:
belongs_to has_many has_one inflate_column many_to_many might_have remove_column remove_columns resultset_attributes resultset_class sequence source_name table
There are some exceptions though, which brings us to:
These are merely renamed versions of the functions you know and love. The idea is to make your result classes a tiny bit prettier by aliasing some methods. If you know your DBIx::Class API you noticed that in the "SYNOPSIS" I used column instead of add_columns and primary_key instead of set_primary_key. The old versions work, this is just nicer. A list of aliases are as follows:
column            => 'add_columns',
primary_key       => 'set_primary_key',
unique_constraint => 'add_unique_constraint',
relationship      => 'add_relationship',
Eventually you will get tired of writing the following in every single one of your results:
use DBIx::Class::Candy -base => 'MyApp::Schema::Result', -perl5 => v12, -autotable => v1;
You can set all of these for your whole schema if you define your own Candy subclass as follows:
package MyApp::Schema::Candy;

use base 'DBIx::Class::Candy';

sub base { $_[1] || 'MyApp::Schema::Result' }
sub perl_version { 12 }
sub autotable { 1 }

If these options are then passed explicitly on import, they override the subclass defaults:

use MyApp::Schema::Candy
   -base => 'MyApp::Schema::Result',
   -perl5 => v18,
   -autotable => v1;
Your base method will get MyApp::Schema::Result, your perl_version will get 18, and your autotable will get 1.
There is currently a single "transformer" for add_columns, so that people used to the Moose api will feel more at home. Note that this may go into a "Candy Component" at some point.
Example usage:
has_column foo => (
   data_type => 'varchar',
   size => 25,
   is_nullable => 1,
);
This allows you to define a column and set it as unique in a single call:
unique_column name => {
   data_type => 'varchar',
   size => 30,
};
Currently there is a single version, v1, which looks at your class name, grabs everything after ::Schema::Result:: (or ::Result::), removes the ::'s, converts it to underscores instead of camel-case, and pluralizes it. Here are some examples if that's not clear:
MyApp::Schema::Result::Cat                -> cats
MyApp::Schema::Result::Software::Building -> software_buildings
MyApp::Schema::Result::LonelyPerson       -> lonely_people
MyApp::DB::Result::FriendlyPerson         -> friendly_people
MyApp::DB::Result::Dog                    -> dogs
Also, if you just want to be different, you can easily set up your own naming scheme. Just add a gen_table method to your candy subclass. The method gets passed the class name and the autotable version, which of course you may ignore. For example, one might just do the following:
sub gen_table {
   my ($self, $class) = @_;
   $class =~ s/::/_/g;
   lc $class;
}
Which would transform MyApp::Schema::Result::Foo into myapp_schema_result_foo.
Or maybe instead of using the standard MyApp::Schema::Result namespace you decided to be different and do MyApp::DB::Table or something silly like that. You could pre-process your class name so that the default gen_table will still work:
sub gen_table {
   my $self = shift;
   my $class = $_[0];
   $class =~ s/::DB::Table::/::Schema::Result::/;
   return $self->next::method(@_);
}
Fields in shapefiles
The following code is usually used to retrieve shapefiles:
import shapefile

rshp = shapefile.Reader(filename)
shapes = rshp.shapes()
records = rshp.records()
records contains metadata about each shape, but fields and values are not stored together as a dictionary. Here is a snippet of code that builds one:
{k[0]: v for k, v in zip(rshp.fields[1:], records[0])}
Here is an example of the results:
{'ORD_PRE_DI': 'SW', 'ORD_STNAME': 'SW 149TH ST', 'ORD_STREET': '149TH', 'ORD_STRE_1': 'ST', 'ORD_SUF_DI': b' ', 'R_ADRS_FRO': 976, ...
(original entry : data_geo_streets.py:docstring of ensae_projects.datainc.data_geo_streets.shapely_records, line 10)
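The mapping can be exercised without a real shapefile by mocking pyshp's layout. The field names below are illustrative, not taken from an actual dataset; in pyshp each field is a (name, type, length, decimals) tuple and fields[0] is a deletion-flag entry, which is why the comprehension skips it with fields[1:]:

```python
# Mocked pyshp-style data: each field is a (name, type, length, decimals)
# tuple, and fields[0] is the DeletionFlag entry skipped by fields[1:].
fields = [
    ("DeletionFlag", "C", 1, 0),
    ("ORD_STNAME", "C", 30, 0),
    ("R_ADRS_FRO", "N", 10, 0),
]
record = ["SW 149TH ST", 976]

# Same comprehension as above: pair each field name with its value.
row = {k[0]: v for k, v in zip(fields[1:], record)}
print(row)  # {'ORD_STNAME': 'SW 149TH ST', 'R_ADRS_FRO': 976}
```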
How to avoid entering the credentials every time?
Using clear credentials in a program file or in a notebook is dangerous. You should not do that. However, it is annoying to type a password every time it is required. The solution is to store it with keyring. You need to execute only once:
from pyquickhelper.loghelper import set_password
set_password("k1", "k2", "value")
And you can retrieve the password anytime not necessarily in the same program:
from pyquickhelper.loghelper import get_password
pwd = get_password("k1", "k2")
(original entry : credentials_helper.py:docstring of ensae_projects.automation.credentials_helper.set_password, line 8) | http://www.xavierdupre.fr/app/ensae_projects/helpsphinx/i_faq.html | CC-MAIN-2021-10 | refinedweb | 186 | 54.29 |
How can I have modules that mutually import each other?
Suppose you have the following modules:
File foo.py:
from bar import bar_var foo_var=1
File bar.py:
from foo import foo_var bar_var=2
If you import the foo module from your main program, you get the following traceback:
Traceback (most recent call last): File "program.py", line 14, in <module> import foo File "foo.py", line 1, in <module> from bar import bar_var File "bar.py", line 1, in <module> from foo import foo_var ImportError: cannot import name foo_var
The problem is that the interpreter will do things in the following order:

- The main program imports foo.
- foo begins executing; its first statement is from bar import bar_var, so bar is imported.
- bar begins executing; its first statement is from foo import foo_var.
- foo has not finished initializing (it is still stuck on its own first line), so foo_var does not exist yet, and the import fails.

There are a few ways to get around this problem:
The preferred way is simply to avoid recursive use of from-import, placing all code inside functions. Initializations of global variables and class variables should use constants and built-in or locally defined functions only. This means everything from an imported module is referenced as <module>.<name>.
Another way is to do things in the following order in each module:
- exports (globals, functions, and classes that don’t need imported base classes)
- import statements
- active code (including globals that are initialized from imported values).
Yet another way is to move the import statements into the functions that are using the imported objects.
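The last approach can be demonstrated end to end. The sketch below recreates the FAQ's foo/bar modules in a temporary directory, but with bar's top-level from foo import foo_var moved inside a function, so the circular reference is only resolved at call time (the get_foo_var helper is an illustrative name, not from the FAQ):

```python
import importlib
import sys
import tempfile
import textwrap

# Recreate the two modules, with bar's import of foo deferred into a function.
tmp = tempfile.mkdtemp()

with open(tmp + "/foo.py", "w") as f:
    f.write(textwrap.dedent("""\
        from bar import bar_var
        foo_var = 1
    """))

with open(tmp + "/bar.py", "w") as f:
    f.write(textwrap.dedent("""\
        bar_var = 2

        def get_foo_var():
            # Deferred import: foo is fully initialized by the time this runs.
            from foo import foo_var
            return foo_var
    """))

sys.path.insert(0, tmp)
foo = importlib.import_module("foo")  # no ImportError this time
bar = importlib.import_module("bar")
print(foo.foo_var, bar.bar_var, bar.get_foo_var())  # 1 2 1
```

Because bar no longer touches foo at module level, the import cycle never bites; the deferred import inside get_foo_var runs only after both modules are fully initialized.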
This post shows how to build a Reactive Play application with Reactive Slick. In order to build a simple CRUD application in Play framework using Reactive Slick (Slick-3.0.0-RC1) we need to follow these steps:
1) Run activator new play-reactive-slick play-scala to create a new Play application.
2) Add following dependencies in build.sbt file
Here we are using PostgreSQL as database. So, if you have not installed it yet, you can download it from here.
3) Next, we need to add bidirectional mapping of our case class with the table, like this:
Important thing to note in above mapping is that we have omitted O.Nullable/O.NotNull. The reason behind it is that both O.Nullable/O.NotNull are deprecated in Slick-3.0.0-RC1. Instead, we have provided the mapping for optional fields in projection itself.
4) Then, we can write a Database Access Object (DAO) to perform CRUD operations:
As we can see in above code that now Slick returns result in Future. Also, now the database instance db have an associated connection pool and thread pool, so, it is important to call db.close when we are done using it.
5) At last, we need to provide Asynchronous Action response in controller. Example:
Here we are using Action.async to handle Future response from database.
In this way we can build a simple CRUD application in Play framework using Reactive Slick. For more information on Reactive Slick click here.
To download a demo Application click here.
18 thoughts on “Play with Reactive Slick: A Simple CRUD application in Play! framework using Slick 3.0”
Reblogged this on Play!ng with Scala and commented:
Recently Typesafe released Slick 3.0.0-RC1
Reblogged this on Rishi Khandelwal.
Reblogged this on himanshu2014.
Reblogged this on anuragknoldus.
Thank you 😀
If you are using JDBC, I guess that you are not using the reactive behavior provided by this reactive slick version.
Thank you for the article, looks really interesting. I took a look at the source to recreate the project and was wondering where your evolutions are coming from. They appear to be auto generated but that is not happening for me, even though I specified the slick.default property in the config file. Is there a way to trigger the generation manually or how is it working for you?
/** Free all resources allocated by Slick for this Database object. In particular, the
* [[slick.util.AsyncExecutor]] with the thread pool for asynchronous execution is shut
* down. If this object represents a connection pool managed directly by Slick, it is also
* closed. */
def close: Unit = try executor.close() finally source.close()
I vote down this post because some newbie I know wrote “finally { db.close()} ” in model class after reading this.
I tried to change build.sbt, application.conf and then created correspondent table (Employee) on MySQL database. But I had the error message: “[MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘”date_of_birth” as x6, x9.”address” as x5, x9.”designation” as x8, x9.”id” as x3’ at line 1]”
I tried to debug but couldn’t find the cause.
Could you have a help?
Thanks so much.
Sorry! In my previous post I forgot to write that I want to modify the current source code to run on a MySQL database instead of PostgreSQL
In your DAO object, when you call db.run(), db.close() you are calling the method db. This might give you an other database object which flaws the run/close flow, doesn’t it?
I wonder why you don’t get an exception like: “[PSQLException: Bad value for type long : 2015-04-06 22:36:17]”
Do you have any advice concerning mapped vs. non-mapped tables? You are using a mapped table, but tuple seems to be the “default” way since it appears in almost all the examples in the documentation and mapped tables only in a dedicated section. Where is the advantage of using tuples? The access is index based and not really type safe. Maybe I’m missing something.
Documentation for Play asks you NOT to explicitly perform `db.close`:
The documentation for Play explicitly says NOT to perform `db.close` in your DAO. | https://blog.knoldus.com/play-with-reactive-slick-a-simple-crud-application-in-play-framework-using-slick-3-0/ | CC-MAIN-2021-04 | refinedweb | 724 | 68.16 |
nis+, NIS+, nis - a new version of the network information name service authorization policies.+ directories. The NIS+ directory that forms the root of the NIS+ namespace is called the root directory. There are two spe- c_dir directory of a domain, contain a list of all the NIS+ principals within a certain NIS+ group. An NIS+ principal is a user or a machine making NIS+ requests. NIS+ Link Object identifies.

Concatenation Path
Normally, all the entries for a certain type of information are stored within the table itself. However, there are times when it is desirable may then be resolved.

Principal Names
+ context. They are stored as records in an NIS+ table named cred, which always appears in the org_dir subdirectory of the directory named in the principal name. This mapping can be expressed as a replacement function: principal.domain -> [cname=principal.domain], cred.org_dir.domain mappings: LOCAL sensitive to the context of the machine on which the process is executing. DES. DHnnn-m will return a record containing the NIS+ principal.

Group Names
Like NIS+ principal names, NIS+ group names take the form: group_name.domain All objects in the NIS+ namespace and all entries in NIS+ tables may+ simple.

Authentication
The NIS+ name service uses Secure RPC for the integrity of the NIS+ service. This requires that users of the service and their machines must have a Secure RPC key pair associated will not will.

destroy
This right gives a client permission to destroy or remove an existing object or entry. When a client attempts to destroy an entry or object by removing it, the service first checks to see if the table or directory may be granted to any one of four different_dir and org_dir subdirectory. Notice that there is currently no command line interface to set or change the OAR of the directory object. maintained criterion.
<rpcsvc/nis_object.x>   protocol description of an NIS+ object
<rpcsvc/nis.x>          defines the NIS+ protocol using the RPC language as described in the ONC+ Developer's Guide
<rpcsvc/nis.h>
System Administration Guide: Naming and Directory Services (DNS, Describes how to make the transition from NIS to NIS+. ONC+ Developer's Guide Describes the application programming interfaces for networks including NIS+.
System Administration Guide: Naming and Directory Services (DNS, Describes how to plan for and configure an NIS+ namespace. System Administration Guide: IP Services Describes IPv6 extensions to Solaris name services.
NIS+ might not be supported in future releases of the Solaris™ Operating Environment. Tools to aid the migration from NIS+ to LDAP are available in the Solaris 9 operating environment. For more information, visit.
A stream that generates a merkle tree based on the incoming data
Project description
merkle-tree-stream
A stream that generates a merkle tree based on the incoming data
A hash tree or merkle tree is a tree in which every leaf node is labelled with the hash of a data block and every non-leaf node is labelled with the cryptographic hash of the labels of its child nodes. Merkle trees in Dat are specialized flat trees that contain the content of the archives.
Install
$ pip install merkle-tree-stream
Example
import hashlib

from merkle_tree_stream import MerkleTreeGenerator


def _leaf(node, roots=None):
    return hashlib.sha256(node.data).digest()


def _parent(first, second):
    sha256 = hashlib.sha256()
    sha256.update(first.data)
    sha256.update(second.data)
    return sha256.digest()


merkle = MerkleTreeGenerator(leaf=_leaf, parent=_parent)
merkle.write(b"a")
merkle.write(b"b")

assert len(merkle) == 2 + 1
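To see where the 2 + 1 in the assertion comes from, the same leaf/parent hashing can be done by hand with nothing but hashlib: writing two leaves yields their two leaf hashes plus one parent hash, i.e. three nodes. The Node tuple below is a simplified stand-in for the package's node objects, which presumably carry more fields:

```python
import hashlib
from collections import namedtuple

# Simplified stand-in for the node objects the leaf/parent callbacks receive.
Node = namedtuple("Node", "data")

def leaf(node):
    return hashlib.sha256(node.data).digest()

def parent(first, second):
    h = hashlib.sha256()
    h.update(first.data)
    h.update(second.data)
    return h.digest()

# Two leaves produce one parent: 2 + 1 = 3 nodes in total.
leaf_a = leaf(Node(b"a"))
leaf_b = leaf(Node(b"b"))
root = parent(Node(leaf_a), Node(leaf_b))
print(root.hex())
```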
gtkmm: Gtk::FlowBoxChild Class Reference
See the description of FlowBox. More...
#include <gtkmm/flowboxchild.h>
Detailed Description
See the description of FlowBox.
Constructor & Destructor Documentation
Creates a new FlowBoxChild, to be used as a child of a FlowBox.
Member Function Documentation
Marks child as changed, causing any state that depends on this to be updated. Alternatively, you can call invalidate_sort() on any model change, but that is more expensive.
Gets the child widget of self.
- Returns
- The child widget of self.
Gets the child widget of self.
- Returns
- The child widget of self.
Gets the current index of the child in its Gtk::FlowBox container.
- Returns
- The index of the child, or -1 if the child is not in a flow box.
Get the GType for this class, for use with the underlying GObject type system.
Provides access to the underlying C GObject.
Provides access to the underlying C GObject.
Returns whether the child is currently selected in its Gtk::FlowBox container.
- Returns
true if child is selected.
This is a default handler for the signal signal_activate().
The child widget.
- Returns
- A PropertyProxy that allows you to get or set the value of the property, or receive notification when the value of the property changes.
The child widget.
- Returns
- A PropertyProxy_ReadOnly that allows you to get the value of the property, or receive notification when the value of the property changes.
Sets the child widget of self.
- Parameters
-
- Slot Prototype:
void on_my_activate()
Flags: Run First, Action
The signal_activate() signal is emitted when the user activates a child widget in a Gtk::FlowBox, either by clicking or double-clicking, or by using the Space or Enter key.
While this signal is used as a [keybinding signal][GtkSignalAction], it can be used by applications for their own purposes.
Friends And Related Function Documentation
A Glib::wrap() method for this object.
- Parameters
-
- Returns
- A C++ instance that wraps this C instance. | https://developer.gnome.org/gtkmm/stable/classGtk_1_1FlowBoxChild.html | CC-MAIN-2021-31 | refinedweb | 301 | 59.6 |