You may already be familiar with SELinux, or perhaps not — I won't go into a discussion of what exactly it is — but know that the NSA in the USA created it years back and I am an avid user. I recently got my Raspberry Pi running Raspbian, and spent an abundant amount of time trying to figure out why every reboot over SSH/HDMI output would show a large X [cursor], like the old-school days when the XFree86 GUI didn't work; this happened when attempting to start SELinux, with no success. In short, I have a tutorial I made to help those who may want to try it out and who will notice it does NOT work out of the box, even if you use apt-get for all 3 required SELinux packages.
GET SELINUX
Open up a terminal and launch:
Code: Select all
sudo apt-get install selinux-basics selinux-policy-default
ACTIVATE SELINUX
Now, you will normally have an issue here but go ahead and run the command:
Code: Select all
selinux-activate
The output may vary but it should tell you to reboot, so go forth and type
Code: Select all
sudo reboot
now and it should start to reboot.
CHECK SELINUX
Now, you may have to remotely SSH to your machine as you probably don't see anything but a grey background and big black X for a mouse cursor. If you don't have this problem, SELinux probably was a successful install (lucky you but doubtful). Either way, open a terminal and run the command:
Code: Select all
sudo check-selinux-installation
Now, after that command, you probably see something scary like:
Code: Select all
/usr/sbin/check-selinux-installation:19: DeprecationWarning: os.popen3 is deprecated. Use the subprocess module.
@staticmethod
/usr/sbin/check-selinux-installation:23: DeprecationWarning: os.popen2 is deprecated. Use the subprocess module.
def fix():
/etc/pam.d/login is not SELinux enabled
FSCKFIX is not enabled – not serious, but could prevent system from booting…
This obviously means SELinux was NOT successful. So, let's fix that!
PERMISSIVE OR ENFORCING?
Now, I would tell you to try and enable permissive mode but it is extremely likely it won't work. So I want you to enable Enforcing Mode by typing:
Code: Select all
sudo selinux-config-enforcing
CONFIGURE PAM
Now we need to manually configure PAM. You could use vi or pico (sorry, I like pico!), so use your favorite text editor for the commands below:
1). Edit PAM Login (/etc/pam.d/login)
Code: Select all
sudo pico /etc/pam.d/login
Now add the following in the file:
Code: Select all
session required pam_selinux.so multiple
Save the file (Pico users press Ctrl+X, Y to Overwrite, Enter to Save/Exit).
Now let's re-activate by typing:
Code: Select all
sudo selinux-activate
It will recommend you reboot, don't, not yet as we have a couple more tasks...
2). EDIT INITSCRIPTS (/etc/default/rcS)
Go forth and type:
Code: Select all
sudo pico /etc/default/rcS
In this file, I want you to add 2 lines of code (any order):
Code: Select all
FSCKFIX=yes
and
Code: Select all
EDITMOTD=no
Now save the file (see #1 for PICO Saving).
CHECK DEVPTS
Please run the code:
Code: Select all
sudo mount | grep devpts
If it comes back with:
Code: Select all
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620)
or similar, you're good to go.
Now, remove the static nodes by typing:
Code: Select all
sudo rm -f /dev/[tp]ty[abcdepqrstuvwxyz][0-9a-f]
3). Go, Go Activate (selinux-activate)
Now that we added and saved everything, let's see if we did it successfully by typing:
Code: Select all
sudo selinux-activate
If you now see:
Code: Select all
Activating SE Linux
SE Linux is activated. You may need to reboot now.
We've successfully allowed SELinux to install and it has been activated where prior it could not be. You could always setup a Cron job and other things as well but you should be OK now.
Ter. Python has "anonymous blocks" all over the place, since every control structure controls one or more of them. It simply requires that they be forgotten at the next DEDENT. Surely you don't advocate that each of them should get a name!

I think this is a difference of cognition. Specifically, people who don't want to name blocks as functions may not abstract processes to signatures as easily, and reify whole processes (including all free identifiers!) as objects more easily, as those who don't think naming is a problem.

> Since I routinely use standard names 'f' and 'g' (from math) to name
> functions whose name I do not care about, I am baffled (and annoyed) by

If the cognition hypothesis is correct, of course you're baffled. You "just don't" think that way, while he really does. The annoyance can probably be relieved by s/a developer/some developers/ here:

> (repeated) claims such as "Having to name a one-off function adds
> additional cognitive overload to a developer." (Tav).

I suspect that "overload" is a pun, here. Your rhetorical question

> Golly gee, if one cannot decide on standard one-char name, how can
> he manage the rest of Python?

has an unexpected answer: in the rest of Python name overloading is carefully controlled and scoped into namespaces. If my cognition hypothesis is correct, then a standard one-character name really does bother/confuse his cognition, where maintaining the whole structure of the block from one use to the next somehow does not. (This baffles me, too!)

The question then becomes "can Python become more usable to developers unlike you and me without losing some Pythonicity?" Guido seems to think not (I read him as pessimistic on both grounds: the proposed syntax is neither as useful nor as Pythonic as Tav thinks it is).
It is the job of an EJB container to automatically load and store Entity Data between each remote interface call. However, it is the job of the EJB developer to make sure that the data loaded in an Entity Bean is consistent with the backend data store at all times. If this is not the case, an Entity Bean can create data integrity problems. This situation may arise when throughout the life of an Entity Bean data, owned by the Bean, is being changed and these changes are not being reflected in the Bean's state.
For example, an Entity Bean representing a parent record and a set of child records may have a remote interface method that adds a new child record. If the method's implementation immediately persists this record to the database, the Entity Bean automatically becomes out of synch with its data store until a fresh load is performed. However, if similar data operations continue without subsequent loads, the Bean can no longer be referenced for the most up-to-date information.
This pattern offers a solution for these kinds of problems.
This pattern primarily applies to the situation described in the example given above. The assumption is that an Entity Bean represents a parent record that has at least one set of related child records. Think about Order and Line Items relationship where Order is the parent record and it may have one or more Line Items associated with it. The relationships may be a lot more complicated and nest many levels, however this pattern would still apply once each individual relationship is considered. Thus, the discussion will focus on a single instance of parent/children relationship but it can be generalized for any situation where this type of relationship is involved.
This pattern assumes that EJB developers correctly update parent record data during data setting operations targeted towards parent record data. If this is not the case, the Entity Bean code must be modified to update or set all the data after each data modification operation. It is also a good practice to follow the Aggregate Details Pattern to store the details of the Entity Bean. The examples in this discussion rely on the use of this pattern.
In order to correctly capture the changes made to the child records, the EJB developers need to follow these steps:
1. Update the Bean's child data
2. Store the operation that was performed on the data
3. Correctly apply the operation to the child data during ejbStore
Steps 2 and 3 are by far the trickiest and therefore require a closer inspection.
Let’s assume that we have a parent class named Shipment that can contain one or more schedules.
public class ShipmentAccessor extends AccessorBase {
// Class Data Members
...
protected ChangeArrayList schedule = new ChangeArrayList();
...
}
ChangeArrayList is a special class derived from ArrayList whose sole responsibility is to keep track of data changes made to a piece of data stored in the list. It is a fully functioning class that has been tested and implemented in a large-scale project. If you would like to obtain a complete listing, drop me an e-mail at leo at stratos dot net.
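Since the full ChangeArrayList listing is not reproduced in the article, here is a minimal, hypothetical sketch of the idea it describes: a list that remembers which elements were inserted, updated, or removed since the last load. Everything beyond the method names the article mentions (add, update, remove, getInserted, getUpdated, getDeleted) is an assumption, and the real class is surely more complete:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical, simplified sketch of the ChangeArrayList described above.
public class ChangeArrayList<E> extends ArrayList<E> {
    private final List<E> inserted = new ArrayList<>();
    private final List<E> updated = new ArrayList<>();
    private final List<E> deleted = new ArrayList<>();

    @Override
    public boolean add(E e) {
        inserted.add(e);
        return super.add(e);
    }

    public void update(E e) {
        int i = indexOf(e);
        if (i >= 0) {
            set(i, e);
            if (!inserted.contains(e)) {
                updated.add(e); // rows inserted this cycle stay counted as "inserted"
            }
        }
    }

    @Override
    public boolean remove(Object o) {
        boolean removed = super.remove(o);
        if (removed && !inserted.remove(o)) {
            deleted.add((E) o); // only deletes of previously stored rows need persisting
        }
        return removed;
    }

    public Iterator<E> getInserted() { return inserted.iterator(); }
    public Iterator<E> getUpdated()  { return updated.iterator(); }
    public Iterator<E> getDeleted()  { return deleted.iterator(); }

    // Forget all recorded changes, e.g. after ejbStore/ejbLoad.
    public void clearChanges() {
        inserted.clear();
        updated.clear();
        deleted.clear();
    }

    public static void main(String[] args) {
        ChangeArrayList<String> schedules = new ChangeArrayList<>();
        schedules.add("new-schedule");
        schedules.remove("new-schedule"); // net effect: nothing to persist
        System.out.println(schedules.getInserted().hasNext()); // prints false
    }
}
```

With bookkeeping like this in place, ejbStore only needs the three iterators, as shown later in the article.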
Following the Aggregate Details Pattern, the Shipment Entity Bean will be derived from the ShipmentAccessor class, thus allowing the complete graph to be returned by executing a single getAllDetails call from the Entity Bean and also enabling set/get code reuse inside the Bean.
If we need to add a new schedule (represented by the ShipmentSchedule class) to the shipment currently stored in the ShipmentAccessor, all we need to do is to add it to the schedule list and do not immediately commit the changes to the data store.
public void addShipmentSchedule(ShipmentSchedule shipmentSchedule) {
schedule.add(shipmentSchedule);
}
The same is true for updating and removing schedules:
public void updateShipmentSchedule(ShipmentSchedule shipmentSchedule) {
schedule.update(shipmentSchedule);
}
public void deleteShipmentSchedule(ShipmentSchedule shipmentSchedule) {
schedule.remove(shipmentSchedule);
}
By capturing the data and all of the operations performed on it, we keep the Entity Bean valid at all times. This, however, places more responsibility on the Bean's ejbStore method that now needs to apply all the data changes that were made after the last store. Therefore, in addition to the current code, extra functionality needs to be added to handle storing all the data modifications. Since all the changes are known, it is relatively straightforward to apply them. The example below uses ChangeArrayList to show how this can be accomplished.
public void ejbStore()
{
...
// Store all the schedule records
deleteShipmentSchedules( schedule.getDeleted() );
insertShipmentSchedules( schedule.getInserted() );
updateShipmentSchedules( schedule.getUpdated() );
}
Each of the methods that takes care of an individual data store operation (insert, update, delete) receives an iterator that iterates over objects that only have the specified operation applied against them. This way, each method needs to handle only a specific operation thus making the code very simple to write, understand and reuse. A consequent load should completely refresh the set of data (schedules in this case) and remove all of the operations previously performed against the data because this pattern only works for load-store lifecycle of the Entity Bean.
Notice that a general usage pattern of the details object under the Aggregate Details Pattern as well as performance considerations (since each remote interface call invokes load/store operations) dictate the following use of the details objects:
1. Find/create the bean
2. Extract the details object (getAllDetails)
3. Perform various data operations on the object
4. Store the details in the Entity Bean (setAllDetails)
Under this scenario, this pattern makes even more sense because of its ability to capture data changes made between each store operation. Let's imagine a situation where an EJB developer follows the general steps outlined above. However, in Step 3, s/he performs several data operations that involve updating existing data and inserting a new record (all this, of course, is done for a set of child records rather than the parent record). If special precautions are not taken, ejbStore will not be able to determine which child records were updated and which were inserted. This pattern allows EJB developers to seamlessly manage this information and to easily manipulate data no matter what operations have been performed against it.
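Stripped of the EJB plumbing, the four-step flow above looks roughly like the sketch below. All names are hypothetical stand-ins except getAllDetails/setAllDetails; in a real deployment the bean is remote, and the single setAllDetails call is what triggers one load/store cycle instead of one per change:

```java
import java.util.ArrayList;
import java.util.List;

public class DetailsFlowDemo {
    // Hypothetical details object (see the Aggregate Details Pattern).
    static class ShipmentDetails {
        final List<String> schedules = new ArrayList<>();
    }

    // Hypothetical local stand-in for the Entity Bean's remote interface.
    static class ShipmentBean {
        private ShipmentDetails state = new ShipmentDetails();
        ShipmentDetails getAllDetails() { return state; }      // one remote call
        void setAllDetails(ShipmentDetails d) { state = d; }   // one remote call -> ejbStore
    }

    public static void main(String[] args) {
        ShipmentBean bean = new ShipmentBean();        // 1. find/create the bean
        ShipmentDetails d = bean.getAllDetails();      // 2. extract the details object
        d.schedules.add("schedule-1");                 // 3. perform data operations locally
        d.schedules.add("schedule-2");
        bean.setAllDetails(d);                         // 4. store everything in one call
        System.out.println(bean.getAllDetails().schedules.size()); // prints 2
    }
}
```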
Discussions
J2EE patterns: Keeping Track of Entity Data Changes Between Loads and Stores
Keeping Track of Entity Data Changes Between Loads and Stores (6 messages)
- Posted by: Leo Shuster
- Posted on: December 12 2000 17:21 EST
Keeping Track of Entity Data Changes Between Loads and Stores
Leo, The pattern U described, looks really good.
- Posted by: Uday Natra
- Posted on: December 15 2000 18:59 EST
- in response to Leo Shuster
I have a concern about the pattern. You talked about implementing the ejbStore method properly to store all the changes. But whenever a business method (addShipmentSchedule) is invoked on the Entity Bean, ejbLoad and ejbStore are automatically called if a new transaction is started. If the transaction attribute on the Entity Bean methods is set to TRANS_REQUIRED, then all the business methods on the Entity Bean should be called from a single Session Bean method.
But calling a single Session Bean Method to Update all the info will not suit a situation where I have Multiple UI screens that encapsulates a single Entity Bean. Here I need to write all the changes to the Entity Bean after the user has gone through all the screens Deleting, Inserting and Updating various elements of the entity Bean.
My basic question is whether ejbLoad and ejbStore are called on every method if the methods are called individually.
Keeping Track of Entity Data Changes Between Loads and Stores
I will try to answer some of the questions raised in the replies posted above.
- Posted by: Leo Shuster
- Posted on: December 27 2000 16:41 EST
- in response to Uday Natra
There are a couple of ways that this pattern can be implemented in real life:
1. As stated in the pattern, all the changes are made to the accessor object that represents the complete data vector of the Entity Bean. The accessor is obtained at the beginning of the transaction/lifecycle by calling the getAllDetails() method and is modified throughout its existence. At the end, the accessor is persisted to the backend data store by calling the setAllDetails() method of the Bean, thus forcing ejbStore to be called. Between getAllDetails() and setAllDetails() calls, the Bean itself is not modified -- just the accessor. Keep in mind that it is very costly to commit small changes to the Entity Bean one by one, which triggers store/load operations every time. It is much more efficient to modify the accessor's state and commit all of the changes at once.
2. As suggested by Uday, a Session Bean can be created to wrap an Entity Bean. In this situation, the approach described above can be utilized but on a slightly smaller scale since the changes will be accumulated for one Session Bean method call only. Here, the pattern still proves useful since it eliminates the need to develop extra functionality to commit Bean's state changes (data updates, deletes, inserts) that would otherwise have to be implemented.
3. Some EJB containers may provide special tags in their ejb-jar.xml files that enable developers to limit the amount of loads and stores. In WebLogic, for example, two tags -- <is-modified-method-name> and <is-db-shared> -- can be added inside the <weblogic-enterprise-bean> tag in weblogic-ejb-jar.xml to describe the situations when stores and loads should and should not happen. It is often a good idea to keep track of Bean's changes and signal to the container when the bean is changed and should be saved (this is actually what Toplink does for you, but for those of us without it, we have to reinvent the wheel). In this situation, since stores and loads may not happen on every remote interface call, this pattern enforces data integrity and consistency.
Toplink is a great tool. Unfortunately, not all of us have the benefit of working with it. Thus, this pattern offers a solution for the situations when such advanced middleware tools are not available for the developers. Also, the cost of implementing this pattern is minimal if the framework for data tracking is already implemented. This is exactly what ChangeArrayList does. As you can see from the pattern description, keeping track of data using it becomes trivial.
This pattern describes a solution for keeping the data consistent as a part of normal EJB operation. It relies on the assumption that a number of data operations is performed on the Bean's accessor between getAllDetails() and setAllDetails() calls. It is inefficient to use this pattern for keeping track of a single change that immediately gets persisted to the data store.
This pattern has no bearing on the data modified by external processes since the Bean will not be aware of them at ejbStore time. In fact, this is a very delicate problem that may require a separate discussion.
Keeping Track of Entity Data Changes Between Loads and Stores
Has anyone compared the performance of TopLink to these methods? TopLink ultimately uses CMP? How are the issues about fine-grained entity beans avoided? Or are they just cached efficiently?
- Posted by: mike finegan
- Posted on: April 15 2001 22:15 EDT
- in response to Leo Shuster
I have heard that using TopLink adds up to 20% degradation in performance over BMP. Does that sound right?
Keeping Track of Entity Data Changes Between Loads and Stores
It will be much cheaper to buy a product like TopLink that can detect deletions, insertions and modifications. Your alternative of coding this stuff in pretty much every 1-to-many bean is way more expensive than a license cost.
- Posted by: Tarek Hammoud
- Posted on: December 17 2000 21:57 EST
- in response to Leo Shuster
Not affiliated with TopLink. Just think that it makes coding persistent stuff trivial.
Keeping Track of Entity Data Changes Between Loads and Stores
Hi Leo,
- Posted by: Benedict Chng
- Posted on: December 19 2000 11:05 EST
- in response to Tarek Hammoud
I thought your pattern is describing how you would overcome the problem of data inconsistencies caused by an external process modifying the information on the database tables directly without going through the EJB container. Eg a nightly batch process.
Do you encounter such problems? How do you resolve it?
Ben
Keeping Track of Entity Data Changes Between Loads and Stores
Hi
- Posted by: Somu Balasundar
- Posted on: January 21 2002 02:05 EST
- in response to Benedict Chng
Keeping track of Entity Data changes is the responsibility of the transaction manager. Remember that the transaction manager controls one thread at a time. If some batch process loads/updates the data, the Tx tells the container to load the data again. This is achieved by the Context object.
Correct me if i am wrong.
The PiFace common functions module.
pifacecommon
Common functions for interacting with PiFace products.
Documentation
You can also find the documentation installed at:
/usr/share/doc/python3-pifacecommon/
Install
Make.

Change Log
v4.1.2
- Fixed bug with new Device Tree (Pi2) by changing GPIO_INTERRUPT_DEVICE from /sys/devices/virtual/gpio/ to /sys/class/gpio/ and changing udev rule.
v4.1.1
- Support varying listeners.
v4.1.0
- Added deregister to interrupts.
v4.0.1
- Fixed SPI file descriptor bug when closing.
- Fixed issue #14.
v4.0.0
- Ignored “Interrupted system call” error in watch_port_events.
- Rewrite main functions into chip specific (MCP23S17) class.
- GPIOInterruptDevice class replacing core GPIO enable/disable functions.
- SPIDevice class replacing spisend function. Can now add in spi_callback function which is called before each SPI write.
- Updated installation instructions.
v3.1.1
Added IODIR_FALLING_EDGE and IODIR_RISING_EDGE to replace IODIR_ON and IODIR_OFF respectively. IODIR_ON and IODIR_OFF can still be used in the same way as before. Falling/Rising Edge are for physical level 1/0, On/Off are for logical (programmer) level 1/0.
- Physical Level (pifacecommon.read_bit):
IODIR_FALLING_EDGE: 1 -> 0
IODIR_RISING_EDGE: 0 -> 1
- Logical Level (pifacedigitalio.PiFaceDigital().input_pins[0].value):
IODIR_ON: 0 -> 1
IODIR_OFF: 1 -> 0
- Remember that PiFace Digital Inputs are active low:
>>> pifacecommon.read_bit(0, INPUT_PORT)
1  # physical
>>> pifacedigitalio.PiFaceDigital().input_pins[0].value
0  # logical
Fixed Debian package bug where setup script would not be executed.
v3.1.0
- Added debouncing with adjustable settle time.
v3.0.0
- Added timeout class (fixing Issue #2) in interrupts.
- Added support for interrupts on multiple boards.
- Interrupts must be enabled/disabled manually. Automatic handling of this broke interrupts from multiple boards.
v2.0.2
- Moved version number to pifacecommon/version.py so that it can be read from setup.py and bin/uninstall.py.
- Updated SPI help link to point to the new docs.
- Moved installation scripts into single file for Debian packaging.
v2.0.1
- Added version number in source.
- Added uninstall script.
v2.0.0
- Improved interrupts (different API, check the docs).
- Reduced scope of global variables from package to individual modules. (Hiding namespaces from the end user is an attempt to simplify the interface for children. However this package is not intended for that audience and so messing with the namespaces only confuses things.)
v1.2.1
- Supports Python 2.
- Started using semantic versioning.
v1.2
- Started using a change log!
- Removed errors submodule, custom exceptions now go in their respective modules. This might change back in a future release.
- Fixed DigitalInput value bugs
Fixed SPI transfer bug. Function spisend now takes bytes as an argument instead of a list. This makes more sense, since it returns bytes.
- Removed install.sh, everything is now handled by setup.py.
- Updated docs.
The design of ductwork is as important to a home heating and air conditioning system as the unit that powers it. Ducts that are improperly sized, installed with crimps or tight bends, have leaks or are not balanced between supply (conditioned air) and return ("used" air) will cause the system to operate inefficiently. Over time, such small problems may damage the unit. Ductwork is installed at the same time as the heating/air conditioning unit. If done properly, ductwork should last a lifetime.
Things You'll Need
Flexible duct
Metal hanger straps
Hammer
Hanger nails
Tin snips
Heat-resistant tape
Step 1
Determine the size of ductwork needed to match the air flow from the unit, measured in cubic feet per minute (CFM). The unit should have that marked on it or it will be in the owner's manual with the unit. The size of the pipes coming in from the unit also should indicate the size duct needed to handle output from the unit (supply) and necessary return of air from the house. Main ducts will be larger; branches to individual vents will be smaller.
Step 2
Map routes for ducts through the house using a central duct with branches off to individual areas. Run the main duct through the center of the house, if possible, with forks to vents on outside walls. House design will affect this routing; in some houses, it may be necessary to split supply systems at the unit and run parallel ducts along walls, through an attic. Locate return ducts in areas where air will flow naturally, such as hallways; put return openings on inside walls.
Step 3
Install ducts in a basement between or around floor joists; in a ceiling, place ducts between ceiling joists; in an attic, lay ducts over joists. Fasten ducts with metal strap hangers nailed to joists, about every 6 feet in basements and ceilings and less often in attics. Don't stretch ducts; keep them secure but flexible. Avoid any obstructions that would require sharp bends or would crimp the ducts.
Step 4
Avoid any areas subject to heat, such as water heaters and hot water lines, and don't run ducts over electrical boxes. Add extra insulation to ducts in areas such as attics, where cold wind might blow in. Use metal connectors to join duct sections; slide connectors inside the duct on each end. Secure all connections and seams air-tight with heat-resistant tape recommended by the duct manufacturer.
Step 5
Connect supply and return ducts to the unit once all of the ductwork is laid throughout the house. Test the system by forcing the blower to turn on and look for leaks, indicated by blowing insulation or whistling air around a seam. Force the blower on with a switch if the unit has a separate blower switch, or by adjusting the thermostat until it comes on.
Warning
Don't use duct tape to seal ductwork seams; despite its name, the adhesive will deteriorate over time and cause leaks. | https://www.ehow.com/how_8116922_install-ductwork-central-heating-air.html | CC-MAIN-2020-29 | refinedweb | 504 | 68.3 |
Welcome to the Core Java Technologies Tech Tips for December 14, 2004. Here you'll get tips on using core Java technologies and APIs, such as those in Java 2 Platform, Standard Edition (J2SE).
This issue covers:
Resource Bundle Loading
Hiding ListResourceBundles from javadoc
A resource bundle is a way of embedding text strings in a language-specific (or more precisely, locale-specific) manner. An earlier Tech Tip discussed the use of resource bundles. What follows is a short refresher. If you have a program that needs a string such as "Hello, World", one approach is to code it in the program. However with resource bundles, you don't hardcode the string. Instead, you put the string in a lookup table, and then your program looks up the string at runtime. If the program runs with a different locale, the lookup finds a different string, if translated, or finds the original string if not translated. This doesn't affect the code in your program -- it runs with the same code, irrespective of locale. The only thing you need to do is create and translate the lookup table of values.
As stated previously, resource bundles work with locales. You can say, "I want the 'greet' string for English," where English is the locale. Or, you can say you want 'color' for U.S. English, and 'colour' for U.K. English. Locales also support regionality. In other words, you can specify a phrase for one dialect of U.S. English (perhaps a phrase used in Southern California), and a different phrase for another U.S. region, say New York City.
You can define a resource bundle in a .class file that extends ListResourceBundle, or you can use a PropertyResourceBundle that is backed by a .properties file. When combining resource bundles and locales, there are two searches involved. The first finds the nearest resource bundle requested, the second finds the string for the requested key. Why the differentiation? When searching for resource bundles, the system stops as soon as it finds and loads the requested resource bundle. If the system doesn't find the key in the requested bundle, it then hunts in other resource bundles until it finds the key. Ultimately, if it doesn't find the key, the system throws a MissingResourceException.
To demonstrate, suppose you want to find a string for a New York locale in a bundle named Greeting. Suppose too that your Locale was created as follows:
Locale newYork = new Locale("en", "US", "NewYork")
and you asked for a resource bundle like this:
ResourceBundle bundle =
ResourceBundle.getBundle("Greeting", newYork);
The system first looks for the .class file for the bundle. With a region/variant level of locale, such as New York, the file would be Greeting_en_US_NewYork.class. If the system can't find the .class file in the classpath, it then searches for the file Greeting_en_US_NewYork.properties. And if it can't find that file, the system subsequently searches for Greeting_en_US.class, followed by Greeting_en_US.properties, Greeting_en.class, Greeting_en.properties, Greeting.class, and Greeting.properties. The searching stops when the system finds the resource bundle. Thankfully, there is caching involved, so the system doesn't always search everywhere, but that's still potentially a lot of different places that have to be searched.
The system then performs a second round of lookups -- this time for the requested key. If the key isn't in the bundle it found, the system looks for more resource bundles, beyond the language, country, and variant level of the current bundle. This could load more bundles, whether they are .class files or .properties files.
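This key-level fallback can be seen in isolation with a pair of hand-built bundles: a child bundle that lacks a key delegates to its parent. The sketch below wires the parent chain by hand (the bundle contents are made up); in real code, getBundle assembles the chain for you:

```java
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

public class FallbackDemo {
    // Stand-in for a base bundle, e.g. Greeting.class.
    static class Base extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] {{"OkKey", "OK"}, {"CancelKey", "Cancel"}};
        }
    }

    // Stand-in for a more specific bundle, e.g. Greeting_fr.class,
    // which only overrides one key.
    static class French extends ListResourceBundle {
        French(ResourceBundle parent) { setParent(parent); } // setParent is protected, callable here
        protected Object[][] getContents() {
            return new Object[][] {{"OkKey", "D'accord"}};
        }
    }

    static String lookup(String key) {
        ResourceBundle bundle = new French(new Base());
        return bundle.getString(key);
    }

    public static void main(String[] args) {
        System.out.println(lookup("OkKey"));     // prints D'accord (found in the child)
        System.out.println(lookup("CancelKey")); // prints Cancel (falls back to the parent)
    }
}
```

The same mechanism is what lets a Greeting_en_US bundle carry only the strings that differ from Greeting_en.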
One question you might have is which approach is better, using .class files or using .properties files? Notice that .class files are searched for first, then .properties files. Also note that .class files are loaded directly by the class loader, but .properties files have to be parsed each time the bundle needs to be loaded. Parsing is a two-pass process. To deal with Unicode strings such as \uXXXX, the system must scan each key=value line twice, and then split the key from the value.
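You can observe this Properties-style parsing, including \uXXXX escapes, without any file on disk by feeding a PropertyResourceBundle from a reader. This is a sketch: the Reader constructor assumes Java 6 or later, and the keys are invented:

```java
import java.io.StringReader;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class ParseDemo {
    public static ResourceBundle parse() throws Exception {
        // Same key=value syntax as a .properties file; note the escaped é.
        String props = "OkKey=OK\nGreetKey=Caf\\u00e9";
        return new PropertyResourceBundle(new StringReader(props));
    }

    public static void main(String[] args) throws Exception {
        ResourceBundle bundle = parse();
        System.out.println(bundle.getString("OkKey"));    // prints OK
        System.out.println(bundle.getString("GreetKey")); // prints Café
    }
}
```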
Let's investigate both approaches further by comparing load times. Start with the following test program:
import java.util.*;
public class Test1 {
public static void main(String args[]) {
Locale locale = Locale.ENGLISH;
long start = System.nanoTime();
ResourceBundle myResources =
ResourceBundle.getBundle("MyResources", locale);
long end1 = System.nanoTime();
String string = myResources.getString("HelpKey");
long end2 = System.nanoTime();
System.out.println("Load: " + (end1 - start));
System.out.println("Fetch: " + (end2 - end1));
System.out.println("HelpKey: " + string);
}
}
If you are running on a 1.4 Java platform, you need to change the test program so that it calls currentTimeMillis instead of nanoTime. The nanoTime method works with nanosecond precision. The currentTimeMillis works only in milliseconds. Also, see the note about microbenchmarks at the end of this tip.
currentTimeMillis
nanoTime
Next, create a ListResourceBundle class in the same directory as the test program:
import java.util.*;
public class MyResources extends ListResourceBundle {
public Object[][] getContents() {
return contents;
}
private static final Object[][] contents = {
{"OkKey", "OK"},
{"CancelKey", "Cancel"},
{"HelpKey", "Help"},
{"YesKey", "Yes"},
{"NoKey", "No"},
};
}
Compile the test program and the MyResources class. Then run the test program.
MyResources
Your results will depends on your operating environment, your RAM size, and the speed of your processor. Here's a result produced in a 800 MHz machine running Windows XP with 768 MB RAM:
Load: 25937415
Fetch: 62994
Now create a properties file, MyResources.properties, with the following elements:
MyResources.properties
OkKey=OK
CancelKey=Cancel
HelpKey=Help
YesKey=Yes
NoKey=No
Run the test program again, but first remove the MyResources class. This will run the program using the .properties files. Here's the result produced in the same machine as before:
Load: 101469357
Fetch: 35450
The load times show the ListResourceBundle approach is faster than the PropertyResourceBundle approach. But, surprisingly, the fetch times show that the PropertyResourceBundle approach is almost twice as fast as ListResourceBundle approach. With roughly a five times difference in loading and a two times difference in fetching, you'd have to do a lot of fetches to catch up. Keep in mind that a nanosecond is a billionth of a second and a millisecond is a thousandth of a second.
PropertyResourceBundle
Now run the tests again, but this time use 100 elements in the .class and .properties files. To create the file, you can simply copy the five elements in the previous files 20 times, and change the entries slightly with each copy. For example, change OkKey to OkKey1, CancelKey to CancelKey1, and so on. Your results should follows the earlier results. Loading should be faster with the ListResourceBundle, but fetching should be faster with the PropertyResourceBundle. Actually, you should find that the load time of 100 resources for a PropertyResourceBundle is close to that of five elements.
OkKey
OkKey1
CancelKey
CancelKey1
ListResourceBundle
Load: 12782686
Fetch: 262788
PropertyResourceBundle
Load: 12600795
Fetch: 35175
Changing the Locale from language (Locale.ENGLISH) to language
and country (Locale.US) produces even more interesting results:
Locale.ENGLISH
Locale.US
ListResourceBundle
Load: 13152117
Fetch: 32921
PropertyResourceBundle
Load: 14592024
Fetch: 261060
ListResourceBundle:
Load: 12837863
Fetch: 264264
PropertyResourceBundle
Load: 14468366
Fetch: 33166
In all cases, while loading the initial bundle is always faster for the ListResourceBundle, fetching is sometimes slower. So which way do you go? For smaller resource bundles, the ListResourceBundle does seem to be the faster of the two. For larger ones, it seems best to stay away from ListResourceBundle. The ListResourceBundle needs to convert the two-dimensional array into a lookup map, that's the reason for the slower time.
Looking at these results, you might think that a ListResourceBundle should never be used. For instance, for a server-based program, it is easier to maintain a .properties file than a .class file, and the load time is negligible. But, a ListResourceBundle is not just a two-dimensional array of strings. The getContents method returns an Object array:
getContents
Object
public Object[][] getContents()
What does this mean? If you want to localize content beyond simply strings, you must use ListResourceBundle objects. This allows you to localize content such as images, colors, and dimensions. You can't have any object in a PropertyResourceBundle, only strings.
Note that the timing test in the sample program can be considered a microbenchmark. It can certainly be improved. However, with the caching of resource bundle loading, it's hard to get accurate load times when looping multiple times in the same run. Multiple runs should be used to validate results. For information on techniques for writing microbenchmarks, see the JavaOne 2002 presentation How NOT To Write A Microbenchmark. In addition, a lot of performance work in this area has been done for JDK 5.0. Your numbers may differ substantially using Java 2 SDK, Standard Edition, v 1.4.x.
For additional information about working with resource bundles,
see the javadoc for the ResourceBundle class,
the internationalization trail in the Java Tutorial, and the Core Java Internationalization page.
The first Tech Tip in this issue, Resource Bundle Loading, made some performance comparisons between the ListResourceBundle approach and the PropertyResourceBundle approach. If you decide to take the ListResourceBundles approach instead of the alternative PropertyResourceBundle route, there is one more thing to consider.. How to you address this issue? In fact, is there a way to hide ListResourceBundles from javadoc? This tip shows you a way to do that.
By default, the javadoc tool supports two options for suppressing classes from the output. You can specify a list of all the classes in a file and direct the tool to run javadoc on this fixed set. Or you can place all the resource bundles in a package and then direct the tool to run on a set of packages that ignores the package in which the resource bundle is located. The first technique is cumbersome -- maintaining the list is difficult. The second technique prevents you from keeping the resource bundles in the same directories as the source that uses them.
So how can you customize javadoc to ignore specific classes when generating its output? The answer is that instead of generating a complete list of classes (with the resource bundle classes missing), you simply provide a list of resource bundle classes.
This solution works for both the 1.4 and 5.0 releases of J2SE. To do this you run a doclet that accepts an option, -excludefile, which excludes a set of classes that you specify. Here's how you run the doclet (note that the command should go on one line):
-excludefile
java -classpath <path to doclet and path to tools.jar>
ExcludeDoclet -excludefile <path to exclude file>
<javadoc options>
In response to the command, the validOptions method of the Doclet class looks for the -excludefile option. If it finds it, the method reads the contents of the exclude file -- these are the set of classes and packages to ignore. Then the start method is called. As each class or package is processed, the method throws away the classes and packages in the exclude set. The doclet includes the optionLength method, this allows the doclet to run under both J2SE 1.4 and 5.0. Here is the doclet, ExcludeDoclet:
validOptions
Doclet
start
optionLength
ExcludeDoclet
import java.io.*;
import java.util.*;
import com.sun.tools.javadoc.Main;
import com.sun.javadoc.*;
/**
* A wrapper for Javadoc. Accepts an additional option
* called "-excludefile", which specifies which classes
* and packages should be excluded from the output.
*
* @author Jamie Ho
*/
public class ExcludeDoclet extends Doclet {
private static List m_args = new ArrayList();
private static Set m_excludeSet = new HashSet();
/**
* Iterate through the documented classes and remove the
* ones that should be excluded.
*
* @param root the initial RootDoc (before filtering).
*/
public static boolean start(RootDoc root) {
root.printNotice
("\n\nRemoving excluded source files.......\n\n");
ClassDoc[] classes = root.classes();
for (int i = 0; i < classes.length; i++) {
if (m_excludeSet.contains(classes[i].qualifiedName()) ||
m_excludeSet.contains
(classes[i].containingPackage().name())) {
root.printNotice
("Excluding " + classes[i].qualifiedName());
continue;
}
m_args.add(classes[i].position().file().getPath());
}
root.printNotice("\n\n");
return true;
}
/**
* Let every option be valid. The real validation happens
* in the standard doclet, not here. Remove the "-excludefile"
* and "-subpackages" options because they are not needed by
* the standard doclet.
*
* @param options the options from the command line.
* @param reporter the error reporter.
*/
public static boolean validOptions(String[][] options,
DocErrorReporter reporter) {
for (int i = 0; i < options.length; i++) {
if (options[i][0].equalsIgnoreCase("-excludefile")) {
try {
readExcludeFile(options[i][1]);
} catch (Exception e) {
e.printStackTrace();
}
continue;
}
if (options[i][0].equals("-subpackages")) {
continue;
}
for (int j = 0; j < options[i].length; j++) {
m_args.add(options[i][j]);
}
}
return true;
}
/**
* Parse the file that specifies which classes and packages
* to exclude from the output. You can write comments in this
* file by starting the line with a '#' character.
*
* @param filePath the path to the exclude file.
*/
private static void readExcludeFile(String filePath)
throws Exception {
LineNumberReader reader =
new LineNumberReader(new FileReader(filePath));
String line;
while ((line = reader.readLine()) != null) {
if (line.trim().startsWith("#"))
continue;
m_excludeSet.add(line.trim());
}
}
/**
* Method required to validate the length of the given option.
* This is a bit ugly but the options must be hard coded here.
* Otherwise, Javadoc will throw errors when parsing options.
* We could delegate to the Standard doclet when computing
* option lengths, but then this doclet would be dependent on
* the version of J2SE used. I'd rather hard code so that
* this doclet can be used with 1.4.x or 1.5.x.
*
* @param option the option to compute the length for.
*/
public static int optionLength(String option) {
if (option.equalsIgnoreCase("-excludefile")) {
return 2;
}
//General options
if (option.equals("-author") ||
option.equals("-docfilessubdirs") ||
option.equals("-keywords") ||
option.equals("-linksource") ||
option.equals("-nocomment") ||
option.equals("-nodeprecated") ||
option.equals("-nosince") ||
option.equals("-notimestamp") ||
option.equals("-quiet") ||
option.equals("-xnodate") ||
option.equals("-version")) {
return 1;
} else if (option.equals("-d") ||
option.equals("-docencoding") ||
option.equals("-encoding") ||
option.equals("-excludedocfilessubdir") ||
option.equals("-link") ||
option.equals("-sourcetab") ||
option.equals("-noqualifier") ||
option.equals("-output") ||
option.equals("-sourcepath") ||
option.equals("-tag") ||
option.equals("-taglet") ||
option.equals("-tagletpath")) {
return 2;
} else if (option.equals("-group") ||
option.equals("-linkoffline")) {
return 3;
}
//Standard doclet options
option = option.toLowerCase();
if (option.equals("-nodeprecatedlist") ||
option.equals("-noindex") ||
option.equals("-notree") ||
option.equals("-nohelp") ||
option.equals("-splitindex") ||
option.equals("-serialwarn") ||
option.equals("-use") ||
option.equals("-nonavbar") ||
option.equals("-nooverview")) {
return 1;
} else if (option.equals("-footer") ||
option.equals("-header") ||
option.equals("-packagesheader") ||
option.equals("-doctitle") ||
option.equals("-windowtitle") ||
option.equals("-bottom") ||
option.equals("-helpfile") ||
option.equals("-stylesheetfile") ||
option.equals("-charset") ||
option.equals("-overview")) {
return 2;
} else {
return 0;
}
}
/**
* Execute this doclet to filter out the unwanted classes
* and packages. Then execute the standard doclet.
*
* @param args The Javadoc arguments from the command line.
*/
public static void main(String[] args) {
String name = ExcludeDoclet.class.getName();
Main.execute(name, name, args);
Main.execute((String[]) m_args.toArray(new String[] {}));
}
}
Compile the doclet as follows:
javac -classpath tools.jar ExcludeDoclet.java
Replace tools.jar with the appropriate location of your JDK installation. For example, if you're running in the Windows environment and your JDK is installed in the c:\jdk1.5.0 directory, specify c:\jdk1.5.0\lib\tools.jar.
tools.jar
c:\jdk1.5.0 directory, specify c:\jdk1.5.0\lib\tools.jar
Next, create a file such as skip.txt to identify which classes to skip. Normally, this would be your set of ListResourceBundle subclasses. For this example, run ExcludeDoclet with the standard JDK classes, and ignore a set in the java.lang package:
skip.txt
java.lang
java.lang.Math
java.lang.Long
java.lang.InternalError
java.lang.InterruptedException
java.lang.Iterable
java.lang.LinkageError
Then run the following command (on one line):
java -classpath .;c:\jdk1.5.0\lib\tools.jar ExcludeDoclet
-d docs -excludefile skip.txt -sourcepath c:\jdk1.5.0\src
-source 1.5 java.lang
The command will generate the javadoc for the java.lang package, excluding the six classes and interfaces identified in skip.txt.
Here is part of the generated javadoc showing the interfaces in the java.lang package. Notice that the Iterable interface is excluded.
Iterable
For additional information about creating custom doclets, see the
tip Generating Custom Doclets. | http://java.sun.com/developer/JDCTechTips/2004/tt1214.html | crawl-001 | refinedweb | 2,773 | 50.94 |
Yesterday I created trying to understand why I was getting a different value
casting an address and just casting a float ..
anyway the reason for this is this code here
My teacher wrote this in class because wanted us to understand how floating point numbers are stored.My teacher wrote this in class because wanted us to understand how floating point numbers are stored.Code:#include<iostream> using namespace std; void binaryPrint(unsigned char * buff, int size) { for (int i = (size -1) ; i >= 0 ; i --) { unsigned char mask = 128; for (int j = 0 ; j < 8 ; j ++) { cout << ((buff[i] & mask) ? "1" : "0"); mask = mask >> 1; } } cout << endl; } void main() { float i; i = -100.75f; binaryPrint( (unsigned char * ) & i, sizeof(float)); }
But I find it really hard to understand his code.
firstly how is it possible to store an integer into a char type?
is it because all numbers in computer base form is in binary? and because 128 is not greater than 255 the highest number an 8bit can take, I was able to store in a char?
second:
when i declare a type and a location is made for it in memory is it contiguous, so it like an array ?
so for an int, because memory is usually stored in bytes
it will divide and int of size 4bytes into each byte
so it is a 4 by 8 array?
1111 1111
1111 1111
1111 1111
1111 1111
so if i just declared a simple char , because a char size is already a byte it doesn't have divide into up into separate byte..so it is basically
1111 1111
if that is the case
this piece of code:
If bytes are located as arrays, 1d array, 2d array..If bytes are located as arrays, 1d array, 2d array..Code:void binaryPrint(unsigned char * buff, int size) { for (int i = (size -1) ; i >= 0 ; i --) { unsigned char mask = 128; for (int j = 0 ; j < 8 ; j ++) { cout << ((buff[i] & mask) ? "1" : "0"); mask = mask >> 1; } } cout << endl; }
were 2 loops used because char *buff is string pointer to an address that holds one byte of data
I don't really understand why i will be initialised to the size of a type..why not start from 0?
when he did buff[i]
so if i have 32 bits in memory:
1000 1111
1100 0001
1001 1011
0011 0101
if i did buff[i]
since i is 4-1 and represents row 4
i am assuming
it just takes
0011 0101 & 1000 0000
0011 0101 & 0100 0000
0011 0101 & 0010 0000
and just ands them together to see which bit is set.after the inner loop is done
it goes to row 3
and does the process again. So it does this 32 times.
so let's say instead of converting a float or int - since they are 32bits
i wanted to convert a char
now a char is 8 bits long
1111 0000
if i had a string pointer to an address in memory that holds the binary digits
so buff points to 1111 0000
1111 0000 is essentially a 1d array ..
so why is that when I use just one loop, to check which bits are set it doesn't work ?
... Unless... UnlessCode:#include <iostream> using namespace std ; void main() { char let = 9 ; unsigned char* buff = (unsigned char*)&let; unsigned char mask = 128 ; int i= 0; for(; i<8; i++) { if(buff[i] & mask) { cout << "1" ; } else { cout << "0" ; } mask >>=1 ; } cout << endl ; system("pause") ; }
because I am incrementing i..
i am moving to the next row of byte in memory...
so that wouldn't work..
i would have to do buff[0] or *buff for it to work...
please help explain.. | https://cboard.cprogramming.com/cplusplus-programming/130964-binary-converter-bit-level.html | CC-MAIN-2017-04 | refinedweb | 624 | 74.53 |
18 December 2018 0 comments Python, Web development
Last week, I landed concurrent downloads in
hashin. The example was that you do something like...
$ time hashin -r some/requirements.txt --update-all
...and the whole thing takes ~2 seconds even though it that
some/requirements.txt file might contain 50 different packages, and thus 50 different PyPI.org lookups.
Just wanted to point out, this is not unique to use with
--update-all. It's for any list of packages. And I want to put some better numbers on that so here goes...
Suppose you want to create a requirements file for every package in the current virtualenv you might do it like this:
# the -e filtering removes locally installed packages from git URLs $ pip freeze | grep -v '-e ' | xargs hashin -r /tmp/reqs.txt
Before running that I injected a little timer on each pypi.org download. It looked like this:
def get_package_data(package, verbose=False): url = "" % package if verbose: print(url) + t0 = time.time() content = json.loads(_download(url)) if "releases" not in content: raise PackageError("package JSON is not sane") + t1 = time.time() + print(t1 - t0)
I also put a print around the call to
pre_download_packages(lookup_memory, specs, verbose=verbose) to see what the "total time" was.
The output looked like this:
▶ pip freeze | grep -v '-e ' | xargs python hashin.py -r /tmp/reqs.txt 0.22896194458007812 0.2900810241699219 0.2814369201660156 0.22658205032348633 0.24882292747497559 0.268247127532959 0.29332590103149414 0.23981380462646484 0.2930259704589844 0.29442572593688965 0.25312376022338867 0.34232664108276367 0.49491214752197266 0.23823285102844238 0.3221290111541748 0.28302812576293945 0.567702054977417 0.3089122772216797 0.5273139476776123 0.31477880477905273 0.6202089786529541 0.28571176528930664 0.24558186531066895 0.5810830593109131 0.5219211578369141 0.23252081871032715 0.4650228023529053 0.6127192974090576 0.6000659465789795 0.30976200103759766 0.44440698623657227 0.3135409355163574 0.638585090637207 0.297544002532959 0.6462509632110596 0.45389699935913086 0.34597206115722656 0.3462028503417969 0.6250648498535156 0.44159507751464844 0.5733060836791992 0.6739277839660645 0.6560370922088623 SUM TOTAL TOOK 0.8481268882751465
If you sum up all the individual times it would have become 17.3 seconds. It's 43 individual packages and 8 CPUs multiplied by 5 means it had to wait with some before downloading the rest.
Clearly, this works nicely.
Follow @peterbe on Twitter | https://api.minimalcss.app/plog/concurrent-download-with-hashin | CC-MAIN-2020-24 | refinedweb | 364 | 74.29 |
This is your resource to discuss support topics with your peers, and learn from each other.
01-16-2011 06:30 AM
I am attempting to create a label from the information given in a TextInput box and need it to autoSize to the length of the text. The api describes the property for this that I am following but get the following error:
1120: Access of undefined property TextFieldAutoSize. MyIdeasBoard.as /MyIdeasBoard/s
The code for this is as follows:
var tempString:String = newTextInput.text; var thisTextField:Label = new Label(); thisTextField.text = tempString; thisTextField.autoSize = TextFieldAutoSize.RIGHT;
Grateful for any assistance
Solved! Go to Solution.
01-16-2011 07:03 AM
My experience is that you don't need to specify the width of a Label, it'll make it fit automatically. Just set the text, remove the autoSize line and add it.
01-16-2011 07:28 AM
Its not making it fit, that is why I went down the route of the autoSize, its currently cutting it off after about 10 characters and the labels need to be of various sizes.
01-16-2011 08:50 AM
hey dave,
from the looks of it everything looks fine. i am able to use the autoSize properly on my machine. what it sounds like is you are importing the wrong Label class. make sure you are importing the qnx.ui.text.Label class. hope that helps. good luck!
01-16-2011 10:17 AM
Make sure you have imported TextFieldAutoSize at the top:
import flash.text.TextFieldAutoSize;
01-16-2011 11:07 AM
Thats great thanks, I hadn't imported flash.text.TextFieldAutoSize as I had assumed that it was covered by the qnx.ui.text.Label. Its working as expected now. | https://supportforums.blackberry.com/t5/Adobe-AIR-Development/Label-autoSize-error/m-p/737781 | CC-MAIN-2017-13 | refinedweb | 293 | 67.04 |
»
Programming Diversions
Author
Abbott's Revenge
Garrett Rowe
Ranch Hand
Joined: Jan 17, 2006
Posts: 1296
posted
Aug 26, 2007 17:09:00
0
One cool resource I've found for interesting programming questions is
The online-judge problem set archive
. Although
Java
support for the online submission portion of the site leaves a lot to be desired, (currently only Java 1.2 is supported with
very
limited support for java.io.* operations), there are plenty of problems that span a whole range of difficulties. The problem I'm currently pounding my head against is problem #816
Abbott's Revenge
. This problem combines a slew of programming goodies, from parsing a domain-specific language, to finding a good abstraction of the problem (my current headache), to discovering a good algorithm to solve the maze. I figured it might be fun to discuss one or several of the interesting facets of this problem here.
To the mods: I'm not sure if its OK to post problems from this site here, but I figured since these weren't active contest problems, but more like individual programming brain teasers it might not be a problem. There are no prizes associated with solving the problems as far as I'm aware, just the general feeling of accomplishment. If it is an issue, just let me know and I'll cease and desist.
Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. - Laurence J. Peter
Garrett Rowe
Ranch Hand
Joined: Jan 17, 2006
Posts: 1296
posted
Aug 26, 2007 17:17:00
0
More fun interactive mazes designed by
Robert Abbott
can be found
here
.
Garrett Rowe
Ranch Hand
Joined: Jan 17, 2006
Posts: 1296
posted
Aug 26, 2007 17:45:00
0
As far as the problem abstraction goes, this is what I'm currently thinking.
interface Cell { boolean isGoal(); CellCoordinates getCoordinates(); List<Cell> listCellsAvailableFrom(Direction d); Direction directionTo(Cell c) throws CellNotAdjacentException; } enum Direction {NORTH, SOUTH, EAST, WEST;} interface CellCoordinates { int getX(); int getY(); }
Maybe I'll use this as a jumping-off point and see where it leads me, although I'm sure the TDD guys will say I'm getting way ahead of myself. I just not as comfortable with their style yet.
Steve Fahlbusch
Bartender
Joined: Sep 18, 2000
Posts: 582
7
I like...
posted
Aug 29, 2007 06:02:00
0
Garrett,
On monday saw your post at work and as soon as i got home had to spend a few minutes banging this out in python - it's a great classic problem with a twist.
How goes it for you?
Just for my own interest. Did you use a forwards or backwards evaluation approach to solving the path (did backwards myself)? And as to enumerating the solution domain did you use a depth first search or breadth first search (since it seems to want a minimal solution, i did the BFS and punted at the first solution).
thanks for the link
Gabriel Claramunt
Ranch Hand
Joined: May 26, 2007
Posts: 375
I like...
posted
Sep 15, 2007 22:48:00
0
Very interesting... It hook me up immediately!
Seems "easy" to solve
: create a directed graph where the nodes represents the "hallways" and the arcs the "turns" connecting them (Yes, is somehow the "complement" of the maze's draw). Then, with Dijktra's algorithm should be a piece of cake
(and is DFS)
Too lazy to code it right now, later I'll do it as an excuse to practice Ruby.
Gabriel
Software Surgeon
Piet Verdriet
Ranch Hand
Joined: Feb 25, 2006
Posts: 266
posted
Sep 29, 2007 06:13:00
0
Thanks for posting this fun puzzle! For those interested, here's how I solved it:
import java.io.*; import java.util.*; // AbbottsRevenge class AbbottsRevenge { public AbbottsRevenge(String fileName) throws IOException { Scanner data = new Scanner(new File(fileName)); while(data.hasNextLine()) { String name = data.nextLine().trim(); if(name.equals("END")) break; MazeSolver solver = new MazeSolver(data.nextLine().trim()); Maze maze = new Maze(name, data); solver.solve(maze); System.out.println(solver); } } public static void main(String[] args) throws IOException { new AbbottsRevenge("data.txt"); // contents of data.txt: /* AtlantaMaze 4 2 N 4 3 1 1 WL NR * 1 2 WF EFR NLR * 1 3 WFL EFR NL * 1 4 ER NL * 2 1 NFR SFL WL * 2 2 NFL SFLR WFRL EL * 2 3 NF SF WFLR EFLR * 2 4 ELR NF SR * 3 1 SL WR * 3 2 NF EFL WR SLR * 3 3 EFL WL SLR * 3 4 EL SR * 0 END */ } } // Maze class Maze { String name; Map<Point, Crossing> crossings; public Maze(String n, Scanner data) { name = n; crossings = new HashMap<Point, Crossing>(); process(data); } public Direction getNextDirection(Point current, Direction last) { Crossing c = crossings.get(current); return c.popDirection(last); } private void process(Scanner data) { while(data.hasNextLine()) { String line = data.nextLine().trim(); if(line.equals("0")) break; String[] array = line.split("\\s+"); Crossing c = new Crossing(new Point(array[0], array[1])); for(int i = 2; i < array.length-1; i++) { for(int j = 1; j < array[i].length(); j++) { c.addConnection(array[i].charAt(0)+""+array[i].charAt(j)); } } crossings.put(c.point, c); } } } // Crossing class Crossing { Point point; Map<Direction, Stack<Direction>> connections; public Crossing(Point p) { point = p; connections = new HashMap<Direction, Stack<Direction>>(); } public void addConnection(String s) { Direction bearing = Direction.getDiretion(String.valueOf(s.charAt(0))); Direction heading = Direction.getDiretion(s); Stack<Direction> allHeadings = connections.remove(bearing); if(allHeadings == null) allHeadings = new Stack<Direction>(); allHeadings.push(heading); 
connections.put(bearing, allHeadings); } public Direction popDirection(Direction d) { Stack<Direction> stack = connections.get(d); return stack == null || stack.isEmpty() ? null : stack.pop(); } } // Point class Point { int row; int column; public Point(String r, String c) { this(Integer.parseInt(r), Integer.parseInt(c)); } public Point(int r, int c) { row = r; column = c; } public boolean equals(Object o) { Point that = (Point)o; return this.row == that.row && this.column == that.column; } public int hashCode() { return row*37 ^ column*43; } public Point move(Direction d) { switch(d) { case NORTH : return new Point(row-1, column); case SOUTH : return new Point(row+1, column); case EAST : return new Point(row, column+1); default : return new Point(row, column-1); } } public String toString() { return "("+row+","+column+")"; } } // MazeSolver class MazeSolver { Point finish; Maze maze; Stack<Decision> stack; List<Decision> path; public MazeSolver(String s) { String[] array = s.split("\\s+"); Point start = new Point(array[0], array[1]); Direction bearing = Direction.getDiretion(array[2]); finish = new Point(array[3], array[4]); stack = new Stack<Decision>(); stack.push(new Decision(start, bearing)); } private void move(Direction d) { Point next = stack.peek().point.move(d); stack.push(new Decision(next, d)); } private Direction lastDirection() { return stack.peek().direction; } private Point currentPoint() { return stack.peek().point; } private void removeCycles() { if(stack.size() == 1) return; path = new ArrayList<Decision>(stack); for(int i = 0; i < path.size(); i++) { Decision temp = path.get(i); int start = path.indexOf(temp); int end = path.lastIndexOf(temp); while(end > start) { path.remove(end--); } } } public void solve(Maze m) { maze = m; move(lastDirection()); while(true) { Direction next = maze.getNextDirection( currentPoint(), lastDirection()); if(next != null) { move(next); } else { stack.pop(); } if(currentPoint().equals(finish) || stack.size() == 1) 
break; } removeCycles(); } public String toString() { StringBuilder b = new StringBuilder(maze.name).append('\n').append(' '); if(path == null) return b.append(" No Solution Possible").toString(); for(int i = 0; i < path.size(); i++) { if(i > 0 && i%10 == 0) b.append('\n').append(' '); b.append(' ').append(path.get(i).point); } return b.toString(); } } // Decision class Decision { Point point; Direction direction; public Decision(Point p, Direction d) { point = p; direction = d; } public boolean equals(Object o) { Decision that = (Decision)o; return this.point.equals(that.point) && this.direction == that.direction; } public int hashCode() { return point.hashCode()*13 ^ direction.hashCode()*47; } } // Direction enum Direction { NORTH, EAST, SOUTH, WEST; public static Direction getDiretion(String s) { if(s.equals("WR") || s.equals("EL") || s.equals("NF") || s.equals("N")) { return NORTH; } else if(s.equals("NR") || s.equals("SL") || s.equals("EF") || s.equals("E")) { return EAST; } else if(s.equals("ER") || s.equals("WL") || s.equals("SF") || s.equals("S")) { return SOUTH; } else { return WEST; } } }
I agree. Here's the link:
subject: Abbott's Revenge
Similar Threads
Need excellent Java Programming Resource
Math problem with WAS 4.1
I can't run my Servlet
Binary Notes
Finding the largest Prime Factor
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/35482/Programming/Abbott-Revenge | CC-MAIN-2014-52 | refinedweb | 1,429 | 56.76 |
This is another post about ASP.NET Core and Angular 2. This time I use a cleaner and more lightweight way to host an Angular 2 app inside an ASP.NET Core web application. I'm going to use the dotnet CLI and Visual Studio Code.
A few days ago an update for ASP.NET Core was announced. It is not a big one, but an important runtime update. You should install it if you already use ASP.NET Core 1.0. If you install ASP.NET Core for the first time, the update is already included. The final version of Angular 2 was also announced a few days ago. So, we will use Angular 2.0.0 and ASP.NET Core 1.0.1.
This post is structured into nine steps:
#1 Create the ASP.NET Core Web
The first step is to create the ASP.NET Core web application. This is the easiest part, thanks to the dotnet CLI. After downloading and installing it, you are directly able to use it. Choose any console you like and go to your working folder.
Type the following line to create a new web application inside that working folder:
> dotnet new -t web
If you use the dotnet CLI for the first time, it will take a few seconds. After the first time it is pretty fast.
Now you have a complete ASP.NET Core quick-start application. It is almost the same thing you get if you create a new application in Visual Studio 2015.
Now we need to restore the NuGet packages, which contain all the .NET Core and ASP.NET dependencies:
> dotnet restore
This takes a few seconds, depending on the number of packages and on the internet connection.
Once this is done, type dotnet run to start the app. You will see a URL in the console. Copy this URL and paste it into the browser's address bar. As you can see, you need just three console commands to create a working ASP.NET Core application.
#2 Setup the ASP.NET Core Web
To support an Angular 2 single page application, we need to prepare the Startup.cs a little bit. Because we don't want to use MVC but just the Web API, we need to remove the configured default route.
To support Angular routing, we need to handle 404 errors: if a requested resource is not found on the server, it could be an Angular route. This means we should redirect requests that result in a 404 error to the index.html. We will create this file in the wwwroot folder later on.
The
Configure method in the
Startup.cs now looks(); }
#3 The Front-End Dependencies.
NPM is used to get all that stuff, including Angular itself, to the development machine. We need to configure the
package.json a little bit. The easiest way is to use the same configuration as in the ANgular2 quick-start tutorial on angular.io
You need to have Node.JS installed on your machine, To get all the tools working.
{ " } }
You should also install Webpack, Typings and TypeScript globaly on your machine:
> npm install -g typescript > npm install -g typings > npm install -g webpack
The TypeScript build needs a configuration, to know how to build that code. This is why we need a
tsconfig.json:
{ "compilerOptions": { "target": "es5", "module": "commonjs", "moduleResolution": "node", "sourceMap": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "removeComments": false, "noImplicitAny": false } }
And TypeScript needs type defintions for all the used libraries, which are not written in TypeScript. This is where Typings is used. Typings is a kind of a package manager for TypeScript type definitions, which also needs a configuration:
{ " } }
Now we can use
npm install in the console to load all that stuff. This command automatically calls
typings install as a NPM post install event.
#4 Setup the Single Page
The Angular2 app is hosted on a single HTML page inside the
wwwroot folder of the ASP.NET Core web. Add a new
index.html and add it to the
wwwroot folder:
>
Currently we don't have the JavaSript dependencies configured. This is what we will do in the next step
#5 Configure Webpack
Webpack has two tasks in this simple tutorial. The first thing is to copy some dependencies out of the
node_modules folder into the
wwwroot folder, because static files will only be provided out of this special folder. We need Core.JS, Zone.JS, Reflect-Metadata and System.JS. The second task is to build and bundle the Angular2 application (which is not yet written) and all it's dependencies.
Let's see how this simple Webpack configuration (webpack.config.js) looks like:' } ] } }];
We have two separate configurations for the mentioned tasks. This is not the best way how to configure Webpack. E.g. the Angular2 Webpack Starter or the latest Angular CLI, do the whole stuff with a more complex Webpack configuration.
To run this configuration, just type
webpack in the console. The first configuration writes out a few warnings, but will work anyway. The second config should fail, because we don't have the Angular2 app yet.
#6 Configure the App
We now need to load the Angular2 app and it's dependencies. This is done with System.JS which also needs a ocnfiguration. We need a systemjs.config.js:
/** *);
This file is almost equal to the file from the angular.io quick-start tutorial. We just need to change a few things:
The first thing is the path to the node_modules which is not on the same level as usual. So we need to change the path to
../node_modules/, we also need to tell System.js that the bundle is not a commonjs module. this is doen with the
meta property. I also changed the app main path to
./bundle.js, instead of
main.js
#7 Create the App
Inside the
wwwroot folder, create a new folder called
app. Inside this new folder we need to create a first TypeScript file called main.ts:
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app.module'; const platform = platformBrowserDynamic(); platform.bootstrapModule(AppModule);
This script calls the app.module.ts, which is the entry point to the app: { }
The module collects all the parts of our app and puts all the components and services together.
This is a small component with a small inline template:[] = []; }
At least, we need to create a service which calls a ASP.NET Core web api. We need to create the API later on.; }
#8 The Web API
The web api is pretty simple in this demo, just to show how it works:; } } }
If you start the app using
dotnet run you can call the API using that URL:, you'll see the three persons in the browser as a JSON result.
#9 That's It. Run the App.
Type
webpack and
dotnet run in the console to compile and pack the client app and to start the application. After that call the URL in a browser:
Conclusion
As you can see, hosting an Angular2 app inside ASP.NET Core web using this way is pretty much easier and much more light weight than using Visual Studio 2015..
I pushed the demo code to GitHub. Try it out, play around with it and give me some feedback about it :)
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/aspnet-core-and-angular-2-using-cli-and-visual-stu | CC-MAIN-2017-04 | refinedweb | 1,233 | 68.06 |
Steps to reproduce:
# create a large (>4gb) file
f = open('foo.txt', 'wb')
text = 'a' * 1024**2
for i in xrange(5 * 1024):
f.write(text)
f.close()
# now zip the file
import zipfile
z = zipfile.ZipFile('foo.zip', mode='w', allowZip64=True)
z.write('foo.txt')
z.close()
Now inspect the file headers using a hex editor. The written headers are incorrect. The filesize and compressed size should be written as 0xffffffff and the 'extra field' should contain the actual sizes.
Tested on Python 2.5 but looking at the latest code in 3.2 it still looks broken.
The problem is that the ZipInfo.FileHeader() is written before the filesize is populated, so Zip64 extensions are not written. Later, the sizes in the header are written, but Zip64 extensions are not taken into account and the filesize is just wrapped (7gb becomes 3gb, for instance).
My patch fixes the problem on Python 2.5, it might need minor porting to fix trunk. It works by assigning the uncompressed filesize to the ZipInfo header initially, then writing the header. Then later on, I re-write the header (this is okay since the header size will not have increased.) | https://bugs.python.org/msg115250 | CC-MAIN-2018-51 | refinedweb | 199 | 77.74 |
A layout manager which arranges widgets horizontally or vertically. More...
#include <Wt/WBoxLayout>
A layout manager which arranges widgets horizontally or vertically.
This layout manager arranges widgets horizontally or vertically inside the parent container.
The space is divided so that each widget is given its preferred size, and remaining space is divided according to stretch factors among widgets. If not all widgets can be given their preferred size (there is not enough room), then widgets are given a smaller size (down to their minimum size). If necessary, the container (or parent layout) of this layout is resized to meet minimum size requirements.
The preferred width or height of a widget is based on its natural size, where it presents its contents without overflowing. WWidget::resize() or (CSS
width,
height properties) can be used to adjust the preferred size of a widget.
The minimum width or height of a widget is based on the minimum dimensions of the widget or the nested layout. The default minimum height or width for a widget is 0. It can be specified using WWidget::setMinimumSize() or using CSS
min-width or
min-height properties.
You should use WContainerWidget::setOverflow(OverflowAuto) or use a WScrollArea to automatically show scrollbars for widgets inserted in the layout to cope with a size set by the layout manager that is smaller than the preferred size.
When the container of a layout manager does not have a defined size (by having an explicit size, or by being inside a layout manager), or has has only a maximum size set using WWidget::setMaximumSize(), then the size of the container will be based on the preferred size of the contents, up to this maximum size, instead of the default behaviour of constraining the size of the children based on the size of the container. Note that because of the CSS defaults, a WContainer has by default no height, but inherits the width of its parent widget. The width is thus by default defined.
A layout manager may provide resize handles between items which allow the user to change the automatic layout provided by the layout manager (see setResizable()).
Each item is separated using a constant spacing, which defaults to 6 pixels, and can be changed using setSpacing(). In addition, when this layout is a top-level layout (i.e. is not nested inside another layout), a margin is set around the contents. This margin defaults to 9 pixels, and can be changed using setContentsMargins(). You can add more space between two widgets using addSpacing().
For each item a stretch factor may be defined, which controls how remaining space is used. Each item is stretched using the stretch factor to fill the remaining space.
Usage example:
Enumeration of the direction in which widgets are layed out.
Creates a new box layout.
This constructor is rarely used. Instead, use the convenient constructors of the specialized WHBoxLayout or WVBoxLayout classes.
Use
parent =
0 to created a layout manager that can be nested inside other layout managers.
Adds a layout item.
The item may be a widget or nested layout.
How the item is layed out with respect to siblings is implementation specific to the layout manager. In some cases, a layout manager will overload this method with extra arguments that specify layout options.
Implements Wt::WLayout.
Adds a nested layout to the layout.
Adds a nested layout, with given
stretch factor.
Adds extra spacing.
Adds extra spacing to the layout.
Adds a stretch element.
Adds a stretch element to the layout. This adds an empty space that stretches as needed.
Adds a widget to the layout.
Adds a widget to the layout, with given
stretch factor. When the stretch factor is 0, the widget will not be resized by the layout manager (stretched to take excess space).
The
alignment parameter is a combination of a horizontal and/or a vertical AlignmentFlag OR'ed together.).
Removes and deletes all child widgets and nested layouts.
This is similar to WContainerWidget::clear(), with the exception that the layout itself is not deleted.
Implements Wt::WLayout.
Returns the number of items in this layout.
This may be a theoretical number, which is greater than the actual number of items. It can be used to iterate over the items in the layout, in conjunction with itemAt().
Implements Wt::WLayout.
Returns the layout direction.
Inserts a nested layout in the layout.
Inserts a nested layout in the layout at position
index, with given
stretch factor.
Inserts extra spacing in the layout.
Inserts extra spacing in the layout at position
index.
Inserts a stretch element in the layout.
Inserts a stretch element in the layout at position
index. This adds an empty space that stretches as needed.
Inserts a widget in the layout.
Inserts a widget in the layout at position
index, with given
stretch factor. When the stretch factor is 0, the widget will not be resized by the layout manager (stretched to take excess space).).
Returns whether the user may drag a particular border.
This method returns whether the border that separates item index from the next item may be resized by the user.
Returns the layout item at a specific index.
If there is no item at the
index,
0 is returned.
Implements Wt::WLayout.
Removes a layout item (widget or nested layout).
Implements Wt::WLayout.
Sets the layout direction.
Sets whether the use may drag a particular border.
This method sets whether the border that separates item index from the next item may be resized by the user, depending on the value of enabled.
The default value is false.
If an
initialSize is given (that is not WLength::Auto), then this size is used for the size of the item, overriding the size it would be given by the layout manager.
Sets spacing between each item.
The default spacing is 6 pixels.
Sets the stretch factor for a nested layout.
The
layout must have previously been added to this layout using insertLayout() or addLayout().
Returns whether the
stretch could be set.
Sets the stretch factor for a widget.
The
widget must have previously been added to this layout using insertWidget() or addWidget().
Returns whether the
stretch could be set.
Returns the spacing between each item. | https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1WBoxLayout.html | CC-MAIN-2021-31 | refinedweb | 1,037 | 58.18 |
Hello,
I am very new to Bonita.
I am trying to implement a Bonita process into a Java EE Application and deploy it to the Bonita Wildfly.
I have found the simple Example that connects to the running Bonita instance but when I deploy it as an ear file to my wildfly instance nothing happens. I cant see any output logs. Thats why I am curious if my Program is even running.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;
Hi guys,
As my colleague said (in french here ), we have an unexpected behavior with the Java API for Bonita 7.5.
When trying to start a new process with a LocalDate in the contract, we have the following error :
Error while validating expected inputs: [java.time.Ser@271a6dc7 cannot be assigned to LOCALDATE]
I created a new Definition process, taking only 'myDate' LOCALDATE in the initialization contract
Hi there,
I have a java program that works perfectly with postgres driver 9.2, and it also works in my groovy scripts except one.
Here is my code:
Basically I am trying to upload the document d1 to postgres as a byte[]. This works in java but I cannot get it to work in groovy, which it should.
Any ideas? Thanks in advance.
Seán
Problem with working out how to integrate a java client with a Tomcat portal as per instructions Integrate a process into an application .
Here is what I've done so far:
hi,
i get error when i try to import org.bonitasoft.engine..... in groovy
do you no why ?
thank you
hello ,
i m total new in this environment .
i try to make one connector which just print hello word .
but system.out.println not working .
so how can i print hello word on that engine log on finish task
Pl help
thanks in advance
Jalpa.
After adding a connector Alfresco, the following exception occurs.
Can anyone help me? Thanks
Achille | https://community.bonitasoft.com/tags/engine | CC-MAIN-2021-17 | refinedweb | 338 | 67.15 |
Hi,
I'm starting to use the RichFaces and tried to use the tag <a4j:form> in my example and an error occurred:
/index.xhtml @17,23 <a4j:form> Tag Library supports namespace:, but no tag was defined for name: form
I'm trying with glassfish 3.0.1 and netbeans 6.9.
Another problem happens when I use the header on tag <rich:panel, the netbeans identify with a error:
Someone help?
a:form will not be migrated to 4.x because ajaxSubmit feature not actual in JSF 2 world where behaviors standartized.
for rich:panel - header should actually works. we need to check if it's properly geenrated as attribute in taglib. that could cause such validations as IDE can't recognize it as valid attribute. But it will works when you run app.
Hi Ilya,
Yes that works when I run app. I just to know why this validations was incorrect...
Thanks a lot for your clarification!!!!! | https://developer.jboss.org/thread/159268 | CC-MAIN-2018-05 | refinedweb | 160 | 68.57 |
Operational Differences between MultiValue and Caché
MV Accounts and Caché Namespaces
[Back]
MultiValue Features of Caché
>
Operational Differences between MultiValue and Caché
>
MV Accounts and Caché Namespaces
Class Reference
Search
:
Both MV and Caché have the concept of a logical space to hold groups of related programs and data. In MV, this space is called an ACCOUNT; Caché calls it a
NAMESPACE
. Because of this similarity, it is natural to consider mapping MV accounts to Caché namespaces since this will also provide the easiest access to all the other facilities that Caché provides for MV applications.
Similarity is not identity, however. The rules for forming MV account names differ from those for Caché namespaces. The following describes the differences between them and how those difference are resolved.
An MV account name could contain any character from the extended ASCII character set.
Caché namespace names are at least one character long, starting with an alphabetic character or a percent sign, and followed by an arbitrary number of alphanumerics, dashes or underscores.
In the simplest case, a Caché namespace maps to a Caché database of the same name. Caché database names are between 1 and 30 characters long, can start with an alphabetic character or an underscore. The remaining characters can be alphanumeric, dash, or underscore.
Converting the Account Name
In transforming the account name for Caché, it is desirable to end up with a result that doesn't alter simple names, that transforms non-conforming account names in an obvious way, and that results in a string acceptable for both the namespace and the database name. To that end, the following algorithm is used to transform an MV account name into the required Caché namespace and database names. AName, NSName, and DBName stand for the MV account name, Caché namespace name and Caché database name, respectively.
Start with an empty NSName.
Scan AName from left-to-right finding the first alphanumeric character. Make this the first character of NSName.
Continuing from the character just found, append all following characters of AName to NSName in the order scanned as long as each is an alphanumeric, dash or underscore.
If the resulting NSName is SYSPROG, set NSName to %SYS (the Caché administrator namespace).
If NSName is empty (no suitable characters were found), set NSName to the string ACCT_NIL.
If NSName is longer than 27 characters, set NSName to the string "ACCT_TRUNC_nnn_1", where nnn is the length of the original account name.
Convert NSName to uppercase.
Set DBName equal to NSName.
Resolving Duplicate Namespace Names
When creating a new MV account within Caché, it is possible that the preceding algorithm will result in a namespace name that already exists. In this instance, additional processing is done to make the namespace name unique so that the creation of the new account succeeds.
If the name does not ends in an underscore followed by a string of digits, "_1" is appended to the name.
Repeatedly do the following until a unique name results:
Extract the string of digits following the last underscore.
Increment the integer formed by this string of digits by one.
Replace the extracted string of digits by the new value.
Account Name Maps
When creating a new MV account names, for example when importing MV applications, Caché keeps track of the original account name and its resulting namespace name. This map is used to ensure that references in the MV application to other MV account are properly resolved.
When an account name is deleted, it is removed from this map.
Dictionary Items Single Versus MultiValued
The basic rule for dictionary entries in Caché is this: unless you are certain that the field will always be single-valued, mark it as multivalued or just leave the M/S indicator blank. This is because setting an entry as single valued allows the query processor to optimize the generated code, and generally this optimized code will not work correctly on multivalues.
I-Types
If you have an I-type dictionary attribute, regardless of whether it is marked single or multivalued, the I-type expression processes the entire data record at once. In the I-type expression, you can choose to use single-valued functions like OCONV, or multivalued functions like OCONVS, so you can control whether multivalues are processed as one string (using OCONV), or as multivalues (using OCONVS).
Note:
This processing is independent of whether the attribute is marked as single or multivalued.
After processing the I-type expression, the result is passed through the option conversion in attribute 3 of the dictionary item 1 value at a time, again regardless off single or multivalued identification.
The usage of the attribute varies with whether it is marked as single or multivalued. If the data is multivalued, but the attribute is marked as single-valued, then a comparison against a single value will likely fail. For example:
SELECT FILE WITH ATRB = "ABC"
where the actual data in ATRB is something like
ABC]DEF
will pass if ATRB is marked multivalued, but will fail if it is marked single-valued, because on a multivalued compare,
ABC
is compared separately against
ABC
and
DEF
, but on a single valued compare,
ABC
is being compared against the entire string
ABC]DEF
and is not equal.
If an attribute is marked as single-valued, then the results of an exploded select or WHEN clause will be different than if it were marked multivalued.
Note:
This is true on Caché and all platforms that support I- and D-type dictionary attributes.
A-Types
For A-types, the rules are slightly different. No other platform besides Caché allows an A-type to be marked single-valued. On UniVerse ODBC, A-types can be identified as single-valued, but this has no effect on MultiValue query.
On any platform, when an A-type is multivalued, the correlative on attribute 8 of the dictionary is called repeatedly, once for each value in the data. This is in contrast to an I-type where the I-type expression is only run once. However, as with the I-type, the conversion (attribute 7 in an A-type) is applied to each value.
On Caché, when an A-type is single-valued, the results depend on the type of correlative. For a simple correlative like MCT, the entire data attribute, including all multivalues, is passed through the correlative as one string. If the attribute data is something like ABC]DEF, then the result of the MCT correlative will be Abc]def, as opposed to Abc]Def which would be produced by a multivalued attribute. Then the conversion is applied one value at a time. So a single-valued A-type with a simple correlative will process like a single-valued I-type.
Other Considerations
If the correlative is an A, F or C processing code, then the data will be passed through the correlative once, but each attribute reference in the correlative will only get the first value of the attribute it references. For example if the correlative is F1, and attribute 1 contains
ABC]DEF
, the result of the correlative will be just
ABC
. The second value never gets processed.
So, it is ill-advised to ever use an A, F or C processing code in a single-valued attribute if there's any chance the processed data might be multivalued. This applies not just to the data in the AMC (Attribute 2 of the dictionary item) but also if any of the other attributes the correlative references might be multivalued.
[Back]
[Top of Page]
© 1997-2018, InterSystems Corporation
Content for this page loaded from GVOD.xml on 2017-09-29 10:49:46 | http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GVOD_mv_accounts | CC-MAIN-2018-17 | refinedweb | 1,278 | 50.97 |
Tutorial
Running TypeScript Scripts With Ease with ts-node.
While not for everybody, TypeScript, as well as other strongly typed languages, have been gaining in popularity. With TypeScript being a superset of JavaScript, using it means transpiling your
*.ts files down to pure JavaScript before the V8 engine can understand them. You could watch for file changes and automate the transpiling but sometimes you just want to run your script and get results. This is where
ts-node comes in. With
ts-node we can skip the fuss and execute our TypScript scripts with ease.
Getting Started
To get things started, we need to install
typescript and
ts-node:
## Via npm $ npm install typescript ts-node ## Via Yarn $ yarn add typescript ts-node
That’s all there is to getting started. Since
ts-node is an executable we can run, there’s nothing to
import /
require in our scripts.
Speaking of scripts, if you don’t already have a TypeScript project to work with, you can just grab this script to mess around with:
class Reptile { private reptiles: Array<string> = [ 'Alligator', 'Crocodile', 'Chameleon', 'Komodo Dragon', 'Iguana', 'Salamander', 'Snake', 'Lizard', 'Python', 'Tortoise', 'Turtle', ]; shuffle(): void { for (let i = this.reptiles.length - 1; i > 0; i--) { let j: number = Math.floor(Math.random() * (i + 1)); let temp: string = this.reptiles[i]; this.reptiles[i] = this.reptiles[j]; this.reptiles[j] = temp; } } random(count: number = 1, allowDupes?: boolean): Array<string> { let selected: Array<string> = []; if (!allowDupes && count > this.reptiles.length) { throw new Error(`Can't ensure no dupes for that count`); } for (let i: number = 0; i < count; i++) { if (allowDupes) { // Dupes are cool, so let's just pull random reptiles selected.push(this.reptiles[ Math.floor(Math.random() * this.reptiles.length) ]); } else { // Dupes are no go, shuffle the array and grab a few this.shuffle(); selected = this.reptiles.slice(0, count); } } return selected; } } const reptile = new Reptile(); console.log(`With Dupes: ${reptile.random(10, true)}`); console.log(`And Without: ${reptile.random(10)}`);
The above script simply pulls random values from an array and is overly complex just for the heck of it.
Running Scripts
Before we get to using
ts-node to work some magic, it’s good practice to know what would happen if we were to run a TypeScript script with plain ol’ Node, in case we were to ever run into it in the future.
If we were to run the aforementioned
reptile.ts script like so:
$ node reptile.ts
We would be presented with a lovely
SyntaxError: Unexpected identifier on the second line of the file, barking about the private class variable.
Cool, now that we know what not to do and what to expect when we do it, let’s try to run the script again with
ts-node:
# Via NPM $ npx ts-node reptile.ts # Via Yarn $ yarn run ts-node reptile.ts
If all went according to plan, the script not only runs, but will log out two comma separated lists of types of reptiles, one potentially having duplicates and one without.
Oh, and in case you’re wondering about the npx command, it’s a tool that now comes with npm that makes it easy to run binaries that are local to the project from the command line.
Speeding Things Up
Under the hood,
ts-node takes your script, does a bunch of semantic checking to ensure everything is on the up and up, and then transpiles your TypeScript to JavaScript.
This is the safest option but if we’re not so worried about TypeScript errors, we can pass in the
-T or
--transpileOnly flag. This flag tells
ts-node to simply transpile down to JavaScript and to not worry about any TypeScript errors.
While it’s not always adviseable to use this flag, because you lose out on what makes TypeScript pretty awesome, there are scenarios where it makes sense, like when you’re just trying to run somebody else’s script or if you’re confident that your editor + linter is catching everything and you’re being mindful enough to listen.
TypeScript REPL
Another added bonus to
ts-node is being able to spin up a TypeScript REPL (read-eval-print loop) similar to running
node without any options.
This TypeScript REPL allows you to write TypeScript right on the command-line and is super handy for testing something out on a whim.
Similar to how you get to the Node.js REPL, all you need to do is run
ts-node without any arguments:
# Via npm $ npx ts-node # Via Yarn $ yarn run ts-node
And now you can enjoy all of the strictness that TypeScript has to offer, right in your favorite terminal! | https://www.digitalocean.com/community/tutorials/typescript-running-typescript-ts-node | CC-MAIN-2020-40 | refinedweb | 783 | 70.13 |
distinct o.instancename --ISNULL(o.instancename, '') as instancename
,q.value as Orcale_Patch from oraclehosts h
left join oracleinstances o on h.hostname = o.hostname
left join puppetdb_certname_facts q on q.certname = o.hostname AND q.fact = 'oracle_patch'
--and h.hostname not in (select p.hostname from oracleProductionHosts p)
group by instancename, q.value
order by instancename
Instancename oraclepatch
MEPUAT mepuat|(JAN2017)
Output-2.txt
Looks like we now have a table like structure for oracle_patch
in which case, we now need to check where [value] like instance+'%'
we still need to strip out some characters, something like :
Open in new windowDo we need to strip out left and right brackets so "(JAN2018)" becomes "JAN2018"
Broken down into practical pointers and step-by-step instructions, the IT Service Excellence Tool Kit delivers expert advice for technology solution providers. Get your free copy now.
There was a below function last time you created.
USE [DBInfor]
GO
/****** Object: UserDefinedFunction [dbo].[udfGetPatchDetails]
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create function [dbo].[udfGetPatchDetails]
returns varchar(100)
as
begin
declare @ins nvarchar(100) = rtrim(ltrim(@instance)) COLLATE SQL_Latin1_General_CP850_B
declare @val nvarchar(1000) = rtrim(ltrim(@value)) COLLATE SQL_Latin1_General_CP850_B
return (case when charindex(@ins,@val) > 0
then replace(replace(ltrim(
substring(@val
,charindex('|',@val,charin
,charindex(char(10),@val + char(10),charindex(@ins,@v
,'(',''),')','') -- replace brackets
else '' end) -- as Orcale_PSU
end
GO
select fact, value, stuff([value],1,charindex(
from puppetdb_certname_facts where fact='oracle_patch' and [value] like instance+'%'
Msg 156, Level 15, State 1, Line 3
Incorrect syntax near the keyword 'as'.
puppetdb_certname_facts where fact='oracle_patch' correct ?
Open in new window
In original context
Open in new window
Frist query output is below:--
fact value Orcale_PSU
oracle_patch mepuat|(JAN2017) JAN2017
Here is the output attached but did not get any output.
Output-2.txt
we can try : and q.value like rtrim(o.instancename)+'%'
Might need to do a hex dump so I can check the content of [value] and instancename, ot we have collation mismatch again...
I have ran this following query and the output looks like better but some are not
select distinct o.instancename --ISNULL(o.instancename, '') as instancename
,isnull(replace(replace(st
left join oracleinstances o on h.hostname = o.hostname
left join puppetdb_certname_facts q on q.certname = o.hostname AND q.fact = 'oracle_patch' and [value] like '%|%'
--and h.hostname not in (select p.hostname from oracleProductionHosts p)
group by instancename, q.value
order by instancename
Thanks
Zahid
Output-2.txt
So, how is this different from last time ? What has changed with the list in [value] ?
I had thought that the "team" finally got its act together and had unpacked that column, but not if we look at :
AGLDEV No Patch dw1tst|No Patch agldev|No Patch soadev|No Patch oatdev|No Patch vertextest|No Patch dw1dev|No Patch agltst|No Patch
Doing a hex dump, it seems that the very first character of [value] is char(09) - the tab character so searching for anything other than %instance% wont give you much at all. And not all rows have a '|'
So, let's go back a step and just do:
Open in new windowI am pretty sure we will be back to using a modified function, but lets get the above output first
Function would be something like :
Open in new windowand to use
Open in new window
Thanks
Zahid
Output--2-.txt
Only the | sign is coming first.
GISTEST |JAN2017
GRCPROD |JAN2018 ccgprod|JAN2018
I am validating data. I will get back to you soon.
Thank You
Output-1.txt
Open in new windowStill investigating the inclusion of ccgprod - doesnt make sense. analysing the hex dump
Experts Exchange Solution brought to you by
Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial
Open in new window
I guess ccgprod has something wrong format in the flat file source.
Also i see there is a new value added DEMUPG JAN2018 vcpdev|No which has the same problem. Please see the attached file.
Just wanted to make sure, after that i will close this.
Thanks
Zahid
Output-1.txt
It would have to be something like a non-space character after the (jan2018) and before the next instancename.
Doing a hex dump on the output examples does have a space. Hence, it works for me. I really wish it didnt so I could track it down - could be as easy as another replace().
Maybe a raw dump from your side as a ZIP would help ?
In the flat file source there are some confidential data. I wish i could share with you. But i spoke with Linux team, they told me about they will fix in their puppet db so that it can dump in the windows file system. I have SSIS job that basically process all these data and import into my SQL Database. This is for reporting purposes. Anyway i appreciate your help. It worked for me.
Thanks
Zahid
Would love to see what is stopping it - ah well. | https://www.experts-exchange.com/questions/29091464/T-SQL-query.html | CC-MAIN-2018-30 | refinedweb | 850 | 66.33 |
Cutom dialogs
Hello,
I have a question about using dialogs created in designer.
For example:
class Keyboard: public QDialog, public BlurEffect
{
QPushButton b1;
QPushButton b2;
.
.
Ui::Keyboard *ui;
}
#include "Keyboard.h"
class Dialog1: public QDialog, public BlurEffect
{
Keyboard *key;
QPushButton b1;
other components
}
key is created by new operator;
Is it correct?
Best regards
@wojj In this example you have Keyboard as a pointer, so you would have to use new to allocate it.
So somewhere in your constructor add:
key = new Keyboard(this);
And you should be good to go.
- jsulm Moderators
@wojj If you only want to show the dialog as a modal dialog from, lets say a slot, then you can do it like this (without declaring the Keyboard *key pointer in the class):
void Dialog1::on_button_pressed() { Keyboard keyboard; keyboard.exec(); }
Thank you for answer.
I initialize key as:
key = new Keyboard;
key->setParent(this);
When I try to use
key = new Keyboard(this);
keyboard doesn't show after
dialog1->show();
To show keyboard I use a function
void showKey()
{
key->show();
key->activateWindow();
key->raise();
}
I do not understand difference between
key = new Keyboard(this);
and
key = new Keyboard;
key->setParent(this);
I think the result should be the same.
Best regards
@wojj said in Cutom dialogs:
class Keyboard: public QDialog, public BlurEffect
{
QPushButton b1;
QPushButton b2;
.
.
Ui::Keyboard *ui;
}
It would be however the code above isn't code that would work with that... Here is what it should look like:
class Keybord : public QDialog, public BlurEffect { Q_OBJECT public: Keyboard(QWidget *parent = nullptr) : QWidget(parent); private: QPushButton b1; QPushButton b2; Ui::Keyboard *ui; };
Then passing the
thispointer to the constructor should work with
keyboard->show()later on.
Hello,
Can't make it work with this pointer in constructor.
That's my keyboard.h
class Keyboard : public QDialog, public BlurEffect
{
Q_OBJECT
public:
explicit Keyboard(QWidget *parent = 0);
and keyboard.cpp
Keyboard::Keyboard(QWidget *parent) :
QDialog(parent), BlurEffect(),
ui(new Ui::Keyboard)
{
ui->setupUi(this);
The only way it work as I expect is to call empty constructor and then set parent
key = new Keyboard;
key->setParent(this);
After that I can receive signals from keyboard and it's shown on the screen.
Is anything incorrect in my code or is another way to create custom widgets I should use?
Best regards
No that code is fine. Let me see the code where you instantiate the Keyboard object. That is where the
thisproblem would show up.
This is the part of code:
Dialog1::Dialog1(QWidget *parent) : QDialog(parent) { this->setWindowFlags(Qt::FramelessWindowHint); this->resize(800, 480); this->setStyleSheet(QString::fromUtf8("background-color: rgb(73, 76,69);")); labelList.clear(); buttonList.clear(); key = new Keyboard(this); //key = new Keyboard; //key->setParent(this); key->move(274, 5); connect(key, SIGNAL(key_pressed(int)), this, SLOT(key_pressed(int))); . . . }
and Dialog1
pDialog = new Dialog1();
Dialog1 is a pointer in class that inherits QObject and contains gui pointer (pDialog), pointer to logic part for this gui and sends signal to higher level class. Generaly my application is built in that way. So I do not have one dialog where I change the widgets, I have many dialogs and I show and hide them if necessary.
Best Regards
@ambershark edit: added coded tags for easier reading.
Ok this is really weird..
So what if you check the value of
key->parent()right after you do a
key = new Keyboard(this);and see if it matches the
thispointer.
Then do the alternate method using
setParent(this)and test
parent()again for it's value.
There should not be any difference between setting the value for the parent in the constructor versus setting it with setParent() and if you are seeing differences that is weird.
Also what happens if you add a
key->show()right after
key->move(...). Move should show it but maybe that is something that might help.
The other part that concerns me is when you say you can't even receive signals. That shouldn't be affected by parenting at all. Setting a parent is optional. So it definitely wouldn't affect signal handling. That tells me something is very wrong in your app. None of the code you have posted shows any glaring issue though.
The function key->parent() returns value as expected, in both cases the same value this. When I call key->show() after key->move(...) the kayboard shows up but it shouldn't because all pDialog is hide and another dialog is processed (without keyboard). So the key object behaves like not part of pDialog.
Everythig works ok when I use Keyboard without this pointer in contructor and then call setParent. So i think I will just accept this fact and use my custom dialogs in that way. Thank you for help ambershark.
I didn't have problems with signals. I wrote: I can receive... (sorry bad English).
Best regards
@wojj As long as
setParentdoes what you need then go with that. It is not working properly though. That usually means there's either a bug in Qt or more than likely a bug in your code somewhere.
You're welcome for the help, and your english was just fine. :)
Thank you for help.
I will try to briefly explain my problem.
My application seems to work correctly (I can switch between dialogs, communication ports works ok), but when I leave it working, it receives a segmentation fault after few hours. In the simplest case it sends and receives about 10 bytes on serial port every 300ms, and refreshes QLabel every 1 second with system time.
Serial port I use is in main thread (it was in separate but the result was the same). I have some move serial's in separate threads but in the simplest case there are no communication.
The application works on embedded linux system. Usually after SIGSEGV (or sometime SIGILL), the gdb backtrace doesn't help because it ends up with some disassembled code.
All I can see are some functions that I do not use in my code ie.
QPainter::drawImage(const QRectF &, const QImage &, const QRectF &, Qt::ImageConversionFlags
or
QTextEngine::shapeLine(QScriptLine const&)) #1 0xb6982534 in ?? () from /lib/libQt5Gui.so.5 Backtrace stopped: previous frame identical to this frame (corrupt stack?)
On qDebug() logs
"void LtimerTimeout()"is the last, so I think it stopped in
timeLabel->setText(temp);
void LoginDialog::timerTimeoutSlot() { qDebug() << "void LtimerTimeout()"; if(!refreshTime) return; QString temp = QTime::currentTime().toString("hh:mm:ss"); if(temp.size() != 8) return; for(int i = 0; i < temp.size(); i++) { if(i%2 != 0) temp.insert(i, " "); } timeLabel->setText(temp); }
I asked about custom widgets because I suspected that maybe they are not initialized properly and may lead to segmentation fault (going to uninitialized memory). Do you have any suggestions how to solve this problem?
Best regards
@wojj This is a lot to try to debug without actually having code/debugger..
From what I can see though here's a few ideas:
It's possible you have a corrupt stack like the debugger suggests. The usual culprit there is stack overflows. I don't see any stack memory being used in the code you shared, so there may be another reason for this.
You mentioned threading at one point. This type of bug screams thread synchronization issues. If you are single threaded now then it's not it, but if you still have multiple threads I would look at your sync objects and make sure they are properly protecting your memory.
Is it possible
timeLabelhas gone out of scope or is being accessed from another thread (other than the one it was created in)? That is the line probably causing the crash. A quick test could be to comment out the
timeLabel->setText(temp)line and instead just throw it in a
qDebug() << tempto test. If your crash stops you probably have a bad object there. It got cleaned up more than likely and is a dangling pointer.
Hope that guides you a bit. Good luck! :)
The thread synchronization was the first point when I tried to find a bug. I use Queued connections between threads, and when I directly read/write between threads I use QReadWriteLocker object and it seems to work correct.
timeLabelis simply the pointer to QLabel defined in
LoginDialogclass with new operator and it shouldn't go out of scope.
But I found another problem which I think can leads to segmentation fault.
I created a very simple example:
#include "dialog.h" #include "obj.h" #include <QApplication> int main(int argc, char *argv[]) { QApplication a(argc, argv); Dialog w; Obj *pObj = new Obj; w.show(); return a.exec(); }
where:
Obj is:
#include <QObject> class Obj : public QObject { Q_OBJECT public: explicit Obj(QObject *parent = nullptr); signals: public slots: };
and empty constructor
#include "obj.h" Obj::Obj(QObject *parent) : QObject(parent) { }
I put breakpoint in main function in line
Obj *pObj = new Obj;and then step into (F11), then in Obj constructor I've got gdb warning
Can't find linker symbol for virtual table
but only for crosscompiled Qt. For normal pc compilation I didn't get any warnings. So I hope the problem is in compilation options for Qt, but I will have to verify that.
Best regards
@wojj Hmm, can you check which libs of Qt you are compiling against and which ones your app is using on the target? That warning is usually caused by differences in the version of Qt libs.
You can use
ldd yourapplicatonto show the libs. If they are system level or something you didn't expect, try having it use the libs you built against. Deploy it with them and set your
LD_LIBRARY_PATHbefore launching.
Here you can see a quick demo of ldd showing that I'm using a custom Qt at /usr/local/Qt:
[shockwave] ~/tmp > ldd tmp linux-vdso.so.1 (0x00007fff1c2e0000) libQt5Widgets.so.5 => /usr/local/Qt/lib/libQt5Widgets.so.5 (0x00007f59a0d01000) libQt5Gui.so.5 => /usr/local/Qt/lib/libQt5Gui.so.5 (0x00007f59a0566000) libQt5Core.so.5 => /usr/local/Qt/lib/libQt5Core.so.5 (0x00007f599fe46000)
Beyond that all I can say is moc can do weird things, lol. It wouldn't surprise me if it's just the debugger getting confused by the moc'd code. I would investigate the real reasons before writing that off though.
Also, mixed Qt libs could indeed be causing your crashing issues that you are seeing. | https://forum.qt.io/topic/86989/cutom-dialogs | CC-MAIN-2018-39 | refinedweb | 1,731 | 65.22 |
Establishing a Connection
The objects available within our connector are accessible from the "cdata.taxjar" module. In order to use the module's objects directly, the module must first be imported as below:
import cdata.taxjar as mod
From there, the connect() method can be called from the connector object to establish a connection using an appropriate connection string, such as the below:
mod.connect("APIKey=3bb04218ef8t80efdf1739abf7257144;")
Authenticating a TaxJar AccountTo authenticate to the TaxJar API, you will need to first obtain the API Key from the TaxJar UI. Keep in mind that the API is available only for Professional and Premium TaxJar plans. If you already have a Professional or Premium plan you can find the APIKey by logging in the TaxJar UI and going to Account->TaxJar API. After obtaining the API Key you can set the APIKey connection property. That's all you need to do for a successful connection.
Extra Notes
- By default the provider will retrieve data of the last 3 months in case the entity supports date range filtering. You can use the StartDate to set the minimum creation date of the data retrieved.
- If the API Key has been created for a sandbox API account please set UseSandbox to true in order for a successful connection.
- In case you are using a sandbox API account please keep in mind that not everything will work as expected. This is also documented in the TaxJar developer docs here: Sandbox Environment and here: Unsupported endpoints
- The TaxJar API rate limiting is really generous. (10000 requests per minute for TaxJar Professional plans and 25000 per minute for the TaxJar Premium plans).
- Because of the TaxJar API limits we are restricted to make an http request for each row in order to collect as much data as we can. We suggest to increase the value of the MaxThreads connection property.
- The default value of MaxThreads has been set to 20 which means it will make at most 20 concurrent requests. To improve the performance of the provider consider increasing this value based on the machines resources. | https://cdn.cdata.com/help/JTG/py/pg_connectionpy.htm | CC-MAIN-2021-49 | refinedweb | 348 | 52.29 |
What is internationalization (i18n) and localization (l10n)? I18n is the process of making the text in your application capable of delivering in multiple languages. l10n means that your application has been coded in a way that it meets the language or cultural requirements of a particular locale such as date formats, timezones, currency, symbols, or icons.
So, why are they important? Because you want your app to be as accessible as possible so you can reach maximum users. Java apps are relatively straightforward to internationalize, thanks to built-in mechanisms. Same goes for Spring Boot — it’s there by default!
This tutorial will show you how to internationalize a simple Java app, a Spring Boot app with Thymeleaf, and a JavaScript Widget.
If you’d rather watch a video, I created a screencast of this tutorial. + 11.ea.26-open 8.0.202.j9-adpt 11.0.2-sapmchn 8.0.202.hs-adpt 11.0.2-zulu 8.0.202-zulufx * 11.0.2-open 8.0.201-oracle 11.0.2.j9-adpt > + 8.0.181-zulu 11.0.2.hs-adpt 7.0.181-zulu 11.0.2-zulufx 1.0.0-rc-12-grl + 11.0.1-open 1.0.0-rc-11-grl + 11.0.0-open 1.0.0-rc-10-grl 10.0.2-zulu 1.0.0-rc-9-grl 10.0.2-open 1.0.0-rc-8-grl 9.0.7-zulu 9.0.4-open ================================================================================ + - local version * - installed > - currently in use ================================================================================
Set up your environment to use the latest version of OpenJDK with the command below:
sdk default java 11.0.2-open
Now, you should be able to run your
Hello.java as a Java program.
$ java Hello.java Hello, World!
Look, Ma! No compiling needed!!és teniendo un gran día. ��
Spring Boot uses Spring’s
LocaleResolver and (by default) its
AcceptHeaderLocalResolverimplementation..
Add the Ability to Change Locales With a URL Parameter
This is a nice set-up,.
Hot Reloading Thymeleaf Templates and Resource Bundles in Spring Boot 2.1 set up.
Create an OIDC App on Okta
If you already have an Okta Developer account, log in to it. If you don’t, create one at developer.okta.com/signup. After you’re logged in to your Okta dashboard, complete the following steps:
- From the Applications page, choose Add Application.
- On the Create New Application page, select Web.
- Give your app a memorable name, then click Done.
Your settings should look similar to the ones below.
You can specify your issuer (found under API > Authorization Servers), client ID, and client secret in
custom-login/src/main/resources/application.yml as follows:
okta: oauth2: issuer:.
This works because Spring auto-enables
AcceptHeaderLocaleResolver.
Add i18n Messages and Sync Locales, add
language as a model attribute, and add a
Locale parameter to the
login() method. Spring MVC will resolve the
Locale automatically with
ServletRequestMethodArgumentResolver.
package com.okta.spring.example.controllers; ... import java.util.Locale; @Controller public class LoginController { ... private static final String LANGUAGE = "language"; @GetMapping(value = "/custom-login") public ModelAndView login(HttpServletRequest request, @RequestParam(name = "state", required = false) String state, Locale locale) throws MalformedURLException { ... mav.addObject(LANGUAGE, locale); return mav; } ... }
Then, modify
custom-login/src/main/resources/templates/login.html and add a
config.language setting that reads this value.
config.redirectUri = /*[[${redirectUri}]]*/ '{redirectUri}'; config.language = /*[[${language}]]*/ '{language}';
Restart everything, go to, click the login button, and it should now render in English.
Add Internationalization Bundles for Thymeleaf
To make it a bit more obvious that changing locales is working, create
messages.properties in
custom-login/src/main/resources, and specify English translations for keys.
hello=Hello welcome=Welcome home, {0}!
Create
messages_es.properties in the same directory, and provide translations.
hello=Hola welcome=¡Bienvenido a casa {0}!
Open
custom-login!
Use the User’s Locale from Okta.
Yeehaw! Feels like Friday, doesn’t it?!
i18n in JavaScript with Angular, React, and Vue).
- Angular: ngx-translate
- React: a Translate component based off react-translate
- Vue: Vue I18n
Internationalize Your Java Apps Today!
I hope you’ve enjoyed this whirlwind tour of how to internationalize and localize your Java and Spring Boot applications. If you’d like to see the completed source code, you can find it on GitHub.
Baeldung’s Guide to Internationalization in Spring Boot was a useful resource when writing this post.
We like to write about Java and Spring Boot on this here blog. Here are a few of my favorites:
- Build a Java REST API with Java EE and OIDC
- Spring Boot 2.1: Outstanding OIDC, OAuth 2.0, and Reactive API Support
- Add Social Login to Your JHipster App
- Build and Secure Microservices with Spring Boot 2.0 and OAuth 2.0
- Develop a Microservices Architecture with OAuth 2.0 and JHipster
Follow us on your favorite social network { Twitter, LinkedIn, Facebook, YouTube } to be notified when we publish awesome content in the future.
i18n in Java 11, Spring Boot, and JavaScript was originally published to the Okta developer blog on February 25, 2019. | https://laptrinhx.com/how-to-internationalize-and-localize-your-java-and-spring-boot-apps-56994962/ | CC-MAIN-2020-45 | refinedweb | 839 | 51.04 |
How to continously run a piece of code while simultaneously checking for a condition?
Hi. How can you make Sikuli check if a statement / condition is being fulfilled while simultaneously executing another piece of code?
What I'm trying to achieve:
* While there isn't an image in a region, keep running a piece of code
* If the image appears (while running the code), stop running piece of code
Lacking approach:
Settings.
someRegion = Region(x1, y1, x2, y2)
someImage = "image.png"
def danceLeft(): #Just some functions
def danceRight():
def danceAcrossTheH
someCondition = None
while someCondition == None:
danceLeft()
danceRight()
danceAcross
someCondition = someRegion.
else:
print("Stopped dancing; the image has appeared!")
The problem is obvious - the script won't check the condition until it reaches the very last line in the while loop, and while it will be checking, the next iteration of the loop will be held on pause. How can this be avoided? I want it run a while loop without any interruptions while checking for a condition meanwhile, and stop the loop while the condition (an image appearing in a region, in this case) is met.
My hobby-level programming skills have so far mostly consisted of solving everything by doing silly loops, but something tells me there must be a better way. Cheers for the wonderful piece of software that is Sikuli; it's really amazing! Cheers. :D
Question information
- Language:
- English Edit question
- Status:
- Solved
- For:
- Sikuli Edit question
- Assignee:
- No assignee Edit question
- Solved by:
- Manfred Hampl
- Solved:
- 2017-11-05
- Last query:
- 2017-11-05
- Last reply:
- 2017-11-05
Thanks for your reply and the link. I didn't know about 'break'. Certainly it will be handy in the future, as well as the rest of the FAQ. :) However, this does not entirely answer the question. The problem is that I want the functions to continue running, without having to stop and wait for the image explicitly (using 'exists'). I mean, to eliminate all such pauses. If I put multiple copies of the condition check, between the defined functions, it will have to wait before executing the next function.
while True; # starts endless loop
danceLeft()
if someRegion.
break
danceRight()
if someRegion.
break
danceAcross
if someRegion.
break
Perhaps this could work, if I make the condition check is short enough, but it doesn't feel optimal.
I was hoping there would be a way of continuously watching for an image in a region, while running / looping some functions in parallel, and stopping / breaking the loop while the condition is met.
Otherwise the condition check will have to be inserted everywhere, or atleast every now and then, in the loop, and the ride won't be smooth.
I think that http://
@Manfred
Good idea, but I guess this will make it for him even more complex.
@Martin
an image check always costs some time.
So the only chance is to minimise this "pause" in the script.
1. step: use
exists(image, 0)
which does only one search. depending on region and image size this may reduce the search time to some 10 millisecs.
2. step: make the search region as small as possible
Excellent, thanks! "It is possible to let the script wait for the completion of an observation or let the observation run in background (meaning in parallel)" sounds exactly like what I was looking for. It is difficult refer to the documentation, unless you know the jargon, but now it makes sense. Thanks alot RaiMan and Manfred. This helped alot. :)
Cheers for the additional feedback, RaiMan. I will try both approaches and see how they work out!
Thanks Manfred Hampl, that solved my question.
so if you want to leave the loop, when a condition is met, you have to use break.
If the loop does not have any condition to be check at every loop run beginning, the use while True (endless).
endless loops should have any terminating condition check in its body, so the script does not need to be killed by brute force.
#someCondition = None # not needed here
TheHallway( ) exists( someImg, 1): # check terminating condition
print( "Stopped dancing; the image has appeared!")
#while someCondition == None:
while True; # starts endless loop
danceLeft()
danceRight()
danceAcross
if someRegion.
break
# next statement after loop is exited | https://answers.launchpad.net/sikuli/+question/660366 | CC-MAIN-2018-47 | refinedweb | 710 | 72.05 |
Debugging (awesomely)
The simple act of jamming lots of print statements into your code to output values to help with debugging turns out to be tedious once you do it a billion times. One problem is in Ruby code that looks like this contrived example:
arr.sort.map{|el| "'#{el}'"}.join("\n")
If you wanted to inspect your object right in the middle, say after the sort but before the map, how could you? Like this maybe:
tmp = arr.sort p tmp tmp.map{|el| "'#{el}'"}.join("\n")
Ugly. Ruby 1.9 apparently has
Object#tap, which has been widely used in the community for a while anyways I believe but will now be a standard method. It has the simple definition:
def tap yield self self end
So you can do
arr.sort.tap{|o| p o}.map{|el| "'#{el}'"}.join("\n")
Just makes your life a bit easier. (Though tap can be used for more than printing things obviously.) Ruby's ability to mess with the innards of standard classes makes this possible. In languages which lack this power, you couldn't do this so easily.
Then you have Common Lisp, which arguably takes that kind of power to another level. So you can do something awesome like this and print intermediate results the whole way down the call chain without even having to edit your original code at all. The author mentions that it won't work with certain macros and special forms, but it's still awesome and useful even given its limitations. How could you do this in Ruby?
1 Comment
At least for python, we can rely on sys.settrace hookpoint as a way to track every step of the vm, and specifically identify when entering/leaving frames throughout the call chain.
Doesn't fully shoot through any cpython extensions (since to the vm, that's typically a single step), but for most usages, it's more then enough. | http://briancarper.net/blog/350.html | CC-MAIN-2017-26 | refinedweb | 325 | 72.56 |
55787/meaning-of-10j-in-scipy
10j in b = np.r_[3,[0]*5,-1:1:10j] means that the step value is imaginary. So here, the slicing starts from -1, upto 1 and the step size is 10j (start:stop:stepj). The value of 10j here is:
((1-(-1)) / 9)= 2 / 9= 0.2222222...
So if you print b, you will see the output to be as follows:
array([ 3. , 0. , 0. , 0. , 0. ,
0. , -1. , -0.77777778, -0.55555556, -0.33333333,
-0.11111111, 0.11111111, 0.33333333, 0.55555556, 0.77777778,
1. ])
Assumming a is a string. The Slice ...READ MORE
It's a function annotation.
In more detail, Python 2.x ...READ MORE
Polymorphism is the ability to present the ...READ MORE
The break statement is used to "break" ...READ MORE
There are several options. Here is a ...READ MORE
To count the number of appearances:
from collections ...READ MORE
Assuming that your file unique.txt just contains ...READ MORE
You can easily find polynomials of any ...READ MORE
Yes, you can do it as follows:
import ...READ MORE
Usually all I/Os are buffered, meaning they ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/55787/meaning-of-10j-in-scipy | CC-MAIN-2020-40 | refinedweb | 199 | 87.72 |
This section lists the known bugs and issues with SGD version 4.62.
Problem: Issues with seamless windows might be encountered when the user restarts a Windows application after closing it down. The problem is seen when the application is hosted on a Windows Server 2008 R2 server.
Cause: A known problem with some versions of the SGD Enhancement Module.
Solution: Ensure that the version of the SGD Enhancement Module running on the Windows application server is the same as the SGD server version.
Problem: On Solaris 10 OS x86 platforms, enabling Integrated mode when you are logged in as the root user does not add applications to the Solaris 10 Launch menu. You might also see the following warning:
gnome-vfs-modules-WARNING **: Error writing vfolder configuration file "//.gnome2/vfolders/applications.vfolder-info": File not found.
Cause: A known issue with the Gnome Virtual File System (VFS).
Solution: No solution is currently available.
Problem: Using Internet Explorer 7 on Microsoft Windows Vista platforms, the SGD Client cannot be downloaded and installed automatically. The SGD Client can be installed manually and can be installed automatically using another browser, such as Firefox.
Cause: Internet Explorer has a Protected Mode that prevents the SGD Client from downloading and installing automatically.
Solution: Add the SGD server to the list of Trusted Sites in Internet Explorer's Security Settings.
Problem: If Java technology is enabled in your browser settings, but a Java Plugin tool is not installed on the client device, the SGD webtop does not display. The login process halts at the splash screen.
Cause: SGD uses the browser settings to determine whether to use Java technology.
Solution: Install the Java Plugin tool and create a symbolic link from the browser plug-ins directory to the location of the Java™ Virtual Machine ( JVM™ ) software. Refer to your browser documentation for more information.
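On a Linux client the link is a single ln -s from the browser's plug-ins directory to the JRE's plugin library. The sketch below uses a scratch tree and placeholder names, since the real paths vary by browser and JRE version (both paths are assumptions, not the documented locations):

```shell
# Sketch only: a scratch tree stands in for the real locations, which vary
# by browser and JRE (e.g. /usr/lib/mozilla/plugins and the JRE's libnpjp2.so).
BASE=$(mktemp -d)
PLUGIN_DIR="$BASE/mozilla/plugins"     # placeholder for the browser plug-ins dir
JVM_LIB="$BASE/jre/lib/libnpjp2.so"    # placeholder for the JVM plugin library
mkdir -p "$PLUGIN_DIR" "${JVM_LIB%/*}"
touch "$JVM_LIB"

# The actual fix is this one line: link the plugin into the browser's directory.
ln -s "$JVM_LIB" "$PLUGIN_DIR/libnpjp2.so"
ls -l "$PLUGIN_DIR/libnpjp2.so"
```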
Problem: When using a Canadian French (legacy) keyboard layout with Windows applications, some French characters are printed incorrectly.
Cause: A known issue with Canadian French (legacy) keyboard layouts.
Solution: No known solution. A compatible keymap file is not supplied with SGD at present.
Problem: On Solaris 10 OS, font errors are reported and there are display problems when starting the VirtualBox software from a Java Desktop System desktop session that is displayed using MyDesktop. The problem is seen when using Xsession.jds as the Application Command for the MyDesktop application object.
Cause: Unavailable fonts on the SGD X server.
Solution: When starting the VirtualBox software from the Java Desktop System desktop session, use the -fn option to specify valid fonts. Alternatively, install the missing fonts on the SGD server. See the Oracle Secure Global Desktop 4.6 Administration Guide for more details about using fonts with SGD.
Problem: On Microsoft Windows client devices with Japanese locales, Kana mode is not available for Solaris OS applications.
Cause: On Microsoft Windows client devices, the SGD Client uses ASCII for Kana mode. Solaris OS applications use Unicode for Kana mode.
Solution: On the Microsoft Windows client device, add a new system variable TARANTELLA_KEYBOARD_KANA_SOLARIS. Set the value of this system variable to 1.
Problem: When using LDAP to authenticate users, Windows applications can fail to start if the distinguished name (DN) of the user contains more than one single straight quotation mark (').
Cause: A known issue.
Solution: The workaround is to edit the wcpwts.exp login script. This script is in the /opt/tarantella/var/serverresources/expect directory on the SGD server.
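For context, the line being changed shell-escapes single quotes in the user's DN; without -all, only the first quote is escaped, so a DN containing two or more quotes produces a malformed command. A shell sketch of the intended escaping, using a hypothetical DN:

```shell
# Each ' in the DN must become '"'"' so the value survives single-quoted
# shell interpolation. The sample DN below is hypothetical.
dn="cn=O'Brien O'Neill,ou=People,dc=example,dc=com"

# Equivalent of the corrected Tcl line (regsub -all): escape EVERY quote.
escaped=$(printf '%s' "$dn" | sed "s/'/'\\\"'\\\"'/g")
printf '%s\n' "$escaped"

# Round trip: interpolating the escaped value single-quoted yields the DN back.
eval "roundtrip='$escaped'"
printf '%s\n' "$roundtrip"
```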
Locate the following entry in the wcpwts.exp script:
regsub {'} $value {'"'"'} value
Edit the entry to read as follows:
regsub -all {'} $value {'"'"'} value

Problem: The load-balancing JavaServer Page (JSP) used by SGD for load balancing of user sessions does not work. A Java warning message might be shown.
Cause: To use the load-balancing JSP, Java technology must be enabled on the client device.
Solution: Do one of the following:
Enable Java technology in the browser on the client device.
Use the SGD Gateway to load balance user sessions. This is the preferred solution, as the load-balancing JSP might not be available in future releases. See the Oracle Secure Global Desktop 4.6 Gateway Administration Guide for details of how to install and configure the SGD Gateway.

Problem: On Solaris 10 OS Trusted Extensions platforms, startup times for Windows applications and X applications might be longer than expected.
Cause: By default, the X Protocol Engine attempts to connect to X display port 10. This port is unavailable when using Solaris 10 OS Trusted Extensions.

Problem: On Ubuntu 10.04 Linux platforms, error messages about ThreadLocal memory leaks are written to the Tomcat JSP container log file at /opt/tarantella/webserver/tomcat/tomcat-version/logs/catalina.out. Operation of SGD is not affected.
Cause: A known issue with the memory leak detection feature of Tomcat.
Solution: No known solution. The issue will be fixed in future releases of Tomcat.

Problem: Windows applications fail to start because of missing registry keys on the client device. The problem is seen on Microsoft Windows Vista and Microsoft Windows 7 platforms.

Solution: Recreate the missing keys by starting the Remote Desktop Connection with administrator privileges. See Microsoft Knowledge Base article 187614 for more details.
Problem: After 90 days, users cannot connect to SGD using a version 4.5 Gateway. After upgrading a Gateway to version 4.6, users cannot connect to SGD.
Cause: Version 4.5 of the SGD Gateway uses self-signed certificates that are valid for only 90 days. This affects the default self-signed SSL certificate used for client connections to the Gateway, as well as the Gateway certificate and the certificate used for the Reflection service.
After upgrading a Gateway to version 4.6, users cannot connect to SGD because the Gateway self-signed certificates have been replaced.
Solution: If you are using a version 4.5 Gateway, upgrade to version 4.6.
If you have upgraded a Gateway to version 4.6, you need to perform the standard configuration steps for authorizing a Gateway to SGD, as described in “How to Install SGD Gateway Certificates on the SGD Array” on page 16 of the Oracle Secure Global Desktop 4.6 Gateway Administration Guide.
In version 4.6, the Gateway certificate and the certificate for the Reflection service are valid for 3600 days. The default self-signed SSL certificate used for client connections to the Gateway is valid for 365 days. If you have installed your own SSL certificate for client SSL connections, this certificate is preserved when you upgrade.
Problem: Portable Document Format (PDF) printing might not work on Solaris 10 10/09 platforms. The PDF file displays PostScript™ error messages.
Cause: A known issue with some versions of Ghostscript. SGD uses Ghostscript to convert print jobs into PDF files.
Solution: Install the latest version of Ghostscript on the SGD server. Ensure that the symbolic link /opt/tarantella/var/info/gsbindir points to the directory where the new Ghostscript binaries are installed. This fix has been verified using version 8.71 of Ghostscript.
Problem: On Ubuntu client platforms, applications displayed in kiosk mode are obscured by the Ubuntu desktop toolbars. The issue is seen when the Compiz window manager is used and visual effects are enabled for the Ubuntu desktop.
Cause: The Compiz window manager does not provide legacy full screen support by default.
Solution: Do either of the following:
Turn off visual effects for the Ubuntu desktop.
Install the Compiz Config Settings Manager and enable the Legacy Fullscreen Support option in the Workarounds plugin.
Changes made only take effect for new application sessions.
Problem: Localized HTML documentation is not available. English documentation is displayed instead.
Cause: A known issue.
Solution: PDF versions of the localized documentation are available from the SGD web server Welcome page.
Problem: LDAP login filters are not preserved when you upgrade to version 4.6 of SGD.
Cause: Because of LDAP enhancements introduced in SGD 4.6, any customizations you have made to the LDAP login filters are not preserved on upgrade. See Section 1.1.3, “Active Directory and LDAP Enhancements” for more details of the enhancements.
Solution: Reconfigure your LDAP login filters after upgrading. See the “Filtering LDAP or Active Directory Logins” section in Chapter 2 of the Oracle Secure Global Desktop 4.6 Administration Guide for details of how to configure LDAP login filters.
Problem: When installing the SGD Enhancement Module on 64-bit SUSE Linux platforms, installation of the UNIX audio module fails. The issue is seen when installing on SUSE Linux Enterprise Server 11.
Cause: A known issue on 64-bit SUSE Linux platforms.
Solution: The workaround is to edit the following files in the /opt/tta_tem/audio/src/sgdadem directory:
In the Makefile file, change all instances of CFLAGS to EXTRA_CFLAGS.
In the sgdadem.h file, replace the following line:
#include <linux/ioctl32.h>
Add the following lines:
#include <linux/version.h>
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,22)
#include <linux/ioctl32.h>
#endif
After making the changes to the
sgdadem.h
file, run the following commands to install and start the audio
module.
# cd /opt/tta_tem/audio/src/sgdadem
# make
# make install
# /opt/tta_tem/bin/tem startaudio
Problem: Using automatic configuration to reconfigure secure connections fails on an SGD server that has been upgraded to version 4.6. The issue is seen on upgraded servers that have previously been configured for secure connections automatically, using the tarantella config enable command.
Errors are reported when you use the tarantella security disable command to restore original security settings.
Cause: A known issue when using tarantella security disable on an upgraded server.
Solution: Run tarantella security disable on the server before you upgrade. Secure connections can then be configured automatically on the upgraded server, by running tarantella security enable.
Problem: LDAP searches into parent organizational units (OUs) in Active Directory do not return any results. The issue is seen in the Administration Console when assigning applications to LDAP users using Directory Services Integration (DSI). LDAP searches into child OUs are unaffected.
Cause: A known issue with the LDAP search filter generated by the Administration Console.
Solution: The workaround is to modify the LDAP search filter.
In the Administration Console, go to the Assigned User Profiles tab for the application object.
In the Advanced Search section, append an
(objectclass=*) entry to the LDAP search
filter. For example:
ldap:///OU=Users,OU=Marketing,DC=example,DC=com,DC=uk??sub?(objectclass=*)
Problem: Cached passwords for some LDAP users may no longer work following an upgrade from version 4.50.
Cause: A known issue. The naming format for storing LDAP password cache entries has changed since SGD 4.50.
Solution: Contact Oracle
Support or see
for details of how to migrate password cache entries.
Problem: Users are unable to start applications, or to access the Administration Console. The issue is seen when the SGD Gateway is configured to use unencrypted HTTP connections between the Gateway and the SGD servers in the array.
Cause: A known issue when connections between the Gateway and the SGD servers in the array are not secure. By default, these connections are secure.
Solution: The workaround is to edit the Apache reverse proxy configuration file at /opt/SUNWsgdg/httpd/apache-version/conf/extra/gateway/httpd-gateway.conf.
Comment out the following entry:
ProxyPassReverse / https://gateway.example.com:443/
Add the following entries:
ProxyPassReverse / http://gateway.example.com/
ProxyPassReverse / http://gateway.example.com:80/
where gateway.example.com is the name of the SGD Gateway.
Problem: The Java Plugin tool is installed on the client device and Java technology is enabled in your browser settings, but SGD reports that Java is not enabled or installed for the browser. The issue is seen when logging in to SGD using Internet Explorer 9 on Windows client platforms.
Cause: A known issue when using this version of Internet Explorer.
Solution: Use one of the following workarounds.
Before logging in to SGD, enable compatibility view for Internet Explorer. See Microsoft Knowledge Base article 956197 for details of how to do this.
When the Java detection error message is displayed, click the Back button on the browser. To use this workaround, the SGD Client icon must be present in the task bar and should indicate that a connection has been established.
Problem: Active Directory authentication fails for user names that contain accented characters, such as the German umlaut character (ü).
Problem: Secure connections to the Gateway using SSL do not always use high grade ciphers.
Cause: By default, the Gateway supports a wide range of cipher suites, including some low and medium grade ciphers.
See Section 2.3.4, “Supported Cipher Suites for SSL Connections” for a list of supported cipher suites for SSL connections.
Solution: Configure the Gateway to use a specific set of ciphers, as follows:
Stop the Gateway.
# /opt/SUNWsgdg/bin/gateway stop
In the /opt/SUNWsgdg/etc directory, create a file called ciphersuites.xml that contains a list of the required ciphers. For example:
<ciphersuites>
  <cipher>SSL_RSA_WITH_RC4_128_MD5</cipher>
  <cipher>SSL_RSA_WITH_RC4_128_SHA</cipher>
  <cipher>TLS_RSA_WITH_AES_128_CBC_SHA</cipher>
  <cipher>TLS_RSA_WITH_AES_256_CBC_SHA</cipher>
  <cipher>TLS_DHE_RSA_WITH_AES_128_CBC_SHA</cipher>
  <cipher>TLS_DHE_RSA_WITH_AES_256_CBC_SHA</cipher>
  <cipher>TLS_DHE_DSS_WITH_AES_128_CBC_SHA</cipher>
  <cipher>TLS_DHE_DSS_WITH_AES_256_CBC_SHA</cipher>
  <cipher>SSL_RSA_WITH_3DES_EDE_CBC_SHA</cipher>
  <cipher>SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA</cipher>
  <cipher>SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA</cipher>
</ciphersuites>
Add the following entries to the
/opt/SUNWsgdg/etc/gateway.xml file, so
that it includes
ciphersuites.xml.
<service id="sgd-ssl-service" class="SSL">
...
<keystore file="/opt/SUNWsgdg/proxy/etc/keystore.client" password="/opt/SUNWsgdg/etc/password"/>
<xi:include
</service>
...
<service id="http-ssl-service" class="SSL">
...
<keystore file="/opt/SUNWsgdg/proxy/etc/keystore.client" password="/opt/SUNWsgdg/etc/password"/>
<xi:include
</service>
Restart the Gateway.
# /opt/SUNWsgdg/bin/gateway start
Problem: Users with Sun Type 7 Japanese keyboards cannot input characters correctly using SGD.
Cause: Missing Solaris OS keytable on the client device.
Solution: Install the appropriate patch to install the keytable on the client device.
Problem: When using the SGD Client in Integrated mode on Microsoft Windows client devices, users might notice that the Start menu entries are not sorted alphabetically.
Cause: This is caused by a Windows feature that adds new items to the end of a menu, rather than preserving the alphabetical sorting.
Solution: See Microsoft Knowledge Base article 177482 for details.
Problem: For Microsoft Windows Server 2003 applications, the display color depth on the client device is limited to 8-bit for large screen resolutions. The issue is seen when screen resolutions are higher than 1600 x 1200 pixels.
Cause: A known issue with Windows Server 2003 terminal services sessions.
Solution: See Microsoft Hotfix 942610 for details of how to increase the color depth to 16-bit. | http://docs.oracle.com/cd/E19351-01/E23646/html/known-bugs-issues.html | CC-MAIN-2015-14 | refinedweb | 2,392 | 50.12 |
This is a presentation I gave at PyCon 2016. You can watch the video on YouTube and view the slides served from the repo on GitHub.
A friend of mine was asked what a closure was at a programming interview a few years ago. Despite being a competent Python and JavaScript programmer who took advantage of closures in code he wrote, he froze up at the question. It’d be nice to have something to say in response to this question, if not a solid definition.
Programmers more familiar with other languages have also asked me, “Tom, you know Python; does Python even have closures?” and “I heard Python has weak support for closures.” Once we’ve reached closure on this topic, I hope you’ll be able to respond productively and engage with questions and misunderstandings about Python scope others might have.
To find our closure we’ll start with the importance of environment to our functions and compare lexical and dynamic scope. Then we’ll follow the evolution of variable scoping in the Python language over the last 25 years. We’ll conclude that Python certainly supports closures, but that which Python functions count as closures, and since which version of Python they have existed, depends on the definition of closure used.
Consider two functions for formatting strings: one for bolding text in
HTML, the other for bolding text in the terminal. We’ll import these functions
from their respective modules using the
from ... import ... as syntax
because they both have the same name.
>>> from htmlformat import bold as htmlbold
>>> from terminalformat import bold as termbold
The source code for these two functions can be viewed with the builtin inspect module.
>>> import inspect
>>> print(inspect.getsource(htmlbold))
def bold(text):
    return '{}{}{}'.format(BOLDBEFORE, text, BOLDAFTER)

>>> print(inspect.getsource(termbold))
def bold(text):
    return '{}{}{}'.format(BOLDBEFORE, text, BOLDAFTER)
Although these functions appear identical, they have different behavior:
>>> htmlbold('eggplant')
'<b>eggplant</b>'
>>> termbold('eggplant')
'\x1b[1meggplant\x1b[0m'
How is this possible; what differs between these two functions?
Here’s another, similar question: We saw before that the
bold
function uses the variable
BOLDBEFORE. Because it is neither a
parameter to the function nor a local variable, we call it a “free
variable”.
If we call that function
after setting a local variable with the same name, will that
change its behavior? Will
>>> from htmlformat import bold as htmlbold
>>> def signbold(phrase):
...     BOLDBEFORE = '(in Sharpie) '
...     return htmlbold(phrase)
...
>>> signbold('eggplant')
output '<b>eggplant</b>' or '(in Sharpie) eggplant</b>'?
The question amounts to whether the Python language uses “open free variables” whose values are determined by looking up the call stack (dynamic scope), or closed free variables that use the value in the environment in which the function was defined (lexical scope).
In a 1970 paper describing implementations of these two approaches, Joel Moses points out that although it might be easier to implement a language with the first behavior, programmers are usually interested in the second. They want their functions to use the variables they created for use with that function, not new variables at their functions’ call sites. The answer is that Python ignores this new variable and bold tags again surround the word eggplant.
What about changing the global variable?
>>> from htmlformat import bold as htmlbold
>>> BOLDBEFORE = '(in Sharpie) '
>>> htmlbold('eggplant')
'<b>eggplant</b>'
More eggplant sandwiched in bold tags! The global variables in another module are not affected by changes to global variables in this one.
Now let’s finally take a look at those bold functions.
htmlformat.py:

BOLDBEFORE = '<b>'
BOLDAFTER = '</b>'

def bold(text):
    return '{}{}{}'.format(
        BOLDBEFORE,
        text,
        BOLDAFTER)

terminalformat.py:

BOLDBEFORE = '\x1b[1m'
BOLDAFTER = '\x1b[0m'

def bold(text):
    return '{}{}{}'.format(
        BOLDBEFORE,
        text,
        BOLDAFTER)
Each formatting module has its own global variables. Indeed, “global” variables are terribly named because there aren’t global to your whole Python program. Since we’re stuck with that name, perhaps we should imagine each module as its own planet.
When functions are imported from another module, they emerge as emissaries from their planets with live links back to their home worlds they use to look up variables.
Function objects in Python hold not only a reference to the name of their home module (the .__module__ attribute) but also a reference to the very namespace of that module, which contains bindings from global variable names to values (__globals__).
In the paper mentioned earlier, Joel Moses described an implementation of this type of behavior: for a function to behave this way, it needs both code to execute and the environment which closes the variables use in that function. He called this combination of code and environment a closure. So Python functions are already sounding a lot like closures!
Because this is a live link, any updates to the bindings that occur in the home module after the function is defined are still available to the function. We can even change global bindings directly by rebinding attributes of the data structure that represents this environment: the imported module object.
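To make this concrete, here is a self-contained sketch. It builds a stand-in module in memory rather than importing one from disk, so the module name and contents are assumptions for illustration. It shows that a function's __globals__ is the module namespace itself, and that rebinding a module attribute changes the function's later behavior:

```python
import types

# Hypothetical stand-in for the htmlformat module, built in memory so
# the example is self-contained.
source = """
BOLDBEFORE = '<b>'
BOLDAFTER = '</b>'

def bold(text):
    return '{}{}{}'.format(BOLDBEFORE, text, BOLDAFTER)
"""
htmlformat = types.ModuleType('htmlformat')
exec(source, htmlformat.__dict__)
bold = htmlformat.bold

print(bold('eggplant'))              # <b>eggplant</b>

# The function's __globals__ IS the module namespace: the same dict object.
assert bold.__globals__ is htmlformat.__dict__

# Because free variables are looked up at call time through this live link,
# rebinding a module attribute changes the function's behavior afterwards.
htmlformat.BOLDBEFORE = '<strong>'
print(bold('eggplant'))              # <strong>eggplant</b>
```

The same effect occurs with a real module: import it, then assign to one of its attributes, and functions defined in it see the new value on their next call.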
The distinction between function definition time and function execution time becomes important with this “live link” behavior. It turns out that Python analyzes function source code, even compiles it, when a function is defined. During this process it determines the scope of each variable. This determines the process that will be used to find the value of each variable, but does not actually look up this value yet.
A Python function object is the result of this process. Each of its
attributes stores a different piece of computer-readable information about
about the function. Most of this information is in the code object stored
by the
__code__ attribute.
Since Python has been available on the internet, there have been at least two
types of variables in functions: local variables and global variables.
Local variables (including function parameters) appear in
.__code__.co_varnames
and global variables and a few other things make up
.__code__.co_names.
Identifying the scope of a variable is a task Python programmers do
frequently as they read code, so you may already have an intuition for
the rules. Let’s try at a few examples to understand the rule.
>>> def movie_titleize(phrase):
...     capitalized = phrase.title()
...     return capitalized + ": The Untold Story"
In this function for building great movie titles, are
phrase and
capitalized local or global variables?
They are both local. One is a parameter to the function, the other is assigned to on the first line. This type of function, sometimes called a “pure” function, doesn’t need its link to its home module for looking up variables. Without this associated environment, the function would not be a closure, and here we find out first fork in the meaning of the word. Is a function a closure if it has this link to its defining environment but that environment is never used? Some would say functions require free variables to be closures, others that the combination of code and environment is enough, so long as the Python doesn’t remove this link link to home module. For an altogether different reason, most would say that none of the functions we have seen so far are closures. Hang on for that reason in a few minutes.
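We can check that categorization directly. The following sketch inspects the function's code object: as a module-level function with no free variables, it has no closure cells at all.

```python
def movie_titleize(phrase):
    capitalized = phrase.title()
    return capitalized + ": The Untold Story"

# One parameter and one assigned-to name: both local.
assert movie_titleize.__code__.co_varnames == ('phrase', 'capitalized')
# No free variables closed over from an outer function scope...
assert movie_titleize.__code__.co_freevars == ()
# ...and, being defined at module top level, no closure cells at all.
assert movie_titleize.__closure__ is None

print(movie_titleize('attack of the eggplants'))
</```
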
>>> def catchy(phrase):
...     options = [phrase.title(), DEFAULT_TITLE]
...     options.sort(key=catchiness)
...     return options[1]
...
In this function for finding catchy phrases, are
phrase,
options,
DEFAULT_TITLE, and
catchiness local variables or global variables?
Once you decide, you can find out whether you agree with the Python
interpreter by checking those interesting attributes of the function’s
code object:
>>> catchy.__code__.co_varnames
('phrase', 'options')
>>> catchy.__code__.co_names
('title', 'DEFAULT_TITLE', 'sort', 'catchiness')
phrase and
options are local variables because the first was a parameter
and the second was assigned to.
DEFAULT_TITLE and
catchiness fit neither
of these descriptions so they are global variables.
A few extra strings are in the co_names tuple because Python uses this list for more than storing global variable names: attribute names appear there too.
If you saw this function in some source code and wanted to copy it alone to use,
you wouldn’t be able to: there’s important environment information you
would also need to include.
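A short sketch of what goes wrong if you do copy it alone (the input string here is arbitrary): the first global lookup fails with a NameError.

```python
def catchy(phrase):
    options = [phrase.title(), DEFAULT_TITLE]
    options.sort(key=catchiness)
    return options[1]

# Without the home module that defines DEFAULT_TITLE and catchiness,
# the function fails as soon as its first global lookup happens.
try:
    catchy('free variables')
except NameError as err:
    print(err)    # name 'DEFAULT_TITLE' is not defined
```
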
Your programmer intuition might disagree with Python’s categorization in this next example.
>>> HIGH_SCORE = 1000
>>> def new_high_score(score):
...     print('congrats on the high score!')
...     print('old high score:', HIGH_SCORE)
...     HIGH_SCORE = score
...
>>> new_high_score(1042)
It certainly looks like the author of the function wanted
HIGH_SCORE to
be a global variable, but Python categorizes it as a local variable
because it’s assigned to in the function. Calling the function results
in an UnboundLocalError because the variable, considered local for the
entirety of the function, doesn’t have a value assigned yet when it’s
printed as the old high score.
The programmer can express this authorial intent to Python with the
global
keyword, which changes the categorization of
HIGH_SCORE from local variable
to global.
>>> HIGH_SCORE = 1000
>>> def new_high_score(score):
...     global HIGH_SCORE
...     print('congrats on the high score!')
...     print('old high score:', HIGH_SCORE)
...     HIGH_SCORE = score
...
>>> new_high_score.__code__.co_varnames
('score',)
>>> new_high_score.__code__.co_names
('print', 'HIGH_SCORE')
>>> new_high_score(1042)
congrats on the high score!
old high score: 1000
With the global keyword we’ve now completed a description of how scope has worked in Python from its inception through to Python 2.0 in the year 2000. Python functions have always closed over their own module-level environment. But an important method of closing free variables was still not available to us, one required by most to classify a function as a closure: using outer scopes that are not the global scope.
def tallest_building():
    buildings = {'Burj Khalifa': 828,
                 'Shanghai Tower': 632,
                 'Abraj Al-Bait': 601}
    def height(name):
        return buildings[name]
    return max(buildings.keys(), key=height)
Are the variables
name and
buildings local or global variables in the
height function above?
name is certainly local as a parameter, but
buildings is neither local nor global; it comes from an outer non-global
scope. Since
buildings is not a local variable it is assumed to be global
in Python 2.0 and calling it produces “global name ‘buildings’ is
not defined” NameError. Optionally in Python 2.1, then by default in Python
2.2, variables from outer non-global scopes were added and are found at
.__code__.co_freevars:
>>> height.__code__.co_varnames ('name',) >>> height.__code__.co_names () >>> height.__code__.co_freevars ('buildings',)
Typically when people talk about closures they mean closing around these in-between outer scopes that are neither local nor global. Closing over the module-level “global” scope is considered a special case, and indeed is simpler to implement. You may already be familiar with module objects in Python: generally they’re singletons, so a given module has only one mapping of variables to values. But a function can be run many times, producing many different mappings of its local variables to values, each of which must be kept track of so long as a function that was defined in this or an enclosing scope still exists.
formatters = {}
colors = ['red', 'green', 'blue']

for color in colors:
    def in_color(s):
        return ('<span style="color:' + color + '">' + s + '</span>')
    formatters[color] = in_color

formatters['green']('hello')
The code above defines several functions for formatting text in color in html.
With which color does the green one of these functions format the text
'hello'?
Since these three functions were defined in the same environment, they share
the same mapping of variables to values. If we consider the value of the
color variable once the for loop has finished, it becomes clear that all
three functions have the same behavior: coloring strings blue. If each
function is to have a different value associated with the color variable it is
necessary to create separate scopes for these functions to be defined in:
formatters = {}
colors = ['red', 'green', 'blue']

def make_color_func(color):
    def in_color(s):
        return ('<span style="color:' + color + '">' + s + '</span>')
    return in_color

for color in colors:
    formatters[color] = make_color_func(color)

formatters['green']('hello')
Each time the
make_color_func function is called, a new local mapping
is created binding color to one of red, green or blue; a function called
in_color is defined which references the color variable in this outer scope;
and the
in_color function is returned and stuck in a dictionary.
This solution to the “late-binding” behavior of Python relies on separate
scopes being created for each function and requires that Python maintain
three sets of bindings for the
make_color_func function’s local scope.
Precisely how these bindings are maintained by Python is beyond our scope here,
but the
.__closure__ attribute on each of the three produced functions provides
some hint.
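We can peek at that hint directly. The following sketch repeats the factory-function example and inspects the closure cells: each produced function carries its own cell with a different value bound to color.

```python
colors = ['red', 'green', 'blue']

def make_color_func(color):
    def in_color(s):
        return ('<span style="color:' + color + '">' + s + '</span>')
    return in_color

formatters = {c: make_color_func(c) for c in colors}

# Each function has exactly one free variable and one closure cell,
# and each cell holds a different binding for `color`.
for c in colors:
    assert formatters[c].__code__.co_freevars == ('color',)
    cell, = formatters[c].__closure__
    assert cell.cell_contents == c

print(formatters['green']('hello'))   # <span style="color:green">hello</span>
```

A common alternative to the factory function is default-argument binding, def in_color(s, color=color), which evaluates color once at definition time instead of closing over it.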
We’ve reached the most common definition of a closure: a function with variables closed by an outer, non-global scope. However another fork in definitions occurs here: some would call our three color functions closures and but not the earlier height function because it was used in the same scope it was defined. Although CPython doesn’t implement the two any differently, you can imagine that it becomes more difficult to maintain the environment a function to evaluate its variables once the bindings it needs goes out of scope. The distinction here is that looking up the stack instead of the “closure” solution of code + environment would result in the same behavior in the first case, making whether a function was a closure or not only distinguishable in the second case.
So by Python 2.2, functions in Python are definitely closures: every function is always a closure by the most liberal definition, since they all carry environment with them, and only those which reference variables which have gone out of scope by the strictest. Nothing much changes with scope through the Python 2.x series, so we have now covered scope in Python up through 2.7, through to the year 2010.
But if rumblings of the insufficiency of Python’s closures have ever reached your ears, you may not have found your closure yet. You may have heard that Python has “weak support” for closures, or that Python has “read-only” closures, not “full” closures. This comes from an asymmetry between global variables and outer non-global variables, which I will from now on refer to as “nonlocal” variables.
>>> def get_number_guesser(answer):
...     last_guess = None
...     def guess(n):
...         if n == last_guess:
...             print('already guessed that!')
...         last_guess = n
...         return n == answer
...     return guess
...
>>> guess = get_number_guesser(12)
>>> guess(9)
Traceback (most recent call last):
  ...
UnboundLocalError: local variable 'last_guess' referenced before assignment
Like the earlier example demonstrating the usefulness of the
global keyword,
the inner
guess function above assigns to the variable
last_guess that the
programmer meant not to be local. How can Python be informed of this intent?
With the new
nonlocal keyword in Python 3.
Without
nonlocal, nonlocal variables cannot be rebound to new values.
Nonlocal mutable objects can be mutated for a similar effect, but the identity
of the object in an outer binding cannot be changed. But since Python 3,
we definitely have “full” closures now; there are no more missing details.
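A minimal sketch of what nonlocal makes possible: an inner function that rebinds a variable in its enclosing scope, with each call to the outer function creating an independent scope.

```python
def make_counter():
    count = 0
    def increment():
        nonlocal count      # rebind the enclosing binding, not a new local
        count += 1
        return count
    return increment

tick = make_counter()
print(tick(), tick(), tick())   # 1 2 3

# Each call to make_counter creates an independent enclosing scope.
other = make_counter()
assert other() == 1
assert tick() == 4
```
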
As with the global keyword the change in semantics may seem small, but its lack is met with incredulity in Python 2 by some familiar with closures in other languages. As we find our closure with what closures are and whether they exist in Python, a new question arises: how did we get on without them for so long?
We use closures all over the place in Python: inner functions
(often written with the
lambda syntax)
that reference outer scopes abound, and the use of functions
in interfaces as callbacks makes their use more likely.
Decorators always take a function as an argument and often define a new
function to replace it, which itself typically holds a reference to the old
function through a free variable from the outer function scope of the
decorator. We can inspect a function for its
.__closure__ attribute
to see if it contains free variables that are closed by outer, nonlocal
scopes for those who demand this of their closures.
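As an illustration (my own sketch, not code from the talk), here is a call-counting decorator whose wrapper closes over both the wrapped function and a counter:

```python
import functools

def count_calls(func):
    calls = 0
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal calls
        calls += 1
        return func(*args, **kwargs)
    return wrapper

@count_calls
def greet(name):
    return 'hello ' + name

assert greet('world') == 'hello world'

# The replacement function is a closure over the original function
# and the counter in the decorator's scope.
assert set(greet.__code__.co_freevars) == {'calls', 'func'}
assert greet.__closure__ is not None
```
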
Adding the nonlocal keyword took seven years, from Python 2.2 in 2001 to Python 3 in 2008. If it’s so important a change, why don’t we see a ton of code using it now?
The
global keyword may have delayed this need: module-level bindings
have been modifiable in Python for a long time. And now that nonlocal
is here, the need for compatibility with Python 2 code that many library authors
have prevents some uses. Consider this abridged excerpt from Django:
def decorating_function(user_function):
    ...
    nonlocal_root = [root]   # make updateable non-locally
    def wrapper():
        nonlocal_root[0] = oldroot[NEXT]
        ...
Since the root variable in the outer function cannot be directly changed, it is stuck in the simplest possible mutable object – a list – which is mutated to imitate rebinding. Based on the name, it’s clear both that nonlocal would be a good fit here and that the author of this code knew that when they wrote it. But compatibility with Python 2 forces the word nonlocal to be used here only to evoke the idea of a nonlocal variable.
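Here is the same workaround in miniature, as it might appear in code that must also run on Python 2 (a sketch, not the excerpt above):

```python
def make_counter():
    count = [0]          # mutable box standing in for a rebindable variable
    def increment():
        count[0] += 1    # mutation, not rebinding, so no nonlocal is needed
        return count[0]
    return increment

tick = make_counter()
assert (tick(), tick(), tick()) == (1, 2, 3)
print(tick())   # 4
```
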
This pattern is less common than in some languages because of Python’s excellent object system, in particular its ability to bind methods to objects. When a callback function which accesses or modifies some internal state is needed, often that state will be placed in a class instance. State on objects in Python is readable and introspectable, and the methods of an object can be used as callbacks.
def tallest_building():
    buildings = {'Burj Khalifa': 828,
                 'Shanghai Tower': 632,
                 'Abraj Al-Bait': 601}
    return max(buildings.keys(), key=buildings.get)
Here the
get method of the builtin Python dictionary object is used as a
callback, which concisely expresses what data the method will operate on.
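A sketch of that style: state lives on an object, and a bound method is passed around as the callback, so no rebinding closure is needed.

```python
class ScoreBoard:
    def __init__(self):
        self.scores = []
    def record(self, value):
        self.scores.append(value)

board = ScoreBoard()
callback = board.record     # bound method: the object travels with the code

for n in (3, 1, 4):
    callback(n)             # no closure needed; state lives on the object

assert board.scores == [3, 1, 4]
print(board.scores)
```
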
And finally I posit a cultural reason: Python programmers tend to be comfortable with private data being externally accessible. Python and JavaScript are relatively similar languages, and both lack (or at least in certain versions have lacked) private object data which can be accessed by methods of the object but not by outside code, instead using conventions like a single underscore to inform users that an attribute is not part of the public interface with that object. In both languages the following pattern is possible, but in JavaScript it is commonplace while in Python it is unheard of.
>>> class Person(object): pass
...
>>> def create_person(name):
...     age = 10
...     p = Person()
...     def birthday():
...         nonlocal age
...         age = age + 1
...     p.birthday = birthday
...     p.greet = lambda: print("Hi, I'm", name, "and I'm", age, "years old")
...     return p
...
>>> me = create_person('Tom')
>>> me.birthday()
>>> me.age
Traceback (most recent call last):
  File "<input>", line 1, in <module>
AttributeError: 'Person' object has no attribute 'age'
The above code hides data in local variables of a constructor function
which inner functions have access to, then adds these methods to the
Person
instance. Now the
Person instance has methods for accessing and modifying private
data that are not attributes of the object itself. Again, this pattern is entirely
possible and achieves the same aim as in JavaScript, but culturally isn’t used
in Python.
I think it’s fine that we don’t use rebinding closures all that much. In new code nonlocal should be used when appropriate instead of the mutable object hack we saw above, but it’s fine for it to remain relatively rare.
Some modern Python functions are most certainly closures: whether it’s all of them or just a few depends on the definition. Is it enough to be capable of referring to variables from outer scopes (all Python functions), or must the functions make use of this ability? Must these outer scopes not be global? Must the scopes referenced by the free variables go out of scope to prove a function is a closure, or is storing the environment such that the variables could go out of scope enough? And must closures be able to rebind these free variables, disqualifying all Python 2 functions?
I like the “all Python 2.2 and greater functions which close over outer, non-global scopes are closures” answer, but have found closure in knowing what discussion to have if I were asked.
I hope I’ve helped you find closure with closures.
Further reading:
- An earlier post about rebinding closures in Python
- Ned Batchelder’s Facts and myths about Python names and values and later PyCon talk
- PEP 227 Statically Nested Scopes, which includes notes on closure implementation
- PEP 3104 Access to Names in Outer Scopes
- Wikipedia: Closure (computer programming)
Related Python topics:
- builtins: last resort of failed global variable lookups
__closure__ and
__code__.co_cellvars: how closures are implemented
- bytecode: what does “compiling” a function really mean?
- descriptors and method binding: the dark secret that turns functions into methods
- scopes of various comprehensions and generator expressions: I lied when I said scope hasn’t changed much
Others’ thoughts on closures in Python:
- effbot: Closures in Python
- Stack Overflow: Why aren’t Python nested functions called closures?
- ynniv: Closures in Python
- Python Tutorial: Python Scopes and Namespaces
Others’ thoughts on closures:
- Joel Moses: The Function of FUNCTION in LISP
- Martin Fowler: Lambda
- MDN: JavaScript Closures
- Stack Overflow: What is a closure?
- programmers.stackexchange: What is a closure? | http://ballingt.com/python-closures/ | CC-MAIN-2018-34 | refinedweb | 3,518 | 60.95 |
Red Hat Bugzilla – Bug 1258014
oslo_config != oslo.config
Last modified: 2016-05-19 11:57:36 EDT
Created attachment 1068070 [details]
patches to make things work.
Description of problem:
Had upgraded from juno -> kilo and things were very broken. Had done another cluster from juno -> kilo rdo 2015.1.0 and didn't see this issue.
Multiple services, multiple processes each were broken. Notably Cinder, Horizon, Ceilometer and Trove.
Long story short, it turns out some of the packages are mixing imports of oslo.config and oslo_config, either directly or via oslo.messaging importing oslo_messaging, which imports oslo_config.
Since oslo_config and oslo.config are different namespaces, config entries get registered in the wrong place, then break when the process goes and looks up the values in the other namespace.
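A minimal sketch of the failure mode (the module and option names here are illustrative, not the real oslo registration API): the two spellings are two distinct module objects, so anything registered through one namespace is invisible through the other.

```python
import sys
import types

# Two distinct module objects standing in for the two import spellings.
dotted = types.ModuleType('oslo.config')
underscored = types.ModuleType('oslo_config')
dotted.REGISTERED_OPTS = []
underscored.REGISTERED_OPTS = []
sys.modules['oslo.config'] = dotted
sys.modules['oslo_config'] = underscored

# A service registers a config option via the dotted namespace...
sys.modules['oslo.config'].REGISTERED_OPTS.append('my_opt')

# ...then looks it up via the underscore namespace: two separate registries.
assert 'my_opt' in sys.modules['oslo.config'].REGISTERED_OPTS
assert 'my_opt' not in sys.modules['oslo_config'].REGISTERED_OPTS
```
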
I've attached the set of patches I had to apply to get the cloud starting again.
Version-Release number of selected component (if applicable):
2015.1.1
How reproducible:
100%
Steps to Reproduce:
1. Install the packages
Actual results:
Stuff won't start
Expected results:
Stuff starts
This bug is against a Version which has reached End of Life.
If it's still present in supported release (), please update Version and reopen. | https://bugzilla.redhat.com/show_bug.cgi?id=1258014 | CC-MAIN-2017-39 | refinedweb | 200 | 60.51 |
21 July 2011 10:25 [Source: ICIS news]
SHANGHAI (ICIS)--
Asian BD prices have surged to $4,250-4,300/tonne (€2,975-3,010/tonne) CFR (cost & freight) NE (northeast)
Shandong Yuhuang is planning to shut its BR plant for two months and will not make any offers, an industry source said.
Domestic BD prices have surged to around yuan (CNY) 30,000-31,000/tonne ($4,644-4,799/tonne), while BR prices were at around CNY34,000/tonne on 20 July, according to Chemease, an ICIS service in
Despite the high BR prices, producers are not able to cover their costs with most making a loss in the domestic market, according to a market source.
($1 = €0.70, $1 = CNY6.46)
Additional reporting by Alex Feng and Helen Yan | http://www.icis.com/Articles/2011/07/21/9479011/chinas-shandong-yuhuang-shuts-butadiene-rubber-plant-on-high.html | CC-MAIN-2015-22 | refinedweb | 134 | 74.32 |
hints
A simple widget for showing dismissible help texts to the user. If the widget is dismissed, it remembers this by saving the state in persistent storage and is never shown again.
Usage
Import the package
To use this package, add hints as a dependency in your pubspec.yaml
Use the package
import 'package:hints/hints.dart';
The main widget to use is HintCard.
The widget needs a unique Key to keep track of whether it is hidden or not.
It also needs a hint text and can optionally be given an icon.
Note: The widget will assert if no key is provided. Don't provide a generated key, as it will not match on the next run.
Getting started
Please look at the example code for getting started.
| https://pub.dev/documentation/hints/latest/ | CC-MAIN-2019-43 | refinedweb | 128 | 75.91 |
package org.apache.myfaces.custom.globalId;

import javax.faces.component.NamingContainer;
import javax.faces.component.UIComponentBase;
import javax.faces.context.FacesContext;

/**
 * A simple container-component that causes its child components to render a clientId value without
 * any prefix.
 * <p>
 * Important: this component works only when run in a JSF-1.2 (or later) environment. When run in
 * a JSF-1.1 environment it will not cause an error, but will instead act like a NamingContainer
 * itself, ie will <i>add</i> its own id to the child component's clientId.
 * </p>
 * <p>
 * Every JSF component has a "clientId" property; when the component is rendered, many components
 * output this as part of the rendered representation. In particular, when rendering HTML, many
 * components write an "id" attribute on their html element which contains the clientId. The clientId
 * is defined as being the clientId value of the nearest NamingContainer ancestor plus ":" plus the
 * component's id.
 * </p>
 * <p>
 * The prefixing of the parent container's clientId is important for safely building views from
 * multiple files (eg using Facelets templating or JSP includes). However in some cases it is
 * necessary or useful to render a clientId which is just the raw id of the component without any
 * naming-container prefix; this component can be used to do that simply by adding an instance of
 * this type as an ancestor of the problem components. This works for <i>all</i> JSF components,
 * not just Tomahawk ones.
 * </p>
 * <p>
 * Use of this component should be a "last resort"; having clientIds which contain the id of the ancestor
 * NamingContainer is important and useful behaviour. It allows a view to be built from multiple different
 * files (using facelets templating or jsp includes); without this feature, component ids would need to be
 * very carefully managed to ensure the same id was not used in two places. In addition, it would not be
 * possible to include the same page fragment twice.
 * </p>
 * <p>
 * Ids are sometimes used by Cascading Style Sheets to address individual components, and JSF compound
 * ids are not usable by CSS. However wherever possible use a style <i>class</i> to select the component
 * rather than using this component to assign a "global" id.
 * </p>
 * <p>
 * Ids are sometimes used by javascript "onclick" handlers to locate HTML elements associated with the
 * clicked item (document.getById). Here, the onclick handler method can be passed the id of the clicked
 * object, and some simple string manipulation can then compute the correct clientId for the target
 * component, rather than using this component to assign a "global" id to the component to be accessed.
 * </p>
 * <p>
 * This component is similar to the "forceId" attribute available on many Tomahawk components. Unlike
 * the forceId attribute this (a) can be used with all components, not just Tomahawk ones, and (b)
 * applies to all its child components.
 * </p>
 * <p>
 * Note that since JSF1.2 forms have the property prefixId which can be set to false to make a UIForm
 * act as if it is not a NamingContainer. This is a good idea; the form component should probably
 * never have been a NamingContainer, and disabling this has no significant negative effects.
 * </p>
 *
 * @JSFComponent
 *   name = "s:globalId"
 *   tagClass = "org.apache.myfaces.custom.globalId.GlobalIdTag"
 */
public class GlobalId extends UIComponentBase implements NamingContainer
{
    public final static String COMPONENT_FAMILY = "org.apache.myfaces.custom.globalId";
    public final static String COMPONENT_TYPE = "org.apache.myfaces.custom.globalId";

    public String getFamily()
    {
        return COMPONENT_FAMILY;
    }

    // Note: this method was added to UIComponentBase in JSF 1.2; JSF-1.1 environments will
    // simply never call it.
    public String getContainerClientId(FacesContext facesContext)
    {
        return null;
    }
}
- Select Move to Folder. The Move to Folder dialog box opens:
- Specify the folder where the chosen type should be moved using the Target folder drop-down list.
- Click Preserve hierarchy of folders and file names to leave the structure of folders "as is" during the refactoring. Click Put classes into separate files to put each class into a separate file.
- Select the Fix namespaces check box to adjust namespaces according to the new location automatically.
- Select the Allow change internal visibility to public when it is required check box to change the visibility of an internal type when it is moved to another project.
- All types that can be moved are displayed in the text area. Select the check boxes next to the types you want to move.
- Click Next. If no conflicts are found, ReSharper performs the refactoring immediately. Otherwise, resolve conflicts. | http://www.jetbrains.com/resharper/webhelp70/Refactorings__Move__Type_to_Folder.html | CC-MAIN-2013-48 | refinedweb | 139 | 55.95 |
Question:
Hello I have the following code
using System;
using System.IO;
using System.Text.RegularExpressions;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            string searchText = "find this text, and some other text";
            string replaceText = "replace with this text";
            string query = "%SystemDrive%";
            string str = Environment.ExpandEnvironmentVariables(query);
            string filePath = str + "mytestfile.xml";

            StreamReader reader = new StreamReader(filePath);
            string content = reader.ReadToEnd();
            reader.Close();

            content = Regex.Replace(content, searchText, replaceText);

            StreamWriter writer = new StreamWriter(filePath);
            writer.Write(content);
            writer.Close();
        }
    }
}
the replace doesn't find the search text because it is on separate lines like
find this text,
and some other text.
How would I write the regex expression so that it will find the text?
Solution:1
Why are you trying to use regular expressions for a simple search and replace? Just use:
content = content.Replace(searchText, replaceText);
You may also need to add '\n' into your string to add a line break in order for the replace to match.
Try changing search text to:
string searchText = "find this text,\n" + "and some other text";
Solution:2
To search for any whitespace (spaces, line breaks, tabs, ...), you should use \s in your regular expression:
string searchText = @"find\s+this\s+text,\s+and\s+some\s+other\s+text";
Of course, this is a very limited example, but you get the idea...
Solution:3
This is a side note for your specific question, but you are re-inventing some functionality that the framework provides for you. Try this code:
// requires: using System; using System.IO;
static void Main(string[] args)
{
    string searchText = "find this text, and some other text";
    string replaceText = "replace with this text";
    string root = Path.GetPathRoot(Environment.SystemDirectory);
    string filePath = root + "mytestfile.xml";

    string content = File.ReadAllText(filePath);
    content = content.Replace(searchText, replaceText);
    File.WriteAllText(filePath, content);
}
Note: If you also have a question or solution, just comment below or mail us at toontricks1994@gmail.com
: common-list ( list1 list2 -- list3 ) \ gforth-internal
    \ list1 and list2 are lists, where the heads are at higher addresses than
    \ the tail. list3 is the largest sublist of both lists.
    begin
        2dup u<>
    while
        2dup u>
        if
            swap
        then
        @
    repeat
    drop ;

: sub-list? ( list1 list2 -- f ) \ gforth-internal
    \ true iff list1 is a sublist of list2
    begin
        2dup u<
    while
        @
    repeat
    = ;

: list-size ( list -- u ) \ gforth-internal
    \ size of the locals frame represented by list
    0 ( list n )
    begin
        over 0<>
    while
        over
        ((name>)) >body @ max
        swap @ swap ( get next )
    repeat
    faligned nip ;

: set-locals-size-list ( list -- )
    dup locals-list !
    list-size locals-size ! ;

: check-begin ( list -- )
    \ warn if list is not a sublist of locals-list
    locals-list @ sub-list? 0= if
        \ !! print current position
        >stderr ." compiler was overly optimistic about locals at a BEGIN" cr
        \ !! print assumption and reality
    then ;

: compile-pushlocal-f ( a-addr -- ) ( run-time: f -- )
    locals-size @ alignlp-f float+ dup locals-size !
    swap !
    postpone f>l ;

: compile-pushlocal-d ( a-addr -- ) ( run-time: w1 w2 -- )
    locals-size @ alignlp-w cell+ cell+ dup locals-size !
    swap !
    postpone swap postpone >l postpone >l ;

: compile-pushlocal-c ( a-addr -- ) ( run-time: w -- )
    -1 chars compile-lp+!
    locals-size @ swap !
    postpone lp@ postpone c! ;

7 cells 32 + constant locals-name-size \ 32-char name + fields + wiggle room

: create-local1 ( "name" -- a-addr )
    create
    immediate restrict
    here 0 , ( place for the offset ) ;

variable dict-execute-dp \ the special dp for DICT-EXECUTE

0 value dict-execute-ude \ USABLE-DICTIONARY-END during DICT-EXECUTE

: dict-execute1 ( ... addr1 addr2 xt -- ... )
    \ execute xt with HERE set to addr1 and USABLE-DICTIONARY-END set to addr2
    dict-execute-dp @ dp 2>r
    dict-execute-ude ['] usable-dictionary-end defer@ 2>r
    swap to dict-execute-ude
    ['] dict-execute-ude is usable-dictionary-end
    swap to dict-execute-dp
    dict-execute-dp dpp !
    catch
    2r> is usable-dictionary-end to dict-execute-ude
    2r> dpp ! dict-execute-dp !
    throw ;

defer dict-execute ( ... addr1 addr2 xt -- ... )

:noname ( ... addr1 addr2 xt -- ... )
    \ first have a dummy routine, for SOME-CLOCAL etc. below
    nip nip execute ;
is dict-execute

: create-local ( " name" -- a-addr )
    \ defines the local "name"; the offset of the local shall be
    \ stored in a-addr
    locals-name-size allocate throw
    dup locals-mem-list prepend-list
    locals-name-size cell /string over + ['] create-local1 dict-execute ;

variable locals-dp \ so here's the special dp for locals.

: lp-offset ( n1 -- n2 )
    \ converts the offset from the frame start to an offset from lp and
    \ i.e., the address of the local is lp+locals_size-offset
    locals-size @ swap - ;

: lp-offset, ( n -- )
    \ converts the offset from the frame start to an offset from lp and
    \ adds it as inline argument to a preceding locals primitive
    lp-offset , ;

vocabulary locals-types \ this contains all the type specifyers, -- and }
locals-types definitions

: W: ( "name" -- a-addr xt ) \ gforth w-colon
    create-local
    \ xt produces the appropriate locals pushing code when executed
    ['] compile-pushlocal-w
does> ( Compilation: -- ) ( Run-time: -- w )
    \ compiles a local variable access
    @ lp-offset compile-@local ;

: W^ ( "name" -- a-addr xt ) \ gforth w-caret
    create-local
    ['] compile-pushlocal-w
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: F: ( "name" -- a-addr xt ) \ gforth f-colon
    create-local
    ['] compile-pushlocal-f
does> ( Compilation: -- ) ( Run-time: -- w )
    @ lp-offset compile-f@local ;

: F^ ( "name" -- a-addr xt ) \ gforth f-caret
    create-local
    ['] compile-pushlocal-f
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: D: ( "name" -- a-addr xt ) \ gforth d-colon
    create-local
    ['] compile-pushlocal-d
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, postpone 2@ ;

: D^ ( "name" -- a-addr xt ) \ gforth d-caret
    create-local
    ['] compile-pushlocal-d
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: C: ( "name" -- a-addr xt ) \ gforth c-colon
    create-local
    ['] compile-pushlocal-c
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, postpone c@ ;

: C^ ( "name" -- a-addr xt ) \ gforth c-caret
    create-local
    ['] compile-pushlocal-c
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

\ you may want to make comments in a locals definitions group:
' \ alias \ ( compilation 'ccc<newline>' -- ; run-time -- ) \ core-ext,block-ext backslash
\G Comment till the end of the line if @code{BLK} contains 0 (i.e.,
\G while not loading a block), parse and discard the remainder of the
\G parse area. Otherwise, parse and discard all subsequent characters
\G in the parse area corresponding to the current line.
immediate

' ( alias ( ( compilation 'ccc<close-paren>' -- ; run-time -- ) \ core,file paren
\G Comment, usually till the next @code{)}: parse and discard all
\G subsequent characters in the parse area until ")" is
\G encountered. During interactive input, an end-of-line also acts as
\G a comment terminator. For file input, it does not; if the
\G end-of-file is encountered whilst parsing for the ")" delimiter,
\G Gforth will generate a warning.
immediate

forth definitions
also locals-types

\ these "locals" are used for comparison in TO
c: some-clocal 2drop
d: some-dlocal 2drop
f: some-flocal 2drop
w: some-wlocal 2drop

' dict-execute1 is dict-execute \ now the real thing

\ the following gymnastics are for declaring locals without type specifier.
\ we exploit a feature of our dictionary: every wordlist
\ has it's own methods for finding words etc.
\ So we create a vocabulary new-locals, that creates a 'w:' local named x
\ when it is asked if it contains x.

: new-locals-find ( caddr u w -- nfa )
    \ this is the find method of the new-locals vocabulary
    \ make a new local with name caddr u; w is ignored
    \ the returned nfa denotes a word that produces what W: produces
    \ !! do the whole thing without nextname
    drop nextname
    ['] W: >head-noprim ;

previous

: new-locals-reveal ( -- )
    true abort" this should not happen: new-locals-reveal" ;

create new-locals-map ( -- wordlist-map )
    ' new-locals-find A,
    ' new-locals-reveal A,
    ' drop A, \ rehash method
    ' drop A,

new-locals-map mappedwordlist Constant new-locals-wl

\ slowvoc @
\ slowvoc on
\ vocabulary new-locals
\ slowvoc !
\ new-locals-map ' new-locals >body wordlist-map A! \ !! use special access words

\ and now, finally, the user interface words
: { ( -- latestxt wid 0 ) \ gforth open-brace
    latestxt get-current
    get-order new-locals-wl swap 1+ set-order
    also locals definitions locals-types
    0 TO locals-wordlist
    0 postpone [ ; immediate

locals-types definitions

: } ( latestxt wid 0 a-addr1 xt1 ... -- ) \ gforth close-brace
    \ ends locals definitions
    ]
    begin
        dup
    while
        execute
    repeat
    drop
    locals-size @ alignlp-f locals-size ! \ the strictest alignment
    previous previous
    set-current lastcfa !
    locals-list 0 wordlist-id - TO locals-wordlist ;

: -- ( addr wid 0 ... -- ) \ gforth dash-dash
    }
    [char] } parse 2drop ;

forth definitions

\ A few thoughts on automatic scopes for locals and how they can be
\ implemented:

\ We have to combine locals with the control structures. My basic idea
\ was to start the life of a local at the declaration point. The life
\ would end at any control flow join (THEN, BEGIN etc.) where the local
\ is not live on both input flows (note that the local can still live in
\ other, later parts of the control flow). This would make a local live
\ as long as you expected and sometimes longer (e.g. a local declared in
\ a BEGIN..UNTIL loop would still live after the UNTIL).

\ The following example illustrates the problems of this approach:

\ { z }
\ if
\   { x }
\ begin
\   { y }
\ [ 1 cs-roll ] then
\   ...
\ until

\ x lives only until the BEGIN, but the compiler does not know this
\ until it compiles the UNTIL (it can deduce it at the THEN, because at
\ that point x lives in no thread, but that does not help much). This is
\ solved by optimistically assuming at the BEGIN that x lives, but
\ warning at the UNTIL that it does not. The user is then responsible
\ for checking that x is only used where it lives.

\ The produced code might look like this (leaving out alignment code):

\ >l ( z )
\ ?branch <then>
\ >l ( x )
\ <begin>:
\ >l ( y )
\ lp+!# 8 ( RIP: x,y )
\ <then>:
\ ...
\ lp+!# -4 ( adjust lp to <begin> state )
\ ?branch <begin>
\ lp+!# 4 ( undo adjust )

\ The BEGIN problem also has another incarnation:

\ AHEAD
\ BEGIN
\ x
\ [ 1 CS-ROLL ] THEN
\ { x }
\ ...
\ UNTIL

\ should be legal: The BEGIN is not a control flow join in this case,
\ since it cannot be entered from the top; therefore the definition of x
\ dominates the use. But the compiler processes the use first, and since
\ it does not look ahead to notice the definition, it will complain
\ about it. Here's another variation of this problem:

\ IF
\ { x }
\ ELSE
\ ...
\ AHEAD
\ BEGIN
\ x
\ [ 2 CS-ROLL ] THEN
\ ...
\ UNTIL

\ In this case x is defined before the use, and the definition dominates
\ the use, but the compiler does not know this until it processes the
\ UNTIL. So what should the compiler assume does live at the BEGIN, if
\ the BEGIN is not a control flow join? The safest assumption would be
\ the intersection of all locals lists on the control flow
\ stack. However, our compiler assumes that the same variables are live
\ as on the top of the control flow stack. This covers the following case:

\ { x }
\ AHEAD
\ BEGIN
\ x
\ [ 1 CS-ROLL ] THEN
\ ...
\ UNTIL

\ If this assumption is too optimistic, the compiler will warn the user.

\ Implementation:

\ explicit scoping

: scope ( compilation -- scope ; run-time -- ) \ gforth
    cs-push-part scopestart ; immediate

: adjust-locals-list ( wid -- )
    locals-list @ common-list
    dup list-size adjust-locals-size
    locals-list ! ;

: endscope ( compilation scope -- ; run-time -- ) \ gforth
    scope?
    drop adjust-locals-list ; immediate

\ adapt the hooks

: locals-:-hook ( sys -- sys addr xt n )
    \ addr is the nfa of the defined word, xt its xt
    DEFERS :-hook
    latest latestxt
    clear-leave-stack
    0 locals-size !
    locals-mem-list @ free-list
    0 locals-mem-list !
    0 locals-list !
    dead-code off
    defstart ;

: locals-;-hook ( sys addr xt sys -- sys )
    def?
    0 TO locals-wordlist
    0 adjust-locals-size ( not every def ends with an exit )
    lastcfa ! last !
    DEFERS ;-hook ;

\ THEN (another control flow from before joins the current one):
\ The new locals-list is the intersection of the current locals-list and
\ the orig-local-list. The new locals-size is the (alignment-adjusted)
\ size of the new locals-list. The following code is generated:
\ lp+!# (current-locals-size - orig-locals-size)
\ <then>:
\ lp+!# (orig-locals-size - new-locals-size)

\ Of course "lp+!# 0" is not generated. Still this is admittedly a bit
\ inefficient, e.g. if there is a locals declaration between IF and
\ ELSE. However, if ELSE generates an appropriate "lp+!#" before the
\ branch, there will be none after the target <then>.

: (then-like) ( orig -- )
    dead-orig =
    if
        >resolve drop
    else
        dead-code @
        if
            >resolve set-locals-size-list dead-code off
        else \ both live
            over list-size adjust-locals-size
            >resolve
            adjust-locals-list
        then
    then ;

: (begin-like) ( -- )
    dead-code @ if
        \ set up an assumption of the locals visible here. if the
        \ users want something to be visible, they have to declare
        \ that using ASSUME-LIVE
        backedge-locals @ set-locals-size-list
    then
    dead-code off ;

\ AGAIN (the current control flow joins another, earlier one):
\ If the dest-locals-list is not a subset of the current locals-list,
\ issue a warning (see below). The following code is generated:
\ lp+!# (current-local-size - dest-locals-size)
\ branch <begin>

: (again-like) ( dest -- addr )
    over list-size adjust-locals-size
    swap check-begin POSTPONE unreachable ;

\ UNTIL (the current control flow may join an earlier one or continue):
\ Similar to AGAIN. The new locals-list and locals-size are the current
\ ones. The following code is generated:
\ ?branch-lp+!# <begin> (current-local-size - dest-locals-size)

: (until-like) ( list addr xt1 xt2 -- )
    \ list and addr are a fragment of a cs-item
    \ xt1 is the conditional branch without lp adjustment, xt2 is with
    >r >r
    locals-size @ 2 pick list-size - dup if ( list dest-addr adjustment )
        r> drop r> compile,
        swap <resolve ( list adjustment ) ,
    else ( list dest-addr adjustment )
        drop
        r> compile, <resolve
        r> drop
    then ( list )
    check-begin ;

: (exit-like) ( -- )
    0 adjust-locals-size ;

' locals-:-hook IS :-hook
' locals-;-hook IS ;-hook

' (then-like) IS then-like
' (begin-like) IS begin-like
' (again-like) IS again-like
' (until-like) IS until-like
' (exit-like) IS exit-like

\ The words in the locals dictionary space are not deleted until the end
\ of the current word. This is a bit too conservative, but very simple.

\ There are a few cases to consider: (see above)

\ after AGAIN, AHEAD, EXIT (the current control flow is dead):
\ We have to special-case the above cases against that. In this case the
\ things above are not control flow joins. Everything should be taken
\ over from the live flow. No lp+!# is generated.

\ About warning against uses of dead locals. There are several options:

\ 1) Do not complain (After all, this is Forth;-)

\ 2) Additional restrictions can be imposed so that the situation cannot
\ arise; the programmer would have to introduce explicit scoping
\ declarations in cases like the above one. I.e., complain if there are
\ locals that are live before the BEGIN but not before the corresponding
\ AGAIN (replace DO etc. for BEGIN and UNTIL etc. for AGAIN).

\ 3) The real thing: i.e. complain, iff a local lives at a BEGIN, is
\ used on a path starting at the BEGIN, and does not live at the
\ corresponding AGAIN. This is somewhat hard to implement. a) How does
\ the compiler know when it is working on a path starting at a BEGIN
\ (consider "{ x } if begin [ 1 cs-roll ] else x endif again")? b) How
\ is the usage info stored?

\ For now I'll resort to alternative 2. When it produces warnings they
\ will often be spurious, but warnings should be rare. And better
\ spurious warnings now and then than days of bug-searching.

\ Explicit scoping of locals is implemented by cs-pushing the current
\ locals-list and -size (and an unused cell, to make the size equal to
\ the other entries) at the start of the scope, and restoring them at
\ the end of the scope to the intersection, like THEN does.

\ And here's finally the ANS standard stuff

: (local) ( addr u -- ) \ local paren-local-paren
    \ a little space-inefficient, but well deserved ;-)
    \ In exchange, there are no restrictions whatsoever on using (local)
    \ as long as you use it in a definition
    dup
    if
        nextname POSTPONE { [ also locals-types ] W: } [ previous ]
    else
        2drop
    endif ;

: >definer ( xt -- definer ) \ gforth
    \G @var{Definer} is a unique identifier for the way the @var{xt}
    \G was defined. Words defined with different @code{does>}-codes
    \G have different definers. The definer can be used for
    \G comparison and in @code{definer!}.
    dup >does-code
    ?dup-if
        nip 1 or
    else
        >code-address
    then ;

: definer! ( definer xt -- ) \ gforth
    \G The word represented by @var{xt} changes its behaviour to the
    \G behaviour associated with @var{definer}.
    over 1 and if
        swap [ 1 invert ] literal and does-code!
    else
        code-address!
    then ;

:noname
    ' dup >definer [ ' locals-wordlist ] literal >definer =
    if
        >body !
    else
        -&32 throw
    endif ;
:noname
    comp' drop dup >definer
    case
        [ ' locals-wordlist ] literal >definer \ value
        OF >body POSTPONE Aliteral POSTPONE ! ENDOF
        \ !! dependent on c: etc. being does>-defining words
        \ this works, because >definer uses >does-code in this case,
        \ which produces a relocatable address
        [ comp' some-clocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE c! ENDOF
        [ comp' some-wlocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF
        [ comp' some-dlocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE 2! ENDOF
        [ comp' some-flocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF
        -&32 throw
    endcase ;
interpret/compile: TO ( c|w|d|r "name" -- ) \ core-ext,local

: locals| ( ... "name ..." -- ) \ local-ext locals-bar
    \ don't use 'locals|'! use '{'! A portable and free '{'
    \ implementation is compat/anslocals.fs
    BEGIN
        name 2dup s" |" str= 0=
    WHILE
        (local)
    REPEAT
    drop 0 (local) ; immediate restrict
ExtJS is a JavaScript framework from Sencha for building Rich Internet Applications. It boasts one of the largest libraries of pre-built modular UI components.
Since version 5.0, Sencha has advocated the use of Model-View-ViewModel (MVVM) architecture on its platform. It also maintains support for Model-View-Controller (MVC) architecture which was the primary architecture style supported up through version 4.x.
Additionally, Sencha has focused on outfitting ExtJS with mobile-centric and responsive web application capabilities. Its former Sencha Touch framework has been integrated with ExtJS since version 6.0 with efforts to combine the customer bases and consolidate redundancies in the new combined framework.
Typical usage of ExtJS leverages the framework to build single-page rich-applications (RIA). The simplest way to get started is to make use of Sencha Cmd, a CLI build tool covering most of the general concerns in a deployment life-cycle, primarily:
The second step is to download the SDK. ExtJS is a commercial product; to obtain a copy, use one of the following:
After downloading the SDK ensure the archive is extracted before proceeding.
Note: See the official Getting Started documentation for a comprehensive guide to ExtJS projects.
After installing Sencha Cmd, it's availability can be verified by opening a console window and running:
> sencha help
We now have the tools necessary to create and deploy ExtJS applications, take note of the directory location where the SDK was extracted as this will be required in further examples.
Let's start using ExtJS to build a simple web application.
We will create a simple web application which will have only one physical page (aspx/html). At a minimum, every ExtJS application will contain one HTML and one JavaScript file—usually index.html and app.js.
The file index.html or your default page will include the references to the CSS and JavaScript code of ExtJS, along with your app.js file containing the code for your application (basically starting point of your web application).
Let’s create a simple web application that will use ExtJS library components:
Step 1: Create a empty web application
As shown in the screenshot, I have created an empty web application. To make it simple, you can use any web application project in the editor or IDE of your choice.
Step 2: Add a default web page
If you have created an empty web application, then we need to include a web page that would be the starting page of our application.
Step 3: Add Ext Js References to Default.aspx
This step shows how we make use of extJS Library. As shown in the screenshot in the Default.aspx, I have just referred 3 files:
Sencha has partnered with CacheFly, a global content network, to provide free CDN hosting for the ExtJS framework. In this sample I have used Ext's CDN library, however we could use the same files (ext-all.js & ext-all.css) from our project directory instead or as backups in the event the CDN was unavailable.
By referring to the app.js, it would be loaded into the browser and it would be the starting point for our application.
Apart from these files, we have a placeholder where UI will be rendered. In this sample, we have a div with id “whitespace” that we will use later to render UI.
<script type="text/javascript" src=""></script> <link rel="stylesheet" type="text/css" href=""/> <script src="app/app.js"></script>
Step 4: Add app folder & app.js in your web project
ExtJS provides us with a way to manage the code in an MVC pattern. As shown in the screenshot, we have a container folder for our ExtJS application, in this case 'app'. This folder will contain all of our application code split into various folders, i.e., model, view, controller, store, etc. Currently, it has only the app.js file.
Step 5: Write your code in app.js
App.js is the starting point of our application; for this sample I have just used minimum configuration required to launch the application.
Ext.application represents an ExtJS application which does several things. It creates a global variable ‘SenchaApp’ provided in the name configuration and all of the application classes (models, views, controllers, stores) will reside in the single namespace. Launch is a function that is called automatically when all the application is ready (all the classes are loaded properly).
In this sample, we are creating a Panel with some configuration and rendering it on the placeholder that we provided in the Default.aspx.
Ext.application({ name: 'SenchaApp', launch: function () { Ext.create('Ext.panel.Panel', { title: 'Sencha App', width: 300, height: 300, bodyPadding:10, renderTo: 'whitespace', html:'Hello World' }); } });
Output Screenshot
When you run this web application with Default.aspx as a startup page, the following window will appear in the browser.
This example demonstrates creating a basic application in ExtJS using Sencha Cmd to bootstrap the process - this method will automatically generate some code and a skeleton structure for the project.
Open a console window and change the working directory to an appropriate space in which to work. In the same window and directory run the following command to generate a new application.
> sencha -sdk /path/to/ext-sdk generate app HelloWorld ./HelloWorld
Note: The
-sdk flag specifies the location of the directory extracted from the framework archive.
In ExtJS 6+ Sencha have merged both the ExtJS and Touch frameworks into a single codebase, differentiated by the terms classic and modern respectively. For simplicity if you do not wish to target mobile devices, an additional flag may be specified in the command to reduce clutter in the workspace.
> sencha -sdk /path/to/ext-sdk generate app -classic HelloWorld ./HelloWorld
Without any further configuration, a fully functional demo application should now reside in the local directory. Now change the working directory to the new
HelloWorld project directory and run:
> sencha app watch
By doing this, the project is compiled using the default build profile and a simple HTTP server is started which allows the viewing of the application locally through a web browser. By default on port 1841. | https://riptutorial.com/extjs | CC-MAIN-2021-31 | refinedweb | 1,024 | 54.93 |
These are heady times for C++. Active standardization was put on hold after the C++ Standard was published in 1998, to give us time to fix bugs and let compilers and libraries catch up. Now things are "hot" again, and a lot of exciting stuff is happening.
This new column is about precisely that "exciting stuff." Titled "The New C++," it focuses on the active work now under way to extend the C++ language (not much) and library (very much) now and in the next few years as we progress toward "version 2.0" of the C++ Standard.
There's a lot to cover, and this is where you'll find the most up-to-date coverage. Some of us have already written a bit about the new C++ in other fora: for more overview information about where we're at and what (and why) exciting things are now happening, see the two complementary columns by me and Matt Austern in the January 2002 issue of CUJ [1, 2]. They contain some basic information about what's going on, who's going on about it, and perhaps most importantly of all how you can participate and how it affects your work today and in the short term, not just years down the road.
A Roadmap
In this, the first installment of "The New C++," I am going to start with a complementary bird's-eye roadmap of where we've been and where we're going, and then I'm going to devote most of this introductory column to just that introductions, of people and groups and terms, a "who's who" and "glossary" to the C++ standardization process. In future columns, I'll describe in more technical detail some of the key facilities being considered, how they work, and what issues come up as they're debated in committee and between meetings on the committee email reflectors.
Figure 1 shows the major pieces influencing the development of the C++ Standard, both in leading up to C++98's publication in 1998 and C++0x's publication at some future time. This picture should give you a useful roadmap of how various items work and connect, and what leads to what else at about what time. The rest of this column defines the terms used in Figure 1.
Figure 1: The past and future C++ timeline
And for now, I won't say much more than that, but I do hasten to point out one thing in particular: Boost is by no means the only, or even necessarily the major, outside contribution to the Library TR (Technical Report); it just happens to be the most visible single group at this time.
Dramatis Personae
Here is a brief summary of the individuals and organizations who are the major players in the past and future C++ development process.
ANSI: The American National Standards Institute. At ANSI meetings, the rule is "one company (or individual representing themselves), one vote." Within ISO meetings, ANSI's delegation is the delegation for the United States and thus receives one vote, just as the delegations for other countries represented at the meeting each get one ISO vote. Still, because of the United States' predominant role in the software industry in general and in C and C++ in particular, ANSI is something of a "first among equals" in practice at WG14 and WG21 meetings.
ANSI J11: The ANSI C committee.
ANSI J16: The ANSI C++ committee. J16 always meets together simultaneously with ISO WG21 (this being one expression of its "first among equals" status). For the past two years it has also met at the same location, and in an adjacent week, as J11 so as to promote cross-committee communication.
AT&T: AT&T Bell Labs (now AT&T Research) is where C++ began life in the early 1980s, the brainchild of Bjarne Stroustrup. AT&T offered C++ for standardization and WG21/J16 work began in 1989/1990.
Boost: The "C+ Boost" effort was originated just after the C++98 Standard was passed in order to start working on developing "existing practice" for the next round of active standardization. Initially predominantly composed of standards committee members, the current membership has grown much wider. See <>.
CWG (Core Working Group): The "subcommittee" within WG21/J16 that focuses on core language issues, such as namespaces, templates, and so forth.
HP: Hewlett-Packard, where Alex Stepanov and Meng Lee invented the revolutionary STL (Standard Template Library), much of which was incorporated into the draft C++ Standard in 1995 and subsequently refined within WG21/J16.
EWG (Evolution Working Group): The newest "subcommittee" within WG21/J16, which first met in October 2001, that focuses specifically on directions for C++0x.
ISO: The International Organization for Standardization. ISO is not an acronym, however; it comes from the Latin word for "the same." At ISO meetings, the rule is "one country, one vote." (There is some discussion going on lately about changing this, but for now that's still the rule.)
ISO WG14: ISO C committee. Within the ISO organization, the technical working group responsible for C is JTC1/SC22/WG14, usually shortened to WG14.
ISO WG21: ISO C++ committee [3]. Within the ISO organization, the technical working group responsible for C++ is JTC1/SC22/WG21, usually shortened to WG21. WG21 always meets together simultaneously with ANSI J16. For the past two years it has also met at the same location, and in an adjacent week, as WG14 so as to promote cross-committee communication.
LWG (Library Working Group): The "subcommittee" within WG21/J16 that focuses on standard library issues, such as containers, algorithms, streams, and so forth.
PWG (Performance Working Group): The "subcommittee" within WG21/J16 that focuses on the Performance Technical Report, which is not shown further here as it's not as directly concerned with C++ language and library features as experienced by users (compared to, say, Defect Reports and corrections in the TCs, and new features in C++0x).
Bjarne Stroustrup: The creator of C++ and author of its first compiler, Cfront. Stroustrup continues to be active in C++ standardization and currently chairs the fledgling C++0x EWG at WG21/J16 meetings.
Alex Stepanov: The principal creator of the STL adopted as a large part of the standard C++ library.
Further Glossary
Besides the above persons and organizations, there are some other common names that deserve definition. They'll be bandied about regularly in this column:
ARM C++: C++ as of 1990. "The ARM" is an acronym for the book titled The Annotated C++ Reference Manual [4]. The ARM was used as the base document to begin the C++ standardization effort.
C++98: The first official ISO/ANSI C++ Standard, published in 1998. Officially known as ISO/IEC IS 14882:1998(E).
C++0x: The second official ISO/ANSI C++ Standard, which is getting under way now and will be published in coming years (no date yet).
C99: The second official ISO/ANSI C Standard, published in 1999. This standard contains much that the C++ committee can be expected to adopt wholesale, or with minor modifications, as part of C++0x. After all, it's clear that the C++ committee values C compatibility, and the C committee has helped us by likewise valuing C++ compatibility, which has made some of C99's features easier to integrate into C++0x than they might otherwise have been. There are still some C99 features, however, that C++0x cannot easily adopt in their C99 form, because conflicting facilities already exist in C++98 (for example, complex is a class template in C++98 and a keyword in C99).
Library Extensions TR (Technical Report): Starting in 2001, WG21/J16 began actively soliciting and evaluating proposals for extensions to the C++98 Standard library. These are being collected for later publication in the form of a "Library Extensions" TR, which is officially non-normative, but don't kid yourself as with the draft standard of C++ in the early and mid-1990s, vendors will be tracking this closely and implementing facilities as quickly as they can. Why? Because this TR, although non-normative in itself, is specifically intended to be added wholesale and verbatim into the coming-and-will-be-very-normative C++0x Standard.
STL: The groundbreaking STL developed by Alex Stepanov and Meng Lee at Hewlett-Packard Labs in the early 1990s. Most of the HP STL was adopted in 1995 and then refined to become the "containers, iterators, and algorithms" portion of the C++98 Standard library.
TCI (Technical Corrigendum 1): Completed in 2001, the first "mid-course correction" (a.k.a. "patch," a.k.a. "service pack," a.k.a. "maintenance release") to the C++98 Standard. Contains the resolutions to Defect Reports submitted by the global C++ community.
TC2 (Technical Corrigendum 2): As we continue working on the Library TR and C++0x, there will no doubt continue to be resolutions to still-pending and not-yet-received Defect Reports. Depending on the timing of C++0x, these may be issued in the form of a second TC.
Next Time
Because of publishing lead times, even on the Web, I expect to finish writing two more installments of this column before the next C++ standards meeting in April 2002. Next time: a survey of the first batch of suggested library extensions considered at the October 2001 WG21/J16 meeting in Redmond, Washington, USA. The next time after that: a closer look at one of the proposed facilities. Stay tuned.
References
[1] Herb Sutter. "Sutter's Mill: Toward a Standard C++0x Library, Part 1," C/C++ Users Journal, January 2002.
[2] Matt Austern. "The Standard Librarian: And Now for Something Completely Different," C/C++ Users Journal, January 2002, <>.
[3] The official WG21 website is at <>.
[4] Margaret Ellis and Bjarne Stroustrup. The Annotated C++ Reference Manual (Addison-Wesley, 1989).
Herb Sutter is an independent consultant and secretary of the ISO/ANSI C++ standards committee. He is also one of the instructors of The C++ Seminar (). Herb can be reached at hsutter@acm.org. Copyright 2002 Herb Sutter | http://www.drdobbs.com/cpp/the-new-c/184403817 | CC-MAIN-2013-48 | refinedweb | 1,686 | 60.24 |
# MVCC in PostgreSQL-2. Forks, files, pages
[Last time](https://habr.com/ru/company/postgrespro/blog/467437/) we talked about data consistency, looked at the difference between levels of transaction isolation from the point of view of the user and figured out why this is important to know. Now we are starting to explore how PostgreSQL implements snapshot isolation and multiversion concurrency.
In this article, we will look at how data is physically laid out in files and pages. This takes us away from discussing isolation, but such a digression is necessary to understand what follows. We will need to figure out how the data storage is organized at a low level.
Relations
=========
If you look inside tables and indexes, it turns out that they are organized in a similar way. Both are database objects that contain some data consisting of rows.
There is no doubt that a table consists of rows, but this is less obvious for an index. However, imagine a B-tree: it consists of nodes that contain indexed values and references to other nodes or table rows. It's these nodes that can be considered index rows, and in fact, they are.
Actually, a few more objects are organized in a similar way: sequences (essentially single-row tables) and materialized views (essentially, tables that remember the query). And there are also regular views, which do not store data themselves, but are in all other senses similar to tables.
All these objects in PostgreSQL are referred to by the common word *relation*. This term is rather unfortunate because it comes from relational theory. You can draw a parallel between a relation and a table (or a view), but certainly not between a relation and an index. But it just so happened: the academic origin of PostgreSQL shows itself here. It seems to me that tables and views were called so first, and the rest accumulated over time.
For simplicity, we will further discuss tables and indexes, but the other *relations* are organized in exactly the same way.
Forks and files
===============
Each relation usually has several *forks*. Forks come in several types, and each of them contains a certain kind of data.

A fork is at first represented by a single *file*. The filename is a numeric identifier, which can be appended with an ending that corresponds to the fork name.

The file gradually grows, and when its size reaches 1 GB, a new file of the same fork is created (files like these are sometimes called *segments*). The ordinal number of the segment is appended to the end of the filename.
The 1 GB limitation of the file size arose historically to support different file systems, some of which cannot deal with files of a larger size. You can change this limitation when building PostgreSQL (`./configure --with-segsize`).
So, several files on disk can correspond to one relation. For example, for a small table there will be three of them.
All files of objects that belong to one tablespace and one database will be stored in one directory. You need to have this in mind since filesystems usually fail to work fine with a large number of files in a directory.
Note here that files, in turn, are divided into *pages* (or *blocks*), usually by 8 KB. We will discuss the internal structure of pages a bit further.

Now let's look at fork types.
The **main fork** is the data itself: the very table and index rows. The main fork is available for any relations (except views that do not contain data).
The names of files of the main fork consist of the only numeric identifier. For example, this is the path to the table that we created last time:
```
=> SELECT pg_relation_filepath('accounts');
```
```
pg_relation_filepath
----------------------
base/41493/41496
(1 row)
```
Where do these identifiers arise from? The «base» directory corresponds to the «pg\_default» tablespace. Next subdirectory, corresponding to the database, is where the file of interest is located:
```
=> SELECT oid FROM pg_database WHERE datname = 'test';
```
```
oid
-------
41493
(1 row)
```
```
=> SELECT relfilenode FROM pg_class WHERE relname = 'accounts';
```
```
relfilenode
-------------
41496
(1 row)
```
The path is relative; it is specified starting from the data directory (PGDATA). Moreover, virtually all paths in PostgreSQL are specified starting from PGDATA. Thanks to this, you can safely move PGDATA to a different location: nothing ties it down (except that you might need to set the path to libraries in LD\_LIBRARY\_PATH).
Further, looking into the filesystem:
```
postgres$ ls -l --time-style=+ /var/lib/postgresql/11/main/base/41493/41496
```
```
-rw------- 1 postgres postgres 8192 /var/lib/postgresql/11/main/base/41493/41496
```
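By the way, the location of the data directory of a running server can be checked right from SQL (the output below corresponds to the installation used in this article):

```
=> SHOW data_directory;
```

```
        data_directory
------------------------------
 /var/lib/postgresql/11/main
(1 row)
```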
The **initialization fork** is only available for unlogged tables (created with UNLOGGED specified) and their indexes. Objects like these are in no way different from regular objects except that operations with them are not logged in the write-ahead log (WAL). Because of this, it is faster to work with them, but it is impossible to recover the data in a consistent state in case of a failure. Therefore, during a recovery PostgreSQL simply removes all the forks of such objects and writes the initialization fork in place of the main fork. This results in an empty object. We will discuss logging in detail, but in another series.
The «accounts» table is logged, and therefore, it does not have an initialization fork. But to experiment, we can turn logging off:
```
=> ALTER TABLE accounts SET UNLOGGED;
=> SELECT pg_relation_filepath('accounts');
```
```
pg_relation_filepath
----------------------
base/41493/41507
(1 row)
```
The example shows that the ability to turn logging on and off on the fly relies on rewriting the data into files with different names.
An initialization fork has the same name as the main fork, but with the "\_init" suffix:
```
postgres$ ls -l --time-style=+ /var/lib/postgresql/11/main/base/41493/41507_init
```
```
-rw------- 1 postgres postgres 0 /var/lib/postgresql/11/main/base/41493/41507_init
```
The **free space map** is a fork that keeps track of the availability of free space inside pages. This space is constantly changing: it decreases when new versions of rows are added and increases during vacuuming. The free space map is used during insertion of new row versions in order to quickly find a suitable page where the data to be added will fit.
The name of the free space map has the "\_fsm" suffix. But this file appears not immediately, but only as the need arises. The easiest way to achieve this is to vacuum a table (we will explain why when the time comes):
```
=> VACUUM accounts;
```
```
postgres$ ls -l --time-style=+ /var/lib/postgresql/11/main/base/41493/41507_fsm
```
```
-rw------- 1 postgres postgres 24576 /var/lib/postgresql/11/main/base/41493/41507_fsm
```
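The contents of the free space map can be examined using the pg\_freespacemap extension shipped with PostgreSQL. Note that the map stores the free space with a coarse, one-byte precision, so the reported values are approximate:

```
=> CREATE EXTENSION pg_freespacemap;
=> SELECT * FROM pg_freespace('accounts');
```

Each returned row contains a block number (blkno) and the approximate number of free bytes available in it (avail).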
The **visibility map** is a fork in which a single bit marks pages that contain only up-to-date row versions. Roughly, it means that when a transaction tries to read a row from such a page, the row can be shown without checking its visibility. In the next articles, we will discuss in detail how this happens.
```
postgres$ ls -l --time-style=+ /var/lib/postgresql/11/main/base/41493/41507_vm
```
```
-rw------- 1 postgres postgres 8192 /var/lib/postgresql/11/main/base/41493/41507_vm
```
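The bits of the visibility map can be examined in a similar way, using the pg\_visibility extension (also shipped with PostgreSQL, starting with version 9.6):

```
=> CREATE EXTENSION pg_visibility;
=> SELECT blkno, all_visible, all_frozen FROM pg_visibility_map('accounts');
```

The all\_frozen bit is related to freezing of row versions, a topic for later articles.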
Pages
=====
As already mentioned, files are logically divided into pages.
A page usually has the size of 8 KB. The size can be changed within certain limits (16 KB or 32 KB), but only during the build (`./configure --with-blocksize`). A built and running instance can only work with pages of a single size.
Regardless of the fork to which files belong, the server uses them in a pretty similar way. Pages are first read into the buffer cache, where processes can read and change them; then, as the need arises, they are evicted back to disk.
Each page has an internal layout and in general contains the following areas:
```
0 +-----------------------------------+
| header |
24 +-----------------------------------+
| array of pointers to row versions |
lower +-----------------------------------+
| free space |
upper +-----------------------------------+
| row versions |
special +-----------------------------------+
| special space |
pagesize +-----------------------------------+
```
You can easily get to know the sizes of these areas using the «research» extension pageinspect:
```
=> CREATE EXTENSION pageinspect;
=> SELECT lower, upper, special, pagesize FROM page_header(get_raw_page('accounts',0));
```
```
lower | upper | special | pagesize
-------+-------+---------+----------
40 | 8016 | 8192 | 8192
(1 row)
```
Here we are looking at the **header** of the very first (zero) page of the table. In addition to the sizes of other areas, the header has different information about the page, which we are not interested in yet.
At the bottom of the page there is the **special space**, which is empty in this case. It is only used for indexes, and even not for all of them. «At the bottom» here reflects what is in the picture; it may be more accurate to say «in high addresses».
After the special space, **row versions** are located, that is, the very data that we store in the table, plus some internal information.
At the top of a page, right after the header, there is the table of contents: the **array of pointers** to row versions available in the page.
**Free space** can be left between row versions and pointers (this free space is kept track of in the free space map). Note that there is no memory fragmentation inside a page — all the free space is represented by one contiguous area.
Pointers
--------
Why are the pointers to row versions needed? The thing is that index rows must somehow reference row versions in the table. Clearly, the reference must contain the file number, the number of the page in the file, and some indication of the row version. We could use the offset from the beginning of the page as the indicator, but that is inconvenient: we would be unable to move a row version inside the page, since doing so would break existing references. And this would result in fragmentation of the space inside pages and other troublesome consequences. Therefore, the index references the pointer number, and the pointer references the current location of the row version in the page. This is indirect addressing.
Each pointer occupies exactly four bytes and contains:
* a reference to the row version
* the size of this row version
* several bytes to determine the status of the row version
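The pageinspect extension created earlier can show this array for our page. In the output of heap\_page\_items, lp is the pointer number, lp\_off is the offset of the row version within the page, lp\_len is its size, and lp\_flags encodes its status:

```
=> SELECT lp, lp_off, lp_len, lp_flags
FROM heap_page_items(get_raw_page('accounts',0));
```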
Data format
-----------
The data format on disk is exactly the same as the data representation in RAM. A page is read into the buffer cache «as is», without any conversions. Therefore, data files from one platform turn out to be incompatible with other platforms.
For example, in the X86 architecture, the byte ordering is from least significant to most significant bytes (little-endian), z/Architecture uses the inverse order (big-endian), and in ARM the order can be swapped.
Many architectures provide for data alignment on boundaries of machine words. For example, on a 32-bit x86 system, integer numbers (type «integer», which occupies 4 bytes) will be aligned on a boundary of 4-byte words, the same way as double-precision numbers (type «double precision», which occupies 8 bytes). And on a 64-bit system, double-precision numbers will be aligned on a boundary of 8-byte words. This is one more incompatibility reason.
Because of the alignment, the size of the table row depends on the field order. Usually this effect is not very noticeable, but sometimes, it may result in a considerable growth of the size. For example, if fields of types «char(1)» and «integer» are interleaved, usually 3 bytes between them go to waste. For more details of this, you can look into Nikolay Shaplov's presentation "[Tuple internals](https://pgconf.ru/media/2016/05/13/tuple-internals.pdf)".
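The effect of the field order on the row size is easy to observe directly by comparing two row values with the same fields arranged differently (the exact figures depend on the platform; on a typical 64-bit x86 system the interleaved variant is larger because of the padding before each «integer»):

```
=> SELECT pg_column_size(ROW('x'::char, 1::integer, 'y'::char, 2::integer)) AS interleaved,
       pg_column_size(ROW(1::integer, 2::integer, 'x'::char, 'y'::char)) AS grouped;
```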
Row versions and TOAST
======================
We will discuss details of the internal structure of row versions next time. At this point, it is only important for us to know that each version must entirely fit one page: PostgreSQL has no way to «extend» the row to the next page. The Oversized Attributes Storage Technique (TOAST) is used instead. The name itself hints that a row can be sliced into toasts.
Joking aside, TOAST implies several strategies. We can move long attribute values into a separate internal table after breaking them up into small toast chunks. Another option is to compress a value so that the row version does fit a regular page. And we can do both: first compress and then break up and move.
For each primary table, a separate TOAST table can be created if needed, one for all attributes (along with an index on it). The availability of potentially long attributes determines this need. For example, if a table has a column of type «numeric» or «text», the TOAST table will be immediately created even if long values won't be used.
Since a TOAST table is essentially a regular table, it has the same set of forks. And this doubles the number of files that correspond to a table.
The initial strategies are defined by the column data types. You can look at them using the `\d+` command in psql, but since it additionally outputs a lot of other information, we will query the system catalog:
```
=> SELECT attname, atttypid::regtype, CASE attstorage
WHEN 'p' THEN 'plain'
WHEN 'e' THEN 'external'
WHEN 'm' THEN 'main'
WHEN 'x' THEN 'extended'
END AS storage
FROM pg_attribute
WHERE attrelid = 'accounts'::regclass AND attnum > 0;
```
```
attname | atttypid | storage
---------+----------+----------
id | integer | plain
number | text | extended
client | text | extended
amount | numeric | main
(4 rows)
```
The names of the strategies mean:
* plain — TOAST is not used (applied to data types known to be short, such as «integer»).
* extended — both compression and storage in a separate TOAST table are allowed.
* external — long values are stored in the TOAST table without compression.
* main — long values are first compressed and only get into the TOAST table if the compression did not help.
In general, the algorithm is as follows. PostgreSQL aims to have at least four rows fit one page. Therefore, if the row size exceeds one fourth of the page, taking the header into account (2040 bytes for a regular 8K page), TOAST must be applied to a part of the values. We follow the order described below and stop as soon as the row no longer exceeds the threshold:
1. First we go through the attributes with the «external» and «extended» strategies, from the longest attribute to the shortest. «Extended» attributes are compressed (if it is effective) and, if the value itself exceeds one fourth of the page, it immediately gets into the TOAST table. «External» attributes are processed the same way, but are not compressed.
2. If after the first pass, the row version does not fit the page yet, we transmit the remaining attributes with the «external» and «extended» strategies to the TOAST table.
3. If this did not help either, we try to compress the attributes with the «main» strategy, but leave them in the table page.
4. And only if after that, the row is not short enough, «main» attributes get into the TOAST table.
Sometimes it may be useful to change the strategy for certain columns. For example, if it is known in advance that the data in a column cannot be compressed, we can set the «external» strategy for it, which enables us to save time by avoiding useless compression attempts. This is done as follows:
```
=> ALTER TABLE accounts ALTER COLUMN number SET STORAGE external;
```
Re-running the query, we get:
```
attname | atttypid | storage
---------+----------+----------
id | integer | plain
number | text | external
client | text | extended
amount | numeric | main
```
TOAST tables and indexes are located in the separate pg\_toast schema and are, therefore, usually not visible. For temporary tables, the «pg\_toast\_temp\_*N*» schema is used similarly to the usual «pg\_temp\_*N*».
Of course, if you like, nothing prevents you from peeking into the internal mechanics of the process. Say, in the «accounts» table there are three potentially long attributes, and therefore, there must be a TOAST table. Here it is:
```
=> SELECT relnamespace::regnamespace, relname
FROM pg_class WHERE oid = (
SELECT reltoastrelid FROM pg_class WHERE relname = 'accounts'
);
```
```
relnamespace | relname
--------------+----------------
pg_toast | pg_toast_33953
(1 row)
```
```
=> \d+ pg_toast.pg_toast_33953
```
```
TOAST table "pg_toast.pg_toast_33953"
Column | Type | Storage
------------+---------+---------
chunk_id | oid | plain
chunk_seq | integer | plain
chunk_data | bytea | plain
```
Naturally, the «plain» strategy is applied to the toast chunks into which the row is sliced: there is no second-level TOAST.
PostgreSQL hides the index better, but it is not difficult to find it either:
```
=> SELECT indexrelid::regclass FROM pg_index
WHERE indrelid = (
SELECT oid FROM pg_class WHERE relname = 'pg_toast_33953'
);
```
```
indexrelid
-------------------------------
pg_toast.pg_toast_33953_index
(1 row)
```
```
=> \d pg_toast.pg_toast_33953_index
```
```
Unlogged index "pg_toast.pg_toast_33953_index"
Column | Type | Key? | Definition
-----------+---------+------+------------
chunk_id | oid | yes | chunk_id
chunk_seq | integer | yes | chunk_seq
primary key, btree, for table "pg_toast.pg_toast_33953"
```
The «client» column uses the «extended» strategy: its values will be compressed. Let's check:
```
=> UPDATE accounts SET client = repeat('A',3000) WHERE id = 1;
=> SELECT * FROM pg_toast.pg_toast_33953;
```
```
chunk_id | chunk_seq | chunk_data
----------+-----------+------------
(0 rows)
```
There is nothing in the TOAST table: repeating characters are compressed fine and after compression the value fits a usual table page.
And now let the client name consist of random characters:
```
=> UPDATE accounts SET client = (
SELECT string_agg( chr(trunc(65+random()*26)::integer), '') FROM generate_series(1,3000)
)
WHERE id = 1
RETURNING left(client,10) || '...' || right(client,10);
```
```
?column?
-------------------------
TCKGKZZSLI...RHQIOLWRRX
(1 row)
```
Such a sequence cannot be compressed, and it gets into the TOAST table:
```
=> SELECT chunk_id,
chunk_seq,
length(chunk_data),
left(encode(chunk_data,'escape')::text, 10) ||
'...' ||
right(encode(chunk_data,'escape')::text, 10)
FROM pg_toast.pg_toast_33953;
```
```
chunk_id | chunk_seq | length | ?column?
----------+-----------+--------+-------------------------
34000 | 0 | 2000 | TCKGKZZSLI...ZIPFLOXDIW
34000 | 1 | 1000 | DDXNNBQQYH...RHQIOLWRRX
(2 rows)
```
We can see that the data are broken up into 2000-byte chunks.
When a long value is accessed, PostgreSQL automatically and transparently for the application restores the original value and returns it to the client.
Certainly, it is pretty resource-intensive to compress and break up and then to restore. Therefore, to store massive data in PostgreSQL is not the best idea, especially if they are frequently used and the usage does not require transactional logic (for example: scans of original accounting documents). A more beneficial alternative is to store such data on a file system with the filenames stored in the DBMS.
The TOAST table is only used to access a long value. Besides, its own mutiversion concurrency is supported for a TOAST table: unless a data update touches a long value, a new row version will reference the same value in the TOAST table, and this saves space.
Note that TOAST only works for tables, but not for indexes. This imposes a limitation on the size of keys to be indexed.
> For more details of the internal data structure, you can read the [documentation](https://postgrespro.com/docs/postgresql/11/storage).
>
>
[Read on](https://habr.com/ru/company/postgrespro/blog/477648/). | https://habr.com/ru/post/469087/ | null | null | 3,145 | 60.45 |
Working with ImageViews and Bitmaps in Android Application Development
- Examining ImageView
- Using Bitmaps and Canvas
- Introducing Picasso
- Summary
- Q&A
- Workshop
- Exercise
What You’ll Learn in This Hour:
- Examining ImageView Bitmaps
- Using Bitmaps and Canvas
- Introducing Picasso
Images and media can play an important role in creating an exceptional Android app. In this chapter, you look at the details of handling images and bitmaps, including creating bitmaps, using drawing commands, and handling very large images.
Examining ImageView
You learned about different types of views in Hour 10, “More Views and Controls.” An ImageView is a view that displays an image, but you will find that there are unique aspects to working with images. An ImageView can display any drawable image. The source of the image can be a resource, a drawable, or a bitmap.
Displaying an Image
There are four methods available for setting an image in an ImageView. They differ by how the image to display use the following:
ImageView mainImage = (ImageView) findViewById(R.id.imageView1); mainImage.setImageResource(R.drawable.mainImage)
To populate a Drawable object from a resource, use the getResources.getDrawable() method:
Drawable myDrawable = getResources().getDrawable(R.drawable.ic_launcher);
In this hour, you populate an ImageView using a resource id as the source and then explore several properties of how an ImageView can display an image.
Using ScaleTypes in ImageView
ImageViews include a ScaleType property. The ScaleType defines how the image will be displayed. The complete set of ScaleTypes are as follows:
- has the effect of enlarging the entire image. For a large image, this has the effect of showing the center of the image.
- ImageView.ScaleType.CENTER_INSIDE: The image is scaled, and the aspect ratio is maintained. The width and height of the image fit within the ImageView.
- ImageView.ScaleType.FIT_CENTER: Maintain aspect ratio and fit the image in the center of the ImageView.
- ImageView.ScaleType.FIT_START: Maintain aspect ratio and fit the image in the left and top edge of the ImageView.
- ImageView.ScaleType.FIT_END: Maintain aspect ratio and fit the image in the right and bottom edge of the ImageView.
- ImageView.ScaleType.FIT_END: Maintain aspect ratio and fit the image in the right and bottom edge of the ImageView.
- ImageView.ScaleType.MATRIX: Scale using a matrix.
You can change scaleType dynamically in your code. Listing 11.1 show the code for an app that displays an ImageView and includes a RadioGroup and set of RadioButtons for changing the scale type. When a radio button is selected, the scaleType for the ImageView is updated.
LISTING 11.1 Changing ScaleType Programatically
1: package com.talkingandroid.hour11application; 2: import android.app.Activity; 3: import android.os.Bundle; 4: import android.widget.ImageView; 5: import android.widget.RadioGroup; 6: 7: public class ScaleActivity extends Activity { 8: RadioGroup radioGroup; 9: ImageView imageView; 10: 11: @Override 12: protected void onCreate(Bundle savedInstanceState) { 13: super.onCreate(savedInstanceState); 14: setContentView(R.layout.activity_scale); 15: radioGroup = (RadioGroup) findViewById(R.id.radioGroup); 16: imageView = (ImageView) findViewById(R.id.imageView); 17: radioGroup.setOnCheckedChangeListener(new 18: RadioGroup.OnCheckedChangeListener() { 19: @Override 20: public void onCheckedChanged(RadioGroup group, int checkedId) { 21: switch (checkedId){ 22: case R.id.radioCenter: 23: imageView.setScaleType(ImageView.ScaleType.CENTER); 24: break; 25: case R.id.radioCenterCrop: 26: imageView.setScaleType(ImageView.ScaleType.CENTER_CROP); 27: break; 28: case R.id.radioCenterInside: 29: imageView.setScaleType(ImageView.ScaleType.CENTER_INSIDE); 30: break; 31: case R.id.radioFitCenter: 32: imageView.setScaleType(ImageView.ScaleType.FIT_CENTER); 33: break; 34: case R.id.radioFitStart: 35: imageView.setScaleType(ImageView.ScaleType.FIT_START); 36: break; 37: case R.id.radioFitEnd: 38: imageView.setScaleType(ImageView.ScaleType.FIT_END); 39: break; 40: case R.id.radioFitXY: 41: imageView.setScaleType(ImageView.ScaleType.FIT_XY); 42: break; 43: } 44: } 45: }); 46: } 47:}
On line 17 of Listing 11.1, an OnCheckChangeListener() is set for the RadioGroup. When the change is detected, the select RadioButton id is checked, and the appropriate scaleType is set on the image.
The image used in the code for Listing 11.1 is shown in Figure 11.1. The image is 900 pixels wide and 200 pixels high. It is used in several other examples in this chapter.
FIGURE 11.1 Base image for showing ScaleType (scaletest.png).
By using this simple image with four circles of different colors, it is easy to see the effect of the changing ScaleType.
The ImageView is set to match the parent width and height. When the image scaleType is set to CENTER_INSIDE, the image is shown taking the full width of the ImageView and is centered with a height that is proportional to the width.
Figure 11.2 shows the base image using the scaleTypes set to CENTER, CENTER_CROP, and CENTER_INSIDE. Using CENTER shows the image in actual size. Because the size of the image is larger than the ImageView, the green and blue circles in the center are shown. CENTER_CROP shows half of the green and blue circle. The height of the image fills the ImageView. CENTER_INSIDE shows the entire image centered in the ImageView.
FIGURE 11.2 ScaleTypes CENTER, CENTER_CROP, and CENTER_INSIDE.
Figure 11.3 shows the base image using the ScaleTypes FIT_CENTER, FIT_START, FIT_END, and FIT_XY. The aspect ratio is maintained in the first three, but when using FIT_XY, the image fills the ImageView and “stretches” the image to fit.
FIGURE 11.3 ScaleTypes FIT_CENTER, FIT_START, FIT_END, and FIT_XY.
Rotating an Image
An ImageView contains several methods for rotating an image. When you rotate an image, you must set the point in the image to rotate around. That is the pivot point. The method setPivotX() and setPivotY() are used to set the pivot point.
Once the pivot point is set, you can call the setRotation() method to make the image actually rotate.
The idea in Listing 11.2 is to set the pivot point to the center of the ImageView and to rotate the image 30 degrees each time the button is clicked. The ImageView is defined to have height and width set to match_parent. The ImageView occupies the entire screen.
To get the center of the ImageView, the width and height are divided by 2. To continuously rotate, the number of clicks count is kept. The angle to rotate is 30 times the number of clicks. So, if the button is clicked twice, the image is rotated 60 degrees.
Figure 11.4 shows the rotated image.
LISTING 11.2 Rotating an Image
1: package com.talkingandroid.hour11application; 2: import android.app.Activity; 3: import android.os.Bundle; 4: import android.view.View; 5: import android.widget.Button; 6: import android.widget.ImageView; 7: 8: public class RotateActivity extends Activity { 9: Button rotateButton; 10: ImageView imageView; 11: int numClicks = 1; 12: 13: @Override 14: protected void onCreate(Bundle savedInstanceState) { 15: super.onCreate(savedInstanceState); 16: setContentView(R.layout.activity_rotate); 17: imageView = (ImageView)findViewById(R.id.imageView); 18: rotateButton = (Button) findViewById(R.id.button); 19: rotateButton.setOnClickListener(new View.OnClickListener() { 20: @Override 21: public void onClick(View v) { 22: imageView.setPivotX(imageView.getWidth()/2); 23: imageView.setPivotY(imageView.getHeight() / 2); 24: imageView.setRotation(30*numClicks); 25: numClicks++; 26: } 27: }); 28: } 29: }
FIGURE 11.4 Rotated image.
Setting Alpha
Alpha level indicates the opacity of an image. An image can be completely transparent, completely opaque, or somewhere in the middle. The alpha level can be set on an ImageView using the setAlpha() method or, since API level 11, the setImageAlpha() method. These methods take an integer parameter. A parameter of 0 indicates complete transparency and 255 for complete opacity. | https://www.informit.com/articles/article.aspx?p=2423187&seqNum=3 | CC-MAIN-2021-25 | refinedweb | 1,240 | 52.56 |
There are two ways of accessing the classes stored in a package.
1. Using import keyword
We want to use a class in a number of places in a programe or we may like to use many of the classes contained in a package. We may achieve this as follows
import java.lang.*;
import java.lang.Scanner;
The first import statement imports all the classes and interfaces of language package into your java program. On the other hand the second import statement will import only the Scanner class into your java program.
2. Using package name and class name in full
If your not imported any of the package class using import keyword; then to use such classes in your program you need to follow the java naming conventions
java.io.Scanner scr =new java.io.Scanner(System.in);
Note: only a member of this blog may post a comment. | http://www.tutorialtpoint.net/2021/12/using-system-packages-or-importing.html | CC-MAIN-2022-05 | refinedweb | 150 | 70.53 |
Motivation
Most of the programming languages are open enough to allow programmers to do things multiple ways for a similar outcome. JavaScript is in no way different. With JavaScript, we often find multiple ways of doing things for a similar outcome, and that's confusing at times.
Some of the usages are better than the other alternatives and thus, these are my favorites. I am going to list them here in this article. I am sure, you will find many of these on your list too.
1. Forget string concatenation, use template string(literal)
Concatenating strings together using the
+ operator to build a meaningful string is old school. Moreover, concatenating strings with dynamic values(or expressions) could lead to frustrations and bugs.
let name = 'Charlse'; let place = 'India'; let isPrime = bit => { return (bit === 'P' ? 'Prime' : 'Nom-Prime'); } // string concatenation using + operator let messageConcat = 'Mr. ' + name + ' is from ' + place + '. He is a' + ' ' + isPrime('P') + ' member.'
Template literals(or Template strings) allow embedding expressions. It has got unique syntax where the string has to be enclosed by the backtick. Template string can contain placeholders for dynamic values. These are marked by the dollar sign and curly braces (${expression}).
Here is an example demonstrating it,
let name = 'Charlse'; let place = 'India'; let isPrime = bit => { return (bit === 'P' ? 'Prime' : 'Nom-Prime'); } // using template string let messageTemplateStr = `Mr. ${name} is from ${place}. He is a ${isPrime('P')} member.` console.log(messageTemplateStr);
2. isInteger
There is a much cleaner way to know if a value is an integer. The
Number API of JavaScript provides a method called,
isInteger() to serve this purpose. It is very useful and better to be aware.
let mynum = 123; let mynumStr = "123"; console.log(`${mynum} is a number?`, Number.isInteger(mynum)); console.log(`${mynumStr} is a number?`, Number.isInteger(mynumStr));
Output:
3. Value as Number
Have you ever noticed,
event.target.value always returns a string type value even when the input box is of type number?
Yes, see the example below. We have a simple text box of type number. It means it accepts only numbers as input. It has an event handler to handle the key-up events.
<input type='number' onkeyup="trackChange(event)" />
In the event handler method, we take out the value using
event.target.value. But it returns a string type value. Now I will have an additional headache to parse it to an integer. What if the input box accepts floating numbers(like, 16.56)? parseFloat() then? Ah, all sorts of confusion and extra work!
function trackChange(event) { let value = event.target.value; console.log(`is ${value} a number?`, Number.isInteger(value)); }
Use
event.target.valueAsNumber instead. It returns the value as the number.
let valueAsNumber = event.target.valueAsNumber; console.log(`is ${value} a number?`, Number.isInteger(valueAsNumber));
4. Shorthand with AND
Let's consider a situation where we have a boolean value and a function.
let isPrime = true; const startWatching = () => { console.log('Started Watching!'); }
This is too much code to check for the boolean condition and invoke the function,
if (isPrime) { startWatching(); }
How about using the short-hand using the AND(&&) operator? Yes, avoid the
if statement altogether. Cool, right?
isPrime && startWatching();
5. The default value with || or ??
If you ever like to set a default value for a variable, you can do it using the OR(||) operator easily.
let person = {name: 'Jack'}; let age = person.age || 35; // sets the value 35 if age is undefined console.log(`Age of ${person.name} is ${age}`);
But wait, it has a problem. What if the person's age is 0(a just born baby, maybe). The age will be computed as 35 (
0 || 35 = 35). This is unexpected behavior.
Enter the
nullish coalescing operator (??). It is a logical operator that returns its right-hand side operand when its left-hand side operand is
null or
undefined, and otherwise returns its left-hand side operand.
To rewrite the above code with the
?? operator,
let person = {name: 'Jack'}; let age = person.age ?? 35; // sets the value 0 if age 0, 35 in case of undefined and null console.log(`Age of ${person.name} is ${age}`);
6. Randoms
Generating a random number or getting a random item from an array is a very useful method to keep handy. I have seen them appearing multiple times in many of my projects.
Get a random item from an array,
let planets = ['Mercury ', 'Mars', 'Venus', 'Earth', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']; let randomPlanet = planets[Math.floor(Math.random() * planets.length)]; console.log('Random Planet', randomPlanet);
Generate a random number from a range by specifying the min and max values,
let getRandom = (min, max) => { return Math.round(Math.random() * (max - min) + min); } console.log('Get random', getRandom(0, 10));
7. Function default params
In JavaScript, function arguments(or params) are like local variables to that function. You may or may not pass values for those while invoking the function. If you do not pass a value for a param, it will be
undefined and may cause some unwanted side effects.
There is a simple way to pass a default value to the function parameters while defining them. Here is an example where we are passing the default value
Hello to the parameter
message of the
greetings function.
let greetings = (name, message='Hello,') => { return `${message} ${name}`; } console.log(greetings('Jack')); console.log(greetings('Jack', 'Hola!'));
8. Required Function Params
Expanding on the default parameter technique, we can mark a parameter as mandatory. First, define a function to throw an error with an error message,
let isRequired = () => { throw new Error('This is a mandatory parameter.'); }
Then assign the function as the default value for the required parameters. Remember, the default values are ignored when a value is passed is as a parameter at the invocation time. But, the default value is considered if the parameter value is
undefined.
let greetings = (name=isRequired(), message='Hello,') => { return `${message} ${name}`; } console.log(greetings());
In the above code,
name will be undefined and that will try to set the default value for it which is the
isRequired() function. It will throw an error as,
9. Comma Operator
I was surprised when I realized, comma(,) is a separate operator and never gone noticed. I have been using it so much in code but, never realized its true existence.
In JavaScript, the comma(,) operator is used for evaluating each of its operands from left to right and returns the value of the last operand.
let count = 1; let ret = (count++, count); console.log(ret);
In the above example, the value of the variable
ret will be, 2. Similar way, the output of the following code will be logging the value 32 into the console.
let val = (12, 32); console.log(val);
Where do we use it? Any guesses? The most common usage of the comma(,) operator is to supply multiple parameters in a for a loop.
for (var i = 0, j = 50; i <= 50; i++, j--)
10. Merging multiple objects
You may have a need to merge two objects together and create a better informative object to work with. You can use the spread operator
...(yes, three dots!).
Consider two objects, emp and job respectively,
let emp = { 'id': 'E_01', 'name': 'Jack', 'age': 32, 'addr': 'India' }; let job = { 'title': 'Software Dev', 'location': 'Paris' };
Merge them using the spread operator as,
// spread operator let merged = {...emp, ...job}; console.log('Spread merged', merged);
There is another way to perform this merge. Using
Object.assign(). You can do it like,
console.log('Object assign', Object.assign({}, emp, job));
Output:
Note, both the spread operator and the Object.assign perform a shallow merge. In a shallow merge, the properties of the first object are overwritten with the same property values as the second object.
For deep merge, please use something like,
_merge of lodash.
11. Destructuring
The technique of breaking down the array elements and object properties as variables called,
destructuring. Let us see it with few examples,
Array
Here we have an array of emojis,
let emojis = ['🔥', '⏲️', '🏆', '🍉'];
To destructure, we would use the syntax as follows,
let [fire, clock, , watermelon] = emojis;
This is the same as doing,
let fire = emojis[0]; but with lots more flexibility.
Have you noticed, I have just ignored the trophy emoji using an empty space in-between? So what will be the output of this?
console.log(fire, clock, watermelon);
Output:
Let me also introduce something called the
rest operator here. If you want to destructure an array such that, you want to assign one or more items to variables and park the rest of it into another array, you can do that using
...rest as shown below.
let [fruit, ...rest] = emojis; console.log(rest);
Output:
Object
Like arrays, we can also destructure objects.
let shape = { name: 'rect', sides: 4, height: 300, width: 500 };
Destructuring such that, we get a name, sides in a couple of variables and rest are in another object.
let {name, sides, ...restObj} = shape; console.log(name, sides); console.log(restObj);
Output:
Read more about this topic from here.
12. Swap variables
This must be super easy now using the concept of
destructuring we learned just now.
let fire = '🔥'; let fruit = '🍉'; [fruit, fire] = [fire, fruit]; console.log(fire, fruit);
13. isArray
Another useful method for determining if the input is an Array or not.
let emojis = ['🔥', '⏲️', '🏆', '🍉']; console.log(Array.isArray(emojis)); let obj = {}; console.log(Array.isArray(obj));
14. undefined vs null
undefined is where a value is not defined for a variable but, the variable has been declared.
null itself is an empty and non-existent value that must be assigned to a variable explicitly.
undefined and
null are not strictly equal,
undefined === null // false
Read more about this topic from here.
15. Get Query Params
window.location object has a bunch of utility methods and properties. We can get information about the protocol, host, port, domain, etc from the browser URLs using these properties and methods.
One of the properties that I found very useful is,
window.location.search
The
search property returns the query string from the location URL. Here is an example URL:. The
location.search will return,
?project=js
We can use another useful interface called,
URLSearchParams along with
location.search to get the value of the query parameters.
let project = new URLSearchParams(location.search).get('project');
Output:
js
Read more about this topic from here.
This is not the end
This is not the end of the list. There are many many more. I have decided to push those to the git repo as mini examples as and when I encounter them.
atapas
/
js-tips-tricks
List of JavaScript tips and tricks I am learning everyday!
js-tips-tricks
List of JavaScript tips and tricks I am learning everyday!
- See it running here:
- Read this blog for more insights:
Many Thanks to all the
Stargazers who has supported this project with stars(
⭐)
What are your favorite JavaScript tips and tricks? How about you let us know about your favorites in the comment below?
If it was useful to you, please Like/Share so that, it reaches others as well. I am passionate about UI/UX and love sharing my knowledge through articles. Please visit my blog to know more.
You may also like,
- 10 lesser-known Web APIs you may want to use
- 10 useful HTML5 features, you may not be using
- 10 useful NPM packages you should be aware of (2020 edition)
Feel free to DM me on Twitter @tapasadhikary or follow.
Discussion
hey, awesome article. just a small correction
because of the null coalescing, if
person.age === 0, the variable
ageis
0.
Big thank for helping me to correct the typo.
Hadn't seen the
isRequiredidea before! Clever list :)
Thanks Elliot!
Number.isIntegermethod was designed to be used with numbers in order to test if it is an integer or not (NaN, Infinity, float...).
developer.mozilla.org/en-US/docs/W...
Using
typeof mynum === "number"is a better solution to test if a value is a number.
Note, that the Number rounding of floats in JavaScript is not accurate
Since this rounding affects integer representation, too, you should also consider
Number.isSafeInteger
developer.mozilla.org/en-US/docs/W...
Also please consider Integer boundaries in JS, that are represented by
Number.MAX_SAFE_INTEGERand
Number.MIN_SAFE_INTEGER
developer.mozilla.org/en-US/docs/W...
developer.mozilla.org/en-US/docs/W...
Edit: sry wanted to post on top level, somehow ended up as comment on yours
Awesome, thanks Jan!
I thought I would know everything in this list, but learned something new!!! Didn't know about the nullish coalescing operator, pretty useful! Thank you Tapas
Thanks, Oscar!
It's quite new, so it won't work everywhere yet.
?? operator not working for me. shows as below
SyntaxError: Unexpected token '?'
Which browser are you using? It should work.
Output, Age of Jack is 35
Also, check if there is a syntax error. You can paste the code as a comment.
Points 3,8,9 blew my mind away. Especially the user of a function to throw errors 🙏👏👏
Great.. Thanks, Venkatesh!
Really nice tricks to know. Nice work keep this good work going...
Good health
Thanks Sakar for the encouragements!
Awesome, thanks for sharing!
Thanks for reading and commenting!
Good tricks shared in this article I learnt some new also
Thanks Fahad!
I have used window.location.hash in pages where tab content need to be loaded initially based on url hash. I find this pretty useful
Cool, thanks for sharing Niroj! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/atapas/my-favorite-javascript-tips-and-tricks-4jn4 | CC-MAIN-2021-04 | refinedweb | 2,248 | 60.01 |
Working with Data and Conventions
Frequently, you'll want to access structured data from the program you're analyzing. angr has several features to make this less of a headache.
Working with types
angr has a system for representing types.
These SimTypes are found in `angr.types` - an instance of any of these classes represents a type.
Many of the types are incomplete unless they are supplemented with a SimState - their size depends on the architecture you're running under.
You may do this with `ty.with_state(state)`, which returns a copy of itself, with the state specified.
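As an analogy for that architecture-dependence (using the stdlib `ctypes` module here, not angr), the size of a host C `long` is likewise unknown until a platform is fixed:

```python
import ctypes

# An 'int' is 4 bytes on every mainstream modern ABI...
print(ctypes.sizeof(ctypes.c_int))   # 4

# ...but a 'long' is 4 bytes on 32-bit and Windows ABIs and 8 bytes on
# 64-bit Unix ABIs. Its size is only meaningful relative to a platform,
# just as a SimType's size is only known once a state supplies the arch.
print(ctypes.sizeof(ctypes.c_long))
```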
angr also has a light wrapper around `pycparser`, which is a C parser.
This helps with getting instances of type objects:
```python
>>> import angr

# note that SimType objects have their __repr__ defined to return their c type name,
# so this function actually returned a SimType instance.
>>> angr.types.parse_type('int')
int

>>> angr.types.parse_type('char **')
char**

>>> angr.types.parse_type('struct aa {int x; long y;}')
struct aa

>>> angr.types.parse_type('struct aa {int x; long y;}').fields
OrderedDict([('x', int), ('y', long)])
```
Additionally, you may parse C definitions and have them returned to you in a dict, either of variable/function declarations or of newly defined types:
```python
>>> angr.types.parse_defns("int x; typedef struct llist { char* str; struct llist *next; } list_node; list_node *y;")
{'x': int, 'y': struct llist*}

>>> defs = angr.types.parse_types("int x; typedef struct llist { char* str; struct llist *next; } list_node; list_node *y;")
>>> defs
{'list_node': struct llist}

# if you want to get both of these dicts at once, use parse_file, which returns both in a tuple.
>>> defs['list_node'].fields
OrderedDict([('str', char*), ('next', struct llist*)])
>>> defs['list_node'].fields['next'].pts_to.fields
OrderedDict([('str', char*), ('next', struct llist*)])

# If you want to get a function type and you don't want to construct it manually,
# you have to use parse_defns, not parse_type
>>> angr.types.parse_defns("int x(int y, double z);")
{'x': (int, double) -> int}
```
And finally, you can register struct definitions for future use:
```python
>>> angr.types.define_struct('struct abcd { int x; int y; }')
>>> angr.types.register_types(angr.types.parse_types('typedef long time_t;'))
>>> angr.types.parse_defns('struct abcd a; time_t b;')
{'a': struct abcd, 'b': long}
```
These type objects aren't all that useful on their own, but they can be passed to other parts of angr to specify data types.
Accessing typed data from memory
Now that you know how angr's type system works, you can unlock the full power of the `state.mem` interface!
Any type that's registered with the types module can be used to extract data from memory.
```python
>>> import angr
>>> b = angr.Project('examples/fauxware/fauxware')
>>> s = b.factory.entry_state()

>>> s.mem[0x601048]
<<untyped> <unresolvable> at 0x601048>
>>> s.mem[0x601048].long
<long (64 bits) <BV64 0x4008d0> at 0x601048>
>>> s.mem[0x601048].long.resolved
<BV64 0x4008d0>
>>> s.mem[0x601048].long.concrete
4196560L
>>> s.mem[0x601048].abcd
<struct abcd {
  .x = <int (32 bits) <BV32 0x4008d0> at 0x601048>,
  .y = <int (32 bits) <BV32 0x0> at 0x60104c>
} at 0x601048>
>>> s.mem[0x601048].deref
<<untyped> <unresolvable> at 0x4008d0>
>>> s.mem[0x601048].deref.string
<string_t <BV64 0x534f534e45414b59> at 0x4008d0>
>>> s.mem[0x601048].deref.string.resolved
<BV64 0x534f534e45414b59>
>>> s.mem[0x601048].deref.string.concrete
'SOSNEAKY'
```
The interface works like this:
- You first use [array index notation] to specify the address you'd like to load from
- If a pointer is stored at that address, you may access the `deref` property to return a SimMemView at the address present in memory.
- You then specify a type for the data by simply accessing a property of that name. For a list of supported types, look at `state.mem.types`.
- You can then refine the type. Any type may support any refinement it likes. Right now the only refinements supported are that you may access any member of a struct by its member name, and you may index into a string or array to access that element.
- If the address you specified initially points to an array of that type, you can say `.array(n)` to view the data as an array of n elements.
- Finally, extract the structured data with `.resolved` or `.concrete`. `.resolved` will return bitvector values, while `.concrete` will return integer, string, array, etc. values - whatever best represents the data.
- Alternately, you may store a value to memory, by assigning to the chain of properties that you've constructed. Note that because of the way python works, `x = s.mem[...].prop; x = val` will NOT work; you must say `s.mem[...].prop = val`.
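The reason the `x = val` pattern fails is plain Python name binding: the property chain hands you a value, and rebinding a local name never triggers a write. A toy stand-in (hypothetical code, not angr's actual SimMemView) makes the difference visible:

```python
class MemView:
    """Toy stand-in for a memory view object backed by a store."""
    def __init__(self, store, addr):
        self._store, self._addr = store, addr

    @property
    def prop(self):
        # reading goes through the getter and returns a plain value
        return self._store[self._addr]

    @prop.setter
    def prop(self, val):
        # assigning to the property goes through the setter
        self._store[self._addr] = val

store = {0x601048: 0}
view = MemView(store, 0x601048)

x = view.prop   # x is just the current value, a plain int
x = 42          # rebinds the local name; the store is untouched
print(store[0x601048])  # 0

view.prop = 42  # goes through the setter, so the store is updated
print(store[0x601048])  # 42
```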
If you define a struct using
define_struct or
register_types, you can access it here as a type:
```python
>>> s.mem[b.entry].abcd
<struct abcd {
  .x = <int (32 bits) <BV32 0x8949ed31> at 0x400580>,
  .y = <int (32 bits) <BV32 0x89485ed1> at 0x400584>
} at 0x400580>
```
Working with Calling Conventions
A calling convention is the specific means by which code passes arguments and return values through function calls. While angr comes with a large number of pre-built calling conventions, and a lot of logic for refining calling conventions for specific circumstances (e.g. floating point arguments need to be stored in different locations, it gets worse from there), it will inevitably be insufficient to describe all possible calling conventions a compiler could generate. Because of this, you can customize a calling convention by describing where the arguments and return values should live.
angr's abstraction of calling conventions is called SimCC.
You can construct new SimCC instances through the angr object factory, with `b.factory.cc(...)`.
- Pass as the `args` keyword argument a list of argument storage locations
- Pass as the `ret_val` keyword argument the location where the return value should be stored
- Pass as the `func_ty` keyword argument a SimType for the function prototype.
- Pass it none of these things to use a sane default for the current architecture!
To specify a value location for the `args` or `ret_val` parameters, use instances of the `SimRegArg` or `SimStackArg` classes.
You can find them in the factory - `b.factory.cc.Sim*Arg`.
Register arguments should be instantiated with the name of the register you're storing the value in, and the size of the register in bytes.
Stack arguments should be instantiated with the offset from the stack pointer at the time of entry into the function and the size of the storage location, in bytes.
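To make the two location kinds concrete, here is a miniature model of argument extraction (purely illustrative - the `RegArg`/`StackArg` classes and dict-based state below are made up for this sketch, not angr's API):

```python
import struct

class RegArg:
    """An argument that lives in a named register."""
    def __init__(self, name, size):
        self.name, self.size = name, size
    def fetch(self, state):
        return state["regs"][self.name]

class StackArg:
    """An argument that lives at sp + offset, `size` bytes, little-endian."""
    def __init__(self, offset, size):
        self.offset, self.size = offset, size
    def fetch(self, state):
        sp = state["regs"]["sp"]
        raw = state["mem"][sp + self.offset : sp + self.offset + self.size]
        return int.from_bytes(raw, "little")

# A toy convention: first argument in a register, second on the stack.
cc_args = [RegArg("rdi", 8), StackArg(8, 8)]

state = {
    "regs": {"rdi": 1337, "sp": 0},
    # 8 bytes for the return-address slot, then one 8-byte stack argument
    "mem": bytes(8) + struct.pack("<Q", 42),
}
print([loc.fetch(state) for loc in cc_args])  # [1337, 42]
```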
Once you have a SimCC object, you can use it along with a SimState object to extract or store function arguments more cleanly.
Take a look at the API documentation for details.
Alternately, you can pass it to an interface that can use it to modify its own behavior, like
b.factory.call_state, or...
Callables
Callables are a Foreign Function Interface (FFI) for symbolic execution.
Basic callable usage is to create one with `myfunc = b.factory.callable(addr)`, and then call it: `result = myfunc(args, ...)`
When you call the callable, angr will set up a `call_state` at the given address, dump the given arguments into memory, and run a `path_group` based on this state until all the paths have exited from the function.
Then, it merges all the result states together, pulls the return value out of that state, and returns it.
All the interaction with the state happens with the aid of a
SimCC, to tell where to put the arguments and where to get the return value.
By default, it uses a sane default for the architecture, but if you'd like to customize it, you can pass a
SimCC object in the
cc keyword argument when constructing the callable.
You can pass symbolic data as function arguments, and everything will work fine.
You can even pass more complicated data, like strings, lists, and structures as native python data (use tuples for structures), and it'll be serialized as cleanly as possible into the state.
If you'd like to specify a pointer to a certain value, you can wrap it in a
PointerWrapper object, available as
b.factory.callable.PointerWrapper.
The exact semantics of how pointer-wrapping works are a little confusing, but they can be boiled down to: "unless you specify it with a PointerWrapper or a specific SimArrayType, nothing will be wrapped in a pointer automatically - unless serialization gets to the end, the value hasn't yet been wrapped in a pointer, and the original type is a string, array, or tuple."
The relevant code is actually in SimCC - it's the
setup_callsite function.
If you don't care for the actual return value of the call, you can say
func.perform_call(arg, ...), and then the properties
func.result_state and
func.result_path_group will be populated.
They will actually be populated even if you call the callable normally, but you probably care about them more in this case! | https://docs.angr.io/docs/structured_data.html | CC-MAIN-2017-51 | refinedweb | 1,459 | 56.35 |
Heap Sort:
Heap sort is a comparison-based sorting technique built on the Binary Heap data structure. It is similar to selection sort: we first find the maximum element and place it at the end, then repeat the same process for the remaining elements.
Binary Heap:
Let us first define a Complete Binary Tree. A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible. A Binary Heap is a complete binary tree in which the value at each node is greater than or equal to the values of its children (a max-heap).
Program to implement Heap Sort in C++
#include <iostream>
using namespace std;

void max_heapify(int *a, int i, int n)
{
    int j, temp;
    temp = a[i];
    j = 2*i;
    while (j <= n)
    {
        if (j < n && a[j+1] > a[j])
            j = j+1;
        if (temp > a[j])
            break;
        else if (temp <= a[j])
        {
            a[j/2] = a[j];
            j = 2*j;
        }
    }
    a[j/2] = temp;
    return;
}

void heapsort(int *a, int n)
{
    int i, temp;
    for (i = n; i >= 2; i--)
    {
        temp = a[i];
        a[i] = a[1];
        a[1] = temp;
        max_heapify(a, 1, i - 1);
    }
}

void build_maxheap(int *a, int n)
{
    int i;
    for (i = n/2; i >= 1; i--)
    {
        max_heapify(a, i, n);
    }
}

int main()
{
    int n, i;
    cout << "Enter no of elements of array\n";
    cin >> n;
    int a[20];
    for (i = 1; i <= n; i++)
    {
        cout << "Enter element" << i << endl;
        cin >> a[i];
    }
    build_maxheap(a, n);
    heapsort(a, n);
    cout << "\n\nSorted Array\n";
    for (i = 1; i <= n; i++)
    {
        cout << a[i] << endl;
    }
    return 0;
}
Sample Output:
Enter no of elements of array
5
Enter element1
3
Enter element2
8
Enter element3
9
Enter element4
3
Enter element5
2

Sorted Array
2
3
3
8
9
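For comparison, the same sift-down idea can be sketched compactly in Python. This is an illustration only, not part of the original article, and it uses the more common 0-indexed array layout (children at 2*i+1 and 2*i+2) rather than the 1-indexed layout of the C++ listing above (children at 2*i and 2*i+1):

```python
def max_heapify(a, i, n):
    # Sift a[i] down within a[0:n] so the subtree rooted at i is a max-heap.
    while True:
        largest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build the max-heap bottom-up
        max_heapify(a, i, n)
    for end in range(n - 1, 0, -1):       # repeatedly move the max to the end
        a[0], a[end] = a[end], a[0]
        max_heapify(a, 0, end)
    return a

print(heap_sort([3, 8, 9, 3, 2]))   # [2, 3, 3, 8, 9]
```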
Can someone please explain what

    ledValue = (adcValue >> 7);
    /* Light up all LEDs up to ledValue */
    LED_PORT = 0;
    for (i = 0; i <= ledValue; i++) {
        LED_PORT |= (1 << i);
    }

is doing in the code mentioned below?

    int main(void) {
        // -------- Inits --------- //
        uint8_t ledValue;
        uint16_t adcValue;
        uint8_t i;
        initADC0();
        LED_DDR = 0xff;
        // ------ Event loop ------ //
        while (1) {
            ADCSRA |= (1 << ADSC);                  /* start ADC conversion */
            loop_until_bit_is_clear(ADCSRA, ADSC);  /* wait until done */
            adcValue = ADC;                         /* read ADC in */
            /* Have 10 bits, want 3 (eight LEDs after all) */
            ledValue = (adcValue >> 7);
            /* Light up all LEDs up to ledValue */
            LED_PORT = 0;
            for (i = 0; i <= ledValue; i++) {
                LED_PORT |= (1 << i);
            }
            _delay_ms(50);
        }           /* End event loop */
        return (0); /* This line is never reached */
    }
8_Bit_ADC bit shifting
The comments tell all! Effectively you have an 8-LED bargraph.
If the ADC reference is 5 V, each LED will represent 5/8 of a volt of input.
So now we are left with a 3 bit value (representing 0-7). The next part of the code simply counts up to that value in an i loop. Each i pass, the port is or'd with a 1-bit shifted i positions left, thus turning on one additional led position (the next position). If the 3 bit value was 5, we'd end up turning on 5 leds (one by one). It happens so fast that it appears they light up simultaneously. You could add a small delay in the loop for a "bargraph" effect.
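The arithmetic above is easy to sanity-check off-chip. Here is a small Python simulation of those two lines (the function name is made up for illustration; LED_PORT is modeled as a plain integer):

```python
# Simulate: 10-bit ADC reading -> top 3 bits -> bargraph bit pattern.
def led_port(adc_value):
    led_value = adc_value >> 7          # keep the top 3 of 10 bits (0..7)
    port = 0
    for i in range(led_value + 1):      # i = 0 .. led_value, inclusive
        port |= 1 << i                  # turn on one more LED each pass
    return port

print(bin(led_port(0)))      # 0b1 (the lowest LED is always on)
print(bin(led_port(512)))    # 0b11111 (half scale -> five LEDs)
print(bin(led_port(1023)))   # 0b11111111 (full scale -> all eight)
```

Half scale (512 out of 1023) shifts down to 4, so five LEDs (bits 0 through 4) light: exactly the bargraph behaviour described above.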
Note your code does not explicitly set up the ADC for use (such as channel selection, reference level, clocking rate, enabling ADC power, etc.), other than the call to initADC0(). Whatever that does (in case you have trouble): it could just be a return statement, for all we know, and that wouldn't work!
Currently there is no natural way to execute a Java process in the background. Here is a chat I had with merscwog in the IRC channel about this topic:
<edovale> any of you guys know how can I get the javaexec task not to wait for process termination before returning?
<merscwog> Not sure that it can be done right now. Adam would know for sure, but he's not on at the moment. Basically you would need something like the setDaemon() option that the JettyRun tasks have.
<merscwog> Presumably you want to join with the forked process sometime in a later task, or you just want to leave the forked process running until explicitly stopped.
<edovale> I will need to stop the process after the tests are done.
<edovale> Do you think a better approach could be to use the ant exec task?
<merscwog> I've used the built in groovy string execute() methods, or the standard Java ProcessBuilder and calling start() on that and handling the returned Process object later.
<edovale> Thanks, I'll look into that.
<merscwog> You also might consider filing a JIRA about enhancing the Exec task and JavaExec tasks to allow for running something in the background, and setting a Process object as part of the task that can be manipulated by a further task (to allow waitFor() and destroy())
<edovale> Are you then certain that it can not be done now?
<merscwog> No. I am not certain, but it delegates to a org.gradle.process.internal.DefaultJavaExecAction which only has one execute() method, and it has a waitForFinish() call directly before it checks to see if isIgnoreExitValue is set.
<merscwog> Hmm, I guess you could in theory simply override the javaExecHandleBuilder JavaExecAction object and do what you'd want.
<edovale> ok.. that sounds like certainty to me.. I could definitely overwrite the javaExecHandlerBuilder but IMHO the task should expose this functionality in a more natural way. It doesn't seem to me this is an odd requirement.
<edovale> I'll file the jira issue..
It should be the equivalent of spawn=true in the Ant Java task.
Related forum issue:
Are there any updates to this ticket? Can we include in 1.1 release?
@Pablo, it probably won't make it into the 1.1 release. We'll try and see if we can squeeze it in.
There are some pretty simple work-arounds:
def process = ['java', '-cp', 'some-path', 'SomeMainClass', 'arg'].execute()
process.in.close()
process.out.close()
process.err.close()
Thank you Adam we'll try those out. Please keep us posted.
Any chance to get this fixed?
It would be great, e.g., to launch GWT super dev mode from within the IDE and still have its Gradle integration available for other useful things (like dependency resolution). Also, switching the Java execution code to Ant calls is not so painless because of issues related to different classpath definitions.
I would really love this. This is now the second round of searching where I have this exact need.
Actually, what I also need is the ability to fire off a process which spawns threads, and when the main thread returns, I want the task to be finished. However, the threads run on in the background.
This is obviously going to be used for integration tests - whereby I start up e.g. a "self-contained Jetty" in a main method, and then when the application is up, the main method exits, and I can go for the tests.
Using ant.java, I can get this "when the main method exits, the task is done" feature (by having both fork and spawn = false), but I then ran into some other issues (among them that it runs within a restrictive security manager, not allowing e.g. MBeans to be registered).
Also using ant.java, I can spawn it totally, but then I both lose the standard output and would need to implement some shaky polling strategy to check if the thing has come up. (Full fork/spawn/disown is also very good in itself, e.g. to do a full build on the command line, spawn the result, and exit Gradle. But when I need the spawn to just exist for the duration of the rest of the build, to interact with it from the rest of the build script for testing, the full fork is not what I need.)

This issue is now tracked on GitHub:
How to convert csv into ped format?
I want to convert genotype data into numeric (0, 1 and 2) format.
I tried to make this conversion using Plink. How can I read a .csv file into Plink? How can I make this conversion? What about missing data?
Thanks.
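As a rough illustration of the recoding itself (independent of Plink): counting copies of a chosen reference allele per genotype gives the 0/1/2 coding, with missing calls kept apart. The function below is a hypothetical Python sketch, not Plink's own behaviour; note that Plink expects .ped/.map text files rather than .csv, and its missing-data conventions differ:

```python
# Sketch: recode genotype strings (e.g. "AA", "AG", "GG") as 0/1/2 counts
# of a chosen reference allele. Names and coding scheme are illustrative.

def recode(genotype, ref_allele, missing="NA"):
    """Count how many copies of ref_allele a genotype carries."""
    if genotype is None or genotype in ("", "--", "00", missing):
        return missing                       # keep missing calls marked
    return sum(1 for allele in genotype if allele == ref_allele)

row = ["AA", "AG", "GG", "--"]
print([recode(g, "A") for g in row])         # [2, 1, 0, 'NA']
```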
See also questions close to this topic
- Fetch data within a collection from mongodb from R
- Assigning dummy values based on previous occurrences in R
Consider the data frame below:
   nameID titleID year dummy
1       a       b 1999     1
2       e       c 1999     1
3       i       d 2000     0
4       o       f 2000     0
5       a       f 2000     1
6       e       g 2001     0
7       i       h 2002     0
8       i       j 2003     0
9       u       k 2003     1
10      o       l 2004     1
11      a       m 2004     0
12      o       m 2004     0
13      u       n 2005     0
I need a script that will add a new column, "dummycount", and assign either value 0 or 1, depending on the following conditions:
- 0 = For a given "nameID", no previous occurrence of 1 in the column "dummy"
- 1 = For a given "nameID", at least a single occurrence of 1 in the column "dummy".
Here is an example of the desired output, that I put together:
   nameID titleID year dummy dummycount
1       a       b 1999     1          0
2       e       c 1999     1          0
3       i       d 2000     0          0
4       o       f 2000     0          0
5       a       f 2000     1          1
6       e       g 2001     0          1
7       i       h 2002     0          0
8       i       j 2003     0          0
9       u       k 2003     1          0
10      o       l 2004     1          0
11      a       m 2004     0          1
12      o       m 2004     0          1
13      u       n 2005     0          1
As you see, "dummycount" only takes the value 1 if the "nameID" has at least one previous occurrence of 1 in the "dummy" column.
Thank you for your help!
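The question is posed in R, but the underlying logic is just a per-group running flag. Here is a language-neutral sketch in Python (data abbreviated to the nameID and dummy columns; illustrative only):

```python
# For each nameID, dummycount is 1 iff a 1 appeared in dummy on an
# EARLIER row for that nameID (strictly before the current row).

rows = [("a", 1), ("e", 1), ("i", 0), ("o", 0), ("a", 1), ("e", 0),
        ("i", 0), ("i", 0), ("u", 1), ("o", 1), ("a", 0), ("o", 0),
        ("u", 0)]

seen = {}                       # nameID -> has a 1 been seen so far?
dummycount = []
for name, dummy in rows:
    dummycount.append(1 if seen.get(name, False) else 0)
    if dummy == 1:
        seen[name] = True

print(dummycount)   # [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1]
```

This reproduces the desired output above; the same idea maps onto a grouped cumulative sum shifted by one row in R or pandas.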
- R: Append multiple rows to dataframe within for-loop
I have PDF files that I made from these wikipedia pages (for example):
I have a list of keywords I want to search for within the document and extract the sentences in which they appear.
keywords <- c("altitude", "range", "speed")
I can call the file, extract the text from the PDF, pull the sentences with the keywords from the PDF. This works if I do this with each of the keywords individually, but when I try to do this in a loop I keep getting this issue where the rows aren't appending. Instead it's almost doing a cbind and then an error gets thrown regarding the number of columns. Here is my code and any help you can provide as to what I can do to make this work is much appreciated.
How do I get the rows to append correctly and appear in one file per PDF?
pdf.files <- list.files(path = "/path/to/file", pattern = "*.pdf",
                        full.names = FALSE, recursive = FALSE)

for (i in 1:length(pdf.files)) {
  for (j in 1:length(keywords)) {
    text <- pdf_text(file.path("path", "to", "file", pdf.files[i]))
    text2 <- tolower(text)
    text3 <- gsub("\r", "", text2)
    text4 <- gsub("\n", "", text3)
    text5 <- grep(keywords[j], unlist(strsplit(text4, "\\.\\s+")), value = TRUE)
  }
  temp <- rbind(text5)
  assign(pdf.files[i], temp)
}
After I get the rows to append correctly the next step will be to add in the keywords as a variable to the left of the extracted sentences. Example of ideal output:
keywords   sentence
altitude   sentence1.1
altitude   sentence1.2
range      sentence2.1
range      sentence2.2
range      sentence2.3
speed      sentence3.1
speed      sentence3.2
Would this be done in the loop as well or post as a separate function?
Any help is appreciated.
- Magento Error while Bulk Uploading Products
Error Message Says:
Fatal error: Uncaught TypeError: Argument 1 passed to Mage_Catalog_Model_Convert_Adapter_Product::saveRow() must be of the type array, boolean given, called in /home/mezmiz35/public_html/app/code/core/Mage/Adminhtml/controllers/System/Convert/ProfileController.php on line 250 and defined in /home/mezmiz35/public_html/app/code/core/Mage/Catalog/Model/Convert/Adapter/Product.php:625 Stack trace: #0 /home/mezmiz35/public_html/app/code/core/Mage/Adminhtml/controllers/System/Convert/ProfileController.php(250): Mage_Catalog_Model_Convert_Adapter_Product->saveRow(false) #1 /home/mezmiz35/public_html/app/code/core/Mage/Core/Controller/Varien/Action.php(418): Mage_Adminhtml_System_Convert_ProfileController->batchRunAction() #2 /home/mezmiz35/public_html/app/code/core/Mage/Core/Controller/Varien/Router/Standard.php(254): Mage_Core_Controller_Varien_Action->dispatch('batchRun') #3 /home/mezmiz35/public_html/app/code/core/Mage/Core/Controller/Varien/Front.php(172): Mage_Core_Controller_Varien_Router_Standard->match(Object(Mage_Core_C in /home/mezmiz35/public_html/app/code/core/Mage/Catalog/Model/Convert/Adapter/Product.php on line 625
Please advise what went wrong here.
- TypeError: 'int' object is not subscriptable - when trying to create csv files
My dictionary
auto_annolooks like this:
defaultdict(<class 'dict'>, {'Beda': {'Fuery': {'anger': 2, 'anticipation': 1, 'disgust': 2, 'fear': 2, 'sadness': 2}}, 'Fuery': {'Beda': {'surprise': 1}, 'Fuery': {'anger': 1, 'anticipation': 6, 'disgust': 2, 'fear': 1, 'joy': 5, 'sadness': 2, 'surprise': 4, 'trust': 4}, 'Hawkeye': {'anger': 1, 'fear': 3, 'trust': 1},...#etc
My goal is to automatically create two csv files using these kinds of dictionaries: one csv file for nodes (character Ids from 0 to x, and their Label, aka the character's name), and a second csv file for their relations according to an emotion and its weight (here: the keys of the first dict are the source, and the keys of the nested dict are the target).
So far I came up with this function that uses pickle to load the dictionary above:
def automate_gephi():
    """CREATES TWO CSV FILES TO USE IN GEPHI"""
    auto_anno = pickle.load(open("auto_anno.p", "rb"))
    characters = set()
    # this is the predicted graph (a dictionary where each key is an
    # experiencer and each value is a stimulus with emotions)
    for char1, value in auto_anno.items():
        for char2, val in value.items():
            characters.add(char1)
            characters.add(char2)
    file_node = open("nodes.csv", "w")  # only nodes and id's go here
    file_node.write("Id" + "\t" + "Label" + "\n")
    # for each node create a numeric id and write to file
    for n, name in enumerate(characters):
        file_node.write(str(n) + "\t" + "%s" % name + "\n")
    file_node.close()
    # edges
    read_nodes = open("nodes.csv", "r")
    edges_file = open("edges.csv", "w")
    sep = "\t"
    edges_file.write("Source" + sep + "Target" + sep + "Label" + sep + "Weight" + "\n")
    Adjacency = {}
    for line in read_nodes:
        try:
            Adjacency[line.strip().split("\t")[1]] = line.strip().split("\t")[0]
        except IndexError:
            pass
        continue
    for key, value in auto_anno.items():
        source = key
        for k1, v1 in value.items():
            target = k1
            for emotion, weight in v1.items():
                try:
                    edges_file.write(str(Adjacency[source]) + sep +
                                     str(Adjacency[target]) + sep + emotion + sep +
                                     " ".join([i for i in weight["Weight"]]) + "\n")
                except KeyError:
                    pass
    edges_file.close()
But I'm getting this error message:
line 224, in automate_gephi
    " ".join([i for i in weight["Weight"]])+"\n")
TypeError: 'int' object is not subscriptable
An example of the desired output:
FILE 1: Nodes:
Id  Label
0   Beda
1   Fuery
2   Hawkeye
FILE 2: Edges:
Source  Target  Label         Weight
0       1       anger         2
0       1       anticipation  1
.
.
.  #etc
What am I missing here? Any help is appreciated!
Thanks in advance!
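For what it's worth, the traceback already names the culprit: by the time the inner loop runs, weight is the integer emotion count (e.g. {'anger': 2} yields emotion = 'anger', weight = 2), and an int cannot be indexed with ["Weight"]. A minimal sketch of the pattern and one possible fix (illustrative values, not the poster's exact code):

```python
emotions = {"anger": 2, "anticipation": 1}   # shaped like one inner v1 dict

for emotion, weight in emotions.items():
    # weight is already an int here, so weight["Weight"] would raise
    # TypeError: 'int' object is not subscriptable.
    line = "0\t1\t" + emotion + "\t" + str(weight)   # convert, don't index
    print(line)
```

In the original function, one way out would be to replace the " ".join([i for i in weight["Weight"]]) expression with str(weight).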
- Auto refresh layer data on ArcGIS
I'm trying to get data to auto refresh on a map layer I have added on ArcGIS. The information shows up just fine but does not refresh automatically as it does on the web page.
I have tried converting from its original source (XML on a web page updating every 15 minutes) to Excel, CSV (Both also update just fine) as well as trying to pull from Google Drive. I should mention I am using a free trial to test this program out so there's probably a little handcuffing.
Any help would be appreciated.
- Plink send commands after sudo
I'm trying to find a solution for my current issue:
I have the following: plink, a commands.txt file, a Linux server, and a user that needs to sudo on that Linux server.
What I want to achieve is:
force plink to run the commands from commands.txt; however, the first thing I need to do is escalate to sudo with:
sudo rootsh -i -u
And then run a script under that user.
I'm using plink -t to be able to enter the password for the sudo escalation, however I haven't found a way to send the command to run that script.
I cannot change anything on the server side. Also, I am a bit limited on what I can install on my side (e.g. I could probably use expect), but I would rather automate it 98%, leaving just the entering of the sudo password to be done manually.
I've tried various combinations and nothing seems to work so far.
Any suggestions are appreciated
- Unexpected char when executing .ps1 file using PuTTY Plink
I'm executing some scripts remotely to get information from a server, using the Plink tool from PuTTY. The trouble comes when I use a .ps1 file, because a '?' appears at the beginning, making the first line incorrect, while .bat files work as desired.
For example, I want to print the content of a file:
GetDate.bat:
type C:/Data/DateOfCompilation.txt
And then:
PS C:/Users/MyUser> plink -ssh <User>@<IP> -i C:\Key.ppk -m C:\Scripts\GetDate.bat
10/09/2018 14:32:02,72
Everything okay
GetDate.ps1:
Get-Content -Path C:/Data/DateOfCompilation.txt
Execution:
PS C:/Users/MyUser> plink -ssh <User>@<IP> -i C:\Key.ppk -m C:\Scripts\GetDate.ps1
?Get-Content : The term '?Get-Content' is not recognized as the name of a cmdlet, function, script file, or operable
program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ ?Get-Content -Path C:/Data/DateOfCompilation.txt
+ ~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (?Get-Content:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
Also, if I add more code, the other lines work fine, it's just the first one which fails with that '?' added at the beginning.
(However, running locally the script works fine)
I have other ps1 scripts much more extended, so using only bat files is not the best option.
I have looked at the documentation, other forums and here, but I'm not able to find anything. Maybe I do not know anything about ps1 files.
- permission denied for plink but works on putty
I have setup ssh for git on a Linux machine.
When I ssh using putty (with user name and password) and run
cd {git dir} && git fetch --all
I get
# git fetch --all Fetching origin
and it succeed
but when I try to do the same with plink
plink -ssh {ipAddress} -l username -pw password "cd {git dir} && git fetch --all"
I get
Git Fetching Fetching origin Permission denied (publickey). fatal: The remote end hung up unexpectedly error: Could not fetch origin
What's going on here, and how can I fix it?
Obiriec Close Profit v.1.0.1 paid.
Support:
If you need this cBot you can contact me via telegram here: telegram
The cost of this cBot is €200, which you can pay with PayPal at the address that will be communicated to you.
How to install
using cAlgo.API;

namespace cAlgo
{
    [Robot(TimeZone = TimeZones.UTC, AccessRights = AccessRights.None)]
    public class ObiriecCloseProfit : Robot
    {
        protected override void OnStart()
        {
            string msg = "If you need this cBot you can contact me via telegram here:";
            string msg2 = "The cost of this cBot is € 200 which you can pay with Paypal at the address that communicates to you.";
            ChartObjects.DrawText("botcomment", msg, StaticPosition.TopLeft, Colors.Red);
        }
    }
}
interface for the XSLT attribute handling this module handles the specificities of attribute and attribute groups processing. Daniel Veillard interface for the document handling implements document loading and cache (multiple document() reference for the same resources must be equal. Daniel Veillard interface for the extension support This provide the API needed for simple and module extension support. Daniel Veillard interface for the non-standard features implement some extension outside the XSLT namespace but not EXSLT with is in a different library. Daniel Veillard interface for the XSLT functions not from XPath a set of extra functions coming from XSLT but not in XPath Daniel Veillard and Bjorn Reese <breese@users.sourceforge.net> interface for the XSLT import support macros and fuctions needed to implement and access the import tree Daniel Veillard interface for the key matching used in key() and template matches. implementation of the key mechanims. Daniel Veillard interface for the XSLT namespace handling set of function easing the processing and generation of namespace nodes in XSLT. Daniel Veillard Implementation of the XSLT number functions Implementation of the XSLT number functions Bjorn Reese <breese@users.sourceforge.net> and Daniel Veillard interface for the pattern matching used in template matches. the implementation of the lookup of the right template for a given node must be really fast in order to keep decent performances. Daniel Veillard precomputing stylesheets this is the compilation phase, where most of the stylesheet is "compiled" into faster to use data. Daniel Veillard interface for the libxslt security framework the libxslt security framework allow to restrict the access to new resources (file or URL) from the stylesheet at runtime. Daniel Veillard interface for the template processing This set of routine encapsulates XPath calls and Attribute Value Templates evaluation. Daniel Veillard the XSLT engine transformation part. 
This module implements the bulk of the actual Daniel Veillard interface for the variable matching and lookup. interface for the variable matching and lookup. Daniel Veillard Interfaces, constants and types related to the XSLT engine Interfaces, constants and types related to the XSLT engine Daniel Veillard internal data structures, constants and functions Internal data structures, constants and functions used by the XSLT engine. They are not part of the API or ABI, i.e. they can change without prior notice, use carefully. Daniel Veillard macros for marking symbols as exportable/importable. macros for marking symbols as exportable/importable. Igor Zlatkovic <igor@zlatkovic.com> set of utilities for the XSLT engine interfaces for the utilities module of the XSLT engine. things like message handling, profiling, and other generally useful routines. Daniel Veillard Macro to check if the XSLT processing should be stopped. Will return from the function. Macro to check if the XSLT processing should be stopped. Will return from the function with a 0 value. Macro to check if the XSLT processing should be stopped. Will goto the error: label. quick check for xslt namespace attribute Checks that the element pertains to XSLT namespace. quick check whether this is an xslt element Checks the value of an element in XSLT namespace. Check that a node is a 'real' one: document, element, text or attribute. check for bit 15 set Special value for undefined namespace, internal Macro to do a casting from an object pointer to a function pointer without encountering a warning from gcc #define XML_CAST_FPTR(fptr) (*(void **)(&fptr)) This macro violated ISO C aliasing rules (gcc4 on s390 broke) so it is disabled now get pointer to compiler context The XSLT "vendor" URL for this processor. The XSLT "vendor" string for this processor. The default version of XSLT supported. Internal define to enable usage of xmlXPathCompiledEvalToBoolean() for XSLT "tests"; e.g. 
in <xsl:if A macro to import intergers from the stylesheet cascading order. A macro to import pointers from the stylesheet cascading order. get pointer to namespace map check for namespace mapping internal macro to test tree fragments check if the argument is a text node Common fields used for all items. Fields for API compatibility to the structure _xsltElemPreComp which is used for extension functions. Note that @next is used for storage; it does not reflect a next sibling in the tree. TODO: Evaluate if we really need such a compatibility. Currently empty. TODO: It is intended to hold navigational fields in the future. The in-scope namespaces. This is the libxslt namespace for specific extensions. internal macro to set up tree fragments Max number of specified xsl:sort on an element. The XSLT specification namespace. This is Norm's namespace for SAXON extensions. The set of options to pass to an xmlReadxxx when loading files for XSLT consumption. Specific value for pattern without priority expressed. Internal define to enable on-demand xsl:key computation. Internal define to enable the refactored variable part of libxslt Internal define to enable the optimization of the compilation of XPath expressions. Registering macro, not general purpose at all but used in different modules. Registering macro, not general purpose at all but used in different modules. Macro used to define extra information stored in the context Macro used to free extra information stored in the context Macro used to access extra information stored in the context This is Michael Kay's Saxon processor namespace for extensions. Macro to flag that a problem was detected internally. Sampling precision for profiling Macro to flag unimplemented blocks. Control the type of xsl debugtrace messages emitted. This is the Apache project XALAN processor namespace for extensions. This is James Clark's XT processor namespace for extensions. 
Add template "call" to call stack Drop the topmost item off the call stack If either cur or node are a breakpoint, or xslDebugStatus in state where debugging must occcur at this time then transfer control to the xslDebugBreak function add a key definition to a stylesheet Push an element list onto the stack. Register the XSLT pattern associated to @cur Allocate an extra runtime information slot statically while compiling the stylesheet and return its number Allocate an extra runtime information slot at run-time and return its number This make sure there is a slot ready in the transformation context Apply the xsl:use-attribute-sets. If @attrSets is NULL, then @inst will be used to exctract this value. If both, @attrSets and @inst, are NULL, then this will do nothing. Process the XSLT apply-imports element. Processes a sequence constructor on the current node in the source tree. @params are the already computed variable stack items; this function pushes them on the variable stack, and pops them before exiting; it's left to the caller to free or reuse @params afterwards. The initial states of the variable stack will always be restored before this function exits. NOTE that this does *not* initiate a new distinct variable scope; i.e. variables already on the stack are visible to the process. The caller's side needs to start a new variable scope if needed (e.g. in exsl:function). @templ is obsolete and not used anymore (e.g. <exslt:function> does not provide a @templ); a non-NULL @templ might raise an error in the future. BIG NOTE: This function is not intended to process the content of an xsl:template; it does not expect xsl:param instructions in @list and will report errors if found. Called by: - xsltEvalVariable() (variables.c) - exsltFuncFunctionFunction() (libexsl/functions.c) Strip the unwanted ignorable spaces from the input tree Apply the stylesheet to the document NOTE: This may lead to a non-wellformed output XML wise ! 
Apply the stylesheet to the document and allow the user to provide its own transformation context. Processes the XSLT 'apply-templates' instruction on the current node. Processes all attributes of a Literal Result Element. Attribute references are applied via xsl:use-attribute-set attributes. Copies all non XSLT-attributes over to the @target element and evaluates Attribute Value Templates. Called by xsltApplySequenceConstructor() (transform.c). Process one attribute of a Literal Result Element (in the stylesheet). Evaluates Attribute Value Templates and copies the attribute over to the result element. This does *not* process attribute sets (xsl:use-attribute-set). Process the given node and return the new string value. Process the given string, allowing to pass a namespace mapping context and return the new string value. Called by: - xsltAttrTemplateValueProcess() (templates.c) - xsltEvalAttrValueTemplate() (templates.c) QUESTION: Why is this function public? It is not used outside of templates.c. Process the xslt attribute node on the source node Used for to correct the calibration for xsltTimestamp() Processes the XSLT call-template instruction on the source node. Check if the given prefix is one of the declared extensions. This is intended to be called only at compile-time. Called by: xsltGetInheritedNsList() (xslt.c) xsltParseTemplateContent (xslt.c) Check if the resource is allowed to be read Check if the resource is allowed to be written, if necessary makes some preliminary work like creating directories Processes the xsl:choose instruction on the source node. Unregister all global variables set up by the XSLT library Cleanup the state of the templates used by the stylesheet and the ones it imports. Process the xslt comment node on the source node Precompile an attribute in a stylesheet, basically it checks if it is an attrubute value template, and if yes establish some structures needed to process it at transformation time. 
Compile the XSLT pattern and generates a list of precompiled form suitable for fast matching. [1] Pattern ::= LocationPathPattern | Pattern '|' LocationPathPattern reorder the current node list accordingly to the set of sorting requirement provided by the array of nodes. Execute the XSLT-copy instruction on the source node. Copies a namespace node (declaration). If @elem is not NULL, then the new namespace will be declared on @elem. Do a copy of an namespace list. If @node is non-NULL the new namespaces are added automatically. This handles namespaces aliases. This function is intended only for *internal* use at transformation-time for copying ns-declarations of Literal Result Elements. Called by: xsltCopyTreeInternal() (transform.c) xsltShallowCopyElem() (transform.c) REVISIT: This function won't be used in the refactored code. Process the XSLT copy-of instruction. Adds @string to a newly created or an existent text node child of @target. Creates a Result Value Tree (the XSLT 1.0 term for this is "Result Tree Fragment") Process an debug node Dumps a list of the registered XSLT extension functions and elements Get the current default debug tracing level mask Set the default debug tracing level mask Find decimal-format by name reorder the current node list accordingly to the set of sorting requirement provided by the arry of nodes. reorder the current node list accordingly to the set of sorting requirement provided by the arry of nodes. This is a wrapper function, the actual function used is specified using xsltSetCtxtSortFunc() to set the context specific sort function, or xsltSetSortFunc() to set the global sort function. If a sort function is set on the context, this will get called. Otherwise the global sort function is called. 
An xsltDocLoaderFunc is a signature for a function which can be registered to load documents not provided by the compilation or transformation API themselves, for example when an xsl:import or xsl:include is found at compilation time or when a document() call is made at runtime. Pre-process an XSLT-1.1 document element. Process an EXSLT/XSLT-1.1 document element. Implement the document() XSLT function: node-set document(object, node-set?). Reorder the current node list @list according to document order. This function is slow, obsolete and should not be used anymore. Deallocates an #xsltElemPreComp structure. Process the xslt element node on the source node. Implement the element-available() XSLT function: boolean element-available(string). Process the given AVT, and return the new string value. Evaluate an attribute value template, i.e. the attribute value can contain expressions contained in curly braces ({}) and those are substituted by their computed value. Evaluates all global variables and parameters of a stylesheet. For internal use only. This is called at the start of a transformation. This is normally called from xsltEvalUserParams to process a single parameter from a list of parameters. The @value is evaluated as an XPath expression and the result is stored in the context's global variable/parameter hash table. To have a parameter treated literally (not as an XPath expression) use xsltQuoteUserParams (or xsltQuoteOneUserParam). For more details see the description of xsltProcessOneUserParamInternal. Check if an attribute value template has a static value, i.e. the attribute value does not contain expressions contained in curly braces ({}). Processes the sequence constructor of the given instruction on @contextNode and converts the resulting tree to a string. This is needed by e.g. xsl:comment and xsl:processing-instruction. Evaluate the global variables of a stylesheet.
This needs to be done on parsed stylesheets before starting to apply transformations.
Our basic runtime environment requires upwards of 200 DDS connections, spread across 75 to 100 executables per execution. The need exists to run hundreds of simultaneous executions. We foresee a limitation in total DDS connections, due to the ~230 domains with ~120 participants each. What could be the solution to this scalability issue? Are there any workarounds/alternatives?
Hi,
Is your scalability issue related to the domain number limit? I need more information on your scenario to figure out a solution to your problem. From your description, I understand you have from 75 to 100 applications running DDS. Are these applications creating more than one participant? Each application manages a number of endpoints (datawriters and datareaders), I assume the number of connections corresponds to the total number of endpoints, am I right?
It would be great if you could provide a description of what each application creates: participants, topics, publishers/subscribers, and data writers/data readers.
Fernando.
The easy answer to your question is that many of the applications will have multiple domain participants, using multiple domains/partitions. Each domain participant will have up to 600 readers/writers. The issue becomes multiple-run scalability, a set of applications being a "run". Current DDS limitations would restrict us to about 150 simultaneous runs, and this is unacceptable. Is there any way around this?
Hello Jim,
Not sure if this is what you meant, but to be clear the limitation is not 120 DomainParticipants on a single DDS DomainId. Rather the limitation applies to the number of DomainParticipants on a single DDS domain running all on the same computer.
This limitation comes from the fact that each DomainParticipants allocates 2 ports (which must get unique port numbers within a single computer). Since by default a DDS DomainId reserves a range of 250 ports you have the limit of 120 participants per domain per computer.
The limit of 250 domains comes from the fact that each DDS DomainId reserves 250 ports and the fact that the highest port number allowed is ~65000.
You can use this Spreadsheet to compute the UDP ports used by RTI Connext DDS to see the ports that would be assigned for each DomainId and participantId. By the way in RTI Connext DDS the formulas to assign ports are configurable. So it may be possible to use this feature in your situation. But before I recommend this option I would like to understand better your setup.
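To make the port arithmetic concrete, here is a rough sketch of the default RTPS port-mapping formulas. The constants PB=7400, DG=250, PG=2, d0=0, d1=10, d2=1, d3=11 are the usual RTPS defaults; verify them against your Connext version's documentation before relying on this.

```python
# Default RTPS well-known port mapping (assumed defaults; configurable
# via the WireProtocolQosPolicy as mentioned in this thread).
PB, DG, PG = 7400, 250, 2      # port base, domain gain, participant gain
D0, D1, D2, D3 = 0, 10, 1, 11  # additive offsets

def ports(domain_id, participant_id):
    """Return the four UDP ports a participant would use by default."""
    base = PB + DG * domain_id
    return {
        "discovery_multicast": base + D0,
        "discovery_unicast":   base + D1 + PG * participant_id,
        "user_multicast":      base + D2,
        "user_unicast":        base + D3 + PG * participant_id,
    }

def max_participants_per_domain():
    # The unicast ports of participant N must stay inside the domain's
    # 250-port block: D3 + PG*N < DG  ->  N <= 119, i.e. 120 participants.
    return (DG - D3 - 1) // PG + 1

def max_domains(max_port=65535):
    # Each DomainId reserves a 250-port block starting at PB.
    return (max_port - PB) // DG
```

With these defaults, max_participants_per_domain() gives 120 and max_domains() gives 232, matching the ~120 participants and ~230 domains figures discussed above.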
If I understood correctly you have a set of "executions" that you want to run in parallel. Each "execution" contains 75-100 "executables", each of which may have multiple DomainParticipants, with each DomainParticipant having 600 DataReaders/DataWriters. Is this correct?
What you did not clarify is how many computers are you using to run the system. If all of these were to run on a single computer then I agree that you could not do it with the out-of-the-box settings due to the port number limitations.
However, if you were running in say 10 computers (or VMs) then you would be able to run up to 1200 participants in a single DomainId (120 per computer). If you had 20 computers (or VMs) then you could run up to 2400 per DomainId, and so forth.
Regarding the use of separate DomainIds for each "execution". I agree this seems the best way to keep things separate. So if you can afford the number of DomainIds this would require it seems a good way to go.
I also do not understand the need to have multiple DomainParticipants in a single "executable". You are certainly allowed to do it and some applications (like bridges) really need it. However, a DomainParticipant is an expensive resource, so it is often better to use a single DomainParticipant per executable when you can. If you need to isolate traffic then you can do it by creating multiple Publishers and Subscribers and assigning different partitions. In terms of matching and communications you will get the same effect as different DomainIds, but you will not need additional ports to be open for this and thus you have no limits on the number of Partitions.
Finally, you mention each DomainParticipant having up to 600 DataReaders/Writers. This sounds like a large number as well. DataReaders and DataWriters are network entities that need to be discovered and consume significant resources. The question is why you need so many. Is it because you have many different data-types of information? Is it that you have many Topics?
This may be something you already know, but I wanted to mention it for completeness. Note that in DDS the data you publish is structured and each Topic can have "keys" that further identify separate information streams within the Topic. So you do not need to map each stream of information to a separate Topic and hence a different DataWriter. For example, if you have a set of temperature sensors which you want to publish and subscribe to separately, you do not need topics like "Temperature/Sensor1", "Temperature/Sensor2", "Temperature/Sensor3", etc. Rather you can have a single Topic "Temperature" and define the corresponding data-type to have a string attribute 'sensor_name' to which you assign the name of the sensor. Then you only need one DataWriter to publish any temperature sensor and a DataReader to read from any Temperature sensor. You can still subscribe to a single temperature sensor (or to a collection of them) using ContentFilteredTopics.
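For the temperature example above, the keyed data type could be sketched in IDL along these lines (the //@key comment annotation is the classic RTI convention; newer versions also accept @key, so treat the exact syntax as an assumption):

```
struct Temperature {
    string sensor_name; //@key  identifies the instance (stream) within the Topic
    float  degrees;
};
```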
Finally, as I mentioned, you could modify the formula that RTI Connext DDS uses to do port assignment. This would allow you to trade off the number of participants you can run on a single computer (or VM) against the number of independent DomainIds. This configuration can be changed on the DomainParticipantQos by setting the WireProtocolQosPolicy.
Regards,
Gerardo
For clarification, we are writing a simulation framework to integrate multiple simulation/tactical components into a single "Enterprise". We will need to run multiple "Enterprises" simultaneously. There will be a "Master Controller" that will start and control all executions. There will be a single "Simulation Controller" per computer to launch processes and possibly terminate zombie processes. All "Simulation Entities" will communicate with each other directly, sending meta-data to the "Master Controller" for time advancement control.
We are running simulations and tactical software in Monte-Carlo fashion. These tactical components (~50 executables) use up to 4 domains for data encapsulation; many act as "bridges" like you referenced earlier. We cannot change the need to run tactical software. Add in our simulation components (somewhere on the order of 20 executables) and Controllers, and this leads to the 200 domain participants per "Enterprise". Due to the total DDS participant max (~27500) we could only run ~138 simultaneous Monte-Carlo runs. We are broken at this point. This should answer your number of Domain Participants question.
The number of message IDLs will be on the order of 600 to support this tactical system, what we use in the "Simulation World" will be ~30 messages. Not all processes will pub/sub all messages, but the possibility exists that someone could.
We plan on using as many compute resources as possible to execute all runs. We have several racks of 6-10 servers, each server having between 16 and 24 processors.
On the topic of "Keys": What is the typical run-time cost of data exclusion using keys? This simulation has strict requirements to run real-time. If keys are "costly" they must be discarded; otherwise I am open to using them.
So back to the original question: is there a hard limit to the number of participants that can exist on a single network? From everything I have seen, that answer is "yes", although I would love to hear that is wrong.
If more clarification is needed, please let me know.
Hi, I am suffering from a very serious problem in DDS. My problem is that we receive data on two different domain IDs (1 and 2). But at run time we change the setting of domain ID 2, and domain ID 2 kills our listener and receives data from domain ID 1. This depends upon a runtime setting, and using this setting, sometimes domain 2 blocks its listener and receives data from domain ID 1 (for the same topic with different domain IDs; the content of the topic is different). How do we handle this problem?
I apologize but I did not understand the description of your problem:
You said you receive data on two DDS domain IDs (1 and 2). This means when you create the respective DDS DomainParticipant you are specifying domainId=1 and domainId=2, correct?
You said that "at run-time we change the setting from domain id 2". What do you mean by "change the setting"? Which setting are you changing? Are you changing a QoS, restarting the application? Could you explain more precisely what you are doing here?
You said "and domain id 2 kill our listener and receive data from domain id 1". I do not understand. What do you mean by "killing teh listener" Who is receiving data from domain id 1? Is that the DomainParticipant that was created in domain id 2? That should not be possible.
You said "domain 2 block his listener and recieve data from domain id 1" I also do not understand this. One domain id should never receive data froma different domain id, unless you have some bridging application like the RTI Routing Service.
Can you explain your problem again with a more complete and precise description of exactly what you are doing and how you are doing it?
Gerardo
We are using the same domain participant with 2 domain IDs. We are receiving the same topic1, but with different content for the different domain IDs (1 and 2).
We are receiving our data with two different domain IDs.
"Setting" means: for this topic1, normally we receive data on the different domain IDs with different content. (The setting is another topic which decides whether topic1 data is received with different domain IDs, or whether both receive the same domain ID's data.) This is decided at runtime: when we receive the setting topic's value, we decide at run time what we want.
quote: "we are using same domain participant with 2 domain id"
This is technically impossible, using the standard definitions we have for domain participant, and domain id.
By Domain Participant, do you mean "an application which is enabled for DDS"? in which case, are you creating two Domain Participants, one for each domain id?
Please supply your long-form definitions of "domain participant" and "domain id". It would help if you could supply the actual code you use to create the single domain participant.
Hi, we are attaching one folder, "problem". In this folder we attach two IDL files, "code.idl" and "typedef.idl". In typedef.idl we typedef variable1 ("typedef long variable1;") and include this typedef.idl in the code.idl file:
#include "typedef.idl"
struct code
{
variable1 var;
};
After doing this, when we generate Java code automatically using the DDS tool
and run this code in Eclipse, we get a COMPILATION ERROR and cannot compile the code (the errors are in code.java and codetypesupport.java).
The typedef is not working.
We attach our two IDL files along with the automatically generated code.
Can you give me a solution to this problem?
For the same IDL file, the automatically generated C++ code works properly, but the Java code for this file is not working.
Can any person give me a solution to my problem?
By asking your questions on an unrelated and two-year-old thread, you've effectively hidden them from the people who might be able to answer them. Also, because you've asked multiple unrelated questions, I don't know which question you mean when you say "can any person give me my problem solution".
Please open new topic threads (one for each unanswered question). Click on the "Technical Questions" link at the top of this page, then click on the "New Topic" link, and enter your problem description.
The Big Picture
The Big Picture¶
Start using Symfony in 10 minutes! This chapter will walk you through some of
First, check that the PHP version installed on your computer meets the Symfony
requirements: 5.3.3 or higher. Then, open a console and execute the following
command to install the latest version of Symfony in the
myproject/
directory:
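(The install command itself appears to be missing from this copy of the page; for Symfony 2.4 with the Standard Edition it would look something like the following — the exact version constraint here is an assumption:)

```
composer create-project symfony/framework-standard-edition myproject/ '~2.4'
```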
Note
Composer is the package manager used by modern PHP applications and the only recommended way to install Symfony. To install Composer on your Linux or Mac system, execute the following commands:
To install Composer on a Windows system, download the executable installer.
Beware that the first time you install Symfony, it may take a few minutes to
download all its components. At the end of the installation process, the
installer will ask you to provide some configuration options for the Symfony
project. For this first project you can safely ignore this configuration by
pressing the
<Enter> key repeatedly.
Running Symfony¶
Before running Symfony for the first time, execute the following command to make sure that your system meets all the technical requirements:
Fix any error reported by the command and then use the PHP built-in web server to run Symfony:
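(The commands are missing from this copy; in the Symfony 2.4 documentation they are along these lines:)

```
# check that your system meets the requirements
php app/check.php

# start the PHP built-in web server
php app/console server:run
```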
If you get the error There are no commands defined in the "server" namespace., then you are probably using PHP 5.3. That's ok! But the built-in web server is only available for PHP 5.4.0 or higher. If you have an older version of PHP or if you prefer a traditional web server such as Apache or Nginx, read the Configuring a Web Server article.
Open your browser and access the URL to see the
Welcome page of Symfony:
Understanding the Fundamentals¶
One of the main goals of a framework is to keep your code organized and to allow your application to evolve easily over time by avoiding the mixing of database calls, HTML tags and business logic in the same script. To achieve this goal with Symfony, you'll first need to learn a few fundamental concepts and terms.
Symfony comes with some sample code that you can use to learn more about its main concepts. Go to the following URL to be greeted by Symfony (replace Fabien with your first name):
Note
Instead of the greeting page, you may see a blank page or an error page. This is caused by a directory permission misconfiguration. There are several possible solutions depending on your operating system. All of them are explained in the Setting up Permissions section of the official book.
What's going on here? Have a look at each part of the URL.

Routing¶

Symfony routes the request to the code that handles it by matching the
requested URL (i.e. the virtual path) against some configured paths. The demo
paths are defined in the
app/config/routing_dev.yml configuration file:
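(The YAML listing is missing from this copy of the page; in the Symfony 2.4 Standard Edition the relevant part of app/config/routing_dev.yml looks roughly like this — the web debug toolbar and profiler routes are omitted:)

```
_main:
    resource: routing.yml

# AcmeDemoBundle routes (to be removed)
_acme_demo:
    resource: "@AcmeDemoBundle/Resources/config/routing.yml"
```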
This imports a
routing.yml file that lives inside the AcmeDemoBundle:
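(The imported file's listing is missing from this copy; its demo section would be along these lines:)

```
# src/Acme/DemoBundle/Resources/config/routing.yml
_demo:
    resource: "@AcmeDemoBundle/Controller/DemoController.php"
    type:     annotation
    prefix:   /demo
```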
In addition to YAML files, routes can be configured in XML or PHP files and can even be embedded in PHP annotations. This flexibility is one of the main features of Symfony, a framework that never imposes a particular configuration format on you.
Controllers¶
A controller is a PHP function or method that handles incoming requests and returns responses (often HTML code). Instead of using the PHP global variables and functions (like $_GET or header()) to manage these HTTP messages, Symfony uses objects: Request and Response.
The logical name of the file containing the
_demo routes is
@AcmeDemoBundle/Controller/DemoController.php and refers
to the
src/Acme/DemoBundle/Controller/DemoController.php file. In this
file, routes are defined as annotations on action methods:
The
@Route() annotation creates a new route matching the
/hello/{name}
path to the
helloAction() method. Any string enclosed in curly brackets,
like
{name}, is considered a variable that can be directly retrieved as a
method argument with the same name.
If you take a closer look at the controller code, you can see that instead of
rendering a template and returning a
Response object like before, it
just returns an array of parameters. The
@Template() annotation tells
Symfony to render the template for you, passing to it each variable of the
returned array. The name of the template that's rendered follows the name
of the controller. So, in this example, the
AcmeDemoBundle:Demo:hello.html.twig
template is rendered (located at
src/Acme/DemoBundle/Resources/views/Demo/hello.html.twig).
Templates¶
The controller renders the
src/Acme/DemoBundle/Resources/views/Demo/hello.html.twig
template (or
AcmeDemoBundle:Demo:hello.html.twig if you use the logical name):
By default, Symfony uses Twig as its template engine but you can also use traditional PHP templates if you choose. The second part of this tutorial will introduce how templates work in Symfony.
Bundles¶
You might have wondered why the Bundle word is used in many names you have seen so far. All the code you write for your application is organized in bundles. In Symfony speak, a bundle is a structured set of files that implements a single feature and which can easily be shared with other developers. You will learn more about bundles in the second part of this tutorial.
But what you see initially is only the tip of the iceberg; click on any of the bar sections to open the profiler and get much more detailed information about the request, the query parameters, security details, and database queries:
Of course, it would be unwise to have this tool enabled when you deploy your
application, so by default, the profiler is not enabled in the
prod
environment.
What is an Environment?¶
An Environment represents a group of configurations that's used to run
your application. Symfony defines two environments by default:
dev
(suited for when developing the application locally) and
prod (optimized
for when executing the application on production).
Typically, the environments share a large amount of configuration options. For
that reason, you put your common configuration in
config.yml and override
the specific configuration file for each environment where necessary:
In this example, the
dev environment loads the
config_dev.yml configuration
file, which itself imports the common config.yml file and then modifies it to enable the web debug toolbar.
Therefore, if you try to access the
URL, you'll get a 404 error.
Tip
If instead of using PHP's built-in webserver, you use Apache with
mod_rewrite enabled and take advantage of the
.htaccess file
Symfony provides in
web/, you can even omit the
app.php part of the
URL. The default
.htaccess points all requests to the
app.php front
controller.
Quick quiz: In this C++ program, is the definition of
munge guaranteed to be memory safe? (Assume that the definition of
increment_counter uses only modern C++ idioms and doesn’t do anything like dereference an invalid pointer.)
#include <cassert>
#include <iostream>
#include <vector>

class foo {
public:
    std::vector<int> indices;
    int counter;

    foo() : indices(), counter(0) {
        indices.push_back(1);
        indices.push_back(2);
        indices.push_back(3);
    }

    void increment_counter();

    int &get_first_index() {
        assert(indices.size() > 0);
        return indices[0];
    }

    void munge() {
        int &first = get_first_index();
        increment_counter();
        std::cout << first << std::endl;
        first = 20;
    }
};

int main() {
    foo foo;
    foo.munge();
    return 0;
}
The answer: Even with this caveat, we can’t tell! It depends on the definition of
increment_counter.
If
increment_counter has this definition, the code is memory safe:
void foo::increment_counter() { counter++; }
But if
increment_counter has this definition, for example, then it isn’t:
void foo::increment_counter() { indices.clear(); counter++; }
This definition would cause the
first reference in
munge to become a dangling reference, and the call to
std::cout and subsequent assignment of
first will have undefined behavior. If
first were not an
int but were instead an instance of a class, and
munge attempted to perform a virtual method call on it, then this would constitute a critical security vulnerability.
The point here is that determining memory safety in C++ requires non-local reasoning. Any analysis that tries to determine safety of C++ code, whether performed by a machine or performed by a human auditor, has to analyze many functions all at once, rather than one function at a time, to determine whether the code is memory safe. As this example illustrates, sticking to modern C++ coding styles, even with bounds checks, is not enough to prevent this.
There are a few ways around this:
For each function call, analyze the source to the called function to determine whether it’s memory safe in the context of the caller. This doesn’t always work, though: it’s hard or impossible when function pointers or virtual methods are involved (which function ends up being called?), and it’s hard with separately compiled code (what if the called function is in a DLL that you don’t have source for?)
Change the type of
indicesto
std::vector<std::shared_ptr<int>>; i.e. use reference counting to keep the pointer alive. This has a runtime cost.
Inline the body of
increment_counter, so that the memory safety of
mungeis immediately clear.
Make
increment_countera class method (or just a function) instead of an instance method, and have it take
counterby reference. The idea here is to prevent the possibility that
increment_countercould mess with
indicesin any way by shutting off its access to it.
What does this have to do with Rust? In fact, this error corresponds to a borrow check error that Brian Anderson hit when working on the scheduler. In Rust, the corresponding code looks something like this:
impl Foo {
    fn get_first_index(&'a mut self) -> &'a mut int {
        assert!(self.indices.len() > 0);
        return &mut self.indices[0];
    }

    fn munge(&mut self) {
        let first = self.get_first_index();
        self.increment_counter(); // ERROR
        println(first.to_str());
        *first = 20;
    }
}
This causes a borrow check error because the
first reference conflicts with the call to
increment_counter. The reason the borrow check complains is that the borrow check only checks one function at a time, and it could tell (quite rightly!) that the call to
increment_counter might be unsafe. The solution is to make
increment_counter a static method that only has access to counter; i.e. to rewrite the
self.increment_counter() line as follows:
Foo::increment_counter(&mut self.counter);
Since the borrow check now sees that
increment_counter couldn’t possibly destroy the
first reference, it now accepts the code.
Fortunately, such borrow check errors are not as common anymore, with the new simpler borrow check rules. But it’s interesting to see that, when they do come up, they’re warning about real problems that affect any language with manual memory management. In the C++ code above, most programmers probably wouldn’t notice the fact that the memory safety of
munge depends on the definition of
increment_counter. The challenge in Rust, then, will be to make the error messages comprehensible enough to allow programmers to understand what the borrow checker is warning about and how to fix any problems that arise. | http://pcwalton.github.io/blog/2013/04/12/a-hard-case-for-memory-safety/ | CC-MAIN-2015-40 | refinedweb | 715 | 53.81 |
From: Khem Raj <address@hidden>

uclibc defines __GLIBC__ but it does not expose struct sched_param as much as
glibc, and is not required to per the standard. gnulib attempts to use it, but
we have to account for the fact that in this case uclibc does not behave like
glibc.

Signed-off-by: Khem Raj <address@hidden>
Signed-off-by: Mike Frysinger <address@hidden>
---
 lib/spawn.in.h | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/lib/spawn.in.h b/lib/spawn.in.h
index 26c3c10..c4304a3 100644
--- a/lib/spawn.in.h
+++ b/lib/spawn.in.h
@@ -32,7 +32,7 @@
 /* Get definitions of 'struct sched_param' and 'sigset_t'.
    But avoid namespace pollution on glibc systems. */
-#ifndef __GLIBC__
+#if !defined __GLIBC__ || defined __UCLIBC__
 # include <sched.h>
 # include <signal.h>
 #endif
--
1.7.3.2
Key Concepts
Review core concepts you need to learn to master this subject
Import Python Modules
Module importing
Aliasing with ‘as’ keyword
Date and Time in Python
random.randint() and random.choice()
Module importing
In Python, you can import and use the content of another file using import filename, provided that it is in the same folder as the current file you are writing.
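The same syntax works for modules from Python's standard library, for example:

```python
import math  # bring the built-in math module into this file's namespace

# Everything defined in math is now available through the math name.
print(math.sqrt(16))  # 4.0
```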
Aliasing with ‘as’ keyword
Aliasing with ‘as’ keyword
In Python, the as keyword can be used to give an alternative name as an alias for a Python module or function.
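For example, a common alias for the datetime module:

```python
import datetime as dt  # "dt" now refers to the datetime module

moon_landing = dt.date(1969, 7, 20)
print(moon_landing.year)  # 1969
```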
Date and Time in Python
Python provides a module named datetime to deal with dates and times.

It allows you to set a date, a time, or both date and time using the date(), time(), and datetime() functions respectively, after importing the datetime module.
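A small sketch of all three constructors:

```python
from datetime import date, time, datetime

d = date(2024, 1, 15)                      # just a date
t = time(9, 30, 0)                         # just a time
meeting = datetime(2024, 1, 15, 9, 30, 0)  # date and time together

print(d.year, t.hour, meeting.minute)  # 2024 9 30
```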
random.randint() and random.choice()

In Python, the random module provides functions for generating pseudo-random values, including random.randint(a, b), which returns a random integer N such that a <= N <= b, and random.choice(seq), which returns a randomly selected element from a non-empty sequence.
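A quick sketch of both functions:

```python
import random

roll = random.randint(1, 6)  # a random integer from 1 to 6, inclusive
flavor = random.choice(["vanilla", "chocolate", "mint"])  # one random element

print(roll, flavor)
```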
- 1In the world of programming, we care a lot about making code reusable. In most cases, we write code so that it can be reusable for ourselves. But sometimes we share code that’s helpful across a br…
- 2datetime is just the beginning. There are hundreds of Python modules that you can use. Another one of the most commonly used is random which allows you to generate numbers or select items at random…
- 3Notice that when we want to invoke the randint() function we call random.randint(). This is default behavior where Python offers a namespace for the module. A namespace isolates the functions, cl…
- 4Let’s say you are writing software that handles monetary transactions. If you used Python’s built-in floating-point arithmetic to calculat…
- 5You may remember the concept of scope from when you were learning about functions in Python. If a variable is defined inside of a function, it will not be accessible outside of the function….
- 6You’ve learned: - what modules are and how they can be useful - how to use a few of the most commonly used Python libraries - what namespaces are and how to avoid polluting your local namespace - … | https://www.codecademy.com/learn/learn-python-3/modules/learn-python3-modules | CC-MAIN-2019-26 | refinedweb | 348 | 69.41 |
Introduction to C#
C# is a general-purpose, modern and object-oriented programming language pronounced as "C sharp". It was developed by Microsoft, led by Anders Hejlsberg and his team, within the .Net initiative, and was approved by the European Computer Manufacturers Association (ECMA) and the International Standards Organization (ISO).
A bit about .Net Framework
.Net applications are multi-platform applications, and the framework can be used from languages like C++, C#, Visual Basic, COBOL, etc. It is designed so that other languages can use it.
know more about .Net Framework
Why C#?
C# has many other reasons for being popular and in demand. Few of the reasons are mentioned below:
- Easy to start: C# is a high-level language so it is closer to other popular programming languages like C, C++, and Java and thus becomes easy to learn for anyone.
- Widely used for developing Desktop and Web Applications: C# is widely used for developing web applications and desktop applications. It is one of the most popular languages used in professional desktop application development. If anyone wants to create Microsoft apps, C# is their first choice.
- Community: The larger the community, the better, since new tools and software are developed to improve the language. C# has a large community, so it continues to evolve rather than become extinct.
- Game Development: C# is widely used in game development and will continue to dominate. C# integrates with Microsoft and thus has a large target audience. The C# features such as Automatic Garbage Collection, interfaces, object-oriented, etc. make C# a popular game developing language.
Beginning with C# programming:
Finding a Compiler:
There are various online IDEs such as the GeeksforGeeks IDE, CodeChef IDE, etc. which can be used to run C# programs without installing anything.
Windows: Since C# is developed within the .Net framework initiative by Microsoft, Microsoft provides various IDEs to run C# programs: Microsoft Visual Studio, Visual Studio Express, and Visual Web Developer.
Linux: Mono can be used to run C# programs on Linux.
Programming in C#:
Since the C# is a lot similar to other widely used languages syntactically, it is easier to code and learn in C#.
Programs can be written in C# in any of the widely used text editors like Notepad++, gedit, etc. or in any of the compilers. After writing the program, save the file with the extension .cs.
Example: A simple program to print Hello Geeks
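The code listing itself is missing from this copy of the article; based on the explanation below (namespace HelloGeeksApp, class HelloGeeks, and the Console.WriteLine/Console.ReadKey calls), it would look roughly like this:

```csharp
// C# program to print Hello Geeks
using System;

namespace HelloGeeksApp
{
    class HelloGeeks
    {
        // Main Method: the entry point of the application
        static void Main(string[] args)
        {
            Console.WriteLine("Hello Geeks");
            Console.ReadKey(); // wait for a key press before closing
        }
    }
}
```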
Output:
Hello Geeks
Explanation:
1. Comments: Comments are used for explaining code and are used in a similar manner as in Java, C, or C++. The compiler ignores comment entries and does not execute them. Comments can be single-line or multi-line.
Single line Comments:
Syntax:
// Single line comment
Multi line comments:
Syntax:
/* Multi line comments*/
2. using System: using keyword is used to include the System namespace in the program.
3. namespace declaration: A namespace is a collection of classes. The HelloGeeksApp namespace contains the class HelloGeeks.

4. class: The class contains the data and methods to be used in the program. Methods define the behavior of the class. The class HelloGeeks has only one method, Main, similar to Java.

5. static void Main(): The static keyword tells us that this method is accessible without instantiating the class. The void keyword tells us that this method will not return anything. The Main() method is the entry point of our application. In our program, the Main() method specifies its behavior with the statement Console.WriteLine("Hello Geeks");.
6. Console.WriteLine(): WriteLine() is a method of the Console class defined in the System namespace.
7. Console.ReadKey(): This is for the VS.NET Users. This makes the program wait for a key press and prevents the screen from running and closing quickly.
Note: C# is case sensitive, and all statements and expressions must end with a semicolon (;).

Applications:
- C# is widely used for developing desktop applications, web applications and web services.
- It is used in creating applications of Microsoft at a large scale.
- C# is also used in game development in Unity. | https://www.geeksforgeeks.org/introduction-to-c-sharp/ | CC-MAIN-2021-39 | refinedweb | 665 | 57.98 |
Comment on Tutorial - How to Send SMS using Java Program (full code sample included) By Emiley J.
Comment Added by : devi
Comment Added at : 2008-04-01 01:02:28
Comment on Tutorial : How to Send SMS using Java Program (full code sample included) By Emiley J.
I have loaded allthe 5 java files.But how to implement in the program ? Plz tell me.It is very urgent.PLZ SEND IT MY EMAIL deviprasad83, your post helped me a lot with solving my o
View Tutorial By: Florian Brunner at 2008-06-26 08:59:42
2. Thanks a lot for valuable program.
View Tutorial By: Kailash at 2012-07-30 10:09:12
3. thanks for he info, was looking everywhere for som
View Tutorial By: John at 2011-05-16 11:28:54
4. good tutorial
View Tutorial By: mohang at 2011-12-24 05:24:56
5. Good example thank you so much
View Tutorial By: Albadri at 2011-05-14 15:48:17
6. The concept is clear.
View Tutorial By: sam at 2012-08-23 03:52:15
7. Hi Shruti, If you have better samples then feel fr
View Tutorial By: jagan at 2010-03-03 19:42:46
8. import java.io.*;
import java.util.*;
View Tutorial By: srikanth at 2014-05-30 06:44:58
9. i want to knew about byte datatype. why did not by
View Tutorial By: pankaj kr tiwari at 2011-12-02 04:37:43
10. Hi Ramlak,
I tried to use the datab
View Tutorial By: Braj at 2009-02-07 03:31:13 | http://java-samples.com/showcomment.php?commentid=33396 | CC-MAIN-2018-09 | refinedweb | 268 | 76.72 |
GetVolumePathName function
Retrieves the volume mount point where the specified path is mounted.
Syntax
Parameters
- lpszFileName [in]
A pointer to the input path string. Both absolute and relative file and directory names, for example "..", are acceptable in this path.
If you specify a relative directory or file name without a volume qualifier, GetVolumePathName returns the drive letter of the boot volume.
If this parameter is an empty string, "", the function fails but the last error is set to ERROR_SUCCESS.
- lpszVolumePathName [out]
A pointer to a string that receives the volume mount point for the input path.
- cchBufferLength [in]
The length of the output buffer, in TCHARs.
Return value
If the function succeeds, the return value is nonzero.
If the function fails, the return value is zero. To get extended error information, call GetLastError.
Remarks
If a specified path is passed, GetVolumePathName returns the path to the volume mount point, which means that it returns the root of the volume where the end point of the specified path is located.
For example, assume that you have volume D mounted at C:\Mnt\Ddrive and volume E mounted at "C:\Mnt\Ddrive\Mnt\Edrive". Also assume that you have a file with the path "E:\Dir\Subdir\MyFile". If you pass "C:\Mnt\Ddrive\Mnt\Edrive\Dir\Subdir\MyFile" to GetVolumePathName, it returns the path "C:\Mnt\Ddrive\Mnt\Edrive\".
If either a relative directory or a file is passed without a volume qualifier, the function returns the drive letter of the boot volume. The drive letter of the boot volume is also returned if an invalid file or directory name is specified without a valid volume qualifier. If a valid volume specifier is given, and the volume exists, but an invalid file or directory name is specified, the function will succeed and that volume name will be returned. For examples, see the Examples section of this topic.
You must specify a valid Win32 namespace path. If you specify an NT namespace path, for example, "\DosDevices\H:" or "\Device\HardDiskVolume6", the function returns the drive letter of the boot volume, not the drive letter of that NT namespace path.
For more information about path names and namespaces, see Naming Files, Paths, and Namespaces.
You can specify both local and remote paths. If you specify a local path, GetVolumePathName returns a full path whose prefix is the longest prefix that represents a volume.
If a network share is specified, GetVolumePathName returns the shortest path for which GetDriveType returns DRIVE_REMOTE, which means that the path is validated as a remote drive that exists, which the current user can access.
There are certain special cases that do not return a trailing backslash. These occur when the output buffer length is one character too short. For example, if lpszFileName is C: and lpszVolumePathName is 4 characters long, the value returned is "C:\"; however, if lpszVolumePathName is 3 characters long, the value returned is "C:". A safer but slower way to set the size of the return buffer is to call the GetFullPathName function, and then make sure that the buffer size is at least the same size as the full path that GetFullPathName returns. If the output buffer is more than one character too short, the function will fail and return an error.
In Windows 8 and Windows Server 2012, this function is supported by the following technologies.
SMB does not support volume management functions.
Trailing Path Elements
Trailing path elements that are invalid are ignored. For remote paths, the entire path (not just trailing elements) is considered invalid if one of the following conditions is true:
- The path is not formed correctly.
- The path does not exist.
- The current user does not have access to the path.
Junction Points and Mounted Folders
If the specified path traverses a junction point,
GetVolumePathName returns the volume to which the
junction point refers. For example, if
W:\Adir is a junction point
that points to
C:\Adir, then
GetVolumePathName invoked on
W:\Adir\Afile returns "
C:\".
If the specified path traverses multiple junction points, the entire chain is followed, and
GetVolumePathName returns the volume to which the
last junction point in the chain refers.
If a remote path to a mounted folder or junction point is specified, the path is parsed as a remote path, and
the mounted folder or junction point are ignored. For example if
C:\Dir_C is linked to
D:\Dir_D and
C: is mapped to
X: on a remote computer, calling
GetVolumePathName and specifying
X:\Dir_C on the remote computer returns
X:\.
Examples
For the following set of examples, U: is mapped to the remote computer \\YourComputer\C$, and Q is a local drive.
For the following set of examples, the paths contain invalid trailing path elements.
Requirements
See also
- DeleteVolumeMountPoint
- GetFullPathName
- GetVolumeNameForVolumeMountPoint
- SetVolumeMountPoint
- Volume Management Functions
- Volume Mount Points | http://msdn.microsoft.com/en-us/library/aa364996(v=vs.85).aspx | CC-MAIN-2014-35 | refinedweb | 806 | 53.61 |
Content-type: text/html
wprintf, fwprintf, swprintf - Print formatted output for wide characters
Standard C Library (libc.so, libc.a)
#include <wchar.h>
int wprintf(
const wchar_t *format
[,value]...);
#include <stdio.h> #include <wchar.h>
int fwprintf(
FILE *stream,
const wchar_t *format
[,value]...);
#include <wchar.h>
int swprintf(
wchar_t *wstr,
size_t n,
const wchar_t *format
[,value]...);
Interfaces documented on this reference page conform to industry standards as follows:
fwprintf(), swprintf(), wprintf(): ISO C
Refer to the standards(5) reference page for more information about industry standards and associated tags.
Specifies a wide-character string that combines literal characters with conversion specifications. Specifies the data to be converted according to the format parameter. Points to a FILE structure specifying an open stream to which converted values are written. Specifies a character array in which the converted values are stored. Specifies the maximum number of output wide characters, including the terminating null wide character. Unless n is zero, a terminating null wide character is always added to output.
The wprintf() function converts, formats, and writes its value parameters, under control of the format parameter, to the standard output stream stdout.
The fwprintf() function converts, formats, and writes its value parameters, under control of the format parameter, to the output stream specified by the stream parameter.
The swprintf() function converts, formats, and stores its value parameters, under control of the format parameter, into consecutive wide characters starting at the address specified by the wstr parameter. The swprintf() function places a null wide character (L'/0') at the end of the wide-character string. Specify the n parameter to limit the formatted wide-character string to the allotted space for wstr.
The format parameter is a wide-character string that contains the following value parameters.
[Digital] The e, E, f, and g formats represent the special floating-point values as follows: +NaNQ or -NaNQ +NaNS or -NaNS +INF or -INF +0 or -0
The representation of the + (plus sign) depends on whether the + or (space) formatting flag is specified.
The wprintf(), fwprintf(), and swprintf() functions allow for the insertion of a language-dependent radix character in the output wide-character string. The radix character is defined by langinfo data in the program's locale (category LC_NUMERIC). In the POSIX (C) locale, or in a locale where the radix character is not defined, the radix character defaults to . (period).
[Digital] The st_ctime and st_mtime fields of the file are marked for update between the successful execution of the wprintf() or fw wide characters that are output. Otherwise, they return a negative value.
[Digital] The wprintf() and fwprintf() functions fail if either stream is unbuffered or stream's buffer needed to be flushed and the function call caused an underlying write() or lseek() function to be invoked. In addition, if the wprintf() or fwprintf() function fails, errno is set to one of the following values: [Digital] The O_NONBLOCK flag is set for the file descriptor underlying stream and the process would be delayed in the write operation. [Digital] The file descriptor underlying stream is not a valid file descriptor open for writing. [Digital] An attempt was made to write to a file that exceeds the process's file size limit or the maximum file size. [Digital] An invalid wide character was detected. [Digital] The read operation was interrupted by a signal that was caught, and no data was transferred. [Digital] The implementation supports job control; the process is a member of a background process group and is attempting to write to its controlling terminal; TOSTOP is set; the process is neither ignoring nor blocking SIGTTOU; and the process group of the process is orphaned. [Digital] There was no free space remaining on the device containing the file. [Digital] An attempt was made to write to a pipe or FIFO that is not open for reading by any process. A SIGPIPE signal will also be sent to the process.
Functions: fopen(3), printf(3), putwc(3), scanf(3), towctrans(3), towlower(3), vprintf(3), vwprintf(3), wctrans(3), wscanf(3)
Files: locale(4)
Standards: standards(5) delim off | http://backdrift.org/man/tru64/man3/fwprintf.3.html | CC-MAIN-2016-44 | refinedweb | 681 | 53.21 |
C++ comes with libraries which provides us many ways for performing input and output. In C++ input and output is performed in the form of sequence of bytes or more commonly known as streams.
Input Stream: If the direction of flow of bytes is from device(for example: Keyboard) to the main memory then this process is called input.
Output Stream: If the direction of flow of bytes is opposite, i.e. from main memory to device( srteam. This header file is used to handle the data being read from a file as input or data being written into the file as output.
In C++ articles, these two keywords cout and cin are used very often for taking inputs and printing outputs. These two are the most basic methods of taking input and output in C++. For using cin and cout we must include the header file iostream in our program.
In this article we will mainly discuss about the objects defined in the header file iostream like cin and cout.
- Standard output stream (cout): Usually the standard output device is the display screen. cout is the instance of the ostream class. cout is used to produce output on the standard output device which is usually the display screen. The data needed to be displayed on the screen is inserted in the standard output stream (cout) using the insertion operator (<<).
#include <iostream> using namespace std; int main( ) { char sample[] = "GeeksforGeeks"; cout << sample << " - A computer science portal for geeks"; return 0; }
Output:
GeeksforGeeks - A computer science portal for geeks
As you can see in the above program the insertion operator(<<) insert the value of the string variable sample followed by the string “A computer science portal for geeks” in the standard output stream cout which is then displayed on screen.
- standard input stream (cin): Usually the input device is the keyboard. cin is the instance of the class istream and is used to read input from the standard input device which is usually keyboard.
The extraction operator(>>) is used along with the object cin for reading inputs. The extraction operator extracts the data from the object cin which is entered using the keboard.
#include<iostream> using namespace std; int main() { int age; cout << "Enter your age:"; cin >> age; cout << "\nYour age is: "<<age; return 0; }
Input : 18
Output:
Enter your age: Your age is: 18
The above program asks the user to input the age. The object cin is connected to the input device. The age entered by the user is extracted from cin using the extraction operator(>>) and the extracted data is then stored in the variable age present on the right side of the extraction operator.
- Un-buffered standard error stream (cerr): cerr is the standard error stream which is used to output the errors. This is also an instance of the ostream class. As cerr is un-buffered so it is used when we need to display the error message immediately. It does not have any buffer to store the error message and display later.
#include <iostream> using namespace std; int main( ) { cerr << "An error occured"; return 0; }
Output:
An error occured
- buffered standard error stream (clog): This is also an instance of ostream class and used to display errors but unlike cerr the error is first inserted into a buffer and is stored in the buffer until it is not fully filled. The error message will be displayed on the screen too.
#include <iostream> using namespace std; int main( ) { clog << "An error occured"; return 0; }
output:
An error occured
Related Articles:
- cout << endl vs cout << "\n" in C++
- Problem with scanf() when there is fgets()/gets()/scanf() after it
- How to use getline() in C++ when there are blank lines in input?/C++ Preprocessors
- Clearing The Input Buffer In C/C++
- Operators in C / C++
- endl vs \n in C++
- What happen when we exceed valid range of built-in data types in C++?
- Associative arrays in C++
- Sequence vs Associative containers in C++
- Anonymous classes in C++
- <regex> library in C++ STL
- <numeric> library in C++ STL
Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. | https://www.geeksforgeeks.org/basic-input-output-c/ | CC-MAIN-2018-13 | refinedweb | 697 | 67.28 |
3.5.20. Web Server Classes¶
Most of the source in master/buildbot/www is self-explanatory. However, a few classes and methods deserve some special mention.
3.5.20.1. Resources¶
- class
buildbot..
Redirect(url)¶
This is a subclass of Twisted Web’s
Error. If this is raised within
asyncRenderHelper, the user will be redirected to the given URL.
- class
buildbot..
Resource¶
This class specializes the usual Twisted Web
Resourceclass.
It adds support for resources getting notified when the master is reconfigured.
needsReconfig¶
If True,
reconfigResourcewill be called on reconfig.
It’s surprisingly difficult to render a Twisted Web resource asynchronously. This method makes it quite a bit easier:
asyncRenderHelper(request, callable, writeError=None)¶
This method will call
callable, which can return a Deferred, with the given
request. The value returned from this callable will be converted to an HTTP response. Exceptions, including
Errorsubclasses, are handled properly. If the callable raises
Redirect, the response will be a suitable HTTP 302 redirect.
Use this method as follows:
def render_GET(self, request): return self.asyncRenderHelper(request, self.renderThing) | https://docs.buildbot.net/1.1.0/developer/cls-www.html | CC-MAIN-2020-34 | refinedweb | 177 | 51.44 |
In this blog, I’m explaining the concept of properties.
Properties are special kind of class members. Member variables or methods in a class or structures are called Fields. Properties are an extension of fields and are accessed using the same syntax. They use assessors through which the values of the private fields can be read, written or manipulated.
We use predefined set and get methods to access and modify them. Property reads and writes are translated to get and set method calls.
The get { } implementation must include a return statement. It can access any member on the class.
The set { } implementation receives the implicit argument "value." This is the value to which the property is assigned.
Properties have many uses: they can validate data before making free to a change; they can transparently expose data on a class where that data is actually retrieved from some other source, such as a database; they can take an action when data is changed, such as raising an event, or changing the value of other fields.
· Properties can be marked as public, private, protected, internal, or protected internal. These access modifiers define how users of the class can access the property.
· A property may be declared as a static property by using the static keyword.
There is an example student personal details.
using System;
namespace PropertiesExample
{
public class Student
{
public static int NumberOfStudents;
private static int count=1;
private string name;
// A read-write instance property of Student class
public string Name
{
// here, get method is used to get the name
get {
return name;
}
// here, set method is used to get the name
set {
name = value;
}
}
// A read-only static property of Student class
public static int Counter
{
// here, only get count not set
get { return count; }
}
// A Constructor:
public Student()
{
// Calculate the Student's number:
count = count + NumberOfStudents;
}
}
class Test
{
static void Main()
{
Student.NumberOfStudents = 1000; // number of student: 1000
Student studentObj = new Student();
studentObj.Name = "Manoj pandey"; // a new student 'manoj pandey'
System.Console.WriteLine("Student number: "+Student.Counter); // student number is total no. of student +1
System.Console.WriteLine("Student name: "+studentObj.Name); // student name
Console.ReadKey(); // hold the screen
}
}
}
Output:
Student number: 1001
Student name: Manoj pandey
In this example, a new student added after number of student+1 by using get and set method. | https://www.mindstick.com/blog/673/properties | CC-MAIN-2017-22 | refinedweb | 387 | 56.45 |
The QVideoSurfaceFormat class specifies the stream format of a video presentation surface. More...
#include <QVideoSurfaceFormat>
This class was introduced in Qt 4.6..
Enumerates the layout direction of video scan lines.
Enumerates the Y'CbCr color space of video frames.
Constructs a null video stream format.
Contructs a description of stream which receives stream of type buffers with given frame size and pixel format.
Constructs a copy of other.
Destroys a video stream description.
Returns the height of frame in a video stream.
Returns the frame rate of a video stream in frames per second.
See also setFrameRate().
Returns the size of frames in a video stream.
See also setFrameSize(), frameWidth(), and frameHeight().
Returns the width of frames in a video stream.
See also frameSize() and frameHeight(). be the same as that of the surface format.
Identifies if a video surface format has a valid pixel format and frame size.
Returns true if the format is valid, and false otherwise.
Returns a video stream's pixel aspect ratio.
See also setPixelAspectRatio().
Returns the pixel format of frames in a video stream.
Returns the value of the video format's name property.
See also setProperty().
Returns a list of video format dynamic property names.
Returns the direction of scan lines.
See also setScanLineDirection().
Sets the frame rate of a video stream in frames per second.
Sets the size of frames in a video stream to size.
This will reset the viewport() to fill the entire frame.
This is an overloaded function.
Sets the width and height of frames in a video stream.
This will reset the viewport() to fill the entire frame.
Sets a video stream's pixel aspect ratio.
See also pixelAspectRatio().
This is an overloaded function.
Sets the horizontal and vertical elements of a video stream's pixel aspect ratio.
Sets the video format's name property to value.
Sets the direction of scan lines.
See also scanLineDirection().
Sets the viewport of a video stream to viewport.
Sets the Y'CbCr color space of a video stream. It is only used with raw YUV frame types.
See also yCbCrColorSpace().
Returns a suggested size in pixels for the video stream.
This is the size of the viewport scaled according to the pixel aspect ratio.
Returns the viewport of a video stream.
The viewport is the region of a video frame that is actually displayed.
By default the viewport covers an entire frame.
See also setViewport().
Returns the Y'CbCr color space of a video stream.
See also setYCbCrColorSpace().
Returns true if other is different to a video format, and false if they are the same.
Assigns the values of other to a video stream description.
Returns true if other is the same as a video format, and false if they are the different. | http://doc.trolltech.com/main-snapshot/qvideosurfaceformat.html | crawl-003 | refinedweb | 463 | 79.87 |
0
The practical exercise for this session will be to create a cash register program that you input the cost of an item, the number purchased and if a sales tax applies to the item. If a tax does apply to the item, the tax value will be ten percent of the items cost.
In the example, the bread costs $2, it does not have sales tax on it, so its final price is $2. The shirt on the other hand, has a cost of $20, and does have tax on it, so ten percent of 20 is 2, making the final price of the shirt $22.00. The program will then calculate the cost of all the items. This will be in a loop so that a number of different items can be inputed and when finished the program it will produce a grand total.
here is my code so far
// iostream is needed for this cout statement. #include <iostream> #include <cstdlib> using namespace std; // functions in program float total = 0.0; float grandtotal = 0.0; float rrp = 0.0; float taxinc = 0.0; float subtotal = 0.0; //float numberpurchd = 0; //float num = 0; //void item(); //float numberpurch(); //float orig(); //float tax(); //float totals(); char itemn; int rrpd; int numberpurchd; char ans; /* * The main function is the starting point for the program. It contains all of * the statements in this program. */ int main() { char itemn; // char rrpd; char ans; char response; cout << "Cash Register Program\n\n"; do{ cout << "Item: "; cin >> itemn; cout << "How Many Item being purchased: "; cin >> numberpurchd; cout << "Input original Price: $"; cin >> rrpd; // num = rrp * rrpd; rrp = rrpd * numberpurchd; cout << "Before GST: $" << rrp << endl; cout << "Does GST apply to item (t = yes or f = no): "; cin >> ans; switch(ans) { case 't': // taxinc; //taxinc = (rrp * 10 / 100); taxinc = rrp * (.10/100); break; rrp; case 'f': break; } // close switch // taxinc = (rrp * GST); cout << "Total including GST: $" << taxinc << endl; subtotal = taxinc + rrp; cout << "Complete Subtotal including GST: $" << subtotal << endl; total = subtotal + total; cout << "Next item? y = yes or n = no: "; cin >> response; }while(response != 'n'); // close do loop grandtotal = total ; cout << "Total Amount: $" << grandtotal << endl; cout << "Thank You for ordering and Have a nice day!" << endl; system("PAUSE"); return 0; } // close main function
help would be appreciated | https://www.daniweb.com/programming/software-development/threads/319415/help-with-this | CC-MAIN-2018-05 | refinedweb | 378 | 77.27 |
What's New in Groovy 1.6
Groovy is used in many Open Source projects such as Grails, Spring, JBoss Seam and more. It is also integrated in commercial products and Fortune 500 mission-critical applications, both for its scripting capabilities, which offer a nice extension mechanism to these applications, and for its ability to let subject matter experts and developers author embedded Domain-Specific Languages expressing business concepts in a readable and maintainable fashion.

@Immutable

Among the new AST transformations introduced in Groovy 1.6, @Immutable generates everything a proper immutable class needs (final fields, both positional and named-argument constructors, equals() and hashCode()) from a simple declaration:
@Immutable final class Coordinates {
    Double latitude, longitude
}

def c1 = new Coordinates(latitude: 48.824068, longitude: 2.531733)
def c2 = new Coordinates(48.824068, 2.531733)
assert c1 == c2
@Lazy
Another transformation, @Lazy, defers the initialization of a field until the moment it is first accessed:
class Person {
    @Lazy pets = ['Cat', 'Dog', 'Bird']
}

def p = new Person()
assert !(p.dump().contains('Cat'))
assert p.pets.size() == 3
assert p.dump().contains('Cat')

Groovy's convention for properties is that any field without any visibility modifier is exposed as a property, with a getter and a setter transparently generated for you. For instance, this Person class exposes a getter getName() and a setter setName() for a private name field:

class Person {
    String name
}

Grape

Grape, the Groovy Advanced Packaging Engine, lets a script declare the dependencies it needs through the @Grab annotation, and downloads them transparently before the script runs. For instance, the following script grabs an embedded Jetty server in order to serve Groovy templates (the exact @Grab coordinates and import statements shown here are assumptions for Jetty 6):

import org.mortbay.jetty.Server
import org.mortbay.jetty.servlet.Context
import groovy.servlet.TemplateServlet

@Grab(group = 'org.mortbay.jetty', module = 'jetty-embedded', version = '6.1.0')
def runServer(duration) {
    def server = new Server(8080)
    def context = new Context(server, "/", Context.SESSIONS)
    context.resourceBase = "."
    context.addServlet(TemplateServlet, "*.gsp")
    server.start()
    sleep duration
    server.stop()
}

runServer(10000)
Grape can also be used as a method call instead of as an annotation. You can also install, list, resolve dependencies from the command-line using the
grape command. For more information on Grape, please refer to the documentation.
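In its method-call form, Grape is invoked through the groovy.grape.Grape class. The sketch below shows the idea; the dependency coordinates are placeholders, not taken from the article:

import groovy.grape.Grape

// runtime equivalent of the @Grab annotation
Grape.grab(group: 'commons-lang', module: 'commons-lang', version: '2.4')

// the same dependency can be pre-installed from the shell:
//   grape install commons-lang commons-lang 2.4
//   grape list

Because grab() is an ordinary method call, a script can decide at runtime which modules to fetch.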
Swing builder improvements
Consider the example below, which creates a frame containing a text field with a label beneath it; the label's text is bound to the text field's content.
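A minimal sketch of what such a binding looks like with SwingBuilder (the frame title, size and layout details here are assumptions, not from the article):

import groovy.swing.SwingBuilder
import java.awt.BorderLayout as BL

new SwingBuilder().edt {
    frame(title: 'Binding example', size: [240, 100], show: true) {
        borderLayout()
        // the text field is the source of the binding
        textField(id: 'tf', constraints: BL.NORTH, text: 'Change me!')
        // the label's text is bound to the text field's text property
        label(constraints: BL.CENTER, text: bind(source: tf, sourceProperty: 'text'))
    }
}

The bind() node keeps the label synchronized with the field as you type, without any explicit listener code.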
[Image: the resulting frame, with the label mirroring the text field's content]

JMX support

Groovy 1.6 also bundles JmxBuilder, a builder for exposing and interacting with JMX MBeans. Its highlights include:
- Declaratively expose Java/Groovy objects as JMX managed MBeans
- Support class-embedded or explicit descriptors
- Inherent support for JMX's event model
- Seamlessly create JMX event broadcasters
- Attach event listeners as inline closures
- Use Groovy's dynamic nature to easily react to JMX events notifications
- Provides a flexible registration policy for MBean
- No special interfaces or class path restrictions
- Shields developer from complexity of JMX API
- Exposes attribute, constructors, operations, parameters, and notifications
- Simplifies the creation of connector servers and connector clients
- Support for exporting JMX timers.
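As a small illustration of this support, the GroovyMBean class maps JMX attributes and operations onto plain Groovy property and method syntax. The sketch below reads the JVM's own Memory MBean (the MBean name is an assumption; any registered MBean is accessed the same way):

import java.lang.management.ManagementFactory

def server = ManagementFactory.platformMBeanServer
def memory = new GroovyMBean(server, 'java.lang:type=Memory')

// JMX attributes read like properties, and operations call like methods
println memory.HeapMemoryUsage
memory.gc()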
We've now reached the end of this article. If you're not a Groovy user yet, I hope it gave you a better understanding of what Groovy has to offer in your projects; and if you knew and used Groovy already, that you learned about all the new features of the language. The next step for you, dear reader, is to go download Groovy 1.6. And if you wish to dive deeper into Groovy, Grails and Griffon, I also invite you to join us at the GR8 Conference, a conference dedicated to Groovy, Grails and Griffon, taking place in Copenhagen, Denmark, where experts and makers of these technologies will guide you through with practical presentations and hands-on labs.
About the author

Guillaume Laforge is the official Groovy Project Manager and a frequent speaker at conferences such as Devoxx. Guillaume also co-authored Groovy in Action along with Dierk König, and founded G2One, the Groovy/Grails company.
Jean-Simon LaRochelle
Bravo to Guillaume and the whole Groovy team. Really nice work!
JS
Wow!
by
Rick Hightower
Groovy 1.6 AST Transformations are blowing my mind...
Then I can't forget about @Bindable and @Grab :-)
I need to switch to Groovy 1.6... from 1.5... I know this now... I did not realize how much cool stuff was going to be in Groovy 1.6.
Excellent article BTW and an amazing release... Thanks thanks thanks
twitter.com/RickHigh
Great article showcasing 1.6
by
Matthew McCullough
Great improvements
by
andrej koelewijn
Immutable objects
by
Mike Rettig
Can methods require @immutable arguments and return types? That would make it possible to write side effect free functions that can be parallelized. If the compiler can enforce immutability, then it would be trivial to write an erlang-like messaging library or a a parallelized linear algebra implementation.
Also, immutable types should also generate modifiers for all properties. Otherwise, working with immutable types is cumbersome and involves a lot of boilerplate copying of fields between objects.
Example:
def c1 = new Coordinates(latitude: 48.824068, longitude: 2.531733)
def c2 = new Coordinates(48.824068, 2.531733)
//copies the prior longitude, and sets the latitude to the provided value
def c3 = c2.modify(latitude:49.13123)
This is a trivial example, but in practice data structures can be large and complicated, so it should be trivial to create slight variations of existing data.
Support for immutable types is critical as more people attempt to embrace multi-core machines through message based concurrency or side-effect free functional programming.
Mike
Awesome!
by
Maxim Gubin
It's great that you've added full annotation support, because that makes Groovy code truly more Java-esque. I think after this release Groovy is going to get much more attention that it so truly deserves!
When is the standalone Gorm coming? I am really looking forward to that (Hope I'm not asking too much).
Great job once again!
Re: Immutable objects
by
Guillaume Laforge
You can follow it here: markmail.org/thread/bnhu6rrleyqwzmqu
Re: Awesome!
by
Guillaume Laforge
If you look at the samples of the latest RC, you'll see a Spring MVC PetClinic app using GORM (ie. outside Grails)
@Singleton and double checked locking
by
Hans-Peter Störr
stackoverflow.com/questions/70689/efficient-way...
Hans-Peter
Nice but some issues with Groovy...
by
Sony Mathew
Thread interruptions (e.g. during Thread.sleep) seem to be silently and mysteriously handled causing unexpected behaviours during Ctrl-C and shutdown hook processings.
Scripting against different APIs leads to many castings to appropriate types desired by API methods (e.g. frequently casting to various Arrays). Its not always clear what type Groovy uses natively. This should be seamless.
Adding a classpath in a script to the current GroovyClassLoader still causes the script to not recognize classes used right after - groovy is not dynamic enough..haha. I have to use reflection (e.g. ClassLoader.loadClass()) for all newly added classpaths.
Spent a good hour+ figuring out why something wasn't happenning as expected - later found out my override of an API method was silently returning false. I had forgotten to return a value for the boolean method() - no errors given.
There are several Java language features gotchas that are not implemented as expected (so "code in Java" is not always a correct stmt) - I can't remember them now - i just worked around them.
I am sure there are good reasons for all the above - but my point is - things listed above are going to be the first things a Java guy tries and expects them to just work.
Lazy closures
by
Ray Davis
class DataLoader {
// Can override with named parameter in the constructor.
def getService = { ComponentManager.get(it) }
...
@Lazy def siteService = getService("SiteService")
...
}
def serviceGetter = { AlternativeManager.getService(it) }
def dataLoader = new DataLoader(getService: serviceGetter)
Re: Nice but some issues with Groovy...
by
Guillaume Laforge
Multi variable returns, assignments and swaps is a feature i've always longed - so that is great. And the other features look fabulous as well.
Cool, I'm glad you like them.
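For reference, multiple assignment and the swap idiom look like this in Groovy 1.6 (a minimal sketch):

def (a, b) = [1, 2]
assert a == 1 && b == 2

// swapping two variables without a temporary
(a, b) = [b, a]
assert a == 2 && b == 1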
Groovy provides a couple mechanisms for that, with map and closure coercion, like {} as SomeInterface (for interfaces with just one method), or even [meth1:{}, meth2:{}] as SomeOtherInterface (for when you need to implement or extend several methods).
In Groovy 1.7, we'll be adding raw / classical anonymous inner classes, even though they're not really necessary per se in most situations.
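A minimal sketch of both coercion styles (the interfaces used here are just convenient stand-ins):

// closure coercion: a single closure implements a one-method interface
def r = { println 'running' } as Runnable
r.run()

// map coercion: each entry implements one method of the target interface
def iter = [hasNext: { false }, next: { null }] as Iterator
assert !iter.hasNext()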
Thread interruptions (e.g. during Thread.sleep) seem to be silently and mysteriously handled causing unexpected behaviours during Ctrl-C and shutdown hook processings.
We're not doing anything special there.
Scripting against different APIs leads to many castings to appropriate types desired by API methods (e.g. frequently casting to various Arrays). Its not always clear what type Groovy uses natively. This should be seamless.
There are some occasions where we need casting, but they're not very frequent.
Unlike a statically compiled language, you have to remember that a dynamic language chooses methods at runtime, according to the runtime type of the parameters passed to the method you're calling. So that difference makes that sometimes you need to help the dynamic runtime know which method you really intended to call.
That said, some precise examples would be needed, because that statement doesn't help much, I'm afraid.
Adding classpath in script to current GroovyClassLoader still causes script to not recognize classes used right after - groovy is not dynamic enough..haha. I have to use reflection(e.g. Classloader.loadClass()) for all newly added classpaths.
Again, an example would be helpful, as I'm sure we can find a neater solution here..
Variables not def'ed or without types, in scripts, go into the binding of the script.
Otherwise, def'ed and typed variables are just "local" variables. Local to your script, so that's normal (and as designed) that they are not available to other scripts.
If you need to share variables, you'd better share the same binding for all your scripts.
As for named inner classes, I'm not sure I understand what you mean here.
If it's in scripts again, if you want to access local variables, they are indeed local to the script, a bit like if you added a local variable to your main() method. So they indeed can't be seen from another class defined in your script.
So, again, you use the binding (don't def/don't type your variables). That seems like what's best for your use case, it seems.
Spent a good hour+ figuring out why something wasn't happenning as expected - later found out my override of an API method was silently returning false. I had forgotten to return a value for the boolean method() - no errors given.
That's because of optional return.
If your forget to return anything, the latest statement of your method will be evaluated to a boolean.
There are several Java language features gotchas that are not implemented as expected (so "code in Java" is not always a correct stmt) - I can't remember them now - i just worked around them.
There aren't many differences left with Java anyway, apart from anonymous inner classes, for instance, or array initializers. Otherwise, the "compatibility" is very high.
I am sure there are good reasons for all the above - but my point is - things listed above are going to be the first things a Java guy tries and expects them to just work.
When you're encountering problems, please report them on the Groovy user mailing-list, you'll get very fast responses to all your questions.
Furthermore, remember that Groovy is a dynamic language, so there are obviously some differences with Java from times to times, but it's close to Java enough to be very easy to get started with.
Re: @Singleton and double checked locking
by
Jean-Simon LaRochelle
JS
@Delegate-Annotation
by
Stefan Undorf
Re: Nice but some issues with Groovy...
by
Sony Mathew
Can developers create new AST Transformations?
by
Jarppe Lansio
Is it possible to write new AST Transformations for my own purposes? For example, a transformation to create a builder class for each immutable etc.
--
-jarppe
Re: Can developers create new AST Transformations?
by
Guillaume Laforge
You can have a look at those two wiki pages explaining (through a tutorial) the process:
PackageScope
by
Tuomas Kassila...
This message contain a web link to actual sample.
Really amazing features
by
Matthew Adams
@Category and @Mixin/.mixin() are really nice, too. @Delegate is extremely convenient. Nice!
Really great stuff. I can't wait to get my hands dirty.
-matthew
PS: I'm also glad to see the OSGi support as well!
Excelent!
by
Jaromir Nemec
One small typo:
assert engine.evaluate("2 + 3") == 5
I guess the method should be eval; evaluate did not worked for me.
The wordCount example illustrates nice the extensibility of the metaclass, but I wonder if the regex shouldn't be something like split(/\W+/) to get the word count.
One again a good job, I upgraded immediatelly after reading this:)
regards,
Jaromir D.B. Nemec | https://www.infoq.com/articles/groovy-1-6 | CC-MAIN-2016-36 | refinedweb | 2,035 | 56.66 |
units: A domain-specific type system for dimensional analysis. The Haddock documentation is insufficient for using the units package. Please see the README file, available from the package home page.
Modules
- Data
- Data.Dimensions
- Data.Dimensions.Internal
- Data.Dimensions.SI
- Data.Dimensions.SI.Prefixes
- Data.Dimensions.SI.Types
- Data.Dimensions.Show
Downloads
- units-1.0.1.tar.gz [browse] (Cabal source package)
- Package description (as included in the package)
Maintainer's Corner
For package maintainers and hackage trustees
Readme for units-1.0.1[back to package description]
units.
Limitations:
The units package does not easily allow users to write code polymorphic in the chosen units. For example, a
sumfunction that adds together a homogeneous list of dimensioned quantities is not straightforward. The package exports its internals to allow clients to try to get these working, but it is generally hard to do. However, monomorphic functions are easy.
The units package is not generalized over number representation: it forces client code to use
Double. It wouldn't be hard to generalize, though, but it would add a fair amount of extra cruft here and there. Shout (to
eir@cis.upenn.edu) if this is important to you. modules. For any given project, you will
include some set of these modules. There are dependency relationships
between them. Of course, you're welcome to
import a module without its
dependents, but it probably won't be very useful to you. I hope that this list
grows over time.
Data.Dimensions
This is the main exported module. It exports all the necessary functionality for you to build your own set of units and operate with them. All modules implicitly depend on this one.
Data.Dimensions.Show
This module defines a
Showinstance for dimensioned quantities, printing out the number stored along with its canonical dimension. This behavior may not be the best for every setting, so it is exported separately.
Data.Dimensions.SI
This module exports unit definitions for the SI system of units.
Data.Dimensions.SI.Prefixes
This module exports the SI prefixes. Note that this does not depend on
Data.Dimensions.SI-- you can use these prefixes with any system of units.
Data.Dimensions.SI.Types
This module exports several useful types for use with the SI package, which it depends on. For example,
Lengthis the type of dimensioned quantities made with
Meters.
Examples
Unit definitions
Here is how to define two inter-convertible units:
data Meter = Meter -- each unit is a datatype that acts as its own proxy instance Unit Meter where -- declare Meter as a Unit type BaseUnit Meter = Canonical -- Meters are "canonical" instance Show Meter where -- Show instances are optional but useful show _ = "m" -- do *not* examine the argument! data Foot = Foot instance Unit Foot where type BaseUnit Foot = Meter -- Foot is defined in terms of Meter conversionRatio _ = 0.3048 -- do *not* examine the argument! instance Show Foot where show _ = "ft" type Length = MkDim Meter -- we will manipulate Lengths type Length' = MkDim Foot -- this is the *same* as Length extend :: Length -> Length -- a function over lengths extend x = dim $ x .+ (1 % Meter) -- more on this later inMeters :: Length -> Double -- extract the # of meters inMeters = (# Meter) -- more on this later
Let's pick this apart. The
data Meter = Meter declaration creates both the
type
Meter and a term-level proxy for it. It would be possible to get away
without the proxies and lots of type annotations, but who would want to?
Then,. We also must define the conversion
ratio, which is the number of meters in a foot. Note that the
conversionRatio method must take a parameter to fix its type parameter, but
it must not inspect that parameter. Internally, it will be passed
undefined quite often.
The
MkDim type synonym makes a dimensioned quantity for a given unit. Note
that
Length and
Length' are the same type. The
MkDim machinery notices
that these two are inter-convertible and will produce the same dimensioned
quantity.
Note that, as you can see in the function examples at the end, it is necessary
to specify the choice of unit when creating a dimensioned quantity or
extracting from a dimensioned quantity. Thus, other than thinking about the
vagaries of floating point wibbles and the
Show instance, it is completely
irrelevant which unit is canonical.. Let's also have a unit of time:
data Second = Second instance Unit Second where type BaseUnit Second = Canonical instance Show Second where show _ = "s" type Time = MkDim Second
Units can be multiplied and divided with the operators
:* and
:/, at either
the term or type level. For example:
type MetersPerSecond = Meter :/ Second type Velocity1 = MkDim MetersPerSecond speed :: Velocity1 speed = 20 % (Meter :/ Second)
The units package also provides combinators "%*" and "%/" to combine the types of dimensioned quantities.
type Velocity2 = Length %/ Time -- same type as Velocity1
There are also exponentiation combinators
:^ (for units) and
%^ (for
dimensionDim MetersSquared type Area2 = Length %^ Two -- same type as Area1 roomSize :: Area1 roomSize = 100 % (Meter :^ pTwo) roomSize' :: Area1 roomSize' = 100 % (Meter :* Meter)
These operations have no defined inverses, though I don't think they would be hard to define. Shout if you need that functionality.
Note that addition and subtraction on units does not make physical sense, so those operations are not provided.
Dimension-safe cast
The haddock documentation shows the term-level dimensioned quantity
combinators. The only one deserving special mention is
dim, dimensioned quantities have
a looser notion of type equality than Haskell does. For example, "meter *
second" should be the same as "second * meter", even those these are in
different order. The
dim function checks (at compile time) to make sure its
input type and output type represent the same underlying dimension and then
performs a cast from one to the other. When providing type annotations, it is
good practice to start your function with a
dim $ to prevent the possibility
of type errors. For example, say we redefine velocity a different way:
type Velocity3 = Scalar %/ Time %* Length addVels :: Velocity1 -> Velocity1 -> Velocity3 addVels v1 v2 = dim $ v1 .+ v2
This is a bit contrived, but it demonstrates the point. Without the
dim, the
addVels function would not type-check. Because
dim needs to know its
result type to type-check, it should only be used at the top level, such as
here, where there is a type annotation to guide it.
Note that
dim is always dimension-safe -- it will not convert a time to a
length! | https://hackage.haskell.org/package/units-1.0.1 | CC-MAIN-2020-16 | refinedweb | 1,068 | 54.32 |
On Sat, 1 Jan 2011, Michael Niedermayer wrote: > On Sat, Jan 01, 2011 at 06:44:55AM +0200, Anssi Hannula wrote: > > On 29.12.2010 14:19, Stefano Sabatini wrote: > > > On date Wednesday 2010-12-29 13:58:00 +0200, Anssi Hannula encoded: > > >> On 29.12.2010 13:40, Stefano Sabatini wrote: > > >>> On date Wednesday 2010-12-29 06:54:14 +0200, Anssi Hannula encoded: > > >>>> --- > > >>>> libavformat/avformat.h | 2 ++ > > >>>> libavformat/utils.c | 4 ++++ > > >>>> 2 files changed, 6 insertions(+), 0 deletions(-) > > >>>> > > >>>> diff --git a/libavformat/avformat.h b/libavformat/avformat.h > > >>>> index c6f2827..9eab2da 100644 > > >>>> --- a/libavformat/avformat.h > > >>>> +++ b/libavformat/avformat.h > > >>>> @@ -368,6 +368,8 @@ typedef struct AVOutputFormat { > > >>>> const AVMetadataConv *metadata_conv; > > >>>> #endif > > >>>> > > >>>> + const AVClass *priv_class; ///< AVClass for the private context > > >>>> + > > >>>> /* private fields */ > > >>>> struct AVOutputFormat *next; > > >>>> } AVOutputFormat; > > >>> > > >>> Put this after the "next" field or it will break ABI (note for the > > >>> committer: bump minor). > > >> > > >> Well, I thought the /* private fields */ meant that the lavf internal > > >> variables are last so that they can be added/removed/modified at will > > >> without breaking ABI, which only works if they indeed are the last ones. > > >> > > >> But if that is not the intention, priv_class can be put last (it is public). > > > > > > If the "next" field is not used outside lavf then it should be safe to > > > keep it at the end of the struct. > > > > Well, it shouldn't be, but we can't be sure. > > > > I guess the question is, do we care if applications wrongly using it > > break? (I don't know the answer to this) > > applications using private fields are writtenm in the knowledge that they will > stop working the next commit to ffmpeg What about applications registering muxers/demuxers of their own? 
In that case, the AVOutputFormat is allocated statically in their binary. Do we need something like av_register_protocol2, where the caller provides the size of the struct as a parameter, or do we simply not care about those cases? // Martin | http://ffmpeg.org/pipermail/ffmpeg-devel/2011-January/105379.html | CC-MAIN-2016-36 | refinedweb | 319 | 62.27 |
Parameters are passed to applets in NAME=VALUE pairs in
<PARAM> tags between the opening and closing
APPLET tags.
Inside the applet, you read the values passed through the
PARAM
tags with the
getParameter() method of the
java.applet.Applet class.
The program below demonstrates this with a generic string drawing applet. The applet parameter "Message" is the string to be drawn.
import java.applet.*; import java.awt.*; public class DrawStringApplet extends Applet { private String defaultMessage = "Hello!"; public void paint(Graphics g) { String inputFromPage = this.getParameter("Message"); if (inputFromPage == null) inputFromPage = defaultMessage; g.drawString(inputFromPage, 50, 25); } }
You also need an HTML file that references your applet. The following simple HTML file will do:
<HTML> <HEAD> <TITLE> Draw String </TITLE> </HEAD> <BODY> This is the applet:<P> <APPLET code="DrawStringApplet" width="300" height="50"> <PARAM name="Message" value="Howdy, there!"> This page will be very boring if your browser doesn't understand Java. </APPLET> </BODY> </HTML>
Of course you are free to change "Howdy, there!" to a "message" of your choice. You only need to change the HTML, not the Java source code. PARAMs let you customize applets without changing or recompiling the code.
This applet is very similar to the HelloWorldApplet. However
rather than hardcoding the message to be printed it's read into the
variable
inputFromPage from a
PARAM
element in the HTML.
You pass
getParameter() a string that names the
parameter you want. This string should match the name of a
PARAM element in the HTML page.
getParameter() returns the value of the parameter. All
values are passed as strings. If you want to get another type like
an integer, then you'll need to pass it as a string and convert it
to the type you really want.
The
PARAM element is also straightforward.
It occurs between
<APPLET> and
</APPLET>. It has two attributes of its own,
NAME and
VALUE.
NAME identifies which
PARAM this is.
VALUE is the string value of
the
PARAM. Both should be enclosed in double quote
marks if they contain white space.
An applet is not limited to one
PARAM. You can pass as many
named PARAMs to an applet as you like. An applet does not
necessarily need to use all the PARAMs that are in the HTML.
Additional PARAMs can be safely ignored. | http://www.cafeaulait.org/course/week5/16.html | CC-MAIN-2013-20 | refinedweb | 384 | 67.96 |
The first thing that we're going to do today is use create-react-app. Then, we'll locate the components we're going to use from the KendoReact site, and install them using node package manager.
We will also install the Kendo default theme.
We first build out the project using create-react-app. If you are new to Create React App, check out this article to learn more. Otherwise, let's open our terminal and globally install it (if needed):
npm install create-react-app -g
Once installed we can run create-react-app anytime we want, let's do just that.
create-react-app
create-react-app kendo-react
We'll mostly be working in the src directory. Remember you can always refer to the KendoReact documentation to get more information about all the components. For this project we'll be working with Buttons, DropDowns, NumericTextBox and Data Grid components.
src
First, let's just install the buttons. We see that in the Buttons documentation that we have an Installation section that let's us know how to get started. We just need to install the Buttons library with npm by running:
npm install @progress/kendo-react-buttons
That will save the package to the project's package.json and all Kendo packages follow this same naming convention:
package.json
npm install @progress/kendo-react-<componennt-name>
Now lets install the rest of the packages we need: DropDowns, NumericTextBoxes and also the internationalization package, which is required for globalization features in KendoReact components.
npm install @progress/kendo-react-grid @progress/kendo-data-query @progress/kendo-react-inputs @progress/kendo-react-intl @progress/kendo-react-dropdowns @progress/kendo-react-dateinputs @progress/kendo-react-pdf @progress/kendo-drawing
Now we can go ahead and talk about the theme. In order to get some nice, modern styling, we need to install one of these themes. For this project, we actually won't be doing any customization in CSS, we'll solely rely on the styling from the theme. If you do want to customize, you can use the Progress Theme Builder. This builder lets you customize your theme for any Kendo UI component library. You can use Material, Bootstrap or your own custom settings using those themes as a starting point.
For today, we are actually just going to install the default theme. All we are going to do is run:
npm install @progress/kendo-theme-default
This package is now added to your package.json and also resides in your node_modules directory and we can include it in React with a simple import. Next, we import the theme CSS into our App.js page:
package.json
node_modules
App.js
import '@progress/kendo-theme-default/dist/all.css';
Before getting started on the Kendo components, you can delete the contents of App.css, the logo.svg and its import statement at the top of the App.js file. While we're editing the App.js file, let's replace the HTML (JSX) with the following:
App.css
logo.svg
App.js
<div> <h1>KendoReact Grid</h1> </div>! | https://www.telerik.com/blogs/kendoreact-creating-robust-react-applications | CC-MAIN-2019-26 | refinedweb | 517 | 58.08 |
Aastha Solutions
@aasthasolutions on WordPress.org
- Member Since: August 27th, 2018
- Location: Bhavnagar, Gujarat, India
- Website: aasthasolutions.com
- Job Title: WordPress Expert
- Employer: Aastha Solutions
Bio
Interests
Contributions Sponsored
Contribution HistoryAastha Solutions’s badges:
- Plugin Developer
Committed [2392327] to Plugins SVN:
tested and updated upto 5.1.1
Committed [2392320] to Plugins SVN:
solve shortcode return issue and tested upto wordpress version 5.1.1
Committed [2392314] to Plugins SVN:
tested up to version 5.1.1
Committed [2392307] to Plugins SVN:
test and update
Committed [2392301] to Plugins SVN:
review code and update version from 1.0.0 to 1.0.1
Committed [2377184] to Plugins SVN:
tested up to WordPress version 5.5.1
Committed [2377182] to Plugins SVN:
tested up to WordPress version 5.5.1
Committed [2377170] to Plugins SVN:
Tested upto 5.5.1
Committed [2377162] to Plugins SVN:
Testing for WordPress version 5.5.1
Posted a reply to Such a useful app to post from front end, on the site WordPress.org Forums:
Thank you very much for your positive review. Keep posting for suggestions as well.
Created a topic, Fantastic plugin forever, on the site WordPress.org Forums:
I loved this plugin when i thought to develop. Now fin…
Committed [2329201] to Plugins SVN:
change directory
Committed [2326232] to Plugins SVN:
change name of file.
Committed [2326231] to Plugins SVN:
Initial Release first file uploading.
Committed [2324943] to Plugins SVN:
Initial Release
Committed [2324147] to Plugins SVN:
update Requires version of wordpress
Committed [2324141] to Plugins SVN:
Test and update for version 5.4.2
Committed [2267171] to Plugins SVN:
upload banner
Committed [2267135] to Plugins SVN:
Upload icon
Committed [2267085] to Plugins SVN:
Initial Release
Posted a reply to Background, on the site WordPress.org Forums:
Sorry background image is not set now we will update plugin soon with this functionality
Committed [1937207] to Plugins SVN:
display video
Committed [1935311] to Plugins SVN:
banner
Committed [1935307] to Plugins SVN:
screenshots
Committed [1935305] to Plugins SVN:
screenshots
Committed [1935298] to Plugins SVN:
first commit
Committed [1935296] to Plugins SVN:
first commit
Committed [1935295] to Plugins SVN:
first commit
Committed [1935294] to Plugins SVN:
first commit
Committed [1932317] to Plugins SVN:
new banner black
Committed [1932316] to Plugins SVN:
change banner image
Committed [1932166] to Plugins SVN:
change plugin name
Committed [1932162] to Plugins SVN:
height dp icon
Committed [1932160] to Plugins SVN:
new icon
Committed [1932159] to Plugins SVN:
rtl banner
Committed [1932155] to Plugins SVN:
change banner name
Committed [1931690] to Plugins SVN:
change FAQ
Committed [1931647] to Plugins SVN:
removed
Committed [1931645] to Plugins SVN:
remove 2nd ss
Committed [1931637] to Plugins SVN:
add youtube video
Committed [1931629] to Plugins SVN:
change ss name
Committed [1931619] to Plugins SVN:
import plugins files
Committed [1931618] to Plugins SVN:
import plugins files
Committed [1931617] to Plugins SVN:
import plugins files
Posted a reply to Change front language according to language selected in user profile, on the site WordPress.org Forums:
In the WPML language settings section select the Set admin language as editing language check-box…
Posted a reply to Can you change the file of of images in WordPress?, on the site WordPress.org Forums:
First thing you need to do is install and activate the Media File Renamer plugin.…
Posted a reply to Best Plugins or Apps for Membership and Submissions?, on the site WordPress.org Forums:
If you want free use this. this plugin have 4.5 star rating WP-Members Membership Plugin
Committed [1931520] to Plugins SVN:
new readme
Committed [1931519] to Plugins SVN:
chnage name
Committed [1931505] to Plugins SVN:
change
Developer
Bubbles Animates Name
Active Installs: Less than 10
Particle Background
Active Installs: 600+ | https://profiles.wordpress.org/aasthasolutions/ | CC-MAIN-2022-21 | refinedweb | 618 | 50.97 |
#include <serialGLCD.h> void setup() { Serial.begin(115200); // Default baud rate of the display. delay(5000); serialGLCD lcd; } void loop() { serialGLCD lcd; lcd.gotoLine(11); lcd.toggleFont(); Serial.print("Arduino");delay(500); lcd.clearLCD(); }
¿Qué tal si eliminas la instrucción en la que estás borrando todos los ciclos el LCD? Elimina:delay(500); lcd.clearLCD();
hola, yo no entiendo mucho, pero deduzco que si lo escribes todo el rato con el loop, siempre van a ocurrir 2 cosas: 1) que se te llene toda la pantalla al quitar el clear. 2) que te parpadee, porque para que no se te llene tendras que limpiar la pantalla con el clear.
Jenn Holt's updated firmware for sparkfun Graphical LCD backpackThis is only for the 128x64 display!!!! I took out all the code for the larger display.I only own the small one and this was all I could test. others are welcome to adapt the code if they want.interface stuff:default baud is still 115200, and will revert to this when a character is sent during the splash screen** The display now implements XON/XOFF flow control. If your application supports this, you can re-compile the code with a smaller RX_BUFFER_SIZE value to make more room for sprites. The default RX_BUFFER_SIZE is the 256 max. If you are going to BitBlt large images I recommend implementing XON/XOFF or you may overrun the buffer. You could also put a delay in your host code if you overrun the buffer while BitBlt'ing a large image. ASCII characters sent to the display will be printed left to right, top to bottomcommands are still prefaced by 0x7C** there are basic debug messages that get sent when command mode is entered and exited, these are set with command 0x02 0x## with ##=(0,1,2) 0= no messages, 1=binary format messages, 2=ascii format messages. the messages are sent out on the uart** there is an LCD reset command that resets the LCD should things get screwy, command 0x06. (pulls /RESET low, then high, clears the screen and sets x_offset=0, y_offset=0)Text stuff:**display now responds to carrige returns and line feeds. by default a carrige return also executes a line feed. you can toggle this with command 0x04**there is no more demo mode**text rendering is much faster**you can define your own fonts, however you must decide what fonts you want at compile time. All font data is in two files font.h and aux_font.h, replace them with your own fonts if you wish. 
There is a utility called bmp2header_font which will create a font.h file from a bitmap(one large image with characters sequential in the x direction), not you must manually rename text_array to aux_text array in the header if you want to use it for aux_font.h**by default the code builds with the original Sparkfun font in default, and a double sized font in aux, you can switch back and forth with command 0x08**the text is now rendered using the bitblt function, so you can use all the logical modes(0..7) for drawing text, set the mode with command 0x0A 0x##.Graphical stuff:**the coordinate system is now (0,0) in upper left and increasing right and down**the line and circle algorithms have been replaced to use only integer math (Bresenham's line algorithm and midpoint circle algorithm)**support for NUM_SPRITES sprites labled 0..(NUM_SPRITES-1). each sprite has SPRITE_SIZE bytes of data dedicated to it. the default #defines in the code set these numbers to 8 sprites of 34 bytes each, you can change this if you want, but be careful not to use up all the SRAM, the ATmega168 only has 1K. 34 bytes/sprite is enough for a 16x16 block, although sprites don't have to be square. command 0c0B draws a sprite, the sparkfun logo is sprite 0, but this can be overwritten. Command 0x0D uploads a sprite.**draw filled box command. 
this draws a box filled with a repeating byte(virtical stripe of data) ie 0x00 would clear every pixel, 0xFF would set them, 0xAA would draw stripes** bitblt if you want to draw something larger than a sprite.**raw read/write functions: write_command, write_data, write_block, read_data, read_byte, read_block**reverse mode now inverts the graphics on the screen instead of clearing it.Command list:all commands must start with 0x7C, this tells the display that the following bytes are part of a command0x00: clear screen, no other bytes (ex 0x7c 0x00) clears/sets all pixels on the screen(depending on if reverse is set) sets x_offset and y_offset to 00x01: set debug level, follow with byte for debug level(0,1,2) level 0 is no messages level 1 is binary messages, the display will echo a byte = command when the command is started and a byte = 0x00 when the command is finished level 2 is text messages, useful when using a terminal. will echo "Entered Command n" when command n is started and will echo "Exited Command n, RX_buffer=y" when the command is done, and will report how many bytes are in the recieve buffer0x02: set backling duty cycle. argument is duty cycle in percent ie (0..100) ex: 0x7C 0x02 0x32 will set the backlight to 50%0x03: draw circle. arguments are x, y, radius, set/reset. so command (0x7C 0x03 0x20 0x20 0x0A 0x01) will draw a circle at (32,32) with a radius of 10 by setting the pixels0x04: toggles CR/LF. each time this command is sent, wheather or not a CR automatically executes a LF is toggled. this is saved to EEPROM and is persistant over power cycles0x05: erase block. draws a block on the screen with clear(reverse=0) or set(reverse=1) pixels arcuments are x1,y1,x2,y2 the coordinates of two opposite corners of the block. ex: (0x7C 0x05 0x00 0x00 0x10 0x10) clears from (0,0) to (16,16)0x06: LCD reset. resets the LCD, clears the screen, and sets x_offset and y_offset to 00x07: change baud rate. 1..6. ex: (0x7C 0x07 0x05) sets baud 57600. 
this setting is persistant over power cycles. 1=4800 2=9600 3=19200 4=38400 5=57600 6=115200 | http://forum.arduino.cc/index.php?topic=113612.msg881338 | CC-MAIN-2017-22 | refinedweb | 1,045 | 69.01 |
12 Oct 09:45 2012
Re: bug in Biostrings mismatchTable?
Hervé Pagès <hpages@...>
2012-10-12 07:45:36 GMT
2012-10-12 07:45:36 GMT
Hi Janet, Thanks again for the bug report. This one should be fixed in Biostrings 2.26.2 (release) and 2.27.3 (devel). Cheers, H. On 10/10/2012 05:13 PM, Janet Young wrote: > Hi there, > > I think I've found a bug in mismatchTable (Biostrings). It's reporting a mismatch after the end of the reported alignment. I think the code below shows the problem. > > thanks, as usual! > > Janet > > ##### > > library(Biostrings) > > ### couple of seqs, the middle portion aligns, but the last few bases don't. I'm not interested in those last few bases, so I do a local alignment > seq1 <- DNAString("GCTGAAGTAGTTCTCCAGAA") > seq2 <- DNAString("GTAGTTCTCCAAAGT") > aln1 <- pairwiseAlignment ( seq1, seq2, type="local" ) > aln1 > # Local PairwiseAlignmentsSingleSubject (1 of 1) > # pattern: [7] GTAGTTCTCCA > # subject: [1] GTAGTTCTCCA > # score: 21.79932 > > end(pattern(aln1)) > # [1] 17 > > mismatchTable(aln1) > # PatternId PatternStart PatternEnd PatternSubstring PatternQuality > #1 1 18 18 G 7 > # SubjectStart SubjectEnd SubjectSubstring SubjectQuality > #1 12 12 A 7 > #### the one mismatch that's reported is after the end of the alignment as reported above. There's another mismatch after the end of the alignment that wasn't reported > >.2 IRanges_1.17.0 BiocGenerics_0.5.0 > > loaded via a namespace (and not attached): > [1] parallel_2.16.0 stats4_2.16.0 > > _______________________________________________ >: | http://permalink.gmane.org/gmane.science.biology.informatics.conductor/44057 | CC-MAIN-2015-18 | refinedweb | 236 | 65.62 |
In mid-February 2000 the people responsible for Borland's C++ products made the core of their product line available as a free download.
They are giving away the command-line compiler tools and libraries. These are the core of the retail Borland programming tools, like Borland C++ Builder - all the power that runs these expensive products, but without the fancy, user-friendly graphical interface - you have to pay for that...
The aim of this tutorial is to get beginners up and programming in C++, easily and cheaply, using a nice friendly replacement for Borland's own expensive GUIs. I assume you are capable of downloading and installing a file by yourself, and are using some variant of Microsoft Windows.
Why would you want to use this C++ compiler? Presumably you want to learn and use C++. This compiler is fast and free. If you're a command line freak and want to compile under DOS, forget it - and use Linux.
The main problem with this free C++ compiler is that it's very hard for the average computer user, brought up on Windows, to use. No buttons to click, just all sorts of magic invocations that have to be typed in by hand at the command prompt (the what?).
To overcome this learning curve a little, this tutorial will guide you through downloading and setting up the free tools, and setting up a nice friendly Windows interface to work from. We'll be using my favourite text editor, EditPlus, but other programmers' text editors can be set up in a similar fashion.
We need to download two software packages:
The text Editor's the easiest, just go to, download and install the current version (about 1MB). For the purposes of this document, we are using version 2.x. EditPlus is shareware, so if you end up using it, do remember to register it.
Getting the Tools from Borland took me a long while. At the Borland website, as of February 2000 (the month this free offer started), you needed to have both cookies and Java script enabled on your browser. I had neither at first. Hopefully this situation will change, and it will all become a little easier to access. You need to go to, Register, then fill in a questionnaire before you can download anything. You should then be given a link to download the package from. It weighs in at about 8MB.
Don't install the Compiler yet, we'll cover that in the next section:
This is the part where most new programmers will get unstuck. The instructions that come with the free Tools package are terse to say the least - not very helpful where you are unsure about how to set your 'Path Environment and Windows 95/98 differ here, so we'll break off to separate sections:
Restart your machine, and you'll be ready to...
We are going to be using the text editor you downloaded in Step one, EditPlus, a great shareware editor. It is very good at interfacing with command line tools like our free compiler, even without much configuring. The following instructions are for Version 2, but Version 1 differs only very slightly...
The whole aim of this tutorial is to wean beginner C++ programs by simply typing 'em in and pressing Ctrl+1. Try it with the following (This is standard C++, not the C that many 'C++' tutorials will teach you...):
// Hello World
///////////////////
#include <iostream>
using namespace std;
int main()
{
cout << "Hello World!" << endl;
return 0;
}
You will very quickly out grow this method of programming - one file programming is only really sufficient for learning purposes. You will need to learn about make You can read, or download this whole book free. It's a really good introduction to C++, and takes you to quite an advanced level. The great thing about this book is that is deals with, and teaches the recent C++ standard, not just a whole bunch of old C constructs like many books do.
The C++ Programming Language, Bjarne Stroustrup Bjarne invented the language, and his book remains the authoritative reference. It has been updated through a number of editions as new features are added to the language.
The C++ Standard Library, Nicolai Josuttis By itself, C++ does not know how to write to your screen, or open a file. These indispensable functions are performed by library functions. The C++ library adds to the C library, including an implementation of the STL. This book serves as an exellent tutorial and reference.<br />
method1 (list the methods you want to use from Borland here one below another)<br />
method2<br />
method3
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/606/Getting-started-with-Borland-C?msg=794960 | CC-MAIN-2017-43 | refinedweb | 805 | 69.62 |
Created on 2006-07-03 13:22 by mamrhein, last changed 2007-11-01 17:41 by gvanrossum.
In order to reduce the pickle footprint of UUIDs I
would add a __reduce__ method to class UUID like
def __reduce__(self):
return (uuid, (self.int,))
together with a helper function (at module level) like
def uuid(i):
return UUID(int=i)
Is the footprint of UUID an issue?
Note that changing the pickle format of UUID will require code that can
unpickle both versions, for compatibility. I don't really see the need.
Also, no real patch provided.
No clear problem, no patch. | http://bugs.python.org/issue1516327 | crawl-002 | refinedweb | 102 | 66.84 |
electron-cookies
Provides document.cookie support for Electron
Installation
npm install @exponent/electron-cookies --save
Usage
import ElectronCookies from '@exponent/electron-cookies'; // Add support for document.cookie, using the given origin (protocol, host, and // port) ElectronCookies.enable({ origin: '', }); // Remove support for document.cookie. Cookies are not cleared from the // underlying storage. ElectronCookies.disable();
Behavior
When getting or setting a cookie, the browser needs to know the current URL of the page.
In Electron, HTML files are usually served from the local filesystem, which has no domain. electron-cookies provides you a way to specify the origin of the URL to use:
ElectronCookies.enable({ origin: '' });
This tells electron-cookies the origin of the URL to use when accessing cookies. The path of the URL is the relative path from the app's root to the path of the HTML file. So if the app is at
/Users/you/Desktop/ and the HTML file is at
/Users/you/Desktop/web/index.html, the synthesized URL will be.
Alternatively, if you omit an origin, the full
file: URL of the HTML file is used instead. | https://www.npmtrends.com/@exponent/electron-cookies | CC-MAIN-2021-39 | refinedweb | 182 | 50.43 |
Hi, > > Semantic URIs represent not just documents but also abstract concepts > > and may be referenced by other datasets. > > I'm not sure what this actually means. What kind of abstract concepts? > The idea of there existing a relationship between packages called > «Depends»? The package is a concept, the relationship is a concept too. For more about Semantic Web URIs: > Assuming we use HTTP for this, HTTP redirects are a thing, so we can > always move stuff around if we need to. Or we could just get this right the first time. The namespace chosen should be able to have "persistent/permanent URLs" (PURLs) defined in it. Redirects are useful but they shouldn't be used as an excuse for messing up the first attempt. Thanks, Iain. -- | https://lists.debian.org/debian-project/2015/12/msg00042.html | CC-MAIN-2020-05 | refinedweb | 126 | 74.49 |
I would like to be able to programatically detect the last bar / day of a backtest so I can generate a summary report (via the log api) on that day
is there any way I can detect this?
I would like to be able to programatically detect the last bar / day of a backtest so I can generate a summary report (via the log api) on that day
is there any way I can detect this?
Hi Jason,
I gather that you would prefer not to enter the ending bar datetime stamp manually? This would work but is awkward.
I've attached code that sorta solves the problem. The assumption here is that end_date = context.spy.security_end_date will provide the most recent trading date for SPY, advanced by one trading day. Then, if your backtest ends on a typical Mon. through Thurs. (market open and no early close), you can detect the last call to handle_data. I think that the more general case can be coded, as well, but I have to dig into.
Quantopian folks, it would be handy if security_end_date actually returned the date for the last trade data available to the backtest algorithm. Why is it advanced by one trading day? Is this for compatibility with live trading?
Grant
from datetime import timedelta def initialize(context): context.spy = sid(8554) end_date = context.spy.security_end_date context.last_date = end_date - timedelta(days=1) print context.last_date def handle_data(context, data): if get_datetime() == context.last_date: print 'last call to handle_data'
Thanks for the suggestion Grant! I will give this a try. Just looking at the code though, it seems a bit strange that it would work? I though security_end_date was for the current bar (if the security isn't delisted ) so really seems strange that should be detectable as the end date....
also, I am assuming this only works in daily mode? I do all my work in minute mode so I will give this a try to verify, then dig through tradigncalendar.py and try to figure out a solution as you suggest.
Grant,
Firstly, I am a big fan - thank you so much for your contributions to this forum.
However, I can't seem to get your code to do what I think is intended. Context.last_date always ends up being one day before the security_end_date without regard to the backtester dates. So it will only print 'last call to handle_data' if the backtester end date is set to the most recent (ie today's) date. I thought the timedelta function would somehow magically calculate the delta of the security_end_date and the backtester end-date so then context.last_date would be the last backtesting date. Am i misunderstanding the point of the thing?
Currently, I manually set a date to run final performance metrics. In practice this means I do every backtest twice - the first time just to realize that i forgot to set the final date in the code. Any help would be great appreciated!
Hello Robby,
The assumption here is that you'll be running the backtest up to the most recent date for which data are available. As you can see from the attached backtest, sometimes the value of 'days' in timedelta(days=3) will need to be adjusted upward, when the security_end_date is advanced by more than one calendar day (e.g. the backtest ends on a Friday).
This is not ideal, and buried within the backtester is the datetime stamp of the last bar (daily/minute). If this could be accessed, then testing for the last call to handle_data would be straightforward.
The code here should work, so long as you tweak the 'days' value. For minute bars, it'd have to be modified to include the last minute bar datetime stamp. If you need this, just let me know.
Grant
from datetime import timedelta def initialize(context): context.spy = sid(8554) end_date = context.spy.security_end_date context.last_date = end_date - timedelta(days=3) # adjust value of days print 'context.last_date = ' + str(context.last_date) def handle_data(context, data): print get_datetime() if get_datetime() == context.last_date: print 'last call to handle_data'
Thanks to Grant's revelation of the strange (maybe just unexpected?) output from the "dir" command in another thread, I think I solved this problem for you. It uses a regex to grab the last date set in the simulation. It then looks up this date in the zipline utilities to find the closing time. | https://www.quantopian.com/posts/how-to-detect-the-last-bar-of-a-backtest-for-generating-a-summary-report | CC-MAIN-2018-39 | refinedweb | 738 | 65.83 |
Add XY Coordinates (Data Management)
Summary
Adds the fields POINT_X and POINT_Y to the point input features and calculates their values. It also appends the POINT_Z and POINT_M fields if the input features are Z- and M-enabled.
Illustration
Usage
Add XY Coordinates is most commonly used to get access to point features to perform analysis or to extract points based on their x,y location.
If the POINT_X, POINT_Y, POINT_Z, and POINT_M fields exist, their values are recalculated.
If points are moved after using Add XY Coordinates, their POINT_X and POINT_Y values, and POINT_Z, and POINT_M values—if present—must be recomputed by running Add XY Coordinates again.
Project does not modify the values of POINT_X, POINT_Y, POINT_Z, or POINT_M.
If the Input Features are in a geographic coordinate system, POINT_X and POINT_Y represent the longitude and latitude, respectively.
If an ArcMap layer is selected as input , the x,y coordinates are based on the input's coordinate system, not that of the data frame.
This tool modifies the input data. See Tools with no outputs for more information and strategies to avoid undesired data changes.
Syntax
Code Sample
The following Python Window script demonstrates how to use the AddXY function in immediate mode.
import arcpy from arcpy import env # Author: ESRI #) | http://help.arcgis.com/en/arcgisdesktop/10.0/help/0017/001700000032000000.htm | CC-MAIN-2016-07 | refinedweb | 213 | 53.92 |
#include <djv_renderer.h>
Renderer class provides the general interface to render images.
Destructor.
Adds the chunk as its data.
This method firstly examines the chunk content by both its chunk Id and the actual content and if the chunk is acceptable, consumes the content and updates the internal state; otherwise if the content is not acceptable, that is, the content is not for the render or some invalid data, this method do nothing but returns immediately.
trueif the chunk is correctly consumed; otherwise
false.
Get bits-per-component on the original image file.
Get dot-per-inch on the original image file.
Gets the original height of the image in pixels.
Referenced by Celartem::DjVu::ImageRenderer::render().
Obtains the internal renderer instance.
Get photometric-interpretation of the original image file.
Gets the original width of the image in pixels.
Referenced by Celartem::DjVu::ImageRenderer::render().
Renders a portion of the mask.
Implemented in Celartem::DjVu::ImageRenderer. | https://www.cuminas.jp/sdk/classCelartem_1_1DjVu_1_1Renderer.html | CC-MAIN-2017-51 | refinedweb | 156 | 52.76 |
Filtering with dotCover
dot:
Clicking OK and re-running the tests, we now see that the entire test assembly has been excluded
Excluding an entire Namespace
We can exclude entire namespaces by setting it in the Class Mask as shown below::
produced by adding the following filters:!
20 Responses to Filtering with dotCover
Scott Marlowe says:July 9, 2010
Is it possible to set these filters from the command line?
hhariri says:July 9, 2010
@Scott
Currently no but it will have these features.
Brian says:July 14, 2010
Will there be exclusion by attribute? That’s a feature I’ve used with NCover and really like.
hhariri says:July 17, 2010
@Brian,
Not sure about that one but I’ll check (or someone else will follow-up). However, can I ask, why not use filters? It’s less intrusive and does not pollute your code with framework specific attributes.
Simon Hargraves says:August 3, 2010
It would be awesome if we could right click on the assembly line in the Coverage result area and exclude it from there also.
Hadi Hariri says:August 6, 2010
@Simon,
That’s already on the list of features but most likely won’t be in the first version
Steve Dunn says:August 19, 2010
I would really like to be able to use the [CoverageExclude] attribute. Without this, I probably wouldn’t recommend it for my team. We have a huge amount of code and use this attribute to signify to nCover to ignore the type.
I’m not a great fan of attributes, but I find this particular attributes makes the intent of the code clearer, e.g. ‘this component physically touches external dependencies and can’t be tested’ (whether right or wrong!)
Steve Dunn says:August 19, 2010
It looks like dotCover suffers the same problems with Lamdas that nCover does. For instance, I cannot write a unit test that shows 100% coverage for the following method:
public static IEnumerable Pairwise(this IEnumerable source, Func resultSelector)
{
TSource previous = default( TSource ) ;
using( var it = source.GetEnumerator( ) )
{
if( it.MoveNext( ) )
previous = it.Current ;
while( it.MoveNext( ) )
yield return resultSelector( previous, previous = it.Current ) ;
}
}
Hadi Hariri says:August 19, 2010
@Steve
Coverage attribute will be added in future versions. The Lambda issue is a known, however we’re going to look into that particular code.
Steve Dunn says:August 19, 2010
Hadi, thanks. I pasted the wrong snippet of code. The above code IS covered by my xUnit tests, but for some reason, when I run the whole test suite at once, it says it’s not covered.
I think this is because I do quasi-bdd style tests, something like:
public class FooSpecs
{
public class WhenFooIsConfigured
{
public class And_someone_calls_it_who_IS_permissioned
{
[Fact]
public void It_should_do_something( )
{}
}
public class And_someone_calls_it_who_IS_NOT_permissioned
{
[Fact]
public void It_should_do_something( )
{}
}
}
}
The R# test runner sometimes doesn’t run the test when I right click on FooSpecs, sometimes it does. Also, if I dotCover the tests manually and then dotCover the whole test suite, it says that previously covered code is no longer covered.
I’ll shortly be seeing how it copes with mSpec.
Hadi Hariri says:August 19, 2010
@Steve
I use MSpec. Can’t get weirder than that :).
Could you log it with an attachment project so the devs can take a look, at ?
Michael says:December 24, 2010
Is it possible to exclude all files ending with one given pattern? For instance I would like to exclude from the coverage analysis all files in the solution matching the pattern *Specs.cs (i.e ending with Specs).
The reason for that question is that our tests are defined in the project where the object being tested resides so that we do not have to duplicate the folder structure in a separate test project.
Thanks
Hadi Hariri says:December 26, 2010
@Michael,
You’d need to do it for each of the folders with Folder.Specs.*. Not sure if *.Specs would work.
Michael says:December 27, 2010
@Haidi,
I had tried *.Specs, *.Specs.cs etc.. already but unfortunately this does not work.
Filtering each folder would theoretically do the trick, but would be way to much work.
Is this a filtering feature that you guys might consider implementing?
Cheers,
Hadi Hariri says:December 29, 2010
@Michael,
Not sure if it’s on the backlog. Could you please add it as a feature request at
Phil says:February 8, 2011
I have been using dotCover for about a month now.
You mention that this is a beta product and that you aim to make it easier to manage filters in future releases so here are some of my suggestions and comments based on what I have played around with so far:
I like the right-click folder, project, etc to add/remove from coverage filters.
It’d be nice to have this in both the solution explorer and the coverage results window, this way I can manage down to the function level.
Would like to see a folder/file/path name filter option as well. Using WCF and other MS tools that auto generate code with specific filenames can be filtered out. For example one could exclude all Reference.cs below the root solution folder.
for reporting I think it could be useful to keep a histogram counting the number of times a method is called during the analysis. This way if I have a few methods that are not covered well, but one that is called more frequently I can concentrate effort there as it is likely a more significant point of potential failure.
cheers,
Phil
Michael K. Campbell says:June 21, 2012
I’m using dotCover 2.0 and still don’t get why setting up filters/exclusions is so hard.
The new UI where I can set up filters IS great. Fantastic even – as it’ll let me use wildcards and so on. So that’s a GREAT win.
BUT, I still see a HUGE disconnect in the UI.
Right now in the Coverage Browser, I can go in… right click on entire assemblies, namespaces, classes, methods, and so on – it’s INSANELY granular and VERY easy to use.
Then I can save said ‘coverage’ to a .dcvr file.
What I’d LOVE to do?
Be able to LOAD that back in and have THAT be what is used for setting all of my filters. Or, maybe some other file/etc.
But it seems to me that I should be able to just save a ‘MySolution.dcvr’ (or .whatever) and have THAT be the more-or-less ‘defacto’ list of coverage that I care about when I re-run coverage. Ideally, there’d be some sort of convention that would try to re-load that per solution… but even WITHOUT that option if I could just use the dotCover > Open Coverage Snapshot and have THAT remember all the crap I excluded before AND have that be what shows up in the Unit Tests window/Coverage tab?
THEN this would be IDEAL for me.
Because, as it is, I still find myself scratching my head each time I open this up, exclude a few classes/assemblies I don’t care about and then see the option to SAVE and then LOAD these ‘snapshots’ without ANY of the state (i.e. exclusions/filters) persisted.
Jesse Jacob says:July 18, 2012
I second what Michael Campbell said. The disconnect between the test runner’s coverage tab’s filtering abilities and the dotCover 2.0 Edit Coverage Filters screen is just weird. I’ll add a feature request, but I’m sure there’s some reason you guys aren’t doing this because it seems really, really obvious that it’s a huge win for your users if both UIs can generate permanent filters.
Jesse Jacob says:July 18, 2012
It’s already a feature:
Everyone who wants this, please log into youtrack and vote it up!
Jalal Mohammad says:July 29, 2014
C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\VSTest.Console.exe
/Platform:x64 UnitTestProject1.dll
C:\Intel\dts_brita-repo\BritaAPI\BritaAPI_CS\UnitTestProject1\bin\Release
C:\Intel\dts_brita-repo\BritaAPI\BritaAPI_CS\UnitTestProject1\bin\Release
output_BRITAAPI.xml
BritaAPI
BritaAPI.TproveCSAPI.*
*
UnitTestProject1
*
*
i am trying to exclude TPROVECSAPI namespace from the report but i still see it..
Can anybody help? | https://blog.jetbrains.com/dotnet/2010/07/08/filtering-with-dotcover/ | CC-MAIN-2020-45 | refinedweb | 1,391 | 63.8 |
ptsname - get name of the slave pseudo-terminal device
#include <stdlib.h> char *ptsname(int fildes);
The ptsname() function returns the name of the slave pseudo-terminal device associated with a master pseudo-terminal device. The fildes argument is a file descriptor that refers to the master device. The ptsname() function returns a pointer to a string containing the pathname of the corresponding slave device.
This interface need not be reentrant.
Upon successful completion, ptsname() returns a pointer to a string which is the name of the pseudo-terminal slave device. Upon failure, ptsname() returns.
grantpt(), open(), ttyname(), unlockpt(), <stdlib.h>. | http://pubs.opengroup.org/onlinepubs/007908775/xsh/ptsname.html | CC-MAIN-2014-10 | refinedweb | 101 | 59.5 |
Yahoo’s YQL stand for Yahoo Query Language. It’s a query language very similar to SQL that lets users query their multiple web services using a single unified language so that users wouldn’t have to learn multiple APIs. The information about the service is here.
The YQL console gives you examples of the statements that are used to query Yahoo’s APIs. These queries are what we will use to send as a parameter to the YQL service. For instance, in the screen shot below we have a query making a request to Yahoo’s Local service and searching for fast food locations.
YQL Query searching Yahoo’s Local service for fast food locations.
Also note, there are other options that are included. We have the option of whether we want the information returned as XML or JSON. We can also choose if we want diagnostic and debug information returned in our result set. These option will function as the query-string parameters that we will send as part of our request to the YQL service.
Take some time to play with the console and create the statements that you will like to send to the service.
This service is available to the public, so you don’t have to create an API key if you don’t want. Yet, there are some restrictions that may be applied to the times you can call it they make be affected if you are making an anonymous call.
To get started first we’ll open up Visual studio and create a console project called YQLExample.
First we need to add a few things to get this working. First add a reference to the System.Web assembly. This will give us access to the
HttpUtility class that will allow us to encode the query we will send.
HttpUtility
Next use the package console manager to get the Newtonsoft JSON serielizer using this command
PM> Install-Package Newtonsoft.Json
Now add the following using statement to the file.
using System.Net;
using System.Web;
using Newtonsoft.Json.Linq;
The System.Net namespace will give us access to classes that will allow us to make the request to the web service and get results back. The
Newtonsoft.Json.Linq namespace will give us access to classes that will help us to manipulate the returned JSON we will get back from the web service.
System.Net
Newtonsoft.Json.Linq
First build the string which will be used as the URL to the web service.
Take note of the format parameter we are adding that will make the service give us our results as JSON and the diagnostics parameter that will keep the service from returning parameter information back to us.
Next we will use the WebClient class to make a call to the service and to return our results back as a string.
WebClient
Finally we use the Newtonsoft JSON serializer that gives us access to powerful classes that let us manipulate our results from the web service call.
First, we turn the results from the web call to a JObject object using the Parse function from the JObject class. Then we “travel” down the hierarchy of the
JObject to cast our results as a JArray. this lets us have our results as an enumerable collection of
JToken objects. We then loop through our collection and display the results to the console.
JObject
JArray
JToken
And that’s pretty much it. You can modify the YQL query that we send to the service as you like. I also advise playing with the console and checking out the YQL documentation. You will be amazed at the amount of information that is freely available.
Link to a “Gist” of the code in it’s. | http://www.codeproject.com/script/Articles/View.aspx?aid=552378 | CC-MAIN-2016-30 | refinedweb | 628 | 72.26 |
Memory leak somewhere?
My code is using massive amounts of memory (eventually), which I don't think should be the case. The memory use slowly increases until my computer runs out of memory. I tried enabling garbage collection, but that didn't help (or didn't help enough). I don't see any reason why this should use more and more memory. It takes somewhere between 5 and 10 hours for this program to use up my 16 GB of memory, but that memory user is increasing is clear quickly.
import numpy trials = 5800 length = 5000 mean = 0 results = [0]*trials data = [0]*length for i in range(trials): data[0] = numpy.random.normal(0,1) for j in range(1, length): data[j] = data[j-1] + numpy.random.normal(0,1) for k in range(2, length): if data[k] > 2*sqrt(k*log(log(k+1))): results[i] += 1 mean += results[i] | https://ask.sagemath.org/question/34364/memory-leak-somewhere/ | CC-MAIN-2017-09 | refinedweb | 154 | 65.93 |
An asynchronous Python library for building services on the Facebook Messenger Platform
Project description
Boomerang
Boomerang is an asynchronous Python library for building services on the Facebook Messenger Platform. See the documentation at ReadTheDocs!
Why should I use Boomerang?
- Requests are handled asynchronously, allowing for rapid handling of callbacks and messages. Boomerang is built on the incredibly fast Sanic webserver, uvloop replacement for asyncio’s event loop, and aiohttp for asynchronous HTTP requests.
- Boomerang has 100% test coverage and extensive documentation for learners.
- There are two options for interfacing with the Messenger Platform: use the high-level API which introduces abstractions like acknowledge, which marks a received message as seen and displays a ‘typing’ icon while preparing the actual response; or use the low-level API for more flexibility and send actions individually.
- The library is open-source under the MIT License, and can be used for commercial purposes.
Why shouldn’t I use Boomerang?
- The library uses Python 3.5’s async and await syntax, as does the underlying Sanic server. If support for older versions is required, Boomerang isn’t a great fit.
- Boomerang hosts its own server (Sanic), which allows for tightly-integrated and rapid handling. However, if you want to use a different server (like Flask), Boomerang isn’t suitable.
Example
The following example is a simple echo server. When a user sends a message to the bot, the bot echoes the message back:
from boomerang import Messenger, messages, events # Set the app's API access tokens, provided by Facebook {0}'.format(message.text)) # Inform the sender that their message is being processed await self.acknowledge(message) # Return the message's text to respond return message.text app.run(hostname='0.0.0.0', debug=True)
Features
- Support for the Webhook, Send, Thread Settings, and User Profile APIs.
- Full support for message templates.
- High- or low-level interface for sending messages.
- Automatic attachment hosting: the library can send a local file by serving it statically with a unique URL, which is passed to Messenger. This is cached meaning files are only served once, helping performance.
Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template. Cookiecutter is really cool and you should check it out!
History
0.6.0 (12-2-2017)
- BREAKING CHANGE: Handler functions are no longer overridden in the Messenger class; instead, the @app.handle decorator is used. See the documentation for more details.
- Handler functions can now simply return responses in a variety of formats, which are interpreted and sent as a reply.
0.5.0 (5-2-2017)
- Implement the User Profile API.
0.4.0 (5-2-2017)
- Implement the Thread Settings API.
0.3.0 (4-2-2017)
- Add automatic attachment hosting using the internal server
- Add proper handling of Messenger API errors
0.2.1 (1-2-2017)
- Update dependency versions to fix VersionConflict in Travis CI.
0.2.0 (1-2-2017)
- Implement the Send API. All non-beta templates and messages are supported (except for the airline templates).
0.1.0 (25-12-2016)
- Implement the Webhook API, with handling of all non-beta event types excepting the ‘message echo’ event, which will be added upon completion of the Send API implementation.
0.0.0 (22-12-2016)
- Initial development version.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
boomerang-0.6.0.tar.gz (33.2 kB view hashes) | https://pypi.org/project/boomerang/ | CC-MAIN-2022-27 | refinedweb | 585 | 58.69 |
State of voice recognition
As computers diminish in size and increase in portability, the need to interact with them without using a keyboard or mouse increases. Voice is an alternative. Superficially, much less bandwidth is available with voice communication than with visual interaction. As a result of the impression that a picture equals a thousand words, computers render output to screens in response to keyboard and mouse input much more readily than they accept audio input and respond in kind.
Successful techniques already exist, however limited, to issue instructions to smaller computers with voice. The goal is to have a computer react to speech and take a specific action based on the command. The general process to achieve this goal is to build (or adapt) a model, apply a spoken command against that model in a recognition process, and then decide on an action in a dialog manager. Models can be broad in the sense of recognizing a variety of voices but few commands, or you can train your own model from a specific grammar that gives the possibility of quite complex interpretation and interaction. Figure 1 shows the speech recognition and interpretation development flow.
Figure 1. Speech recognition and interpretation development flow
It is important to draw a distinction between natural language processing (NLP) and specific n-gram grammars. The latter states categorically what the recognizer can expect to hear, while the former does its best to decode natural language into a simpler structure by discarding some elements of the received speech and rearranging others. While technologies such as SRGS and SISR lean mostly toward the processing of natural language, programmers look for ways to use those same tools for other types of grammars.
This article uses an SRGS approach to defining fixed grammars and addresses the issues of out-of-vocabulary (OOV) prompts and context in the dialog manager using the example of a set of 2-gram or bigram examples.
The programming perspective
From a programming perspective, a few issues are important. First, you can express a grammar in a number of different ways, depending on which speech recognizer application you use. Second, those grammars are poorly coordinated with the dialog managers that use them. Dialog managers are important in their own right because they offer ways to deal with issues such as intelligent response and OOV recognition. Third, because you are talking about specific grammars designed to do a specific job, you have to rewrite or adapt the model and the dialog manager for each one. The more you can autogenerate them, the better.
Intelligent response implies that the computer takes context into account. If your question asks for a temperature and your previous question was about your computer, then it must be the computer's temperature that you need.
OOV is a common problem in small grammars. To put it simply, it says that some prompts are important and others are necessary for building the model but not important for later processing.
Autogeneration is straightforward, using scripts, such as Bash, Perl, and PHP, or regular programming languages, such as C and C++, provided that there are clear rules. And SRGS is designed to encapsulate those rules.
An example
Listing 1 is a plain-text grammar in prototype.
Listing 1. Prototype of grammar
COMPUTER WAKE COMPUTER STATUS COMPUTER SLEEP
The grammar in Listing 1 is quite simple and specific. It tells the computer that it hears only three possible prompts. Each prompt starts with the word
COMPUTER and can be followed by
WAKE,
STATUS, or
SLEEP. No other commands are possible. The speech recognizer has only one job, which is to choose whichever of the three options it considers to be the closest to what it heard and pass that command to the next stage. For instance, if I say
MAKE COFFEE, it returns
COMPUTER plus one of the three alternative words. The dialog manager should apply some intelligence. For example, if it hears
COMPUTER SLEEP, it should not respond to any more commands until it hears
COMPUTER WAKE. It should respond to
COMPUTER STATUS only if it is in a
WAKE state, at which point it can announce the processor temperature, free space on disk, and a whole host of other interesting things. It is not a practical grammar by any means—when building an acoustic model from grammars as small as this, you soon run into problems regarding insufficient samples. This prototype is intended only as an illustration of the principle.
Training a computer to recognize spoken sounds and apply grammar rules to what it hears is a fairly straightforward process, even in the world of open source. For complete guidance about how to achieve an effective speech recognition system using a fixed vocabulary, see the VoxForge site. The VoxForge tutorials use tools such as HTK from Cambridge University and the Julius voice recognition engine from the University of Nagoya in Japan. See Resources for links to all of these sites.
Building an audio model with HTK requires that you express the grammar in a particular format, as in Listing 2.
Listing 2. Grammar in HTK format
$major = COMPUTER ; $minor = WAKE | STATUS | SLEEP ; ( SENT-START ( $major $minor ) SENT-END )
The same process with the Julius engine requires a slightly different format, as in Listing 3.
Listing 3. Grammar in Julius format
S : NS_B SENT NS_E SENT: MAJOR MINOR MAJOR: COMPUTER MINOR: WAKE STATUS SLEEP
The HTK and Julius formats share structural similarities from a programming viewpoint, but they are sufficiently different that they are not interchangeable.
Listing 4 shows a basic dialog manager in PHP that can deal with this grammar.
Listing 4. A plain dialog manager
<?php ... function dm($prompt_heard) { global $wake_state; // FALSE is asleep so do not respond, TRUE is awake $parts = explode(" ",$prompt_heard); $minor = $parts[1]; switch ($minor) { case 'WAKE': $wake_state = TRUE ; break; case 'SLEEP': $wake_state = FALSE ; break; case 'STATUS': if ($wake_state) { announce_status(); } else { // do nothing } break; default: // OOV - any other prompt, just ignore it. break; } } ?>
This PHP function passes the result from the recognizer as an argument in
$prompt_heard. The
$wake_state variable is declared to be a global and known throughout
the dialog manager. The
explode() dissects the
parts of the prompt heard. In this example case, you know that the first
part will be
COMPUTER, so the following
switch looks only at the minor part. If
WAKE, then set the wake state to
TRUE. If
SLEEP, then set the wake state to
FALSE. If the minor part is
STATUS, then
announce the result, but only if the wake state is
TRUE. You can announce the status to speakers using a process similar
to the one explored in the developerWorks article "PHP bees and audio
honey: Accessible agent-based audio alerts and feedback" (see Resources). The default section will catch any
additional prompts that you add to exercise the recognizer—for unrecognized prompts don't do anything, except maybe write to a log for later troubleshooting.
From a programming efficiency point of view, if you can even partly autogenerate the grammar and the dialog manager, you'll save time and effort plus improve accuracy. The efforts to create even higher level grammars to do this (see Resources) are largely in the realm of NLP.
How SRGS can help
SRGS is intended to express the same ideas as Listing 2 and Listing 3 but in the more rigorous structure provided by XML, as in Listing 5.
Listing 5. Prototype in SRGS format
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" ""> <grammar xmlns="" xml: <meta name="author" content="Colin Beckingham"/> <rule id="myroot" scope="public"> <example> COMPUTER WAKE </example> <example> COMPUTER SLEEP </example> <example> COMPUTER STATUS </example> <ruleref uri="#command1"/> <!-- ruleref <ruleref uri="#major"/> <ruleref uri="#minor"/> </rule> <rule id="major"> <one-of> <item> COMPUTER </item> </one-of> </rule> <rule id="minor"> <one-of> <item> WAKE </item> <item> SLEEP </item> <item> STATUS </item> </one-of> </rule> </grammar>
Listing 5 shows the customary XML declaration, followed by a
DOCTYPE statement that locates the DTD, in this case pointing at detail related to grammars. It then follows the root element
<grammar>. The grammar element contains a number of important attributes, including namespaces, the mode (which in this case is
voice because the destination for the data is a speech recognizer), and the ID of the root rule, which is the place to start when looking for prompts to match (in this case,
myroot). The
myroot rule contains
rulerefs that point to other rules. The DTD permits
<example> elements for information. The rule with ID
command1 follows. Then follow two more rules,
major and
minor. The
major rule contains the single word
COMPUTER. The
minor rule contains the alternatives
WAKE,
SLEEP, and
STATUS. These last are
<item> elements in a
<one-of> structure, indicating that one and only one can apply at a time. Using the
<ruleref> element of the root rule, the major and minor rules become part of the overall rule structure.
In summary, the rules follow this structure:
- The grammar attribute
rootpoints at the root rule
myroot.
- The
myrootrule contains one or more
rulerefs, each of which has a URI that points to another rule, in this case
command1, which is the
master rulefor an n-gram. The
rulereffor
command2is commented out because it is a placeholder for an as yet nonexistent master rule.
- Master rule
command1contains
rulerefs that show that other rules are used,
majorfollowed by
minor.
The model generation processes for HTK and Julius are not user agents in
W3C terms because they do not currently read SRGS format directly but
instead use SRGS to define the grammar calls for a script to translate
from SRGS to the other formats. In addition, neither HTK nor Julius
generates a dialog manager because of insufficient information. How, for
example, can it guess that
SLEEP means stop responding?
Translating from SRGS to HTK or Julius
Listing 6 is a simple translator in PHP that examines the SRGS version and creates output in the form of an equivalent basic HTK or Julius format grammar.
Listing 6. Translator
<?php // test translator: SRGS to HTK/Julius ; } } } } } // now the output $varnamestr .= ""; $jvarnamestr = ""; foreach ($words as $k=>$v) { // master $jvarnamestr .= "\nSENT: "; foreach ($v as $kk=>$vv) { // sub $i = 0; foreach ($vv as $wd) { // word $j = count($vv); $htkwds[$k][$kk] .= " $wd "; $i++; $jwds[$k][$kk] .= " $wd "; if ($i < $j) $htkwds[$k][$kk] .= "|"; } // htk $varname = "$".$k."_".$kk.""; $htkv .= "$varname = ".$htkwds[$k][$kk].";\n"; $varnamestr .= " ".$varname; // julius $jvarname = strtoupper($k."_".$kk); $jvarwordstr .= "$jvarname: ".$jwds[$k][$kk]."\n"; $jvarnamestr .= " ".$jvarname." "; } $varnamestr = $varnamestr." |"; } // output as HTK echo "\n-------------------------\n"; echo "HTK Version\n-------------------------\n"; $varnamestr = substr($varnamestr,0,-2); $htk = $htkv."( SENT-START (".$varnamestr." ) SENT-END )"; echo "$htk\n-------------------------\n"; // output as Julius echo "Julius Version\n-------------------------\n"; $julius = "S : NS_B SENT NS_E"; $julius .= "$jvarnamestr\n"; $julius .= "$jvarwordstr"; echo "$julius-------------------------\n"; // end echo "Done\n\n"; ?>
The overall goal of the program in Listing 6 is to
scan the SRGS document and fill a multidimensional array with SimpleXML
objects that represent the prompt structure. When the array is complete,
it then generates the string variables required from that array and
outputs to the HTK and Julius formats. The array is filled using a series
of
foreach statements that pick out the root
rule, master rules, and the rules that the masters refer to. The result is
an array where the first key is the name of the master rule and the second
key is the position—0 (major) or 1 (minor). The
$i and
$j variables are counters that
control the addition of a vertical bar (|) which is an OR symbol in the HTK format. Finally, the output uses variables created from the words and the IDs of the master rules. Listing 7 is the output of a sample session.
Listing 7. Translator output
> php mytrans.php ------------------------- HTK Version ------------------------- $command_0 = COMPUTER ; $command_1 = WAKE | SLEEP | STATUS ; $zoo_0 = ANIMALS ; $zoo_1 = TIGER | LION | LEOPARD ; ( SENT-START ( $command_0 $command_1 | $zoo_0 $zoo_1 ) SENT-END ) ------------------------- Julius Version ------------------------- S : NS_B SENT NS_E SENT: COMMAND_0 COMMAND_1 SENT: ZOO_0 ZOO_1 COMMAND_0: COMPUTER COMMAND_1: WAKE SLEEP STATUS ZOO_0: ANIMALS ZOO_1: TIGER LION LEOPARD ------------------------- Done
This code brings you back to the format of Listings 2 and 3. For testing purposes, a second master rule is added to ensure that it processes multiple master rules. Note that this is a basic translator that does not deal with repeats (for example, where a set of numbers can be repeated, as in a phone number). You still need to define other files depending on vocabulary, lexicon, and phoneme structure chosen before you can build the model, but this at least gives you a start. See the VoxForge tutorial in Resources for further guidance.
Dialog manager generator
The programmer now turns to the dialog manager. It is helpful if you can generate at least part of the dialog from the SRGS source. If you work with a context-free grammar, the structure of n-gram (see Resources) might be what you require. In this current n-gram situation, the grammar is fixed. The grammar contains four words, and using those words gives only three possible answers.
While remaining strictly within the standard, the SRGS definition permits you to add a couple of details that are helpful in generating a dialog manager. First, it allows the addition of a
weight attribute to the
<item> element as an integer or decimal number. Second, it allows the addition of
<tag> elements to rules as children that can contain arbitrary strings. These are most often ECMAScript (JavaScript) expressions. They are commonly used to issue SISR instructions to NLP parsers in browsers, but in this instance they might be useful to you for sending hints to a dialog manager generator.
You already have a little information from the grammar: The bigram format calls for two switch statements, which are the minors nested inside the majors. This much is straightforward. But context and OOV call for a bit more than that. This proposal uses the
weight attribute to deal with OOV and the
tag element to handle context.
Listing 8. Enhanced SRGS with dialog manager instructions
... <item weight="1"> COMPUTER <tag>$ WAKE <tag>$wake_state = TRUE;</tag></item> <item weight=""> SLEEP <tag>$wake_state = FALSE;</tag></item> <item weight=""> STATUS <tag> if ($wake_state) { announce_status(); }</tag></item> ... <item weight="0"> ANIMALS <tag></tag></item> ...
Note that the major
COMPUTER has a weight of 1,
but
ANIMALS has a weight of 0. In this context, you want
COMPUTER * to be recognized, but
ANIMALS * is to be ignored as OOV. Additionally, the tag elements contain snippets of PHP code that the generator can insert.
The goal of the dialog manager generator shown in Listing 9 is to build a
dm() function similar to Listing 4.
Listing 9. Dialog manager generator
<?php // Dialog manager generator // Colin Beckingham, 2010 // test dmgen ; } } } } } $dmout1 = "<?php ... function dm(\$prompt_heard) { global \$wake_state; // FALSE is asleep so do not respond, TRUE is awake \$parts = explode(\" \",\$prompt_heard); \$major = \$parts[0]; \$minor = \$parts[1]; switch (\$major) {"; foreach ($words as $mw) { $mww = $mw[0][0]->attributes(); if ($mww->weight == 1) { $maj = " case '".trim($mw[0][0])."':\n "; $maj .= "".$mw[0][0]->tag.""; $min1 = "\n switch (\$minor) {"; $ins .= "\n$maj$min1"; foreach ($mw[1] as $mm) { $min2 = "\n case '".trim($mm)."':\n"; $min3 = " ".$mm->tag."\n"; $min4 = " break;"; $ins .= "$min2$min3$min4"; } $min5 = "\n case default:\n break;\n }"; $ins .= $min5; } } $dmout2= " default: // OOV - any other prompt, just ignore it. break; } } ?> "; echo "$dmout1$ins$dmout2\n"; ?>
The first half of the code in Listing 9 is exactly the
same as the code for the translator in Listing 6. This duplication is intentional because eventually the generation of the grammar and the dialog manager can be accomplished in the same program. Having established the
$words array with SimpleXML objects, you can now scan through those objects and pick out values for not only items, but also weights and tags. After output of some of the introductory static code, the dialog manager generator iterates through the SimpleXML objects, rendering the output in PHP code format and nesting the
switch statements as required.
Listing 10 shows some example output from a test source SRGS file that contains three master rules, one of which should be ignored as test data only.
Listing 10. Generator output
> php dmgen2.php <?php ... function dm($prompt_heard) { global $wake_state; // FALSE is asleep so do not respond, TRUE is awake $parts = explode(" ",$prompt_heard); $major = $parts[0]; $minor = $parts[1]; switch ($major) { case 'COMPUTER': $context = "tech"; switch ($minor) { case 'WAKE': $wake_state = TRUE; break; case 'SLEEP': $wake_state = FALSE; break; case 'STATUS': if ($wake_state) { announce_status(); } break; case default: break; } case 'APPLES': $context = "fruit"; switch ($minor) { case 'PIPPIN': pippin apple break; case 'DELICIOUS': delicious apple break; case 'SPARTAN': spartan apple break; case default: break; } default: // OOV - any other prompt, just ignore it. break; } } ?>
The files that generated this output are available in Download.
Conclusion
With SRGS, you can state requirements for a fixed grammar in addition to their usual role in NLP, providing a central location for the generation of both grammar and dialog manager files. By using the
weight attribute to define whether a master rule is to be detected and the
<tag> element to instruct the dialog manager as to what action to take when a specific prompt is detected, autogeneration of grammar and dialog managers is more rigorous and effective.
To interact with a computer solely by voice is much more difficult work t than using various hardware input devices and monitoring the computer state with visual feedback. As voice programmers, a primary objective should be to make it simpler for users to interact by voice—particularly for those who have no choice but to use voice and ears. While the grammar and dialog manager generators presented here are far from simple themselves, their product can make the process of making simple, stackable tools easier.
Download
Resources
Learn
- VoxForge tutorials: Build your own voice-interactive model.
- Speech Recognition Grammar Compilation in Grammatical Framework" (ACL Anthology Network, 2007): Learn to generate grammar-based language models for speech recognition systems from Grammatical Framework grammars.
- Speech Recognition Grammar Specification Version 1.0: Peruse the W3C syntax to represent grammars for speech recognition and to specify the words and word patterns listened for by a speech recognizer.
- Semantic Interpretation for Speech Recognition Version 1.0: Review the W3C syntax and semantics of semantic interpretation tags for speech recognition grammars.
- Stochastic Language Models (N-Gram) Specification: Examine the W3Csyntax for representing N-Gram (Markovian) stochastic grammars. Stochastic grammars support large vocabulary and open vocabulary applications, and represent concepts or semantics.
- PHP bees and audio honey: Accessible agent-based audio alerts and feedback (Colin Beckingham, developerWorks, Oct 2009): Feed audio information to computer speakers in this PHP approach.
- Querying a database using open source voice control software (Colin Beckingham, Linux.com, May 2008): Check out the a successful attempt to query a database by voice and have the computer respond verbally.
-ulius: Explore a high-performance, two-pass large vocabulary continuous speech recognition (LVCSR) decoder software for speech-related researchers and developers that is based on word N-gram and context-dependent HMM.
- HTK: Delve into this portable toolkit for building and manipulating hidden Markov models. Primarily used for speech recognition research, HTK is also used for research in speech synthesis, character recognition, and DNA sequencing.
-. | http://www.ibm.com/developerworks/xml/library/x-srgsvoicexml/index.html | CC-MAIN-2015-22 | refinedweb | 3,221 | 51.78 |
An Android reflow controller that anyone can build.
The high-side of the board
MAINS_L and MAINS_N are the live and neutral mains terminals, respectively. SW_IN and SW_OUT are the terminals that connect to the on-off switch. The maximum current rating for this design is 8A so the switch must be rated accordingly and should contain an embedded fuse because there’s no provision for one on the board itself. There’s not much chance of a standard household fuse blowing before the triac is destroyed but it should blow before anything really bad, such as a fire, starts.
C4 is an X-class film capacitor rated at 275VAC. It’s used for emissions filtering and can be considered optional if you’re not bothered about that.
You have your fuse for overcurrent protection and now you have R3, a varistor for overvoltage protection. Under normal operating conditions the varistor has a very high resistance but when a voltage spike is encountered the varistor quite suddenly becomes very low resistance, protecting the triac from the spike. I’ve never personally witnessed a triac failure but I’m informed that they can fail explosively so it’s good to protect them if we can.
OVEN_L and OVEN_N are the live and neutral output terminals to the halogen oven. The triac controls when these terminals are switched on and off. Speaking of the triac…
The triac
Q4, R1, U3, Q1, R2 and R4 form the triac control circuit and it’s worth exploring this in a little more detail.
A triac is an electronically controlled semiconductor that, when activated, can conduct current through it in either direction. This bi-directionality makes it a good choice for using a small DC control signal to switch a large AC mains supply on and off. Another key feature is that once a triac is activated (latched) it will stay latched on until the AC sine wave crosses zero again. If we combine this feature with a zero-crossing detector then it becomes a simple matter of timing to ‘dim’ an AC wave by chopping it off at the same point in each half-cycle.
Chopped (dimmed) AC waveform
The diagram above shows an example of how AC dimming is achieved. You wait until the wave crosses zero, then you wait some more depending on how little or how much dimming you want and then you switch on the triac. The triac will then stay switched on until the next zero crossing at which point you start again.
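The timing described above reduces to a simple calculation. The sketch below, using a hypothetical firing_delay_us() helper, shows the idea for a 50Hz supply (10ms half-cycles); it assumes a linear phase-angle mapping, whereas a production dimmer might use a lookup table to linearise the delivered power:

```c
#include <stdint.h>

/* Half-cycle length in microseconds: 10 000 us for 50 Hz mains
   (use 8333 for 60 Hz). Assumption: 50 Hz supply, as in this build. */
#define HALF_CYCLE_US 10000UL

/* 100 % fires the triac immediately after the zero crossing;
   0 % never fires within the half-cycle. */
static uint32_t firing_delay_us(uint8_t percent)
{
    if (percent > 100)
        percent = 100;
    return (HALF_CYCLE_US * (100UL - percent)) / 100UL;
}
```

In the firmware this delay would be loaded into a timer from the zero-crossing interrupt, with the timer's compare match switching the triac gate on.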
It’s important to choose a triac that’s rated comfortably in excess of the current that you plan to conduct so that it can handle the inrush current surge that happens when the halogen oven first switches on. I rate this controller for an 8A load so I’ve chosen a BTA312-600B 12A triac from NXP to do the job. Note: as of 2017 this triac is obsolete; replace it with the directly equivalent ST Micro BTA12-600BWRG.
The TO-220 package can be cooled in accordance with the rating of the BTA312.
A triac of this size will typically come in a TO-220 package and it will get hot during use so an appropriately rated heatsink is absolutely required.
Fairchild MOC3020M in an on-trend white package
I use a MOC3020M triac optocoupler to control the triac’s gate and the driving circuit is taken from the MOC3020M datasheet with the RC snubber components removed. A snubber is used to prevent false triggering of the triac when the load being driven has an inductive component such as the fan on the halogen oven that I’m using. False triggering tends to happen during the 4th triac quadrant and can be eliminated by using a triac that only triggers in quadrants 1 to 3, such as the BTA312 that I’m using. Therefore I don’t need an RC snubber and have removed it from the circuit.
The MOC3020M requires between 30 and 60mA to switch on. This is too much to drive directly from the pin of an MCU so I’m using the 2N5551 transistor, Q1, to drive it. There’s nothing special about the 2N5551, I just happen to have some in stock. If you have another general purpose NPN knocking around then you can use that. Just be sure to choose a base resistor that fully switches the transistor on.
Triac thermal considerations
I mentioned before that a triac can get hot and we need to provide them with at least a heatsink and sometimes even a fan for active cooling. Heatsinks are rated in °C/W which tells you how many degrees they will rise above ambient per Watt of thermal energy that you ask them to dissipate.
NXP publish a helpful and well-written application note, AN10384, that walks you through the maths required to calculate the power dissipated by the triac:

P = V0 × IT(AVE) + RS × IT(RMS)²

P is the power dissipation in Watts and V0 is the triac’s knee voltage, given in the datasheet. IT(AVE) is the average load current and is calculated from the RMS load current using the following equation:

IT(AVE) = (2√2 / π) × IT(RMS) ≈ 0.9 × IT(RMS)

RS is the triac slope resistance and can be found in the datasheet. Referring to the BTA312 datasheet we find V0 = 1.164V and RS = 0.027Ω. I’ll use IT(RMS) of 8A as the worst case for this design. Using the second equation I calculate that IT(AVE) = 7.2A. Now I can use the power equation to determine that P = 10.1W. I’ll definitely need a heatsink.
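As a check, the two equations above can be coded up directly. triac_power() is my own name for this sketch and the constants come from the BTA312 figures quoted in the text:

```c
#include <math.h>

#define PI 3.14159265358979

/* Conduction loss per AN10384: P = V0*It(ave) + Rs*It(rms)^2,
   with It(ave) = (2*sqrt(2)/pi)*It(rms) for a full sine wave. */
static double triac_power(double v0, double rs, double i_rms)
{
    double i_ave = i_rms * 2.0 * sqrt(2.0) / PI; /* about 0.9 * Irms */
    return (v0 * i_ave) + (rs * i_rms * i_rms);
}
```

With v0 = 1.164, rs = 0.027 and i_rms = 8.0 this returns just over 10W, matching the figure above.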
Choosing a suitable heatsink
Tj is the triac’s junction temperature and it must be kept below the maximum value specified in the datasheet: 125°C for the BTA312 and in practice we need to stay well away from that value due to the error margin between the theoretical thermal calculations and real life.
AN10384 gives us the equation that we need to calculate the junction temperature from the power dissipation that we have just calculated:

Tj = Ta + (P × Rth j−a)

Ta is the ambient temperature (°C) and Rth j−a is the junction-to-ambient thermal resistance (°C/W). A package’s junction-to-ambient resistance is often quoted in the datasheet and can be used to determine whether you can operate your design without a heatsink. It is 60°C/W for the BTA312’s TO-220 package, which means that I must definitely use a heatsink.
Thermal resistance, like electrical resistance, can be broken down into several smaller resistances connected in series. The TO-220 package used by the BTA312 has the following resistance model:

Rth j−a = Rth j−mb + Rth mb−hs + Rth hs−a

See figure 3 in AN10384 for a pictorial representation of where the different components of the overall thermal resistance occur in the TO-220 package.
Rth j-mb is the thermal resistance of the junction to mounting base. The value is quoted in the datasheet as 2°C/W for a half-cycle.
Rth mb-hs is the resistance between the mounting base of the TO-220 and the heatsink. This will vary depending on whether the package is screwed or clipped to the heatsink and whether or not thermal grease is used. I am screwing the TO-220 to the heatsink and I will use thermal grease. AN10384 quotes a typical value of 0.5°C/W for this method.
Rth hs-a is the resistance between the heatsink and ambient and will be quoted by the manufacturer. I’ve now got enough numbers to work out the maximum thermal resistance that I can accept from my heatsink. If I assume that I don’t want the junction temperature any higher than 100°C then the maximum heatsink thermal resistance works out to be 5°C/W.
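Rearranging the thermal model gives the heatsink budget directly. max_heatsink_rth() is a name I have made up for this sketch:

```c
/* Maximum allowable heatsink-to-ambient thermal resistance, rearranged
   from Tj = Ta + P * (Rth j-mb + Rth mb-hs + Rth hs-a). */
static double max_heatsink_rth(double tj_max, double ta, double p,
                               double r_j_mb, double r_mb_hs)
{
    return ((tj_max - ta) / p) - r_j_mb - r_mb_hs;
}
```

With tj_max = 100, ta = 25, p = 10.1, r_j_mb = 2.0 and r_mb_hs = 0.5 this comes out at about 4.9°C/W, which rounds to the 5°C/W budget quoted above.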
I had a browse through the range of TO-220 heatsinks at Farnell and found that the FA-T220-38E offers a thermal resistance of 3.8°C/W in a nice compact unit.
Ohmite FA-T220-38E heatsink
Putting the rating of 3.8°C/W into the Tj equation yields a value of 88°C for an ambient temperature of 25°C. This value will rise by 1°C for every degree rise in ambient. I’m happy with that value and will definitely use this heatsink in my design.
The transformer
In the interests of safety I decided to use an isolation transformer to step down the mains voltage to a safe level for the low side of the board. Transformer-less designs for powering MCUs from the mains do exist and are cheaper to build but they are not as safe.
The Myrra 44193 steps down a 220-240VAC input to a 6VAC output with full isolation protection. The coils are encased in an insulating material and you can’t see it but there will be a piece of insulating material between the two coils to prevent accidental short circuit. Finally the whole unit is potted with some kind of resin. It’s a very heavy component and certainly feels well made.
The mains connector
The mains connector block is a 6-way 5.08mm connector designed to handle all the mains inputs and outputs. Two pins are provided for 240VAC live and neutral. Two similar pins are provided for the oven output and two more pins are provided for routing out to an external on-off switch.
I’ve chosen an appropriately rated connector block that you can get from Maplin (order code A18QN). You could use screwed down spade connectors instead if you’re unable to obtain a suitable connector in your country. Directly soldering mains wires to a PCB is generally not recommended because stress over time can result in a cracked joint.
I’ll be running a 3-core cable to the metal housing that I use to enclose the PCB. The earth wire will be screwed to the metal housing as will the metal mounting screws for the project PCB that will also connect to the ground plane on the low-side of the device.
There was no room to mount a fuse on the PCB so the IEC connector that I will mount to the chassis of the enclosure has a fuse holder built in to it.
The low-side of the board
Now we’re done with the mains stuff, let’s move on to the low-side of the board, starting with the bridge rectifier and zero-crossing detection circuits.
Bridge rectifier and zero-crossing detector
The output of the transformer is a sine wave, shown below:
Diodes D1 to D4 form a classic full wave bridge rectifier circuit used to turn the negative part of the wave into positive.
Full-wave rectified output
I used 1N4007 diodes for the rectifier which, with their 1kV rating are very much over-specified for this purpose. I used them simply because I had a lot of them in stock. You can just as easily use any of the others from the 1N400x range if that’s what you have available.
Capacitor C3 is used to smooth the rectified wave into something that we can sensibly use for input to the LDO regulators that will supply 5V and 3.3V to the rest of the board. Diode D6 shields the zero-crossing detection circuit from the effect of the smoothing capacitor.
Smoothed output
The scope’s voltage scale is 2V/div in the above screen shot so we can see that I’m getting around 6.5V which is enough to power the LD1117 5V regulator with its 1V (typical) drop-out. The capacitor C3 can’t completely smooth the output on its own which is why there’s an apparent sawtooth ripple to the output. Let’s have a look at that ripple magnified:
Magnified ripple
My scope shows that the peak-to-peak ripple voltage is about 1.2V, quite a lot really, and we’ll have to see how that affects the output from the regulators, particularly the 3.3V regulator that powers the thermocouple A-D converter.
R5, R6 and Q2 form the zero crossing circuit. What we aim to do is convert the rising and falling AC wave into a digital on-off signal that we can use as an input to an MCU pin. We can then use the MCU’s ability to raise an interrupt on the rising or falling edge of a signal to tell us when the zero crossing has happened.
Scope probe of the AC wave and the ZSENSE pin level
The AC wave is applied through a base resistor to the 2N5551 general purpose NPN transistor. When the wave rises past the base-emitter ‘turn-on’ voltage of the transistor it will conduct and the ZSENSE pin will read low, since the transistor is now allowing current to flow through it to ground. When the wave falls below the ‘turn-on’ voltage the transistor will stop conducting and the ZSENSE pin will read high. The falling edge of ZSENSE happens close enough to the zero crossing for me to use it as the signal to start my triac activation timing calculations.
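The behaviour of Q2 and the edge detection in the MCU can be modelled in a few lines. The 0.65V threshold here is an assumed typical base-emitter turn-on voltage for the 2N5551, not a measured figure:

```c
#include <stdbool.h>

/* Q2 conducts (pulling ZSENSE low) whenever the rectified wave is above
   its base-emitter turn-on voltage, assumed here to be about 0.65 V. */
static bool zsense_level(double rectified_volts)
{
    return rectified_volts < 0.65; /* ZSENSE is high only near the crossing */
}

/* The falling edge of ZSENSE marks the point just after the zero
   crossing where the wave has risen back past the threshold. */
static bool zsense_falling_edge(bool prev, bool now)
{
    return prev && !now;
}
```

On the AVR this edge would be caught with a pin-change or external interrupt rather than by polling.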
On/off switching for different percentages
This animated image shows the ACTIVE signal in relation to different dimming percentages from zero to 100 in steps of 10. For the 100% signal I just set the line high and for zero it is set low.
The JP1 jumper is my way of safely working on the low-side of this board without having to connect it to the mains. The normal case is for the jumpers to be fitted and the low-side is then powered by the mains. When I’m working on it I remove the two jumpers and connect my bench DC power supply to pins 1 and 3.
Voltage regulation
The roughly 6.3V output from the bridge-rectifier is fed to a pair of ST Micro LD1117 regulators in TO-220 packages. I would have liked to have had the whole board running at 3.3V but unfortunately the HC-06 bluetooth module has a bizarre power arrangement whereby the signal levels are specified to be at 3.3V but the adaptor board takes a 5V power input. That’s annoying but LD1117 voltage regulators are cheap and I’ve got the board space so it’s no problem to take two.
I haven’t measured the noise levels on the regulator outputs in isolation, only with the full board built. With no issues observed in the prototype, I can only assume that the outputs are stable enough for this design.
The thermocouple ADC
In my previous design I elected to use the older MAX6675 K-type thermocouple ADC. That chip is now obsolete, replaced by the pin-compatible MAX31855 that adds a few new features, the most notable of which is the ability to measure below zero. Firmware written for the MAX6675 will need to be re-written for the MAX31855 but the differences are minimal and it’s still the same old poll-over-SPI protocol.
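Decoding the 32-bit frame that the MAX31855 returns over SPI is straightforward. The sketch below follows the bit layout in the MAX31855 datasheet (hot-junction temperature in bits 31:18 as a signed 14-bit value in 0.25°C steps, fault flag in bit 16); the function names are my own:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hot-junction temperature in quarter-degrees C. The cast-then-shift
   sign-extends the 14-bit field; this relies on arithmetic right shift
   of negative values, which gcc and avr-gcc both provide. */
static int32_t max31855_temp_quarters(uint32_t frame)
{
    return (int32_t)frame >> 18;
}

/* Bit 16 is set when any of the open/short fault conditions is active. */
static bool max31855_fault(uint32_t frame)
{
    return (frame >> 16) & 1u;
}
```

So a reading of 100°C arrives as 400 quarter-degrees, and a below-zero reading, which the old MAX6675 could not report, comes back as a negative value.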
Bad Max 2
I am beginning to lose faith in Maxim’s ability to produce things that work. Readers of my previous article will remember that I managed to get hold of a bad MAX6675 that caused me to lose hours of my life trying to figure out what was wrong before figuring out that the IC itself was bad.
Well it happened again with the MAX31855. This time the symptoms were an incorrect scaling as the temperature rose. That is, it would appear to be correct at room temperature but would then lag as the temperature increased. For example, dunking a waterproofed thermocouple probe in boiling water would read around 75°C when the real temperature is obviously 100°C. Having been burned before by dodgy Maxim gear I googled around and stumbled across this article on the eevblog forum. Exactly the same symptoms as me and yes, I had a MAX31855 with one of the date codes identified as dodgy.
253AL date code
So if you’ve got a MAX31855K with this date code then don’t even bother trying to fit it. Toss it in the bin or return it to your supplier and get a replacement.
The replacement that arrived worked perfectly. No more dodgy temperature readings. After all these less than ideal experiences I’m tempted to drop the whole K-type thermocouple approach and use an RTD instead.
The two ferrite beads and the 10nF capacitor are there to clean the weak signal before it gets to the MAX31855. Maxim recommends the presence of the capacitor in their datasheet and experience reported on the internet is that you get severe noise issues if you choose not to fit it. The ferrites are my own addition and I can say that my readings are completely stable with no false reports at all.
The T-GND jumper is not fitted in my design. If fitted then it can be used to ground the negative wire of the thermocouple. The old MAX6675 required this but the MAX31855 does not. If you decide to fit a MAX6675 instead of the 31855 then you’ll need this jumper block and it will need to be fitted.
The bluetooth module
As mentioned previously I’ve decided to integrate with the very cheaply available HC-06 module that you can get from China for about £3. These boards only require power to be applied before they will automatically find and pair with any compatible nearby device. After they’re paired then you can send and receive data using an MCU’s UART at 9600 baud.
I’ve added a LED to my schematic that will light up when the device is paired and communication between the Android app and the firmware on the board is active.
The MCU
With the ‘easy build’ aim of this project in mind there was only ever going to be one range of MCUs in the frame for the controller and that would be Atmel’s 8-bit AVRs. They’re widely available, they come in DIP packages and thanks to the popularity of the Arduino platform there’s an army of hackers out there who know something about how to write firmware for them even if their choice of platform does mean that they may never get familiar with the instruction set or the IO registers.
The L is important – the regular ATmega8 will not work at 3.3V
I prototyped the firmware build with the ATmega328P MCU that you can find in the Arduino Uno and when I’d more-or-less finished the code size was 12 or 13K. After moving the entire implementation from an old-style .cpp/.h structure to inline header-only methods the gcc 4.9.2 optimiser was able to really grasp the nettle and bring the image down to about 7.5KB, and that’s with using expensive floating point operations for the PID algorithm. A round of applause from me goes to the authors of the gcc optimiser.
With only a little finishing off to do I knew I would be able to fit the whole thing into an ATmega8L. Some of the peripherals are lacking or are not as capable on the ATmega8L compared to the ATmega328P but not enough to cause me any issues.
I need to interface the MCU with some 3.3V peripherals so the actual MCU that I’ve chosen is the ATmega8L running at 3.3V and 8MHz. As you can see from the schematic I’ve selected an external crystal with 22pF load capacitors for the system clock source even though the ATmega8L has an internal oscillator that can run at 8MHz.
The reason for this choice is that I’ve read about considerable variation in the accuracy of the internal oscillator that could potentially affect the device’s ability to do reliable UART communications – a UART is an asynchronous peripheral that relies on each end to have accurate enough clocks to communicate. We also need it to accurately track time over a reflow program so I decided to play safe and install an external crystal.
The Nokia 5110 LCD
Obviously the main GUI for this project is your handheld Android device but I still decided to add a small and cheap LCD to the design to provide feedback on the current temperature and the bluetooth link status. With the addition of a rotary encoder and a button the entire process, including configuration and reflow, can be operated from this little LCD. This could be useful if you just want to do a quick reflow and don’t have your Android tablet or phone handy.
The photograph shows the actual LCD that I bought. I chose the red one because it matches the colour of the PCB that I’m using. Do note that the one that you use must have the pins in the same order on the breakout header as mine if you want it to press down directly into the board.
Another key point to note is that the backlight pin, labelled ‘LIGHT’ on the PCB must be the output from the backlight circuit and not the input. As you can see from the schematic I’m applying a PWM waveform to a transistor to set the backlight brightness in the assumption that the ‘LIGHT’ pin is an output to ground.
General purpose 2N5551 used for backlight PWM control
If you happen to get a board where the LED pin is a VCC input then my backlight circuit won’t work and you won’t have a backlight. That’s no big deal because these STN LCDs are readable in ambient light unlike the transmissive TFTs that must have a backlight to allow you to see anything at all.
The fan connector
The fan will be mounted close to the triac
The fan connector will accept any 3-pin computer fan that will run at 6.3V. Most of them will do and they’ll run nice and quiet as well. My heatsink calculations indicate that I probably won’t need a fan but those calculations assume that the heatsink is in free air. When it’s enclosed in a sealed box I will have to provide some form of ventilation and may have to fit a fan as well.
Bill of materials
Here’s a table that contains the bill of materials for this project. I got nearly all the parts from Farnell and have included their part number where available for easy reference.
These parts are for a 220-240VAC design. If your mains supply is 120VAC then you’ll need to calculate appropriate substitutes. If anyone does build a 120VAC design then I’ll be happy to publish the components that are required here.
PCB design
I decided to go for a two layer board that fits within a 10x10cm square. This means that you can get 10 copies of the board printed at one of the Chinese online services such as Elecrow, Seeed or ITead for about US$13. I’ve been using Elecrow recently and have been very happy with their service. I uploaded my gerbers to their site, paid the fee and went and did something else for 3 weeks. Time passes…
I laid out the PCB with the high voltage side of the board marked off at the top left. A warning legend in the silkscreen states the obvious about the high voltage present in that area. There is no absolute number for the width of a trace required to carry a given current across a PCB. Increasing the width of a trace lowers its electrical resistance and hence reduces both the power lost in the copper and the heat generated in dissipating that loss. If you approach the problem with power loss and heat in mind then you can calculate a trace width that will deliver those numbers for you.
No, I’m not going to subject you to the mathematics behind it all because there are on-line calculators out there that’ll let you experiment interactively with your numbers. I settled on a trace width of 140mil which for an outer layer on 1oz copper conducting 8A would result in a 20°C worst case temperature rise for a power loss of 500mW.
The only exception to this 140mil design rule is the middle pin of the TO-220 package that houses the triac’s MT2 terminal. There’s just no way around the fact that you have to neck down the trace quite drastically to get it through the middle pin.
I’ve tried to ameliorate this by mirroring the narrow part of that trace on the bottom layer of the board and joining it to the top layer with a pair of vias on each side.
There are standards for how close high voltage traces can get to other traces (creepage distance) that take into account the environment that the product will operate in (pollution level). You might be surprised at how low those minimums are; for this board 1.5mm would be enough. I leave much more spacing than that and I also provide routed cutouts in the places where the high voltage areas come closest to low voltage.
The bottom side of the board outside the high voltage area is filled and connected to the GND net. Likewise for the top layer except the fill is connected to the 3.3V net. Having GND and VCC as fills on alternate sides takes out the two largest nets from the routing problem and makes it easy to route the rest of the board.
The four corner mounting screw holes are part of the earthing strategy and will be screwed into the metal enclosure that I use to house this project. The three screws that are in the low voltage area also connect to the GND plane on the bottom.
Building the PCB
The only surface mount device on this PCB is the MAX31855 and it has a low number of generously spaced pins. I eschewed my hot air gun, previous reflow oven and hot plate in favour of a plain old iron and the tack-soldering method because I wanted to show how easy it is to assemble this PCB and you can see me soldering the MAX31855 in the video that accompanies this article. No laughing at the back please; it’s hard to solder from behind a video camera!
Of course the remainder of the through hole components are very easy to solder, if a little time consuming due to all the bending, threading and trimming of leads. There are a few tips I can give for building this board:
- Start with the lowest profile components and work up to the tallest. That way when you flip the board over to solder a component it should rest on itself which helps prevent it falling back through the board.
- The pin-spacing in the footprint for the 2N5551 transistors in the TO-92 package is quite narrow. To avoid solder bridges I recommend starting each pin with a clean iron and taking care to not over-apply solder.
- I use sockets for my ICs. You don’t have to but you’ll remove the possibility of causing damage through excess heat if you use sockets.
Important safety note
The mains traces on the top of the board may run directly under the metal of the Triac heatsink. It will depend on the type of heatsink that you have chosen to use and it definitely happens with the large Ohmite FA-T220-38E that I have selected. The solder mask is then the only thing preventing the heatsink becoming live and of course solder mask is far from being an appropriate insulator for mains voltages.
To avoid this issue, simply ensure that there is at least a couple of millimetres of air-gap between the bottom of the heatsink and the board when you solder the Triac into place. The large lugs on the bottom of the heatsink fit into the board holes provided for them and can be soldered into place to totally prevent any movement.
Alternatively, insulate the base of the heatsink with electrician's tape rated well in excess of the AC peak voltage, which is about 340V in the UK and 170V in the USA.
Here it is, fully built. There’s something deeply satisfying about completing a through-hole board. All the large components make it really look like you’ve actually built ‘a thing’. I know that technically you achieve a lot more for a given surface area with surface mount but the finished article does often look a bit like a few black plastic shavings with some capacitative dust sprinkled hither and thither. Old school wins in the looks department for me.
The firmware
The firmware for this device was compiled using avr-gcc 4.9.2 and can be cloned from github. You can compile it from source or you can flash it directly to the MCU if your mains frequency is 50Hz. I include a reference build for 50Hz mains supplies in the bin folder.
Flashing the program to the MCU
The firmware can be uploaded directly to the MCU using a USBASP 3.3v programmer connected to the P6 header using jumper wires. Avoid the older version of the programmer that only supports 5V programming. The newer versions have a jumper that allows you to select between 5V and 3.3V.
For safety please do not program the MCU while the board is connected to the mains. The jumpers must be removed from JP1 and a DC power source connected to pins 1 and 3. This power source connects directly into the 5V LD1117 regulator. If you have a bench power supply then 6.3V would be ideal. If you don’t then a wall-wart that can supply between 6.3 and 12V would work. Don’t attempt to use a 9V battery because they can’t supply enough current for long enough.
Jumpers on = transformed mains (as illustrated). Jumpers off = DC supply
Flashing is a two step process using the avrdude utility. Firstly you should program the MCU fuses to use an external crystal instead of the internal oscillator. The command for that is:
avrdude -c usbasp -p m8 -e -U lfuse:w:0xff:m -U hfuse:w:0xd9:m
You only ever need to program the fuses once. Secondly you should program the MCU, again using avrdude:
avrdude -c usbasp -p m8 -e -U flash:w:awreflow2.hex
That’s it. The board should now be up and running. You can of course re-run the programming command as many times as you need to.
Compiling from source
I use scons to build the firmware. To compile, cd into the atmega8l directory and use scons to build it:
$ scons
scons: Reading SConscript files ...

Usage: scons mains=<FREQUENCY> [upload]

  <FREQUENCY>: 50/60.
    50 = 50Hz mains frequency
    60 = 60Hz mains frequency

  [upload]: specify this option to automatically upload,
    using avrdude, to a USBASP connected MCU.

Examples:
  scons mains=50    // UK mains frequency
  scons mains=60    // USA mains frequency
A timer peripheral is used to work out where in the half-wave I need to turn the triac on and off. The duration of a half-cycle is different for people on 50Hz and 60Hz supplies therefore the build command needs to know which frequency you are on so that it can pass that information to the source code through a preprocessor definition.
The timing part of the controller firmware is probably the most interesting. Let’s take a look in detail at how it works. The one-paragraph summary of the flow of control is this. The zero-crossing event triggers an external interrupt in the MCU. The desired power percentage is used to calculate a timer comparator value and the timer is started. The comparator triggers an interrupt and the triac gate is activated inside the interrupt handler. The timer is bumped forward so that its counter will overflow in a number of ticks equal to the triac’s gate minimum latching pulse width. When the overflow interrupt is triggered the triac gate control is released and the timer stopped. The triac will remain latched until the next zero crossing and the whole thing starts again.
This sequence allows us to control the triac asynchronously and under interrupt control. The following diagram should serve to illustrate the explanation in the previous paragraph.
I’m using the TIMER2 peripheral on the ATmega8 to do the work. TIMER2 is an 8-bit up-counter with a comparator register. It can raise an interrupt when the comparator is matched and when the timer overflows, that is, when it ticks over from 255 back to zero. The timer clock can be sourced from the CPU clock and has a number of divider options so that you can have the resolution that you need.
In the zero crossing handler I start the timer like this:
// OCR2 is the comparator register. See main source code for
// the code that calculates the _counter variable.

OCR2=_counter;
TCNT2=0;

// start timer at 8MHz / 1024 = 128uS per tick

TCCR2=(1 << CS20) | (1 << CS21) | (1 << CS22);
The choice of 128µS per tick is made so that the timer doesn't overflow during the 50Hz/2 half wave. When TCNT2 reaches the _counter variable the comparator interrupt will trigger, and here's the handler:
/*
 * The comparator ISR handler
 */

inline void OvenControl::timerComparatorHandler() const {

  // activate the triac gate. we must hold it active for a minimum amount
  // of time before switching it off

  GpioActivateOven::set();

  // the overflow interrupt will fire when the minimum pulse width is reached

  TCNT2=256-TRIAC_PULSE_TICKS;
}
Now the timer is running again and will overflow after TRIAC_PULSE_TICKS clock ticks have elapsed. TRIAC_PULSE_TICKS is defined in the SConstruct build file as 4, which is 512µS and is enough to latch the BTA312 triac.
Finally, when the overflow does happen here's the handler:
/*
 * The overflow ISR handler
 */

inline void OvenControl::timerOverflowHandler() const {

  // turn off the oven

  GpioActivateOven::reset();

  // turn off the timer. the zero-crossing handler will restart it

  TCCR2=0;
}
A millisecond timer
I need a general purpose timer for checking timeouts and performing short delays so I use TIMER0 for that purpose. TIMER0 is the least capable of the three timers on the ATmega8L so I'm always pleased to find a use for it. Here's how I set it up:
TCCR0 |= (1 << CS01);     // CLK/8
TIMSK |= (1 << TOIE0);    // Timer0 overflow interrupt
CLK/8 gives me an 8MHz/8 = 1MHz 8-bit counter. It'll overflow every 256 ticks so the overflow interrupt will fire every 256µS. If I were to increment my millisecond counter every 4 overflows then I'd get a counter that ticks every 256*4 = 1024µS. That's no good: I need 1000µS for an accurate counter, which would mean an overflow interrupt every 250µS.
The solution is to add 6 to the TCNT0 register on each overflow interrupt, and this is possible because TCNT0 is read/write at all times. The overflow interrupt handler looks like this:
ISR(TIMER0_OVF_vect) {

  using namespace awreflow;

  MillisecondTimer::_subCounter++;
  if((MillisecondTimer::_subCounter & 0x3)==0)
    MillisecondTimer::_counter++;

  TCNT0+=6;
}
Communicating with the MAX31855
The MAX31855 is a SPI slave device implemented with 3 wires: MISO, CS and SCLK. The Nokia 5110 board on this controller is also an SPI slave device and the ATmega8L has only one SPI peripheral therefore I share the SPI bus by using different CS lines for each device. Here's the SPI pin initialisation code:
// SPI limiting values:
//   1. Max Nokia 5110 LCD clock = 4MHz
//   2. Max MAX31855 clock = 5MHz
// Selected clock is osc/2 = 4MHz, mode is 0

SPCR=(1 << MSTR) |               // master
     (1 << SPE) |                // enabled
     (0 << SPR1) | (0 << SPR0);  // (for clarity) base rate of fosc/4

SPSR |= (1 << SPI2X);            // double it to fosc/2

// clear the interrupt flag by reading these registers

uint8_t dummy __attribute__((unused))=SPSR;
dummy=SPDR;
Some of the firmware operates under asynchronous interrupt control and some of it works synchronously via a main loop. The SPI communication works in the main loop. I sample the temperature from the MAX31855 every 500ms like this:
inline bool TemperatureSensor::loop() {

  uint8_t i;
  uint32_t value;

  // if ready then process the raw response

  if(MillisecondTimer::hasTimedOut(_lastTemperatureTime,500)) {

    // bring CS low

    GpioSpiCs::reset();

    // clock 4 bytes from the SPI bus

    value=0;
    for(i=0;i<4;i++) {
      SPDR=0;                          // start "transmitting" (actually just clocking)
      while((SPSR & (1<<SPIF))==0);    // wait until transfer ends
      value<<=8;                       // make space for the byte
      value|=SPDR;                     // merge in the new byte
    }

    // restore CS high

    GpioSpiCs::set();                  // deselect the slave

    // parse out the value/code from the packed response

    processResponse(value);
    _lastTemperatureTime=MillisecondTimer::millis();
    return true;
  }
  return false;
}
The 32-bit response from the SPI bus contains the temperature and some status information about the state of the thermocouple. The MAX31855 is able to detect states such as open circuit and shorts to GND and VCC. Here's my conversion code. I take only the integer part of the temperature, discarding the fractional part.
inline void TemperatureSensor::processResponse(uint32_t value) {

  // we need a sane compiler

  static_assert(sizeof(Response)==3,"Internal compiler fail: sizeof(Response)!=3");

  if(value & 1)
    _lastTemperature.status=Status::OC_FAULT;
  else if(value & 2)
    _lastTemperature.status=Status::SCG_FAULT;
  else if(value & 4)
    _lastTemperature.status=Status::SCV_FAULT;
  else {

    // if negative value, drop the lower 18 bits and extend the sign bits

    if(value & 0x80000000)
      value=0xFFFFC000 | ((value >> 18) & 0x00003FFF);
    else
      value >>= 18;

    // resolution is 0.25C, divide by 4 to get the integer value

    _lastTemperature.celsius=value/4;
    _lastTemperature.status=Status::OK;
  }
}
Communicating with the HC-06 Bluetooth module
Communicating with the bluetooth module could not be simpler. The module implements a 9600 baud UART port. You don't have to worry about pairing because it'll do that automatically when it detects a compatible master device and since I'm a slave device I only need to respond to commands that I receive from the master, and receiving commands implies that I'm paired. A flashing red LED on the HC-06 board goes solid red when pairing is successful.
Setting up the USART peripheral in the ATMega8L for 9600 baud communications is straightforward:
// 9600 baud: UBRR = (8000000 / (16 * 9600)) - 1 = 51

UBRRH=0;
UBRRL=51;

UCSRA=0;
UCSRB=(1 << RXEN) | (1 << TXEN);                   // enable RX, TX
UCSRC=(1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);  // 8-N-1
I set out to prove this as a synchronous implementation with the intention of moving it to an asynchronous interrupt-driven implementation if the polling frequency of the main loop caused problems with the protocol. There were no such problems; it seems that my main loop is plenty fast enough so I left it as synchronous.
Checking to see if a byte has arrived in the peripheral's data register and then reading it out is a simple task:
// is something ready?

if((UCSRA & (1 << RXC))==0)
  return Command::NONE;

// move data into storage

_commandData[_commandPos]=UDR;
Likewise, when you've got some data to send it's just as simple. You just need to wait for a flag to tell you that the data register is clear then you can write to it:
for(i=0;i<5+1+dataSize;i++) {
  while(!(UCSRA & (1 << UDRE)));
  UDR=_commandData[i];
}
There are many other reusable classes in the firmware. For example there are minimal size implementations of a rotary encoder and a debounced button. There are also very small and efficient GPIO input/output classes implemented in assembly language. You can browse the firmware here on Github.
The Android App
When designing the Android app I had the aim of it working on resolutions down to 800x480 and it should work on tablets and phones alike. 800x480 was chosen as the most common resolution available today on very cheap tablets that you can pick up on ebay. Modern brand-name phones and tablets have much, much higher resolutions. My Google Nexus 10, for example, has a resolution of 2560x1600 - the same as a high-end 30" desktop monitor.
Android programming is straightforward stuff. The need to run on low-end devices means that the APIs are necessarily quite simple. If you've ever programmed C#/WPF then you'll feel at home with the XML/code combination used to create designs and implement code-behind and there are none of the complex features such as property paths and routed events that raise the learning curve for a C#/WPF beginner.
I just can't abide small monitors! (click for larger)
The Android Studio development environment is based on the IntelliJ platform which is pretty good and definitely fit for purpose. It's not as slick and professional as Visual Studio and the Android build/debug process feels slow and cobbled together from command-line tools running behind the scenes but it does work and I've never had to break out of the development flow to go fix some plumbing manually in a config file somewhere. Google have done well to make all that stuff hang together.
The main screen of the app lists a number of options. You can select between the two reflow profiles: leaded and unleaded. You can start the reflow session. You can see the current temperature of the oven and you can change any of the parameters that you can change using the rotary encoder input on the controller itself.
The app will automatically try to connect to the controller when it starts up and will keep trying until it succeeds. Note that communications will continue even after the user navigates away from the app. This is so that when the wife phones up to tell you to record Coronation Street while you're in the middle of a reflow session it doesn't get cancelled or suspended.
The downside of this is that you must manually close the application when you're finished to preserve your battery life. All recent android versions have a button that will show you a list of running apps. Sometimes it's a rotating list and sometimes a left-to-right list. You can use this list to close my app and save battery life.
On this android version you long-press an app icon to close it
Moving on from the main screen to the reflow screen (android calls these screens 'activities') we see a reflow profile chart waiting to start. When you're ready to go you can hit the 'Go' button and the process will start.
A cool glowing green blob will track the progress of the reflow session and you can see this in action in the video that I've linked later in this article. You can abort it at any time by clicking the 'Stop' button at the top right. I've provided an 'Exit' button to go back to the main screen but you can just as well use Android's built-in 'Back' button.
Credit where credit's due, the icons used in my app are from the icons8 site.
Watch the video
I've uploaded a video to YouTube that shows me going through the build process and then doing a test run at the end. It came out surprisingly long at just under an hour so if you don't mind watching me waffling on for ages then click on the link below to watch it.
Build your own
If you're interested in building one of these boards then here's a summary of the steps you need to take.
- Download the gerbers from my downloads page and send off to Elecrow, ITead or Seeed to get them manufactured.
- Order the parts from the bill of materials from Farnell or your favourite local supplier. Build the board. The video might help you with this.
- Clone the firmware from Github.
- Upload awreflow2.hex to the ATMega8L using a USBASP 3.3V programmer connected to the board pins while powering it with 6.3V from a bench power supply (see Flashing the program to the MCU in the main article).
- Enable application side-loading on your Android device then install awreflow2.apk from the bin/firmware directory. Update! You can now get the app direct from Google Play by searching the Play Store for Andy's Workshop.
- Switch the controller on first, then the Android device. Pair them. The HC-06 pairing code is probably 1234. Now start the app and wait for the connection. It will take a few seconds.
Enjoy your reflow controller and remember, always treat the mains supply as potentially deadly. Never work on or even touch any part of the board while the mains supply is connected and never work on a mains project unless you're able to give it your full and undivided attention at all times.
That's all for now folks. I hope you've enjoyed reading this article and if you've got any comments then please feel free to leave them below. Questions and discussions can be posted over in the forum where I'm looking forward to hearing your thoughts.
Update 1: improving the phase angle timing
A reader contacted me to suggest that the mapping of the output from the PID algorithm (a percentage) could be improved by considering a non-linear mapping to the mains phase angle. Currently my algorithm linearly maps the desired power percentage to a time along the x-axis of the half-wave. This does not take into account the rising and falling of the waveform — particularly at the start and end of the phase.
So I started thinking. It seems logical that the actual output power at a given phase angle is going to be proportional not to the linear distance along the axis but to the area under the curve. The area under the curve is the definite integral from zero to the phase angle. I could pre-calculate a mapping table of desired output power percentage to a linear percentage phase time duration and store it in flash at a cost of just 98 bytes.
The area under the curve is an easy integration formula
I fired up Excel and got calculating. You can see the results in the spreadsheet in the doc directory on Github. Beware that this spreadsheet contains array formulae that Excel appears to be spectacularly bad at recomputing. Be ready for a few minutes of delay when you open the worksheet.
Calculating a lookup table with Excel
The output from the sheet is a mapping of the desired output power percentage to a linear integer percentage that has the least error from the ideal percentage.
The implementation in OvenControl.cpp and OvenControl.h is trivial and I moved quickly to running a test cycle.
Test run results
The improvements are noticeable in the way that the oven applies power at the low and high ends of the output but I clearly have more work to do to eliminate the oscillation that I get as the algorithm struggles to accurately follow the curve. I intend to improve the insulation, tweak the algorithm parameters and increase the temperature sampling rate in the firmware and the app. More updates to follow.
Update 2: fixing bugs, addressing heat loss
I think I've finally got it where I want it now. There were a number of issues that I addressed to get to this end state.
Firstly there was a bug in the android app. It was supposed to sample the temperature at 0.33Hz on the options screen and 1Hz during a reflow job. The bug meant it always sampled at 0.33Hz. I've changed the logic to 0.5Hz for the options screen and 4Hz for the reflow job. The source and compiled binaries on github have been updated.
Secondly I've insulated the top of my oven with foil. I can't overstate the difference that this has made. Heat loss through the glass top was the single biggest issue.
Fully insulated oven (with small observation gap in lid)
Thirdly I've changed my PID parameters to 10/1/0. A large proportional component makes the oven react quickly when needed. I don't need a derivative component.
Test run
The difference is amazing — particularly the insulation. The oven never has to apply 100% power except for a second or two during the ramp-up period. The rest of the time it just gently applies small nudges to the temperature. Ignore the steep ragged falls at the end — that's me lifting the lid to cool it down because the oven has no means to actively lose heat.
You can see the effect of all these improvements in this followup video:
Update 3: Bug fixes and ATMega8L firmware improvements
Version 2 of the app is now available from the Google Play Store and fixes a few minor issues. I'd advise everyone to upgrade to this new version. Details of the bugs that were fixed are in the Play Store release notes.
There's also a new version of the ATMega8L firmware available now in github. This release fixes bugs in the manual reflow option and adds a new feature that continually shows the current oven duty cycle while the android device is connected.
Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
This is an interesting case. I was not aware there were situations where the compiler would allow you to declare a local instance without either initializing it or calling a constructor explicitly. Back in my C++ days, I recall many a time when local instances would be declared using a constructorless notation:
S s1; // default constructor automatically called
I always found this notation confusing - particularly since the following is also legal, but much more obvious:
S s1 = S(); // also legal but clearer
@Eric:
When compiling the version which is in the same assembly, does it complain regardless of the contents of Frob(). Or does it actually check whether Frob() references fields, and complain about those specifically?
To be more precise, say I have Frob implemented thus:
void Frob() { handle = ...; }
Would it still complain about the lack of definite initialization? If not, and if I call Frob on uninitialized handle, will it recognize s.handle as initialized after Frob returns for the purposes of definite initialization? Or does it generally assume that any method call on a struct will access all its fields?
(I kinda have to ask because the behavior that you describe only makes some sense to me if compiler actually does cross-method boundary analysis for definite assignment for methods for which it has the source code).
@Leo: your second C++ snippet requires a copy constructor to exist (even if it's optimized away, which it legally can be), while the first one does not, so there is a difference. I don't really have a problem with the parentheses-less syntax as such; the real evilness of C++ is the distinction between POD and non-POD, and initialization thereof.
We do not do cross-method analysis. Basically, it's analyzed as though the code
h1.Frob()
was actually written as
MyHandle.Frob(ref h1);
Since we are passing a variable as "ref", it must be definitely assigned before the method call. We check definite assignment on structs by verifying that all known fields of the struct are fully assigned. In the "import from metadata" case, we don't even load information about private fields off of the disk, so as far as we know, its fully assigned. But in source code, we have all the private field info available in memory already, and we use it. -- Eric
Is this behavior in line with the specification?
Good question. The spec says "A struct-type variable is considered definitely assigned if each of its instance variables is considered definitely assigned." Notice that it does not say "each of its accessible instance variables'. So technically, I suppose this is a violation of the specification. -- Eric
Pavel does make a point though.
Suppose I have this silly mutable value type:
public struct Point {
public int X { get; set; }
public int Y { get; set; }
public Color Color { get; set; }
public void GetColorFromWhatever(Whatever) {
Color = ...;
}
}
and I call
Point p;
p.X = x;
p.Y = y;
p.GetColorFromWhatever(whatever);
Obviously (obvious to the programmer, not to the compiler), this is what I meant to do albeit a bit awkward. I don't really want to assign color since it's a calculated value. But I have to either do
Point p = new Point();
or
p.Color = dummy;
I was willing to accept that - I prefer calling the constructor anyway. But now you tell me that if Color was a struct in a different assembly it would work - now that suddenly doesn't make sense.
[quote]Every struct has a default constructor which initializes its fields to their default values[/quote]
This is the real problem IMHO. As far as I'm concerned, there should never be an implicit parameter-less (default) constructor - if the dev wanted the object to be called with a parameter-less constructor, it would be in the definition. I've been bitten by this back in my C++ days (though for the life of me I don't remember the context, only that it was a PITA to track down), and hoped that C# was immune to it.
I'm sure there is some reason for it, but does that reason outweigh the problems?
An important case to consider in this discussion is this one:
S[] sArray = new S[1000000];
If you require me to call S's constructor, how will you make sure I call it for all the values of this array?
In the case of arrays of classes, it's simple, because they're initialized to null, which means to class exists yet. But in the case of a value type, as soon as you allocate space, you have a struct.
Neil: the only way to assure that all elements of an array get their constructors called is to have the compiler generate loops for each array to call the default constructor on each element. This would mean that every array element would get initialized at least twice -- first to all-zeroes by the CLR, then by the default constructor, then possibly by the actual intended value for each element. That should pretty much answer Darrell's question.
Please clarify what you mean by referenced assembly. Here is what I did and I got the error "Use of unassigned local variable..."
1) Created a console application project called "Assembly1". Created the public struct MyHandle {...}
2) Created a second console application project called "Client". Referenced "Assembly1". In Main() I wrote
MyHandle h1 = new MyHandle();
h1.Frob();
//I got the error "Use of unassigned local variable..."
I am using C# 3.0.
That's weird. There's no unassigned local at all in that scenario. -- Eric
Looks the same on C# 3.5, I also created 2 console applications in seperate solutions. In solution Test1 I put
namespace Test1{ public struct Class1 { private int lala; public Class1(int lala) { this.lala = lala; } public void Foo() { } }}
Then I copy the Exe someplace else and reference that (so the Exe, not the project) in solution Test2:
Test1.Class1 c;c.Foo(); // Use of unassigned local variable
I'm probably missing something obvious here, but I can't figure out what.
Holy goodness, this just gets weirder. I'll update the entry. -- Eric
Eric, in light of your update, does this:
> Unfortunately, though I think I’ve argued that this behaviour is plausibly undesirable, we’re stuck with it.
still apply, or is it now officially in the defect-to-be-fixed land?. -- Eric
So the problem is just that private reference types in members of structs from other assemblies might be usable without the other assembly's author explicitly setting them to null? I think I can live with that.
I'm less familiar with the CLR spec than the C# spec. C#'s definite assignment rules make any implementation detail moot, but in the CLR, are we _guaranteed_ that value type local variables will in fact be initialized to zeroes? Or is that simply a coincidental fact of the current implementation?
I realize that, just as this bug won't be fixed for fear of breaking a bunch of existing buggy BCL code, there may be an implicit requirement at this point that stack variables be initialized to zeroes. But does the spec actually require that?
See Partition III section 3.4.7, "localloc". I quote: "If the localsinit flag on the method is true, the block of memory returned is initialized to 0; otherwise, the initial value of that block of memory is unspecified." -- Eric
Thanks for the pointer. So, the way I read that as it relates to my question is that the answer is "no, there is no guarantee locals will be initialized to zeros". That is, the C# specification doesn't, as near as I can tell, require that the run-time option "localsinit" be used (*), and so whether they are or not is entirely up to the specific implementation.
IMHO, that could in fact be an argument for going ahead and fixing this bug, in spite of the effort that would be required in the BCL to fix up all the uninitialized variables. After all, _some_ of those uninitialized variables could in fact be bugs themselves, even if not all are. What better way to improve the overall quality of the BCL implementation than to take advantage of a correct implementation of the definite assignment rules?
(*) Indeed, the C# specification seems to mainly be agnostic about a specific run-time implementation; the .NET Framework is assumed and of course certain primitives are explicitly used (System.Int32, System.Boolean, System.Threading.Monitor, etc.), but otherwise it seems that there's no specific target language for the compilation output, never mind use of specific features of the target language.
The C# compiler is responsible for implementing the semantics of the C# language, and the semantics of the language are such that it *should* be impossible for the runtime's initialization implementation details of locals to be unobservable (modulo of course using debuggers to peer into variables before they are assigned to). Now, I think it is plausibly argued that what we have here is a bug; with this, one can observe the fact that our implementation of the compiler does in fact ensure that fields of struct locals are initialized to zeroes. But it would be rather strange for the spec to say "a correct implementation must ensure that local initialization details are non-observable, but an incorrect implementation is required to ensure that they're initialized to zeroes". The spec doesn't say anything at all about what an incorrect implementation is required to do. -- Eric
>.
By "fixing" this I rather meant switching to "guaranteed null initialization for structs" (or any other intermediate semantics that would ensure consistent behavior between structs in current project and ones from referenced assemblies). I understand that a "proper fix" in the spirit of language spec would be out of question for compat reasons.
@pete: there's one other bit of Ecma-335 CLI spec that is relevant here to understand why assemblies produced by VC# always specify "localsinit" (Partition III, 1.8.1.1 "Verification"):
"Verification assumes that the CLI zeroes all memory other than the evaluation stack before it is made visible to programs. A conforming implementation of the CLI shall provide this observable behavior. Furthermore, verifiable methods shall have the localsinit bit set, see Partition II (Flags for Method Headers). If this bit is not
set, then a CLI might throw a Verification exception at any point where a local variable is accessed, and where the assembly containing that method has not been granted SecurityPermission.SkipVerification." | http://blogs.msdn.com/b/ericlippert/archive/2010/01/18/a-definite-assignment-anomaly.aspx | CC-MAIN-2015-06 | refinedweb | 1,788 | 52.6 |
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also | Notes
#include <sys/wait.h> #include <sys/time.h> #include <sys/resource.h> pid_t wait3(int *statusp, int options, struct rusage *rusage);
pid_t wait4(pid_t pid, int *statusp, int options, struct rusage *rusage);
The wait3() function delays its caller until a signal is received or one of its child processes terminates or stops due to tracing. If any child process has died or stopped due to tracing and this has not already been reported, return is immediate, returning the process ID and status of one of those children. If that child process has died, it is.h.
The status of any child processes that are stopped, and whose status has not yet been reported since they stopped, are also reported to the requesting process.
If rusage is not a null pointer, a summary of the resources used by the terminated process and all its children is returned. Only the user time used and the system time used are currently available. They are returned in the ru is returned and errno is set to EINTR. If WNOHANG was set in options, it has at least one child process specified by pid for which status is not available, and status is not available for any process specified by pid, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.
The wait3() and wait4() functions return 0 if WNOHANG is specified and there are no stopped or exited children, and return the process ID of the child process if they return due to a stopped or terminated child process. Otherwise, they return -1 and set errno to indicate the error. | Attributes | See Also | Notes | http://docs.oracle.com/cd/E19082-01/819-2243/6n4i099rp/index.html | CC-MAIN-2015-22 | refinedweb | 285 | 69.72 |
I want a camera that wont move unless my character is at the edge of the screen. I was thinking of using a trigger attached to my camera but I can't figure out how to use them.
My code for character movement
using UnityEngine;
using System.Collections;
public class CharacterMovement : MonoBehaviour {
private Rigidbody2D rb;
public float moveSpeed;
// Use this for initialization
void Start () {
rb = GetComponent<Rigidbody2D>();
}
// Update is called once per frame
void Update () {
float moveHorizontal = Input.GetAxis ("Horizontal");
float moveVertical = Input.GetAxis ("Vertical");
Vector2 movement = new Vector2 (moveHorizontal, moveVertical);
rb.MovePosition(rb.position + movement * moveSpeed);
}
}
Answer by hexagonius
·
Jan 03, 2017 at 10:39 AM
I think one trigger is enough. If you put it to the camera but at z position of the character it should be triggered. you can make it smaller than the screen, on every side the character should be able to transition. If the character exits the trigger on any of those sides you should freeze movement end transition the camera. also sweep the character so he's on the other side of the screen now. there would be 2 states in which transitions can be possible: - character leaves the trigger. - character moves in direction from where the transition came after a.
Camera movement with arrow keys?
1
Answer
Clamping when Zoom changes?
0
Answers
Switching Cameras, Movement not working.
1
Answer
Camera is set ok but when i press play it faces down
1
Answer
help with switching camera
0
Answers | https://answers.unity.com/questions/1293415/a-camera-in-the-style-of-megamandouble-dragon.html?sort=oldest | CC-MAIN-2019-43 | refinedweb | 249 | 57.16 |
.
Let’s start with an example before looking at the maths! Imagine that we are at a supermarket and we are looking at the number of people that queue up at the till. The number of people in queue is the state of our system. There can be 0 person in the queue, or 1, or 2, … or 10 … or 20 …
The system can change from one state to the other depending if a person arrived in the queue or someone finished checkout and left the queue, or nothing changed.
We can model this system like so:
If we generalise this example we can say that a Markov chain is composed of a set of states \(S={s_1,s_2,…,s_m}\) and at every time step the system can move from one state to another according to a transition model \(T\). Therefore a Markov chain is defined by:
- A set of states \(S={s_1,s_2,…,s_m}\)
- An initial state \(s_0\)
- A transition model \(T(s,s’)\)
A Markov chain observe a very important property: The next state depends only of the current state. That means that the next state doesn’t depend on the past. It doesn’t depend on how many people were in the queue 2 or 3 or 10 times before. The only thing that matters is the current number of people in the queue.
This is expressed by the transition model \(T(s,s’)\) which means that the next state \(s’\) depends only of the current state \(s\).
This is a very important property which means that if you want to model your process as a Markov chain you need to define your state very carefully as it should include everything you need to predict the next state.
Let’s take another example and see how we can model it as a Markov chain. I go to the gym a few times a week and I usually do 2 kind of sessions:
- Cardio workout
- Strength workout
In this case our model has 3 states:
- Cardio workout
- Strength workout
- Rest day
How de we move from one state to another? Well if I did a strength session I’ll be pretty exhausted so I am most likely to have a rest day the day after. If I had a rest day I can a cardio or a strength workout, or being lazy and take another rest day … nothing is certain it’s all a matter of probability, which gives us this kind of diagram
Now that we know how to transition from one state to the other we can answer interesting questions like:
- Given that I had a rest day how many chances are there that I’ll train tomorrow ?
- Given that I had a cardio workout today what will I do in 2 days time (or 3 or 10 or 100) ?
- Given that I had a strength workout today what is the probabilities than do a strength workout in 2 days (or 10 or 100) ?
Knowing the probabilities for the next day is straightforward but computing the probabilities over several steps is more interesting as several paths are possible:
We can then sum up the probabilities of all the possible paths to answer the question: “Given that today is a rest day, what is the probability that I’ll rest again in 2 days time?”.
As you can see it gets tedious pretty quickly. In fact to compute this kind of probabilities we define the transition model as a \(m * m\) matrix where \(T_{ij}\) is the probability to move from state \(s_i\) to state \(s_j\). We can then compute the probabilities in \(n\) steps by computing \(T^n\).
If we do it we numpy it looks like this:
import numpy as np T = np.array([ [0.4, 0.3, 0.3], [0.5, 0.2, 0.3], [0.7, 0.2, 0.1] ]) T_2 = np.linalg.matrix_power(T, 2) T_3 = np.linalg.matrix_power(T, 3) T_10 = np.linalg.matrix_power(T, 10) T_50 = np.linalg.matrix_power(T, 50) T_100 = np.linalg.matrix_power(T, 100) // start in state "Rest" v = np.array([[1.0, 0.0, 0.0]]) print(" v_1: " + str(np.dot(v,T))) print(" v_2: " + str(np.dot(v,T_2))) print(" v_3: " + str(np.dot(v,T_3))) print(" v_10: " + str(np.dot(v,T_10))) print(" v_50: " + str(np.dot(v,T_50))) print("v_100: " + str(np.dot(v,T_100)))
and it produces the following output:
v_1: [[ 0.4 0.3 0.3]] v_2: [[ 0.52 0.24 0.24]] v_3: [[ 0.496 0.252 0.252]] v_10: [[ 0.50000005 0.24999997 0.24999997]] v_50: [[ 0.5 0.25 0.25]] v_100: [[ 0.5 0.25 0.25]]
In this case the probabilities converges to a steady state (only the probabilities converge not the states which keep changing randomly) but it’s not always the case depending on the structure of the chain.
In fact the structure of the chain demonstrates interesting properties. By looking only at the structure of the graph we can tell if the probabilities will converge or not and if the initial state matters or not.
To understand if the initial state matters or not we need to define 2 kind of states:
- Recurrent state
- Transient state
A recurrent state is a state for which whatever the transitions you make there is always a path to go back to that initial state.
A transient state is a state that is not recurrent – there exists a path for which it’s not possible to go back to the initial state.
Then if we group all the connected recurrent state into groups we can say that the initial state doesn’t matter as long as there is only 1 recurrent group in the chain (no matter where you start from you’ll always end up into a state within this group). If there is more than 1 recurrent group the initial state matters.
Then the convergence property is a bit trickier to observe because the system may oscillate between 2 (or more) sets of states. In this case the system is said to be periodic. A system is periodic if it exists some set of states (not necessarily connected) for which you always from one set to another (there is no way to stay within the same set).
It means that as long as a state is connected to itself the chain is not periodic (because it exists a transition that stays in the same set of states).
Hopefully this post gave you a good feeling of what a Markov chain is. I didn’t really got into the mathematics involved beyond this but now that you have a good intuition of how it works it should be less painful.
The MIT provides very detailed courses on Markov chains:
Markov chain are used in wide variety of domains – basically everywhere you need planning – from computer science (buffer / queue) to HR to management … | http://www.beyondthelines.net/machine-learning/markov-chain/ | CC-MAIN-2017-26 | refinedweb | 1,156 | 68.5 |
Backpack
This is the launch page for Backpack, actively maintained by Edward (as of Nov 2017).
Backpack is a system for retrofitting Haskell with an applicative, mix-in module system. It has been implemented in GHC 8.2 and cabal-install 2.0, but it is not supported by Stack.
The documentation for how to use Backpack is a bit scattered about at this point, but here are useful, up-to-date (as of 2017-04-02, prior to GHC 8.2's release) references:
- This pair of blog posts: Try Backpack, ghc --backpack and Cabal packages have up-to-date tutorials for using the main features of Backpack, with and without Cabal.
- The GHC manual section on module signatures gives the gory details about how Backpack's signature files (hsig) work. A more user-friendly version of this can be found on Haskell wiki "Module signature"
- There is not yet a manual entry in Cabal for how Cabal works. This section is under development.
- Edward Z. Yang's thesis contains detailed information about the specification and implementation of Backpack. We also have an older paper draft which was submitted to ICFP'16. History nuts can also read the original POPL paper but note that Backpack has changed dramatically since then.
- Hackage does not yet support uploads of Backpack-using packages. next.hackage is a Hackage instances running a development branch of Hackage that can handle Backpack; for now, Backpack-enabled packages should be uploaded here.
You might find it useful to find some code using Backpack. Here are the biggest examples worth looking at:
- backpack-str defines a signature and implementations for strings. It is quite comprehensive, and the packages are available on next.hackage.
- coda parametrizes over a few core data types including notions of "delta". It takes advantage of zero-cost abstraction, which lets it split into multiple data types, while still ensuring they are UNPACKed in the end.
- streamy defines a signature and implementations for "streaming" libraries (e.g., conduit, pipes and streaming).
- haskell-opentracing defines a signature for the OpenTracing standard, a middleware built on top of this signature, and (at the moment) a single backend to Jaeger.
- reflex-backpack is a kind of crazy experiment at Backpack'ing Reflex. Reflex uses a lot of advanced GHC features and it took some coaxing to get Backpack to handle it all, but handle it all it did!
Some more out-of-date documents:
- Backpack specification. This was subsumed by my thesis but once Backpack stabilizes it will be worth distilling the thesis PDF back into a more web-friendly format.
Known gotchas
Can I use this with Stack? No, Backpack requires support from the package manager, and Stack integration has not been implemented yet.
Can I use this with Template Haskell? Yes, but you need GHC 8.2.2; GHC 8.2.1 has a critical bug which means that most real-world uses of TH will not work. See this issue and this issue for two examples of this occurring in the wild.
Can I use this with the C preprocessor? No, this is buggy (you'll get an error
<command line>: unknown package: hole). See #14525 for the issue and a patch.
Make sure cabal-version is recent enough. (#4448) If you set the
cabal-version of your package too low, you may get this error:
Error: Mix-in refers to non-existent package 'pkg' (did you forget to add the package to build-depends?) In the stanza 'executable myexe' In the inplace package 'pkg'
This is because internal libraries are feature-gated by the
cabal-version of your package. Setting it to
cabal-version: >= 2.0 is enough to resolve the problem.
You can't instantiate a dependency with a locally defined module. Consider the following package:
library other-modules: StrImpl build-depends: foo-indef mixins: foo-indef requires (Str as StrImpl)
This looks like it should work, but actually it will fail:
Error: Non-library component has unfilled requirements: StrImpl In the stanza 'library' In the inplace package 'mypkg-1.2'
The reason for this is Backpack does not (currently) support instantiating a package with a locally defined module: since the module can turn around and *import* the mixed in
foo-indef, which would result in mutual recursion (not presently supported.)
To solve this problem, just create a new library to define the implementing module. This library can be in the same package using the convenience library mechanism:
library str-impl exposed-modules: StrImpl library build-depends: str-impl, foo-indef mixins: foo-indef requires (Str as StrImpl)
How can I declare that a module implements a signature? In traditional Haskell style, you can write
x :: Type to specify that a value
x should have some type. In Backpack, specifying that a module implements a signature is done out-of-line; you must create a third component to link them together (e.g., a test suite):
test-suite implements type: exitcode-stdio-1.0 main-is: Main.hs build-depends: base, foo-implementation, foo-sig default-language: Haskell2010
A few notes about this encoding:
- If you need to specify that a module implements multiple signatures, you include all of those signatures in the same implements test or create separate implements tests.
- Being a test suite, this requires you to create a dummy
Main.hsfile (
main = return ()is sufficient) and add a
basedependency. So why do we pick a test suite? A test suite will ensure that you have in fact filled all of the holes of a
foo-sig, whereas a regular library will happily pass through any unfilled holes, making it easy for you to think that a check has occurred when it has not.
- You might wonder if you can skip defining an extra test-suite by mixing in the signature package from the implementation package. Unfortunately, this runs afoul the "you can't instantiate a dependency with a local module" restriction. Additionally, this adds an extra spurious dependency to your package which is not actually needed.
Backpack-related tickets
Backpack-related tickets are marked with keyword 'backpack'. If the ticket is assigned to ezyang, it means he's planning on working on it. | https://ghc.haskell.org/trac/ghc/wiki/Backpack | CC-MAIN-2018-22 | refinedweb | 1,034 | 55.54 |
Search: Search took 0.04 seconds.
[FIXED] Wrong $fieldset-header-font variable in FieldSet.scss
- Last Post By:
- Last Post: 21 Apr 2014 2:51 PM
- by ahmedmohammed
PullRefresh working but not shown - ST2.2.1
List of ui properties available for styling?Started by Superflippy, 5 Jun 2013 7:11 AM
- Last Post By:
- Last Post: 7 Jun 2013 6:40 PM
- by Superflippy
[INFOREQ] 2.2.0 CSS files are compiled with errorsStarted by MikeSidorov, 15 Apr 2013 11:44 PM
SCSS for classes with custom namespace
Cannot Style CSS "SCSS" With Architect
- Last Post By:
- Last Post: 28 Mar 2013 10:03 AM
- by honestbleeps
Theming in SASS
- Last Post By:
- Last Post: 13 Feb 2013 2:59 PM
- by rogerfulton
Use of !important to override inline style
- Last Post By:
- Last Post: 16 Jan 2013 7:55 PM
- by mitchellsimoens
A question about theming
- Last Post By:
- Last Post: 11 Dec 2012 1:12 PM
- by mitchellsimoens
theming - scss - ext-grid-ui ?
Results 1 to 25 of 38 | http://www.sencha.com/forum/tags.php?tag=scss | CC-MAIN-2014-35 | refinedweb | 172 | 69.01 |
Each HTTP connection that your application makes results in a certain amount of overhead. This library supports batching, to allow your application to put several API calls into a single HTTP request. Examples of situations when you might want to use batching:
- You have many small requests to make and would like to minimize HTTP request overhead.
- A user made changes to data while your application was offline, so your application needs to synchronize its local data with the server by sending a lot of updates and deletes.
Note: You're limited to 1000 calls in a single batch request. If you need to make more calls than that, use multiple batch requests.
Note: You cannot use a media upload object in a batch request.
Details
You create batch requests by calling
new_batch_http_request() on your service
object, which returns a
BatchHttpRequest
object, and then calling
add() for each request you want to execute.
You may pass in a callback with each request that is called with the response to that request.
The callback function arguments are:
a unique request identifier for each API call,
a response object which contains the API call response,
and an exception object which may be set to an exception raised by the API call.
After you've added the requests, you call
execute() to make the requests.
The
execute() function blocks until all callbacks have been called.
In the following code snippet, two API requests are batched to a single HTTP request, and each API request is supplied a callback:
from apiclient.http import BatchHttpRequest def list_animals(request_id, response, exception): if exception is not None: # Do something with the exception pass else: # Do something with the response pass def list_farmers(request_id, response): """Do something with the farmers list response.""" pass service = build('farm', 'v2') batch = service.new_batch_http_request() batch.add(service.animals().list(), callback=list_animals) batch.add(service.farmers().list(), callback=list_farmers) batch.execute(http=http)
You can also supply a single callback that gets called for each response:
from apiclient.http import BatchHttpRequest def insert_animal(request_id, response, exception): if exception is not None: # Do something with the exception pass else: # Do something with the response pass service = build('farm', 'v2') batch = service.new_batch_http_request(callback=insert_animal) batch.add(service.animals().insert(name="sheep")) batch.add(service.animals().insert(name="pig")) batch.add(service.animals().insert(name="llama")) batch.execute(http=http)
The
add()
method also allows you to supply a
request_id parameter for each request.
These IDs are provided to the callbacks.
If you don't supply one, the library creates one for you.
The IDs must be unique for each API request,
otherwise
add() raises an exception.
If you supply a callback to both
new_batch_http_request() and
add(), they both get called. | https://developers.google.com/api-client-library/python/guide/batch | CC-MAIN-2018-13 | refinedweb | 456 | 54.22 |
Learn how to use MobX to manage the state of your React apps with ease.
TL;DR: MobX is one of the most popular state management libraries out there, frequently used with React. In this article, you will learn how to manage the state of your React apps with MobX. If you need it, you can find the code developed throughout the article in this GitHub repository.
"Learn how to manage the state of your @reactjs apps with MobX, an alternative to Redux."
TWEET THIS
Prerequisites
Before diving into this article, you are expected to have prior knowledge of React. If you still need to learn about React, you can find a good React article here.
Besides knowing React, you will need Node.js and NPM installed on your machine. If you don't have them, please, follow the instructions here.
State Management in React
Before understanding the concept of state management, you have to understand what state is. State, in this context, is the data layer of your application. When it comes to React and the libraries that help it manage state, you can say that state is an object that contains the data that your application is dealing with. For instance, if you want to display a list of items in your app, your state will contain the items you intend to display. State influences how React components behave and how they are rendered. Yes! It is as simple as that.
State management, therefore, means monitoring and managing the data (i.e., the state) of your app. Almost all apps have state in one way or another and, as such, managing state has become one of the most important parts of building any modern app today.
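To make the idea concrete, here is a minimal, framework-free sketch: the state is just a plain object, and the UI is derived from it by a render function. The names and data below are made up for illustration and are not tied to any library:

```javascript
// A minimal sketch: state is a plain object, and the UI is a function of it.
const state = {
  items: ['Milk', 'Bread', 'Eggs'],
  filter: ''
};

// "Rendering" here just builds a string, standing in for a component tree.
function render(currentState) {
  const visible = currentState.items.filter((item) =>
    item.toLowerCase().includes(currentState.filter.toLowerCase())
  );
  return `Shopping list: ${visible.join(', ')}`;
}

console.log(render(state)); // Shopping list: Milk, Bread, Eggs
state.filter = 'br';
console.log(render(state)); // Shopping list: Bread
```

In a React app, the render step is what your components do; state management is about deciding where that object lives and how updates to it propagate back into new renders.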
When you think about state management in React apps, basically, there are three alternatives:
- Redux;
- the new React Context API;
- and MobX.
Redux
Redux is the most popular state management solution for React apps. Redux strictly abides by the single source of truth principle: the state of your whole application is kept in a single store, components dispatch actions to describe changes, and the store computes the new state and notifies subscribers so they can re-render.
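That flow can be sketched with a hand-rolled store in a few lines of plain JavaScript. This toy mirrors the shape of Redux's real API (createStore, getState, dispatch, subscribe), but it is a simplified illustration, not the actual library:

```javascript
// Toy version of the Redux pattern: a single store, updated only by
// dispatching actions through a pure reducer function.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action); // compute the next state
      listeners.forEach((listener) => listener()); // notify subscribers
    },
    subscribe: (listener) => listeners.push(listener)
  };
}

// A reducer describes how the state changes in response to actions.
function counterReducer(state, action) {
  switch (action.type) {
    case 'INCREMENT': return { count: state.count + 1 };
    case 'DECREMENT': return { count: state.count - 1 };
    default: return state;
  }
}

const store = createStore(counterReducer, { count: 0 });
store.subscribe(() => console.log('state is now', store.getState()));
store.dispatch({ type: 'INCREMENT' });
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState().count); // 2
```

In a real React app, the react-redux bindings play the role of the subscriber: they listen for store updates and re-render the components that read the affected slice of state.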
To learn more about Redux, check out this article.
React Context API
The React Context API is another alternative for state management in your React app. This is not a library like the alternatives mentioned earlier. Rather, it is a solution built into the framework itself. Actually, this API is not something new; it has existed in React for a long while. However, you will frequently hear people calling it the new React Context API because only recently (more specifically, in React v16.3) has this API reached a mature stage.
In fact, Redux uses this API behind the scenes. The API provides a way to pass data down a React component tree without explicitly passing it through all the child components. This API revolves around two components, the
Provider (used by a component located in a higher hierarchy of the component tree) to provide the data and the
Consumer (used by a
Component down the hierarchy) to consume data.
To learn more about the new React Context API, check out this article.
In the next section, you will learn about the third alternative at your disposal, MobX.
MobX Introduction
As mentioned, MobX is another state management library available for React apps. This alternative uses a more reactive process, and it is slowly gaining popularity in the community. MobX is not just a library for React alone, it is also suitable for use with other JavaScript libraries and frameworks that power the frontend of web apps.
"MobX is a reactive alternative to Redux and integrates very well with @reactjs apps."
TWEET THIS
MobX is sponsored by reputable companies such as Algolia, Coinbase, etc. MobX hit 16,719 stars on GitHub at the time of writing. That obviously tells you it is becoming a solid choice for state management in React applications.
In the following subsections, you will learn about important concepts that you have to keep in mind while developing with MobX. Then, in the next section, you will see MobX in action while creating a sample app.
Observable State on MobX
Observable state is one of the main concepts of MobX. The idea behind this concept is to make an object able to emit new changes on them to the observers. You can achieve this with the
@observable decorator. For example, imagine you have a variable named
counter that you expect to change with time. You can make it observable like so:
@observable counter = 0
Or, you can declare it like so:
decorate(ClassName, { counter: observable })
ClassName, in the second example, is the name of the class where the
counter object resides. This decorator can be used in instance fields and property getters.
Computed Values on MobX
Computed value is another important concept of MobX. These values are represented by the
@computed decorator. Computed values work in hand with observable states. With computed values, you can automatically derive values. Say you have a snippet like this:
class ClassName { testTimes100 = 0; @observable test = 0; @computed get computedTest() { return this.testTimes100 * 100; } }
In this snippet, if the value of
test changes, the
computedTest method is called and
testTimes100 is updated automatically. So, with computed values, MobX can automatically compute other values when needed by using
@computed.
Reactions on MobX.
The
when reaction accepts two functions as parameters, the
predicate and the
effect. This reaction runs and observes the first function (the
predicate) and, when this one is met, it runs the
effect function.
Here you can see an example of how this function works:
when( // predicate () => this.isEnabled, // effect () => this.exit() );
Once the
isEnabled class property is
true, the
effect executes the
exit function. The function that returns
isEnabled must be a function that reacts. That is,
isEnabled must be marked with
@computed so that the value is automatically computed or, better yet, marked with an
@observable decorator.
The next reaction function is the
autorun function. Unlike the
when function, this function takes in one function and keeps running it until it is manually disposed. Here you can see how you can use an
autorun function:
@observable age = 10 const dispose = autorun(() => { console.log("My age is: ", age.get()) })
With this in place, anytime the variable
age changes, the anonymous function passed to
autorun logs it out. This function is disposed once you call
dispose.
The next one, the
reaction function, mandatorily accepts two functions: the data function and side effect function. This function is similar to the
autorun function but gives you more control on which observables to track. Here, the data function is tracked and returns data to be used inside effect function. Whereas an
autorun function reacts to everything used in its function, the
reaction function reacts to observables you specify.
Here you can see a simple use case:
const todos = observable([ { title: "Read Auth0 Blog", done: false, }, { title: "Write MobX article", done: true } ]); const reactionSample = reaction( () => todos.map(todo => todo.title), titles => console.log("Reaction: ", titles.join(", ")) );
In this case, the
reaction function reacts to changes in the length and title of the list.
Another reaction function available for React developers is the
observer function. This one is not provided by the main MobX package but, instead, provided by the
mobx-react library. To use the
observer function, you can simply add the
@observer decorator in front of it like so:
@observer class ClassName { // [...] }
With this
reaction function, if an object tagged with the
@observable decorator is used in the
render method of the component and that property changes, the component is automatically re-rendered. The
observer function uses
autorun internally.
Actions on MobX
Actions are anything that modifies the state. You can mark your actions using the
@action decorator. As such, you are supposed to use the
@action on any function that modifies observables or has side effects. A simple example is this:
@observable variable = 0; @action setVariable(newVariable){ this.variable = newVariable; }
This function is updating the value of an observable, and so it is marked with
@action.
MobX and React in Practice
Now that you have gone through the main concepts in MobX, it is time to see it in action. In this section, you will build a simple user review dashboard. In the review dashboard, a user will enter a review using an input field, select a rating from a dropdown list, and finally submit the review.
The dashboard will show the total number of reviews, the average star rating, and a list of all the reviews. You will use MobX to manage certain operations like updating the reviews in realtime on the dashboard, calculating the total number of reviews submitted and lastly, obtaining the average star rating. Once you are done, your app will look similar to this:
Scaffolding a new React app
To quickly scaffold a new React app, you will use the
create-react-app CLI tool to bootstrap your React quickly. If you are on NPM
v5.2.0 or greater, you can open a terminal, move into the directory where you usually save your projects, and issue the following command:
npx create-react-app react-mobx-tutorial
If you have an older version of NPM, you will have to proceed as follows:
# install create-react-app globally npm install -g create-react-app # use it to create your project create-react-app react-mobx-tutorial
This tool will need some seconds (or even a couple of minutes depending on your internet connection) to finish its process. After that, you can open your new project (
react-mobx-tutorial) on your preferred IDE.
Installing Dependencies
After creating your app, the next step is to install the required dependencies. For this article, you need only three dependencies: the main
mobx library to add MobX to your app; the
mobx-react library to add React specific functions available through MobX; and the
react-star-rating-component dependency to easily implement a rating bar in the app.
To install them, move into your project and use NPM, as follows:
# move into app directory cd react-mobx-tutorial # install deps npm install mobx mobx-react react-star-rating-component --save
Creating a Store with MobX
You might wonder why haven't you heard about stores on the last section (MobX Introduction). The thing is, MobX does not require you to use stores to hold your data. Actually, they explain in this resource, stores are part of an opinionated approach that they discovered at Mendix while working with MobX.
"The main responsibility of stores is to move logic and state out of your components into a standalone testable unit that can be used in both frontend and backend JavaScript." - Best Practices for building large scale maintainable projects
As such, the first thing you are going to do in your app is to add a store. This will ensure that the app reads from (and writes to) a global state object instead of its own components' state. To set this up, create a new file called
Store.js inside the
src directory and add the following code to it:
class Store { reviewList = [ {review: "This is a nice article", stars: 2}, {review: "A lovely review", stars: 4}, ]; addReview(e) { this.reviewList.push(e); } get reviewCount() { return this.reviewList.length; } get averageScore() { let avr = 0; this.reviewList.map(e => avr += e.stars); return Math.round(avr / this.reviewList.length * 100) / 100; } } export default Store;
In this store, you defined a
reviewList array containing some items already. This is the list your whole app will feed on. Besides defining this array, the store also defines three methods:
addReview(): Through this method, your app will add new reviews to the
reviewListarray.
averageScore(): This is the method that your app will use to get the average score inputted by users.
reviewCount(): You will use this method to get the size of
reviewList.
Next, you will expose these methods as observables so that other parts of your application can make use of it. MobX has a set of decorators that defines how observable properties will behave (as discussed earlier). To declare these observables, you will use the
decorate function and add it to your
App.js file as shown here:
// ... leave other imports untouched ... import Store from './Store'; import {decorate, observable, action, computed} from 'mobx'; decorate(Store, { reviewList: observable, addReview: action, averageScore: computed, reviewCount: computed }); // ... leave class definition and export statement untouched ...
As you can see, you are using the
decorate function to apply the
observable,
action, and
computed decorators to the fields defined by
Store. This makes them tightly integrated with MobX, and you can now make your app react to changes in them.
Updating the Store on MobX
Next, you will create a component with the form that will collect users' review and update the store accordingly. To keep things organized, you will create a directory called
components inside the
src directory. For the rest of the article, you will use this directory for all your React components.
After creating the
components directory, add a file called
Form.js inside it and add the following code to this file:
import React, {Component} from 'react'; export default class Form extends Component { submitReview = (e) => { e.preventDefault(); const review = this.review.value; const stars = Number(this.stars.value); this.props.store.addReview({review, stars}) }; render() { return ( <div className="formSection"> <div className="form-group"> <p>Submit a Review</p> </div> <form onSubmit={this.submitReview}> <div className="row"> <div className="col-md-4"> <div className="form-group"> <input type="text" name="review" ref={node => { this.review = node; }} </div> </div> <div className="col-md-4"> <div className="form-group"> <select name="stars" id="stars" className="form-control" ref={node => { this.stars = node; }}> <option value="1">1 Star</option> <option value="2">2 Star</option> <option value="3">3 Star</option> <option value="4">4 Star</option> <option value="5">5 Star</option> </select> </div> </div> <div className="col-md-4"> <div className="form-group"> <button className="btn btn-success" type="submit">SUBMIT REVIEW</button> </div> </div> </div> </form> </div> ) } }
The new component that you just defined contains only two functions:
submitReview and
render. The
submitReview function, which React will call when users submit the form, get the
review inputted by users and the number of
stars and then call the
addReview function from the store. Note that this component is calling the
addReview function through
props. As such, while using the
Form component, you will have to pass this function to it.
Now, regarding the
render function, although lengthy, you can see that all it does is to use some HTML elements and some Bootstrap classes to define a beautiful form with:
- a title: "Submit a Review";
- an
inputtext where users will write their review;
- a drop-down box (
select) where users will choose how many stars they give to the review (between 1 and 5);
- and a
submitthat will trigger the
submitReviewfunction when clicked (through the
onSubmit={this.submitReview}property of the
formelement).
Reacting to Changes with MobX
Once users submit the form and the store receives the new review, you need to display the updated data to your users immediately. For this purpose, you will create a component that will display the average number of stars from reviews given and the total number of reviews.
To create this component, create a new file called
Dashboard.js inside the
components directory and insert the following code into it:
import React from 'react'; import {observer} from 'mobx-react' function Dashboard({store}) { return ( <div className="dashboardSection"> <div className="row"> <div className="col-md-6"> <div className="card text-white bg-primary mb-6"> <div className="card-body"> <div className="row"> <div className="col-md-6"> <i className="fa fa-comments fa-5x" /> </div> <div className="col-md-6 text-right"> <p id="reviewCount">{store.reviewCount}</p> <p className="announcement-text">Reviews</p> </div> </div> </div> </div> </div> <div className="col-md-6"> <div className="card text-white bg-success mb-6"> <div className="card-body"> <div className="row"> <div className="col-md-6"> <i className="fa fa-star fa-5x" /> </div> <div className="col-md-6 text-right"> <p id="averageScores">{store.averageScore}</p> <p className="announcement-text">Average Scores</p> </div> </div> </div> </div> </div> </div> </div> ) } export default observer(Dashboard);
As you can see, this component contains two
card elements (or Bootstrap components). The first one uses
store.reviewCount to show how many reviews were inputted so far. The second one uses
store.averageScore to show the average score given by reviewers.
One thing that you must note is that, instead of exporting the
Dashboard component directly, you are encapsulating the component inside the
observer() function. This turns your
Dashboard into a reactive and smart component. With this in place, any changes made to any content in store within the component above will make React re-render it. That is, when
averageScore and
reviewCount get updated in your store, React will update the user interface with new contents instantaneously.
Besides this dashboard, you will also create a component that will show all reviews inputted by users. As such, create a file called
Reviews.js inside the
components directory and paste the following code into it:
import React from 'react'; import {observer} from 'mobx-react'; import StarRatingComponent from 'react-star-rating-component'; function List({data}) { return ( <li className="list-group-item"> <div className="float-left">{data.review}</div> <div className="float-right"> <StarRatingComponent name="reviewRate" starCount={data.stars}/> </div> </li> ) } function Reviews({store}) { return ( <div className="reviewsWrapper"> <div className="row"> <div className="col-12"> <div className="card"> <div className="card-header"> <i className="fa fa-comments"/> Reviews </div> <ul className="list-group list-group-flush"> {store.reviewList.map((e, i) => <List key={i} data={e} /> )} </ul> </div> </div> </div> </div> ) } export default observer(Reviews);
In the snippet above, you are importing the
StarRatingComponent installed earlier to display the number of stars selected by the user during the review. Also, you are creating a component called
Review that is used only inside this file. This component is what will render the details of a single review, like the comment inputted (
review) and the amount of
stars.
Then, in the end, you are defining the
Reviews component, which is also wrapped by the
observer() function to make the component receive and display changes in the MobX store as they come. This component is quite simple. It uses the
card Bootstrap component to display an unordered (
ul) list of reviews (
reviewList) and a title ("Reviews").
Wrapping Up your MobX App
With these components in place, your app is almost ready for the prime time. To wrap up things, you will just make some adjustments to the UI, make your
App component use the components you defined in the previous sections, and import Bootstrap (which you have been using but you haven't imported).
So, for starters, open the
App.css file in your project and replace its contents like this:
.formSection { margin-top: 30px; } .formSection p { font-weight: bold; font-size: 20px; } .dashboardSection { margin-top: 50px; } .reviewsWrapper { margin-top: 50px; }
These are just small adjustments so you can have a beautiful user interface.
Next, open the
App.js file and update this as follows:
// ... leave the other import statements untouched ... import Form from './components/Form'; import Dashboard from './components/Dashboard'; import Reviews from './components/Reviews'; import Store from './Store'; // ... leave decorate(Store, {...}) untouched ... const reviewStore = new Store(); class App extends Component { render() { return ( <div className="container"> <Form store={reviewStore}/> <Dashboard store={reviewStore}/> <Reviews store={reviewStore}/> </div> ); } } export default App;
There are three new things happening in the new version of your
App component:
- You are importing and using all the components you defined before (
Form,
Dashboard, and
Reviews).
- You are creating an instance of your
Storeclass and calling it
reviewStore.
- You are passing the
reviewStoreas a prop called
storeto all components.
With that in place, the last thing you will have to do is to open the
index.html file and update it as follows:
<!DOCTYPE html> <html lang="en"> <head> <!-- ... leave other tags untouched ... --> <title>React and MobX</title> <link rel="stylesheet" href=""> <link href="" rel="stylesheet"> </head> <!-- ... leave body and its children untouched ... --> </html>
In this case, you are simply changing the title of your app to "React and MobX" and making it import Bootstrap and Font Awesome (a library of icons that you are using to enhance your UI).
After refactoring the
index.html file, go back to your terminal and make your app run by issuing the following command:
# from the react-mobx-tutorial npm start
Now, if you open in your preferred browser, you will be able to interact with your app and see React and MobX in action. How cool is that?
"I just learned how to used MobX to manage the state of a @reactjs app".
Conclusion
In this post, you learned about state management in React apps. You also had the opportunity to take a quick look at the various alternatives for managing state in React apps, more specifically, MobX.
After that, you were able to build an app to show the most important concepts in MobX. MobX might not as popular as Redux when it comes to state management on React, but it is very mature, easy to start with, and provides a seamless way to integrate into a new or an existing application.
I do hope that you enjoyed this tutorial. Happy hacking! | https://auth0.com/blog/managing-the-state-of-react-apps-with-mobx/ | CC-MAIN-2018-47 | refinedweb | 3,556 | 54.83 |
Hello,
Is there any product to orchestrate/manage web services in a simple way?
I have a simple use case. I am building a simple order-processing application which uses 4 web services.
1. Email validation web services (synchronous)
2. Credit card web service (asynchronous)
3. Inventory web service (asynchronous)
4. Shipping web service (asynchronous)
Web service #1 is a synchronous web service, so first I call emailWS.validate(). If the email is valid, I then call async web services #2 and #3. Web services #2 and #3 are invoked in parallel. After receiving response messages from async web services #2 and #3, I need to call web service #4.
The problem I am facing is managing the response messages from those async web services, which includes correlating each response message with my application data. Is there any standard way to correlate a response message with my request message? And how do I handle failures? Let's say the inventory web service is down for 2 days after I sent my request. I could do some kind of ping mechanism to see whether the web service is down or not.
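One product-independent way to attack the correlation problem is to attach a correlation ID to every outgoing request and keep a registry of pending requests keyed by that ID; when an asynchronous response arrives carrying the same ID, you look up the original application data. The sketch below illustrates the idea in plain Python; the names and message shapes are illustrative only, not part of any web service standard.

```python
import uuid

class CorrelationRegistry:
    """Maps correlation IDs to the application data of pending requests."""
    def __init__(self):
        self.pending = {}

    def register(self, app_data):
        # Generate a unique ID, send it along with the request,
        # and remember the application context under that ID.
        corr_id = str(uuid.uuid4())
        self.pending[corr_id] = app_data
        return corr_id

    def on_response(self, corr_id, response):
        # Correlate the async response back to the original request data.
        app_data = self.pending.pop(corr_id, None)
        if app_data is None:
            raise KeyError("unknown or already-handled correlation id")
        return app_data, response

registry = CorrelationRegistry()
cid = registry.register({"order_id": 42, "service": "inventory"})
# ... later, when the inventory service replies asynchronously ...
order, reply = registry.on_response(cid, {"in_stock": True})
print(order["order_id"], reply["in_stock"])  # 42 True
```

A duplicate or unknown response fails fast, which also gives you a natural hook for the failure handling asked about above: anything still sitting in `pending` after a timeout is a request whose service may be down.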
I guess there are lots of issues around managing and coordinating multiple web services, particularly in a loosely coupled asynchronous environment.
I believe whoever uses multiple web services within an application might face the same problem that I face.
orchestrating web services (6 messages)
Threaded Messages (6)
- orchestrating web services by Edwin Khodabakchian on April 17 2002 19:40 EDT
- orchestrating web services by Robert Yu on April 17 2002 20:10 EDT
- orchestrating web services by Edwin Khodabakchian on April 22 2002 03:03 EDT
- orchestrating web services by AMIT RINGSHIA on April 22 2002 05:21 EDT
- orchestrating web services by Doron Sherman on April 22 2002 07:02 EDT
- orchestrating web services by Doron Sherman on April 26 2002 15:39 EDT
orchestrating web services
Robert,
- Posted by: Edwin Khodabakchian
- Posted on: April 17 2002 19:40 EDT
- in response to Robert Yu
Collaxa offers a web service orchestration solution that focuses on addressing the exact problem you are describing: composing a set of synchronous and asynchronous web services into a multi-step business process.
The solution is based on a flexible abstraction called ScenarioBean and an orchestration container.
ScenarioBeans are an XML/Java abstraction that allows developers to choreograph interactions with synchronous and asynchronous web services without having to deal with a lot of the plumbing you are describing in your message.
Here is what your example would look like:
package com.collaxa.samples;

public class OrderProcessingScenario extends Scenario
{
    <conversation>
    public void processOrder( IOrder order )
    {
        // invoke asynchronous web services. The orchestration
        // container will detect that the invocation is asynchronous
        // and passivate the scenario until a response is received.
        // -- all SOAP marshalling, stack persistence,
        // -- and correlation is handled by the container.
        creditCardWS.handlePayment( order );
        inventoryWC.checkQuantity( order );
        shippingWC.sendOrder( order );
    }
}
ScenarioBeans make the complexity of calling asynchronous web services transparent to the developer.
4 orchestration tags (<parallel>, <parallelN>, <listen> and <conversation>) combined with the full power of Java allow developers to choreograph both simple linear interactions or more sophisticated non-linear interactions.
You can learn more about ScenarioBeans and download the Collaxa orchestration server and kick the tires from
- Edwin (edwink at collaxa dot com)
orchestrating web services
Hello Edwin,
- Posted by: Robert Yu
- Posted on: April 17 2002 20:10 EDT
- in response to Edwin Khodabakchian
Thanks for the reply. I looked at the website (); very cool product!
Awesome! It addresses most of my problems, such as async web service correlation. It looks like I can even make EJB and JMS calls within my Order Processing Scenario, so I don't need to convert my EJBs into web services; that's cool.
It looks like I just need to deal with 4 special tags to solve most of my plumbing problems. I have a couple of questions related to the Scenario Bean:
1. How is the Scenario bean different from a normal EJB session bean/entity bean?
2. Can I run the scenario bean inside any J2EE container?
thanks
orchestrating web services
Robert,
- Posted by: Edwin Khodabakchian
- Posted on: April 22 2002 15:03 EDT
- in response to Robert Yu
Thank you for your positive feedback.
Question #1: ScenarioBean versus SessionBean?
SessionBeans are used to model short-lived business transactions: when you invoke a method on a SessionBean, you expect it to do its magic and return within a few milliseconds.
ScenarioBeans, on the other hand, are used to model long-lived business processes and workflows. When you invoke a ScenarioBean, a new long-lived, multi-step transaction is initiated and its handle/key is returned. The ScenarioBean coordinates the business process, invoking web services, EJBs, JMS components, and also users if needed. An invocation on a ScenarioBean might take a few minutes or a few months to complete.
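The distinction Edwin draws (a SessionBean call returns in milliseconds, while a ScenarioBean call initiates a long-lived transaction and immediately hands back its handle/key) can be modeled minimally as follows. This is a plain-Python toy to make the contract concrete, not Collaxa's actual API:

```python
import uuid

class ProcessStore:
    """Toy model: initiating a long-lived process returns a handle
    that the caller can use later to query or advance the instance."""
    def __init__(self):
        self.instances = {}

    def initiate(self, payload):
        handle = str(uuid.uuid4())          # key of the new multi-step transaction
        self.instances[handle] = {"payload": payload, "state": "running"}
        return handle                       # returned immediately; work continues

    def complete(self, handle):
        self.instances[handle]["state"] = "completed"

    def status(self, handle):
        return self.instances[handle]["state"]

store = ProcessStore()
h = store.initiate({"order": 7})
print(store.status(h))   # running
store.complete(h)        # days or months later
print(store.status(h))   # completed
```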
Question #2: Can I run ScenarioBeans inside any EJB container?
ScenarioBeans come with their own container (we call that container an orchestration container). That container encapsulates all the services needed to execute, manage, persist, and monitor ScenarioBeans. Collaxa will be publishing a white paper on the internals of the container very shortly.
The orchestration container is 100% interoperable with existing EJB containers, allowing developers to leverage EJBs, JMS, and other J2EE services.
The orchestration container can be loaded in memory with most application servers, web servers, and Java messaging servers, offering a flexible way to add orchestration capabilities to existing infrastructure.
Does this address your questions? Did you get a chance to download the Collaxa Orchestration Server and kick the tires?
Please let me know if there is anything else we can do.
Best,
Edwin
orchestrating web services
Also look at IBM's WSFL (Web Services Flow Language). I am not sure if it has been implemented yet, but I may be out of sync now.
- Posted by: AMIT RINGSHIA
- Posted on: April 22 2002 17:21 EDT
- in response to Edwin Khodabakchian
Also, IBM has another tool on alphaWorks called "Web Services PMT (Process Management Toolkit)" that is worth a look too.
-Amit
orchestrating web services
A Java Scenario Bean is a programmatic abstraction that leverages the full power of Java and a few tags to program orchestration logic. Declarative dialects of XML, such as IBM's WSFL or other emerging standards, such as BPML, can be executed by special-purpose Scenario Beans. This way, the orchestration container runs WSFL Scenario Beans, BPML Scenario Beans, etc. Also, note that these XML dialects are not the first (and likely not the last) attempts at creating flow language standards. Scenario Beans is a flexible, future-proof programmable abstraction that can support other dialects and run all of them in parallel as part of your system if you so desire.
- Posted by: Doron Sherman
- Posted on: April 22 2002 19:02 EDT
- in response to AMIT RINGSHIA
Doron
orchestrating web services
Here's a more elaborate example of web service orchestration using Java. There's a code sample you can review in the following article describing a real-estate application:
- Posted by: Doron Sherman
- Posted on: April 26 2002 15:39 EDT
- in response to Robert Yu | http://www.theserverside.com/discussions/thread.tss?thread_id=13075 | CC-MAIN-2015-14 | refinedweb | 1,213 | 52.9 |
Odoo Help
[SOLVED] How to pass current values to wizard in odoo?
I have a wizard that opens when I click a button. This wizard inserts some values into a child model of the main model, so I need to send some data from the main model to the wizard.
The data includes the current record's id.
Mohamed,
Use context for this
Example
return {
    'type': 'ir.actions.act_window',
    'name': 'Name',
    'view_mode': 'form',
    'target': 'new',
    'res_model': 'Your.Model',
    'context': {'parent_obj': self.id},
}
Inside the function of the wizard object:

@api.one
def your_function(self):
    print self._context['parent_obj']
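The mechanism at work here is just a dictionary handoff: the button method on the parent model returns an action whose `context` carries the parent's id, and the wizard later reads it back from `self._context`. The data flow can be mimicked outside Odoo with a plain-Python mock (this is illustrative only, not Odoo framework code; `FakeWizard` is a hypothetical stand-in):

```python
class FakeWizard:
    """Stand-in for a transient wizard model: receives the caller's context."""
    def __init__(self, context):
        self._context = context

    def your_function(self):
        # Same lookup the answer shows: read what the parent passed along.
        return self._context['parent_obj']

# The button action on the main model returns an act_window whose
# 'context' carries the parent record's id:
action = {'res_model': 'your.model', 'context': {'parent_obj': 7}}
wizard = FakeWizard(action['context'])
print(wizard.your_function())   # 7
```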
Thanks
Creating a Pager Control for ASP.NET
Dino Esposito
Wintellect
October 2003
Applies to:
Microsoft® ASP.NET
Summary: Tackles the problem of adding paging capabilities to any ASP.NET control. Also gives a number of tips and tricks useful when developing composite ASP.NET controls. (18 printed pages)
Download the source code for this article.
From the programmer's perspective, one of the worst drawbacks of Microsoft® SQL Server™ queries is that they often return many more rows than actually fit in the application's user interface. This unfortunate circumstance puts developers in a dilemma. Should they create a very long page that takes users a while to scroll through, or is the issue better addressed by setting up a manual paging mechanism?
Which solution is better depends mostly upon the nature of the data you retrieve. A long list of items—for example, the result of a search—is preferably rendered through equally sized, relatively short pages. A long, single item, like the text of an article, is more comfortably consumed if inserted entirely in the page. In the final analysis, the decision should be made in light of the overall usability of the application. So, how does Microsoft® ASP.NET face the problem of data paging?
ASP.NET provides powerful data-bound controls to format the results of a query into HTML markup. However, only one of these data-bound controls—specifically, the DataGrid control—natively supports paging. Other controls, such as the DataList, the Repeater, or the CheckBoxList, just don't page. These and other list controls don't page, not because they are structurally unable to page, but because they, unlike the DataGrid, don't contain any specific code that handles paging. However, the code to handle paging is relatively boilerplate and can be added to any of these controls.
Scott Mitchell covered DataGrid paging in a recent article titled Creating a Pageable, Sortable DataGrid. The article also references other useful information around the Web that gives you the basics, and more, relating to grid paging. If you want to see an example of how to make the DataList control page, have a look at this article. It demonstrates how to create a custom DataList control that features current index and page-size properties and fires a page-changed event.
The same code can be reworked to serve the paging needs of other list controls, such as the ListBox and the CheckBoxList. Nevertheless, adding paging capabilities to individual controls is not really all that great of an idea, for as mentioned, the paging code is rather boilerplate. So what could be better for a smart programmer than stuffing it all into a new general-purpose pager control?
In this article, I'll build a pager control that will make a buddy list control page over the results of a SQL Server query. The control is named SqlPager and supports two types of buddy controls—list controls and base data list controls.
Highlights of the SqlPager Control
The SqlPager control is an ASP.NET composite control that contains a one-row table. The row, in turn, contains two cells—the navigation bar and the page descriptor. The user interface of the control is a strip that ideally is the same width as the buddy control. The navigation bar section provides the clickable elements to move through pages; the page descriptor section gives users some feedback about the currently displayed page.
Figure 1. The SqlPager control as it shows up in the Visual Studio .NET page designer
Just like the embedded pager of the DataGrid control, the SqlPager control has two navigation modes—next/previous and numeric pages. An ad hoc property, PagerStyle, lets you choose the more convenient style. The control works in conjunction with a list control. You assign the pager such a buddy control through the ControlToPaginate string property.
SqlPager1.ControlToPaginate = "ListBox1";
Basically, the pager gets the results of a SQL Server query, prepares an appropriate page of records, and displays that through the DataSource property of the buddy control. When the user clicks to view a new page, the pager retrieves the requested data and shows it again through the buddy. The paging mechanism is completely transparent to the list control. The data source of the list control is programmatically updated and contains, at any time, only the records that fit into the current page.
The paging engine of the control features quite a few public properties, such as CurrentPageIndex, ItemsPerPage, and PageCount to get and set the index of the current page, the size of each page, and the total number of displayed pages. The pager manages any logic needed for data retrieval and paging.
The SelectCommand property sets the text of the command to use for fetching data. The ConnectionString property defines the name and location of the database, plus the credentials to connect. How the query is executed depends on the value of the PagingMode property. Feasible values for the property are the values of the PagingMode enumeration of the same name—Cached and NonCached. If the Cached option is selected, the entire result set is retrieved using a data adapter and a DataTable object. The result set is optionally placed in the ASP.NET Cache object and reused until it expires. If the NonCached option is selected, the query retrieves only the records that fit into the current page. No data is placed in the ASP.NET Cache this time. The NonCached mode is nearly identical to the custom paging mode of the DataGrid control.
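To make the two modes concrete, here is a hypothetical page-level snippet configuring the pager. The property names are those described in the text; the control IDs, connection string, and query text are illustrative assumptions, not taken from the article's sample.

```csharp
// Illustrative only: control IDs, connection string, and query text are
// assumptions; the properties are the ones described in the article.
SqlPager1.ControlToPaginate = "ListBox1";
SqlPager1.ConnectionString =
    "SERVER=localhost;DATABASE=northwind;Integrated Security=SSPI;";
SqlPager1.SelectCommand = "SELECT lastname, firstname, title FROM employees";
SqlPager1.SortField = "lastname";
SqlPager1.ItemsPerPage = 10;

// NonCached: each postback fetches only the current page from SQL Server.
// Cached: the whole result set is fetched once and kept in the ASP.NET Cache.
SqlPager1.PagingMode = PagingMode.NonCached;
SqlPager1.DataBind();
```

Switching the last assignment to PagingMode.Cached trades per-request query cost for Web-server memory, as discussed later in the article.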
The full programming interface of the SqlPager control is shown in the table below.
Table 1. The programming interface of the SqlPager control
Since the SqlPager control inherits WebControl, it also features a bunch of UI-related properties to manage fonts, borders, and colors.
Building the SqlPager Control
I'll build the SqlPager control as a composite control and have it inherit the WebControl class. Composite controls are a special flavor of ASP.NET server controls that result from the combination of one or more constituent server controls.
public class SqlPager : WebControl, INamingContainer { ... }
Unless you are building a completely custom control or extending an existing one, most of the time when you create a new control, you are actually building a composite control. To create the SqlPager I will assemble a Table control and either a few LinkButton controls or a DropDownList control, depending on the pager style.
There are a few guidelines to bear in mind to build composite controls. First off, you have to override the CreateChildControls protected method. The CreateChildControls method is inherited from Control and is called when server controls have to create child controls for rendering or after a postback.
protected override void CreateChildControls()
{
    // Clear existing child controls and their viewstate
    Controls.Clear();
    ClearChildViewState();

    // Build the control tree
    BuildControlHierarchy();
}
When overriding this method, you should do a couple of important things. You create and initialize any required instance of the child controls and add them to the Controls collection of the parent control. Before you go with the new control tree, though, you should remove any existing child controls and clear any viewstate information child controls may have left around.
A composite component should also implement the INamingContainer interface, so that the ASP.NET runtime can create a new naming scope for it. This ensures that all controls in the composite control have a unique name. This will also ensure that the postback data of child controls is handled automatically.
Being a naming container is particularly important for the SqlPager control. SqlPager, in fact, contains some LinkButton controls and needs to catch and handle their click events in order to navigate pages. Just like any other control in an ASP.NET page, a LinkButton is given an ID, which is used to identify the control that will handle the postback event.
When handling a postback, the ASP.NET runtime attempts to find a match between the event target ID and the ID of any control that is a direct child of the main form. Our LinkButton is a child of the pager and would consequently be unable to run its server-side code. Does this mean that only direct children of the form can fire and handle server events? Of course not, as long as you use naming containers.
By making the SqlPager control implement the INamingContainer interface, you change the actual ID of the embedded link button from, say, First to SqlPager1:First. When users click to view a new page, the postback event has SqlPager1:First as the target control. The algorithm that the runtime uses to identify the target control is actually a bit more sophisticated than I described a moment ago. The runtime considers the name of the event target as a colon-separated string. The match is actually sought between the children of the form and the first token of a colon-separated string, such as SqlPager1:First. Since the pager is a child of the form, the match is found and the pager gets the click event. If you find this explanation inadequate or confusing, just download the source code of the SqlPager control, remove the INamingContainer marker interface, and recompile. You'll see that the pager posts back but is unable to handle the click event internally.
The INamingContainer interface is a method-less, marker interface whose implementation doesn't require you to do more than to specify the name in the class declaration.
Another important aspect of composite controls is that they normally don't require custom logic for rendering. The rendering of a composite control follows from the rendering of constituent controls. When building a composite control, you normally don't have to override the Render method.
The SqlPager tree of controls consists of a table with one row and two cells. The table inherits most of the visual settings of the pager—foreground and background colors, borders, font information, and width. The first cell contains the navigation bar whose structure depends on the value of the PagerStyle property. If the pager style is NextPrev, the navigation bar is made of four VCR-like link buttons. Otherwise, it will consist of a drop-down list.
private void BuildControlHierarchy()
{
    // Build the surrounding table (one row, two cells)
    Table t = new Table();
    t.Font.Name = this.Font.Name;
    t.Font.Size = this.Font.Size;
    t.BorderStyle = this.BorderStyle;
    t.BorderWidth = this.BorderWidth;
    t.BorderColor = this.BorderColor;
    t.Width = this.Width;
    t.Height = this.Height;
    t.BackColor = this.BackColor;
    t.ForeColor = this.ForeColor;

    // Build the table row
    TableRow row = new TableRow();
    t.Rows.Add(row);

    // Build the cell with the navigation bar
    TableCell cellNavBar = new TableCell();
    if (PagerStyle == PagerStyle.NextPrev)
        BuildNextPrevUI(cellNavBar);
    else
        BuildNumericPagesUI(cellNavBar);
    row.Cells.Add(cellNavBar);

    // Build the cell with the page index
    TableCell cellPageDesc = new TableCell();
    cellPageDesc.HorizontalAlign = HorizontalAlign.Right;
    BuildCurrentPage(cellPageDesc);
    row.Cells.Add(cellPageDesc);

    // Add the table to the control tree
    this.Controls.Add(t);
}
It is extremely important for the correct rendering of the pager that you add each control to the proper Controls collection. The outermost table must be added to the Controls collection of the pager. The link buttons and the drop-down list must be added to the Controls collection of the respective table cell.
The following code gives an idea of the code used to build the link buttons navigation bar. Each button is rendered with a Webdings character, disabled as needed, and bound to an internal Click event handler.
private void BuildNextPrevUI(TableCell cell)
{
    bool isValidPage = ((CurrentPageIndex >= 0) && (CurrentPageIndex <= TotalPages-1));
    bool canMoveBack = (CurrentPageIndex > 0);
    bool canMoveForward = (CurrentPageIndex < TotalPages-1);

    // Render the << button
    LinkButton first = new LinkButton();
    first.ID = "First";
    first.Click += new EventHandler(first_Click);
    first.Font.Name = "webdings";
    first.Font.Size = FontUnit.Medium;
    first.ForeColor = ForeColor;
    first.ToolTip = "First page";
    first.Text = "7";
    first.Enabled = isValidPage && canMoveBack;
    cell.Controls.Add(first);
    :
}
The alternative style for the pager—numeric pages listed in a drop-down list—is built as follows:
private void BuildNumericPagesUI(TableCell cell)
{
    // Render a drop-down list
    DropDownList pageList = new DropDownList();
    pageList.ID = "PageList";
    pageList.AutoPostBack = true;
    pageList.SelectedIndexChanged += new EventHandler(PageList_Click);
    pageList.Font.Name = this.Font.Name;
    pageList.Font.Size = Font.Size;
    pageList.ForeColor = ForeColor;

    if (TotalPages <= 0 || CurrentPageIndex == -1)
    {
        pageList.Items.Add("No pages");
        pageList.Enabled = false;
        pageList.SelectedIndex = 0;
    }
    else // Populate the list
    {
        for (int i = 1; i <= TotalPages; i++)
        {
            ListItem item = new ListItem(i.ToString(), (i-1).ToString());
            pageList.Items.Add(item);
        }
        pageList.SelectedIndex = CurrentPageIndex;
    }

    // Add the list to the cell's Controls collection
    cell.Controls.Add(pageList);
}
All event handlers—Click and SelectedIndexChanged—end up changing the currently displayed page. Both methods fall into a common GoToPage private method.
private void first_Click(object sender, EventArgs e)
{
    GoToPage(0);
}

private void PageList_Click(object sender, EventArgs e)
{
    DropDownList pageList = (DropDownList) sender;
    int pageIndex = Convert.ToInt32(pageList.SelectedItem.Value);
    GoToPage(pageIndex);
}

private void GoToPage(int pageIndex)
{
    // Prepare event data
    PageChangedEventArgs e = new PageChangedEventArgs();
    e.OldPageIndex = CurrentPageIndex;
    e.NewPageIndex = pageIndex;

    // Update the current index
    CurrentPageIndex = pageIndex;

    // Fire the page changed event
    OnPageIndexChanged(e);

    // Bind new data
    DataBind();
}
Handlers for other navigation buttons differ from first_Click only for the page number they pass to the GoToPage method. The GoToPage method is responsible for the PageIndexChanged event and for firing the data-binding process. It prepares the event data (old and new page index) and triggers the event. The GoToPage is defined as private but you can programmatically change the displayed page using the CurrentPageIndex property.
public int CurrentPageIndex { get {return Convert.ToInt32(ViewState["CurrentPageIndex"]);} set {ViewState["CurrentPageIndex"] = value;} }
The CurrentPageIndex property, like all the properties listed in Table 1, has a pretty simple implementation: it saves its content to, and restores it from, the viewstate. The page index is validated and used during the data binding step.
The Data Binding Step
The DataBind method is common to all ASP.NET controls and, for data-bound controls, it triggers a refresh of the user interface to reflect new data. The SqlPager control uses this method to start the data retrieval operation based on the values of the SelectCommand and ConnectionString properties. It goes without saying that the process aborts if either of these properties is blank. Likewise, the data binding step is canceled if the buddy control doesn't exist. To locate the buddy control, the DataBind method uses the FindControl method of the Page class. It follows that the buddy control must be a direct child of the main form.
The control to paginate can't be an arbitrary ASP.NET server control. It must be either a list control or a base data list. More generally, the buddy control must expose the DataSource property and implement the DataBind method. These are actually the only requirements for a potential pageable control. All controls in the Microsoft® .NET Framework that inherit from either ListControl or BaseDataList fulfill the first requirement; all Web controls, instead, by design meet the DataBind requirement. With the current implementation, you can't use the SqlPager control to page a Repeater. Unlike its companion controls, DataList and DataGrid, the Repeater doesn't inherit BaseDataList, nor does it provide the features of a list control. The table below lists the controls you can page with SqlPager.
Table 2. Data-bound controls that can be paged by the SqlPager control
The code below illustrates the data binding process as implemented by the SqlPager control.
public override void DataBind()
{
    // Fire the data binding event
    base.DataBinding();

    // Controls must be recreated after data binding
    ChildControlsCreated = false;

    // Ensure the control exists and is a list control
    _controlToPaginate = Page.FindControl(ControlToPaginate);
    if (_controlToPaginate == null)
        return;
    if (!(_controlToPaginate is BaseDataList || _controlToPaginate is ListControl))
        return;

    // Ensure enough info to connect and query is specified
    if (ConnectionString == "" || SelectCommand == "")
        return;

    // Fetch data
    if (PagingMode == PagingMode.Cached)
        FetchAllData();
    else
        FetchPageData();

    // Bind data to the buddy control
    if (_controlToPaginate is BaseDataList)
    {
        BaseDataList baseDataListControl = (BaseDataList) _controlToPaginate;
        baseDataListControl.DataSource = _dataSource;
        baseDataListControl.DataBind();
        return;
    }
    if (_controlToPaginate is ListControl)
    {
        ListControl listControl = (ListControl) _controlToPaginate;
        listControl.Items.Clear();
        listControl.DataSource = _dataSource;
        listControl.DataBind();
        return;
    }
}
A different fetch routine is called according to the value of the PagingMode property. In any case, the resultset is bound to an instance of the PagedDataSource class. This class provides some facilities to page data. In particular, when the whole data set is cached, the class automatically retrieves the records for the current page and returns boolean values to give information about the first and the last page. I'll come back to the internal structure of this class shortly. In the listing above, the helper PagedDataSource object is represented by the _dataSource variable.
Next, the SqlPager control figures out the type of the buddy control and binds the contents of the PagedDataSource object to the buddy's DataSource property.
At a certain point, the DataBind method shown above also resets the ChildControlsCreated property to false. Why is it so?
When the page that contains the pager posts back, all controls are recreated; the pager is no exception. Normally, all controls and their children are created by the time the page is ready for rendering. A moment before each control receives the OnPreRender notification, the protected EnsureChildControls method is called so that each control can build its control tree. When this happens, the data binding process is complete and the new data has been cached.
However, when the page posts back because one of the pager constituent controls is clicked (that is, a user clicked to change the page), the control tree of the pager is built, long before the rendering stage. In particular, the tree must be in place when the related server-side event is processed, and consequently, before the data binding starts. The rub is that the data binding modifies the page index and this must be reflected in the user interface. If you don't take countermeasures, the page index in the pager won't be refreshed when users switch to another page.
There are various ways to fix this, but it's important to be aware of the issue and its real causes. You can avoid building a control tree and generate all the output in the Render method. Alternatively, you can modify the portions of the tree affected by the data binding changes. I've chosen a third way, one that requires less code and works whatever portions of the control's user interface are affected by the data binding changes. By setting the ChildControlsCreated property to false, you invalidate any previously created tree of controls. As a result, the tree will be recreated prior to rendering.
The Paging Engine
The SqlPager control supports two ways of retrieving data—cached and non-cached. In the former case, the select command executes as is and the whole resultset is bound to the internal paged data source object. The PagedDataSource object will automatically return the records that fit into a particular page. The PagedDataSource class is also the system component working behind the DataGrid default paging mechanism.
Retrieving all the records to display only the few that fit into a page is not always a smart approach. Due to the stateless nature of Web applications, in fact, the potentially large query runs every time the page is requested. To be effective, the cached approach must rely on some sort of cache object, and the ASP.NET Cache object is an excellent candidate. The use of caching techniques makes the application run faster, but it also offers a snapshot of the data that doesn't reflect the most recent changes. In addition, more memory is used on the Web server. Paradoxically, this might even create a scalability problem if a large amount of data is cached on a per-session basis. The Cache container is global to the application; if the data is stored in it on a per-session basis, you also need to generate session-specific entry names.
On the up side of the Cache object is full support for expiration policies. In other words, the data stored in the cache can be automatically released after a certain duration. The following code illustrates a private method of the SqlPager class that fetches data and stores it in the cache.
private void FetchAllData()
{
    // Look for data in the ASP.NET Cache
    DataTable data = (DataTable) Page.Cache[CacheKeyName];
    if (data == null)
    {
        // Fix SelectCommand with order-by info
        AdjustSelectCommand(true);

        // If data expired or has never been fetched, go to the database
        SqlDataAdapter adapter = new SqlDataAdapter(SelectCommand, ConnectionString);
        data = new DataTable();
        adapter.Fill(data);
        Page.Cache.Insert(CacheKeyName, data, null,
            DateTime.Now.AddSeconds(CacheDuration),
            System.Web.Caching.Cache.NoSlidingExpiration);
    }

    // Configure the paged data source component
    if (_dataSource == null)
        _dataSource = new PagedDataSource();
    _dataSource.DataSource = data.DefaultView;
    _dataSource.AllowPaging = true;
    _dataSource.PageSize = ItemsPerPage;
    TotalPages = _dataSource.PageCount;

    // Ensure the page index is valid
    ValidatePageIndex();
    if (CurrentPageIndex == -1)
    {
        _dataSource = null;
        return;
    }

    // Select the page to view
    _dataSource.CurrentPageIndex = CurrentPageIndex;
}
The name of the cache entry is unique to the control and request. It includes the URL of the page and the ID of the control. The data is bound to the cache for the specified number of seconds. To give items an expiration, you must use the Cache.Insert method. The following, and simpler, code will add the item to the cache but doesn't include any expiration policy.
Page.Cache[CacheKeyName] = data;
The PagedDataSource object gets the data to page through its DataSource property. It is worth noting that the DataSource property of the PagedDataSource class accepts only IEnumerable objects. The DataTable doesn't meet this requirement; that's why I resort to the DefaultView property.
The SelectCommand property determines the query run against the SQL Server database. This string is expected to be in the form SELECT-FROM-WHERE. No ORDER BY clause is supported and, if specified, is stripped off. This is just what the AdjustSelectCommand method does. Any sorting information can be specified using the SortField property. The AdjustSelectCommand method itself adds a proper ORDER BY clause based on the value of SortField. Is there a reason for this?
When the pager works in NonCached mode, the original query is modified to ensure that only the records for the current page are retrieved. The real query text that hits SQL Server takes the following form.
SELECT * FROM
  (SELECT TOP ItemsPerPage * FROM
    (SELECT TOP ItemsPerPage*(CurrentPageIndex+1) * FROM
      (SelectCommand) AS t0
     ORDER BY SortField ASC) AS t1
   ORDER BY SortField DESC) AS t2
ORDER BY SortField ASC
The query makes up for the lack of a ROWNUM clause in SQL Server 2000 and reorders records in such a way that only the n-th block of x items is returned, properly sorted. You specify the base query and the pager breaks it up into smaller pages. Only the records that fit into a page are returned. As you can see, the query above needs to handle the sort field separately from the query text. That's why I added SortField as a separate property. The only drawback of this code is that it defaults to ascending order. By making the ASC/DESC keywords parametric, you can make this code virtually perfect:
private void FetchPageData()
{
    // Need a validated page index to fetch data.
    // Also need the virtual page count to validate the page index
    AdjustSelectCommand(false);
    VirtualRecordCount countInfo = CalculateVirtualRecordCount();
    TotalPages = countInfo.PageCount;

    // Validate the page number (ensures CurrentPageIndex is valid or -1)
    ValidatePageIndex();
    if (CurrentPageIndex == -1)
        return;

    // Prepare and run the command
    SqlCommand cmd = PrepareCommand(countInfo);
    if (cmd == null)
        return;
    SqlDataAdapter adapter = new SqlDataAdapter(cmd);
    DataTable data = new DataTable();
    adapter.Fill(data);

    // Configure the paged data source component
    if (_dataSource == null)
        _dataSource = new PagedDataSource();
    _dataSource.AllowCustomPaging = true;
    _dataSource.AllowPaging = true;
    _dataSource.CurrentPageIndex = 0;
    _dataSource.PageSize = ItemsPerPage;
    _dataSource.VirtualCount = countInfo.RecordCount;
    _dataSource.DataSource = data.DefaultView;
}
In NonCached mode, the PagedDataSource object doesn't hold the whole data source and can't figure out the total number of pages to page through. For this reason, you have to flag the AllowCustomPaging property and provide a virtual count of the records in the data source. The virtual count is normally retrieved using a SELECT COUNT(*) query. This model is nearly identical to DataGrid custom paging. Finally, the current page index to select in the PagedDataSource object is always 0, because one page of records is actually stored.
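The AdjustSelectCommand helper is referenced but not listed in the article. The following is only a sketch of what it might do, based on the description above; the naive string search is an assumption and would misbehave if the text "ORDER BY" appeared inside a literal.

```csharp
// Hypothetical sketch: strip any ORDER BY clause from SelectCommand and,
// when requested, append one built from the SortField property instead.
private void AdjustSelectCommand(bool addCustomSortInfo)
{
    // Naively locate a trailing ORDER BY clause. The article requires a
    // plain SELECT-FROM-WHERE command, so this is the final clause if present.
    int pos = SelectCommand.ToUpper().LastIndexOf("ORDER BY");
    if (pos > -1)
        SelectCommand = SelectCommand.Substring(0, pos).TrimEnd();

    // Re-add sorting based on SortField, if any.
    if (addCustomSortInfo && SortField != "")
        SelectCommand += " ORDER BY " + SortField;
}
```

In Cached mode the sort is applied once, before the whole result set is fetched; in NonCached mode the clause is stripped so the triple-nested TOP query shown above can supply its own ordering.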
So much for the implementation of the SqlPager control; let's have a look at how you would use it.
Working with the SqlPager Control
Let's consider a sample page that contains a ListBox control. To use the pager, make sure the .aspx page properly registers the control's assembly.
<%@ Register TagPrefix="expo" Namespace="DevCenter" Assembly="SqlPager" %>
The control's markup depends on the properties actually set. The following markup is a reasonable example:
<asp:listbox id="ListBox1" runat="server" ... />
<expo:SqlPager id="SqlPager1" runat="server" controltopaginate="ListBox1" ... />
<asp:button id="LoadFirst1" runat="server" onclick="LoadFirst1_Click" ... />
Besides the pager, the page contains a listbox and a button. The listbox will show the contents of each page; the button simply serves to populate the listbox for the first time. The button has a click event handler defined as follows.
private void LoadFirst1_Click(object sender, EventArgs e) { SqlPager1.CurrentPageIndex = 0; SqlPager1.DataBind(); }
Figure 2 shows the page in action.
Figure 2. The SqlPager control works with a ListBox control.
An even more interesting example can be built using a DataList control. The idea is to use the pager to navigate through the personal record of each Northwind employee. The DataList looks like the following listing.
<asp:datalist ... >
  <ItemTemplate>
    <table bgcolor="#f0f0f0" style="font-family:verdana;font-size:8pt;">
      <tr><td valign="top">
        <b><%# DataBinder.Eval(Container.DataItem, "LastName") + ", " +
               DataBinder.Eval(Container.DataItem, "firstname") %></b>
      </td></tr>
      <tr><td>
        <span style="color:blue;"><i>
          <%# DataBinder.Eval(Container.DataItem, "Title") %></i></span>
        <p><img style="float:right;"
                src='image.aspx?id=<%# DataBinder.Eval(Container.DataItem, "employeeid") %>' />
        <%# DataBinder.Eval(Container.DataItem, "Notes") %>
      </td></tr>
    </table>
  </ItemTemplate>
</asp:datalist>
It displays name and title of the employee in the first row of the table and then the picture surrounded by the notes. The picture is retrieved using a special .aspx page that returns JPEG data fetched from the database.
The pager can be placed anywhere in the page. In this case let's place it just above the buddy DataList control.
Figure 3. The SqlPager pages a DataList control
Does it make sense using the SqlPager control with a DataGrid control? It depends. The DataGrid already comes with an embedded paging engine based on the same PagedDataSource object I used here. So as long as you need to page through a single set of records displayed in a tabular format, the SqlPager is unnecessary. However, in master/detail scenarios, using the two controls together is not a farfetched idea. For example, if you add to the previous screenshot a DataGrid to display the orders managed by the employee, you have two related paging engines in the same page—one that pages through employees and one to scroll the related orders.
Summary
No matter what type of application you are building—a Web application, a Microsoft® Windows® application, or a Web service—you can rarely afford downloading and caching the whole data source you are expected to display. Sometimes, test environments might lead you to believe such a solution works great and is preferable. But test environments can be misleading. The size of the data source does matter, and the more the application must be scaled, the more size matters.
In ASP.NET, only the DataGrid control has built-in paging capabilities. However, a paging engine is made of rather boilerplate code and, with a bit of work, can be generalized and adapted to work with several different controls. The SqlPager control presented in this article does just that. It takes care of downloading the data and cuts it into pages to display through a buddy control. The control can retrieve and cache the whole data set or just ask SQL Server for the few records to display in the selected page. I said SQL Server, and that's another important point. The SqlPager works only with SQL Server and can't be used to retrieve data using OLE DB or ODBC. Nor can you use it to access Oracle or DB2 archives.
To make a really generic SQL pager component, you should generalize the data access layer and build a sort of factory class that creates connections, commands, and adapters using the appropriate data provider. On the other hand, bear in mind that setting up a paging engine for various SQL sources is worse than your worst nightmare. The approach presented here works only for SQL Server 7.0 and newer; the TOP clause is the discriminating feature. Using server cursors and temporary tables, it can be adapted to a larger range of DBMS systems. But that would make your code significantly more complex.
NAME
Module::Metadata - Gather package and POD information from perl module files without executing unsafe code.
USAGE
Class methods
new_from_file($filename, collect_pod => 1)
Construct a Module::Metadata object given the path to a file. Takes an optional argument collect_pod, a boolean that determines whether POD data is collected and stored for reference. POD data is not collected by default. POD headings are always collected. Returns undef if the filename does not exist.

The filename argument is mandatory, or undef will be returned.
new_from_module($module, collect_pod => 1, inc => \@dirs)
Construct a Module::Metadata object given a module or package name. In addition to accepting the collect_pod argument as described above, this method accepts an inc argument which is a reference to an array of directories to search for the module. If none are given, the default is @INC. Returns undef if the module cannot be found.

find_module_by_name($module, \@dirs)

Returns the path to the file (in @INC by default) that contains the module $module. A list of directories can be passed in as an optional parameter, otherwise @INC is searched.

Can be called as either an object or a class method.
provides( %options )
This is a convenience wrapper around package_versions_from_directory to generate a CPAN META provides data structure. For example, given a dir of 'lib' and a prefix of 'lib', the return value is a hashref of the form:
{ 'Package::Name' => { version => '0.123', file => 'lib/Package/Name.pm' }, 'OtherPackage::Name' => ... }
package_versions_from_directory($dir, \@files?)
Scans $dir for .pm files (unless @files is given, in which case it looks for those files in $dir) and reads each file for packages and versions, returning a hashref of the form:
{ 'Package::Name' => { version => '0.123', file => 'Package/Name.pm' }, 'OtherPackage::Name' => ... }
The DB and main packages are always omitted, as are any "private" packages that have leading underscores in the namespace (e.g. Foo::_private).
Note that the file path is relative to $dir if that is specified. This must not be used directly for CPAN META provides. See the provides method instead.
log_info (internal)
Used internally to perform logging; imported from Log::Contextual if Log::Contextual has already been loaded, otherwise simply calls warn.
Object methods
name()

Returns the name of the package represented by this module.

packages_inside()

Returns a list of the packages found in the file. Note that this raw list does not exclude the DB, main, or private packages the way the provides method does.
pod_inside()
Returns a list of POD sections.
contains_pod()
Returns true if there is any POD in the file.
pod($section)
Returns the POD data in the given section.
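Tying the class and object methods together, a session might look like the sketch below; lib/Foo.pm is a placeholder path, and the output depends entirely on the file being inspected:

```perl
use strict;
use warnings;
use Module::Metadata;

# Parse the file without executing it, collecting POD along the way.
my $pm = Module::Metadata->new_from_file('lib/Foo.pm', collect_pod => 1)
    or die "lib/Foo.pm does not exist\n";

print 'Package: ', $pm->name, "\n";

if ($pm->contains_pod) {
    # POD headings are always collected; pod() has data because of collect_pod.
    my @sections = $pm->pod_inside;
    print "POD sections: @sections\n";
    print $pm->pod('NAME') if grep { $_ eq 'NAME' } @sections;
}
```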
.
These model parameters are the components of a vector, $\boldsymbol{w}$ and a constant, $b$, which relate a given input feature vector to the predicted logit or log-odds, $z$, associated with $\boldsymbol{x}$ belonging to the class $y=1$ through $$ z = \boldsymbol{w}^T\boldsymbol{x} + b. $$ In this formulation, $$ z = \ln \frac{\hat{y}}{1-\hat{y}} \quad \Rightarrow \hat{y} = \sigma(z) = \frac{1}{1+\mathrm{e}^{-z}}. $$ Note that the relation between $z$ and the components of the feature vector, $x_j$, is linear. In particular, for a two-dimensional problem, $$ z = w_1x_1 + w_2x_2 + b. $$ It is sometimes useful to be able to visualize the boundary line dividing the input space in which points are classified as belonging to the class of interest, $y=1$, from that space in which points do not. This could be achieved by calculating the prediction associated with $\hat{y}$ for a mesh of $(x_1, x_2)$ points and plotting a contour plot (see e.g. this scikit-learn example).
Alternatively, one can think of the decision boundary as the line $x_2 = mx_1 + c$, being defined by points for which $\hat{y}=0.5$ and hence $z=0$. For $x_1 = 0$ we have $x_2=c$ (the intercept) and
$$
0 = 0 + w_2x_2 + b \quad \Rightarrow c = -\frac{b}{w_2}.
$$
For the gradient, $m$, consider two distinct points on the decision boundary, $(x_1^a,x_2^a)$ and $(x_1^b,x_2^b)$, so that $m = (x_2^b-x_2^a)/(x_1^b-x_1^a)$. Along the boundary line,
$$
\begin{align*}
& 0 = w_1x_1^b + w_2x_2^b + b - (w_1x_1^a + w_2x_2^a + b)\\
\Rightarrow & -w_2(x_2^b - x_2^a) = w_1(x_1^b - x_1^a)\\
\Rightarrow & m = -\frac{w_1}{w_2}.
\end{align*}
$$
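As a quick numerical sanity check of this derivation, the logit $z$ should vanish everywhere along the line $x_2 = mx_1 + c$. The weights below are made-up values, not the ones fitted later in the post:

```python
import numpy as np

# Made-up model parameters (not the fitted values used later in the post).
w1, w2, b = 1.5, -2.0, 0.5

# Gradient and intercept of the decision boundary, as derived above.
m = -w1 / w2
c = -b / w2

# z = w1*x1 + w2*x2 + b should be (numerically) zero along the boundary.
x1 = np.linspace(-5.0, 5.0, 101)
x2 = m * x1 + c
z = w1 * x1 + w2 * x2 + b
print(np.allclose(z, 0.0))  # True
```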
To see this in action, consider the data in linpts.txt, which may be classified using scikit-learn's LogisticRegression classifier. The following script retrieves the decision boundary as derived above to generate the visualization.
import numpy as np
import matplotlib.pyplot as plt
import sklearn.linear_model

plt.rc('text', usetex=True)

pts = np.loadtxt('linpts.txt')
X = pts[:,:2]
Y = pts[:,2].astype('int')

# Fit the data to a logistic regression model.
clf = sklearn.linear_model.LogisticRegression()
clf.fit(X, Y)

# Retrieve the model parameters.
b = clf.intercept_[0]
w1, w2 = clf.coef_.T
# Calculate the intercept and gradient of the decision boundary.
c = -b/w2
m = -w1/w2

# Plot the data and the classification with the decision boundary.
xmin, xmax = -1, 2
ymin, ymax = -1, 2.5
xd = np.array([xmin, xmax])
yd = m*xd + c
plt.plot(xd, yd, 'k', lw=1, ls='--')
plt.fill_between(xd, yd, ymin, color='tab:blue', alpha=0.2)
plt.fill_between(xd, yd, ymax, color='tab:orange', alpha=0.2)

plt.scatter(*X[Y==0].T, s=8, alpha=0.5)
plt.scatter(*X[Y==1].T, s=8, alpha=0.5)
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.ylabel(r'$x_2$')
plt.xlabel(r'$x_1$')

plt.show()
C9 Lectures: Stephan T Lavavej - Advanced STL, 5 of n
- Posted: May 19, 2011 at 11:13 AM
- 54,525 Views
- 84

5th part of the n-part series: STL digs into the Boost Library. In his words, it's an open source, super quality, community-driven STL++. Stephan will walk you through a sample application from end to end, using Boost.
Any chance we could get the source code posted?
Very cool to see some Boost goodness in channel 9 ! Thanks for the vid. (impeccable timing with BoostCon too :)
But Stephan, you missed the perfect opportunity in your function read_file to sneak in one of the little underused gems of Boost: boost.iostreams.mapped_file.
I have found it to be the best way to deal with binary files, hands down. It's:
1) Extremely fast. Very often I have seen access speeds an order of magnitude faster than C++ iostream or C stdio functions (on Windows).
2) Super easy to use and concise. begin() or data() returns a char* to the first byte of the mapping, and then you can simply iterate through it and the system will take care of everything.
For example, read_file can be rewritten in two lines like this:
#include <vector>
#include <string>
#include <boost/iostreams/device/mapped_file.hpp>
std::vector<char> read_file(const std::string& name)
{
boost::iostreams::mapped_file_source file(name);
return std::vector<char>(file.begin(), file.end());
}
@Thomas Petit: That's one of the reasons people want boost::iostreams in the next C++. Memory-mapped files are part of any POSIX-compliant system, and they come with their own benefits and drawbacks for file I/O. They are recommended for big files, for huge chunks of data in and out, and for sequential access; used too frequently and randomly, they can cause excessive page faults. In Stephan's case, conventional file handling should suffice (unless he intentionally used mmap to show it off, like you suggest).
Joshua Boyce> Any chance we could get the source code posted?
Yep, sorry - I've been slammed with work and haven't gotten a chance to clean up my source code like I planned, or post a How To Build Boost article. Here's the "original recipe" version, as explored in the video:
Disclaimer: the SHA-256 code is very lightly tested. As I recall mentioning, I originally wrote this at home, where I have an SHA-256 implementation written from scratch and exhaustively tested. But it's large and bulky and I wanted to show off Boost.ScopeExit, so I ripped it out and replaced it with the <bcrypt.h> machinery in a matter of minutes. It seemed to work fine but I wouldn't be surprised if I had gotten something wrong in the conversion.
The Boost.Filesystem and Boost.Bimap machinery is essentially unchanged from what I had at home, and I feel much more confident that that works as expected.
Thomas Petit> But Stephan, you miss the perfect importunity in your function read_file to sneak one of the little underused gem of boost: boost.iostreams.mapped_file.
Cool, I wasn't familiar with that. (My knowledge of Boost is surprisingly patchy. There are the parts that I used in college, where I used a bunch of stuff but it's all really old now. Then there are the parts that have been incorporated into TR1/C++0x - I'm very familiar with that, and reasonably familiar with the deltas between Boost and the Standard, since I've spent years working on them. And then there are the parts that I've played with at home recently, like Boost.Bimap. In between, there are a bunch of libraries that I haven't used, and parts of libraries that I haven't explored.)
WOW~!!~ BOOST!!
Thanks Charles and Stephan!!
VERY awesome! Just started using a little boost (just shared_ptr and noncopyable headers), so this comes in handy. I wouldn't mind some more boost videos.
One minor complaint is the time constraint you seem to have. Does Charles decide that, or did you have to get back to work?
You mentioned that you use stdio.h (for performance, I guess). I've tried hard to stick with pure C++ (as part of learning it correctly), but a lot of people seem to recommend stdio (for performance?).
What would be cool is some OpenGL/DirectX use of STL, like a texture manager, mesh loader etc.
Some general questions:
How would you return the vertices vector to an OpenGL function? What I do is (don't have it here, but somehow) something like:
Probably wrong (and a bunch of things I don't like: const correctness not guaranteed as elements can be modified, returning references is bad, etc.), and should I return &vert3[0] (I guess it is the same thing?).
I don't want to copy the vector (obviously), especially as it gets called very often. I'm also not sure if my function is inline... would that disable RVO?
How about a vector<unsigned char> containing color information, where the format is BGR but I need RGB? Is there a better way than using a for loop that stores a temporary of B, assigns R to B, and then the temporary to R?
...and finally, a last question. Should simple function variables that are just assigned once (per function call) and not modified later on be defined const (e.g. const float someValue = x / y;)?
@Deraynger: Well, part of your questions are a better fit for the XNA team blog, or use the Ch9 forums. I've opened a thread about using SSE alignment with std::unique_ptr and std::array (the two play nice with _aligned_malloc); I could expand it with the stuff you want, like the loading of textures (pretty easy with WIC), or you can open a new thread and I'll be happy to share my experiments with you.
In the case of BGR and RGB it's the old question of endianness: you need to see whether OpenGL (or DirectX) supports both in hardware (check the extensions string). If there's no support, your only option is to swap the positions in host memory or in device memory (gpgpu).
@new2STL: Hey, yes, just thought STL or someone here probably has some tricks up their sleeves.
The BGR/RGB is for OpenGL ES. I would have to look again, but I don't think it supports BGR. I mainly wanted to know whether there's a better way to swap them all.
XNA is C# or .NET, isn't it?
I do have my own loaders/managers etc., was just thinking that I could probably see better code, and improve mine with STL goodness, like maybe making a template class for the different texture types, instead of deriving from an interface/abstract class or automatically deducing the correct loader to return from a factory class without having to explicitly define it (e.g. if (fileType == FileType::PNG) { return new PngTextureLoader(); } else if ... etc.)
@Deraynger: About XNA, it's mainly C# for cross-platform, but it has lots of native code too, due to XBOX/PC optimization; most of the helper library in the DirectX SDK falls into the xna namespace.
OpenGL ES is a subset of OpenGL 2.1; in 2.1 BGR textures became core, but for ES I think they are optional. Anyway, you can run a pixel shader to swap the channel positions; there are many ways to do it. But if the texture is small (let's say, a square tile of 64 or 128 pixels) it can be done in host memory with little impact on performance.
I have to say I'm watching all the awesome videos here on Going Deep for the same reason as you: to better learn the STL and use its power to write better code
@new2STL: Hey, ok thanks for the info on that. Never really did anything in DX.
Ok, regarding the BGR swapping or doing it with a shader: it isn't anything critical, I just wrote what came to mind and it could be improved.
What bothers me with the general OpenGL books/examples is that they're almost always written for C (even the new ones, for portability). So, definitely no STL in them, meaning there are also no good examples of how to create some ideal classes that everyone needs.
Yes, I guess these videos are quite popular (if the views on the top are somewhat accurate)
@Deraynger: I don't decide how much time STL gets for his lectures... He is given one hour of studio time, which means he can't do one hour of lecture time: set-up and all that. Then there is how much work STL has to do at his day and night job. It's amazing he has 30 minutes in his day to do these. Thank you, STL!!!!!
C
Urgh, the 35 min limit is too harsh. Otherwise great video!
I can agree with the boost fans here... great to see Boost videos!
I have a couple of questions.
1. Why no timestamp in the batch file? It seems insecure to trust old hashes and not to check the modification date of the files. Then again, maybe I misunderstood the program.
2. Why hash files that have a unique size? (BTW, that could have been a nice example of using the STL, although it wouldn't be elegant: I presume you could build a multimap<size, file> and, for every block of identically sized files, do potential elimination.) That would be nice to see because I don't know a nice way to do "for_each_multimap_block_that_has_the_same_key" elegantly.
Then again, maybe I misunderstood the program, again. :)
3. Isn't there some other "normal" sha256 lib that doesn't require all the cleanup? TBH, I don't know why it requires so much cleanup in the first place; maybe because the OS tries to prevent other processes from looking. :)
Boost is truly great, not just from a content standpoint, but in how fast they release new versions/updates. I'm using it primarily for TR1 support, but there's a lot of other excellent stuff I've used:
I'm off to try mapped files, something I can use that I wasn't aware of - thanks Thomas.
Why not just return the pointer?
Deraynger> I wouldn't mind some more boost videos.
I'll see what I can do - there are several more Boost libraries that I'm familiar with.
Deraynger> One minor complaint is the time constraint you seem to have. Does Charles decide that, or did you have to get back to work?
That's all my fault. I'm insanely busy with VC11, and I didn't have time to prepare an example in advance. So on the day of filming, I furiously hacked out the deduplicate.cpp that I presented, making me run several minutes late. By the time I got to the studio, we had to finish up fast.
Deraynger> You mentioned that you use stdio.h (performance I guess).
I specifically have a love-hate, mostly hate, relationship with iostreams. iostreams are type-safe and buffer-safe, while stdio is not. On the other hand, iostreams performance is typically miserable, and trying to do anything tricky with them will expose you to their Standard-mandated guts which are absolutely awful (the contrast in design between iostreams, full of virtual nonsense, and the STL proper, is remarkable).
iostreams are fine for simple text I/O when performance doesn't matter. For binary I/O, iostreams don't really buy anything, and I prefer to wrap stdio.h instead (it must be wrapped, because manipulating FILE * directly is a recipe for resource leaks - as I eternally have to explain).
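The wrapper STL alludes to can be very small; here's a minimal sketch of the idea (the class name and interface are my own invention, not the code from the video):

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>

// Minimal RAII wrapper around FILE *: fclose() runs automatically when
// the handle goes out of scope, even if an exception is thrown mid-read.
class file_handle {
public:
    file_handle(const std::string& name, const char* mode)
        : fp_(std::fopen(name.c_str(), mode)) {
        if (!fp_) {
            throw std::runtime_error("cannot open: " + name);
        }
    }

    ~file_handle() {
        std::fclose(fp_);
    }

    FILE* get() const { return fp_; }

private:
    file_handle(const file_handle&);            // non-copyable: copying
    file_handle& operator=(const file_handle&); // would double-fclose

    FILE* fp_;
};
```

With something like this, a read_file() can fread() into a vector<char> and simply return; every early exit cleans up the FILE * for free.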
Deraynger> What would be cool is some OpenGL/DirectX use of STL, like a texture manager, mesh loader etc.
I have code at home that wraps OpenGL with Boost and the STL, but lifting it out into something presentable would take forever.
Deraynger> should I return &vert3[0] (I guess it is the same thing?).
v.data() is the C++0x way to get a pointer to a vector's elements. Unlike &v[0], it is correct when v is empty (but in that case the pointer can't be dereferenced, obviously).
Deraynger> I don't want to copy the vector (obviously), especially as it gets called very often.
Remember move semantics.
Deraynger> I'm also not sure if my function is inline... would that disable RVO?
According to my knowledge, the compiler's decisions to actually-inline something, and to apply the RVO/NRVO, are independent.
Deraynger> Is there a better way, then using a for loop and store a temporary of B, assigning R to B, and the temporary to R?
Not manipulating the data at all with the CPU, and getting the GPU to do it in a shader, would be ideal. Shaders are crazy powerful. (Even better, if you control the source, just ensure that the data is in the right format to begin with. The cheapest code is the code you never execute.) Otherwise, you can loop through the vector and std::swap() the colors.
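If the CPU route is unavoidable, the std::swap() loop is only a few lines; this is just an illustrative sketch (the function name and the stride parameter are mine):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Swap the B and R channels of a packed BGR/BGRA pixel buffer in place.
// stride is the number of bytes per pixel: 3 for BGR, 4 for BGRA.
void bgr_to_rgb(std::vector<unsigned char>& pixels, std::size_t stride) {
    for (std::size_t i = 0; i + 2 < pixels.size(); i += stride) {
        std::swap(pixels[i], pixels[i + 2]); // B <-> R; G is untouched
    }
}
```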
Deraynger> Should simple function variables, that are just assigned once (per function call), and not modified later on, be defined const (e.g. const float someValue = x / y;)?
Yes. That clarifies intent and prevents mistakes.
Ivan> Seems unsecure to trust old hashes and not to check modification date of the files.
If a directory is deduplicated, and then files are modified (or deleted!), then data has been irrevocably lost. "Don't do that then."
Ivan> Why hash files that have unique size?
That would be a reasonable optimization - avoid hashing unique-sized files, unless and until duplicate-sized files appear (because the deduplicator can be run on a set of files, more files can be added, and then the deduplicator can be run again). Good idea, I hadn't thought of that.
Ivan> I don't know a nice way to do "for_each_multimap_block_that_has_the_same_key" elegantly.
This is actually super easy and elegant. Use multimap::equal_range().
Ivan> Isnt there some other "normal" sha256 lib that doesnt require all the cleanup...
SHA-256 can be written from scratch, and I've done that - but this is how the Windows API exposes it. The usual advice, which is good advice, is to never attempt to implement anything crypto-related by yourself. In my case, I learned C by implementing SHA-1, and I'm extremely comfortable (and careful!) with such code. Windows' implementation is probably faster than mine, though - I never resort to assembly.
I would have used Boost for this, but they don't yet have an implementation of SHA-256.
>> Windows' implementation is probably faster than mine, though - I never resort to assembly.
You should try some library like OpenSSL, which is optimized to use a dedicated co-processor (if you have one, as in VIA processors) or the special instruction sets included in the newest Intel processors.
> STL: I have code at home that wraps OpenGL with Boost and the STL, but lifting it out into something presentable would take forever.
O yeah! Your text rendering engine, pretty nice
Marek, exactly, just use the OpenSSL crypto library. It contains every hash algorithm anyone would ever want to use. They're all implemented using the same set of functions (init, update, finalize). You won't have to implement anything yourself to support buffered I/O. The documentation seems to be outdated so browse the source to see what's available (sha256 is in the sha dir).
Marek> You should try some library like OpenSSL which is optimized to use dedicated co-processor (if you have one, such in VIA processors) or special instruction set included in newest Intel processors.
I expect and believe that the Windows implementation uses special instructions when applicable and available. This is true for things like memcpy() in the CRT.
Deraynger> Do you know of a smart (STL?) way to return an instance from a factory class, without having to use some if/else?
If I had to do something like that, I'd construct a std::map<Key, Factory> and populate it accordingly. For example, the Key could be a std::string storing a filename extension, and the Factory could be a std::function<std::shared_ptr<ImageParser> ()>. Having written non-member functions std::shared_ptr<ImageParser> makePNGParser() and std::shared_ptr<ImageParser> makeJPEGParser(), I could say m["png"] = makePNGParser; m["jpeg"] = makeJPEGParser (or m.insert(std::make_pair("png", makePNGParser)) if I were so inclined). Later, given an extension, I could perform a lookup to determine whether the extension was supported, and if so, retrieve a factory function for a parser.
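The map-of-factories idea described above might be sketched like this; ImageParser and the two parsers are placeholder types I invented so the registry compiles, not anything from a real codebase:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Placeholder interface and parsers, just so the registry has something
// to construct.
struct ImageParser {
    virtual ~ImageParser() {}
    virtual std::string name() const = 0;
};

struct PngParser : ImageParser {
    std::string name() const { return "png"; }
};

struct JpegParser : ImageParser {
    std::string name() const { return "jpeg"; }
};

std::shared_ptr<ImageParser> makePNGParser() { return std::make_shared<PngParser>(); }
std::shared_ptr<ImageParser> makeJPEGParser() { return std::make_shared<JpegParser>(); }

typedef std::function<std::shared_ptr<ImageParser>()> Factory;

// Look up a factory by extension; an empty shared_ptr means
// "unsupported extension". No if/else chain required.
std::shared_ptr<ImageParser> parserFor(const std::string& ext) {
    static std::map<std::string, Factory> factories;
    if (factories.empty()) {
        factories["png"] = makePNGParser;
        factories["jpeg"] = makeJPEGParser;
    }
    const std::map<std::string, Factory>::const_iterator it = factories.find(ext);
    return it == factories.end() ? std::shared_ptr<ImageParser>() : it->second();
}
```

Adding support for a new format is then one insertion into the map, rather than another branch in every lookup site.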
However, when faced with this problem in the past, I've simply performed a sequence of regex_match()es, and called the proper loading function, without any factory functions being involved. I do use inheritance, in a highly structured form - always with non-virtual interfaces, always with shared_ptr<Base> - but only when the situation actually demands it - that is, when I need different types for something, but later I want to forget the specific types and treat them uniformly at runtime. (If I want to treat different types uniformly at compiletime, that calls for templates.)
This Boost.Filesystem library seems cool (especially the iterator tool), thanks for sharing.
BTW: Does it work properly with Unicode characters? I see your code uses std::string, it would be good if we could safely use it with Unicode filenames. Or does Boost.Filesystem use UTF-8 as Unicode encoding instead of UTF-16, so std::string is OK in this case?
Thanks for clarifying this.
Something I just remembered - if you want a quick and easy way to start using boost, this site provides an installer for the headers and pre-built binaries: (VS2003-VS2010, 32-bit only)
The path class supports multiple character types. So you can replace these:
with this:
One function instead of one for each string type you need to support.
Stephan, again, thanks for the answers.
IMHO, a good topic for the next video could be <regex>. Why? Because I have some CS background and I was surprised that VS had regex bugs: I always thought that regex machinery is some DFA/NFA generator that is hard to build, but that once you build it, it is hard to miss bugs.
Also, if you are still making videos on the day VS 2012 goes public beta, you could make a video detailing some cool new feature that was under NDA just one day earlier. :)
Also more boost would be great.
Oops, the post above was me, sorry for the possible confusion.
@STL: Ok, will definitely look into it. I still don't fully get the benefit of the NVI pattern. Gotta look it up again in the Effective/More Effective C++ books.
Edit: Looked it up; if I have a one-step method (no setup etc. needed), then a public virtual function should be fine without NVI.
Edit 2: After looking at the Template Method pattern, I think I understand what you mean. If I'm not mistaken, the Base class (using NVI, possibly having an abstract member function) could act in place of a separate factory class having to return an instance. The Base class would then know how to load the item itself (possibly with an overridden loader method in a derived class). I'll have to try it and let you know if I don't get any further
I see the benefit of the map: better code reuse, without having to modify several code sections. Runtime lookup satisfies my needs completely.
What I also don't understand is the shared_ptr<Base>: is that aggregation, and not inheritance?
Looking at this video when you're so stressed is painful !
I get stressed just by watching you
I agree with other viewers, STL needs more studio time.
Good C++ videos are rare on C9, so you should give them more time. Much, much more time.
I took a look at scope_exit.hpp: oh, the horror of macros, simply unreadable. This can be done so much better with C++0x.
scope_exit.hpp is a perfect example of why I don't use Boost. That macro fetish should not be allowed in Boost at all.
@STL: That trick in 'hexify': is it only good for when you want to write something out fast (quick and dirty)?
An excellent lecture.
Your code calls the (useful) initial_path() function... which is, annoyingly, deprecated.
More Boost in future lectures would be most welcome: there are so many good things in there, but it is hard to get to know about them.
I hate to say this, but I didn't particularly enjoy this video at all. When it's titled "Advanced STL", then I expect to see, well, things relating to the STL. If you want to teach me about hashing and about Boost, then title the video appropriately.
Int64> The path class supports multiple character types.
Yes - and Filesystem V3 (now the default) does this better than V2 did.
Ivan> I always thought that regex machinery is some DFA/NFA generator that it is hard to do, but when you do it then it is hard to miss bugs.
Actually, regex's implementation is exceedingly tricky, and bugs in it usually aren't obvious precisely because it's building and running a complicated NFA.
Deraynger> What I also don't understand is the shared_ptr<Base>, is that an aggregation, and not an inheritance?
shared_ptr<Derived> is convertible to shared_ptr<Base>. This allows shared_ptr to be used with inheritance hierarchies.
Mr Crash> I took a look at scope_exit.hpp, oh the horror of macro's, simply unreadable.
Are you aware that our STL implementation contains unreadable, horrifying macros? And yet, it gets the job done.
Mr Crash> That trick in 'hexify'. Is it only good for when you want to write something out fast (quick and dirty) ?
There's nothing wrong with it, it's just unusual - many people haven't seen indexing into a string literal like that before.
Will Watts> Your code calls the (useful) initial_path() function... which is, annoyingly, deprecated
Ah, I didn't know that. Calling current_path() at the beginning of main() and saving it somewhere is a reasonable thing to do.
Mark> When it's titled "Advanced STL", then I expect to see, well, things relating to the STL.
Noted. A few points (in order of decreasing hilarity):
1. I prefer the following explanation: that the "STL" in the series title refers to myself. :-> (This was Isaac Asimov's reply to people criticizing the title of The Intelligent Man's Guide To Science.)
2. Boost.Filesystem has been proposed for TR2 (in fact, it was the only proposal before TR2 was put into cryostasis due to C++0x running late), and I consider it highly likely that it'll make its way into the Standard beyond C++0x. So I claim, with some justification, that this really is about the STL - just the STL of the future.
3. Boost is very much related to the STL, both in interface and in authorship. Even if, say, boost::bimap never makes it into the Standard, its similarity to std::map is undeniable, and it definitely solves problems that the STL alone cannot solve nearly as easily. As I've been saying since Intro Part 1, the STL is a library, not a framework, and it's supposed to be used alongside other libraries. More than anything else, Boost plays nice with the STL by design.
Before VC9 SP1 with TR1, Boost would have had a much larger starring role in my videos - I covered shared_ptr way back in Intro Part 3.
Oh, makes sense, was thinking too far ahead of myself
STL>shared_ptr<Derived> is convertible to shared_ptr<Base>. This allows shared_ptr to be used with inheritance hierarchies.
Surprised no one has mentioned the rather elitist use of the forbidden goto command in that CNG code. I took a sharp intake of breath on seeing that.
I agree with the posts above about the time constraint sucking somewhat. Having STL blast through some convoluted programming at the speed of a syntax parser is hard to follow.
@STL
"Actually, regex's implementation is exceedingly tricky, and bugs in it usually aren't obvious precisely because it's building and running a complicated NFA."
If you wrote any part of it, a video about it would be interesting for CompSci nerds like me.
BTW, do you share code between the C# regex and the C++ regex? It would seem like a reasonable thing to do, unless they have different expressive power.
"This is actually super easy and elegant. Use multimap::equal_range()."
I meant something like a for_each_equal_range. I presume it should be something like (while range.first != range.second), but I hoped that there is some quick way of doing it.
I know you use macros as a workaround for N arguments.
I haven't seen anything like what I saw in scope_exit.hpp though. I have only seen small macros for debugging.
Can you give an example?
What I didn't like in scope_exit.hpp is that the starting/entry code (BOOST_SCOPE_EXIT, BOOST_SCOPE_EXIT_END) was macrofied. Having macros inside functions doesn't bother me, because those are contained/simple with no macro-on-macro action.
// Now also imagine these macros are not defined one after another and instead are being defined all over the file.
// Oh the happy code jumping, prepare for confusion, yay!
// Now your job is to unwrap 'macro', and understand how it works and what it does. Good luck!
Ivan> If you wrote any part of it it would be interesting for CompSci nerds like me it would be an interesting video.
Like the rest of the STL, I didn't write <regex>. The two major exceptions are make_shared<T>() in VC10 and uniform_int_distribution in VC11.
Ivan> BTW do you share the code between C# regex and c++ regex?
Nope.
Ivan> I meant like in for_each_equal_range
Oh, there's no dedicated function for that, but it's easy enough to write:
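(STL's original snippet didn't survive extraction here; the following is my own sketch of what such a helper might look like, built on multimap::equal_range() as described above:)

```cpp
#include <map>
#include <utility>

// Visit a multimap one key-block at a time: f is called once per
// distinct key with the [first, last) range of equal elements.
template <typename K, typename V, typename F>
void for_each_equal_range(const std::multimap<K, V>& m, F f) {
    typedef typename std::multimap<K, V>::const_iterator It;
    It i = m.begin();
    while (i != m.end()) {
        const std::pair<It, It> r = m.equal_range(i->first);
        f(r.first, r.second); // one block of identical keys
        i = r.second;         // jump straight past the block
    }
}
```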
Wow, very nice of you, Stephan, to take the time to write the example. Thanks. Too bad this isn't SO, so only a few people will see it.
BTW, I have a small suggestion. I presume you are under constant deadlines, so for the next videos, if it is easier, you should focus on your STL stuff instead of the non-STL stuff: you can give a lecture on the STL in the middle of the night, while writing Boost examples takes time. Again, I like Boost, but I would hate it if you abandoned the lectures because you don't have enough time to do them.
>Ivan: Again I like boost, but I would hate it if you would abandon the lectures because you dont have enough time to do them.
I second that!
@KerrekSB: For a talk about std::thread, Stephan would have to change the compiler to gcc 4.6, or use the paid implementation from just::thread, or boost::thread (which is close enough to the std one).
Now I can't remember whether it is Sutter or Meyers who will do a talk about that topic at the next CPP&B meeting. Keep an eye on their blogs for video and slides.
Plus Sutter is doing a 2nd round interview: "The followup interview on Channel 9 has been scheduled, and will be shot on Thursday, June 2."
Re topic suggestion, here's something that's definitely STL and probably qualifies as "advanced". For quite some time I've been looking for a completely generic way to print out STL containers, in a way that fits into the usual stream operation idiom and handles nested containers. After bringing up the problem on StackOverflow and then on Channel 9 here, we finally seem to have pieced together a complete solution.
The original question is at StackOverflow, where Marcelo Cantos provided the first step. Later, Sven Groot improved on that in the Channel 9 Forum. Finally, I added a SFINAE-based type trait class to detect containers and remove the need to mention any explicit container classes.
I posted the final header and a working example on StackOverflow. I think this might be of general interest, as it allows you to output any sort of container type you may have defined without repeatedly writing either your own operator, or a for loop, or even a foreach construct with a lambda over and over again.
Possible ideas for extending the construction: 1) implement output for std::tuple in the same way as is done for std::pair, 2) add a C-array wrapper class to make C arrays amenable to printing through the same mechanism.
@KerrekSB
"Alternative topic suggestion: Allocators, and whether it makes sense to write those yourself."
+1. I'm kind of unhappy that there isn't a .reserve for map, set, and the "multi" versions of those containers... and I've always wondered whether there is a way to allocate contiguous memory for a map. I know that map isn't random access, but the idea is data locality. Imagine having the first 3-4 levels of the RB tree loaded into cache when you load the root node. :)
Nice video.
I was sad to miss you at BoostCon this year. Hope you'll find time to come next year.
Sebastian
@STL: Very nice videos!
Very interesting subjects, extremely well explained, and, for some reason, your accent is perfectly understandable to my (non-English) ears :)
BTW, I'm also interested in more Boost videos, but maybe more focused on what's happening under the hood, not things we can read in the documentation.
C++0x is fine too.
@STL: Are you aware of Step Into Specific? It's a context menu option that allows you to avoid repeatedly stepping into and out of functions until you hit the right one.
hexify's parameter type is const vector<unsigned char>&. A boost::iterator_range over the bytes might've been better. The STL lacks a ptr-pair wrapper (for contiguous memory).
> hash_file.right.find(f) != hash_file.right.end()
Can't you use hash_file.right.count(f) here?
> const bimap<string, string>::left_const_iterator j = hash_file.left.find(h);
Can't you just insert and check the return value to avoid a second lookup?
Before you said vector<unsigned char> should be used for buffers. However, it initializes the buffer to 0, which is unnecessary overhead. Isn't there a better solution?
Greetings,
Olaf
(Never mind, see STL's answer.)
Ivan> you can give lecture on the STL in the middle of the night, while for writing boost examples it takes time.
Thanks - but the real answer is that everything takes time.
KerrekSB> Here's an idea for the next episode: multithreading with <thread>, synchronised data structures best practices etc. Optional extra: What's the use for <atomic>?
I can't show you VC11 yet, sorry.
(I've already resolved bugs on Connect saying that we've implemented <thread>/etc. in VC11, so I can reveal that much.)
KerrekSB> Alternative topic suggestion: Allocators, and whether it makes sense two write those yourself.
C++0x/VC11 has significantly reworked allocators, so I'd prefer to wait for that.
KerrekSB> For quite some time I've been looking for a completely generic way to print out STL containers
That's a great idea - I think I'll do Part 6 about this. I took a different approach, and my machinery is capable of printing out stuff like this:
[[4]<"khan"'"of"'"the"'"wrath">, [3]<"home"'"the"'"voyage">, [3]<"country"'"the"'"undiscovered">]
Ivan> I'm kind of unhappy that there isnt a .reserve for map set and "multi" versions of those containers...
They're node-based. reserve() doesn't make sense for them.
Ivan> and always wondered is there a way to do something that will allocate continuous memory space for map.
You can write a pool-based allocator. Note that Windows' Low Fragmentation Heap, which is used by default (I'm glossing over subtleties here), is node-friendly.
Well, we need to allocate our internal helper objects, and you REALLY want us to do that with your (rebound) allocator, instead of std::allocator. (This is actually how I broke the compiler immediately after joining VC.) This inherently requires the whole rebinding scheme.
CornedBee> I was sad to miss you at BoostCon this year. Hope you'll find time to come next year.
Yeah, I was insanely busy this year. I do hope to attend next year.
Sebastian> for some reason, your accent is perfectly understandable to my (non-english) ears
I grew up in Colorado, so I have the "standard Midwestern" accent. Very rarely, I'll trip over an R if I'm not paying attention (I think I got that from my father).
XTF> Are you aware of Step Into Specific?
No - I put all my skill points into C++, not IDEs. I'll have to look into that.
XTF> hexify's parameter type is const vector<unsigned char>&. boost::iterator_range<unsigned char> might've been better.
That would be an improvement (I haven't used iterator_range yet).
XTF> Can't you use hash_file.right.count(f) here?
count() is less efficient for multi-containers. I always use find().
XTF> Can't you just insert and check the return value to avoid a second lookup?
I believe so, yes. I would always do that for map, but this was my first time using bimap, and I forgot to consider that.
XTF> Before you said vector<unsigned char> should be used for buffers. However, it initializes the buffer to 0, which is unnecessary overhead. Isn't there a better solution?
You could write a simple class if performance really, really mattered.
Stephan, very nice to hear your answers; I was worried that this was the end of the series. :) Also, about
"Ivan> I'm kind of unhappy that there isnt a .reserve for map set and "multi" versions of those containers...
They're node-based. reserve() doesn't make sense for them."
Can you explain why? The same problem that you have with vector you can have with them: 10k insertions into a multimap will call memory allocation 10k times, right? 10k insertions into an (empty) vector will call it fewer than 30 times. I understand the difference (you never have to reallocate the existing part of the red-black tree), but I still presume that people would be happy to avoid a bunch of calls to malloc, and to gain data locality. And by that I mean it in a good way: one level-n node and all its children in one block, not a couple of level-n nodes in one memory block, as in figure 6 of the article (please don't ignore the picture because of the article's smart-a** tone :) ).
So when VC11 pops out, let's hope that you will do an allocator video. :)
True, just wanted to point out that using const vector<unsigned char>& isn't very generic. And that I miss a ptr pair wrapper in the STL.
I'm not familiar with bimap, but I assume it's not a multi-container.
I like the "you don't pay for what you don't use" principle in C++, but using vectors for buffers violates it. I'd also prefer not to write my own class for such basic functionality.
The allocator could allocate a pool and serve requests from that to avoid calling malloc.
Ivan> Stephen very nice to hear you ans, I was worried that this is the end of the series.
I was on vacation the week of Memorial Day.
Ivan> 10k insertions in multimap will call mem allocation 10k times, right?
Yes, and that's fundamental. The node-based containers (list/forward_list, map/set, and their multi/unordered variants) allow any node to be erased at any time. Therefore, they need a separate allocation for each node. In contrast, if you erase a vector's element, it'll move any remaining elements down to fill the hole, and keep the extra capacity in reserve.
XTF> I'm not familiar with bimap, but I assume it's not a multi-container.
No, but following the convention means that I don't have to be extra careful when using multi-containers.
KerrekSB>> For quite some time I've been looking for a completely generic way to print out STL containers
STL> That's a great idea - I think I'll do Part 6 about this. I took a different approach, and my machinery is capable of printing out stuff like this:
[[4]<"khan"'"of"'"the"'"wrath">, [3]<"home"'"the"'"voyage">, [3]<"country"'"the"'"undiscovered">]
That's great, I look forward to the episode! In the meantime, I've put our pretty-printer on GitHub; here's the project website. There's a type-erasing helper class for convenient one-off delimiter overriding, too.
STL> Well, we need to allocate our internal helper objects, and you REALLY want us to do that with your (rebound) allocator, instead of std::allocator. (This is actually how I broke the compiler immediately after joining VC.) This inherently requires the whole rebinding scheme.
Are you saying that it just can't be made to work or doesn't make sense to permit two separate allocators, one for objects and one for bookkeeping, like:
Or would it suffice if I overloaded T::operator new() so that vector<T> would use std::alloc for bookkeeping but my own new for T?
I am wondering how I can force VS2010 and Xcode to make me include the headers I actually use, even when another included header already contains them.
STL also prefers to include everything explicitly, and I myself don't always know whether something I'm using comes from the header I included (e.g. A) or from a header (B) that header (A) itself includes.
KerrekSB> There's a type-erasing helper class for convenient one-off delimiter overriding, too.
I consider type erasure to be unnecessary here, so it'll be interesting to compare our solutions. :->
I took your is_container that looks for const_iterator - I tried to write my own that detected begin(), but it snapped VC's mind like a twig.
KerrekSB> Are you saying that it just can't be made to work or doesn't make sense to permit two separate allocators, one for objects and one for bookkeeping
It doesn't make sense. Why would you want to treat them differently?
The node-based containers don't even allocate individual elements. They allocate nodes that contain individual elements.
Deraynger> I am wondering, how I can force VS2010 and XCode to make me include the headers, although another included header already contains it.
I'm not aware of any automated way to verify this. Having one would be nice.
Part 6 filming is scheduled for Friday, July 1.
@STL: Cool, thanks for the info!
@STL: Any chance you can share some info on how to do fuzzy grouping?
This example data:
asd111bla
asd22bla
asd322bla
other stuff_1
other 2 stuff
other stuff_4
Will be grouped into this:
Group 1:
asd111bla
asd22bla
asd322bla
Group 2:
other stuff_1
other 2 stuff
other stuff_4
I've been researching how to do this but I'm unsure how to achieve it.
Comparing the similarity of just two strings is relatively simple, but the above is a problem.
You can do it using the same technique as comparing two strings, but that would be very slow.
So can you show how to achieve this using the STL?
I filmed Part 6 today!
Here's my Pretty Printer code:
I won't answer questions about it until Part 6 is uploaded, though. There's no reason for me to explain stuff twice. :->
Jonas: The "hard parts", which are independent of the STL, are determining the similarity between strings (e.g. with an edit distance algorithm), and determining when strings are dissimilar enough to begin a new group. The STL makes everything else easy.
Here's an example where I'm judging similarity by common prefix length, and I consider strings with similarity < 2 to belong to different groups. You could imagine a more clever algorithm that, instead of using a fixed constant, adjusted it up or down to produce more or fewer groups, depending on the number of strings in the input.
I've made fuzzy_grouping() copy the strings because I'm lazy - you could easily have it return vector<size_t> with group lengths or something else.
@STL:
That works surprisingly well when the data is sorted, which makes it depend on the sorting algorithm for good groupings. That's sort of iffy.
'mismatch' isn't even a longest common subsequence (LCS) or pattern matching algorithm of any kind, as far as I know.
The 'mismatch' algorithm is a nice find, though; I didn't know of it.
I don't quite understand it yet, but I did just learn of it ;)
I apologize for being unclear in my first message; in my defense, I was very tired. I was up half the night coding on another thing.
When I was talking about comparing strings for similarities, I was talking about stuff like LCS () and pattern matching.
What do you think of this; am I over-thinking it?
Can the STL help with this, as in, does an algorithm already exist that does this for me? I can't find any, though.
@KerrekSB patience, dude, the infamous WMV encoder isn't known for its speed (or efficiency, for that matter).
Also, C++ videos aren't known to be prioritized on C9. They have disgustingly obvious favoritism issues here (C#, VB, WPF, Silverlight, etc.: languages and products owned by Microsoft take priority / are more important).
Btw, has anyone else noticed that all these Microsoft-owned languages are much slower than the standard non-Microsoft-owned languages?
Just compare VS2010 and VS2008: WPF did some nasty things to Visual Studio, a horrible mistake that should be reversed if you ask me, but someone's ego seems to be in the way of that happening. Sigh, typical. No, it's not a money thing; moving to WPF was a big risk, and risks cost much money when they fail, period!
Anyway...
That's why there are so few C++ videos on C9, although we were promised many C++ videos half a year ago.
C9 has become an ad site for Microsoft products. Ew!
Where did the "for the developers" go?
Did popularity get to their heads?
Oh, please do prove me wrong, but a simple search query doesn't lie.
Ah, oh my, who pushed the rant button?
Mr. Stephan T. Lavavej, keep up the good work; without you there would be no reason to visit C9 anymore.
Hah, for sure, thanks a lot for these videos - great work, STL!
Whatever, dude.... -> Look here.
C
@Charles, lol, you just got self-owned by providing the C++ tag search link. "A simple search query doesn't lie."
Did you not read the post correctly?
Do you dismiss and ignore the post because you are biased, or just in a bad mood?
That post is feedback too. In a cranky form, but it is still feedback.
I'm concerned that you do not listen to your users.
Sago sounds cranky, but he does make good points. There are not many C++ videos made compared to other languages. There are C++ chitchat videos, but they do not provide any useful information.
We need many more STL-like videos that provide useful information.
The Visual Studio 2010 performance issues are also sadly true. I can mention it has memory issues too, due to C# garbage collection.
@Josh K: Self-owned? OK... I'm not in a bad mood. Here:
If you look at the net new content over the past 6 months, I'd say that native content (C++) is increasing relative to the non-native stuff, but still not as frequent - I wasn't really arguing that. STL has a full time job as a C++ Library Developer - in fact, STL is the only STL-focused developer at Microsoft (meaning, he is the sole owner of the STL that ships with VC++...).
Sorry we don't post one C++ video per day. I truly wish we could.
C
KerrekSB> Are you familiar with EASTL () in any way?
I've looked at , and almost all of its concerns have either been fixed in C++0x or don't apply to VC ("The Microsoft C++ compiler does a rather good job of inlining").
Jonas> That works surprisingly quite well when the data is sorted which makes it depend on the sorting algo for good groupings.
It's looking at prefixes, so std::sort() would work just fine.
Jonas> 'mismatch' algorithm is a nice find though, didn't know of it. Do not quite understand it yet but i did just learn of it
It just reports the first place where two sequences differ.
Jonas> Can the stl help with this, as in an algorithm already exists that does this for me ?
The STL does a lot of stuff, but it doesn't have every data structure and it especially doesn't have every algorithm that's ever been invented. It serves as an extremely useful foundation for implementing more complicated data structures and algorithms, though.
I took the time to implement a more complicated fuzzy_grouping(). This one relies on levenshtein_distance(), which I've written (with Wikipedia's help) to accept arbitrary and possibly different forward iterators (this algorithm needs forward iterators, and can't accept input iterators). I've tested it with forward_list<char> and list<char>, for example.
What it does is create a multiset of strings, ordered by decreasing size (so, largest first). It grabs the largest string from the multiset, and looks for "similar" strings in the rest of the set, where "similar" is defined as having a Levenshtein distance <= N/2, where N is the size of this "primary" string. The primary and all similar strings (possibly 0) are emitted as a group. I calculate the distance from the primary, to avoid the problem of grouping "cold cord card ward warm" together despite the first and last words having no letters in common.
Again, I make no claims that this is an especially intelligent algorithm - it's just an example of what you can write with the STL.
A few newbie questions (while waiting for part 6) regarding namespaces etc.:
1. In the .cpp file, what do you consider best (or what's correct)?
2. If I have a Class in src/Utils/MyClass.h, and it relies on src/Utils/Detail/OtherClass.h, should I
3. Do you add the whole namespace when calling something from another structure?
4. Should I include from the base path (like boost does it)?
5. What's if it's lower, i.e. from Details back to some other class, e.g. Helpers, what do you recommend?
I know about using a shorter name for long namespaces (not sure of the syntax): using Det = Utils::Details; or something like that, but I am not such a fan of it.
If I could decide, I'd choose this: deeper namespaces wouldn't need to be used with the whole namespace, just the deeper part, e.g. Detail::OtherClass when calling from Utils into Utils/Detail. Higher ones would need the whole one (but that makes it less readable, IMO).
In boost, the only thing I found in a source file was a huge list of using Full::Namespace::SomeClass;
@Charles: I know STL is very busy. I said "STL _like_ videos". You've got to have more people than just STL who code in C++ and have good information and experience that we can learn from, right?
"native content (C++) is increasing relative to the non-native stuff"
Most if not all of them are chitchat videos. If you're lucky, only 5 minutes of a 60-minute chitchat video has good information.
@Josh K: You're asking for more tutorial-lecture-demonstration type native content. Got it....
C
PS: Yes, there are obviously many C++ devs capable of doing this... That alone doesn't equate to the real potential for more STL-like content. (not everybody wants to lecture, or be on video, or do screencasts or even blog). I will try and get more folks to do something on C9 in a tutorial style for native developers.
If you have more people like that who would bring that clarity and insight to the screen, then by all means try and record them!
I'm sure that substantial content on any sort of programming (but native especially) would always be appreciated!
How about a series on parallelism, for example?
Deraynger:
#1: I've traditionally defined member functions at global scope with fully qualified names (MyNamespace::MyClass::MyMember). However, this is moderately verbose, especially if return types need to be qualified. (To the right of the parenthesis in ReturnType MyNamespace::MyClass::MyMember(STUFF, you can use names from MyNamespace without qualifying them as such, because the compiler has already seen you mention MyNamespace. This is not true for the ReturnType.)
I might switch to defining member functions in a reopened namespace MyNamespace { }.
A using-directive (that's what "using namespace MyNamespace;" is called) would feel weird for the purposes of defining member functions.
#2: Only Detail::OtherClass is necessary. Within Utils::MyClass::DoSth(), unqualified name lookup (to find "Detail") searches "inside out", and will search Utils (which has the Detail you want) before reaching into the global namespace (which is where you don't want to go).
You can use a using-directive (using namespace Detail;) or using-declaration (using Detail::OtherClass;) if you like.
However, remember that using-directives and using-declarations should NEVER appear at global scope in headers. That is enormously polluting and defeats the whole point of namespaces.
#3: I don't understand this question. As a general rule, I avoid writing unnecessary or redundant code (with very specific exceptions), and that includes unnecessary qualification.
#4: It depends - something like Boost is a very special and complex case.
If I were maintaining something in Boost, I'd include "my own stuff" with "relative paths", so boost/kittens/cat.hpp would include boost/kittens/detail/feedme.hpp with #include "detail/feedme.hpp". cat.hpp's own directory is first in the search order, so this should preserve everything if the whole tree is moved to boost/cute_fluffy_kittens/... I'd include "other stuff" with "base paths", like #include "boost/tuple/tuple.hpp", again to make myself immune to my own directory being moved.
#5: I would avoid ".." if possible.
I may have gotten something wrong with this include path stuff - it's been a long time since I've worked with deeply nested directories. The STL has an extremely flat directory structure, and my projects at home are the same way (and smaller).
> I know about using a shorter name for long namespaces (not sure of the syntax):
> using Det = Utils::Details; or something like that, but am not such a fan of it.
They're called namespace aliases: "namespace MSKittens = Microsoft::Cute::Fluffy::Kittens;"
I have used them for extremely verbose nested namespaces. My idea of a long namespace name is greater than 5 characters.
STL:
Thanks a lot for taking your time to answer my questions.
Regarding #1, so it's fine then that I do namespace kittens { <functions> } in the source file?
Regarding #2 (not declaring using directives/declarations in headers): I know that from your first STL series.
Regarding #3: you basically answered it with Boost. What I meant was: I have boost/kittens/detail/feedme.hpp and need to reference something in boost/kittens/cat.h, e.g. as a friend. In feedme.hpp, would you write friend class boost::kittens::cat;? The second part was: if it refers to something in boost/helpers/stl_utils.hpp, would you use the fully qualified name (which you would, based on your answer to #4)?
New questions (while still waiting :) ). This always bothers me, since I don't know these licensing details (maybe you might).
Are we allowed to use your code legally for ourselves, by just downloading it, and should or may we add the following 2 lines?
Can we also just add our copyright (although I would probably add both, so I know where I got it from)?
I got some files from someone online; he said I could use them however I want, commercially/modified etc. Does that mean I can also change the copyright notice (I would still keep his name in the file where it still somehow represents the same class/file)?
What happens if I retype code from someone, by looking at it and modifying it? Do I still need to include his copyright notice? What if the code is (maybe partially) on a blog? Can I then just use it without his copyright notice?
@ Charles & STL
I agree with Josh K. Although STL is pretty rare (a person who likes his job so much that he is willing to spend time explaining his work in a lecture), I'm sure that you could find some other people like (or similar to) STL. For me, I would like (of course the desires of one person are irrelevant, but this is an example) a lecture on something that Herb mentioned: C++ Next or C++ Prim, a language isomorphic to C++ but much easier to parse (= better tools, better compile speed, maybe even better runtime speed if it made it easier for the compiler to prove some things so that it can perform optimizations). I know that this isn't in development (at least not publicly), but IMHO people who understand it, like Herb or the C++ compiler people, could do a lecture on it with some basic examples with 2-3 hours of preparation. There is a bunch of stuff that you could ask them.
@ STL
Speaking of the compiler people :) :
1. Why don't you bother them to make you (and all other library writers, of course) a _Ugly-to-normal translator (you use a normal naming convention to write the libraries, and when you want to ship the product you run makemeugly.exe on the source code)?
2. About namespaces, headers, and polluting: why don't you ask (nicely :) ) the compiler people to do something similar to what I proposed in 1?
#local_using - that applies to the current file only.
That means that when they compile the code to the "real" file, they would remove the #local_using namespace std; line and replace all vector with std::vector, swap with std::swap, and so on...
P.S. I know that I might sound ungrateful because all this is free and the STL lectures are awesome, and I apologize for that, but it is like Stephan wrote on the whiteboard: there are no books. So I'm now hooked on the STL lectures and I would like more of the same drug.
Deraynger> Regarding #1, so it's fine then that I do namespace kittens { <functions> } in the source file?
Yes, that's fine.
Deraynger> In feedme.hpp would you do: friend class boost::kittens::cat;?
If the class granting friendship is boost::kittens::detail::can_opener, I'd say friend class cat; because unqualified name lookup is inside-out.
Deraynger> Are we allowed to use your code legally for ourselves, by just downloading it, and should or may we add the following 2 lines?
I cannot answer legal questions.
Ivan> 1. Why dont you bother them to make you(and all other lib writers ofc) a _Ugly to normal compiler(you use normal naming convention to write libraries but when you want to ship the product you call makemeugly.exe on source code).
Internal names need to be _Ugly, but external names need to be pretty. It's difficult to decide programmatically which is which, especially given our convention of having external algorithms like count_if() call internal helpers like _Count_if().
Ivan> 2. about namespaces and headers and polluting - why don't you ask (nicely :) ) the compiler people to do the similar thing I proposed in 1. #local_using - that applies to the current file only.
As a general rule, I am opposed to non-Standard extensions.
@KerrekSB: Did you watch the C++ AMP presentations? I will be doing some more interviews with the PPL people and related tooling. Native parallelism is indeed an interesting topic. I've been wanting to visit the C++ compiler people, but alas they are rather swamped building the next version of the compilers. I would like to spend some time with the VC IDE people, but alas they are swamped building the next version (and like everything else around here -> we can't talk about it until it's ready to be talked about). My hope is that by BUILD, we'll have a lot to share for native devs. Let's see....
C
@STL
one quick one regarding the Standard: why doesn't the Standard define that the compiler/linker should be able to figure out that it has already included the same file? Include guards smell like '80s spirit :)
@Charles
I know that every comment of mine adds something else to my wishlist, but I noticed that it looks like Herb has a secret crush on Clojure. Could you bug him to make a short video explaining why? Of course, I might be totally wrong about Herb's feelings about Clojure.
Ivan> why doesnt the standard define that compiler/linker should be able to figure that it included the same file already?
Sometimes you do want to be able to include a subheader repeatedly (like VC10's <xxshared>, already deleted in VC11). It would be trivial to Standardize #pragma once with a different name like #once. At the cost of breaking backwards compatibility, idempotency could be made the default, with repeated inclusion enabled by #many or whatever. (Migration could be achieved with compiler/preprocessor switches.)
I suspect that nobody on the Committee considered it an urgent enough problem to fix - expert attention is a finite resource.
@Ivan: I know nothing of Herb's feelings with respect to Clojure. He may like it (many people who like Lisps do), but I don't know what makes you think he has a crush on the language. Next time I speak with him, I'll ask.
C
@STL
Good to know variadic templates have been implemented
@ Charles :
I was overstating it a bit, but that is the feeling I got from one interview... I remember Herb mentioning that Clojure does some things in an interesting way, or something like that. As far as I know, Herb's and STL's time is scarce and precious, so I would be happy with ANY deep content from Herb (for example, I didn't really like the AMP presentation because it was a very high-level overview; that is important, but I prefer hard-core deep stuff like the C++0x design decisions and the Advanced STL series).
@STL, this might be a stupid question, but why would you want to "include a subheader repeatedly"?
If this is a stupid question, skip it. :) It's just that I never heard of something like that. I mean, I know that you can include a header 100x (that is why include guards exist), but I got a feeling that you are talking about something else.
Here is Part 6, driven by Part 5 and Niners! Thank you, STL. You are a scholar and a gentleman.
C
Couldn't you use std::shared_ptr with a custom deleter instead of boost::scoped_exit?
Hi Stephan,
A potential bug: your code seems to assume that SHA-256 never has collisions. While collisions are rare, they do happen. If you have two different files that generate the same hash, you will lose one of them.
Yesterday we released a new version of Advanced Scala that changes the library used for examples from Scalaz to Cats. This post explains that change.
It’s Not About the Code
When we started writing Advanced Scala we thought it was a book about how to use Scalaz. This made it natural to put Scalaz in the title. Our thinking has evolved a lot since then. Advanced Scala has become a book about how to structure thinking about code using core abstractions like applicatives and monads. It’s about architecture and design, which is implemented using a specific library, but learning the library is not the goal of the book.
By analogy, consider painting. Painting is not about placing paint onto a canvas using a brush. It's about using colour and form to convey the artist's intent. The artist must be skillful in their craft to accurately achieve their desired effect, but the craft is only a means to an end. Similarly, programming is not about import statements, though it is necessary to know how to use them to be an effective Scala programmer.
It is fair to say our thinking has evolved more quickly than our writing has, but over time Advanced Scala will focus more on "thinking in types" than on code-level details.
Why Cats?
So, if Advanced Scala is not about a particular library, why change from Scalaz to Cats? For a variety of reasons, we prefer Cats. We like that it has a focus on approachability. We like that it is putting effort into building a community, via Typelevel. We think Scala needs this and we want to support it. Thus we're doing our small bit to help by targeting Cats in Advanced Scala.
But I Use Scalaz!
If you’re using Scalaz you will still find Advanced Scala useful. Cats and Scalaz are very similar and many concepts translate directly from one library to another. Instead of importing, say,
cats.Monad you import
scalaz.Monad, for example. The only important differences I have encountered are:
- Cats has a different structure to its applicative implementation; and
- A syntax import in Cats only imports syntax for the specific named typeclass, not for typeclasses the named typeclass extends. Concretely, import scalaz.syntax.monoid._ will import syntax for Semigroup as well (|+|), while in Cats you must use import cats.syntax.semigroup._ to have the same effect. This prevents collisions between imports that both import the same syntax, as can happen with, say, import scalaz.syntax.traverse._ and import scalaz.syntax.applicative._, which both define |@|.
Opened 11 years ago
Closed 6 years ago
Last modified 4 years ago
#361 closed defect (wontfix)
Some Basic Math Filters
Description
This may be the wrong place to submit this...
I wanted some basic math filters (e.g. add, multiply, divide, subtract). There is already add, so I created the others. These are no doubt probably the easiest filters I could create, but someone else may find them useful.
in django/core/defaultfilters.py
def mult(value, arg):
    "Multiplies the arg and the value"
    return int(value) * int(arg)

def sub(value, arg):
    "Subtracts the arg from the value"
    return int(value) - int(arg)

def div(value, arg):
    "Divides the value by the arg"
    return int(value) / int(arg)

template.register_filter('mult', mult, True)
template.register_filter('sub', sub, True)
template.register_filter('div', div, True)
Attachments (1)
Change History (24)
comment:1 Changed 11 years ago by
comment:2 Changed 9 years ago by
this is a very simple enhancement that would nicely blend into the set of django-supplied template filters.
(remember there is an add filter already)
maybe someone could reconsider inclusion of this trivial patch.
today it would look like this:
in django/template/defaultfilters.py
def mult(value, arg):
    "Multiplies the arg and the value"
    return int(value) * int(arg)
mult.is_safe = False

def sub(value, arg):
    "Subtracts the arg from the value"
    return int(value) - int(arg)
sub.is_safe = False

def div(value, arg):
    "Divides the value by the arg"
    return int(value) / int(arg)
div.is_safe = False

template.register_filter('mult', mult, True)
template.register_filter('sub', sub, True)
template.register_filter('div', div, True)
Changed 9 years ago by
comment:3 Changed 9 years ago by
Please don't reopen tickets marked closed by a committer. Take this to djangosnippets - it doesn't need to be part of Django.
comment:4 Changed 6 years ago by
"add" is a filter, yet none of the other basic math operations are. This seems a little inconsistent to me.
it doesn't need to be part of Django.
And yet something like "ipsum lorem dolor sit amet" does? I think this deserves a 2nd look.
comment:5 Changed 6 years ago by
As a newcomer to Django, I find it really weird that there are no math operations or an elseif operator in the template system. They are so common in others... Is anybody willing to explain why there is such a limitation?
comment:6 follow-up: 7 Changed 6 years ago by
Django runs it's shop like the Third Reich... ADD MATH FILTERS
comment:7 Changed 6 years ago by
Django runs it's shop like the Third Reich... ADD MATH FILTERS
Your behavior doesn't really deserve a comment, especially since you were "brave" enough not to sign it with your name. Normally people discuss this kind of stuff on the developers list, but I'm not sure I want to have a discussion with this kind of offensive comment.
comment:8 Changed 6 years ago by
comment:9 Changed 5 years ago by
it doesn't need to be part of Django.
May I ask why, in your opinion, it doesn't need to be part of Django? Every time I find myself writing these filters yet again, I get the opposite impression. There are situations where not having the ability to do basic math in templates results in code that is more complicated (and thus error-prone) and longer to write than would otherwise be necessary. I find this unwillingness to include something as basic and harmless as these filters rather absurd.
comment:10 follow-up: 11 Changed 5 years ago by
comment:11 Changed 5 years ago by
Django templates is the most inconsistent and horrible templating language available. Can you not just switch to Jinja2 and be done with it?
I want to add two datetimes together in my template. Nice, now I have to do it in several places in my code. Not good.
comment:12 Changed 5 years ago by
Unfortunately, your criticism lacks details that would allow us to understand your problem and actually improve the template language.
I think template tags are appropriate for the use case you're describing.
comment:13 Changed 5 years ago by
Here is sample code where simple math in the template engine may make sense.
It is totally presentation related.
This post should not be taken as criticism, but more as developers trying to improve the framework.
We have a list of items that needs to be displayed in a "3*n" table.
There are 2 issues:
1) Primary issue: forloop.counter starts at 1, which, depending on how you look at it, is wrong. To remain consistent with Python and most other languages, it should start at 0.
2) Due to the lack of simple math functions (a requirement arising from the forloop.counter value), I will have to try to implement the code listed below.
{% comment %}
Counter is used to perform the "row grouping":
when mod 3 == 0, a new row is started;
when counter > 0, the row is ended.
{% endcomment %}
{% for item in items %}
    {% if counter|divisibleby:3 or forloop.counter == 1 %}
        {% if forloop.counter > 1 %}
            </div>
        {% endif %}
        <div class="creator_gallery_row">
    {% endif %}
    {% include "terra_creator/gallery_item.html" %}
{% endfor %}
Just a note for reference: divisibleby "DOES need to be part of Django", but multiplication doesn't?
OK, during writing this I found that
forloop.counter|add:"-1"|divisibleby:3
will solve my problem
I still think that you should listen to your users. It is up to them to decide whether they want to "bastardise" their application by putting application logic into the view, not up to the framework.
comment:14 Changed 5 years ago by
Adam, you should try forloop.counter0 instead of forloop.counter|add:"-1"|divisibleby:3.
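A common view-side alternative to this kind of counter arithmetic is to chunk the list into rows in Python before rendering, then iterate with two nested {% for %} loops in the template. A minimal sketch (the function name is illustrative, not part of Django):

```python
def chunked(items, size=3):
    """Split items into rows of `size` so a template can simply do:
    {% for row in rows %} ... {% for item in row %} ... {% endfor %} {% endfor %}
    """
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunked([1, 2, 3, 4, 5], 3))  # -> [[1, 2, 3], [4, 5]]
```

This keeps the grouping logic out of the template entirely, at the cost of shaping the data in the view.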
comment:15 Changed 5 years ago by
Thanks for the reply.
comment:16 Changed 5 years ago by
I think this is a common misconception. Business logic doesn't belong in the template, but at the same time, template logic doesn't belong in the code. So not having arithmetic operations in the template does exactly the opposite: it encourages users to put template logic inside their code, which is as bad as placing business logic inside templates. Reminds me of Occam's razor :)
comment:17 Changed 5 years ago by
I know this is not the place, but I have to say it. It would be nice to be able to do things like:
<div style="padding-top: {{ 200 - thumb.height }}">
    <img src="{{ thumb }}">
</div>
comment:18 Changed 5 years ago by
Today I found the need for having math filters in the template, specifically for visualization (changing 0.01PPM into 1p) which is not business logic but indeed template logic.
Just as I was about to add a comment saying "omfg why doesn't Django support this", I realized why the core devs are rejecting this. The simple reason is that it would get abused like crazy, and it would be a slippery slope to failure. Sure, it would be useful to sane developers in specific use cases, but it would be abused by the majority.
Although Adam makes a good point with this comment:
It's up to them to decide if they want to "bastardise" their application by putting application logic into the view, not up to the framework.
dloewenherz also makes another good point:
"add" is a filter, yet none of the other basic math operations are. This seems a little inconsistent to me.
Could a core developer perhaps explain the reasoning behind having add but none of the others? I had a look around the archives but couldn't find any justification. It seems to me that we should either have all of them or none at all.
It seems this topic hits a nerve with a lot of people, so if a core dev is able to give some answers on the above, we will finally be able to put this issue to rest with a concrete justification.
comment:19 Changed 5 years ago by
As a side note, if math filters were ever accepted into core, it would be much better if they were implemented not as template filters but as native expressions with nesting support:
For example:
{{ (someval * 10) + 5 }}
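For what it's worth, Jinja2 (mentioned in an earlier comment) already evaluates exactly this kind of nested expression natively; a quick sketch, assuming Jinja2 is installed:

```python
from jinja2 import Template

# Jinja2 templates evaluate arithmetic expressions directly,
# which is what this comment asks of Django's template language.
tmpl = Template("{{ (someval * 10) + 5 }}")
print(tmpl.render(someval=4))  # -> 45
```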
comment:20 follow-up: 21 Changed 4 years ago by
I would like to have a "subtract" filter for this use case:
I want to show only the first 10 items in a list, then have a final line saying how many other items there were.
E.g.
"And 3 other items..."
This is purely a display (template) logic, so it should be supported by the template language (same as 'add').
{% for l in list_items %}
    {% if forloop.counter < 10 %}
        <li>{{ l }}</li>
    {% else %}
        {% ifequal forloop.counter 10 %}
            <li>And {{ list_items|length|subtract:10 }} other objects...</li>
        {% endifequal %}
    {% endif %}
{% endfor %}
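The same split can also be computed in the view before rendering; a sketch of that alternative (names are illustrative):

```python
def preview(items, limit=10):
    """Return the first `limit` items plus a count of the rest,
    mirroring the 'And N other items...' line in the template above."""
    return items[:limit], max(len(items) - limit, 0)

head, rest = preview(list(range(13)))
print(len(head), rest)  # -> 10 3
```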
comment:21 Changed 4 years ago by
I would like to have a "subtract" filter for this use case:
If you feel you have a display related argument here - you are better off opening a new ticket than posting a comment on a very old wontfix ticket.
If you choose to do so, please explicitly reference this ticket, and explain why you think it is a different circumstance.
comment:22 Changed 4 years ago by
As a workaround, I created a separate Django app containing "sub", "mul", "div" and "abs" filters:
will@..., you could also add -10 instead of subtracting 10 in your case:
list_items|length|add:-10
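For reference, the functions behind such a filter app are tiny. A plain-Python sketch (module name and layout are hypothetical; in a real Django app each function would live in a templatetags/ module and be decorated with @register.filter on a django.template.Library() instance):

```python
# mathfilters.py -- plain-Python equivalents of the "sub", "mul",
# "div" and "abs" filters described above. In a Django app these
# would be registered with @register.filter so templates can write
# {{ x|sub:3 }}, {{ x|mul:50 }} and so on.

def sub(value, arg):
    return value - arg

def mul(value, arg):
    return value * arg

def div(value, arg):
    return value / arg

def absolute(value):
    # Named 'absolute' here only to avoid shadowing Python's built-in abs().
    return abs(value)

print(mul(3, 50))  # -> 150
```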
comment:23 Changed 4 years ago by
I also have a case where I feel that the logic should rest in the template, not in the code.
I have posts that have variable parent-child levels and I need to add padding to them according to their level. Should I really do "padding = level * 50" in my view code?
The ticket system isn't the right place for this; feel free to create a wiki page for it for now. In the future we're going to have some sort of contributed-apps repository for stuff like this.
Flask File Uploading
- File uploading is the process of transmitting binary or text files to the server. The uploaded file is saved to a temporary directory on the server for a short time before being moved to the desired location.
Syntax
name = request.files['file'].filename
Flask File Config
app.config['UPLOAD_FOLDER']

- It is used to specify the folder where uploaded files are saved.

app.config['MAX_CONTENT_LENGTH']

- It is used to specify the maximum size, in bytes, of the file to be uploaded.
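A sketch of how these settings are typically used together (the folder path and extension whitelist are illustrative assumptions, not part of the original tutorial; in Flask itself the size-limit key is MAX_CONTENT_LENGTH):

```python
import os

# Values a Flask app would normally place in app.config.
UPLOAD_FOLDER = os.path.join(os.getcwd(), "uploads")  # example location
MAX_CONTENT_LENGTH = 16 * 1024 * 1024  # reject request bodies over 16 MB

ALLOWED_EXTENSIONS = {"png", "jpg", "jpeg", "gif", "txt"}

def allowed_file(filename):
    """Accept only filenames whose extension is in the whitelist."""
    return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS

print(allowed_file("logo.jpg"))  # -> True
```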
The example below uploads a file from the local file system to the server.
Sample code
- In this code, we provide a file selector (file_upload_form.html) where the user can select a file from the file system and submit it to the server.
- On the server side, the file is fetched via the request.files['file'] object and saved to a location on the server.
upload.py
from flask import *

app = Flask(__name__)

@app.route('/')
def upload():
    return render_template("file_upload_form.html")

@app.route('/success', methods=['POST'])
def success():
    if request.method == 'POST':
        f = request.files['file']
        f.save(f.filename)
        return render_template("success.html", name=f.filename)

if __name__ == '__main__':
    app.run(debug=True)
file_upload_form.html
<html>
    <head>
        <title>upload</title>
    </head>
    <body>
        <form action="/success" method="post" enctype="multipart/form-data">
            <input type="file" name="file" />
            <input type="submit" value="Upload">
        </form>
    </body>
</html>
success.html
<html>
    <head>
        <title>success</title>
    </head>
    <body>
        <p>File uploaded successfully</p>
        <p>File Name: {{name}}</p>
    </body>
</html>
Output
[Screenshot: the file upload form]
- The user has chosen a file named logo.jpg. It will be uploaded to the server.
[Screenshot: the chosen file being uploaded to the server]
- The snapshot below is generated for the URL localhost:5000/success after the file is uploaded successfully.
[Screenshot: the success page]
- The uploaded file is saved in the directory where upload.py is located, as shown in the image below.
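One caveat worth flagging: f.save(f.filename) trusts the client-supplied filename, which may contain path components. Real Flask apps normally run the name through werkzeug.utils.secure_filename first; below is a rough stdlib-only stand-in (the helper name is hypothetical):

```python
import os
import re

def safe_name(filename):
    """Strip directory components and unusual characters from an
    uploaded filename before saving it (a rough stand-in for
    werkzeug.utils.secure_filename)."""
    name = os.path.basename(filename.replace("\\", "/"))
    return re.sub(r"[^A-Za-z0-9._-]", "_", name)

print(safe_name("../../etc/passwd"))  # -> passwd
print(safe_name("my logo.jpg"))       # -> my_logo.jpg
```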
[Screenshot: the directory containing upload.py and the uploaded file]
Hey guys, I've been trying to figure out how to read in a list of integers from a text file using Visual C++ Express 2010, and I'm completely stumped. This is the kind of thing I'm trying now:
Code:
#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    int x;
    ifstream myfile;
    myfile.open("text.txt");
    if (!myfile)
    {
        cout << "unable to load file.\n";
    }
    while (myfile >> x)
    {
        cout << x << endl;
    }
    myfile.close();
    system("pause");
    return 0;
}

When I run this, all I get is "press any key to continue...".
So the file opens correctly, but I can't do anything with the data.
text.txt contains integers on separate lines. Like this:
12
32
14
What am I doing wrong? | http://cboard.cprogramming.com/cplusplus-programming/134424-reading-list-integers.html | CC-MAIN-2014-35 | refinedweb | 138 | 82.75 |