Earth Engine provides access to client-side user interface (UI) widgets through the ui package. Use the ui package to construct graphical interfaces for your Earth Engine scripts. These interfaces can include simple input widgets like buttons and checkboxes, more complex widgets like charts and maps, panels to control the layout of the UI, and event handlers for interactions between UI widgets. Explore the full functionality of the ui API in the Docs tab on the left side of the Code Editor. The following example uses the ui package to illustrate basic functions for making a widget, defining behavior for when the user clicks the widget, and displaying the widget.

Hello, world!

This example represents a simple UI of a button displayed in the console. Clicking the button results in 'Hello, world!' getting printed to the console:

// Make a button widget.
var button = ui.Button('Click me!');

// Set a callback function to run when the
// button is clicked.
button.onClick(function() {
  print('Hello, world!');
});

// Display the button in the console.
print(button);

Observe that first, the button is created with a single argument: its label. Next, the button's onClick() function is called. The argument to onClick() is another function that will be run whenever the button is clicked. This mechanism, in which a function (a "callback" function) is called when an event happens, is known as an "event handler" and is used widely in the UI library. In this example, when the button is clicked, the function prints 'Hello, world!' to the console.

Mutability

Note that unlike objects in the ee.* namespace, objects within the ui.* namespace are mutable, so you don't need to reassign the object to a variable every time you call an instance function on the object. Simply calling the function will mutate (change) the widget. Appending the following code to the previous example registers another callback for the button's click event:

// Set another callback function on the button.
button.onClick(function() {
  print('Oh, yeah!');
});

Copy this code to the end of the previous example and click Run. Now when you click the button, both messages are printed to the console.

Use the UI pages to learn more about building UIs for your Earth Engine scripts. The Widgets page provides a visual tour and describes basic functionality of the widgets in the ui package. The Panels and Layouts page describes top-level containers and layouts you can use to organize and arrange widgets. The Events page has details on configuring the behavior and interaction of widgets in your UI.
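The ui package itself only runs inside the Code Editor, but the callback-registration pattern it uses can be illustrated in plain JavaScript. Everything below (the Widget name and its methods) is invented for illustration and is not part of the Earth Engine API:

```javascript
// Minimal sketch of the callback-registration ("event handler") pattern.
// "Widget" is a hypothetical stand-in, not an Earth Engine class.
function Widget(label) {
  this.label = label;
  this.callbacks = [];
}

// Register a callback. Like ui widgets, the object is mutated in place,
// so no reassignment to a variable is needed.
Widget.prototype.onClick = function (callback) {
  this.callbacks.push(callback);
};

// Simulate a click by invoking every registered callback in order.
Widget.prototype.click = function () {
  this.callbacks.forEach(function (cb) { cb(); });
};

var button = new Widget('Click me!');
var messages = [];
button.onClick(function () { messages.push('Hello, world!'); });
button.onClick(function () { messages.push('Oh, yeah!'); });
button.click();
console.log(messages.join('\n'));
```

Just as in the Code Editor example, registering a second callback does not replace the first: both run, in registration order, on every click.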
https://developers.google.com/earth-engine/ui?hl=fi
adam17
Member · 699 posts · Community Reputation: 227 (Neutral) · Rank: Advanced Member

Digipen: The best college for programming?
adam17 replied to RadioactiveMicrobe's topic in Games Career Development

Just to give the quick TLDR response to this, I would highly recommend looking into universities with high game development rankings. Check their media school ranking if you want to do game design, or check their computer science school ranking if you want to do software development. I did a lot of research on schools and how they rank for software development and game programming. My search showed that USC has the best program. I am currently attending school at USC and I have to say it is very rigorous, but it will expose you to many different aspects of software development. They teach concepts on very detailed levels, which will be extremely beneficial to your future in or out of the game industry. Take a look around and see what is out there. Personally I would recommend staying away from tech schools and looking at universities instead. That's just my opinion though.

OpenGL 3D Perspective Projection (w)
adam17 posted a topic in Graphics and GPU Programming

I'm working on an assignment for my graphics class and I've encountered a problem. Basically I have a set of coordinates in world space and I have to transform them into screen space and display them. I have to manually build all of the matrices and do the transformations myself (no OpenGL or D3D). I have the following 3 matrices:
[list]
[*]Xsp (projection to screen transform)
[*]Xpi (image to projection transform)
[*]Xiw (world to image transform)
[/list]
I'm rendering to a 256x256 frame buffer, so my Xsp matrix looks like this:
[code]
128    0  0  128
  0 -128  0  128
  0    0  1    0
  0    0  0    1
[/code]
My Xpi matrix has m[3][2] set to tan(fov / 2) where my fov = 35.
(In the matrix, .32 was just rounded for easier viewing.)
[code]
1 0   0  0
0 1   0  0
0 0   1  0
0 0 .32  0
[/code]
Lastly, my Xiw matrix is just the inverse of my camera transform. The camera is at (0, 0, -15), looking at (0, 0, 0), with an up-vector of (0, 1, 0).
[code]
1 0 0  0
0 1 0  0
0 0 1 15
0 0 0  1
[/code]
Using just these three matrices I can multiply them against a coordinate and get it transformed into screen space just fine. The catch is when I want them to be perspective correct. I've tried figuring out what 'w' is, but I cannot find a solid definition of how to calculate it. Some places say that the final Z value is used to divide X and Y by, but it doesn't look right. Other places say the W coordinate that is calculated by multiplying the matrix with the coordinate (assuming 1.0 for the 4th coordinate) is the W to divide by. I can't seem to get either method to work. My question is: what is W and how is it calculated? Thank you so much in advance! -Adam

super mario derivatives
adam17 replied to adam17's topic in Game Design and Theory

WOW giana sisters is a blatant ripoff! im betting there was a lot of money lost over that. as far as finding games identical to mario, that wasnt my goal. i was trying to figure out what makes mario sell millions of copies even though the genre is very old. it seems that mario is the only franchise that knows how to make a good 2d platformer. donkey kong for the SNES was amazing. it was a lot of fun.

as for it being the mario franchise alone? that im not sure about. if that was the case, younger kids, say around 10 years old, would not be interested in mario. they would play it for a bit and leave it unfinished.

is it the story? it couldnt be. rescuing the princess is the main story. nothing more than that. the story in a porno is more engrossing lol.

is it the mechanics? mario is about timing with a touch of exploring. you need to time the jumps and runs perfectly to avoid enemies and/or get coins.
this lends the urge to become perfect and gain higher scores and faster times.

is it the graphics? maybe. during the NES years of mario, no other game (for a while) had smooth side scrolling graphics. most games loaded screen by screen. every console after that, mario was outshined graphically by tons of other games. the graphic style within the past few years may have some selling appeal. every mario game is cute and cuddly. take the turtles in new SMB. they dance with the music. fortunately it adds to the gameplay. donkey kong for the SNES (namely the second and third games) falls into the cute and cuddly category too. they sold tons, and they had nothing to do with mario. japan has a knack for cute and cuddly. look at pokemon and hello kitty. look at packaging for products. everything is super happy and cheerful.

i dont know. there is an essence that mario contains that draws players to it year after year, and i cannot figure it out.

super mario derivatives
adam17 posted a topic in Game Design and Theory

I was playing New Super Mario Bros Wii over the past few days and started wondering if there are other games out there that are similar to Super Mario. I started brainstorming and was only able to come up with Donkey Kong for the SNES. There were some other titles, but I discarded them because they did not involve the same game mechanics. Take for example Contra. It is a 2D platformer, but I discarded it because it involves shooting. Are there any other games out there that share similarities to Super Mario? If not, why are there none? FPSs are a dime a dozen and there are almost no differences between them.

Good way for texture animations
adam17 replied to B_old's topic in Graphics and GPU Programming

Personally, what I have done is build a small app that will take a folder of textures and generate a square sprite sheet. The app also generates an XML file that lists every frame's position in a Rectangle format (x, y, w, h).
The animation class then just passes the uv coordinates (rectangles) to the renderer along with the sprite sheet to do rendering. I built it that way so if an artist wants to specify their own coordinates, or use a pre-existing non-square sprite sheet, they could.

Quote: Original post by chlerub
You pass "CommonBase" string instead of the actual type you want to instantiate. Pass the type you found (t), not a string.

WOOHOO!!! thanks for the help! that got it working! thanks -Adam

i appreciate the help you have provided me so far. im still having a bit of a problem though. here are my base classes, and the derived classes:

public class CommonBase
{
    public virtual void DoStuff()
    {
        Console.WriteLine("CommonBase - DoStuff()");
    }
}

public class BaseClass : CommonBase
{
    public override void DoStuff()
    {
        Console.WriteLine("BaseClass - DoStuff()");
    }
}

public class Class2 : CommonBase
{
    public override void DoStuff()
    {
        Console.WriteLine("Class2");
    }
}

public class Class1 : BaseClass
{
    public override void DoStuff()
    {
        Console.WriteLine("Class1");
    }
}

here is my experiment code that loads the dlls:

string currentDirectory = Directory.GetCurrentDirectory();
string[] libraries = Directory.GetFiles(currentDirectory, "*.dll");
CommonBase baseClass;

Console.WriteLine("CurrentDirectory: " + currentDirectory);
foreach (string s in libraries)
{
    Console.WriteLine(s);
    Assembly a = Assembly.LoadFile(s);
    Type[] types = a.GetTypes();
    foreach (Type t in types)
    {
        if (t.BaseType == typeof(CommonBase))
        {
            Console.WriteLine(t.FullName + " is a type of CommonBase");
            baseClass = (CommonBase)a.CreateInstance("CommonBase");
            baseClass.DoStuff(); // crashes here because baseClass is null
        }
        Console.WriteLine();
    }
}

am i doing something wrong to cause the crashing? thanks -Adam

C# plugin system
adam17 posted a topic in General and Gameplay Programming

im trying to build a simple plugin system, but im having some issues. i have an abstract class that all plugin classes need to inherit from.
once a plugin is built it will be put into a directory for the program to pull from. i can search the directory from the program to find the dlls and i can load their assembly. the problem that i am having is that i cannot figure out how to create an instance of that plugin class. its very easy when you know the name of the class, but how do you do it when you dont know the name of the class? thanks -adam

XNA: Initialize and LoadContent not called
adam17 replied to adam17's topic in Graphics and GPU Programming

in all of my methods, i have the call to the base method at the very end, ie base.Initialize() shows up at the very end of my overridden Initialize method. i looked through my code a little and found something that might be a culprit but im not sure. here is a simple example:

class GameState : DrawableGameComponent { ... }
class Menu : DrawableGameComponent { ... }

class TestState : GameState
{
    public Menu menu; // could this be a part of the problem?
}

to help elaborate (if necessary), GameState is used in a GameStateManager which inherits from DrawableGameComponent. the GameStateManager is set up in Game.cs in the Initialize method, before base.Initialize() is called.

XNA: Initialize and LoadContent not called
adam17 posted a topic in Graphics and GPU Programming

I am running into a weird problem. I have a class that inherits from DrawableGameComponent, and I have overridden the Initialize, LoadContent, UnloadContent, Update and Draw methods. Update and Draw are called as normal by XNA (i dont know who calls the methods exactly, but i know im not calling them explicitly). the weird part is Initialize and LoadContent are never called. What could be causing this? Thanks -Adam

Educational shooter idea
adam17 replied to jackolantern1's topic in Game Design and Theory

i had a game similar to that in the third grade called math blaster. i remember it actually being fun. im pretty sure your idea will work pretty well.
I would do some play testing with some kids to see if the game will be too difficult or not. personally i wouldnt do anything more difficult than multiplying 2 numbers together. when you add more than that, it becomes really difficult. difficult enough that a lot of adults wouldnt be able to keep up. adding several numbers is a lot easier though.

as for setting up numbers to multiply together, i would suggest having the equation show up on the bullet itself. that way the player only has to concentrate on one spot rather than remembering which ship it came from and then looking back at their own ship. during that time the player can type in their answer.

Quote: Original post by lightbringer
You could also expand the concept from addition all the way to differential calculus and beyond, or to fields beyond math.

i REALLY like the idea of calculus. omg that would have helped me out soooo much a few semesters ago! you could teach simple concepts in learning chain rule, product rule, quotient rule, integrals, etc. i havent taken differential cal so i dont know much about it :D

- AWESOME! thanks for all of the help! I was able to fix my problem thanks to you.

- Thanks for the quick reply! what do you mean by smart pointers? im a little hesitant about using pointers because right now im using c++, but later this will be ported to C#.

C++ virtual function problem
adam17 posted a topic in General and Gameplay Programming

im having a weird issue with a virtual function.
here is my code:

class Base
{
public:
    int num;
    virtual void func() { cout << "Base func()\n"; }
};

class Child1 : public Base
{
public:
    int myNum;
    void func() { cout << "Child1 " << myNum << endl; }
};

class Child2 : public Base
{
public:
    int myNum;
    void func() { cout << "Child2 " << myNum << endl; }
};

void main()
{
    vector<Base> obj;
    Base b;
    Child1 c1;
    Child2 c2;

    b.num = 0;
    c1.num = 1;
    c1.myNum = 2;
    c2.num = 3;
    c2.myNum = 4;

    obj.push_back(b);
    obj.push_back(c1);
    obj.push_back(c2);

    for(size_t i=0; i < obj.size(); i++)
    {
        cout << obj[i].num << endl;
        obj[i].func();
    }
}

when i compile and run it, i get the following:

0
Base func()
1
Base func()
3
Base func()
Press any key to continue . . .

its my understanding that when i call the function func(), it will call the func of the class its defined in, not the base class every time. is this a problem with vector maybe?

Handling Enemies
adam17 replied to adam17's topic in General and Gameplay Programming

Quote: Original post by direwulf
Quote: i also have a class handling the map. my problem is how do i pass information between the managers efficiently?
Using references or pointers...
Quote: how does the tower get bot positions sent to it?
Why does a tower have bot positions sent to it? That doesn't sound like a typical function of a tower. Please word your question more clearly, as I haven't a clue what you are actually trying to do. If I had to GUESS what you are trying to do, I'd guess that you were trying to find all units that are within some range from a specific point. This is called a range search, and can be implemented in O(1) time using a regular grid if the distribution is uniform, or O(log n) time using a kd-tree for a general distribution.

sorry about the wording. i was in a rush before leaving work. the range search sounds like a good idea. how would i go about implementing it in O(1)? do you mean O(n)? currently i have a botManager class, towerManager class, and a map class.
the map class contains all of the tiles in the grid, and the models. the botManager handles all of the bots, and the towerManager handles all of the towers. i put a botManager and a towerManager inside of the map class so the map can call the updates. here is a quick example:

class Tower
{
    //...
};

class TowerManager
{
public:
    vector<Tower> towers;
    //...
};

class Bot
{
    //...
};

class BotManager
{
public:
    vector<Bot> bots;
    //...
};

class Map
{
public:
    BotManager botManager;
    TowerManager towerManager;
    //...
};

should i just have the map class handle tower-bot searches?

[Edited by - adam17 on September 22, 2009 4:11:33 PM]
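To make the uniform-grid range search from the Handling Enemies thread concrete, here is a minimal sketch (in JavaScript rather than the thread's C++, and with invented names like GridIndex). Bots are bucketed by cell, so a tower's query only scans the few cells overlapping its radius; with a roughly uniform distribution, each query touches a bounded number of cells and bots, which is the constant-time-per-query behavior direwulf described:

```javascript
// Sketch of a uniform-grid range search. All names are illustrative.
function GridIndex(cellSize) {
  this.cellSize = cellSize;
  this.cells = new Map(); // "cx,cy" -> array of bots in that cell
}

GridIndex.prototype.key = function (x, y) {
  return Math.floor(x / this.cellSize) + ',' + Math.floor(y / this.cellSize);
};

GridIndex.prototype.insert = function (bot) {
  var k = this.key(bot.x, bot.y);
  if (!this.cells.has(k)) this.cells.set(k, []);
  this.cells.get(k).push(bot);
};

// Return all bots within `radius` of (x, y): scan only the cells that
// overlap the query circle's bounding box, then do an exact distance check.
GridIndex.prototype.query = function (x, y, radius) {
  var result = [];
  var c = this.cellSize;
  var minX = Math.floor((x - radius) / c), maxX = Math.floor((x + radius) / c);
  var minY = Math.floor((y - radius) / c), maxY = Math.floor((y + radius) / c);
  for (var cx = minX; cx <= maxX; cx++) {
    for (var cy = minY; cy <= maxY; cy++) {
      var bucket = this.cells.get(cx + ',' + cy) || [];
      for (var i = 0; i < bucket.length; i++) {
        var dx = bucket[i].x - x, dy = bucket[i].y - y;
        if (dx * dx + dy * dy <= radius * radius) result.push(bucket[i]);
      }
    }
  }
  return result;
};

var grid = new GridIndex(10);
grid.insert({ name: 'near', x: 12, y: 14 });
grid.insert({ name: 'far', x: 90, y: 90 });
var inRange = grid.query(10, 10, 8);
console.log(inRange.map(function (b) { return b.name; })); // only 'near'
```

In the thread's design, the Map (which already owns both managers) would rebuild or update this index each frame from botManager's bots, and each tower would call query with its own position and attack radius.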
https://www.gamedev.net/profile/38773-adam17/?tab=topics
25 February 2010 04:04 [Source: ICIS news]

GUANGZHOU (ICIS news)--Petrochemical majors Sinopec and Saudi Basic Industries Corp (SABIC) have been running their Tianjin cracker at around 70% of capacity, a company source said.

"Such operation rate would be kept in the next three months at least as the upstream refinery would run at [an] 80% rate during the period," said the source in Mandarin.

The cracker gets naphtha feedstock from the upstream refinery.

All the downstream facilities of the cracker would also operate at around 70% load, he added. These include an ethylene oxide/ethylene glycol (EO/EG) unit that can produce 40,000 tonnes/year of EO and 360,000 tonnes/year of EG; a 300,000 tonne/year linear low density polyethylene (LLDPE) plant; and a 300,000 tonne/year high density polyethylene (HDPE) facility. The petrochemical facility in Tianjin also has a 450,000 tonne/year polypropylene (PP) plant.
http://www.icis.com/Articles/2010/02/25/9337737/sinopec-sabic-to-keep-tianjin-cracker-rate-at-70-over-3-months.html
WO2008095010A1 - Secure network switching infrastructure - Google Patents

Publication number: WO2008095010A1 (PCT/US2008/052475)
Authority: WO - WIPO (PCT)
Prior art keywords: flow, controller, switch, secure, switches

SECURE NETWORK SWITCHING INFRASTRUCTURE

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0001] The invention relates to network packet switching, and more particularly, to secure network packet switching.

2. Description of the Related Art

[0002] The Internet architecture was born in a far more innocent era, when there was little need to consider how to defend against malicious attacks. Many of the Internet's primary design goals that were so critical to its success, such as universal connectivity and decentralized control, are now at odds with security.

[0003] Worms, malware, and sophisticated attackers mean that security can no longer be ignored. This is particularly true for enterprise networks, where it is unacceptable to lose data, expose private information, or lose system availability. Security measures have been retrofitted to enterprise networks via many mechanisms, including router ACLs, firewalls, NATs, and middleboxes, along with complex link-layer technologies such as VLANs.

[0004] Despite years of experience and experimentation, these mechanisms remain far from ideal and have created a management nightmare. Requiring a significant amount of configuration and oversight, they are often limited in the range of policies that they can enforce and produce networks that are complex and brittle. Moreover, even with these techniques, security within the enterprise remains notoriously poor.
Worms routinely cause significant losses in productivity and increase potential for data loss. Attacks resulting in theft of intellectual property and other sensitive information are also common.

[0005] Various shortcomings are present in the architecture most commonly used in today's networks. Today's networking technologies are largely based on Ethernet and IP, both of which use a destination-based datagram model for forwarding. The source addresses of the packets traversing the network are largely ignored by the forwarding elements. This has two important, negative consequences. First, a host can easily forge its source address to evade filtering mechanisms in the network. Source forging is particularly dangerous within a LAN environment where it can be used to poison switch learning tables and ARP caches. Source forging can also be used to fake DNS and DHCP responses. Secondly, lack of in-network knowledge of traffic sources makes it difficult to attribute a packet to a user or to a machine. At its most benign, lack of attribution can make it difficult to track down the location of "phantom-hosts." More seriously, it may be impossible to determine the source of an intrusion given a sufficiently clever attacker.

[0006] A typical enterprise network today uses several mechanisms simultaneously to protect its network: VLANs, ACLs, firewalls, NATs, and so on. The security policy is distributed among the boxes that implement these mechanisms, making it difficult to correctly implement an enterprise-wide security policy. Configuration is complex; for example, routing protocols often require thousands of lines of policy configuration. Furthermore, the configuration is often dependent on network topology and based on addresses and physical ports, rather than on authenticated end-points. When the topology changes or hosts move, the configuration frequently breaks, requires careful repair, and potentially undermines its security policies.
[0007] A common response is to put all security policy in one box and at a choke-point in the network, for example, in a firewall at the network's entry and exit points. If an attacker makes it through the firewall, then they will have unfettered access to the whole network. Further, firewalls have been largely restricted to enforcing coarse-grain network perimeters. Even in this limited role, misconfiguration has been a persistent problem. This can be attributed to several factors; in particular, their low-level policy specification and highly localized view leave firewalls highly sensitive to changes in topology.

[0008] Another way to address this complexity is to enforce protection of the end host via distributed firewalls. While reasonable, this places all trust in the end hosts. For end hosts to perform enforcement, the end host must be trusted (or at least some part of it, e.g., the OS, a VMM, the NIC, or some small peripheral). End host firewalls can be disabled or bypassed, leaving the network unprotected, and they offer no containment of malicious infrastructure, e.g., a compromised NIDS. Furthermore, in a distributed firewall scenario, the network infrastructure itself receives no protection, i.e., the network still allows connectivity by default. This design affords no defense-in-depth if the end-point firewall is bypassed, as it leaves all other network elements exposed.

[0009] Today's networks provide a fertile environment for the skilled attacker. Switches and routers must correctly export link state, calculate routes, and perform filtering; over time, these mechanisms have become more complex, with new vulnerabilities discovered at an alarming rate. If compromised, an attacker can take down the network or redirect traffic to permit eavesdropping, traffic analysis, and man-in-the-middle attacks.

[0010] Another resource for an attacker is the proliferation of information on the network layout of today's enterprises.
This knowledge is valuable for identifying sensitive servers, firewalls, and IDS systems that can be exploited for compromise or denial of service. Topology information is easy to gather: switches and routers keep track of the network topology (e.g., the OSPF topology database) and broadcast it periodically in plain text. Likewise, host enumeration (e.g., ping and ARP scans), port scanning, traceroutes, and SNMP can easily reveal the existence of, and the route to, hosts. Today, it is common for network operators to filter ICMP and disable or change default SNMP passphrases to limit the amount of information available to an intruder. As these services become more difficult to access, however, the network becomes more difficult to diagnose.

[0011] Today's networks trust multiple components, such as firewalls, switches, routers, DNS, and authentication services (e.g., Kerberos, AD, and Radius). The compromise of any one component can wreak havoc on the entire enterprise.

[0012] Weaver et al. argue that existing configurations of coarse-grain network perimeters (e.g., NIDS and multiple firewalls) and end host protective mechanisms (e.g., antivirus software) are ineffective against worms when employed individually or in combination. They advocate augmenting traditional coarse-grain perimeters with fine-grain protection mechanisms throughout the network, especially to detect and halt worm propagation.

[0013] There are a number of Identity-Based Networking (IBN) solutions available in the industry. However, most lack control of the datapath, are passive, or require modifications to the end-hosts.

[0014] VLANs are widely used in enterprise networks for segmentation, isolation, and enforcement of coarse-grain policies; they are commonly used to quarantine unauthenticated hosts or hosts without health certificates. VLANs are notoriously difficult to use, requiring much hand-holding and manual configuration.
[0015] Often misconfigured routers make firewalls simply irrelevant by routing around them. The inability to answer simple reachability questions in today's enterprise networks has fueled commercial offerings to help administrators discover what connectivity exists in their network.

[0016] In their 4D architecture, Rexford et al., "Network-Wide Decision Making: Toward A Wafer-Thin Control Plane," Proc. Hotnets, Nov. 2004, argue that decentralized routing policy, access control, and management has resulted in complex routers and cumbersome, difficult-to-manage networks. They argue that routing (the control plane) should be separated from forwarding, resulting in a very simple data path. Although 4D centralizes routing policy decisions, it retains the security model of today's networks. Routing (forwarding tables) and access controls (filtering rules) are still decoupled, disseminated to forwarding elements, and operate on the basis of weakly-bound end-point identifiers (IP addresses).

[0017] Predicate routing attempts to unify security and routing by defining connectivity as a set of declarative statements from which routing tables and filters are generated. In contrast, our goal is to make users first-class objects, as opposed to end-point IDs or IP addresses, that can be used to define access controls.

[0018] In addition to retaining the characteristics that have resulted in the wide deployment of IP and Ethernet networks - simple use model, suitable (e.g., Gigabit) performance, the ability to scale to support large organizations, and robustness and adaptability to failure - a solution should address the deficiencies addressed here.

SUMMARY OF THE INVENTION

[0019] In order to achieve the properties described in the previous section, embodiments according to our invention utilize a centralized control architecture. The preferred architecture is managed from a logically centralized controller.
Rather than distributing policy declaration, routing computation, and permission checks among the switches and routers, these functions are all managed by the controller. As a result, the switches are reduced to very simple forwarding elements whose sole purpose is to enforce the controller's decisions.

[0020] Centralizing the control functions provides the following benefits. First, it reduces the trusted computing base by minimizing the number of heavily trusted components on the network to one, in contrast to the prior designs in which a compromise of any of the trusted services (LDAP, DNS, DHCP, or routers) can wreak havoc on a network. Secondly, limiting the consistency protocols between highly trusted entities protects them from attack. Prior consistency protocols are often done in plaintext (e.g., dyndns) and can thus be subverted by a malicious party with access to the traffic. Finally, centralization reduces the overhead required to maintain consistency.

[0021] In the preferred embodiments the network is "off-by-default." That is, by default, hosts on the network cannot communicate with each other; they can only route to the network controller. Hosts and users must first authenticate themselves with the controller before they can request access to the network resources and, ultimately, to other end hosts. Allowing the controller to interpose on each communication allows strict control over all network flows. In addition, requiring authentication of all network principals (hosts and users) allows control to be defined over high-level names in a secure manner.

[0022] The controller uses the first packet of each flow for connection setup. When a packet arrives at the controller, the controller decides whether the flow represented by that packet should be allowed. The controller knows the global network topology and performs route computation for permitted flows. It grants access by explicitly enabling flows within the network switches along the chosen route.
The controller can be replicated for redundancy and performance. [0023] In the preferred embodiments the switches are simple and dumb. The switches preferably consist of a simple flow table which forwards packets under the direction of the controller. When a packet arrives that is not in the flow table, they forward that packet to the controller, along with information about which port the packet arrived on. When a packet arrives that is in the flow table, it is forwarded according to the controller's directive. Not every switch in the network needs to be one of these switches, as the design allows switches to be added gradually: the network becomes more manageable with each additional switch. [0024] When the controller checks a packet against the global policy, it is preferably evaluating the packet against a set of simple rules, such as "Guests can communicate using HTTP, but only via a web proxy" or "VoIP phones are not allowed to communicate with laptops." To aid in allowing this global policy to be specified in terms of such physical entities, there is a need to reliably and securely associate a packet with the user, group, or machine that sent it. If the mappings between machine names and IP addresses (DNS) or between IP addresses and MAC addresses (ARP and DHCP) are handled elsewhere and are unreliable, then it is not possible to tell who sent the packet, even if the user authenticates with the network. With logical centralization it is simple to keep the namespace consistent, as components join, leave and move around the network. Network state changes simply require updating the bindings at the controller. [0025] In the preferred embodiments a series of techniques is used to secure the bindings between packet headers and the physical entities that sent them. First, the controller takes over all the binding of addresses.
When machines use DHCP to request an IP address, the controller assigns it knowing to which switch port the machine is connected, enabling the controller to attribute an arriving packet to a physical port. Second, the packet must come from a machine that is registered on the network, thus attributing it to a particular machine. Finally, users are required to authenticate themselves with the network, for example, via HTTP redirects in a manner similar to those used by commercial WiFi hotspots, binding users to hosts. Therefore, whenever a packet arrives at the controller, it can securely associate the packet with the particular user and host that sent it. [0026] There are several powerful consequences of the controller knowing both where users and machines are attached and all bindings associated with them. The controller can keep track of where any entity is located. When it moves, the controller finds out as soon as packets start to arrive from a different switch port or wireless access point. The controller can choose to allow the new flow (it can even handle address mobility directly in the controller without modifying the host). [0027] Therefore networks according to the present invention address problems with prior art network architectures, improving overall network security. BRIEF DESCRIPTION OF THE FIGURES [0028] Figure 1 is a block diagram of a network according to the present invention. [0029] Figure 2 is a block diagram of the logical components of the controller of Figure 1. [0030] Figure 3 is a block diagram of switch hardware and software according to the present invention. [0031] Figure 4 is a block diagram of the data path of the switch of Figure 3. [0032] Figure 5 is a block diagram of software modules of the switch of Figure 3. [0033] Figures 6 and 7 are block diagrams of networks incorporating prior art switches and switches according to the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT [0034] Referring now to Figure 1, a network 100 according to the present invention is illustrated. A controller 102 is present to provide network control functions as described below. A series of interconnected switches 104A-D are present to provide the basic packet switching function. A wireless access point 106 is shown connected to switch 104A to provide wireless connectivity. For the following discussion, in many aspects the access point 106 operates as a switch 104. Servers 108A-D and workstations 110A-D are connected to the switches 104A-D. A notebook computer 112 having wireless network capabilities connects to the access point 106. The servers 108, workstations 110 and notebook 112 are conventional units and are not modified to operate on the network 100. This is a simple network for purposes of illustration. An enterprise network will have vastly more components but will function on the same principles. [0035] With reference to Figure 1, there are five basic activities that define how the network 100 operates. A first activity is registration. All switches 104, users, servers 108, workstations 110 and notebooks 112 are registered at the controller 102 with the credentials necessary to authenticate them. The credentials depend on the authentication mechanisms in use. For example, hosts, collectively the servers 108, workstations 110 and notebooks 112, may be authenticated by their MAC addresses, users via username and password, and switches through secure certificates. All switches 104 are also preconfigured with the credentials needed to authenticate the controller 102 (e.g., the controller's public key). [0036] A second activity is bootstrapping. Switches 104 bootstrap connectivity by creating a spanning tree rooted at the controller 102. As the spanning tree is being created, each switch 104 authenticates with and creates a secure channel to the controller 102.
Once a secure connection is established, the switches 104 send link-state information to the controller 102, which is then aggregated to reconstruct the network topology. Each switch 104 knows only a portion of the network topology. Only the controller 102 is aware of the full topology, thus improving security. [0037] A third activity is authentication. Assume User A joins the network with host 110C. Because no flow entries exist in switch 104D for the new host, it will initially forward all of the host 110C packets to the controller 102 (marked with the switch 104D ingress port, the default operation for any unknown packet). Next assume host 110C sends a DHCP request to the controller 102. After checking the host 110C MAC address, the controller 102 allocates an IP address (IP 110C) for it, binding host 110C to IP 110C, IP 110C to MAC 110C, and MAC 110C to a physical port on switch 104D. In the next operation User A opens a web browser, whose traffic is directed to the controller 102, and authenticates through a web-form. Once authenticated, User A is bound to host 110C. [0038] A fourth activity is flow setup. To begin, User A initiates a connection to User B at host 110D, who is assumed to have already authenticated in a manner similar to User A. Switch 104D forwards the packet to the controller 102 after determining that the packet does not match any active entries in its flow table. On receipt of the packet, the controller 102 decides whether to allow or deny the flow, or require it to traverse a set of waypoints. If the flow is allowed, the controller 102 computes the flow's route, including any policy-specified waypoints on the path. The controller 102 adds a new entry to the flow tables of all the switches 104 along the path. [0039] The fifth activity is forwarding. If the controller 102 allowed the path, it sends the packet back to switch 104D, which forwards it to switch 104C based on the new flow entry.
Switch 104C in turn forwards the packet to switch 104B, which in turn forwards the packet to host 110D based on its new flow entry. Subsequent packets from the flow are forwarded directly by the switch 104D, and are not sent to the controller 102. The flow-entry is kept in the relevant switches 104 until it times out or is revoked by the controller 102. [0040] A switch 104 is like a simplified Ethernet switch. It has several Ethernet interfaces that send and receive standard Ethernet packets. Internally, however, the switch 104 is much simpler, as there are several things that conventional Ethernet switches do that the switch 104 need not do. The switch 104 does not need to learn addresses, support VLANs, check for source-address spoofing, or keep flow-level statistics (e.g., start and end time of flows, although it will typically maintain per-flow packet and byte counters for each flow entry). If the switch 104 is replacing a layer-3 "switch" or router, it does not need to maintain forwarding tables, ACLs, or NAT. It does not need to run routing protocols such as OSPF, ISIS, and RIP. Nor does it need separate support for SPANs and port-replication. Port-replication is handled directly by the flow table under the direction of the controller 102. [0041] It is also worth noting that the flow table can be several orders-of-magnitude smaller than the forwarding table in an equivalent Ethernet switch. In an Ethernet switch, the table is sized to minimize broadcast traffic: as switches flood during learning, this can swamp links and makes the network less secure. As a result, an Ethernet switch needs to remember all the addresses it is likely to encounter; even small wiring closet switches typically contain a million entries. The present switches 104, in contrast, need only hold entries for currently active flows. [0042] The switch 104 datapath is a managed flow table. Flow entries contain a Header (to match packets against), an Action (to tell the switch 104 what to do with the packet), and Per-Flow Data described below.
There are two common types of entries in the flow table: per-flow entries describing application flows that should be forwarded, and per-host entries describing misbehaving hosts whose packets should be dropped. For application flows, the Action is to forward the packet, update the Per-Flow Data, and set an activity bit so that inactive entries can be timed-out. For misbehaving hosts, the Header field contains an Ethernet source address and the physical ingress port. The associated Action is to drop the packet, update a packet-and-byte counter, and set an activity bit to tell when the host has stopped sending. [0043] Only the controller 102 can add entries to the flow table of the switch 104. Entries are removed because they timeout due to inactivity, which is a local decision, or because they are revoked by the controller 102. The controller 102 might revoke a single, badly behaved flow, or it might remove a whole group of flows belonging to a misbehaving host, a host that has just left the network, or a host whose privileges have just changed. [0044] The flow table is preferably implemented using two exact-match tables: one for application flow entries and one for misbehaving host entries. Because flow entries are exact matches, rather than longest-prefix matches, it is easy to use hashing schemes in conventional memories rather than expensive, power-hungry TCAMs. [0045] Other actions are possible in addition to just forward and drop. For example, a switch 104 might maintain multiple queues for different classes of traffic, and the controller 102 can tell it to queue packets from application flows in a particular queue by inserting queue IDs into the flow table. This can be used for end-to-end layer-2 isolation for classes of users or hosts. A switch 104 could also perform address translation by replacing packet headers. This could be used to obfuscate addresses in the network 100 by "swapping" addresses at each switch 104 along the path, so that an eavesdropper would not be able to tell which end-hosts are communicating, or to implement address translation for NAT in order to conserve addresses.
Finally, a switch 104 could control the rate of a flow. [0046] The switch 104 also preferably maintains a handful of implementation-specific entries to reduce the amount of traffic sent to the controller 102. For example, the switch 104 can set up symmetric entries for flows that are allowed to be outgoing only. This number should remain small to keep the switch 104 simple, although this is at the discretion of the designer. On one hand, such entries can reduce the amount of traffic sent to the controller 102; on the other hand, any traffic that misses on the flow table will be sent to the controller 102 anyway, so this is just an optimization. [0047] It is worth pointing out that the secure channel from a switch 104 to its controller 102 may pass through other switches 104. As far as the other switches 104 are concerned, the channel simply appears as an additional flow-entry in their table. [0048] The switch 104 needs a small local manager to establish and maintain the secure channel to the controller 102, to monitor link status, and to provide an interface for any additional switch-specific management and diagnostics. This can be implemented in the switch's software layer. [0049] There are two ways a switch 104 can talk to the controller 102. The first, which has been discussed so far, is for switches 104 that are part of the same physical network as the controller 102. This is expected to be the most common case; e.g., in an enterprise network on a single campus. In this case, the switch 104 finds the controller 102, preferably using the modified Minimum Spanning Tree protocol described below. The process results in a secure channel from switch 104 to switch 104 all the way to the controller 102. If the switch 104 is not within the same broadcast domain as the controller 102, the switch 104 can create an IP tunnel to it after being manually configured with its IP address.
This approach can be used to control switches 104 in arbitrary locations, e.g., the other side of a conventional router or in a remote location. In one interesting application, the switch 104, most likely a wireless access point 106, is placed in a home or small business, managed remotely by the controller 102 over this secure tunnel. The local switch manager relays link status to the controller 102 so it can reconstruct the topology for route computation. [0050] Switches 104 maintain a list of neighboring switches 104 by broadcasting and receiving neighbor-discovery messages. Neighbor lists are sent to the controller 102 after authentication, on any detectable change in link status, and periodically every 15 seconds. [0051] Figure 2 gives a logical block-diagram of the controller 102. The components do not have to be co-located on the same machine and can operate on any suitable hardware and software environment, the hardware including a CPU, memory for storing data and software programs, and a network interface, and the software including an operating system, a network interface driver and various other components described below. [0052] Briefly, the components work as follows. An authentication component 202 is passed all traffic from unauthenticated or unbound MAC addresses. It authenticates users and hosts using credentials stored in a registration database 204 and optionally provides IP addresses when serving as the DHCP server. Once a host or user authenticates, the controller 102 remembers to which switch port they are connected. The controller 102 holds the policy rules, stored in a policy file 206, which are compiled by a policy compiler 208 into a fast lookup table (not shown). When a new flow starts, it is checked against the rules by a permission check module 210 to see if it should be accepted, denied, or routed through a waypoint.
Next, a route computation module 212 uses the network topology 214 to pick the flow's route, which is used in conjunction with the permission information from the permission check module 210 to build the various flow table entries provided to the switches 104. The topology 214 is maintained by a switch manager 216, which receives link updates from the switches 104 as described above. [0053] All entities that are to be named by the network 100 (i.e., hosts, protocols, switches, users, and access points) must be registered. The set of registered entities makes up the policy namespace and is used to statically check the policy to ensure it is declared over valid principals. The entities can be registered directly with the controller 102, or, as is more likely in practice, the controller 102 can interface with a global registry such as LDAP or AD, which would then be queried by the controller 102. By forgoing switch registration, it is also possible to provide the same "plug-and-play" configuration model for switches as Ethernet. Under this configuration the switches 104 would distribute keys on boot-up, rather than requiring manual distribution, under the assumption that the network 100 has not yet been compromised. [0054] All switches, hosts, and users must authenticate with the network 100. No particular host authentication mechanism is specified; a network 100 could support multiple authentication methods (e.g., 802.1x or explicit user login) and employ entity-specific authentication methods. In a preferred embodiment hosts authenticate by presenting registered MAC addresses, while users authenticate through a web front-end to a Kerberos server. Switches 104 authenticate using SSL with server- and client-side certificates.
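The chain of bindings the controller 102 maintains (paragraphs [0025] and [0052]) - physical port to MAC address via DHCP, MAC address to IP address, and host to authenticated user - can be illustrated with a brief sketch. The C++ below is a hypothetical illustration only; the class, the member names, and the string-based identifiers are assumptions for exposition and are not the data structures of any embodiment.

```cpp
#include <cassert>
#include <iterator>
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch of the controller's binding tables.
struct Bindings {
    std::map<std::string, std::string> macOfPort;  // physical port -> MAC
    std::map<std::string, std::string> ipOfMac;    // MAC -> allocated IP
    std::map<std::string, std::string> userOfMac;  // MAC -> authenticated user

    // DHCP: the controller allocates an IP knowing the ingress port.
    void dhcpBind(const std::string& port, const std::string& mac,
                  const std::string& ip) {
        macOfPort[port] = mac;
        ipOfMac[mac] = ip;
    }

    // Web-form login binds a user to the host (identified by MAC).
    void authenticateUser(const std::string& mac, const std::string& user) {
        userOfMac[mac] = user;
    }

    // Attribute an arriving packet to a user via its ingress port.
    std::optional<std::string> userForPort(const std::string& port) const {
        auto m = macOfPort.find(port);
        if (m == macOfPort.end()) return std::nullopt;
        auto u = userOfMac.find(m->second);
        if (u == userOfMac.end()) return std::nullopt;
        return u->second;
    }

    // When a host leaves, all of its bindings are invalidated.
    void invalidateHost(const std::string& mac) {
        ipOfMac.erase(mac);
        userOfMac.erase(mac);
        for (auto it = macOfPort.begin(); it != macOfPort.end();)
            it = (it->second == mac) ? macOfPort.erase(it) : std::next(it);
    }
};
```

With such a structure, a packet arriving on a known port can be attributed to the user bound to that port's host, and a departure removes every binding tied to the host in one step.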
[0055] One of the powerful features of the present network 100 is that it can easily track all the bindings between names, addresses, and physical ports on the network 100, even as switches 104, hosts, and users join, leave, and move around the network 100. It is this ability to track these dynamic bindings that makes a policy language possible. It allows description of policies in terms of users and hosts, yet implementation of the policy uses flow tables in switches 104. [0056] A binding is never made without requiring authentication, to prevent an attacker assuming the identity of another host or user. When the controller 102 detects that a user or host leaves, all of its bindings are invalidated, and all of its flows are revoked at the switch 104 to which it was connected. Unfortunately, in some cases, we cannot get reliable explicit join and leave events from the network 100. Therefore, the controller 102 may resort to timeouts or the detection of movement to another physical access point before revoking access. [0057] Because the controller 102 makes all binding decisions, it can journal the bindings and their durations over time. [0058] The controller 102 can thus make the binding history available for diagnostics and forensics. Therefore the controllers 102 should provide an interface that gives privileged users access to the information. In one preferred embodiment, we built a modified DNS server that accepts a query with a timestamp, and returns the complete bound namespace associated with a specified user, host, or IP address. [0059] The controller 102 can be implemented to be stateful or stateless. A stateful controller 102 keeps track of all the flows it has created. When the policy changes, when the topology changes, or when a host or user misbehaves, a stateful controller 102 can traverse its list of flows and make changes where necessary. A stateless controller 102 does not keep track of the flows it created; it relies on the switches 104 to keep track of their flow tables.
If anything changes or moves, the associated flows would be revoked by the controller 102 sending commands to the switch's Local Manager. It is a design choice whether a controller 102 is stateful or stateless, as there are arguments for and against both approaches. [0060] There are many occasions when a controller 102 wants to limit the resources granted to a user, host, or flow. For example, it might wish to limit a flow's rate, limit the rate at which new flows are set up, or limit the number of IP addresses allocated. The limits will depend on the design of the controller 102 and the switch 104, and they will be at the discretion of the network manager. In general, however, the present invention makes it easy to enforce these limits either by installing a filter in a switch's flow table or by telling the switch 104 to limit a flow's rate. [0061] The ability to directly manage resources from the controller 102 is the primary means of protecting the network from resource exhaustion attacks. To protect itself from connection flooding from unauthenticated hosts, a controller 102 can place a limit on the number of authentication requests per host and per switch port; hosts that exceed their allocation can be closed down by adding an entry in the flow table that blocks their Ethernet address. If such hosts spoof their address, the controller 102 can disable the switch port. A similar approach can be used to prevent flooding from authenticated hosts. [0062] Flow state exhaustion attacks are also preventable through resource limits. Since each flow setup request is attributable to a user, host or access point, the controller 102 can enforce limits on the number of outstanding flows per identifiable source. The network 100 may also support more advanced flow allocation policies, such as enforcing strict limits on the number of flows forwarded in hardware per source, and looser limits on the number of flows in the slower (and more abundant) software forwarding tables.
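The per-host authentication limits of paragraph [0061] can be sketched as a simple counter. The class name, the fixed limit, and the string MAC keys below are illustrative assumptions only, not the claimed mechanism; a deployment would additionally key on switch port and install a blocking flow table entry for offenders.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch: count authentication requests per host (MAC)
// and refuse hosts that exceed their allocation.
class AuthLimiter {
    std::map<std::string, int> requests_;  // MAC -> requests observed
    const int limit_;
public:
    explicit AuthLimiter(int limit) : limit_(limit) {}

    // Returns true if the request may proceed. Once a host exceeds the
    // limit, the controller would add a flow table entry that drops
    // packets from its Ethernet address.
    bool allowAuthRequest(const std::string& mac) {
        return ++requests_[mac] <= limit_;
    }

    bool isBlocked(const std::string& mac) const {
        auto it = requests_.find(mac);
        return it != requests_.end() && it->second > limit_;
    }
};
```

The same counting scheme extends to the flow-state limits of paragraph [0062] by counting outstanding flows per identifiable source instead of authentication attempts.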
[0063] Enterprise networks typically carry a lot of multicast and broadcast traffic. Indeed, VLANs were first introduced to limit overwhelming amounts of broadcast traffic. It is worth distinguishing broadcast traffic, which is mostly discovery protocols such as ARP, from multicast traffic, which is often from useful applications such as video. In a flow-based network as in the present invention, it is quite easy for switches 104 to handle multicast. The switch 104 keeps a bitmap for each flow to indicate which ports the packets are to be sent to along the path. [0064] In principle, broadcast discovery protocols are also easy to handle in the controller 102. Typically, a host is trying to find a server or an address. Given that the controller 102 knows all, it can reply to a request without creating a new flow and broadcasting the traffic. This provides an easy solution for ARP traffic, which is a significant fraction of all network traffic. The controller 102 knows all IP and Ethernet addresses and can reply directly. In practice, however, ARP could generate a huge load for the controller 102. One embodiment would be to provide a dedicated ARP server in the network 100 to which all switches 104 direct all ARP traffic. But there is a dilemma when trying to support other discovery protocols; each one has its own protocol, and it would be onerous for the controller 102 to understand all of them. The preferred approach has been to implement the common ones directly in the controller 102, and then broadcast low-level requests with a rate-limit. While this approach does not scale well, this is considered a legacy problem, as discovery protocols will largely go away when networks according to the present invention are adopted, being replaced by a direct way to query the network, such as one similar to the fabric login used in Fibre Channel networks.
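Paragraph [0064]'s observation that the controller can answer ARP queries directly, because it already knows all IP-to-Ethernet bindings, can be sketched as a lookup that replaces the broadcast. The class and method names are hypothetical illustrations.

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch of a controller-side ARP responder: no new flow
// is created and nothing is broadcast when the binding is known.
class ArpResponder {
    std::map<std::string, std::string> macOfIp_;  // IP -> MAC
public:
    // Bindings are already known to the controller from DHCP allocation.
    void learn(const std::string& ip, const std::string& mac) {
        macOfIp_[ip] = mac;
    }

    // "Who has <ip>?" Reply with the MAC directly. An unknown address
    // returns nothing; such a request could be broadcast, rate-limited,
    // as described for other discovery protocols.
    std::optional<std::string> whoHas(const std::string& ip) const {
        auto it = macOfIp_.find(ip);
        if (it == macOfIp_.end()) return std::nullopt;
        return it->second;
    }
};
```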
[0065] Designing a network architecture around a central controller 102 raises concerns about availability and scalability. While measurements discussed below suggest that thousands of machines can be managed by a single desktop computer, multiple controllers 102 may be desirable to provide fault-tolerance or to scale to very large networks. This section describes three techniques for replicating the controller 102. In the simplest two approaches, which focus solely on improving fault-tolerance, secondary controllers 102 are ready to step in upon the primary's failure. These can be in cold-standby mode, having no network binding state, or warm-standby mode, having network binding state. In the fully-replicated model, which also improves scalability, requests from switches 104 are spread over multiple active controllers 102. [0066] In the cold-standby approach, a primary controller 102 is the root of the modified spanning tree (MST) and handles all registration, authentication, and flow establishment requests. Backup controllers sit idly by, waiting to take over if needed. All controllers 102 participate in the MST, sending HELLO messages to switches 104 advertising their ID. Just as with a standard spanning tree, if the root with the "lowest" ID fails, the network 100 will converge on a new root, i.e., a new controller. If a backup becomes the new MST root, it will start to receive flow requests and begin acting as the primary controller. In this approach, because no binding state is retained, hosts and users must re-authenticate and re-bind upon primary failure. Furthermore, in large networks, it might take a while for the MST to reconverge. [0067] In the warm-standby approach, registration and network state are kept consistent among the controllers, but now bindings must be replicated across controllers as well. Because these bindings can change quickly as new users and hosts come and go, it is preferred that only weak consistency be maintained.
Because controllers make bind events atomic, primary failures can at worst lose the latest bindings, requiring that some new users and hosts re-authenticate themselves. [0068] In the fully-replicated model, the job of maintaining consistent journals of the bind events is more difficult. It is preferred to maintain only weak consistency, though there are heavier-weight alternatives that provide stronger consistency guarantees if desired (e.g., replicated state machines). [0069] Link and switch failures must not bring down the network 100 as well. Recall that switches 104 always send neighbor-discovery messages to keep track of link-state. When a link fails, the switch 104 removes all flow table entries tied to the failed port and sends its new link-state information to the controller 102. This way, the controller 102 also learns the new topology. When packets arrive for a removed flow-entry at the switch 104, the packets are sent to the controller 102, much like they are new flows, and the controller 102 computes and installs a new path based on the new topology. [0070] When the network 100 starts, the switches 104 must connect and authenticate with the controller 102. On startup, the network creates a minimum spanning tree with the controller 102 advertising itself as the root. Each switch 104 has been configured with credentials for the controller 102 and the controller 102 with the credentials for all the switches 104. If a switch 104 finds a shorter path to the controller 102, it attempts two-way authentication with it before advertising that path as a valid route. Therefore the minimum spanning tree grows radially from the controller 102, hop-by-hop as each switch 104 authenticates. [0071] Authentication is done using the preconfigured credentials to ensure that a misbehaving node cannot masquerade as the controller 102 or another switch 104. If authentication is successful, the switch 104 creates an encrypted connection with the controller 102 which is used for all communication between the pair.
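The spanning tree that grows radially from the controller 102 (paragraph [0070]) can be illustrated, under simplifying assumptions (symmetric links, instantaneous authentication, unit link costs), as a breadth-first search from the root that records each switch's parent, i.e., its next hop toward the controller. The function and the string switch names are hypothetical; the actual protocol authenticates hop-by-hop as the tree grows.

```cpp
#include <cassert>
#include <map>
#include <queue>
#include <string>
#include <vector>

// Illustrative sketch: compute each switch's parent on the tree rooted
// at the controller, given the adjacency lists of the switch graph.
std::map<std::string, std::string> spanningTreeParents(
    const std::map<std::string, std::vector<std::string>>& links,
    const std::string& root) {
    std::map<std::string, std::string> parent;
    std::queue<std::string> frontier;
    parent[root] = root;  // the root (controller) is its own parent
    frontier.push(root);
    while (!frontier.empty()) {
        std::string node = frontier.front();
        frontier.pop();
        auto it = links.find(node);
        if (it == links.end()) continue;
        for (const auto& nbr : it->second) {
            if (!parent.count(nbr)) {  // first (shortest) path found
                parent[nbr] = node;    // tree grows one hop outward
                frontier.push(nbr);
            }
        }
    }
    return parent;
}
```

The same search over the aggregated topology also suggests how the route computation module 212 can pick a path for a permitted flow once the controller knows the full graph.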
[0072] By design, the controller 102 knows the upstream switch 104 and physical port to which each authenticating switch 104 is attached. After a switch 104 authenticates and establishes a secure channel to the controller 102, it forwards all packets it receives for which it does not have a flow entry to the controller 102, annotated with the ingress port. This includes the traffic of authenticating switches 104. Therefore the controller 102 can pinpoint the attachment point to the spanning tree of all non-authenticated switches 104 and hosts. Once a switch 104 authenticates, the controller 102 will establish a flow in the network between itself and the switch 104 for the secure channel. [0073] Pol-Eth is a language according to the present invention for declaring policy in the network 100. While a particular language is not required, Pol-Eth is described as an example. [0074] In Pol-Eth, network policy is declared as a set of rules, each consisting of a condition and a corresponding action. For example, the rule to specify that user bob is allowed to communicate with the HTTP server (using HTTP) is: [0075] [(usrc="bob")∧(protocol="http")∧(hdst="web-server")]: allow; [0076] Flows are described by the tuple {usrc, udst, hsrc, hdst, apsrc, apdst, protocol}, whose members respectively signify the user, host, and access point sources and destinations and the protocol of the flow; in the example above, "bob" is the name of a user, "http" the name of a protocol, and "web-server" the name of a host. [0077] In Pol-Eth, the values of predicates may include single names (e.g., "bob"), lists of names (e.g., ["bob","linda"]), or group inclusion (e.g., in("workstations")). All names must be registered with the controller 102 or declared as groups in the policy file, as described below. [0078] Actions include allow, deny, waypoints, and outbound-only (for NAT-like security). Waypoint declarations include a list of entities to route the flow through, e.g., waypoints("ids","http-proxy"). [0079] Pol-Eth rules are independent and do not contain an intrinsic ordering.
Thus, multiple rules with conflicting actions may be satisfied by the same flow. Conflicts are preferably resolved by assigning priorities based on declaration order, though other resolution techniques may be used. If one rule precedes another in the policy file, it is assigned a higher priority. [0080] As an example, in the following declaration, bob may accept incoming connections even if he is a student. [0081] # bob is unrestricted [(udst="bob")]: allow; [0082] # all students can make outbound connections [(usrc=in("students"))]: outbound-only; [0083] # deny everything by default []: deny; [0084] Unfortunately, in today's multi-user operating systems, it is difficult from a network perspective to attribute outgoing traffic to a particular user. According to the present invention, if multiple users are logged into the same machine (and not identifiable from within the network), the network applies the least restrictive action to each of the flows. This is an obvious relaxation of the security policy. To address this, it is possible to integrate with trusted end-host operating systems to provide user-isolation and identification, for example, by providing each user with a virtual machine having a unique MAC address. [0085] Pol-Eth also allows predicates to contain arbitrary functions. For example, the predicate (expr="foo") will execute the function "foo" at runtime and use the boolean return value as the outcome. Predicate functions are written in C++ and executed within the network namespace. During execution, they have access to all parameters of the flow as well as to the full binding state of the network. [0086] The inclusion of arbitrary functions with the expressibility of a general programming language allows predicates to maintain local state, affect system state, or access system libraries. For example, we have created predicates that depend on the time-of-day and contain dependencies on which users or hosts are logged onto the network.
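A predicate function of the kind described in paragraph [0085] might look like the following sketch. The Flow representation, the registry-based dispatch, and the "business-hours" predicate are invented for illustration; actual predicate functions execute within the network namespace with access to the flow's parameters and the full binding state.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Hypothetical representations: a flow is a bag of named parameters,
// and a predicate is a C++ function over it returning a boolean.
using Flow = std::map<std::string, std::string>;
using Predicate = std::function<bool(const Flow&)>;

// Example predicate: only permit flows during working hours. Here the
// hour is read from the flow parameters purely for illustration.
bool businessHours(const Flow& flow) {
    int hour = std::stoi(flow.at("hour"));
    return hour >= 9 && hour < 17;
}

// Evaluating (expr="business-hours") dispatches through a registry of
// named predicates; an unknown name evaluates to false.
bool evalExpr(const std::map<std::string, Predicate>& registry,
              const std::string& name, const Flow& flow) {
    auto it = registry.find(name);
    return it != registry.end() && it->second(flow);
}
```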
A notable downside is that it becomes impossible to statically reason about safety and execution times: a poorly written function can crash the controller or slow down permission checks. [0087] Given how frequently new flows are created - and how fast decisions must be made - it is not practical to interpret the network policy. Instead, it is preferred to compile it. But compiling Pol-Eth is non-trivial because of the potentially huge namespace in the network. Creating a lookup table for all possible flows specified in the policy would be impractical. [0088] A preferred Pol-Eth implementation combines compilation and just-in-time creation of search functions. Each rule is associated with the principals to which it applies. This is a one-time cost, performed at startup and on each policy change. [0089] The first time a sender communicates to a new receiver, a custom permission check function is created dynamically to handle all subsequent flows between the same pair. [0090] We have implemented a source-to-source compiler that generates C++ from a Pol-Eth policy file. The resulting source is then compiled and linked into a binary. As a consequence, policy changes currently require re-linking the controller. We are currently upgrading the policy compiler so that policy changes can be dynamically loaded at runtime. [0091] A functional embodiment of a network according to the present invention has been built and deployed. In that embodiment the network 100 connected over 300 registered hosts and several hundred users. The embodiment included 19 switches of three different types: wireless access points 106, and Ethernet switches of two types, dedicated hardware and software. Registered hosts included laptops, printers, VoIP phones, desktop workstations and servers. [0092] Three different switches have been tested. The first is an 802.11g wireless access point based on a commercial access point.
The second is a wired 4-port Gigabit Ethernet switch that forwards packets at line-speed, based on the NetFPGA programmable switch platform and written in Verilog. The third is a wired 4-port Ethernet switch implemented in software in Linux on a desktop PC, serving as a development environment and allowing rapid deployment and evolution.

[0093] For design re-use, the same flow table was implemented in each switch design even though it is preferable to optimize for each platform. The main table for packets that should be forwarded has 8k flow entries and is searched using an exact match on the whole header. Two hash functions (two CRCs) were used to reduce the chance of collisions, and only one flow was placed in each entry of the table. 8k entries were chosen because of the limitations of the programmable hardware (NetFPGA). A commercial ASIC-based hardware switch, an NPU-based switch, or a software-only switch would support many more entries. A second table was implemented to hold dropped packets, which also used exact-match hashing. In that implementation, the dropped table was much bigger (32k entries) because the controller was stateless and the outbound-only actions were implemented in the flow table. When an outbound flow starts, it is preferable to set up the return-route at the same time. Because the controller is stateless, it does not remember that the outbound flow was allowed. Unfortunately, when proxy ARP is used, the Ethernet addresses of packets flowing in the reverse direction are not known until they arrive. The second table was used to hold flow entries for return-routes, with a wildcard Ethernet address, as well as for dropped packets. A stateful controller would not need these entries. Finally, a third small table for flows with wildcards in any field was used. These are there for convenience during prototyping, to aid in determining how many entries a real deployment would need. It holds flow entries for the spanning tree messages, ARP and DHCP.
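The two-CRC, exact-match flow table of [0093] can be sketched as a pair of hash tables in which a stored copy of the flow-tuple confirms each hit. The hash functions below are simple mixes standing in for the CRCs, the table is shrunk for illustration, and a 64-bit integer stands in for the 155-bit flow-tuple; none of these details are from the patent:

```cpp
#include <array>
#include <cstdint>
#include <optional>

constexpr std::size_t kSlots = 8;  // the real table has 8,192 entries

struct Entry {
    bool valid = false;
    uint64_t tuple = 0;  // stand-in for the 155-bit flow-tuple
    int out_port = 0;
};

struct FlowTable {
    std::array<Entry, kSlots> t1{}, t2{};

    // Two independent hash functions, as in the two-CRC scheme.
    static std::size_t h1(uint64_t x) {
        return (x * 0x9E3779B97F4A7C15ULL >> 32) % kSlots;
    }
    static std::size_t h2(uint64_t x) { return (x ^ (x >> 7)) % kSlots; }

    // Only one flow per entry; a collision in both tables falls back to
    // the software flow table.
    bool insert(uint64_t tuple, int port) {
        Entry& a = t1[h1(tuple)];
        if (!a.valid) { a = {true, tuple, port}; return true; }
        Entry& b = t2[h2(tuple)];
        if (!b.valid) { b = {true, tuple, port}; return true; }
        return false;  // handled in software
    }

    // An exact match on the stored tuple confirms a hit, as in the hardware.
    std::optional<int> lookup(uint64_t tuple) const {
        const Entry& a = t1[h1(tuple)];
        if (a.valid && a.tuple == tuple) return a.out_port;
        const Entry& b = t2[h2(tuple)];
        if (b.valid && b.tuple == tuple) return b.out_port;
        return std::nullopt;  // miss: packet goes to the controller
    }
};
```

Storing the full tuple alongside the action is what lets a hash collision be detected as a miss rather than misdirecting a packet.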
[0094] The access point ran on a Linksys WRTSL54GS wireless router running OpenWrt. The data-path and flow table were based on 5K lines of C++, of which 1.5K were for the flow table. The local switch manager is written in software and talks to the controller using the native Linux TCP stack. When running from within the kernel, the forwarding path runs at 23Mb/s, the same speed as Linux IP forwarding and layer 2 bridging.

[0095] The hardware switch was implemented on NetFPGA v2.0 with four Gigabit Ethernet ports, a Xilinx Virtex-II FPGA and 4Mbytes of SRAM for packet buffers and the flow table. The hardware forwarding path consisted of 7k lines of Verilog; flow-entries were 40 bytes long. The hardware can forward minimum size packets in full-duplex at a line-rate of 1Gb/s.

[0096] To simplify definition of a switch, a software switch was built from a regular desktop PC and a 4-port Gigabit Ethernet card. The forwarding path and the flow table were implemented to mirror the hardware implementation. The software switch in kernel mode can forward MTU size packets at 1Gb/s. However, as the packet size drops, the CPU cannot keep up. At 100 bytes, the switch can only achieve a throughput of 16Mb/s. Clearly, for now, the switch needs to be implemented in hardware.

[0097] The preferred switch design as shown in Figure 3 is decomposed into two memory independent processes, the datapath and the control path. A CPU or processor 302 performs the primary compute and control functions of the switch 300. Switch memory 304 holds the operating system 306, such as Linux; control path software 308; and datapath software 310. A switch ASIC 312 is used in the preferred embodiment to provide hardware acceleration to readily enable line rate operation. If the primary datapath operations are performed by the datapath software 310, the ASIC 312 is replaced by a simple network interface.
The control path software 308 manages the spanning tree algorithm, handles all communication with the controller, and performs other local manager functions. The datapath software 310 performs the forwarding.

[0098] The control path software 308 preferably runs exclusively in user-space and communicates with the datapath software 310 over a special interface exported by the datapath software 310. The datapath software 310 may run in user-space or within the kernel. When running with the hardware switch ASIC 312, the datapath software 310 handles setting the hardware flow entries, secondary and tertiary flow lookups, statistics tracking, and timing out flow entries. Switch control and management software 314 is also present to perform those functions described in more detail below.

[0099] Figure 4 shows a decomposition of the functional software and hardware layers making up the switch datapath. In Block 402 received packets are checked for a valid length and undersized packets are dropped. In preparation for calculating the hash functions, Block 404 parses the packet header to extract the following fields: Ethernet header, IP header, and TCP or UDP header.

[00100] A flow-tuple is built for each received packet; for an IPv4 packet, the tuple has 155 bits consisting of: MAC DA (lower 16 bits), MAC SA (lower 16 bits), Ethertype (16 bits), IP src address (32 bits), IP dst address (32 bits), IP protocol field (8 bits), TCP or UDP src port number (16 bits), TCP or UDP dst port number (16 bits), received physical port number (3 bits).

[00101] Block 406 computes two hash functions on the flow-tuple (padded to 160 bits), and returns two indices. Block 408 uses the indices to look up into two hash tables in SRAM. The flow table stores 8,192 flow entries. Each flow entry holds the 155 bit flow tuple (to confirm a hit or a miss on the hash table), and a 152 bit field used to store parameters for an action when there is a lookup hit.
The action fields include one bit to indicate a valid flow entry, three bits to identify a destination port (physical output port, port to CPU, or null port that drops the packet), a 48 bit overwrite MAC DA, a 48 bit overwrite MAC SA, a 20-bit packet counter, and a 32 bit byte counter. The 307-bit flow-entry is stored across two banks of SRAM 410 and 412.

[00102] Block 414 controls the SRAM, arbitrating access for two requestors: the flow table lookup (two accesses per packet, plus statistics counter updates), and the CPU 302 via a PCI bus. Every 16 system clock cycles, the block 414 can read two flow-tuples, update a statistics counter entry, and perform one CPU access to write or read 4 bytes of data. To prevent counters from overflowing, in the illustrated embodiment the byte counters need to be read every 30 seconds by the CPU 302, and the packet counters every 0.5 seconds. Alternatives can increase the size of the counter field to reduce the load on the CPU or use well-known counter-caching techniques.

[00103] Block 416 buffers packets while the header is processed in Blocks 402-408 and 414. If there was a hit on the flow table, the packet is forwarded accordingly to the correct outgoing port or the CPU port, or could be actively dropped. If there was a miss on the flow table, the packet is forwarded to the CPU 302. Block 418 can also overwrite a packet header if the flow table so indicates. Packets are provided from block 418 to one of three queues 420, 422, 424. Queues 420 and 422 are connected to a mux 426 to provide packets to the Ethernet MAC FIFO 428. Two queues are used to allow prioritization of flows if desired, such as new flows to the controller 102. Queue 424 provides packets to the CPU 302 for operations not handled by the hardware. A fourth queue 430 receives packets from the CPU 302 and provides them to the mux 426, allowing CPU-generated packets to be directly transmitted.
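The 0.5 second and 30 second counter read intervals of [00102] follow from worst-case arithmetic at the 1Gb/s line rate. The 84-byte per-packet figure (64 byte minimum frame plus 20 bytes of preamble and inter-frame gap) is a standard Ethernet assumption, not stated in the text:

```cpp
// Worst-case wrap times for the per-flow statistics counters at 1 Gb/s.

// Minimum-size packets: 84 bytes on the wire each, so roughly 1.488 Mpps.
double max_packets_per_sec() { return 1e9 / (84.0 * 8.0); }

// The 20-bit packet counter wraps after 2^20 packets: about 0.70 s,
// hence the 0.5 s read interval.
double packet_counter_wrap_sec() {
    return static_cast<double>(1 << 20) / max_packets_per_sec();
}

// The 32-bit byte counter wraps after 2^32 bytes at 1 Gb/s (125 MB/s):
// about 34 s, hence the 30 s read interval.
double byte_counter_wrap_sec() {
    return 4294967296.0 / (1e9 / 8.0);
}
```

Both intervals leave a safety margin below the computed wrap time, which is what the text's "to prevent counters from overflowing" requires.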
[00104] Overall, the hardware is controlled by the CPU 302 via memory-mapped registers over the PCI bus. Packets are transferred using standard DMA.

[00105] Figure 5 contains a high-level view of the switch control path. The control path manages all communications with the controller, such as forwarding packets that have failed local lookups and relaying flow setup, tear-down, and filtration requests.

[00106] The control path uses the local TCP stack 502 for communication to the controller using the datapath 400. By design, the datapath 400 also controls forwarding for the local protocol stack. This ensures that no local traffic leaks onto the network that was not explicitly authorized by the controller 102.

[00107] All per-packet functions that do not have per-packet time constraints are implemented within the control path. This ensures that the datapath will be simple, fast and amenable to hardware design and implementation. The implementation includes a DHCP client 504, a spanning tree protocol stack 506, an SSL stack 508 for authentication and encryption of all data to the controller, and support 510 for flow setup and flow-learning to support outbound-initiated-only traffic.

[00108] The switch control and management software 314 has two responsibilities. First, it establishes and maintains a secure channel to the controller 102. On startup, all the switches 104 find a path to the controller 102 by building a modified spanning-tree with the controller 102 as root. The control software 314 then creates an encrypted TCP connection to the controller 102. This connection is used to pass link-state information (which is aggregated to form the network topology) and all packets requiring permission checks to the controller 102. Second, the software 314 maintains a flow table for flow entries not processed in hardware, such as overflow entries due to collisions in the hardware hash table, and entries with wildcard fields.
Wildcards are used for the small implementation-specific table. The software 314 also manages the addition, deletion, and timing-out of entries in the hardware.

[00109] If a packet does not match a flow entry in the hardware flow table, it is passed to software 314. The packet did not match the hardware flow table because: (i) it is the first packet of a flow and the controller 102 has not yet granted it access; (ii) it is from a revoked flow or one that was not granted access; (iii) it is part of a permitted flow but the entry collided with existing entries and must be managed in software; or (iv) it matches a flow entry containing a wildcard field and is handled in software.

[00110] In the full software design of the switch, two flow tables were maintained, one as a secondary hash table for implementation-specific entries and the second as an optimization to reduce traffic to the controller. For example, the second table can be set up with symmetric entries for flows that are allowed to be outgoing only. Because the return source MAC address cannot be predicted when proxy ARP is used, traffic to the controller is saved by maintaining entries with wildcards for the source MAC address and incoming port. The first flow table is a small associative memory to hold flow-entries that could not find an open slot in either of the two hash tables. In a dedicated hardware solution, this small associative memory would be placed in hardware. Alternatively, a hardware design could use a TCAM for the whole flow table in hardware.

[00111] The controller was implemented on a standard Linux PC (1.6GHz Intel Celeron processor and 512MB of DRAM). The controller is based on 45K lines of C++, with an additional 4K lines generated by the policy compiler, and 4.5K lines of Python for the management interface.

[00112] Switches and hosts were registered using a web interface to the controller and the registry was maintained in a standard database.
For access points, the method of authentication was determined during registration. Users were registered using a standard directory service.

[00113] In the implemented system, users authenticated using the existing system, which used Kerberos and a registry of usernames and passwords. Users authenticate via a web interface. When they first connect to a browser they are redirected to a login web page. In principle, any authentication scheme could be used, and most enterprises would have their own. Access points also, optionally, authenticate hosts based on their Ethernet address, which is registered with the controller.

[00114] The implemented controller logged bindings whenever they were added, removed or on checkpointing the current bind-state. Each entry in the log was timestamped.

[00115] The log was easily queried to determine the bind-state at any time in the past. The DNS server was enhanced to support queries of the form key.domain.type-time, where "type" can be "host", "user", "MAC", or "port". The optional time parameter allows historical queries, defaulting to the present time.

[00116] Routes were pre-computed using an all-pairs shortest path algorithm. Topology recalculation on link failure was handled by dynamically updating the computation with the modified link-state updates. Even on large topologies, the cost of updating the routes on failure was minimal. For example, the average cost of an update on a 3,000 node topology was 10ms.

[00117] The implementation was deployed in an existing 100Mb/s Ethernet network. Included in the deployment were eleven wired and eight wireless switches according to the present invention. There were approximately 300 hosts on the network, with an average of 120 hosts active in a 5-minute window. A network policy was created to closely match, and in most cases exceed, the connectivity control already in place. The existing policy was determined by looking at the use of VLANs, end-host firewall configurations,
NATs and router ACLs, omitting rules no longer relevant to the current state of the network.

[00118] Briefly, within the policy, non-servers (workstations, laptops, and phones) were protected from outbound connections from servers, while workstations could communicate uninhibited. Hosts that connected to a switch port registered an Ethernet address, but required no user authentication. Wireless nodes protected by WPA and a password did not require user authentication, but if the host MAC address was not registered they could only access a small number of services (HTTP, HTTPS, DNS, SMTP, IMAP, POP, and SSH). Open wireless access points required users to authenticate through the existing system. The VoIP phones were restricted from communicating with non-phones and were statically bound to a single access point to prevent mobility (for E911 location compliance). The policy file was 132 lines long.

[00119] By deploying this embodiment, measurements of performance were made to understand how the network can scale with more users, end-hosts and switches. In the deployed 300 host network, there were 30-40 new flow-requests per second, with a peak of 750 flow-requests per second. Under load, the controller set up flows in less than 1.5ms in the worst case, and the CPU showed negligible load for up to 11,000 flows per second, which is larger than the actual peak detected. This number would increase with design optimization.

[00120] With this in mind, it is worth asking how many end-hosts this load corresponds to. Two recent datasets were considered, one from an 8,000 host network and one from a 22,000 host network at a university. The number of maximum outstanding flows in the traces from the first network never exceeded 1,500 per second for 8,000 hosts. The university dataset had a maximum of under 9,000 new flow-requests per second for 22,000 hosts.

[00121] This indicates that a single controller could comfortably manage a network with over 20,000 hosts.
Of course, in practice, the rule set would be larger and the number of physical entities greater; but on the other hand, the ease with which the controller handled this number of flows suggests there is room for improvement.

[00122] Next the size of the flow table in the switch was evaluated. Ideally, the switch can hold all of the currently active flows. In the deployed implementation it never exceeded 500. With a table of 8,192 entries and a two-function hash-table, there was never a collision.

[00123] In practice, the number of ongoing flows depends on where the switch is in the network. Switches closer to the edge will see a number of flows proportional to the number of hosts they connect to, and hence their fanout. The implemented switches had a fanout of four and saw no more than 500 flows. Therefore a switch with a fanout of, say, 64 would see at most a few thousand active flows. A switch at the center of a network will likely see more active flows, presumably all active flows. From these numbers it is concluded that a switch for a university-sized network should have a flow table capable of holding 8-16k entries. If it is assumed that each entry is 64B, the table requires about 1MB, or as large as 4MB if using a two-way hashing scheme. Therefore the memory requirements of the present switch are quite modest in comparison to current Ethernet switches.

[00124] To further explore the scalability of the controller, its performance was tested with simulated inputs in software to identify overheads. The controller was configured with a policy file of 50 rules and 100 registered principles. Routes were pre-calculated and cached. Under these conditions, the system could handle 650,845 bind events per second and 16,972,600 permission checks per second. The complexity of the bind events and permission checks is dependent on the rules in use and in the worst case grows linearly with the number of rules.
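The table-size arithmetic in [00123] can be checked directly. Interpreting the 4MB figure as a four-fold over-provisioned two-way hashed table is our assumption; the text gives only the end numbers:

```cpp
#include <cstddef>

// Flow-table memory estimate from [00123]: 16k entries of 64 bytes each.
constexpr std::size_t kEntries = 16 * 1024;
constexpr std::size_t kEntryBytes = 64;

constexpr std::size_t table_bytes() { return kEntries * kEntryBytes; }

// A sparsely filled two-way hash table trades memory for fewer collisions;
// the text quotes up to 4MB, i.e. four times the densely packed size.
constexpr std::size_t overprovisioned_bytes(std::size_t factor) {
    return table_bytes() * factor;
}
```

The dense table lands exactly on the quoted 1MB, which supports the text's conclusion that the memory requirements are modest by Ethernet-switch standards.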
[00125] Because the implemented controller used cold-standby failure recovery, a controller failure would lead to interruption of service for active flows and a delay while they were re-established. To understand how long it took to reinstall the flows, the completion time of 275 consecutive HTTP requests, retrieving 63MB in total, was measured. While the requests were ongoing, the controller was crashed and restarted multiple times. There was clearly a penalty for each failure, corresponding to a roughly 10% increase in overall completion time. This could be largely eliminated, of course, in a network that uses warm-standby or fully-replicated controllers to more quickly recover from failure.

[00126] Link failures require that all outstanding flows re-contact the controller in order to re-establish the path. If the link is heavily used, the controller will receive a storm of requests, and its performance will degrade. A topology with redundant paths was implemented, and the latencies experienced by packets were measured. Failures were simulated by physically unplugging a link. In all cases, the path re-converged in under 40ms, but a packet could be delayed by up to a second while the controller handled the flurry of requests.

[00127] The network policy allowed for multiple disjoint paths to be set up by the controller when the flow was created. This way, convergence could occur much faster during failure, particularly if the switches detected a failure and failed over to using the backup flow-entry.

[00128] Figures 6 and 7 illustrate inclusion of prior art switches in a network according to the present invention. This illustrates that a network according to the present invention can readily be added to an existing network, thus allowing additional security to be added incrementally instead of requiring total replacement of the infrastructure.
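The pre-computed all-pairs routing of [00116] and the re-convergence after a link failure measured in [00126] can be sketched with a classic Floyd-Warshall computation over a toy topology. The patent's controller updates the computation incrementally on link-state changes; recomputing from scratch, as below, is a simplification:

```cpp
#include <limits>
#include <vector>

// All-pairs shortest paths (Floyd-Warshall) over a small topology.
constexpr int kInf = std::numeric_limits<int>::max() / 2;

using Matrix = std::vector<std::vector<int>>;

Matrix shortest_paths(Matrix d) {
    const std::size_t n = d.size();
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                if (d[i][k] + d[k][j] < d[i][j]) d[i][j] = d[i][k] + d[k][j];
    return d;
}

// Triangle topology with redundant paths: links 0-1, 1-2, 0-2, all cost 1.
Matrix triangle() {
    return {{0, 1, 1}, {1, 0, 1}, {1, 1, 0}};
}

// Simulate unplugging a link by making its edge unusable; after
// recomputation, traffic between its endpoints takes the backup path.
Matrix fail_link(Matrix d, int a, int b) {
    d[a][b] = d[b][a] = kInf;
    return d;
}
```

After failing link 0-2, the 0-to-2 route re-converges through node 1 at cost 2, mirroring the redundant-path failover behavior the text describes.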
[00129] In Figure 6, a prior art switch 602 is added connecting to switches 104B, 104C and 104D, with switches 104B and 104D no longer being directly connected to switch 104C. Figure 7 places a second prior art switch 702 between switch 602 and switch 104D and has new workstations 110E and 110F connected directly to it.

[00130] Operation of the mixed networks 600 and 700 differs from that of network 100 in the following manners. In the network 600, full network control can be maintained even though a prior art switch 602 is included in the network 600. Any flows to or from workstations 110A, 110B and 110C, other than those between those workstations, must pass through switch 602. Assuming a flow from workstation 110A, after passing through switch 602 the packet will either reach switch 104B or switch 104C. Both switches will know the incoming port which receives packets from switch 602. Thus a flow from workstation 110C to server 108D would have flow table entries in switches 104D and 104C. The entry in switch 104D would be as in network 100, with the TCP/UDP, IP and Ethernet headers and the physical receive port to which the workstation 110C is connected. The flow table would include an action entry of the physical port to which switch 602 is connected so that the flow is properly routed. The entry in switch 104C would include the TCP/UDP, IP and Ethernet headers and the physical receive port to which the switch 602 is connected. Thus if switch 104C receives a similarly addressed packet but at another port, it knows it should forward the packet to the controller 102. Therefore, because the controller 102 will learn the routing table of the switch 602 during the initial flow setups, it will be able to properly set up flows in all of the switches 104 in the network to maintain full security.

[00131] The network 700 operates slightly differently due to the interconnected nature of switches 602 and 702 and to the workstations 110E and 110F being connected to switch 702.
Communications between workstations 110E and 110F can be secured only using prior art techniques in switch 702. Any other communications will be secure as they must pass through switches 104.

[00132] Thus it can be seen that a fully secure network can be developed if all of the switches forming the edge of the network are switches according to the present invention, even if all of the core switches are prior art switches. In that case the controller 102 will flood the network to find the various edge switches 104. As the switches 104 will not be configured, they will return the packet to the controller 102, thus indicating their presence and locations.

[00133] Appreciable security can also be developed in a mixed network which uses core switches according to the present invention and prior art switches at the edge. As in network 700, there would be limited security between hosts connected to the same edge switch, but flows traversing the network would be secure.

[00134] A third mixed alternative is to connect all servers to switches according to the present invention, with any other switches being of less concern. This arrangement would secure communications with the servers, often the most critical. One advantage of this alternative is that fewer switches might be required, as there are usually far fewer servers than workstations. Overall security would improve as any prior art switches are replaced with switches according to the present invention.

[00135] Thus switches according to the present invention, and the controller, can be incorporated into existing networks in several ways, with the security level varying dependent on the deployment technique, but not requiring a complete infrastructure replacement.

[00136] This description describes a new approach to dealing with the security and management problems found in today's enterprise networks. Ethernet and IP networks are not well suited to address these demands. Their shortcomings are manifold.
First, they do not provide a usable namespace because the name-to-address bindings and address-to-principle bindings are loose and insecure. Secondly, policy declaration is normally over low-level identifiers (e.g., IP addresses, VLANs, physical ports and MAC addresses) that don't have clear mappings to network principles and are topology dependent. Encoding topology in policy results in brittle networks whose semantics change with the movement of components. Finally, policy today is declared in many files over multiple components. This requires the human operator to perform the labor-intensive and error-prone process of manually maintaining consistency.

[00137] Networks according to the present invention address these issues by offering a new architecture for enterprise networks. First, the network control functions, including authentication, name bindings, and routing, are centralized. This allows the network to provide a strongly bound and authenticated namespace without the complex consistency management required in a distributed architecture. Further, centralization simplifies network-wide support for logging, auditing and diagnostics. Second, policy declaration is centralized and over high-level names. This both decouples the network topology from the network policy and simplifies declaration. Finally, the policy is able to control the route a path takes. This allows the administrator to selectively require traffic to traverse middleboxes without having to engineer choke points into the physical network topology.

[00138] While the invention has been particularly shown and described with respect to preferred embodiments thereof, it will be understood by those skilled in the art that changes in form and details may be made therein without departing from the scope and spirit of the invention.
https://patents.google.com/patent/WO2008095010
in reply to Re^2: How to create soap server script? in thread How to create soap server script?

I’m afraid that I don’t understand your point/question. Could you please rephrase it? Elaborate a little?

server script -> hibye.cgi

#!perl -w
use SOAP::Transport::HTTP;
SOAP::Transport::HTTP::CGI
  -> dispatch_to('Demo')
  -> handle;

package Demo;
sub hi { return "hello, world"; }
sub bye { return "goodbye, cruel world"; }
sub languages { return ("Perl", "C", "sh"); }

The above sample code is from ""

Correct me if I'm wrong: if the client calls server -> hibye.cgi -> sub hi, then hibye.cgi will go to package Demo -> sub hi and return "hello, world".

So, my questions are: If there is a WSDL, can I still use the above code? And in the above sample code, the dispatch_to is "hard coded"; can I make it dynamic?

Please advise. Thanks.
http://www.perlmonks.org/?node_id=863714
Nick Gunn wrote:
> Is there a simple way to make a test for this?
>
> A simple way to test for it is to make a simple program that sets a
> value to both sets of MIN/MAX and compile it so that any warning is an
> error. However, I don't think that its simple to implement that to work
> on all platforms from within the build/test mechanism.

There is a semi-portable way to test it: the compiler complained about LDBL_MIN causing a floating point underflow. The only values smaller than LDBL_MIN that could cause an underflow are denormalized numbers. The C99 macro isnormal() returns a nonzero value if and only if its argument has a normal value (i.e., is neither zero, subnormal, infinite, nor NaN). Here's the test case (compiled with Intel C 8.1 on IA64):

$ cat t.c && icc -c99 t.c && ./a.out
#include <assert.h>
#include <float.h>
#include <math.h>
#include <stdio.h>

long double foo (long double);

int main ()
{
    const long double x = 3.3621031431120935e-4932L;
    const long double y = LDBL_MIN;
    printf ("isnormal (%.*Lg) = %d\n"
            "isnormal (%.*Lg) = %d\n",
            LDBL_DIG, x, isnormal (x),
            LDBL_DIG, y, isnormal (y));
    assert (!isnormal (x));
    assert (isnormal (y));
}

t.c(10): warning #239: floating point underflow
    const long double x = 3.3621031431120935e-4932L;
                          ^
isnormal (3.3621031431120935e-4932) = 0
isnormal (3.36210314311209351e-4932) = 1
http://mail-archives.apache.org/mod_mbox/incubator-stdcxx-dev/200508.mbox/%3C42EEE4B3.20503@roguewave.com%3E
Footnotes:

[1] Like for example: Writing a blog, a weekly column, hosting a podcast, participating in a Let’s Play, and writing a novel-sized retrospective on Mass Effect.

[2] We’ve had a lot of debates over whether we need a score system, or how much people will care, or how much of focus it should be, so don’t feel the need to ask those questions in the comments. We’ve gnawed on that argument for hours.

[3] Really? I didn’t even know this was a thing until now.

[4] Imagine if the best way to get the high score in Pac-Man was to simply stay on level 1 and eat the same ten dots over and over until you passed out from exhaustion. That pretty much ruins high-level play by turning it into an endurance test.
108 thoughts on “Good Robot #43: Un-UnSolved Mysteries”

Out of curiosity on bug #1, why use an infinite line rather than something like a collision rectangle?
Edit: Oooh, base case! Even a non-infinite line should do the trick. No need to make it 2D yet. :)

I think at the time of coding, there was no need for it to NOT be infinite. Strictly speaking, I suspect that it was merely “if the player is beyond this point on this axis, the level’s over.” There was no accounting for the fact that the exit would later be moved away from the edge of the map. Counter-intuitively, sometimes something that appears ‘infinite’ is simply ‘less strictly defined.’

My thought exactly. It was probably some sort of simple check like if(player.Position.Y > Level_Bound_Upper)

Now this is my favorite kind of bug. At least, when it results in something good instead of something bad… systems colliding to provide unexpected behavior is one of the great things about organically-grown games. Heck, even when it does result in something negative, it’s often fun just to find. I read a good article about Spelunky bugs recently. Like, if you got punished by Kali you’d get a ball and chain attached to your leg, and the ball was supposed to destroy walls when it got stuck, and it could sometimes do this to walls that weren’t supposed to be destructible, and this allowed you to get somewhere you couldn’t normally. The creators don’t fix bugs like that because they don’t break the game, they just give skilled players another trick they can learn.

Or, in case of Skyrim, they don’t fix it because everyone likes them.

Or, in the case of Skyrim, they don’t fix it because the USKP takes care of it.

It sure is a good thing high-level Pac-Man play isn’t an endurance test.

Yeah, since on the original cabinet the machine fails before human endurance does.

Because it only has a set amount of levels and all of them have solutions, so barring mistakes you can play Pac-Man perfectly forever.
Right, or am I remembering it incorrectly?

The original Pac-Man cabinets have (I think) ten different level layouts — the developers apparently never bothered to make more because nobody ever made it that far in their testing. However, every level can be beaten if you learn the right patterns to make you untouchable by the ghosts. After you beat the tenth level, the eleventh level is the same as the tenth, and the tenth level keeps repeating over and over again while the level counter keeps ticking up. You “win” the game after you beat level 255 — the level counter is stored as an eight-bit binary integer, so the game breaks when the counter overflows. The original Pac-Man ends at level 256 when you hit the kill screen.

And backing up to the point that began this side thread: Yes, it takes some endurance to play for 6 hours to reach board 256. But it also takes a great deal of skill / practice. Which is better than a system that requires ONLY endurance. (Although I probably could have used Donky Kong as an example and been less susceptible to nitpickery.)

Then you’d have to deal with nitpickery over your misspelling of Donkey Kong. *shakes fist at universe*

So what is the number of the rule “you can never prevent nitpickery”?

Whatever number you think, you’re off by one.

Actually, whatever number you think, you’re off by two.

Oooh, Fistful of Quarters was a fun movie.

Commented three times from my PC to no avail, so this might be the fourth time I make the same comment (from phone this time): I read your preview for Good Robot linked on Twitter. Someone should tell the reviewer that Rutskarn’s real name isn’t actually Rutskarn.

It failed. There is no comment here.

… Who said that?

Why is my comment suddenly moved to the side when it’s not part of any nest? Maybe it’s a bunch of ghost comments.

Another important note for people: the summary of the story he uses is taken from the website and was a placeholder.
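The eight-bit level counter described above can be mimicked directly. A quick sketch of the wraparound (obviously not Pac-Man’s actual code, just the arithmetic):

```python
# An 8-bit unsigned counter wraps at 256, like Pac-Man's level counter.
level = 0
for _ in range(256):
    level = (level + 1) % 256  # what incrementing an 8-bit register does

# After 256 increments we are back at 0 -- a state the game was never
# written to handle, hence the famous kill screen.
print(level)  # 0
```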
To be fair, the story in-game is indirect, hard to explain, and not especially important outside of the occasional bit of texture or gag.

Bugs like #1 and #3 are incredibly satisfying to explain, precisely because they make perfect sense when you look at the actual rules, which are themselves reasonable in isolation. The explanation for bug #2 took a second read to really get what was going on, and since it basically is an uninitialized variable problem (the respawn location’s default value was a nonsensical, albeit consistent, one) it isn’t quite as fun to talk about, though that consistency is also why it’s much less irritating to actually find.

I am curious about how much time was spent on building debug tools, like ways to force specific level layouts, or telemetry (like pushing a key when one of these rare bugs happens that saves a text dump of the current and previous level layouts). The usual automation tradeoff applies here: Will the time spent building this tool make up for the saved time on finding these things? Will the tool work in enough scenarios to actually be reusable? I know Warframe had some success with a special screenshot key that stores metadata about level layout and player location, for reporting map holes and other bugs.

As soon as he mentioned the spawn platform I assumed it was a variable initialization problem, though in part because I’ve run into those in the past myself. (“Oh, that’s funny. Because this school doesn’t have location data filled out, it appears in the Atlantic, just south of Morocco. At 0,0 lat/lon.”) My guess, however, was that the spawn platform was ending up at 0,0 and the robot was spawning there, instead of the robot itself just spawning at 0,0.

Now I am interested to see if my absolute favourite class of “unintended behaviour” exists in the Good Robot config language. Let me take a step back.
At a previous job, we were (essentially) running everything “in the cloud” (not as you’d expect us to, by running Amazon EC2 instances, but by having our own internal cloud; and not by configuring machines, but by defining “jobs” and saying “I want N of these”). Writing job configurations is boring, so some rather enterprising individuals wrote a config language to do this. So far, so good.

However, it turns out that when you’re defining a whole bunch of jobs that need to have some things in common and some not (let’s call this “a service”), having things like templates is really handy. But that means that you need to have a way of overriding things in templates. And for reasons, it’s vastly easier implementing this as “lazy evaluation” (basically, this means that until you use a value, it’s not computed). And sometimes you need to enable or disable chunks of these templates via conditionals.

Now, it turns out that “instantiate a template with overrides” is isomorphic to (um, “essentially the same as”) a function call, and if you have function calls and conditionals, you can implement surprisingly complicated stuff. Like something that computes a Mandelbrot fractal, encodes it as a PNG, base64-encodes that, and emits an IMG tag with inline image data (yes, that is actually allowed by the HTML spec). Or, as I once proved by doing so, a small lisp interpreter.

Yes, my favourite class of “unintended behaviour” is “accidental Turing completeness”.

From Shamus’ brief description of it, it doesn’t sound like the GR config language has loops or branching conditionals, which are required for Turing-completeness.

>(yes, that is actually allowed by the HTML spec)
Really, with only HTML, not Javascript?

Yeah. data: URIs. That’s why the base64 step.
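The data: URI trick mentioned above (base64-encoding an image straight into an IMG tag) is easy to demonstrate. A minimal sketch — the payload here is just the PNG magic bytes standing in for a real image, since the point is only the shape of the URI:

```python
import base64

# Build an inline-image tag: "data:" + media type + ";base64," + payload.
fake_png_bytes = b"\x89PNG\r\n\x1a\n"  # just the PNG signature, for show
encoded = base64.b64encode(fake_png_bytes).decode("ascii")
img_tag = '<img src="data:image/png;base64,%s">' % encoded
print(img_tag)
```

A browser given that tag decodes the payload directly instead of making another HTTP request, which is exactly why it is handy for tiny images.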
:) This exists, and is actually kind of handy for teeny tiny images – since those would otherwise cause your browser to download another image (but it will only download like 4-5 items at a time, so the fewer extra items you make it load, the faster your page response) – and otherwise take up too much disk for their contribution (a modern HugeAss disk has like a 4096-byte block size – meaning if you have a 100-byte file, it takes 4k of disk space – 3996 bytes are wasted).

Are all enemies in GR automatically allied with each other? If you can set up enemy AI such that Type A robots shoot at Type B robots, but not Type C robots, I’m pretty sure you could set up a series of robots spawning other robots that fight each other to act as conditional events. “Type N robots alive” would be the allocated memory, which is checked by spawning a type M robot which will die if there are type Ns alive to attack it.

If robots are all on the same team, I could imagine a way to achieve the same “Type As will kill type Bs but not type Cs” if there is friendly fire and healing. Just do an N damage explosion with infinite range, followed by an N healing explosion with infinite range, destroying everything with health <= N.

For anyone who hasn’t studied theoretical computer science in a fair bit of detail, Turing completeness¹ is actually really easy to stumble on accidentally. The quintessential example of an incredibly simple construct which is actually Turing complete is the two-counter machine: a device with some finitely-complex control, and two counters which can each increment, decrement, or be reset to zero. It turns out, with a sufficiently clever encoding scheme and control program, such a device can do anything a Turing machine or computer program can. And if something that simple is Turing complete, it’s easy to imagine that lots of other simple things are also Turing complete. It’s actually fairly common.
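A sketch of the kind of minimal machine described above: a tiny interpreter whose only operations are increment, decrement, unconditional jump, and jump-if-zero. With clever encodings that instruction set is already enough for universal computation; here it just merges one counter into another. (The instruction format is my own, purely for illustration.)

```python
# A minimal counter-machine interpreter. Instructions:
# ("inc", reg), ("dec", reg), ("jz", reg, addr), ("jmp", addr), ("halt",)
def run(program, counters):
    pc = 0
    while True:
        op = program[pc]
        if op[0] == "halt":
            return counters
        elif op[0] == "inc":
            counters[op[1]] += 1; pc += 1
        elif op[0] == "dec":
            counters[op[1]] -= 1; pc += 1
        elif op[0] == "jz":
            pc = op[2] if counters[op[1]] == 0 else pc + 1
        elif op[0] == "jmp":
            pc = op[1]

# Move counter B into counter A (i.e. A = A + B, B = 0):
add = [
    ("jz", "b", 4),   # 0: if B is empty, we're done
    ("dec", "b"),     # 1
    ("inc", "a"),     # 2
    ("jmp", 0),       # 3
    ("halt",),        # 4
]
print(run(add, {"a": 3, "b": 4}))  # {'a': 7, 'b': 0}
```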
¹ Basically, being computationally powerful enough to compute anything a Turing machine can. This more-or-less includes real computers², along with a couple of other standard theoretical constructs like the lambda calculus, and a bunch of things that are not intentionally Turing complete, like the C++ templating language or Magic: the Gathering. Turing completeness is not always a desirable property, because the halting problem makes it impossible to automatically prove a bunch of useful things about the system in question (like “it will run within a certain amount of time”).

² Modulo some assumptions about memory: technically, Turing completeness requires an infinite amount of memory, but if we construct an imaginary equivalent to a real computer that has infinite memory, it’s Turing complete.

Pedantic question regarding “normal PCs” being upgraded to true Turing completeness: Would we also need to have an infinite address space (in the sense of 32-bit PCs only being able to handle up to 4GB of RAM, would we need an infinite-bit version of x86)? Or, if we “just” allow infinite amounts of RAM and/or hard disk space, is it possible for a standard PC architecture to make use of all that (perhaps using a compiler trick to make all the memory addresses relative, rather than absolute)?

Whenever running on actual hardware, Turing machines are assumed to never outstrip the physical capability of the hardware. Trivially, no physical machine can duplicate the behavior of a Turing machine that writes a number of 1s to the tape greater than the number of quarks in the observable universe and then halts.

Oh, sure; but I was thinking of the thought experiment where you could conceive of adding infinite RAM to a normal x86 PC – could the PC even theoretically use that much storage?
Ignore the fact that that much silicon would immediately collapse into a black hole; that there would be an infinite light-travel delay time from one end of the infinite chip to the other; and other mere laws of physics like that. Is the instruction set up to the task?

It wouldn’t be. The instruction set’s design is limited by what would be practical to implement in hardware. In order to try and keep things somewhat sane, most if not all hardware uses memory addresses with a constant size. Even if certain instructions allow you to specify a relative memory address (like jumping backwards 18 bytes in code to make a loop), that just causes it to do basic addition/subtraction on the current relevant memory addresses. If you try to go above the highest possible address (~4GB on x86, ~16EB on x64), it would either cause an error, wrap around to 0, or possibly (although not likely) stay at the max value.

An instruction set, and by extension the hardware that runs it, would need to have this infinite addressable memory stuff kept in mind during design, and would need an insane if not outright impossible addressing system, because infinite memory means that addresses need to be infinitely large as well. Unless the hardware was also infinite, it would need to handle these addresses in constant-size pieces, which would cause processing to take an infinite amount of CPU cycles and time anyway, even with physics-defying instant response from memory. Suffice to say, infinity does not mix well with stuff outside of mathematics, as usual. Sorry if that rambled around a bit, or wasn’t very clear. I’m pretty tired at the moment.

Theoretically, one could encode the address in an infinitely extensible form. An address could, for example, be an array of bytes terminating with a zero-byte, so the further you go from the start of the tape, the longer the addresses get.
I actually thought that too initially, but infinity kind of makes optimizing the smaller addresses pointless, since an infinite number of addresses would be so large that they may as well be considered infinite. You would effectively be optimizing for an immeasurably tiny part of the total address range. I’ve noticed that the properties of something that interacts with infinity have the nasty habit of themselves becoming infinite as well (at least some of the properties, anyway), which makes these thought experiments a bit unfulfilling, since the answer tends to be infinity regardless of the actual question.

Don’t be sorry – that’s exactly the sort of answer I wanted, thanks :-) .

I… I think I read the internal slide deck about the “Mandelbrot PNG through config language” project at work a few hours ago. That’s quite a coincidence. I’m pretty sure I know exactly which language this is. If I’m right, I use it every day. …I’m also not a huge fan of it. :-) It’s pretty good when your jobs and services are small. But if you try to do something complicated, or you try to apply normal programming practice (like only defining things in one place) too much, you end up writing a mess that nobody can understand.

Then there was the other configuration language, for our monitoring tools, in which one … enterprising individual … managed to write an implementation of Conway’s Game of Life.

Can I just say “go perfectlittlehorror”? :) Also, yes, you really want to avoid the “multiple inheritance” situation (if that’s your wish, use the config language named like a flute), because if you use it, strange things will happen. The extraction dudes have a pretty good set of “best practices” in a document, somewhere; you should totes have a look at that.
Oh, there’s also a collection of various references at “go the-tentacles” (including, but not limited to, the rather excellent slide deck by Misha, a former TL of the language, showing ALL SORTS of interesting weird shit, including the undocumented and somewhat useless side-effects in what purports to be a purely functional language).

Thus proving Greenspun’s tenth rule once again.

I’d say that it’s not so much a proof of, as a variation of. The fact that it is an incomplete implementation of “more Scheme than Common Lisp” isn’t accidental.

Re bug #1, I see why you would have done it that way, given that it is so expensive to detect a collision between two rectangles…

Context for non-coders: Keating is being sarcastic.

The thing is, it had ALREADY done rectangle collision when you opened the door by bumping into it. So from the perspective of the first round of changes, I would have been evaluating something I already knew to be true. It wasn’t a speed optimization, it was avoiding “pointless” clutter.

Dunno, the original implementation felt iffy to me in the first place. Without a timeout, you have the threshold permanently sitting there once the door’s opened. If some quirk of the level generator had let you cross it on some other part of the map, you’d’ve ended up with a similar situation. Also, it feels kinda wrong to not check twice when there’s two distinct collisions (the first to open, the second to cross). If both checks were made in the same frame then yeah, it’d be doing the same thing twice, but you have to preserve state across frames, and that makes it two distinct operations in my mind.

As a non-coder… I’m uncertain how/if you solved these bugs. It seems like maybe #2 was more of a problem to fix than leave alone. I’m just curious if the fixes are all obvious to coders, as I wouldn’t really understand the technical aspect of correcting them.
Frequently (but not always) with bugfixing, the challenge isn’t fixing the bug, but figuring out the root cause. Most of these bugs sound like things that are probably reasonably easy to actually fix once you’ve figured out what storm of circumstances is causing them to happen in the first place. Of course, that’s just an ‘it may be easy to fix’ – one of the things you learn with coding is that if you’re not actually staring at the code, making a statement more predictive than “there is a way to fix this given enough time” is unwise, since there are multiple ways to accomplish any given task and you don’t know which one was used.

Yeah, reliably reproducing bugs can often take up the lion’s share of debugging. Or at the very least, that’s usually the hardest part. The worst bugs are the ones that disappear when you actually look into them, never to return again. Because even if it appears the problem has been solved, your mind is continually thinking “When is it gonna come back? It HAS to come back.” But once you can reproduce the exact circumstances in which a bug happens, the causes and results of said circumstance are generally pretty obvious.

And then there are bugs which happen sporadically, that no one ever gets to reproduce predictably, and whose cause never gets found, but at least you find a way to automatically detect the problem afterwards and repair the mess. Or the variant “when X happens, Y breaks”, where you have no idea why X is happening but you can at least prevent Y from breaking.

One of my best achievements was tracking down a really rare crash bug that only affected PS3 (IIRC) and was never seen on XBox 360 or PC. It happened maybe 1 time in 10000 or less in the menu system, and the core dumps were never terribly informative. One of those “it doesn’t matter that much, we can’t meaningfully reproduce it and have no idea what is causing it” cases, which got fixed in the traditional way: I stumbled across the answer whilst looking at something else entirely.
Thanks, guys, for the responses. Makes it sound similar to my own living (medicine).

The main difference is that it’s much easier to rip code apart, instrument it to insane levels, put it back together, and then start digging through the data. Humans, I am given to understand, do not really tend to work very well once you’ve done that.

Just read all of this recently and I’m now super-hyped for this game! That review you linked on Twitter somehow reminded me of a game series I enjoyed a lot in my childhood: “Ratchet and Clank”. Sure, it is 3D instead of 2D, and on different planets and whatnot, but I think this theme of one vs. many, dodging or shooting down incoming fire while also dishing out damage to your enemies in the process, and especially a big company that sells you lots of fine tools for your task via vending machines, are big similarities… Okay, I really have to get rid of all that nostalgia or I will never do this game justice.

Y’know, I hadn’t caught that comparison, but now that you make it I find myself salivating over the game’s upcoming release even more. I love the Ratchet and Clank series and, being a short-sighted human, just the idea of Good Robot playing remotely like one of my favorite series has me more interested in it than I already was. Which was already quite a lot, I assure you. (please just release this game already ;~;)

I recently replayed the Ratchet and Clank games I played in my childhood (1, 2 and 3) and boy, are they hard sometimes… I mean, not soul-crushingly hard, but rather a “Get good faster!” kind of hard, like Dark Souls. I still miss Ratchet: Gladiator, which for some reason I enjoyed most of all the ones I played, but, unfortunately, there is no PS3 version to be found.

Don’t be silly. No one cares about bug fixing. What you should do now is make as many DLCs as possible and cram them into the first month after release, so that you can maximize the profits.

No, see, making DLC is for suckers.
What you want to do is make marketing pitches for DLC and sell a season pass; then you can get around to making the DLC nine months later.

Or just cut a chunk out of the game and sell it as DLC. Or force the player to buy bullets for real-world currency.

Premium ammunition, of course, which does 10% more damage than the kind found by playing the game.

But have the bosses receive only one tenth the damage from regular ammo, while the premium kind does normal damage to them.

I cast my vote for robot armor!

Actually, rather than making DLCs, you can much more easily just declare half the weapons and half the levels DLC and be done already!

When you’re bug hunting, don’t you typically have screen-recording software up, so that you can go back and figure out “When did my score explode?” or “What happened before I went into the next room?”

In my experience, no. There are surprisingly few bugs I’ve had to track down where that would have helped. Good debug logging is much, much more useful.

The uninitialized variables theory in #3 confused me for a minute, because in D (which I will keep plugging until Shamus tries it out, because it addresses almost all of his complaints about C and C++), integer variables are initialized to 0 on declaration, and floating-point objects are initialized to NaN. (I quickly learned that if meshes aren’t displaying, something probably wasn’t initialized and all of the vertexes are being fed to OpenGL as NaN, with undefined behavior. In practice, OGL generally interprets them as zeros.) This means the scores would have been lower than expected. It doesn’t mean you shouldn’t still always initialize your variables, even when you want them to be 0 or NaN, but it helps reduce undefined behavior and makes tracking bugs down easier.

Which is where you run into the classical problem that any change will affect someone’s workflow negatively. Initializing numbers with garbage can be useful, because it’s immediately obvious (usually) that such a number is garbage.
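The NaN-as-sentinel behavior described for D above is easy to see in any IEEE-754 language; a quick sketch in Python (the variable names are illustrative):

```python
import math

# NaN propagates through arithmetic, so one uninitialized value poisons
# everything computed from it -- which is exactly why it makes a good
# "this was never set" sentinel for floats.
uninitialized = float("nan")
position = uninitialized * 0.5 + 10.0  # still NaN: the taint survives
print(math.isnan(position))            # True

# Zero, by contrast, silently looks like a plausible value:
uninitialized_int = 0
score = uninitialized_int + 100        # nothing flags the mistake
print(score)                           # 100
```

Note that NaN compares unequal even to itself, so the only reliable way to detect it is a dedicated check like `math.isnan`.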
Comparably, zero looks like a pretty clean number, and may fool people into thinking that it’s the right number, when really it’s also a garbage value.

Gotta agree with Xeorm. Why are floats initialized to NaN and ints to 0? That’s horribly inconsistent! As Peter says below (paraphrasing), zero is not null. This is why I dislike the desire to prevent errors by having the language “fix” them instead of highlighting them to the programmer. The problem with an uninitialized variable isn’t that it contains random garbage. The problem is it does not contain a meaningful value. When you “initialize” it to zero, you fix the symptom, since it no longer has *random* garbage. You have done nothing to fix the underlying problem, however, since zero is still not a meaningful value. On the contrary, you have now hidden what was a clear error in something that might or might not be an error.

This is one of the first things I noticed when I first started writing Go code. You immediately get errors that you haven’t initialized your variables on lines X, Y and Z. :)

As often as not, zero is a good default value. It works fine in C#, and if you don’t want it to default to zero, you set it to something else when you define it. e.g.

    public class Enemy {
        int mScoreValue = -1;
        int mMaxMinions;            // Defaults to zero
        float mMinionCreationRate;  // Also defaults to zero
        int mMaxHitPoints = 100;
        int mCurrentHitPoints = 100;  // (a field initializer can't reference mMaxHitPoints)
        string mName = "Unknown";

        public int GetScoreValue() {
            if (mScoreValue < 0) {
                Debug.LogWarning("Score value not set in enemy " + mName + ", defaulting to HP " + mMaxHitPoints);
                return mMaxHitPoints;
            }
            return mScoreValue;
        }
        // etc
    }

And all without needing a .h file.

Last time I worked in C++, we wrote our own memory allocation system that set memory to 0xCECECECE when allocated, and 0xCDCDCDCD when freed – this at least gave us some clue as to what we were looking at when we got a crash, and made the crashes more consistent than random garbage values would.
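The fill-pattern trick in the comment above can be sketched outside C++ as well. Here is a toy illustration (the pattern values match the comment; the allocator itself is entirely hypothetical):

```python
# Toy illustration of debug fill patterns: freshly allocated memory is
# filled with 0xCE bytes, freed memory with 0xCD bytes. A crash dump
# full of 0xCDCDCDCD then reads as "use-after-free" at a glance.
ALLOC_FILL, FREE_FILL = 0xCE, 0xCD

def allocate(size):
    return bytearray([ALLOC_FILL] * size)

def free(block):
    for i in range(len(block)):
        block[i] = FREE_FILL

def looks_freed(word):
    # A 4-byte value read from freed memory shows the telltale pattern.
    return word == bytes([FREE_FILL] * 4)

block = allocate(8)
free(block)
print(looks_freed(bytes(block[0:4])))  # True: this memory was freed
```

The point of choosing distinctive, non-zero patterns is exactly the one made in the thread: garbage that looks like garbage is far easier to spot than garbage that looks like a clean zero.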
I may have to remember that. D also allows compile-time initialization of primitive variables, so long as the initial value can be calculated at compile-time. (Naturally.)

Mostly because floating-point values have several flags that integers don’t, partly for compatibility reasons. As Matt Downie pointed out, though, 0 is a pretty definitive value if you’re expecting a positive one. The major downside is that it won’t propagate through your code like a NaN will, alerting testers that a floating-point variable somewhere in the chain wasn’t initialized.

Also, to be clear, I am in no way suggesting Shamus should be switching Good Robot from C++ to D. That would add at least 2 years to the development time, and I’d like to play it some time today. I’m just saying that I think if Shamus were to, say, try using D for his next experimental project, we’d be treated to a series of posts about how nice it is to, among other things, have strings handled internally to the programming language before even importing any libraries. Half of the comments I’ve seen Shamus make about programming are complaints about shortcomings in C/++ which D fixes. Fully half, if not more.

Initializing to zero has the advantage of consistency; a random garbage value can be anything, and may sometimes be the correct one, thus producing a heisenbug. I generally prefer to have the compiler throw an error instead and manually set things to zero if I want them to be zero, but that’s not always practical. The big one I remember from Java is arrays; it has to allocate the memory for all the elements when it creates the array, even if you don’t want to fill it in on that exact line. Though I honestly forget if Java actually initializes integer array contents to zero automatically; as a matter of coding practice, when I want an array filled with zeros I fill the array with zeros. Best practices vs. language conveniences.
I typically provide explicit initialization for all of my variables, even the integers that I want initialized to zero. When in doubt, don’t leave decision making up to the compiler. But yes, I think the basic theory is that “Oh, this zero shouldn’t be a zero” is easier to parse than “I think this value might not be what it’s supposed to be.”

I am a fan of the “Treat Warnings As Errors” setting in Visual Studio. The build will fail if you even leave a single integer uninitialized.

Probably wouldn’t apply in this situation – I believe that this is data set in text files.

Anybody who wants to write reliable code should have that flag enabled on their compiler. If I were the head of a software development organization (instead of one guy working solo), I would mandate it for all check-ins and build tests.

If only the library code we were using wasn’t riddled with warnings…

Right, I keep forgetting that we don’t always work in a world where we have control over everything. (Which is part of the reason I opted to create my own game engine rather than learn how to use Unity or UE4.)

And here is where we have an entirely illustrative difference between “zero” and “null”. And why having both is handy.

And Shamus is thinking of about eight ways he could have done the structures for the robots read in from the file slightly differently that would have avoided the entire problem, and telling himself it really is too late to go back and change it.

This is actually the core of the bug – I explain it a bit more below, but my core wrong assumption was that the scripting parser treated undefined variables as 0, as is common in the projects I’ve worked on so far, but not vice versa. The former makes perfect sense, to prevent scripts from easily crashing the project if the scripter makes a typo or leaves out a definition, but the latter is never (to my knowledge) done on purpose.
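The “undefined variables read as 0” convention described above is typically just a default-on-lookup in the parser. A hypothetical sketch of both directions, including the surprising one (these names are mine, not Good Robot’s actual parser):

```python
# Sketch of the two defaulting behaviors discussed in the thread.
def get_stat(robot_def, key):
    # Common and sensible: a stat the scripter never wrote reads as 0,
    # so a typo or missing definition can't crash the game.
    return robot_def.get(key, 0)

def get_score(robot_def):
    # The surprising direction: a zero (or missing) score silently
    # falls back to hitpoints -- fine for ordinary bots, catastrophic
    # for "invincible" bots given absurdly high HP.
    score = get_stat(robot_def, "score")
    return score if score != 0 else get_stat(robot_def, "hp")

minion = {"hp": 1_000_000}  # invulnerable-by-huge-HP hack, no score set
print(get_score(minion))    # 1000000 points per minion kill
```

The asymmetry is the whole bug: missing-reads-as-zero protects the scripter, but zero-falls-back-to-HP silently repurposes a sentinel as data.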
What made this bug so damnably hard to unravel was finding out exactly where this huge injection of score was coming from, and in such cleanly even numbers. You kill tens of thousands of robots over the game, and somewhere between the start (where you’re looking the hardest) and the end (where you realize you’ve subconsciously stopped looking at your score gains) a single instance of robots dying spat out this oddly specific result.

Correct me if I’m being dense, but do you not manually spawn the enemies you edit and shoot them to test your new scripts? Is checking the score not part of that testing?

Manual spawning and the like aren’t part of the game engine as it stands, so the way to test robots is to put them in a zone’s list of enemies, spawn into that zone via the console, and play it until you encounter them “naturally”. Needless to say, there are lots of enemies in a zone and ample opportunity to miss the score counter jumping on a specific one when you’re testing for other behaviors. It’s a good point, though: if these new bots had needed more than one or two tests to verify that they (otherwise) worked as intended, it would have been more obvious. Shamus’s scripting setup makes it so easy to create new robots that the only real testing they need is a handful of encounters. Fine-tuning their balance is done all together with the other bots, in passes, as part of my playtesting runs.

NO! Please tell me you haven’t been working for all these months without knowing you could spawn robots from the console! Damn it. I remember I wrote up a document detailing all the console commands at some point. I hope I didn’t leave this one out. :( Anyway, it’s pretty simple. Just open up the console and:

spawn robotname [number]

If you omit number, it just spawns one. Just for fun: Try spawning 100 robots sometime. It is stupid / hilarious.
Oh man, after checking out the full command list, it looks like I somehow guessed the majority of them through sheer inference without ever seeing the document. Well, at least that’s another good game development story to tell!

On the plus side… free mandatory playtesting hours?

Shamus, are these console commands going to make it into the launch game? I was always a fan of messing around with cheats in games (special shoutout to the extremely powerful console of Jedi Outcast) and I’m disappointed by how modern games seem to have been moving away from this kind of thing.

Yeah, upon further reflection after asking my question, I remembered all the messes I’ve made trying to make scripting systems do things the original creators never intended. I can totally understand the issue. It’s interesting, because Shamus has said before that he obsesses over details where someone using his scripting will do something he never intended, but something still managed to squeak by. Makes me feel less bad for similar mistakes I have made when I try to make a square peg fit in a round hole.

As an aside, what was the actual fix for #3? Assign the minions a score of 1? Have a programmer fix the code so it treats NULL differently from 0? Change the default behavior so it doesn’t use HP any more?

Multi-fix:

* Robots can now be made invincible by assigning them negative HP. Furthermore, the game won’t show damage counters when hitting invincible bots, and it will make a different sound when you shoot them. Also, invincible robots won’t shake, flinch, blink, or let off sparks like robots normally do when damaged.

* The default-to-hitpoints-if-score-is-zero “feature” was removed.

Wild guess: the robot parser does something like:

    if command.has_prefix("score:") {
        score_set = true;
        robot_class.score = get_score(command);
    }
    ...
    if (!score_set) {
        robot_class.score = robot_class.hp;
    }

The best kind of correct.

Well, technically yes! What’s the point of a score?
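The multi-fix described a few comments up can be sketched as a pair of rules. This is a hypothetical restatement of the described behavior, not the game’s actual code:

```python
# Hypothetical sketch of the fixed rules: negative HP marks a robot
# invincible, and score no longer falls back to hitpoints.
def is_invincible(robot_def):
    return robot_def.get("hp", 0) < 0

def score_for_kill(robot_def):
    return robot_def.get("score", 0)  # missing/zero score stays zero

turret = {"hp": -1}           # boss accessory: cannot die
print(is_invincible(turret))  # True
print(score_for_kill(turret)) # 0 -- no more million-point minions
```

Using the sign of HP as the invincibility flag also removes the original hack entirely: there is no longer any “absurdly large HP” number sitting around to leak into the score.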
Dunno what you’re talking about, Shamus: I send terse, passive-aggressive emails to my career planning and personal development team all the time. ‘Check out this interesting job listing, you procrastinating asshole. And don’t forget to sign up for that course, or you’re directly responsible for ruining my life.’ I never get any replies, though.

If you want some more background on Bug #3, the story actually goes a bit further: The boss of the 5th level of Good Robot generates invulnerable “child” turrets that orbit its body as a means of attack, and which die reactively when their mother does. This actually presents a few problems:

The first issue is the invulnerability. As a scripter for projects this size, I find it’s sometimes necessary to do things in ways which put the least burden on the programmers – at least while they’re still developing core features. A quick and dirty hack to accomplish something simple nobody will ever see is sometimes better than another three entries on the busiest developer’s task list – especially when, in addition to busy, he’s sleepless, sick, and writing about a game that features Kai Leng. In short, the quickest, dirtiest hack I could conceive of was robots with health so high it would be unreasonable to ever kill them. And hell – even if they did die, the robot was worth 0 points, right?

(Interesting side note: there are many other robots that spawn children worth 0 points, but they all have HP values in the 5-15 range – exactly the same amount as normal ‘bots give in score points on death. It’s nearly impossible to tell the bug is happening from killing them.)

The second issue is one of game design. There is a single type of robot that can’t be killed in the entire game, and it’s an accessory to a boss. It has no visible HP bar, and is attached to something that takes a very long time to die. This is a recipe for wasted effort, confusion, and just generally poor game feel. How do we tell the player that it’s immune to damage?
Initially, we made these enemies white to match the game’s visual language (for instance, red = homing and yellow = projectiles you can shoot down), but players still had no frame of reference for what white objects meant on their first encounter. For quite a while the turrets shifted through various forms of confusing. We needed this mechanic to be commonly seen and explored before the boss, so that by the time players entered the room they would understand the implicit meaning of white, orbiting turrets. The rest follows naturally: I took the boss turrets, changed them into small firing platforms, and attached them to a new type of robot I placed at the start of the level. These tiny minions immediately get the message across after a few seconds of fire fails to kill them, and visibly connect to the thing you need to shoot instead. And so a devilish bug is spawned: invulnerable robots cause no issues because realistically they never die; minions are (supposed to be) worth nothing; but create invulnerable minions…

It looks like it’s actually a good thing you decided to break the rules like that, thereby creating an obvious bug encompassing a couple of much more subtle ones which may never have been detected otherwise.

Soo, wouldn’t Bug #1 also have produced scenarios where you shoot Door 1, then shoot Door 2 (so both are open), then reconsider which one you want, and then the game genuinely has no idea which one the player just picked (i.e. it will probably take the last one being shot at)? You don’t actually say how it’s been solved, but I assume you do have rectangle collision in there now?

First question: Yes, that would have been possible. But if someone did that, it would be a coin-flip which door “won” and triggered the level-change. (The winning door would have been whichever one was created first, which there’s no way to know without looking at the state of memory.)
To see the bug under those conditions:

1) You’d have to open two doors on the same wall at the same time. For some reason.
2) Go through one of them before either one closes. (Tricky, since doors snap shut very quickly. In fact, pulling this off might require some dedication to get the timing just right.)
3) You’d need to lose (win?) the coin-toss between the two doors, so that the door you chose wasn’t the door that was triggered.
4) The door you chose couldn’t be a mystery door, which by definition you don’t know where it leads.
5) This would need to take place late enough in development that there WAS a door icon, and after we’d designed the icons and settled on their meanings.
6) You’d have to remember what you chose, know what that specific door icon meant, and then notice that where you ended up doesn’t match.

But you are correct, the bug could have been discovered this way. To answer the second question: yes, now it uses a bounding rectangle that covers the passage behind the door.

So how *did* you figure out what was causing bug #1? It seems like one of those things that require a moment of pure inspiration to hit upon and I would *love* to know the context. If I was the one writing the code, I probably would have started with the level-transition code and worked my way back from there, since the issue is “level transitions happen at inappropriate times.”

I kind of wonder if it would have been possible to leave the “open door” code as it was, but move the rectangle so that it floats in front of the door instead of in the same space as the door, to get an “automatic sliding door” kind of effect.

ohhkay! alright, didn’t know that the doors snap shut again. That alone makes my scenario much less likely to happen. Thanks for the explanation.

To my surprise, I had a dream about Good Robot last night. I will say you need to address the balance issues in the robot that splits in two, whose children split in two, until all available space is filled.
When the process starts just off-screen, you can often end up already too far behind to deal with it with your default starting weapons by the time you notice it. It shifts the entire focus of the game from “shoot all the bad robots” to “watch out for this particular robot and make sure you kill it on sight at all costs, and incidentally, do all the other things too when you have spare time”. Also, it strains suspension of disbelief that the robot that tries to absorb Good Robot by surrounding it would be able to continue successfully doing so while Good Robot is unloading its full arsenal directly down the absorbing robot’s, errr, throat, for lack of a better term. I think for gameplay balance, it really needs to slow the process down or even stop it from happening entirely. If you’re basically going to force a game over whenever you end up within a certain distance of this robot, you might as well save the player time and just make it an instant death rather than a long doomed struggle. Thanks. If you can just address these issues before release I’ll definitely purchase it.

Exponential growth really is a big problem. How about only making the first robot able to split, and if that one dies, the next one, so that there is always a way the robot can still attack you, but none that gets too out of hand just because you didn’t notice it quickly enough? Of course, the time it needs to split would have to go up, but since there is only one more with N robots, instead of N more, it’s still manageable as long as the reproduction time is not shorter than the time you need to kill one off.

Love all these posts, and they make me want to play the game more and more.

+1 to this

Hey Shamus, I came out of the shadows where I have been lurking forever as an avid fan (seriously, whenever I forget to check this site for a week my girlfriend asks me why I’ve been glued to my screen for 2 hours reading text and comment threads) to tell you two things: 1.
I will buy Good Robot on launch day and I will get as many people as I think will like the game to buy it on launch day too. I never knew how important launch day was on Steam, so thank you for enlightening me.

2. In the spirit of the comment thread: minor nitpick over spelling. In the dev blog link of Pyrodactyl’s Good Robot page I found this sentence: “It’s slipped my mind at them moment.” Probably should be THE moment. Better correct it before the 5th of April or you might get a 2% decreased sales figure due to a grammar nazi boycott!

Points are in reverse order of importance, obviously. Keep up the good work and know that I am a great fan of almost all the content this site has delivered over the years.

Kasper
On one of our client's Ubuntu 10.04 machines, I needed to upgrade Python from 2.6 to 2.7. Unfortunately, after installing Python 2.7 from apt, virtualenv (version 1.4.5) did not work correctly. This bug was fixed in a newer virtualenv version; however, there were no Ubuntu packages available. I thought about trying something else: why not install all the software locally in my home directory on the server? When virtualenv is used to create a new environment, it copies the Python executable into the virtualenv directory.

First I install pythonbrew, which is great software for installing many different Python versions in a local directory.

$ curl -kL | bash

Then I activate pythonbrew with:

$ source "$HOME/.pythonbrew/etc/bashrc"

And install the Python version I want:

$ pythonbrew install 2.7.3

The installation took a couple of minutes. The script downloaded the tarball with the Python source code for the required version, compiled it, and installed it. It was writing all the information into a log file, which I was watching by running the command below in another console:

$ tail -f $HOME/.pythonbrew/log/build.log

You can also add the following lines to ~/.bashrc to activate this Python after starting a new bash session:

[[ -s "$HOME/.pythonbrew/etc/bashrc" ]] && source $HOME/.pythonbrew/etc/bashrc
pythonbrew switch 2.7.3

I run the command below to activate the pythonbrew script:

$ source $HOME/.pythonbrew/etc/bashrc

The Python version changed:

$ python --version
Python 2.6.5
$ source $HOME/.pythonbrew/etc/bashrc
$ python --version
Python 2.7.3

As you can see, Python from my local installation is used:

$ which python
/home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/python

The only thing left is to create the virtual environment for the new Python version.
I use virtualenvwrapper for managing virtualenvs, so the obvious way to create a new environment is:

$ mkvirtualenv --no-site-packages envname

Unfortunately, it creates an environment with the wrong Python version:

$ which python
/home/szymon/.virtualenvs/envname/bin/python
$ python --version
Python 2.6.5

So let's try to tell virtualenvwrapper which Python binary should be used:

$ deactivate
$ rmvirtualenv envname
Removing envname...
$ mkvirtualenv --no-site-packages -p /home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/python envname

Unfortunately this ended with an error:

Running virtualenv with interpreter /home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/python
New python executable in envname/bin/python
Traceback (most recent call last):
  File "/home/szymon/.virtualenvs/envname/lib/python2.7/site.py", line 67, in <module>
    import os
  File "/home/szymon/.virtualenvs/envname/lib/python2.7/os.py", line 49, in <module>
    import posixpath as path
  File "/home/szymon/.virtualenvs/envname/lib/python2.7/posixpath.py", line 17, in <module>
    import warnings
ImportError: No module named warnings
ERROR: The executable envname/bin/python is not functioning
ERROR: It thinks sys.prefix is '/home/szymon/.virtualenvs' (should be '/home/szymon/.virtualenvs/envname')
ERROR: virtualenv is not compatible with this system or executable

The problem is that the virtualenv version used by virtualenvwrapper doesn't work with Python 2.7. As I wrote at the beginning, there is no newer version available via apt. The solution is pretty simple: let's just install newer virtualenv and virtualenvwrapper versions using pip.

$ pip install virtualenv
Requirement already satisfied: virtualenv in /usr/lib/pymodules/python2.6
Installing collected packages: virtualenv
Successfully installed virtualenv

As you can see, there is a problem: the pip from the system installation was used. There is no pip installed in my local Python 2.7 version.
However, there is easy_install:

$ which pip
/usr/bin/pip
$ which easy_install
/home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/easy_install

So let's use it for installing virtualenv and virtualenvwrapper:

$ easy_install virtualenv virtualenvwrapper

When I checked the whole installation procedure once again, it turned out that there had been a network error while downloading pip, but unfortunately I hadn't noticed it. If everything is OK, then pip should be installed, and you should be able to install virtualenv using pip as well:

$ pip install virtualenv virtualenvwrapper

Cool, let's check which version is installed:

$ which virtualenv
/home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/virtualenv
$ virtualenv --version
1.8.4

Before creating the brand new virtual environment, I have to activate the new virtualenvwrapper. I have the following line in my ~/.bashrc file:

source /usr/local/bin/virtualenvwrapper.sh

I just have to change it to the line below and log in once again:

source /home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/virtualenvwrapper.sh

Let's now create the virtual environment using the brand new Python version:

$ mkvirtualenv --no-site-packages -p $HOME/.pythonbrew/pythons/Python-2.7.3/bin/python envname

I want to use this environment each time I log into this server, so I've added this line to my ~/.bashrc:

workon envname

Let's check if it works. I logged out and logged back in to my account on this server before running the following commands:

$ which python
/home/szymon/.virtualenvs/envname/bin/python
$ python --version
Python 2.7.3

Looks like everything is OK.
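The same sanity check can also be done from inside Python itself. This snippet (mine, not from the original post) prints the interpreter's version and prefix, which is exactly the value that virtualenv's "It thinks sys.prefix is …" error was complaining about:

```python
import sys

# The running interpreter's version and installation prefix.
version = "%d.%d.%d" % sys.version_info[:3]
print(version)     # e.g. 2.7.3 inside the new environment
print(sys.prefix)  # e.g. /home/szymon/.virtualenvs/envname

# In an activated virtualenv the prefix differs from the base prefix
# (Python 3 exposes sys.base_prefix; old virtualenv set sys.real_prefix).
base = getattr(sys, "base_prefix", getattr(sys, "real_prefix", sys.prefix))
in_virtualenv = sys.prefix != base
print(in_virtualenv)
```

Running this inside `envname` should show the 2.7.3 version string and a prefix under ~/.virtualenvs.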
1. Introduction to docker

1.1 what is docker

Docker is an open source application container engine which allows developers to package their applications and dependencies into a portable container and then publish it to other machines. A container can be simply understood as a restricted runtime environment that isolates all resources (unless access is explicitly allowed). UNIX has been using containers to isolate resources for a long time, but using containers directly is difficult: configuration is complex and error prone. Docker builds on existing container technology and provides a consistent construction scheme based on best practices, so that we can easily use containers to isolate resources while gaining stronger security. At present, docker runs natively on Linux, and can also run through a separate virtual machine in OS X and Windows environments.

1.2 running software in isolated containers

The container structure of docker running on Linux is shown in the following figure:

Command-line tools such as the docker CLI run in the part of memory called user space, just like other programs running on the operating system. The figure also shows three running containers, each running as a child process of the docker daemon and encapsulated in a container. A program running in a container can only access the memory space and resources inside the container (unless it is explicitly allowed to access resources outside the container).

1.3 distributing containers

docker can execute, copy, and easily distribute containers. Docker wraps traditional container technology with a packaging and distribution mechanism; the component that plays this distribution role is called an image. The relationship between an image and a container is similar to the relationship between a class and an instance: multiple instances can be created from a class, and each instance has its own resources.
In the same way, multiple containers can be created from one image, and the containers do not interfere with each other (leaving aside special cases such as container links).

2. Docker image

2.1 introduction to docker image

A docker image is an entity existing on the system. Inside the image there is a simplified operating system plus the files and all dependencies required to run the application (the image does not contain a kernel; containers share the kernel of the docker host).

An image repository is a named bucket used to store images; images are stored centrally so that people can conveniently obtain the ones they need. Image repositories are managed by an image registry. The registry used by the docker client is configurable, and the default is Docker Hub. Within each image repository, the tag is the main way to identify a specific image, and also a convenient way to create useful aliases. A tag can only be applied to a single image in a repository, but one image can have multiple tags.

The following figure shows the relationship among the registry, the image repositories, and the images: the registry manages multiple image repositories, and each repository can contain multiple images.

2.2 common operations on docker images

2.2.1 search and pull images

docker images are stored in image repositories, from which you can search for and pull images. The following is an example of searching for an image; the output columns are:

- Name: the name of the image repository
- Description: the description of the image
- Official: whether it is officially released by docker
- Stars: similar to stars on GitHub, indicating popularity
- Automated: whether it is an automated build
$ docker search ubuntu
NAME                             DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
ubuntu                           Ubuntu is a Debian-based Linux operating sys…   10873   [OK]
dorowu/ubuntu-desktop-lxde-vnc   Docker image to provide HTML5 VNC interface …   422                [OK]
rastasheep/ubuntu-sshd           Dockerized SSH service, built on top of offi…   244                [OK]
consol/ubuntu-xfce-vnc           Ubuntu container with "headless" VNC session…   217                [OK]
ubuntu-upstart                   Upstart is an event-based replacement for th…   108     [OK]
...

Pull an image. You can specify the tag to pull; if you do not specify one, the image tagged latest is pulled by default:

$ docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
d51af753c3d3: Downloading [========================================>          ]  23.13MB/28.56MB
fc878cd0a91c: Download complete
6154df8ff988: Download complete
fee5db0ff82f: Waiting
d51af753c3d3: Pull complete
fc878cd0a91c: Pull complete
6154df8ff988: Pull complete
fee5db0ff82f: Pull complete
Digest: sha256:747d2dbbaaee995098c9792d99bd333c6783ce56150d1b11e333bbceed5c54d7
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest

Pull an image with a specific tag:

$ docker pull ubuntu:18.04
18.04: Pulling from library/ubuntu
23884877105a: Pull complete
bc38caa0f5b9: Pull complete
2910811b6c42: Pull complete
36505266dcc6: Pull complete
Digest: sha256:3235326357dfb65f1781dbc4df3b834546d8bf914e82cce58e6e6b676e23ce8f
Status: Downloaded newer image for ubuntu:18.04
docker.io/library/ubuntu:18.04

2.2.2 view images

To see which images are available:

$ docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       latest   1d622ef86b13   2 weeks ago   73.9MB
ubuntu       18.04    c3c304cb4f22   2 weeks ago   64.2MB

To view image details:

$ docker inspect ubuntu:18.04
[
    {
        "Id": "sha256:c3c304cb4f22ceb8a6fcc29a0cd6d3e4383ba9eb9b5fb552f87de7c0ba99edac",
        "RepoTags": [
            "ubuntu:18.04"
        ],
        "RepoDigests": [
            "ubuntu@sha256:3235326357dfb65f1781dbc4df3b834546d8bf914e82cce58e6e6b676e23ce8f"
        ],
        "Parent":
"",
        "Comment": "",
        "Created": "2020-04-24T01:07:05.743682549Z",
        "Container": "f607979929fd999f71996754275dc5058e7345748f52d58ba72b6baf449c1fb2",
        ... (the output is long and is truncated here)
    }
]

2.2.3 making images

There are two ways to create an image: one is to generate an image from a local container, and the other is to build an image from a Dockerfile. See section 6 for details.

2.2.4 image tags

#Add a new tag ubuntu:v18 to the existing ubuntu:18.04
$ docker tag ubuntu:18.04 ubuntu:v18
#Viewing the images, we find both the new tag ubuntu:v18 and ubuntu:18.04.
#In fact, docker tag just creates a tag that points to the same image as the original tag.
$ docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       latest   1d622ef86b13   2 weeks ago   73.9MB
ubuntu       18.04    c3c304cb4f22   2 weeks ago   64.2MB
ubuntu       v18      c3c304cb4f22   2 weeks ago   64.2MB

2.2.5 delete images

To delete an image, note that all containers started from the image must be stopped and removed before the image can be deleted.

#To delete an image, specify its tag; otherwise the image tagged latest is deleted by default. You can also delete by image ID.
$ docker image rm ubuntu:18.04
Untagged: ubuntu:18.04
#Viewing the images confirms it was deleted
$ docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       latest   1d622ef86b13   2 weeks ago   73.9MB
ubuntu       v18      c3c304cb4f22   2 weeks ago   64.2MB

2.3 image distribution

2.3.1 distribution through an image repository

Upload the image to an image repository, and users pull the image from it.

Usage:  docker push [OPTIONS] NAME[:TAG]
Push an image or a repository to a registry
Options:
      --disable-content-trust   Skip image signing (default true)

2.3.2 manual distribution

An image is an entity that can be stored on disk and transferred, for example on a USB flash drive.
$ docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       latest   1d622ef86b13   2 weeks ago   73.9MB

#Save the image ubuntu:latest as ubuntu.tar, so the image can be distributed as a file
$ docker save -o ubuntu.tar ubuntu:latest

#Delete the image ubuntu:latest
$ docker image rm ubuntu:latest
Untagged: ubuntu:latest
Untagged: ubuntu@sha256:747d2dbbaaee995098c9792d99bd333c6783ce56150d1b11e333bbceed5c54d7
Deleted: sha256:1d622ef86b138c7e96d4f797bf5e4baca3249f030c575b9337638594f2b63f01
Deleted: sha256:279e836b58d9996b5715e82a97b024563f2b175e86a53176846684f0717661c3
Deleted: sha256:39865913f677c50ea236b68d81560d8fefe491661ce6e668fd331b4b680b1d47
Deleted: sha256:cac81188485e011e56459f1d9fc9936625a1b62cacdb4fcd3526e5f32e280387
Deleted: sha256:7789f1a3d4e9258fbe5469a8d657deb6aba168d86967063e9b80ac3e1154333f

$ docker image ls
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE

#Load the image from ubuntu.tar
$ docker load -i ubuntu.tar
7789f1a3d4e9: Loading layer [==================================================>]  75.22MB/75.22MB
9e53fd489559: Loading layer [==================================================>]  1.011MB/1.011MB
2a19bd70fcd4: Loading layer [==================================================>]  15.36kB/15.36kB
8891751e0a17: Loading layer [==================================================>]  3.072kB/3.072kB
Loaded image: ubuntu:latest

$ docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       latest   1d622ef86b13   2 weeks ago   73.9MB

2.4 image layering

A docker image is composed of loosely coupled, read-only image layers. Docker is responsible for stacking these layers and presenting them as a single unified object. Every docker image starts from a base layer; when content is modified or added, a new layer is created on top of the current one. As additional layers are added, the image is always the combination of all of its current layers.
For example, if an image consists of two layers, where the first layer contains files 1, 2, 3 and the second layer contains files 4, 5, 6, then from the system's perspective the image contains the six files 1-6. The figure below shows a slightly more complex three-layer image, where file 7 is an updated version of file 5. In this case, the file in the upper layer covers the file in the lower layer: the updated version of the file is added to the image as a new layer. (I think of the bottom layer as a base, with each layer above it like a step; the steps are applied from bottom to top, and the final result is what the whole image presents.)

Always remember that image layers are read-only. Because of this, layers can be shared among multiple images, which saves space and improves performance. The most visible effect is that when pulling an image, any layers it contains that already exist locally are not pulled again.

The image itself is a configuration object containing the list of its layers and some metadata. The layers are where the actual data (files and so on) is stored; layers are completely independent, with no concept of belonging to a particular image.

The unique identifier of an image is a cryptographic ID: a hash of the configuration object itself. Each image layer likewise has its own cryptographic ID, a hash of the layer's content. This means that modifying the content of the image or of any of its layers changes the hash; therefore the image and its layers are immutable, and any change can easily be detected (this is called content hashing).
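The two ideas above — upper layers overriding lower ones, and content-hashed layer IDs — can be sketched in a few lines of Python (an illustration of the concept only, not docker's actual implementation; the file names mirror the three-layer figure):

```python
import hashlib

# Each layer maps file names to contents; later layers override earlier ones.
layer1 = {"file1": "a", "file2": "b", "file3": "c"}
layer2 = {"file4": "d", "file5": "e", "file6": "f"}
layer3 = {"file5": "updated"}  # "file 7" in the figure: a new version of file 5

# The unified view the container sees: the union of all layers, upper layer wins.
unified = {}
for layer in (layer1, layer2, layer3):
    unified.update(layer)
assert len(unified) == 6
assert unified["file5"] == "updated"

def layer_id(layer):
    """Content hash: any change to a layer's content changes its ID."""
    blob = repr(sorted(layer.items())).encode()
    return hashlib.sha256(blob).hexdigest()

id_before = layer_id(layer1)
layer1["file1"] = "changed"
assert layer_id(layer1) != id_before  # the modification is detectable
```

This is why layers can be shared safely between images: a layer's ID is derived from its content, so two images referring to the same ID are provably referring to the same bytes.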
2.5 multi-architecture images

The purpose of multi-architecture images is to let one image support different architectures and platforms (Linux, Windows, ARM, etc.). To implement this feature, the image registry API supports two important structures: the manifest list and the manifest. A manifest list is the list of architectures supported by a certain image tag; each supported architecture has its own manifest, which describes the composition of the image for that platform. As shown in the figure below, on the left is a manifest list containing each architecture the image supports; each item in the manifest list points to a specific manifest, which contains the image configuration and image layer data.

How multi-architecture images work: when pulling an image, the docker client calls the image registry's API. If the image has a manifest list, the client finds the manifest corresponding to the current host architecture, resolves the cryptographic IDs of the layers that make up that image, and then pulls each layer from the repository. Some software cannot (or does not need to) be cross-platform, so the manifest list is optional; if there is no manifest list, the registry simply returns a normal manifest.

3. Docker container

3.1 introduction to docker container

A container is a runtime instance of an image, and you can start one or more containers from a single image. Compared with virtual machines, containers are lightweight and start very fast: rather than running a complete operating system of its own, a container shares the operating system kernel of its host. (A simple way to see this is to check the processes inside a container.
A process running inside a container can also be found on the host's operating system: because the container has an isolated PID namespace, the process number differs, but it is the same process.) When creating a container, docker assigns it a unique identifier, as well as a generated name if the user does not specify one.

A docker container has four states: up, suspended (paused), exited, and restarting. The state transition diagram is as follows:

3.2 common operations on docker containers

3.2.1 start and stop docker containers

Common options when starting a container:

- -i: keep standard input open;
- -t: allocate a TTY terminal;
- -d: run the container in the background;
- -e MYENV=123: inject an environment variable;
- -p 9000:80: requests to port 9000 on the host are mapped to port 80 in the container;
- -v /opt/soft:/soft: mount the host's /opt/soft directory at /soft in the container's file system (detailed in section 5).

Here's a tip: in a container started in the foreground, pressing the Ctrl-P Ctrl-Q key combination detaches you from the container without terminating it.
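The four states listed above, and the lifecycle commands that move a container between them, can be modeled as a small transition table. This is my own simplified sketch, not an official state machine (docker's real lifecycle also includes a created state, for example):

```python
# Allowed (state, command) -> next-state transitions, simplified.
TRANSITIONS = {
    ("up", "stop"): "exited",
    ("exited", "start"): "up",
    ("up", "pause"): "suspended",
    ("suspended", "unpause"): "up",
    ("up", "restart"): "restarting",
    ("restarting", None): "up",  # a restart completes on its own
}

def apply(state, command):
    """Return the next state, or fail for an illegal command."""
    try:
        return TRANSITIONS[(state, command)]
    except KeyError:
        raise ValueError("cannot %s a container in state %s" % (command, state))

state = "up"
state = apply(state, "pause")    # up -> suspended
state = apply(state, "unpause")  # suspended -> up
state = apply(state, "stop")     # up -> exited
assert state == "exited"
```

For instance, `apply("exited", "pause")` raises an error, matching the fact that you cannot pause a container that is not running.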
#Run, stop/start, pause, and restart a docker container
docker container run
docker container stop
docker container start
docker container pause
docker container restart

3.2.2 view docker containers

#List running containers
$ docker container ls
CONTAINER ID   IMAGE    COMMAND         CREATED         STATUS         PORTS   NAMES
b601d970cd1d   web:v1   "node app.js"   4 seconds ago   Up 2 seconds           zealous_leakey

#List all containers
$ docker container ls -a
CONTAINER ID   IMAGE           COMMAND         CREATED          STATUS                        PORTS   NAMES
b601d970cd1d   web:v1          "node app.js"   27 seconds ago   Up 26 seconds                         zealous_leakey
9e3328fa1308   ubuntu:latest   "/bin/bash"     16 minutes ago   Exited (0) 16 minutes ago             silly_nash
3e45ce78a31d   ubuntu:latest   "/bin/bash"     17 minutes ago   Exited (130) 16 minutes ago           beautiful_easle

#View container configuration details and runtime information
$ docker inspect zealous_leakey
[
    {
        "Id": "b601d970cd1d051df92f8dcb2f8b9acd39d0a1e9a0138db6e597b690f134b57b",
        "Created": "2020-05-11T03:01:21.064780355Z",
        "Path": "node",
        "Args": [
            "app.js"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 23692,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2020-05-11T03:01:22.4655386Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:84f04d8b5d32a6d5b6dee7a67d2b25dcf9e12a5c6e36039353baf75d551c4dd1",
        ...
    }
]

3.2.3 getting a shell in a container

docker container exec lets the user start a new process inside a running container; this is handy for connecting a shell on the docker host to a terminal inside a running container. The command

docker container exec -it <container-name or container-id> bash

starts a bash shell process inside the container and connects to it. For this command to work, the image used to create the container must contain a bash shell.
$ docker container ls
CONTAINER ID   IMAGE    COMMAND         CREATED         STATUS         PORTS   NAMES
b601d970cd1d   web:v1   "node app.js"   2 minutes ago   Up 2 minutes           zealous_leakey
$ docker container exec -it zealous_leakey bash
root@b601d970cd1d:/#

3.2.4 deleting containers

#The container needs to be stopped before it can be deleted
docker rm <container>
#Adding -f kills the container and then deletes it
docker rm -f <container>

3.3 restart strategies for docker containers

3.3.1 automatic restart strategies

It is generally recommended to configure a restart policy when you run a container. This gives the container a self-healing capability: it can be restarted after a specified event or error. The restart policy applies per container and is passed in when the container is started with --restart <policy>. There are three common restart policies: always, unless-stopped, and on-failure.

The always policy simply keeps trying to restart a container in the stopped state, unless the container was explicitly stopped (for example, via the docker container stop command). Note that with the always policy, such explicitly stopped containers are nevertheless restarted when the docker daemon itself restarts. The biggest difference between always and unless-stopped is that a container using the unless-stopped policy that is in the exited state will not be restarted when the docker daemon restarts. The on-failure policy restarts the container when it exits with a non-zero return value; on-failure containers are also restarted when the docker daemon restarts, even if they are in the exited state.

3.3.2 using init/systemctl and supervisor to monitor processes in the container

Use daemon tools (service, systemctl, etc.) or third-party process-monitoring software (supervisor) to watch over the process in the container; configure these services when creating the image.
When starting the container, you then only need to start the monitoring program, which is responsible for starting the application.

4. Docker network

4.1 introduction to the docker network container models

docker has four network container models: closed containers, bridged containers (the default), joined containers, and open containers. Every docker container must conform to one of these four models, which define how a container communicates with other local containers and with the host network. The following figure depicts each model, with the strongest isolation on the far left and the weakest on the far right:

4.2 closed containers

Adding --network none when starting a container creates a closed container. A closed container allows no network traffic, and processes running in it can only access the local loopback interface.

#As you can see, the closed container has only a loopback interface
$ docker run --name closed-container --network none

4.3 bridged containers

The bridged container is docker's default network model when running a container; it can also be created explicitly by adding --network bridge when starting the container. A bridged container has two interfaces: the local loopback interface, and an interface connected to the host network through a bridge. Through the host network, a bridged container can reach any external network that the host itself can reach.

#A local loopback interface plus a bridged interface to the host network
#(the exact command was garbled in the source; it listed the interfaces and then pinged an outside host)
$ docker run --network bridge node:7 sh -c "ip addr; ping -c 2 www.baidu.com"
55: eth0@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
PING www.baidu.com (180.101.49.12) 56(84) bytes of data.
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=1 ttl=50 time=29.0 ms
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=2 ttl=50 time=28.4 ms
--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 28.441/28.761/29.082/0.362 ms

4.4 joined container

Adding --network container:<container whose network is to be shared> when starting a container creates a joined container. Notice the container: prefix after the --network option: it tells docker to create a joined container from an existing container. Without the prefix, the docker daemon interprets the value as the name of a user-defined network; user-defined networks are not used much here and will not be introduced.

A joined container shares network interfaces with an existing container: access to one container's interfaces is granted to the other, newly created container.

#Create a closed container and look at its internal network state.
#You can see that the container is listening on port 8000
$ docker run --name join-base-container --network none -d alpine:latest nc -l 8000
e2907c7a889d209734f63309a5351687ac2761489e129cd6a7d6a392234a3cde
$ docker exec join-base-container
#Create a joined container based on the above container; it shares
#the network of the closed container above
$ docker run --network container:join-base-container alpine:latest

4.5 open container

Adding --network host when starting a container creates an open container. An open container has no network isolation: it shares the host network and has full access to it. As follows, start an open container, in which you can see all the network interfaces of the host.
#Let's first see what network interfaces the host has
$ ip addr
... dynamic noprefixroute eno1
    valid_lft 522699sec preferred_lft 522699sec
#It is found that all network interfaces of the host can be accessed in the open container
$ docker run --network host ...
... dynamic noprefixroute eno1
    valid_lft 522693sec preferred_lft 522693sec

5. Volume and persistent data

5.1 introduction to storage volume

A directory tree for a host or container is built from a set of mount points that describe how one or more file systems are assembled; a storage volume is a mount point on the container's directory tree. Storage volumes provide a container-independent way of managing data. The explanation of terms in the book is vague; I understand it as follows: a directory on the host disk is mounted into the container's file system (the directory on the host is called the storage volume), and operations on the mount point inside the container are actually operations on that host directory. For example, if the host directory /opt/soft is mounted to the /soft directory of the container, all operations on /soft in the container are actually operations on the host directory /opt/soft.

5.2 storage volume type

There are two types of storage volumes: bind mount storage volumes and managed storage volumes. A bind mount storage volume uses a host directory or file provided by the user; the option -v <host location>:<container mount point> needs to be added when starting the container. A managed storage volume uses a location controlled by the docker daemon, called docker-managed space; the option -v <container mount point> is added when starting the container, and the docker daemon automatically creates a directory in the host file system to mount at the specified mount point in the container. The storage volume types are as follows:

5.3 shared storage volume

There are two ways for multiple containers to share the same storage volume.
The first method is to mount the same host directory in each container when it is started. The second method is to start the new container with the option --volumes-from <other container name or ID>. The following is a demonstration of this method:

$ docker run --name fowler -v ~/example-books:/library/PoEAA -v /library/DSL alpine:latest echo "OK"
OK
$ docker run --name knuth -v /library/test1 -v /library/test2 -v /library/test3 alpine:latest echo "OK"
OK
#Next, use --volumes-from to create a new container based on containers fowler
#and knuth. The storage volumes of both containers are accessible inside it
$ docker run --volumes-from fowler --volumes-from knuth alpine:latest ls -l /library
total 20
drwxr-xr-x 2 root root 4096 May 11 03:41 DSL
drwxr-xr-x 2 root root 4096 May  7 02:18 PoEAA
drwxr-xr-x 2 root root 4096 May 11 03:43 test1
drwxr-xr-x 2 root root 4096 May 11 03:43 test2
drwxr-xr-x 2 root root 4096 May 11 03:43 test3

5.4 managing volume deletion

Add the -v option when deleting a container to delete its managed volumes. A bind mount storage volume cannot be deleted with a docker command, because bind mount storage volumes are not under the management of the docker daemon.

6. Containerization of application

6.1 creating images from local containers

Creating an image from a local container is relatively simple: use the command docker commit <container name or ID> <name of generated image> directly. But be careful: the command the original container was started with is committed to the new image. This sentence will be explained below.

First, we pull the Ubuntu image and run a container with docker run -it --name git-container ubuntu:latest /bin/bash, install git in the container, exit the container, and then generate the image.
$ docker commit git-container ubuntu-git:v1
sha256:c1f13209eb865c7b726a80cc570e13cf1ad2e37b6d614b54b21017ecb3881920
$ docker image ls
REPOSITORY   TAG   IMAGE ID       CREATED          SIZE
ubuntu-git   v1    c1f13209eb86   14 seconds ago   197MB

Then we ran a container from the new ubuntu-git image and found that nothing seemed to happen.

$ docker run --rm ubuntu-git:v1
$

This happens because the command attached when the original container was started is committed to the new image, and the command attached when starting the container that created the new image was /bin/bash. So when you start a container from the new image using this default command, it starts a shell and stops immediately.

We can set the entry point when creating a new image. To set the entry point, create a new container with the --entrypoint <entry point command> option and commit the image again from that new container.

#Start a container with an entry point, based on the ubuntu-git:v1 image
$ docker run --name base-entrypoint --entrypoint git ubuntu-git:v1
usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]
...
#Create the image based on the container with the entry point, keeping the image name unchanged
$ docker commit base-entrypoint ubuntu-git:v1
sha256:7ab43fc4a1c911f4f2eeeb62f5a946f2b687303a423ae64451a7a846db7cf036
#Clear the container
$ docker rm base-entrypoint
base-entrypoint
#Starting a container from the new image, you can see that it has its own entry point
$ docker run --rm ubuntu-git:v1 version
git version 2.25.1

6.2 create image by dockerfile

6.2.1 usage of dockerfile

Let's first see how to use a dockerfile to create the git image that was created from a local container in the previous section.
Create a file named Dockerfile and copy the following code into it:

FROM ubuntu:latest
RUN apt-get update \
 && apt-get install -y git
ENTRYPOINT ["git"]

Execute the command docker build -t ubuntu-git:auto . to create a new image; docker image ls then shows the image you just created. Start a container to verify the new image:

$ docker build -t ubuntu-git:auto .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu:latest
 ---> 1d622ef86b13
Step 2/3 : RUN apt-get update && apt-get install -y git
 ---> Running in 24ca27336db7
Removing intermediate container 24ca27336db7
 ---> 6e0f2d7b38e1
Step 3/3 : ENTRYPOINT ["git"]
 ---> Running in bd1435f0e46d
Removing intermediate container bd1435f0e46d
 ---> 88475690ecbc
Successfully built 88475690ecbc
Successfully tagged ubuntu-git:auto
$ docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
ubuntu-git   auto     88475690ecbc   3 minutes ago   197MB
ubuntu       latest   1d622ef86b13   2 weeks ago     73.9MB
$ docker run --rm ubuntu-git:auto version
git version 2.25.1

6.2.2 common dockerfile instructions

Notes when using a dockerfile: each dockerfile instruction creates a new image layer, so instructions should be merged as much as possible. The creation of the image is done by the docker daemon, not by the docker client; the docker client sends the build context to the docker daemon, which is responsible for creating the image. Therefore, do not add irrelevant data to the context when writing the dockerfile.

Some instructions such as RUN, ENTRYPOINT and CMD have two formats: shell format and exec format. The shell format looks like a shell command, for example ENTRYPOINT python /app/run.py, where the parameters are separated by spaces. The exec format is an array of strings, where the first value is the command to execute and the rest are its parameters. A command specified in shell format is executed as an argument to the default shell.
Specifically, the command runs in the form /bin/sh -c "python /app/run.py". Most importantly, if ENTRYPOINT uses the shell format, all parameters provided by the CMD instruction and any additional parameters specified when running the container are ignored. It is best practice to use the exec format whenever possible.

- FROM specifies the base image layer.
  format: FROM [--platform=<platform>] <image>[:<tag>] [AS <name>]
  example: FROM ubuntu:latest
- RUN creates a new image layer on top of the current one and executes commands in the new layer.
  format: RUN <command> or RUN ["executable", "param1", "param2"]
  example: RUN apt-get update && apt-get install -y git
- CMD provides default parameters when running the container.
  format: CMD ["param1","param2"] or CMD command param1 param2
  example: CMD ["/usr/bin/wc","--help"]
- LABEL defines key-value pairs, recorded as metadata of the image or container. This is the same function as the --label option when starting a container.
  format: LABEL <key>=<value> <key>=<value> <key>=<value> ...
  example: LABEL multi.label1="value1" multi.label2="value2" other="value3"
- MAINTAINER (not recommended) records maintainer information; use a LABEL instead.
  format: MAINTAINER <name>
  example: LABEL maintainer="<email address>"
- EXPOSE notifies docker that the container listens on the specified network port at runtime. The -p option can override this setting.
  format: EXPOSE <port> [<port>/<protocol>...]
  example: EXPOSE 80/udp
- ENV sets an environment variable for the image, similar to the -e (--env) option when starting a container.
  format: ENV <key> <value> or ENV <key>=<value> ...
  example: ENV myName="John Doe" myDog=Rex\ The\ Dog myCat=fluffy
- ADD copies files to the image.
  format: ADD [--chown=<user>:<group>] <src>... <dest> or ADD [--chown=<user>:<group>] ["<src>",... "<dest>"]
  example: ADD test.txt relativeDir/
- COPY copies files to the image.
  format: COPY [--chown=<user>:<group>] <src>... <dest> or COPY [--chown=<user>:<group>] ["<src>",... "<dest>"]
  example: COPY test.txt relativeDir/
- ENTRYPOINT sets the entry point, the executable program to run when the container starts.
  format: ENTRYPOINT ["executable", "param1", "param2"]
  example: ENTRYPOINT ["top", "-b"]
- VOLUME creates a docker-managed volume.
  format: VOLUME ["/data"]
  example: VOLUME ["/data"]
- USER specifies the user the container runs as.
  format: USER <user>[:<group>]
  example: USER patrick
- WORKDIR specifies the default working directory, which is created if it does not exist.
  format: WORKDIR /app
  example: WORKDIR /app

6.2.3 introduction to .dockerignore

Before the docker client sends the context to the docker daemon, it looks for a file named .dockerignore in the root directory of the context. If this file exists, the client modifies the context to exclude files and directories that match the patterns in it. This helps avoid sending unnecessarily large files or sensitive files and directories to the daemon, and avoids adding them to the image via ADD or COPY. The pattern matching behavior of .dockerignore is as follows:

For example, to add only the necessary files to the image, the following .dockerignore ensures that only app (whether it is a file or a directory), app.py and requirements.txt are added to the image:

#Exclude everything first
*
#Then add back the required files or directories
!app
!app.py
!requirements.txt

7. Docker others

Docker can limit the container's access to CPU, memory and devices when running a container, and can also run privileged containers (docker container run --privileged). While using docker, you can run docker help <command> or docker <command> --help to see help:

docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container

Options:
      --add-host list    Add a custom host-to-IP mapping (host:ip)
  -a, --attach list      Attach to STDIN, STDOUT or STDERR
...

8. References

- Docker in simple language
- Docker practice
- …
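To tie together the shell-versus-exec discussion in section 6.2.2, here is a minimal hypothetical Dockerfile (an extension of the ubuntu-git example, not taken from the book) showing how CMD supplies a default argument to an exec-format ENTRYPOINT:

```dockerfile
FROM ubuntu:latest
RUN apt-get update \
 && apt-get install -y git
# Exec format: CMD provides default arguments to ENTRYPOINT, so
# `docker run <image>` runs `git --version`, while
# `docker run <image> status` replaces the default and runs `git status`.
ENTRYPOINT ["git"]
CMD ["--version"]
```

Had ENTRYPOINT been written in shell format (ENTRYPOINT git), the CMD arguments and any arguments passed on the docker run command line would be ignored.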
https://developpaper.com/systematization-of-docker-knowledge-points/
How Markovify works Markovify is one of the more elegant libraries I’ve come across. While it can be extended for general Markov Process use, it was written to generate novel sentences using Markov Chains on a corpus of text. Its interface is extremely clean and astoundingly fast, fitting and predicting instantaneously import re import markovify with open("Harry Potter and the Sorcerer.txt") as f: text = f.readlines() # drop chapter headings and blank lines text = [x for x in text if not x.isupper()] text = [x for x in text if x != "\n"] model = markovify.Text(text) for _ in range(5): print(model.make_sentence(), '\n') print('--'*10) for _ in range(5): print(model.make_sentence_with_start('Snape'), '\n') So if I look in the setting sun. Where there should have let it fall open. There was no reason for not wanting Snape near him and Mrs. Weasley smiled down at the other boys. He clicked it again in Ron's eyes, but Harry, instead of looking up at once. Professor McGonagall had reached a fork in the same room, while Aunt Petunia and Uncle Vernon went off to look at him. -------------------- Snape gave Harry a nasty grin on his face. Snape was still rising higher and higher, and started pulling them off anyway. Snape was trying to look on Malfoy's face during the next table, wasn't having much more luck. Snape bent over the points they'd lost. Snape was in a large silver key that had just chimed midnight when the owls flooded into the classroom. I’m not going to go into the straight-up regex wizardry (heh) that the library employs to parse and validate all of the text. Instead, I want to peel back the curtain a bit and show how the library constructs the underlying Markov Chain and subsequently uses it to generate new sentences. But first, a motivating example. Text and Markov Chains The basic idea of Markov Chains and text is that you get a corpus of text import re example = ('Space: the final frontier. These are the voyages of the starship Enterprise. 
' 'Its five-year mission: to explore strange new worlds. ' 'To seek out new life and new civilizations. ' 'To boldly go where no man has gone before!') example = example.lower() example = re.sub(r'[^\w\s]', '', example) example = example.split(' ') print(example) ['space', 'the', 'final', 'frontier', 'these', 'are', 'the', 'voyages', 'of', 'the', 'starship', 'enterprise', 'its', 'fiveyear', 'mission', 'to', 'explore', 'strange', 'new', 'worlds', 'to', 'seek', 'out', 'new', 'life', 'and', 'new', 'civilizations', 'to', 'boldly', 'go', 'where', 'no', 'man', 'has', 'gone', 'before'] and build a parser that will scan through the text, one pair at a time for idx in range(len(example)-1): print(example[idx], example[idx+1]) space the the final final frontier frontier these these are are the the voyages voyages of of the the starship starship enterprise enterprise its its fiveyear fiveyear mission mission to to explore explore strange strange new new worlds worlds to to seek seek out out new new life life and and new new civilizations civilizations to to boldly boldly go go where where no no man man has has gone gone before More-accurately, you want to build a dictionary of key=word with values as a list of all words that have followed that keyword. 
d = {} for idx in range(len(example)-1): word = example[idx] new_word = example[idx+1] if word not in d: d[word] = [new_word] else: d[word].append(new_word) print(d) {'space': ['the'], 'the': ['final', 'voyages', 'starship'], 'final': ['frontier'], 'frontier': ['these'], 'these': ['are'], 'are': ['the'], 'voyages': ['of'], 'of': ['the'], 'starship': ['enterprise'], 'enterprise': ['its'], 'its': ['fiveyear'], 'fiveyear': ['mission'], 'mission': ['to'], 'to': ['explore', 'seek', 'boldly'], 'explore': ['strange'], 'strange': ['new'], 'new': ['worlds', 'life', 'civilizations'], 'worlds': ['to'], 'seek': ['out'], 'out': ['new'], 'life': ['and'], 'and': ['new'], 'civilizations': ['to'], 'boldly': ['go'], 'go': ['where'], 'where': ['no'], 'no': ['man'], 'man': ['has'], 'has': ['gone'], 'gone': ['before']} So in this case, there’s some repetition after the words the (the final, the voyages, the starship) and similarly for to (to explore, to seek, to boldly). And so we can use this dictionary to reconstruct new phrases by following the keys. Let’s say that we start with mission d['mission'] ['to'] then we follow that key and see that there are 3 different paths that we can go down d['to'] ['explore', 'seek', 'boldly'] the model will randomly select one of these, but for simplicity, we’ll use explore d['explore'] ['strange'] d['strange'] ['new'] And find another fork d['new'] ['worlds', 'life', 'civilizations'] d['life'] ['and'] d['and'] ['new'] it’s easy to see how you can get caught up in a loop here. d['new'] ['worlds', 'life', 'civilizations'] But this goes on until we happen upon the last word in the phrase, before d.get('before') Couple caveats to this - In practice, most implementations use pairs of words as their keys so capture more authentic-sounding phrases - The list of words is appended with a couple sentinel values denoting beginning and end of the list. 
This way, the dictionary lookup terminates when it sees the end of the sequence, as opposed to the messy dict.get()that we did above. That out of the way, let’s return to the implementation. Model Fit At the time that we instantiated a markovify.Text object a flurry of data processing happened to build a (Markov) Chain object that lives within our Text object, model. Text Pre-pocessing Hand-waving past all of this, I’ll just say that there’s a lot of care to remove weird quotes and characters, and figure out how to split the entire body of text into a neat list of sentences. Then within those lists, split the sentences by word. from IPython.display import Image Image('images/parsed_sentences.PNG') Chain Once we have the corpus worked out, we move onto the meat-and-potatoes object. In the .build() method of the Chain object, we loop through all of our sentences, repeating the exercise that we did in our motivating example above. Image('images/build.PNG') Fast-forwarding a bit and we’re starting to see some patterns emerge. It’s worth highlighting that our model’s state_size is 2, and so we’re using pairs of words as our keys. Thus, values for the tuple ( __BEGIN__, __BEGIN__) represent words that start sentences and tuples like ( __BEGIN__, Harry) correspond to sentences that started with the word ‘Harry’ Image('images/ffwd.PNG') Clever construction of first values So there are a finite number of sentence starts found in the book. Under the hood, all of their lists are constructed as [__BEGIN__, __BEGIN__, <WORDS>, __END__]. So we start off by taking all of the words found in model[('__BEGIN__', '__BEGIN__')], as well as all of their counts. Image('images/next_dict.PNG') Then, because we want our model to start sentences with words and phrases proportional to their appearance in the corpus, we do some clever sampling. Using the dictionary above, we attach two lists to our Chain object: choices (the words) and cumdist (the cumulative sum of their counts). 
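The construction the screenshots walk through can be sketched in a few lines of plain Python. This is a simplified stand-in for markovify's actual Chain.build() (the names and structure here are mine), using state_size=2 and the __BEGIN__/__END__ sentinels described above:

```python
from collections import defaultdict
from itertools import accumulate

BEGIN, END = "__BEGIN__", "__END__"

def build_chain(sentences, state_size=2):
    """Map (word, word) state tuples to {next_word: count} dicts."""
    chain = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        # Pad with sentinels so starts and ends are ordinary lookups
        words = [BEGIN] * state_size + sentence + [END]
        for i in range(len(words) - state_size):
            state = tuple(words[i:i + state_size])
            chain[state][words[i + state_size]] += 1
    return chain

sentences = [["the", "cat", "sat"], ["the", "cat", "ran"], ["a", "dog", "sat"]]
chain = build_chain(sentences)

# Sentence-start distribution: words following (__BEGIN__, __BEGIN__)
begin_dict = chain[(BEGIN, BEGIN)]
choices = list(begin_dict.keys())                 # ['the', 'a']
cumdist = list(accumulate(begin_dict.values()))   # [2, 3]
```

Here begin_dict plays the role of model[('__BEGIN__', '__BEGIN__')] from the screenshot, and choices/cumdist mirror the two lists markovify attaches to its Chain object.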
Building Sentences Then, when we make calls to model.make_sentence() we sample from this cumulative distribution. Concretely, we take the last value in cumdist (the total number of values available) and sample randomly between 0 and that value. Randomly running it as I type this, I got the following Image('images/sample.PNG') then the library uses a really cool standard library called bisect, which essentially allows you to pass a float key as a list index, and it will round your input to the closest valid key. For example, the following code gives us the index of the value directly above our input float from bisect import bisect a = [0, 1, 2, 3, 4, 5] bisect(a, 1.5) 2 So the value r=831.044 yields the word He Image('images/first_value.PNG') We repeat the process, tossing it back into the dictionary (as ( __BEGIN__, He)) and find a follow-up word, couldn't. Then find another list of choices and their weights using ( He, couldn't) Image('images/choices.PNG') We repeat this exercise until one of our dictionary lookups yields a __END__ key, kicking us out. In this case, we arrive at that confusing, but valid, sentence He couldn't direct it at the note. Note: As an added bonus, markovify does multiple runs for your initial keys to try and generate sentences with minimal overlap to our original corpus.
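The cumdist-plus-bisect sampling described above is easy to reproduce with the standard library alone. A hedged sketch — weighted_choice and the example counts are mine, not markovify's API:

```python
import random
from bisect import bisect

def weighted_choice(choices, cumdist, rng=random):
    """Pick an entry with probability proportional to its weight."""
    # cumdist is the running total of the counts, so cumdist[-1] is the
    # grand total; a uniform draw in [0, total) lands in exactly one
    # slot, and bisect maps that draw back to the slot's index.
    r = rng.random() * cumdist[-1]
    return choices[bisect(cumdist, r)]

# Hypothetical counts: "He" appeared 831 times, "Harry" 569, "The" 50
choices = ["He", "Harry", "The"]
cumdist = [831, 1400, 1450]

# A draw of r = 500.0 falls below 831, so bisect gives index 0 -> "He"
```

Over many draws, "He" comes back roughly 831/1450 of the time, which is exactly the "start sentences proportionally to the corpus" behavior described above.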
https://napsterinblue.github.io/notes/algorithms/markov/markovify/
copyprivate

Definition

copyprivate is a clause that can be used on a single construct; it acts as a mechanism to broadcast the value of a variable that is private to the implicit task executing the single region to the corresponding private variables of the other implicit tasks in the parallel region.

copyprivate(list)

Parameters

- list - The variables to pass as copyprivate, separated by commas.

Example

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

/**
 * @brief Illustrates how to use the copyprivate clause.
 * @details This application passes a variable as firstprivate to a parallel
 * region. Then, a single construct receives this variable as a copyprivate and
 * modifies its value. All threads print the value of their own copy before and
 * after the single construct. Although each thread has its own copy, the
 * copyprivate will have broadcast the new value to all threads after the
 * single construct.
 **/
int main(int argc, char* argv[])
{
    int a = 123;

    #pragma omp parallel default(none) firstprivate(a)
    {
        printf("Thread %d: a = %d.\n", omp_get_thread_num(), a);

        #pragma omp barrier

        #pragma omp single copyprivate(a)
        {
            a = 456;
            printf("Thread %d executes the single construct and changes a to %d.\n", omp_get_thread_num(), a);
        }

        printf("Thread %d: a = %d.\n", omp_get_thread_num(), a);
    }

    return EXIT_SUCCESS;
}
https://www.rookiehpc.com/openmp/docs/copyprivate.php
This post contains all the code that's been written in this YouTube video.

ScoreScript.cs
PuckScript.cs
PlayerMovement.cs
AiScript.cs

After finishing this tutorial, I got this error in the Unity console when trying to enter play mode ("all compiler errors have to be fixed before you can enter play mode"), and the error appears like this: Assets/Art/Script/AiScript.cs(14,13): error CS0246: The type or namespace name `Boundary' could not be found. Are you missing a `UnityEngine.Experimental.VR' using directive? Any solution for that?
https://resocoder.com/2017/06/02/5-make-an-air-hockey-game-in-unity-ui-score-code/
Introduction

Matplotlib is one of the most widely used data visualization libraries in Python. From simple to complex visualizations, it's the go-to library for most. In this tutorial, we'll take a look at how to plot a bar plot in Matplotlib.

Plot a Bar Plot in Matplotlib

Plotting a Bar Plot in Matplotlib is as easy as calling the bar() function on the PyPlot instance, and passing in the categorical and continuous variables that we'd like to visualize.

import matplotlib.pyplot as plt

x = ['A', 'B', 'C']
y = [1, 5, 3]

plt.bar(x, y)
plt.show()

This results in a clean and simple bar graph:

Plot a Horizontal Bar Plot in Matplotlib

Oftentimes, we might want to plot a Bar Plot horizontally, instead of vertically. This is as easy as calling the barh() function instead:

plt.barh(x, y)
plt.show()

Change Bar Plot Color in Matplotlib

Changing the color of the bars themselves is as easy as setting the color argument with a list of colors. If you have more bars than colors in the list, they'll start being applied from the first color again:

import matplotlib.pyplot as plt

x = ['A', 'B', 'C']
y = [1, 5, 3]

plt.bar(x, y, color=['red', 'blue', 'green'])
plt.show()

Now, we've got a nicely colored Bar Plot:

Of course, you can also use the shorthand versions or even HTML codes:

plt.bar(x, y, color=['red', 'blue', 'green'])
plt.bar(x, y, color=['r', 'b', 'g'])
plt.bar(x, y, color=['#ff0000', '#00ff00', '#0000ff'])
plt.show()

Or you can even put a single scalar value, to apply it to all bars:

plt.bar(x, y, color='green')

Bar Plot with Error Bars in Matplotlib

When you're plotting mean values of lists, which is a common application for Bar Plots, you'll have some error space. It's very useful to plot error bars to let other observers, and yourself, know how truthful these means are and which deviation is expected.
For this, let's make a dataset with some values, calculate their means and standard deviations with Numpy, and plot them with error bars:

import matplotlib.pyplot as plt
import numpy as np

x = np.array([4, 5, 6, 3, 6, 5, 7, 3, 4, 5])
y = np.array([3, 4, 1, 3, 2, 3, 3, 1, 2, 3])
z = np.array([6, 9, 8, 7, 9, 8, 9, 6, 8, 7])

x_mean = np.mean(x)
y_mean = np.mean(y)
z_mean = np.mean(z)

x_deviation = np.std(x)
y_deviation = np.std(y)
z_deviation = np.std(z)

bars = [x_mean, y_mean, z_mean]
bar_categories = ['X', 'Y', 'Z']
error_bars = [x_deviation, y_deviation, z_deviation]

plt.bar(bar_categories, bars, yerr=error_bars)
plt.show()

Here, we've created three fake datasets with several values each. We'll visualize the mean values of each of these lists. However, since means as well as averages can give a false sense of accuracy, we'll also calculate the standard deviation of these datasets so that we can add those as error bars.

Using Numpy's mean() and std() functions, this is a breeze. Then, we've packed the bar values into a bars list, the bar names into bar_categories for a nice user experience, and finally the standard deviation values into an error_bars list.

To visualize this, we call the regular bar() function, passing in bar_categories (categorical values) and bars (continuous values), alongside the yerr argument. Since we're plotting vertically, we use the yerr argument. If we were plotting horizontally, we'd use the xerr argument. Here, we've provided the information about the error bars. This ultimately results in:

Plot Stacked Bar Plot in Matplotlib

Finally, let's plot a Stacked Bar Plot. Stacked Bar Plots are really useful if you have groups of variables, but instead of plotting them one next to the other, you'd like to plot them one on top of the other. For this, we'll again have groups of data. Then, we'll calculate their standard deviations for error bars.
Finally, we'll need an index range to plot these variables on top of each other, while maintaining their relative order. This index will essentially be a range of numbers the length of all the groups we've got.

To stack a bar on another one, you use the bottom argument. You specify what's on the bottom of that bar. To plot x beneath y, you'd set x as the bottom of y. For more than one group, you'll want to add the values together before plotting, otherwise the Bar Plot won't add up. We'll use Numpy's np.add().tolist() to add the elements of two lists and produce a list back:

import matplotlib.pyplot as plt
import numpy as np

# Groups of data; first values are plotted on top of each other,
# second values are plotted on top of each other, etc.
x = [1, 3, 2]
y = [2, 3, 3]
z = [7, 6, 8]

# Standard deviation rates for error bars
x_deviation = np.std(x)
y_deviation = np.std(y)
z_deviation = np.std(z)

bars = [x, y, z]
ind = np.arange(len(bars))
bar_categories = ['X', 'Y', 'Z']
bar_width = 0.5
bar_padding = np.add(x, y).tolist()

plt.bar(ind, x, yerr=x_deviation, width=bar_width)
plt.bar(ind, y, yerr=y_deviation, bottom=x, width=bar_width)
plt.bar(ind, z, yerr=z_deviation, bottom=bar_padding, width=bar_width)

plt.xticks(ind, bar_categories)
plt.xlabel("Stacked Bar Plot")

plt.show()

Running this code results in:

Conclusion

In this tutorial, we've gone over several ways to plot a bar plot using Matplotlib and Python. We've also covered how to calculate and add error bars, as well as stack bars on top of each other.

If you're interested in Data Visualization and don't know where to start, make sure to check out our bundle of books on Data Visualization in Python.
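The np.add(x, y).tolist() step generalizes: for any number of stacked groups, the bottom offset of group i is simply the running (cumulative) sum of the groups below it. A small numpy-only sketch of that arithmetic (the plt.bar calls are left out so the offsets can be checked on their own; the helper name is mine):

```python
import numpy as np

def stack_bottoms(groups):
    """Bottom offsets for stacking: group i rests on the sum of groups 0..i-1."""
    arr = np.asarray(groups)
    totals = np.cumsum(arr, axis=0)  # running elementwise sums
    # Group 0 sits on zero; every later group sits on the previous total
    return np.vstack([np.zeros(arr.shape[1]), totals[:-1]])

groups = [[1, 3, 2], [2, 3, 3], [7, 6, 8]]
bottoms = stack_bottoms(groups)
# bottoms[0] is all zeros; bottoms[2] equals np.add(x, y) from the
# example above, i.e. [3, 6, 5]
```

Each group would then be drawn with plt.bar(ind, groups[i], bottom=bottoms[i], ...), exactly as the three explicit calls above do for three groups.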
https://stackabuse.com/matplotlib-bar-plot-tutorial-and-examples/
Adding Metadata to Entities in The Data Model

Sometimes I'm being asked how to add metadata to a generated entity in Entity Framework. This metadata can be data annotations or other attributes which can help the developer during runtime. One answer that I give is to edit the T4 template in order to add the attributes. This solution can be combined with building an extension to the Entity Framework designer which can add more details to the EDM. But it can take some time to develop. Another solution is to create a MetadataType for the entity and use the entity's partial class behavior to add this type. This post will show you the second solution.

Adding metadata to a generated entity

In the example I'm going to use the following simple entity:

This type is part of a Dynamic Data site and the requirements for it are not to show the TypeId and that the Url needs to be up to 100 characters. Since Dynamic Data works with data annotations I want to add this metadata to the entity. But the problem is that the entity is generated by Entity Framework code generation. So how can I add the annotations? Using the MetadataType attribute.

The MetadataType is an attribute that is part of the System.ComponentModel.DataAnnotations assembly. It indicates that a data model class has an associated metadata class. The MetadataType attribute gets a type parameter to specify which type is holding the metadata for the class. We can use the fact that the entity is generated as a partial class and add the MetadataType attribute to it.
Let's look at an example of how to use the MetadataType attribute:

public class CrmTypeMetadata
{
    [ScaffoldColumn(false)]
    public int TypeId { get; set; }

    [StringLength(100)]
    public string Url { get; set; }
}

[MetadataType(typeof(CrmTypeMetadata))]
public partial class CrmType
{
}

The first thing to notice in the example is that I've created a public class by the name CrmTypeMetadata which holds the properties annotated with the relevant attributes. The Metadata postfix in the class name is a convention that I encourage you to use. After I create the metadata class all I need to do is to create a partial class for the generated entity and annotate it with the MetadataType attribute. The MetadataType attribute will get as a parameter the type of the metadata type, which is CrmTypeMetadata in the example. That is all. Now the expected behavior was achieved.

Summary

One of the solutions to add metadata to entities in the Entity Data Model is by using the MetadataType attribute. It is very simple to use and can help you in frameworks like ASP.NET MVC, Dynamic Data, WCF RIA Services and more.

Published at DZone with permission of Gil Fink, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
loxun 1.3: large output in XML using unicode and namespaces

loxun is a Python module to write large output in XML using Unicode and namespaces. Of course you can also use it for small XML output with plain 8 bit strings and no namespaces. loxun's features are:

- small memory foot print: the document is created on the fly by writing to an output stream, no need to keep all of it in memory.
- easy to use namespaces: simply add a namespace and refer to it using the standard namespace:tag syntax.
- mix unicode and string: pass both unicode or plain 8 bit strings to any of the methods. Internally loxun converts them to unicode, so once a parameter got accepted by the API you can rely on it not causing any messy UnicodeError trouble.
- automatic escaping: no need to manually handle special characters such as < or & when writing text and attribute values.
- robustness: while you write the document, sanity checks are performed on everything you do. Many silly mistakes immediately result in an XmlError, for example missing end elements or references to undeclared namespaces.
- open source: distributed under the GNU Lesser General Public License 3 or later.

Here is a very basic example. First you have to create an output stream. In many cases this would be a file, but for the sake of simplicity we use a StringIO here:

>>> from StringIO import StringIO
>>> out = StringIO()

Then you can create an XmlWriter to write to this output:

>>> xml = XmlWriter(out)

Now write the content:

>>> xml.addNamespace("xhtml", "")
>>> xml.startTag("xhtml:html")
>>> xml.startTag("xhtml:body")
>>> xml.text("Hello world!")
>>> xml.tag("xhtml:img", {"src": "smile.png", "alt": ":-)"})
>>> xml.endTag()
>>> xml.endTag()
>>> xml.close()

And the result is:

>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.0" encoding="utf-8"?>
<xhtml:html xmlns:
  <xhtml:body>
    Hello world!
    <xhtml:img
  </xhtml:body>
</xhtml:html>

Writing a simple document

The following example creates a very simple XHTML document.
To make it simple, the output goes to a string, but you could also use a file that has been created using codecs.open(filename, "wb", encoding).

>>> from StringIO import StringIO
>>> out = StringIO()

First create an XmlWriter to write the XML code to the specified output:

>>> xml = XmlWriter(out)

This automatically adds the XML prolog:

>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.0" encoding="utf-8"?>

Next add the <html> start tag:

>>> xml.startTag("html")

Now comes the <body>. To pass attributes, specify them in a dictionary. So in order to add:

<body id="top">

use:

>>> xml.startTag("body", {"id": "top"})

Let's add a little text so there is something to look at:

>>> xml.text("Hello world!")

Wrap it up: close all elements and the document.

>>> xml.endTag()
>>> xml.endTag()
>>> xml.close()

And this is what we get:

>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.0" encoding="utf-8"?>
<html>
  <body id="top">
    Hello world!
  </body>
</html>

Specifying attributes

First create a writer:

>>> from StringIO import StringIO
>>> out = StringIO()
>>> xml = XmlWriter(out)

Now write the content:

>>> xml.tag("img", {"src": "smile.png", "alt": ":-)"})

Attribute values do not have to be strings, other types will be converted to Unicode using Python's unicode() function:

>>> xml.tag("img", {"src": "wink.png", "alt": ";-)", "width": 32, "height": 24})

And the result is:

>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.0" encoding="utf-8"?>
<img alt=":-)" src="smile.png" />
<img alt=";-)" height="24" src="wink.png" width="32" />

Using namespaces

Now the same thing but with a namespace.
First create the prolog and header like above:

>>> out = StringIO()
>>> xml = XmlWriter(out)

Next add the namespace:

>>> xml.addNamespace("xhtml", "")

Now elements can use qualified tag names using a colon (:) to separate namespace and tag name:

>>> xml.startTag("xhtml:html")
>>> xml.startTag("xhtml:body")
>>> xml.text("Hello world!")
>>> xml.endTag()
>>> xml.endTag()
>>> xml.close()

As a result, tag names are now prefixed with "xhtml:":

>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.0" encoding="utf-8"?>
<xhtml:html xmlns:
  <xhtml:body>
    Hello world!
  </xhtml:body>
</xhtml:html>

Working with non ASCII characters

Sometimes you want to use characters outside the ASCII range, for example German Umlauts, the Euro symbol or Japanese Kanji. The easiest and performance wise best way is to use Unicode strings. For example:

>>> from StringIO import StringIO
>>> out = StringIO()
>>> xml = XmlWriter(out, prolog=False)
>>> xml.text(u"The price is \u20ac 100")  # Unicode of Euro symbol
>>> out.getvalue().rstrip("\r\n")
'The price is \xe2\x82\xac 100'

Notice the "u" before the string passed to XmlWriter.text(), it declares the string to be a unicode string that can hold any character, even those that are beyond the 8 bit range. Also notice that in the output the Euro symbol looks very different from the input. This is because the output encoding is UTF-8 (the default), which has the advantage of keeping all ASCII characters the same and turning any characters with a code of 128 or more into a sequence of 8 bit bytes that can easily fit into an output stream to a binary file or StringIO.

If you have to stick to classic 8 bit string parameters, loxun attempts to convert them to unicode.
By default it assumes ASCII encoding, which does not work out as soon as you use a character outside the ASCII range:

>>> from StringIO import StringIO
>>> out = StringIO()
>>> xml = XmlWriter(out, prolog=False)
>>> xml.text("The price is \xa4 100")  # ISO-8859-15 code of Euro symbol
Traceback (most recent call last):
    ...
UnicodeDecodeError: 'ascii' codec can't decode byte 0xa4 in position 13: ordinal not in range(128)

In this case you have to tell the writer the encoding you use by specifying the sourceEncoding:

>>> from StringIO import StringIO
>>> out = StringIO()
>>> xml = XmlWriter(out, prolog=False, sourceEncoding="iso-8859-15")

Now everything works out again:

>>> xml.text("The price is \xa4 100")  # ISO-8859-15 code of Euro symbol
>>> out.getvalue().rstrip("\r\n")
'The price is \xe2\x82\xac 100'

Of course in practice you will not mess around with hex codes to pass your texts. Instead you just specify the source encoding using the mechanisms described in PEP 263, Defining Python Source Code Encodings.

Pretty printing and indentation

By default, loxun starts a new line for each startTag and indents the content with two spaces. You can change the spaces to any number of spaces and tabs you like:

>>> out = StringIO()
>>> xml = XmlWriter(out, indent="    ")  # <-- Indent with 4 spaces.
>>> xml.startTag("html")
>>> xml.startTag("body")
>>> xml.text("Hello world!")
>>> xml.endTag()
>>> xml.endTag()
>>> xml.close()
>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.0" encoding="utf-8"?>
<html>
    <body>
        Hello world!
    </body>
</html>

You can disable pretty printing all together using pretty=False, resulting in an output of a single large line:

>>> out = StringIO()
>>> xml = XmlWriter(out, pretty=False)  # <-- Disable pretty printing.
>>> xml.startTag("html")
>>> xml.startTag("body")
>>> xml.text("Hello world!")
>>> xml.endTag()
>>> xml.endTag()
>>> xml.close()
>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.0" encoding="utf-8"?><html><body>Hello world!</body></html>

Changing the XML prolog

When you create a writer, it automatically writes an XML prolog processing instruction to the output. This is what the default prolog looks like:

>>> from StringIO import StringIO
>>> out = StringIO()
>>> xml = XmlWriter(out)
>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.0" encoding="utf-8"?>

You can change the version or encoding:

>>> out = StringIO()
>>> xml = XmlWriter(out, encoding=u"ascii", version=u"1.1")
>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.1" encoding="ascii"?>

To completely omit the prolog, set the parameter prolog=False:

>>> out = StringIO()
>>> xml = XmlWriter(out, prolog=False)
>>> out.getvalue()
''

Adding other content

Apart from text and tags, XML provides a few more things you can add to documents. Here's an example that shows how to do it with loxun. First, create a writer:

>>> from StringIO import StringIO
>>> out = StringIO()
>>> xml = XmlWriter(out)

Let's add a document type definition:

>>> xml.raw("<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" SYSTEM \"\">")
>>> xml.newline()

Notice that loxun uses the generic XmlWriter.raw() for that, which allows to add any content without validation or escaping. You can do all sorts of nasty things with raw() that will result in invalid XML, but this is one of its reasonable uses.
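raw() deliberately bypasses the automatic escaping that text and attribute values normally receive. As a side note, here is a minimal sketch of what that escaping involves, using only the Python standard library; this is an illustration of the concept, not loxun's actual implementation:

```python
# Minimal sketch of XML text/attribute escaping, similar in spirit to what
# an XML writer must apply automatically; not loxun's actual code.
from xml.sax.saxutils import escape, quoteattr

def text_node(value):
    # '<', '>' and '&' must become entities inside text content.
    return escape(value)

def attribute(name, value):
    # quoteattr wraps the value in quotes and escapes quotes inside it.
    return "%s=%s" % (name, quoteattr(value))

print(text_node("a < b & c"))        # a &lt; b &amp; c
print(attribute("alt", 'say "hi"'))  # alt='say "hi"'
```

This is exactly the work that raw() skips, which is why its input must already be valid XML.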
Next, let's add a comment:

>>> xml.comment("Show case some rarely used XML constructs")

Here is a processing instruction:

>>> xml.processingInstruction("xml-stylesheet", "href=\"default.css\" type=\"text/css\"")

And finally a CDATA section:

>>> xml.cdata(">> this will not be parsed <<")

And the result is:

>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" SYSTEM "">
<!-- Show case some rarely used XML constructs -->
<?xml-stylesheet href="default.css" type="text/css"?>
<![CDATA[>> this will not be parsed <<]]>

Optimization

loxun automatically optimizes pairs of empty start/end tags. For example:

>>> out = StringIO()
>>> xml = XmlWriter(out)
>>> xml.startTag("customers")
>>> xml.startTag("person", {"id": "12345", "name": "Doe, John"})
>>> xml.endTag("person")  # without optimization, this would add </person>.
>>> xml.endTag()
>>> xml.close()
>>> print out.getvalue().rstrip("\r\n")
<?xml version="1.0" encoding="utf-8"?>
<customers>
  <person id="12345" name="Doe, John" />
</customers>

Despite the explicit startTag("person") and matching endTag(), the output only contains a simple <person ... /> tag.

Contributing

If you want to help improve loxun, you can access the source code at <>.

Future

Currently loxun does what it was built for. There are no real plans to improve it in the near future, but here is a list of features that might be added at some point:

- Add validation of tag and attribute names to ensure that all characters used are allowed. For instance, currently loxun does not complain about a tag named "a#b*c$d_".
- Raise an XmlError when namespaces are added with attributes instead of XmlWriter.addNamespace().
- Logging support to simplify debugging of the calling code. Probably XmlWriter would get a property logger which is a standard logging.Logger. By default it could log original exceptions that loxun turns into an XmlError and namespaces opened and closed.
  Changing it to logging.DEBUG would log each tag and XML construct written, including additional information about the internal tag stack. That way you could dynamically increase or decrease logging output.
- Rethink pretty printing. Instead of a global property that can only be set when initializing an XmlWriter, it could be an optional parameter for XmlWriter.startTag() where it could be turned on and off as needed. And the property could be named literal instead of pretty (with an inverse logic).
- Add a DomWriter that creates a xml.dom.minidom.Document.

Some features other XML libraries support but I never saw any real use for:

- Specify attribute order for tags.

Version history

Version 1.3, 2012-01-01

- Added endTags() to close several or all open tags (issue #3, contributed by Anton Kolechkin).
- Added ChainXmlWriter which is similar to XmlWriter and allows to chain methods for more concise source code (issue #3, contributed by Anton Kolechkin).

Version 1.2, 2011-03-12

- Fixed AttributeError when XmlWriter(..., encoding=...) was set.

Version 1.1, 08-Jan-2011

- Fixed AssertionError when pretty was set to False (issue #1; fixed by David Cramer).

Version 1.0, 11-Oct-2010

- Added support for Python's with so you do not have to manually call XmlWriter.close() anymore.
- Added Git repository at <>.

Version 0.8, 11-Jul-2010

- Added possibility to pass attributes to XmlWriter.startTag() and XmlWriter.tag() with values that have other types than str or unicode. When written to XML, the value is converted using Python's built-in unicode() function.
- Added a couple of files missing from the distribution, most important the test suite.

Version 0.7, 03-Jul-2010

- Added optimization of matching start and end tag without any content in between. For example, x.startTag("some"); x.endTag() results in <some /> instead of <some></some>.
- Fixed handling of unknown name spaces. They now raise an XmlError instead of ValueError.
Version 0.6, 03-Jun-2010

- Added option indent to specify the indentation text each new line starts with.
- Added option newline to specify how lines written should end.
- Fixed that XmlWriter.tag() did not remove namespaces declared immediately before it.
- Cleaned up documentation.

Version 0.5, 25-May-2010

- Fixed typo in namespace attribute name.
- Fixed adding of namespaces before calls to XmlWriter.tag() which resulted in an XmlError.

Version 0.4, 21-May-2010

- Added option sourceEncoding to simplify processing of classic strings. The manual section "Working with non ASCII characters" explains how to use it.

Version 0.3, 17-May-2010

- Added scoped namespaces which are removed automatically by XmlWriter.endTag().
- Changed text() to normalize newlines and white space if pretty printing is enabled.
- Moved writing of XML prolog to the constructor and removed XmlWriter.prolog(). To omit the prolog, specify prolog=False when creating the XmlWriter. If you later want to write the prolog yourself, use XmlWriter.processingInstruction().
- Renamed *Element() to *Tag() because they really only write tags, not whole elements.

Version 0.2, 16-May-2010

- Added XmlWriter.comment(), XmlWriter.cdata() and XmlWriter.processingInstruction() to write these specific XML constructs.
- Added indentation and automatic newline to text if pretty printing is enabled.
- Removed newline from prolog in case pretty printing is disabled.
- Fixed missing "?" in prolog.

Version 0.1, 15-May-2010

- Initial release.
- Author: Thomas Aglassinger - Keywords: xml output stream large big huge namespace unicode memory footprint - License: GNU Lesser General Public License 3 or later - Categories - Development Status :: 5 - Production/Stable - Environment :: Plugins - Intended Audience :: Developers - License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL) - Natural Language :: English - Operating System :: OS Independent - Programming Language :: Python :: 2.5 - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Topic :: Internet :: WWW/HTTP :: Dynamic Content - Topic :: Software Development :: Libraries - Topic :: Text Processing :: Markup :: XML - Package Index Owner: roskakori - DOAP record: loxun-1.3.xml
You can define an inner class anywhere inside a class where you can write a Java statement. There are three types of inner classes. The type of inner class depends on the location and the way it is declared.

A member inner class is declared inside a class the same way a member field or a member method is declared. It can be declared as public, private, protected, or package-level. The instance of a member inner class may exist only within the instance of its enclosing class. The following code creates a member inner class.

class Car {
    private int year;

    // A member inner class named Tire
    public class Tire {
        private double radius;

        public Tire(double radius) {
            this.radius = radius;
        }

        public double getRadius() {
            return radius;
        }
    } // Member inner class declaration ends here

    // A constructor for the Car class
    public Car(int year) {
        this.year = year;
    }

    public int getYear() {
        return year;
    }
}

A local inner class is declared inside a block. Its scope is limited to the block in which it is declared. Since its scope is limited to its enclosing block, its declaration cannot use any access modifiers such as public, private, or protected. Typically, a local inner class is defined inside a method. However, it can also be defined inside static initializers, non-static initializers, and constructors. The following code shows an example of a local inner class.

import java.util.ArrayList;
import java.util.Iterator;

public class Main {
    public static void main(String[] args) {
        StringList tl = new StringList();
        tl.addTitle("A");
        tl.addTitle("B");
        Iterator iterator = tl.titleIterator();
        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }
    }
}

class StringList {
    private ArrayList<String> titleList = new ArrayList<>();

    public void addTitle(String title) {
        titleList.add(title);
    }

    public void removeTitle(String title) {
        titleList.remove(title);
    }

    public Iterator<String> titleIterator() {
        // A local inner class - TitleIterator
        class TitleIterator implements Iterator<String> {
            int count = 0;

            @Override
            public boolean hasNext() {
                return (count < titleList.size());
            }

            @Override
            public String next() {
                return titleList.get(count++);
            }
        }
        TitleIterator titleIterator = new TitleIterator();
        return titleIterator;
    }
}

The code above generates the following result.

The following code has a local inner class which inherits from another public class.

import java.util.Random;

abstract class IntGenerator {
    public abstract int getValue();
}

class LocalGen {
    public IntGenerator getRandomInteger() {
        class RandomIntegerLocal extends IntGenerator {
            @Override
            public int getValue() {
                Random rand = new Random();
                long n1 = rand.nextInt();
                long n2 = rand.nextInt();
                int value = (int) ((n1 + n2) / 2);
                return value;
            }
        }
        return new RandomIntegerLocal();
    } // End of getRandomInteger() method
}

public class Main {
    public static void main(String[] args) {
        LocalGen local = new LocalGen();
        IntGenerator rLocal = local.getRandomInteger();
        System.out.println(rLocal.getValue());
        System.out.println(rLocal.getValue());
    }
}

The code above generates the following result.

An anonymous inner class does not have a name. Since it does not have a name, it cannot have a constructor. An anonymous class is a one-time class. You define an anonymous class and create its object at the same time.
The general syntax for creating an anonymous class and its object is as follows:

new Interface() {
    // Anonymous class body goes here
}

and

new Superclass(<argument-list-for-a-superclass-constructor>) {
    // Anonymous class body goes here
}

The new operator is used to create an instance of the anonymous class. It is followed by either an existing interface name or an existing class name. The interface name or class name is not the name for the newly created anonymous class. If an interface name is used, the anonymous class implements the interface. If a class name is used, the anonymous class inherits from the class. The <argument-list> is used only if the new operator is followed by a class name. It is left empty if the new operator is followed by an interface name. If <argument-list> is present, it contains the actual parameter list for a constructor of the existing class to be invoked. The anonymous class body is written as usual inside braces. The anonymous class body should be short for better readability.

The following code contains a simple anonymous class, which prints a message on the standard output.

public class Main {
    public static void main(String[] args) {
        new Object() {
            // An instance initializer
            {
                System.out.println("Hello from an anonymous class.");
            }
        }; // A semi-colon is necessary to end the statement
    }
}

The code above generates the following result.

The following code uses an anonymous class to create an Iterator.
import java.util.ArrayList;
import java.util.Iterator;

public class Main {
    private ArrayList<String> titleList = new ArrayList<>();

    public void addTitle(String title) {
        titleList.add(title);
    }

    public void removeTitle(String title) {
        titleList.remove(title);
    }

    public Iterator<String> titleIterator() {
        // An anonymous class
        Iterator<String> iterator = new Iterator<String>() {
            int count = 0;

            @Override
            public boolean hasNext() {
                return (count < titleList.size());
            }

            @Override
            public String next() {
                return titleList.get(count++);
            }
        }; // Anonymous inner class ends here
        return iterator;
    }
}
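As a cross-language aside (not part of the Java tutorial above): Python has no anonymous classes, but two idioms give a comparable "one-time" effect. The built-in type() can create a class on the fly, and a generator inside a function plays the role of the local TitleIterator. The sketch below is purely an analogy, with hypothetical names:

```python
# Rough Python analogy to the Java examples above; not a one-to-one translation.

# A class created on the fly with type(); the name string is only a label.
greeter = type("Anon", (object,), {
    "greet": lambda self: "Hello from an anonymous class."
})()
print(greeter.greet())

# A "local" iterator defined inside a function, like TitleIterator above.
def title_iterator(titles):
    # Generators give the hasNext()/next() behavior without a named class.
    for title in titles:
        yield title

print(list(title_iterator(["A", "B"])))  # ['A', 'B']
```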
> > Looks sane. However, we still have a problem here - just what would
> > happen if vfsmount is detached while we were grabbing namespace
> > semaphore? Refcount alone is not useful here - we might be held by
> > whoever had detached the vfsmount. IOW, we should check that it's
> > still attached (i.e. that mnt->mnt_parent != mnt). If it's not -
> > just leave it alone, do mntput() and let whoever holds it deal with
> > the sucker. No need to put it back on lists.
>
> Right. I'll fix that too.
>
> On a bit unrelated note, in do_unmount() why is that
> DQUOT_OFF()/acct_auto_close() thing only called for the base of a tree
> being detached, and not for any submounts? Is that how it's supposed
> to work?

I guess the code is there since the good old times when each filesystem could be mounted at most once and you had to call umount on it directly ;). I see two possibilities there:

1) Call DQUOT_OFF() when the last reference to the superblock should be dropped. This has a problem that currently quota code holds the reference to the vfsmount of the mountpoint it was called on (to protect itself against umount). So if you try something like mount /home, quotaon /home, mount --bind /home /home2, umount /home, it will fail with EBUSY.

2) Make quota code protect against umount in a different way without holding the vfsmount references (any ideas?). Then the above described use will work. But I'm not sure it's worth the problems, especially with userspace tools not being able to see the proper mount options and so on.

So personally I'd prefer 1). For the namespace code it means only that it should call DQUOT_OFF() whenever it intends to drop the last reference to the superblock (and check for business only after quota has been turned off).
I wanted to get clarity on the _id and _rev fields. As I'm modeling this out in C#, I want to create a base Document class that I use both for creating new documents and for updating existing documents. The question I have is: will it always be OK to include _id and _rev fields in my POSTs to create new documents, and just set them to null values? The behavior I see today seems to be that CouchDB will ignore those properties on POST, as I had hoped. Will this always be the case, or is it a bad practice to include those fields? Otherwise, I need to create separate Document base classes for creates vs. updates. Please tell me if this makes sense:

// documents have at a minimum the _id and _rev properties
public abstract class Document : IDocument
{
    public string _id { get; set; }
    public string _rev { get; set; }
}

public class Entity<T> : Document
{
    public T entityobject;
}

This way I always store my serialized objects in an entityobject field, and can use this for all operations.
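One defensive option, regardless of how the server treats null fields, is to drop _id/_rev from the serialized body when they are unset, so the create path never depends on server behavior. A sketch of that idea using only the Python standard library (names are hypothetical, and this is not CouchDB-specific advice):

```python
import json

def to_couch_json(doc):
    """Serialize a document dict, omitting _id/_rev when they are None,
    so a POST for a new document never sends null metadata fields."""
    cleaned = {k: v for k, v in doc.items()
               if not (k in ("_id", "_rev") and v is None)}
    return json.dumps(cleaned, sort_keys=True)

# New document: the null metadata fields are stripped before the POST body.
new_doc = {"_id": None, "_rev": None, "entityobject": {"name": "widget"}}
print(to_couch_json(new_doc))  # {"entityobject": {"name": "widget"}}

# Update: both fields are set, so they pass through untouched.
update = {"_id": "abc", "_rev": "1-xyz", "entityobject": {"name": "widget"}}
print(to_couch_json(update))
```

With this shape, a single base document type can serve both creates and updates, which matches the single-base-class design in the post.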
There are a lot of templating modules on CPAN. Some are obvious, some are hidden in very strange namespaces (e.g. HTML::Processor). Some are used a lot, some not. I read a lot of manpages, but definitely not all and none completely. If there is a module doing what I am proposing, please inform me!

Before we continue, read Perrin Harkins' "Choosing a Templating System", available here.

There are different types of Templating Systems available. Some are complete Application Frameworks, including stuff like Session Management, Form Handling etc. Examples include Mason, AxKit and Embperl. They are nice. They work. But that's not what I'm looking for. I want Just Templates. Why? Because IMO, the main reason for using templates is to separate code from markup. The code produces some data. The markup displays the data. Those Application Frameworks don't seem to be too good at separating code and markup (I have to admit, though, that I know next to nothing about them, only that they are too big/powerful). After all, they embed code in markup.

So I am looking for a "pipeline"-type, Just-Template System, which reduces the number of available modules somewhat. The best-known contenders here are Template Toolkit resp. Apache::Template and HTML::Template. But if you look at their manpages, you'll quickly find references to stuff like "TPL_LOOP" (HTML::Template). TT2 even has its own mini-language. So, once again, code (even rather trivial) mixed with the markup. There is one module, CGI::FastTemplate, that does separate code from markup completely. But the way different templates are strung together seems rather complicated to me.

But why is there no Templating System with a clean separation of code and markup? There are two types of code (at least) that pollute nearly all Templating Systems:

Loops are one of the things computers do best (for very good reasons, mainly laziness of humans). So, a template should be able to handle large amounts of similar data using ... a template.
Obvious. So a Templating System must handle loops. Most (all?) do it by adding some sort of LOOP or FOREACH syntax, thereby introducing code into the markup. But there is another way to loop over data: recursion.

Often IF-blocks are also used to present different kinds of data differently, which can lead to long series of IF-ELSIF-ELSE blocks. Which is a clear pointer that one should use Object Orientation instead. Another way would be to add something like attributes to the data. But as far as I know, attributes aren't included as tightly into Perl as OO. I didn't find anything. So I am proposing this:

The templates are completely dumb. There is absolutely no piece of code in a template - neither Perl nor "mini language". A template consists of arbitrary text (e.g. HTML) and Template Tags, e.g. % title %.

Your application builds up a data structure. The data structure consists of various Perl data types (strings, arrays, hashes) and Template::YetAnother objects (or data structures marked with some other kind of metainformation, e.g. with attributes).

The data structure gets passed to Template::YetAnother, which magically finds the right template for each object and replaces all Template Tags (recursively) with the dumped/stringified data structure. Template::YetAnother is like Data::Dumper on steroids. It's the big Stringifier. Template::YetAnother doesn't use one monolithic template, but a lot of small template fragments, each one correlating to a data type generated by the application.

Template::YetAnother is just an idea right now. I am trying the "write documentation, write tests, write code" way of development... There is only a small proof-of-concept type bit of code (I can send it/post it if somebody cares..). I'll really appreciate feedback on this. I hope that this description is clear enough. If not, let me know and I'll post some clarification / examples.
# generate a new template handler
my $th=Template::YetAnother->new ({
    namespace=>'acme',
    template_dir=>'/projects/acme/templates/',
});

# build up a data structure
$data={
    title=>$th->title('this is the title'),
    breadcrumb=>[
        $th->link({url=>'/index.html',text=>'Home'}),
        $th->separator_breadcrumb,
        $th->link({url=>'/acme/index.html',text=>'Acme'}),
        $th->separator_breadcrumb,
        $th->link({url=>'/acme/bleach.html',text=>'Bleach'}),
    ],
    content=>[
        $th->element({heading=>'SYNOPSIS',value=>'blabla'}),
        $th->element({heading=>'DESCRIPTION',value=>'foo bar'}),
    ],
    lastmod=>scalar localtime,
};

# fill template & print
print $th->fill({data=>$data});

##################################################
# for this to work, we need the following files in
# /projects/acme/templates

# file: main.tpl
<html><head><title>[% title %]</title></head>
<body>
<center>[% breadcrumb %]</center>
<h1>[% title %]</h1>
[% content %]
<hr>
[% lastmod %]

# file: link.tpl
<a href='[% url %]'>[% text %]</a>

# file: separator.tpl
/

# file: element.tpl
<h3>[% heading %]</h3>
<p>[% value %]</p>

##################################################
# the finished template should look like this:

<html><head><title>this is the title</title></head>
<body>
<center>
<a href='/index.html'>Home</a> / <a href='/acme/index.html'>Acme</a> / <a href='/acme/bleach.html'>Bleach</a>
</center>
<h1>this is the title</h1>
<h3>SYNOPSIS</h3>
<p>blabla</p>
<h3>DESCRIPTION</h3>
<p>foo bar</p>
<hr>
Thu Nov 7 21:51:05 200

my $th=Template::YetAnother->new({
    template_dir=>'/path/to/templates/',
    # namespace=>'projectname',
    # start_tag=>'<--',
    # end_tag=> '-->',
});

fill

$th->fill($data);

Fill the template with the data in the data structure.

_gen

my $fragment=$th->_gen('type',$data)

Generates a new Template Fragment. You usually do not have to call this. You just say

$th->type($data)

and AUTOLOAD passes it to "_gen".

--
#!/usr/bin/perl
for(ref bless{},just'another'perl'hacker){s-:+-$"-g&&print$_.$/}
In my proposed system, your example probably would look like this: # in the App: if ($sth->rows == 0) { $data->{'people'}=$th->people_none(); return; } my @people; while (my $r=$sth->fetchrow_hashref) { push(@people,$th->person($r); } $data->{'people'}=@people; return; # template people_none.tpl <td colspan=3No people found</td> # template person.tpl <tr><td>[% last_name %]</td> <td>[% first_name %]</td> <td>[% department %]</td></tr> [download] By pushing the presentation logic back into the application, then the application needs to know how the data is to be presented. If later you need to change the presentation of the data, you're forced to change the application! The application doesn't need to know how the data is to be presented. It only needs to know what sort of data it is handling. The data then gets tagged (mis)using Perl's OO fetures (attributes would be another solution, I guess). The Templating System looks at those tags that describe data, chooses the appropriate template and fills in the data. One reason for starting to to think about my proposal was the problem of testing web applications. I asked Schwern about that on YAPC::EU 2002 and he said (more or less) one simple way to test Web Apps is to test the data each function/method returns before it gets passed to the template (So you do not have to parse the HTML in the test...) Until now I use a simple homegrown regex as a "templating system". It sucks. But none of the multitude of Templating Systems an CPAN really convinced me. But maybe I should take a much closer look at TT (which seems to be one of the better Templating Systems). I used TT a little bit when working on the mod_perl site and it definitly wasn't love at the first sight... Thanks for the feedback, anyway! I + Hmm. There is indeed some similarity. But there are two important differences: I am still not sure if i missed with my idea or just didn't describe it properly... -- I'm not belgian but I play one on TV. 
*Q::include = \&Text::Template::_load_text;

I don't (yet) have strong feelings on the technical merits of your proposal; I vacillate with each argument that I read. On the subject of the name, I think you already said a good candidate, should you decide to go ahead: Just::Templates. That seems to sum up your intent exactly :)

Actually, I can think of quite a few other modules that could do with being factored into the Just::* namespace. I think a set of small, (fast), clean modules that do one thing and do it exceptionally well would be a good antithesis to some of the modules I've looked at that take a single good idea and then wrap it up with a gazillion unrequired or little-used variations on the theme..

Makes sense. And I withdraw the (originally somewhat lighthearted) suggestion. However, there are many modules that sit in one namespace but could equally well sit in several namespaces. The obvious example is CGI.pm. Belg4mit has just:^) recently pointed out CGI::Minimal, which I hadn't encountered, and maybe Template::Minimal is a better choice. My thought was that I would appreciate a top-level space where there were modules that did exactly one thing and nothing more. Within that they could be categorised in the normal way. Oh well. T'was j..er.. only a thought.

Quite why you associate the word "just" with arrogance escapes me.

You might want to ask yourself why this is; I find that invariably, as a system I am working on grows, I end up needing a LOOP or IF construct in the templates -- and I think the prevalence of these two constructs in templating engines perhaps shows that I am not alone. Basically what you are proposing is to simply drop functionality from an existing templating system. What does this gain you over simply not using those options? If the answer is nothing, then there is no real need for this "new" templating system.
Of course developing for the learning experience is always an option, but I don't think CPAN needs YATS (Yet Another Templating System).
Pythonista 3 + StaSh + Plotly... I'm Close to a meltdown.

Hi everyone! I have tried to install Plotly in Pythonista and I keep getting "cannot import name" errors. In a final try I made a fresh install of Pythonista 3 and gave it a new go! Same procedure on iPhone 6SE (12.1.2), iPad mini 4 (12.1.1) and iPad air 2 (12.1.1).

TLDR;
Pythonista + StaSh: different folder structure on 12.1.2 vs 12.1.1.
Pythonista + StaSh + Plotly: same folder structure for 12.1.2 and 12.1.1. Empty local folders (/bin /fsi /man /patches). Is that normal?
PIP: I can upgrade on one iPad but not the other. --upgrade is an unrecognized argument.
Same error on all installations for line 1 in script: import plotly. Everything crashes. 😿

(I have managed to install qhue ()[] with StaSh, and use it, without any problems.)

Installing StaSh
- In Console: import requests as r; exec(r.get('').text)
- Restarted Pythonista
- Ran the file launch_stash.py
- Restarted Pythonista
- Ran the file launch_stash.py again. I get a shell.

The folders on iPhone
Is this how the folder structure should be built? With a folder for stash_extensions with 4 empty folders? They remain empty, even after I have installed plotly.

SCRIPT LIBRARY
|-- This iPhone
|   |-- [Examples]
|   |-- [stash_extensions]
|       |-- [bin] - empty
|       |-- [fsi] - empty
|       |-- [man] - empty
|       |-- [patches] - empty
|-- iCloud
|-- Python Modules
|   |-- Standard Library 2..
|   |-- Standard Library 3..
|   |-- [site-packages] (empty)
|   |-- [site-packages-2] (empty)
|   |-- [site-packages-3] (empty)
|-- File Templates
|   |-- Readme.md

Installing plotly
1. Ran the pip install plotly command in the shell. Took half forever. Got a warning right in the beginning saying "cannot find dependencies".. this is the result:

[...]
Running handler 'console_scripts installer'...
No entry_points.txt found, skipping.
Cleaning up...
Package installed: plotly
[~/Documents]$

[site-packages-3] now has several folders. It seems they are placed in root; there is no folder specific to plotly.
As a plot twist: the iPads don't get the empty folders inside [This iPad] when installing StaSh. Instead the folders appeared after I installed plotly. They also remain empty. I get the same error on the iPads as the iPhone when running the same script. I tried from both iCloud and This iPhone/iPad.

2. I run a script and get stuck on line 1, import plotly:

ImportError: cannot import name 'exceptions'
plotly.py <module> (Line 30)

Print Traceback:

Traceback (most recent call last):
  File "/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/Documents/Temperature/plotly_entropia.py", line 1, in <module>
    import plotly
  File "/private/var/mobile/Containers/Shared/AppGroup/63784CB9-E25E-4FFB-A7F4-D38A7B74A24B/Pythonista3/Documents/site-packages-3/plotly/__init__.py", line 10, in <module>
    from . plotly import (
  File "/private/var/mobile/Containers/Shared/AppGroup/63784CB9-E25E-4FFB-A7F4-D38A7B74A24B/Pythonista3/Documents/site-packages-3/plotly/plotly.py", line 30, in <module>
    from plotly import exceptions, files, session, tools, utils
ImportError: cannot import name 'exceptions'

Not working. Going bananas.

1. But I read that an update could help! plotly.py issue 104. However - as the mess that I am - I did an upgrade of StaSh and not of plotly. selfupgrade -f, on iPhone, made Pythonista crash. And as it keeps crashing I will probably have to start over from scratch there...

2. Instead! I move to the iPad air 2! pip install plotly --upgrade, as stated on, did not work. pip install --upgrade plotly >> No download available for upgrade: 0.0.1..

3. The last straw: change a url in /bin/. From topic/4733/stash-install: "The PyPi API was changed. The new link is '{}/json'. Thus, the simplest way to fix this is to find the pip.py (~/Modules/site-packages/stash/bin/pip.py) and change the old link ({}/json) to '{}/json'." I tried that and restarted the app. Now also Pythonista on the iPad air 2 is crashing...

4.
Trying out the upgrade of step 8 on the iPad mini 4. Either way I formatted it:

>> usage: pip.py [-h] [--verbose] [-6] sub-command ...
>> pip.py: error: unrecognized arguments: --upgrade

I need to take a walk now. Any help would be greatly appreciated! 💟

Hi @hecate, I can confirm that this is currently broken. However, there is a workaround (though this was tested using py2, not py3):

1. Fixing your install
When trying to reproduce this issue, my StaSh install broke, so I assume this is the same for you.
- Delete everything in site-packages-3 (or 2, if you use py2).
- Force-quit Pythonista and restart.
- Run pip uninstall plotly. Pythonista crashed for me during this step. If it does, just repeat this.

2. Install plotly
Go to site-packages/stash/bin/pip.py and, in PyPIRepository.install(), replace wheel_priority=True with wheel_priority=False. This line should be somewhere around 1052. Then run pip install plotly. If you get an error telling you that plotly is already installed, run pip uninstall plotly. Then repeat. Plotly should now be installed. I still got some import errors regarding decorators, but it should now be possible to simply install this package/module.

So, now the other issues:

"Empty local folders (/bin /fsi /man /patches). Is that normal?"
Depends. If you mean the directory /stash_extensions/bin, then yes. This directory is used for extensions, like external commands installed by pip. If you mean directories inside site-packages/stash/, then no.

"PIP: I can upgrade on one iPad but not the other. --upgrade is an unrecognized argument."
In StaSh, there is a pip update command. I am surprised that pip install --upgrade worked. Maybe one of your devices uses an old version? If you want to check, try running version. The latest version is 0.7.1.

TLDR; It seems like there is a bug in the wheel installation handling. Sorry.

Edit/Update: I found the bug and fixed it, but the fix is not yet in the main StaSh repo.
To install the fixed version, simply run selfupdate -f bennr01:wheelfix. Please note that you should still remove the wrongly installed directories in site-packages-3. If you delete everything in site-packages-3, then you should also delete site-packages-3/.pypi_packages (e.g. rm $HOME/site-packages-3/.pypi_packages).

Just curious, but does Dash run under Pythonista or are there incompatibilities?

Thank you kindly for your detailed and very helpful (and very fast!!) answer! I have more busy days now that vacation is over.. I did a reinstall of both Pythonista and StaSh on my iPhone this far, and updated per your instructions to get the fixed version. It doesn't seem to work still, and I wonder if there could be something with my code. I thought maybe it would be easier to find the problem if you have the same as me.

import plotly
plotly.tools.set_credentials_file(username='[username]', api_key='[token]')
import plotly.plotly as py
from plotly.graph_objs import *
data = Data([
    Scatter(x=[1, 2], y=[3, 4])
])
plot_url = py.plot(data, filename='my plot')

It's an example I found in a tutorial and I have replaced username and token for my own when I run it. This generated the following traceback:

Traceback (most recent call last):
  File "/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/Documents/Temperature/plotly_entropia.py", line 1, in <module>
    import plotly
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/plotly/__init__.py", line 31, in <module>
    from plotly import (plotly, dashboard_objs, graph_objs, grid_objs, tools,
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/plotly/plotly/__init__.py", line 10, in <module>
    from . plotly import (
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/plotly/plotly/plotly.py", line 31, in <module>
    from plotly.api import v1, v2
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/plotly/api/v1/__init__.py", line 3, in <module>
    from plotly.api.v1.clientresp import clientresp
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/plotly/api/v1/clientresp.py", line 9, in <module>
    from plotly.api.v1.utils import request
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/plotly/api/v1/utils.py", line 9, in <module>
    from plotly.api.v2.utils import should_retry
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/plotly/api/v2/__init__.py", line 3, in <module>
    from plotly.api.v2 import (dash_apps, dashboards, files, folders, grids,
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/plotly/api/v2/dash_apps.py", line 6, in <module>
    from plotly.api.v2.utils import build_url, request
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/plotly/api/v2/utils.py", line 127, in <module>
    stop_max_delay=180000, retry_on_exception=should_retry)
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/retrying.py", line 47, in wrap
    @six.wraps(f)
AttributeError: module 'six' has no attribute 'wraps'

If I run my darksky-app (that I for sure can have broken myself!) it doesn't seem to get 'core'?
The traceback:

Traceback (most recent call last):
  File "/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/Documents/weather_bagarmossen.py", line 4, in <module>
    from darksky import forecast
  File "/private/var/mobile/Containers/Shared/AppGroup/B45A435E-6A06-4C49-A432-C693B398B81F/Pythonista3/Documents/site-packages-3/darksky/__init__.py", line 1, in <module>
    from core import *
ModuleNotFoundError: No module named 'core'

The folders
On the phone I still get fsi, man, patches in stash_extensions that are empty... bin is also empty. I seem to be running the last version of StaSh in either case :) I will try this on the iPads asap and come back with the results!

After getting the six wraps error, try:

import six
print(six.__file__)
print(sys.path)

It seems possible that either pip tried to install six (and failed), or you have a file six.py or folder named six in your Documents folder, which gets precedence on sys.path. For your darksky problem, again, try printing sys.path and see if that explains anything. In the Pythonista environment, whenever you hit "play", the path of the open script gets inserted at the start of sys.path. Meaning any folders under Documents (in your example) get precedence. One would expect for darksky you should have a core.py (or folder called core that includes an __init__.py) within site-packages-3, site-packages-3/darksky, or Documents. Do you know if you are running StaSh in the 2.7 or 3.6 environment?

import six
print(six.__file__)
print(sys.path)

Gave this response:

>>> import six
>>> print(six.__file__)
... /var/containers/Bundle/Application/A2676392-B9EC-4C7E-94C4-E1AA04500EDF/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/site-packages/six.py
>>> print(sys.path)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
NameError: name 'sys' is not defined

In site-packages-3/darksky/ there is a core.py and __init__.py.
When I run version in StaSh:

StaSh v0.7.1
Pythonista 3.2 (320000)
iOS 12.1.2 (64-bit iPhone8,4)
Python 3.6.1
root: ~/Documents/site-packages/stash
core.py: 2019-01-06 15:13:31
SELFUPDATE_TARGET: master
BIN_PATH:
  ~/Documents/bin
  ~/Documents/stash_extensions/bin
  ~/Documents/site-packages/stash/bin

I did upgrade per instructions of @bennr01: selfupdate -f bennr01:wheelfix. DarkSky worked when I installed this before, without the wheelfix upgrade.

"On the phone I still get a fsi, man, patches in stash_extensions that are empty... bin is also empty."
Do not worry about these folders. They are meant for extensions and are empty by default. In fact, stash_extensions/bin/ is the only one of these folders which is actively used (pip installs commands here).

"It's an example I found in a tutorial and I have replaced username and token for my own when I run it."
I just tested this example on both py2 and py3 and it seems to work (I get some warnings that plotly.graph_objs.Data is deprecated and also an error message that there is no account for [username] (obviously), but it imported and executed fine). I had to install decorator and retrying and also restart Pythonista a few times.

"This generated the following traceback:"
Have you tried restarting Pythonista (double tap home, then force-quit the app and start again)? Pythonista caches imports, which sometimes leads to weird issues and old imports. You should always restart Pythonista if you installed a package to fix an ImportError (or something related).

"If I run my darksky-app (that I for sure can have broken myself!) it doesn't seem to get 'core'?"
Same as above (btw, import works fine for me, but I have not tested actual functionality). BTW, you should avoid from module import *-style imports, as they may lead to import-related problems. A better import would be from darksky.core import *. If you are interested in an explanation: as @JonB said, sys.path is modified when a script is executed.
This means that Python searches for imports in a different order depending on the executed script and your CWD. For example, if you would execute/import darksky/__init__.py from within the site-packages/stash repository, import core would import stash.core instead of darksky.core. This would lead to problems or unexpected behavior.

"Python 3.6.1"
The py3 version of StaSh is still a bit experimental, so there may be more errors compared to py2. That being said, everything above should work fine (I tested your example on both py2 and py3 and darksky on py2).

"I did upgrade per instructions of @bennr01: selfupdate -f bennr01:wheelfix. DarkSky worked when I installed this before, without the wheelfix upgrade."
Is there a core.py file in site-packages-3/darksky/? If there is, then you are probably experiencing some of the CWD-related problems mentioned above..

"Do not worry about these folders. They are meant for extensions and are empty by default."
Ok!

"I just tested this example on both py2 and py3 and it seems to work"
Wow. I cannot get it to work. :/

"Have you tried restarting pythonista (double tap home, then force-quit the app and start again)?"
I have restarted the app countless times, exactly that way. Last try of installing Plotly, Pythonista couldn't reopen again; it crashed while loading. But hey, I shouldn't have to keep restarting it?

"Same as above (btw, import works fine for me, but I have not tested actual functionality)."
Wow again. I will have to give this another go!

"BTW, you should avoid from module import *-style imports, as they may lead to import-related problems. A better import would be from darksky.core import *."
I changed this part of the code, restarted again and again, but it still won't run. I tried to remove darksky from Python Modules/site-packages-3, where I removed the darksky folder.
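The import shadowing bennr01 describes can be reproduced anywhere, not just in Pythonista. A minimal sketch (the directory and module names here are made up for illustration) showing that whichever directory sits first on sys.path wins a bare import core:

```python
import os
import sys
import tempfile

# Create two directories, each containing its own core.py.
root = tempfile.mkdtemp()
for name, text in [("appdir", "WHO = 'app'"), ("libdir", "WHO = 'lib'")]:
    d = os.path.join(root, name)
    os.mkdir(d)
    with open(os.path.join(d, "core.py"), "w") as f:
        f.write(text + "\n")

# Whichever directory comes first on sys.path wins the bare import.
sys.path.insert(0, os.path.join(root, "libdir"))
sys.path.insert(0, os.path.join(root, "appdir"))

import core
print(core.WHO)  # 'app' — appdir/core.py shadows libdir/core.py
```

This is exactly why Pythonista prepending the open script's folder to sys.path can make `import core` resolve to the wrong package.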
I did about 10 restarts of the app and then tried to reinstall through StaSh: "Error: Package is already installed". I did a reinstall of Pythonista but I don't feel like trying to get this Plotly to work anymore. I would be so happy if the scripts I had could just work as they used to..

"Is there a core.py file in site-packages-3/darksky/? If there is, then you are probably experiencing some of the CWD-related problems mentioned above."
Yep, there was, before I reinstalled it all over again. The change to from darksky.core import * didn't make any difference unfortunately. And thank you and @JonB for explaining how sys.path works!

I tried installing darksky with StaSh running Python 2.7. It was installed. Restarted the app a couple of times. Ran the script with Python 2.7; I get an error with the syntax, fair enough, though it doesn't print the traceback — I get an empty console. Regardless of whether I use from darksky import forecast or from darksky.core import * I get the same result. I guess that's as far as I can manage to get with this.. Can I use a js-based graph tool in Pythonista? If you have any suggestions for this I will gladly listen. Thank you again @bennr01 and @JonB for all your help!!

Sorry, is darksky your own repo? And you installed with pip? Can you provide the pip commands you used?

This post is deleted! (last edited by @JonB)

@JonB Thank you!! I actually wrote an answer along the lines of "I used pip install darksky and lalala"; the second I clicked [Submit] I realized I have been writing darksky instead of darkskylib... installing the ****** wrong library. 😳 😫 🙁 The library: it's not my own, and it actually doesn't say how to install it in the readme. I have just assumed.. So I found the reason, I thought..!

On the iPad air 2, that had a clean install of Pythonista...
- I installed StaSh, restarted app. pip install darkskylib, restarted app. pip install requests, restarted app.
- Ran the script: "Cannot find module for chardet."
Which I thought was strange, but I realized I didn't update StaSh this time around. So I did selfupdate -f bennr01:wheelfix. Tried again, same thing.
- I removed all the files I could find for darkskylib.
- I restarted Pythonista.
- Ran pip install darkskylib again; it said it was already installed.

Did a full reinstall of Pythonista. New round.
- Install StaSh, restart app.
- Update StaSh, restart app.
- Install requests, restart app.
- Install darkskylib:

[~/Documents]$ pip install darkskylib
<class 'ModuleNotFoundError'>: No module named 'chardet'
[~/Documents]$ pip install chardet
<class 'ModuleNotFoundError'>: No module named 'chardet'

Is there a reason you need the latest requests, vs the one built into Pythonista? You need to check site-packages, site-packages-3, and site-packages-2, and remove (via the file browser menu in Pythonista, under Modules) the requests, chardet, and idna folders, if present. Then force-quit Pythonista. Then import requests should work. Then pip should be working again. If you need a new requests version, you could try:

pip install idna
pip install chardet
pip install requests

I have not tried this with the current StaSh version... A safer approach might be to use pip download, then manually unzip/untar and copy the folders manually.

I finally got around to trying this on the latest StaSh. @bennr01, it seems the issue is that pip is not detecting dependencies using the whl. Because of this, it is necessary to manually install each dependency:

rm -f site-packages*/requests
rm -f site-packages*/chardet
rm -f site-packages*/idna
pip install requests
pip install chardet
pip install idna

Do this all before running any other pip commands after starting the app. The last three must be done within a single session. Then force-quit Pythonista, import requests, and all should be happy.

@bennr01 here is the whl install, showing it couldn't find dependencies.

[~/Documents]$ pip install chardet
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/06FD02C2-F3B4-4512-8629-018C9F64EA15/tmp/chardet-3.0.4-py2.py3-none-any.whl (133356 bytes)
133356 [100.00%]
Installing wheel: chardet-3.0.4-py2.py3-none-any.whl...
Extracting wheel..
Extraction finished, running handlers...
Running handler 'WHEEL information checker'...
Wheel generated by: bdist_wheel (0.29.0)
Running handler 'dependency handler'...
Running handler 'top_level.txt installer'...
Copying /private/var/mobile/Containers/Data/Application/06FD02C2-F3B4-4512-8629-018C9F64EA15/tmp/wheel_tmp/chardet-3.0.4-py2.py3-none-any.whl/chardet -> /private/var/mobile/Containers/Shared/AppGroup/C534C622-2FDA-41F7-AE91-E3AAFE5FFC6B/Pythonista3/Documents/site-packages-3
Running handler 'console_scripts installer'...
Cleaning up...
Package installed: chardet
[~/Documents]$ pip install idna
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/06FD02C2-F3B4-4512-8629-018C9F64EA15/tmp/idna-2.8-py2.py3-none-any.whl (58594 bytes)
58594 [100.00%]
Installing wheel: idna-2.8-py2.py3-none-any.whl...
Extracting wheel..
Extraction finished, running handlers...
Running handler 'WHEEL information checker'...
Wheel generated by: bdist_wheel (0.32.2)
Running handler 'dependency handler'...
Warning: could not find 'metadata.json', can not detect dependencies!
Running handler 'top_level.txt installer'...
Copying /private/var/mobile/Containers/Data/Application/06FD02C2-F3B4-4512-8629-018C9F64EA15/tmp/wheel_tmp/idna-2.8-py2.py3-none-any.whl/idna -> /private/var/mobile/Containers/Shared/AppGroup/C534C622-2FDA-41F7-AE91-E3AAFE5FFC6B/Pythonista3/Documents/site-packages-3
Running handler 'console_scripts installer'...
No entry_points.txt found, skipping.
Cleaning up...
Package installed: idna
[~/Documents]$ ls site-packages-3
site-packages-3/:
Readme.md chardet idna midiutil requests

There should be a fix in the dev branch of the main repo, or did the fix not work?
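The manual route JonB suggests (pip download, then unzip and copy the package folder) works because a .whl file is just an ordinary zip archive with the package directory at its top level. A small illustrative sketch using a synthetic wheel built in memory (demopkg is a made-up name; a real wheel would come from pip download):

```python
import io
import os
import tempfile
import zipfile

# Build a tiny synthetic wheel in memory: a zip containing a package
# directory and a dist-info directory, just like a real .whl.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as whl:
    whl.writestr("demopkg/__init__.py", "VERSION = '1.0'\n")
    whl.writestr("demopkg-1.0.dist-info/METADATA", "Name: demopkg\n")

# "Manually install" it: extract everything into a stand-in for
# site-packages-3. The package folder is then importable from there.
site_packages = tempfile.mkdtemp()
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as whl:
    whl.extractall(site_packages)

print(sorted(os.listdir(site_packages)))
```

The dist-info directory can be deleted afterwards; only the package folder itself is needed for imports.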
"I finally got around to trying this on the latest StaSh. @bennr01, it seems the issue is that pip is not detecting dependencies using the whl."
Thanks for the info, I will take a look at it. This line looks interesting: Warning: could not find 'metadata.json', can not detect dependencies!. Apparently the .whl format is a bit more flexible than I thought...

- headsphere
Not sure if this is related or if I should start another thread, but the error I'm getting when installing plotly is related to installing the attrs dependency:

Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/AA8947EF-3A9D-4275-BC4A-9708A8765CD1/tmp//attrs-19.1.0-py2.py3-none-any.whl (35784 bytes)
35784 [100.00%]
Installing wheel: attrs-19.1.0-py2.py3-none-any.whl...
<class 'UnicodeDecodeError'>: 'ascii' codec can't decode byte 0xe2 in position 5690: ordinal not in range(128)
[~/Documents]$ version

Here is the output I'm getting from version:

StaSh v0.7.2
Pythonista 3.2 (320000)
iOS 12.2 (64-bit iPad7,4)
Python 3.6.1
root: ~/Documents/site-packages/stash
core.py: 2019-05-04 17:47:40
SELFUPDATE_TARGET: master
BIN_PATH:
  ~/Documents/bin
  ~/Documents/stash_extensions/bin
  ~/Documents/site-packages/stash/bin

Any clever ideas?

@headsphere You could try disabling wheels for the installation using pip --verbose install --no-binary :all: attrs.
Chad Crabtree

I reside in Livonia, Michigan and I work at a coffee shop. I have been using Python for about 5 years, after reading the How to Become a Hacker article. I have thus far written about 10k loc and feel that I'm finally starting to get this programming thing. I recently took part in a Code Clinic from the Tutor mailing list; check it out on BrianvandenBroek's page. My company can be found at ; all the development I do is in Python.

Random Writer Code Clinic: My Analysis

Thanks for your implementations. I looked it over; it's interesting how differently we all ended up doing this, and also how similar they were. I think Christian's object approach is the most elegant I've seen thus far for this problem. Mine is object-based with procedural driver code. David's is purely procedural. Brian's was object-oriented.

I did a little benchmarking for the fun of it, and I wanted to know why my implementation was so much slower than Christian's and about the same as David's. I figured when I wrote this it would be very competitive on the timing because I only needed to loop through the file once. However, the initial benchmark gave the below results with the parameters (5 500 tom.txt out.txt):

Mine: 2.41900018019
David's: 2.3409288052
Christian's: 1.92717454313
Brian's: 8.40194857168

I was thinking: why are mine about the same even though I pre-cached all seed combinations? After looking through their code I saw that they looped through the file each time they needed a new character, which is how I did it at first. However, I did it approximately like this:

[s[n:n+len(seed)] for n,v in enumerate(s) if s[n:n+len(seed)]==seed]

I did a pure Python loop instead of the implied C loops that the other two did. This is why their implementation of looping through the file for each character is pretty fast. So I thought, 'Ah-ha, I know how to hit their algorithms.' With the parameters (10 5000 tom.txt out.txt) I got these results.
Mine: 3.67060087246
David's: 18.8382646398
Christian's: 15.3224465679
Brian's: 143.980883604

Now here's some profiles with the settings (10 5000 tom.txt out.txt).

Mine:

9988 function calls in 3.856 CPU seconds
Ordered by: standard name
ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
1       0.000    0.000    3.829    3.829    <string>:1(?)
1       0.026    0.026    3.856    3.856    profile:0(main())
0       0.000             0.000             profile:0(profiler)
4991    0.057    0.000    0.057    0.000    random.py:229(choice)
1       2.637    2.637    2.637    2.637    randomwriter.py:11(index_text)
1       0.040    0.040    0.040    0.040    randomwriter.py:24(get_first_seed)
4990    0.180    0.000    0.237    0.000    randomwriter.py:36(get_next_letter)
1       0.000    0.000    2.637    2.637    randomwriter.py:4(__init__)
1       0.074    0.074    2.989    2.989    randomwriter.py:51(buildoutput)
1       0.840    0.840    3.829    3.829    randomwriter.py:59(main)

David's:

15010 function calls in 18.463 CPU seconds
Ordered by: standard name
ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
1       0.000    0.000    18.436   18.436   <string>:1(?)
1       0.027    0.027    18.463   18.463   profile:0(main())
0       0.000             0.000             profile:0(profiler)
5001    0.115    0.000    0.115    0.000    random.py:229(choice)
1       0.000    0.000    0.000    0.000    random.py:90(seed)
1       0.096    0.096    18.436   18.436   rwspiffy.py:102(main)
1       0.001    0.001    0.001    0.001    rwspiffy.py:38(validateArgs)
1       0.000    0.000    0.175    0.175    rwspiffy.py:65(newSeed)
5000    17.773   0.004    17.773   0.004    rwspiffy.py:76(nextAfterSeed)
5000    0.277    0.000    18.164   0.004    rwspiffy.py:85(doSeed)
1       0.000    0.000    0.000    0.000    sets.py:119(__iter__)
1       0.175    0.175    0.175    0.175    sets.py:356(_update)
1       0.000    0.000    0.175    0.175    sets.py:425(__init__)

Christian's:

29958 function calls in 16.217 CPU seconds
Ordered by: standard name
ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
1       0.000    0.000    16.050   16.050   <string>:1(?)
1       0.166    0.166    16.217   16.217   profile:0(main())
0       0.000             0.000             profile:0(profiler)
1       0.000    0.000    0.000    0.000    random.py:135(randrange)
1       0.000    0.000    0.000    0.000    random.py:198(randint)
4990    0.096    0.000    0.096    0.000    random.py:229(choice)
1       0.001    0.001    16.050   16.050   randomwrite.py:8(main)
1       0.000    0.000    0.000    0.000    randomwriter.py:28(_selectSeed)
4990    15.404   0.003    15.404   0.003    randomwriter.py:33(_getMatches)
1       0.004    0.004    0.005    0.005    randomwriter.py:4(__init__)
4990    0.094    0.000    0.094    0.000    randomwriter.py:43(_getSubChars)
5000    0.075    0.000    0.075    0.000    randomwriter.py:55(_writeChar)
4990    0.046    0.000    0.046    0.000    randomwriter.py:60(_updateSeed)
4990    0.281    0.000    15.995   0.003    randomwriter.py:64(Step)
1       0.049    0.049    16.045   16.045   randomwriter.py:74(Run)

Brian's:

81353 function calls in 141.329 CPU seconds
Ordered by: standard name
ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
1       0.089    0.089    141.303  141.303  <string>:1(?)
1       0.000    0.000    0.000    0.000    brian_random_writer.py:177(__init__)
1       50.521   50.521   141.208  141.208  brian_random_writer.py:192(get_randomwriter_text)
1       0.001    0.001    0.001    0.001    brian_random_writer.py:228(get_input)
1       0.116    0.116    0.130    0.130    brian_random_writer.py:255(get_orig_text)
1       0.000    0.000    0.000    0.000    brian_random_writer.py:269(get_novel_seed)
27871   0.908    0.000    90.555   0.003    brian_random_writer.py:287(get_new_character)
25600   89.043   0.003    89.043   0.003    brian_random_writer.py:316(get_choices)
1       0.006    0.006    141.214  141.214  brian_random_writer.py:365(main)
1       0.014    0.014    0.014    0.014    brian_random_writer.py:79(reader)
1       0.026    0.026    141.329  141.329  profile:0(main())
0       0.000             0.000             profile:0(profiler)
1       0.000    0.000    0.000    0.000    random.py:135(randrange)
1       0.000    0.000    0.000    0.000    random.py:198(randint)
27871   0.604    0.000    0.604    0.000    random.py:229(choice)

I'm going to paste the code from the hot spots identified here. This is the one that came with mine.
def index_text(self):
    index = {}
    textsize = len(self.text)
    for n, v in enumerate(self.text):
        if n + self.size <= textsize:
            seed = self.text[n:n+self.size]
        else:
            seed = self.text[n:textsize]
        if index.has_key(seed):
            index[seed].append(n)
        else:
            index[seed] = [n]
    return index

The reason I ended up doing this is because the first time I solved the problem I got correct results but terrible execution time, so I profiled it to find out what the problem was. I was using a loop very similar to the one I'm using above. Before, I was doing this for each character; instead I cached my results here and used them on every seed lookup. Every possible seed is already in the dictionary. Apparently dictionary lookups are quite fast.

This is David's hot spot.

def nextAfterSeed(seed):
    """
    takes seed to Sort and find probability for next character
    based on source
    """
    # abstracted for onceandonlyonce principle from doSeed
    tempvariable = sourcefile.split(seed)
    # first entry of split sourcefile is not a probable character
    del tempvariable[0]
    # OneLiner, return list of first characters in tempvariable, no blanks
    return [item[:1] for item in tempvariable if len(item) > 0]

I thought this was interesting; it never crossed my mind, and it gives good performance on small values because it uses implied 'C' based loops. The one I used was Python-only, so it was very slow. He loops through the file each time for each character needed in the output. This makes good sense: split the file on the seed, then the first character of each subsequent string is the one that is wanted. I have a feeling the string slices are slowing this one down a little; I'm not really sure.
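The precompute-once index that index_text builds can be written the same way with collections.defaultdict. This is just a restatement of the idea for comparison, not code from the clinic:

```python
from collections import defaultdict

def build_index(text, size):
    # Same precompute-once idea as index_text above: map every seed
    # to the list of positions where it starts, so each later lookup
    # is a single dict access instead of a scan over the whole text.
    index = defaultdict(list)
    for n in range(len(text)):
        # Slicing naturally truncates at the end of the string,
        # which covers the shorter tail seeds as well.
        index[text[n:n+size]].append(n)
    return index

idx = build_index("abcabc", 2)
print(idx["ab"])  # [0, 3] — the seed 'ab' starts at positions 0 and 3
```

The trade-off is the same one the benchmarks show: one up-front pass over the text buys constant-time lookups for every character generated afterwards.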
    def _getMatches(self):
        """build a list of indexes of the current seed in the text"""
        matches = []
        match = self.text.find(self.seed)
        matches.append(match)
        while match != -1:
            match = self.text.find(self.seed, match+1)
            matches.append(match)
        return matches

I needed to look this up, so I'm putting it here for the sake of others too:

    find(sub[, start[, end]])
        Return the lowest index in the string where substring sub is found, such that
        sub is contained in the range [start, end). Optional arguments start and end
        are interpreted as in slice notation. Return -1 if sub is not found.

This is not so different in concept from the previous hot spot: find the matches of the seed and keep their indexes. He then passes the indexes to the character-grabber function, which takes each index, adds the seed length, and takes the character at that position. It is fast enough because of the implied C loops as well.

This is Brian's hot spot.

    def get_choices(self):
        search_point = self.orig_text.find(self.seed, 0, -1)
        choices = []
        while True:
            if search_point == -1:
                if choices == []:
                    self.get_novel_seed
                else:
                    break
            else:
                choices.append(
                    self.orig_text[search_point + len(self.seed)])
            search_point = self.orig_text.find(self.seed, search_point + 1, -1)
        return choices

This does not look too different from how Christian solved the problem, but from the profile, compared to ours, Brian's makes an unreasonable number of calls to the biggest hot spot: 27k calls to this function. To figure this out we need to look at the second-heaviest hot spot to see why this is so.
    def get_randomwriter_text(self):
        self.get_input()
        self.get_orig_text()
        word_count = old_word_count = 0
        self.get_novel_seed()
        while word_count < self.output_text_length:
            new_character = self.get_new_character()
            # stored as used x2
            self.output_text = self.output_text + new_character
            old_word_count = word_count
            word_count = len(self.output_text.split())
            self.seed = self.seed[1:] + new_character

I was baffled for a little while about what was going on in the above code that caused all the function calls; then, on the line word_count = len(self.output_text.split()), I noticed that this would cause the output to be much longer. I changed the line in his program to not split() the string and got the correct output and a much speedier 1.77 seconds on 5 500 tom.txt. The rest of his algorithm was the same as ours; just those 8 characters changed things and caused a great slowdown. On the parameters 10 5000 tom.txt he got 16.5 seconds. He uses basically the same algorithm as Christian, but he caches his previous results: every time he gets a new seed he looks in the cache, and if it already exists he uses that; if not, he makes a new entry and caches it. This causes his implementation, sans .split(), to be a bit faster than Christian's.

Yet another metric to look at when inspecting code is lines of code. Brian's code was 364 LOC with a whole bunch of comments and doc strings. Christian's code was 128 LOC with comments and spaces between functions. David's was 124 with copious comments and good spacing. Mine was 108 after I took out all the extra whitespace. You can find the originals here: Randomwriter Clinic Source

Note: I did make changes to the sources in order to do the profiling and to use the timeit module. - In the case of Brian's code I hard-coded the parameters in order to get accurate results for profile and timeit ...
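That eight-character difference is easy to reproduce outside the clinic code. The sketch below is my own illustration (not code from any of the profiled programs): it contrasts recounting words with len(output.split()) after every appended character, which does work proportional to the whole output on each iteration, against tracking the count incrementally in O(1) per character.

```python
def count_by_resplitting(chars):
    """Recount words from scratch after every character (the slow pattern)."""
    output = ""
    word_count = 0
    for ch in chars:
        output += ch
        word_count = len(output.split())  # O(len(output)) on every iteration
    return word_count

def count_incrementally(chars):
    """Track the word count as characters arrive: O(1) per character."""
    word_count = 0
    in_word = False
    for ch in chars:
        if ch.isspace():
            in_word = False
        elif not in_word:
            in_word = True
            word_count += 1
    return word_count

text = "the quick brown fox jumps over the lazy dog "
assert count_by_resplitting(text) == count_incrementally(text) == 9
```

Both agree on the final count, but the first does quadratic total work over the length of the output, which is exactly why the profiled run spent 50 seconds inside get_randomwriter_text itself.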
https://wiki.python.org/moin/ChadCrabtree?action=fullsearch&context=180&value=linkto%253A%2522ChadCrabtree%2522
CC-MAIN-2018-05
refinedweb
2,026
68.77
in reply to Re: Re: Re^2: Private method variations in thread Private method variations Gee, you can have scalars and arrays and hashes with (almost) the same name too. Somehow people manage to keep those straight... Really, I think it's a much worse problem to put public and private methods into the same namespace, because then not only do you have to keep the names straight, but anyone who inherits from you has to keep them straight. Private names should not show up in the public interface at all. Even within the class, it's vitally important to be able to distinguish when you're calling yourself privately from when you're calling yourself through the public interface. C++ style rules just make things completely ambiguous, visually speaking. Drives me
http://www.perlmonks.org/?node_id=333125
This holds for both versions 1.4 & 2. When Python code has an annotation for a function's return type, the syntax highlighting of the following code is all broken. For example:

    def foo( value: int ) -> int:
        return value + 2
    # all syntax highlighting broken below this line
    ...

This still seems to be the case... I just registered and was about to make a post about it. I find this really annoying. Hope they will fix it soon!

2 years have passed. Any news on this issue?

Here's a wrinkle. In ST3, if I do this, syntax highlighting is only broken for the next function header. After that it works correctly. The part of the syntax that is causing problems is the -> int. Hopefully that will give some hints as to what in the highlighting engine is breaking.

I'd recommend using another Python syntax file, such as github.com/facelessuser/sublime ... tmLanguage
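For reference, the construct that trips the highlighter is ordinary Python 3 syntax. The sketch below (my own, not from the thread) shows that return annotations are inert metadata at runtime — they are recorded on the function object and change nothing about execution, which is why only the syntax highlighter, not the interpreter, has trouble with them:

```python
def foo(value: int) -> int:
    """Annotations are stored, not enforced, at runtime."""
    return value + 2

# The annotations are recorded on the function object itself.
assert foo.__annotations__["return"] is int
assert foo.__annotations__["value"] is int
assert foo(40) == 42
```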
https://forum.sublimetext.com/t/unrecognized-python-3-code-in-syntax-highlighting/4161
Compare the Content of Two Files in Java

Last modified: September 29, 2021

1. Overview

In this tutorial, we'll review different approaches to determine if the contents of two files are equal. We'll be using core Java Stream I/O libraries to read the contents of the files and implement basic comparisons. To finish, we'll review the support provided in Apache Commons I/O to check for content equality of two files.

2. Byte by Byte Comparison

Let's start with a simple approach to reading the bytes from the two files to compare them sequentially. To speed up reading the files, we'll use BufferedInputStream. As we'll see, BufferedInputStream reads large chunks of bytes from the underlying InputStream into an internal buffer. When the client reads all the bytes in the chunk, the buffer reads another block of bytes from the stream. Obviously, using BufferedInputStream is much faster than reading one byte at a time from the underlying stream.

Let's write a method that uses BufferedInputStreams to compare two files:

public static long filesCompareByByte(Path path1, Path path2) throws IOException {
    try (BufferedInputStream fis1 = new BufferedInputStream(new FileInputStream(path1.toFile()));
         BufferedInputStream fis2 = new BufferedInputStream(new FileInputStream(path2.toFile()))) {
        int ch = 0;
        long pos = 1;
        while ((ch = fis1.read()) != -1) {
            if (ch != fis2.read()) {
                return pos;
            }
            pos++;
        }
        if (fis2.read() == -1) {
            return -1;
        } else {
            return pos;
        }
    }
}

We use the try-with-resources statement to ensure that the two BufferedInputStreams are closed at the end of the statement. With the while loop, we read each byte of the first file and compare it with the corresponding byte of the second file. If we find a discrepancy, we return the byte position of the mismatch. Otherwise, the files are identical and the method returns -1L.
We can see that if the files are of different sizes but the bytes of the smaller file match the corresponding bytes of the larger file, then it returns the size in bytes of the smaller file.

3. Line by Line Comparison

To compare text files, we can do an implementation that reads the files line by line and checks for equality between them. Let's work with a BufferedReader that uses the same strategy as BufferedInputStream, copying chunks of data from the file to an internal buffer to speed up the reading process.

Let's review our implementation:

public static long filesCompareByLine(Path path1, Path path2) throws IOException {
    try (BufferedReader bf1 = Files.newBufferedReader(path1);
         BufferedReader bf2 = Files.newBufferedReader(path2)) {
        long lineNumber = 1;
        String line1 = "", line2 = "";
        while ((line1 = bf1.readLine()) != null) {
            line2 = bf2.readLine();
            if (line2 == null || !line1.equals(line2)) {
                return lineNumber;
            }
            lineNumber++;
        }
        if (bf2.readLine() == null) {
            return -1;
        } else {
            return lineNumber;
        }
    }
}

The code follows a similar strategy as the previous example. In the while loop, instead of reading bytes, we read a line of each file and check for equality. If all the lines are identical for both files, then we return -1L, but if there's a discrepancy, we return the line number where the first mismatch is found. If the files are of different sizes but the smaller file matches the corresponding lines of the larger file, then it returns the number of lines of the smaller file.

4. Comparing with Files::mismatch

The method Files::mismatch, added in Java 12, compares the contents of two files. It returns -1L if the files are identical, and otherwise, it returns the position in bytes of the first mismatch. This method internally reads chunks of data from the files' InputStreams and uses Arrays::mismatch, introduced in Java 9, to compare them.
As with our first example, for files that are of different sizes but for which the contents of the small file are identical to the corresponding contents in the larger file, it returns the size (in bytes) of the smaller file. To see examples of how to use this method, please see our article covering the new features of Java 12.

5. Using Memory Mapped Files

A memory-mapped file is a kernel object that maps the bytes from a disk file to the computer's memory address space. The heap memory is circumvented, as the Java code manipulates the contents of the memory-mapped files as if we're directly accessing the memory. For large files, reading and writing data from memory-mapped files is much faster than using the standard Java I/O library. It's important that the computer has an adequate amount of memory to handle the job to prevent thrashing.

Let's write a very simple example that shows how to compare the contents of two files using memory-mapped files:

public static boolean compareByMemoryMappedFiles(Path path1, Path path2) throws IOException {
    try (RandomAccessFile randomAccessFile1 = new RandomAccessFile(path1.toFile(), "r");
         RandomAccessFile randomAccessFile2 = new RandomAccessFile(path2.toFile(), "r")) {
        FileChannel ch1 = randomAccessFile1.getChannel();
        FileChannel ch2 = randomAccessFile2.getChannel();
        if (ch1.size() != ch2.size()) {
            return false;
        }
        long size = ch1.size();
        MappedByteBuffer m1 = ch1.map(FileChannel.MapMode.READ_ONLY, 0L, size);
        MappedByteBuffer m2 = ch2.map(FileChannel.MapMode.READ_ONLY, 0L, size);
        return m1.equals(m2);
    }
}

The method returns true if the contents of the files are identical; otherwise, it returns false. We open the files using the RandomAccessFile class and access their respective FileChannel to get the MappedByteBuffer. This is a direct byte buffer that is a memory-mapped region of the file. In this simple implementation, we use its equals method to compare in memory the bytes of the whole file in one pass.

6.
Using Apache Commons I/O

The methods IOUtils::contentEquals and IOUtils::contentEqualsIgnoreEOL compare the contents of two files to determine equality. The difference between them is that contentEqualsIgnoreEOL ignores line feed (\n) and carriage return (\r). The motivation for this is that operating systems use different combinations of these control characters to define a new line.

Let's see a simple example to check for equality:

@Test
public void whenFilesIdentical_thenReturnTrue() throws IOException {
    Path path1 = Files.createTempFile("file1Test", ".txt");
    Path path2 = Files.createTempFile("file2Test", ".txt");

    InputStream inputStream1 = new FileInputStream(path1.toFile());
    InputStream inputStream2 = new FileInputStream(path2.toFile());

    Files.writeString(path1, "testing line 1" + System.lineSeparator() + "line 2");
    Files.writeString(path2, "testing line 1" + System.lineSeparator() + "line 2");

    assertTrue(IOUtils.contentEquals(inputStream1, inputStream2));
}

If we want to ignore newline control characters but otherwise check for equality of the contents:

@Test
public void whenFilesIdenticalIgnoreEOF_thenReturnTrue() throws IOException {
    Path path1 = Files.createTempFile("file1Test", ".txt");
    Path path2 = Files.createTempFile("file2Test", ".txt");

    Files.writeString(path1, "testing line 1 \n line 2");
    Files.writeString(path2, "testing line 1 \r\n line 2");

    Reader reader1 = new BufferedReader(new FileReader(path1.toFile()));
    Reader reader2 = new BufferedReader(new FileReader(path2.toFile()));

    assertTrue(IOUtils.contentEqualsIgnoreEOL(reader1, reader2));
}

7. Conclusion

In this article, we've covered several ways to implement a comparison of the contents of two files to check for equality. The source code can be found over on GitHub.
https://www.baeldung.com/java-compare-files
3rd International Linux Audio Conference
April 21 – 24, 2005
ZKM | Zentrum für Kunst und Medientechnologie
Karlsruhe, Germany

Published by ZKM | Zentrum für Kunst und Medientechnologie, Karlsruhe, Germany
April, 2005
All copyright remains with the authors

Content

Preface ... 5
Staff ... 6

Thursday, April 21, 2005 – Lecture Hall
11:45 AM  Peter Brinkmann: MidiKinesis – MIDI controllers for (almost) any purpose ... 9
01:30 PM  Victor Lazzarini: Extensions to the Csound Language: from User-Defined to Plugin Opcodes and Beyond ... 13
02:15 PM  Albert Gräf: Q: A Functional Programming Language for Multimedia Applications ... 21
03:00 PM  Stéphane Letz, Dominique Fober and Yann Orlarey: jackdmp: Jack server for multi-processor machines ... 29
03:45 PM  John ffitch: On The Design of Csound5 ... 37
04:30 PM  Pau Arumí and Xavier Amatriain: CLAM, an Object Oriented Framework for Audio and Music ... 43

Friday, April 22, 2005 – Lecture Hall
11:00 AM  Ivica Ico Bukvic: "Made in Linux" – The Next Step ... 51
11:45 AM  Christoph Eckert: Linux Audio Usability Issues ...
57
01:30 PM  Marije Baalman: Updates of the WONDER software interface for using Wave Field Synthesis ... 69
02:15 PM  Georg Bönn: Development of a Composer's Sketchbook ... 73

Saturday, April 23, 2005 – Lecture Hall
11:00 AM  Jürgen Reuter: SoundPaint – Painting Music ... 79
11:45 AM  Michael Schüepp, Rene Widtmann, Rolf "Day" Koch and Klaus Buchheim: System design for audio record and playback with a computer using FireWire ... 87
01:30 PM  John ffitch and Tom Natt: Recording all Output from a Student Radio Station ... 95

LAC2005 3

02:15 PM  Nicola Bernardini, Damien Cirotteau, Free Ekanayaka and Andrea Glorioso: AGNULA/DeMuDi – Towards GNU/Linux audio and music ... 101
03:00 PM  Fernando Lopez-Lezcano: Surviving On Planet CCRMA, Two Years Later And Still Alive ... 109

Saturday, April 23, 2005 – Media Theater
11:00 AM  Julien Claassen: Linux As A Text-Based Studio ... 115
11:45 AM  Frank Eickhoff: "terminal rasa" – every music begins with silence ... 121
01:30 PM  Werner Schweer and Frank Neumann: The MusE Sequencer: Current Features and Plans for the Future ... 127
02:15 PM  Nasca Octavian Paul: ZynAddSubFX – an open source software synthesizer ... 131
03:00 PM  Tim Janik: Music Synthesis Under Linux ... 137

Sunday, April 24, 2005 – Lecture Hall
11:00 AM  Davide Fugazza and Andrea Glorioso: AGNULA Libre Music – Free Software for Free Music ... 141
11:45 AM  Dave Phillips: Where Are We Going And Why Aren't We There Yet? ... 147

Preface

We are very happy to welcome you to the 3rd International Linux Audio Conference. It takes place again at the ZKM | Institute for Music and Acoustics in Karlsruhe/Germany, on April 21-24, 2005. The "Call for Papers" which has resulted in the proceedings you hold in your hands has changed significantly compared to the previous LAC conferences because this time we were asking for elaborated papers rather than short abstracts only. We are very glad that in spite of this new hurdle we have received quite a lot of paper submissions and we are confident that many people will appreciate the efforts which the authors have put into them. Each paper has been reviewed by at least 2 experts. Many thanks go to the authors and reviewers for their great work! We hope that the 2005 conference will be as successful and stimulating as the previous ones and we wish you all a pleasant stay.

Frank Neumann and Götz Dipper
Organization Team LAC2005
Karlsruhe, April 2005

The International Linux Audio Conference 2005 is sponsored by

Staff

Organization Team LAC2005: Götz Dipper, Frank Neumann

ZKM: Marc Riedel, Jürgen Betker, Hartmut Bruckner, Ludger Brümmer, Uwe Faber, Hans Gass, Joachim Goßmann, Achim Heidenreich, Martin Herold, Martin Knötzele, Andreas Liefländer, Philipp Mattner, Alexandra Mössner, Caro Mössner, Chandrasekhar Ramakrishnan, Martina Riedler, Thomas Saur, Theresa Schubert, Joachim Schütze, Bernhard Sturm, Manuel Weber, Monika Weimer, Susanne Wurmnest

Organization of LAC2005, Concerts/Call for Music, Graphic Artist, Sound Engineer, Head of the Institute for Music and Acoustics, Head of the IT Department, Technical Assistant, Tonmeister, Event Management, Technical Assistant, Technical Assistant, Technical Assistant, Technical Assistant, Assistant of Management, Event Management, Software Developer, Head of the Event Department, Sound Engineer, Event Management, IT Department, Production Engineer, Technical Director of the Event Department, Event Management, Event Management

LAD: Matthias Nagorni, Jörn Nettingsmeier
Relay Servers: Marco d'Itri, Eric Dantan Rzewnicki
Chat Operator: Sebastian Raible
Icecast/Ices Support: Jan Gerber, Karl Heyes
SUSE LINUX Products GmbH, Coordination of the Internet Audio Stream, Italian Linux Society, Radio Free Asia, Washington, Xiph.org

Paper Reviews

Fons Adriaensen, Frank Barknecht, Ivica Ico Bukvic, Paul Davis, François Déchelle, Steve Harris, Jaroslav Kysela, Fernando Lopez-Lezcano, Jörn Nettingsmeier, Frank Neumann, Dave Phillips

Alcatel Space, Antwerp/Belgium; Deutschlandradio, Köln/Germany; University of Cincinnati, Ohio/USA; Linux Audio Systems, Pennsylvania/USA; France; University of Southampton, Hampshire/UK; SUSE LINUX Products GmbH, Czech Republic; CCRMA/Stanford University, California/USA; Folkwang-Hochschule Essen/Germany; Karlsruhe/Germany; Findlay, Ohio/USA

(In alphabetical
order)

MidiKinesis — MIDI controllers for (almost) any purpose

Peter Brinkmann
Technische Universität Berlin
Fakultät II – Mathematik und Naturwissenschaften
Institut für Mathematik
Sekretariat MA 3-2
Straße des 17. Juni 136
D-10623 Berlin
brinkman@math.tu-berlin.de

Abstract

MidiKinesis is a Python package that maps MIDI control change events to user-defined X events, with the purpose of controlling almost any graphical user interface using the buttons, dials, and sliders on a MIDI keyboard controller such as the Edirol PCR30. Key ingredients are Python modules providing access to the ALSA sequencer as well as the XTest standard extension.

Keywords

ALSA sequencer, MIDI routing, X programming, Python

1 Introduction

When experimenting with Matthias Nagorni's AlsaModularSynth, I was impressed with its ability to bind synth parameters to MIDI events on the fly. This feature is more than just a convenience; the ability to fine-tune parameters without having to go back and forth between the MIDI keyboard and the console dramatically increases productivity because one can adjust several parameters almost simultaneously, without losing momentum by having to navigate a graphical user interface. Such a feature can make the difference between settling for a "good enough" choice of parameters and actually finding the "sweet spot" where everything sounds just right. Alas, a lot of audio software for Linux does not expect any MIDI input, and even in programs that can be controlled via MIDI, the act of setting up a MIDI controller tends to be less immediate than the elegant follow-and-bind approach of AlsaModularSynth. I set out to build a tool that would map MIDI control change events to GUI events, and learn new mappings on the fly. The result was MidiKinesis, the subject of this note. MidiKinesis makes it possible to control almost any graphical user interface from a MIDI controller keyboard.

2 Basics

Before implementing MidiKinesis, I settled on the following basic decisions:

• MidiKinesis will perform all its I/O through the ALSA sequencer (in particular, no reading from/writing to /dev/midi*), and it will act on graphical user interfaces by creating plain X events.
• MidiKinesis will be implemented in Python (Section 8), with extensions written in C as necessary. Dependencies on nonstandard Python packages should be minimized.

Limiting the scope to ALSA and X effectively locks MidiKinesis into the Linux platform (it might work on a Mac running ALSA as well as X, but this seems like a far-fetched scenario), but I decided not to aim for portability because MidiKinesis solves a problem that's rather specific to Linux audio.1 The vast majority of Mac or Windows users will do their audio work with commercial tools like Cubase or Logic, and their MIDI support already is as smooth as one could possibly hope. The benefit of limiting MidiKinesis in this fashion is a drastic simplification of the design; at the time of this writing, MidiKinesis only consists of about 2000 lines of code.

When I started thinking about this project, my first idea was to query various target programs in order to find out what widgets their user interfaces consist of, but this approach turned out to be too complicated. Ultimately, it would have required individual handling of toolkits like GTK, Qt, Swing, etc., and in many cases the required information would not have been forthcoming. So, I decided to settle for the lowest common denominator — MidiKinesis directly operates on the X Window System, using the XTest standard extension to generate events. I expected this approach to be somewhat fragile as well as tedious to calibrate, but in practice it works rather well (Section 5).

1 I did put a thin abstraction layer between low-level implementation details and high-level application code (Section 3), so that it is theoretically possible to rewrite the low-level code for Windows or Macs without breaking the application code.
For the purposes of MidiKinesis, Python seemed like a particularly good choice because it is easy to extend with C (crucial for hooking into the ALSA sequencer and X) and well suited for rapidly building MIDI filters, GUIs, etc. Python is easily fast enough to deal with MIDI events in real time, so that performance is not a concern in this context. On top of Python, MidiKinesis uses Tkinter (the de facto standard toolkit for Python GUIs, included in many Linux distributions), and ctypes (Section 8) (not a standard module, but easy to obtain and install).

3 The bottom level

At the lowest level, the modules pyseq.py and pyrobot.py provide access to the ALSA sequencer and the XTest library, respectively. There are many ways of extending Python with C, such as the Python/C API, Boost.Python, automatic wrapper generators like SIP or SWIG, and hybrid languages like pyrex. In the end, I chose ctypes because of its ability to create and manipulate C structs and unions. This is crucial for working with ALSA and X since both rely on elaborate structs and unions (snd_seq_event_t and XEvent) for passing events.

3.1 Accessing the ALSA sequencer

The module pyseq.py defines a Python shadow class that provides access to the full sequencer event struct of ALSA (snd_seq_event_t). Moreover, the file pyseq.c provides a few convenience functions, most importantly midiLoop(...), which starts a loop that waits for MIDI events, and calls a Python callback function when a MIDI event comes in. The module pyseq.py also provides an abstraction layer that protects application programmers from such implementation details. To this end, pyseq.py introduces the following classes:

PySeq manages ALSA sequencer handles, and it provides methods for creating MIDI ports, sending MIDI events, etc. Application programmers will subclass PySeq and override the methods init (called by the constructor) and callback (called when a MIDI event arrives at an input port of the corresponding sequencer).

MidiThread is a subclass of threading.Thread that provides support for handling incoming MIDI events in a separate thread. An instance of MidiThread keeps a pointer to an instance of PySeq whose callback method is called when a MIDI event comes in.

Using this class structure, a simple MIDI filter might look like this:

    from pyseq import *

    class MidiTee(PySeq):
        def init(self, *args):
            self.createInPort()
            self.out=self.createOutPort()
        def callback(self, ev):
            print ev
            self.sendEvent(ev, self.out)
            return 1

    seq=MidiTee('miditee')
    t=MidiThread(seq)
    t.start()
    raw_input('press enter to finish')

This filter acts much like the venerable tee command. It reads MIDI events from its input port and writes them verbatim to its output port, and it prints string representations of MIDI events to the console. The last line is necessary because instances of MidiThread are, by default, daemon threads. Once started, an instance of MidiThread will spend most of its time in the C function midiLoop, which in turn spends most of its time waiting for events in a poll(...) system call. In other words, instances of MidiThread hardly put any strain on the CPU at all.

3.2 Capturing and sending X events

Structurally, the module pyrobot.py is quite similar to pyseq.py. It uses ctypes to create a Python shadow class for the XEvent union of X, and it introduces a simple abstraction layer that protects application programmers from such details. The main classes are as follows:

PyRobot is named after Java's java.awt.Robot. It provides basic functionality for capturing and sending X events.
It uses ctypes to create a Python shadow class for the XEvent union of X, and it introduces a simple abstraction layer that protects application programmers from such details. The main classes are as follows: PyRobot is named after Java’s java.awt.Robot. It provides basic functionality for capturing and sending X events. 3 The bottom level At the lowest level, the modules pyseq.py and pyrobot.py provide access to the ALSA sequencer and the XTest library, respectively. There are many ways of extending Python with C, such as the Python/C API, Boost.Python, automatic wrapper generators like SIP or SWIG, and hybrid languages like pyrex. In the end, I chose ctypes because of its ability to create and manipulate C structs and unions. This is crucial for working with ALSA and X since both rely on elaborate structs and unions (snd seq event t and XEvent) for passing events. 3.1 Accessing the ALSA sequencer The module pyseq.py defines a Python shadow class that provides access to the full sequencer event struct of ALSA (snd seq event t). Moreover, the file pyseq.c provides a few convenience functions, most importantly midiLoop(...), which starts a loop that waits for MIDI events, and calls a Python callback function when a MIDI event comes in. The module pyseq.py also provides an abstraction layer that protects application programmers from such implementation details. To this end, pyseq.py introduces the following classes: PySeq manages ALSA sequencer handles, and it provides methods for creating MIDI ports, sending MIDI events, etc. Application programmers will subclass PySeq and override the methods init (called by the LAC2005 10 Script uses PyRobot to record and play sequences (scripts) of mouse and keyboard events. The following code records a sequence of mouse and keyboard events and replays it. 
    from pyrobot import *

    R=PyRobot()
    S=Script()
    print 'record script, press Escape'
    S.record(R)
    raw_input('press enter to replay')
    S.play(R)

4 The top level

The module midikinesis.py is the main application of the package. It waits for incoming MIDI control change events. If it receives a known event, it triggers the appropriate action. Otherwise, it asks the user to assign one of six possible mappings to the current event:

Button maps a button on the MIDI keyboard to a sequence of X events (Section 3.2) that the user records when setting up this sort of mapping. This sequence will typically consist of mouse motions, clicks, and keystrokes, but mouse dragging events are also admissible. It is also possible to assign Button mappings to dials and sliders on the keyboard. To this end, one defines a threshold value between 0 and 127 (64 is the default) and chooses whether a rising edge or a falling edge across the threshold is to trigger the recorded sequence of X events.

Slider maps MIDI events from a slider or dial on the MIDI keyboard to mouse dragging events along a linear widget (such as a linear volume control or scrollbar). For calibration purposes, midikinesis.py asks the user to click on the bottom, top, and current location of the widget. Hydrogen and jamin are examples of audio software with widgets of this kind. The button used to click on the bottom location determines the button used for dragging.

Selection divides the range of controller values (0 . . . 127) into brackets, with each bracket corresponding to a mouse click on a user-defined location on the screen. It primarily targets radio buttons.

Counter handles counter widgets whose values are changed by repeatedly clicking on up/down arrows. Specimen uses widgets of this kind. Counter mappings are calibrated by clicking on the location of the up/down arrows and by specifying a step size, i.e., the number of clicks that a unit change of the controller value corresponds to.

Circular Dial behaves much like Slider, except it drags the mouse in a circular motion. amSynth has dials that work in this fashion.

Linear Dial sounds like an oxymoron, but the name merely reflects the dual nature of the widgets that it targets. To wit, there are widgets that look like dials on the screen, but they get adjusted by pressing the mouse button on the widget and dragging the mouse up or down in a linear motion. Rosegarden4, ZynAddSubFX, and Hydrogen all have widgets of this kind.

These six mappings cover most widgets that commonly occur in Linux audio software. Button mappings are probably the most general as well as the least obvious feature. Here is a simple application of some Button mappings, using Rosegarden4:

• Map three buttons on the MIDI keyboard to mouse clicks on Start, Stop, and Record in Rosegarden4.
• Map a few more buttons to mouse clicks on different tracks in Rosegarden4, followed by the Delete key.

Pressing one of the latter buttons will activate and clear a track, so that it is possible to record a piece consisting of several tracks (and to record many takes of each track) without touching the console after the initial calibration.
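To make the Slider calibration concrete: mapping a 7-bit controller value onto the pixel span between the recorded bottom and top clicks is plain linear interpolation. The function below is my own back-of-the-envelope sketch, not code from the MidiKinesis sources:

```python
def slider_to_pixel(cc_value, bottom_y, top_y):
    """Map a MIDI control change value (0..127) to a pixel y-coordinate
    between the calibrated bottom and top of an on-screen slider."""
    if not 0 <= cc_value <= 127:
        raise ValueError("MIDI controller values are 7-bit: 0..127")
    # Screen y-coordinates grow downward, so controller value 0 maps to
    # bottom_y and value 127 maps to top_y.
    return round(bottom_y + (top_y - bottom_y) * cc_value / 127)

# A slider calibrated with its bottom at y=400 and top at y=200:
assert slider_to_pixel(0, 400, 200) == 400
assert slider_to_pixel(127, 400, 200) == 200
```

The actual tool would then synthesize a mouse-drag to that coordinate via XTest; the arithmetic above is the whole of what the calibration clicks have to pin down.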
• The notion of reference points also makes it possible to save mappings to a file and restore them later.

• It helps to start one instance of MidiKinesis for each target window, each with its individual reference point. This will resolve most window placement issues. In order to make this work, one launches and calibrates them individually. When an instance of MidiKinesis has been set up, one tells it to ignore unknown events and moves on to the next instance. Like this, instances of MidiKinesis won’t compete for the same MIDI events.

• Instances of MidiKinesis only accept events whose channel and parameter match certain patterns.2 By choosing different patterns for different instances of MidiKinesis, one keeps them from interfering with each other, while each still accepts new mappings.

6 Fringe benefits

Using the module pyseq.py, one can rapidly build MIDI filters. I even find the simple MidiTee example from Section 3.1 useful for eavesdropping on conversations between MIDI devices (tools like amidi serve a similar purpose, but I like being able to adjust the output format on the fly, depending on what I’m interested in). One of the first applications I built on top of pyseq.py was a simple bulk dump handler for my MIDI keyboard. It shouldn’t be much harder to build sophisticated configuration editors for various MIDI devices in a similar fashion. I was surprised to find out that there seemed to be no approximation of java.awt.Robot for Python and Linux before pyrobot.py, so that pyrobot.py might be useful in its own right, independently of audio applications.

7 Where to go from here

While the focus of MidiKinesis has squarely been on audio applications, it also opens the possibility of using MIDI events to control all kinds of software. For instance, when I was implementing Slider mappings, I used the vertical scrollbar of Firefox as a test case. In order to illustrate the basic idea of creative misuse of MIDI events, I implemented a simple game of Pong, controlled by a slider or dial on a MIDI keyboard. Generally speaking, a mouse is a notoriously sloppy input device, while MIDI controllers tend to be rather precise. So, a tool like MidiKinesis might be useful in a non-audio context requiring speed and accuracy. Finally, I feel that the applications of MidiKinesis that I have found so far barely scratch the surface. For instance, the ability to record and replay (almost) arbitrary sequences of X events has considerable potential beyond the examples that I have tried so far.

8 Resources

MidiKinesis is available at http: // software/midikinesis/midikinesis.tgz. MidiKinesis requires Python, Tkinter, and ctypes, as well as an X server that supports the XTest standard extension. Python is a powerful scripting language, available at. Chances are that you are using a Linux distribution that already has Python installed. Tkinter is the de facto standard toolkit for Python GUIs, available at. python.org/moin/TkInter. It is included in many popular Linux distributions. ctypes is a package to create and manipulate C data types in Python, and to call functions in shared libraries. It is available at. net/crew/theller/ctypes/.

9 Acknowledgements

Special thanks go to Holger Pietsch for getting me started on the X programming part of the project, and to the members of the LAU and LAD mailing lists for their help and support.

2 By default, MidiKinesis accepts events from General Purpose Controllers on any channel.

Extensions to the Csound Language: from User-Defined to Plugin Opcodes and Beyond

Victor Lazzarini
Music Technology Laboratory
National University of Ireland, Maynooth
Victor.Lazzarini@nuim.ie

Abstract

This article describes the latest methods of extending the csound language. It discusses these methods in relation to the two currently available versions of the system, 4.23 and 5.
After an introduction on basic aspects of the system, it explores the methods of extending it using facilities provided by the csound language itself, using user-defined opcodes. The mechanism of plugin opcodes and function table generation is then introduced as an external means of extending csound. Complementing this article, the fsig signal framework is discussed, focusing on its support for the development of spectral-processing opcodes.

Keywords: Computer Music, Music Processing Languages, Application Development, C / C++ Programming

1 Introduction

The csound (Vercoe 2004) music programming language is probably the most popular of the text-based audio processing systems. Together with cmusic (Moore 1990), it was one of the first modern C-language-based portable sound compilers (Pope 1993), but unlike it, it was adopted by composers and developers world-wide and it continued to develop into a formidable tool for sound synthesis, processing and computer music composition. This was probably due to the work of John Ffitch and others, who coordinated a large developer community that was ultimately responsible for the constant upgrading of the system. In addition, the work of composers and educators, such as Richard Boulanger, Dave Phillips and many others, supported the expansion of its user base, which has also been instrumental in pushing for new additions and improvements. In summary, csound can be seen as one of the best examples of music open-source software development, whose adoption has transcended a pool of expert users, filtering into a wider music community.

The constant development of csound has been partly fuelled by the existence of a simple opcode API (Ffitch 2000) (Resibois 2000), which is easy to understand, providing a good, if basic, support for unit generator addition. This was, for many years, the only direct means of extending csound for those who were not prepared to learn the inside details of the code. In addition, the only way of adding new unit generators to csound was to include them in the system source code and rebuild the system, as there was no support for dynamically-loadable components (csound being from an age when these concepts had not entered mainstream software development). Since then, there have been some important new developments in the language and the software in general, providing extra support for extensions. These include the possibility of language extension both in terms of C/C++-language loadable modules and in csound’s own programming language. Another important development has been the availability of a more complete C API (Goggins et al 2004), which can be used to instantiate and control csound from a calling process, opening the door for the separation of language and processing engine.

2 Csound versions

Currently there are two parallel versions of the so-called canonical csound distribution: csound 4.23, which is a code-freeze version from 2002, and csound 5, a re-modelled system, still in beta stage of development. The developments mentioned in the introduction are present in csound 4.23, but have been further expanded in version 5. In this system, apart from the core opcodes, most of the unit generators are now in loadable library modules, and further opcode additions should be in that format. The plugin opcode mechanism is already present in version 4.23, although some differences exist between opcode formats for the two versions. These are mainly to do with arguments to functions and return types. There is also now a mechanism for dynamic-library function tables and an improved/expanded csound API. Other changes brought about in csound 5 are the move to the use of external libraries for soundfile, audio IO and MIDI. Csound 4.23 is the stable version of csound, so at this moment, it would be the recommended one for general use and, especially, for new users.
Most of the mechanisms of language extension and unit generator development discussed in this paper are supported by this version. For Linux users, a GNU building system-based source package is available for this version, making it simple to configure and install the program on most distributions. It is important to also note that csound 5 is fully operational, although with a number of issues still to be resolved. It indeed can be used by anyone; nevertheless, we would recommend it for more experienced users. However, user input is crucial to csound 5 development, so the more users adopting the new version, the better for its future.

3 Extending the language

As mentioned earlier, csound has mechanisms for the addition of new components both by writing code in the csound language itself and by writing C/C++ language modules. This section will concentrate on csound language-based development, which takes the basic form of user-defined opcodes. Before examining these, a quick discussion of csound data types, signals and performance characteristics is offered.

3.1 Data types and signals

The csound language provides three basic data types: i-, k- and a-types. The first is used for initialisation variables, which will assume only one value in performance, so once set, they will usually remain constant throughout the instrument code. The other types are used to hold scalar (k-type) and vectorial (a-type) variables. The first will hold a single value, whereas the second will hold an array of values (a vector); internally, each value is a floating-point number, either 32- or 64-bit, depending on the version used. A csound instrument code can use any of these variables, but opcodes might accept specific types as input and will generate data in one of those types. This implies that opcodes will execute at a certain update rate, depending on the output type (Ekman 2000). This can be at the audio sampling rate (sr), the control rate (kr) or only at initialisation time.

Another important aspect is that csound instrument code effectively has a hidden processing loop, running at the control rate and affecting (updating) only control and audio signals. An instrument will execute its code lines in that loop until it is switched off. Under this loop, audio variables, holding a block of samples equivalent to sr/kr (ksmps), will have their whole vector updated every pass of the loop:

instr 1 /* start of the loop */
iscl = 0.5 /* i-type, not affected by the loop */
asig in /* copies ksmps samples from input buffer into asig */
atten = asig*iscl /* scales every sample of asig with iscl */
out atten /* copies ksmps samples from atten into output buffer */
endin /* end of the loop */

This means that code that requires sample-by-sample processing, such as delays that are smaller than one control period, will require setting the a-rate vector size, ksmps, to 1, making kr=sr. This will have a detrimental effect on performance, as the efficiency of csound depends a lot on the use of different control and audio rates.

3.2 User-defined opcodes

The basic method of adding unit generators in the csound language is provided by the user-defined opcode (UDO) facility, added by Istvan Varga to csound 4.22. The definition for a UDO is given using the keywords opcode and endop, in a similar fashion to instruments:

opcode NewUgen a,aki /* defines an a-rate opcode, taking a, k and i-type inputs */
endop

The number of allowed input argument types is close to what is allowed for C-language opcodes. All p-field values are copied from the calling instrument. In addition to a-, k- and i-type arguments (and 0, meaning no inputs), which are audio, control and initialisation variables, we have: K, control-rate argument (with initialisation); plus o, p and j (optional arguments, i-type variables defaulting to 0, 1 and -1).
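The hidden control-rate loop can be simulated in a few lines. The sketch below is an illustration in Python, not csound code: it processes audio in blocks of ksmps samples per control pass, mirroring the instr 1 example, and shows that for pure per-sample scaling the block size does not change the result.

```python
def run_instrument(signal, ksmps, iscl=0.5):
    """Simulate csound's hidden control-rate loop: each pass
    updates one ksmps-sized block of the audio vector."""
    out = []
    for k in range(0, len(signal), ksmps):   # one pass per control period
        asig = signal[k:k + ksmps]           # 'asig in'
        atten = [s * iscl for s in asig]     # 'atten = asig*iscl'
        out.extend(atten)                    # 'out atten'
    return out

sig = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
# Scaling is per-sample, so the block size does not matter here...
assert run_instrument(sig, ksmps=4) == run_instrument(sig, ksmps=1)
# ...but feedback or delay code shorter than one block would be
# affected, which is why sub-block delays need ksmps=1 (kr=sr).
```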
Output is permitted to be to any of a-, k- or i-type variables. Access to input and output is simplified through the use of a special pair of opcodes, xin and xout. UDOs will have one extra argument in addition to those defined in the declaration, the internal number of the a-signal vector samples, iksmps. This sets the value of a local control rate (sr/iksmps) and defaults to 0, in which case the iksmps value is taken from the caller instrument or opcode. The possibility of a different a-signal vector size (and different control rates) is an important aspect of UDOs. This enables users to write code that requires the control rate to be the same as the audio rate, without actually having to alter the global values for these parameters, thus improving efficiency. An opcode is also provided for setting the iksmps value to any given constant:

setksmps 1 /* sets a-signal vector to 1, making kr=sr */

The only caveat is that when the local ksmps value differs from the global setting, UDOs are not allowed to use global a-rate operations (global variable access, etc.). The example below implements a simple feed-forward filter, as an example of UDO use:

#define LowPass 0
#define HighPass 1
opcode NewFilter a,aki
setksmps 1 /* kr = sr */
asig,kcoef,itype xin
adel init 0
if itype == HighPass then
kcoef = -kcoef
endif
afil = asig + kcoef*adel
adel = asig /* 1-sample delay, only because kr = sr */
xout afil
endop

Another very important aspect of UDOs is that recursion is possible and only limited to available memory. This allows, for instance, the implementation of recursive filterbanks, both serial or parallel, and similar operations that involve the spawning of unit generators. The UDO facility has added great flexibility to the csound language, enabling the fast development of musical signal processing operations. In fact, an on-line UDO database has been made available by Steven Yin, holding many interesting new operations and utilities implemented using this facility (). This possibly will form the foundation for a complete csound-language-based opcode library.

3.3 Adding external components

Csound can be extended in a variety of ways by modifying its source code and/or adding elements to it. This is something that might require more than a passing acquaintance with its workings, as well as a rebuild of the software from its complete source code. However, the addition of unit generators and function tables is generally the most common type of extension to the system. So, to facilitate this, csound offers a simple opcode development API, from which new dynamically-loadable (‘plugin’) unit generators can be built. In addition, csound 5 also offers a similar mechanism for function tables. Opcodes can be written in the C or C++ language. In the latter, the opcode is written as a class derived from a template (‘pseudo-virtual’) base class OpcodeBase, whereas in the former, we normally supply a C module according to a basic description. The following sections will describe the process of adding an opcode in the C language. An alternative C++ class implementation would employ a similar method.

3.3.1 Plugin opcodes

C-language opcodes normally obey a few basic rules, and their development requires very little in terms of knowledge of the actual processes involved in csound. Plugin opcodes will have to provide three main programming components: a data structure to hold the opcode internal data, an initialising function or method, and a processing function or method. From an object-oriented perspective, all we need is a simple class, with its members, constructor and perform methods. The data structure for the same filter implemented in the previous sections is:

#include "csdl.h"
typedef struct _newflt {
OPDS h;
MYFLT *outsig; /* output pointer */
MYFLT *insig,*kcoef,*itype; /* input pointers */
MYFLT delay; /* internal variable, the 1-sample delay */
int mode; /* filter mode */
} newfilter;

The initialisation function is only there to initialise any data, such as the 1-sample delay, or allocate memory, if needed. The new plugin opcode model in csound 5 expects both the initialisation function and the perform function to return an int value, either OK or NOTOK. In addition, both methods now take two arguments: pointers to the csound environment and the opcode dataspace. In version 4.23, the opcode function will only take the pointer to the opcode dataspace as argument. The following example shows an initialisation function in csound 5 (all following examples are also targeted at that version):

int newfilter_init(ENVIRON *csound, newfilter *p){
p->delay = (MYFLT) 0;
p->mode = (int) *p->itype;
return OK;
}

To complete the source code, we fill an opcode registration structure, a static OENTRY array called localops, followed by the LINKAGE macro:

static OENTRY localops[] = {
{ "newfilter", S(newfilter), 5, "a", "aki",
(SUBR)newfilter_init, NULL,
(SUBR)newfilter_process }
};
LINKAGE

The OENTRY structure defines the details of the new opcode:
1. The opcode name (a string without any spaces).
2. The size of the opcode dataspace, set using the macro S(struct_name), in most cases; otherwise this is a code indicating that the opcode will have more than one implementation, depending on the type of input arguments.
3. An int code defining when the opcode is active: 1 is for i-time, 2 is for k-rate and 4 is for a-rate. The actual value is a combination of one or more of those. The value of 5 means active at i-time (1) and a-rate (4). This means that the opcode has an init function and an a-rate processing function.
4. A string defining the output type(s): a, k, s (either a or k), i, m (multiple output arguments), w or f (spectral signals).
5.
Same as above, for input types: a, k, s, i, w, f, o (optional i-rate, default to 0), p (opt, default to 1), q (opt, 10), v (opt, 0.5), j (opt, -1), h (opt, 127), y (multiple inputs, a-type), z (multiple inputs, k-type), Z (multiple inputs, alternating k- and a-types), m (multiple inputs, i-type), M (multiple inputs, any type) and n (multiple inputs, odd number of inputs, i-type).
6. The i-time function (init), cast to (SUBR).
7. The k-rate function.
8. The a-rate function.

The LINKAGE macro defines some functions needed for the dynamic loading of the opcode. This macro is present in the version 5 csdl.h, but not in 4.23 (in which case the functions need to be added manually):

#define LINKAGE long opcode_size(void) \
{ return sizeof(localops);} \
OENTRY *opcode_init(ENVIRON *xx) \
{ return localops;} \

The processing function implementation will depend on the type of opcode that is being created. For audio-rate opcodes, because it will be generating audio signal vectors, it will require an internal loop to process the vector samples. This is not necessary with k-rate opcodes, as we are dealing with scalar inputs and outputs, so the function has to process only one sample at a time. This means that, effectively, all processing functions are called every control period. The filter opcode is an audio-rate unit generator, so it will include the internal loop.
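The activity code in item 3 is a bitmask: bit 0 (value 1) for i-time, bit 1 (value 2) for k-rate, bit 2 (value 4) for a-rate. A tiny sketch (an illustration, not csound source) decodes such a code the way csound interprets the OENTRY field:

```python
def decode_activity(code):
    """Decode an OENTRY activity bitmask into the times at which
    the opcode runs: 1 = i-time, 2 = k-rate, 4 = a-rate."""
    times = []
    if code & 1:
        times.append("i-time")
    if code & 2:
        times.append("k-rate")
    if code & 4:
        times.append("a-rate")
    return times

# 5 = 1 + 4: an init function plus an a-rate processing function,
# exactly the combination used by the newfilter OENTRY above.
assert decode_activity(5) == ["i-time", "a-rate"]
assert decode_activity(3) == ["i-time", "k-rate"]
```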
int newfilter_process(ENVIRON *csound, newfilter *p){
int i;
/* signals in, out */
MYFLT *in = p->insig;
MYFLT *out = p->outsig;
/* control input */
MYFLT coef = *p->kcoef;
/* 1-sample delay */
MYFLT delay = p->delay;
MYFLT temp;
if(p->mode) coef = -coef;
/* processing loop */
for(i=0; i < ksmps; i++){
temp = in[i];
out[i] = in[i] + delay*coef;
delay = temp;
}
/* keep delayed sample for next time */
p->delay = delay;
return OK;
}

The plugin opcode is built as a dynamic module, and similar code can be used with both csound versions 4.23 and 5:

gcc -O2 -c opsrc.c -o opcode.o
ld -E --shared opcode.o -o opcode.so

However, due to differences in the interface, the binaries are not compatible, so they will need to be built specifically for one of the two versions. Another difference is that csound 5 will automatically load all opcodes in the directory set with the environment variable OPCODEDIR, whereas version 4.23 needs the flag --opcodelib=myopcode.so for loading a specific module.

3.3.2 Plugin function tables

A new type of dynamic module, which has been introduced in csound 5, is the dynamic function table generator (GEN). Similarly to opcodes, function table GENs were previously only included statically with the rest of the source code. It is now possible to provide them as dynamically loadable modules. This is a very recent feature, introduced by John Ffitch at the end of 2004, so it has not been extensively tested. The principle is similar to plugin opcodes, but the implementation is simpler. It is only necessary to provide the GEN routine that the function table implements. The example below shows the test function table, written by John Ffitch, implementing a hyperbolic tangent table:

#include "csdl.h"
#include <math.h>
void tanhtable(ENVIRON *csound, FUNC *ftp, FGDATA *ff)
{
/* the function table */
MYFLT *fp = ftp->ftable;
/* f-statement p5, the range */
MYFLT range = ff->e.p[5];
/* step is range/tablesize */
double step = (double) range/(ff->e.p[3]);
int i;
double x;
/* table-filling loop */
for(i=0, x=FL(0.0); i<ff->e.p[3]; i++, x+=step)
*fp++ = (MYFLT)tanh(x);
}

The GEN function takes three arguments: the csound environment dataspace, a function table pointer and a GEN info data pointer. The former holds the actual table, an array of MYFLTs, whereas the latter holds all the information regarding the table, e.g. its size and creation arguments. The FGDATA member e will hold a numeric array (p) with all p-field data passed from the score f-statement (or the ftgen opcode).

The structure NGFENS holds details on the function table GENs, in the same way as OENTRY holds opcode information. It contains a string name and a pointer to the GEN function. The localfgens array is initialised with these details and terminated with NULL data:

static NGFENS localfgens[] = {
{ "tanh", (void(*)(void))tanhtable},
{ NULL, NULL}
};

Dynamic GENs are numbered according to their loading order, starting from GEN 44 (there are 43 ‘internal’ GENs in csound 5).

#define S sizeof
static OENTRY *localops = NULL;
FLINKAGE

Since opcodes and function table GENs reside in the same directory and are loaded at the same time, setting the *localops array to NULL will avoid confusion as to what is being loaded. The FLINKAGE macro works in the same fashion as LINKAGE.

4 Spectral signals

As discussed above, Csound provides data types for control and audio, which are all time-domain signals. For spectral-domain processing, there are two separate signal types, ‘wsig’ and ‘fsig’. The former is a signal type introduced by Barry Vercoe to hold a special, non-standard, type of logarithmic frequency analysis data and is used with a few opcodes originally provided for manipulating this data type.
The latter is a self-describing data type designed by Richard Dobson to provide a framework for spectral processing, in what are called streaming phase vocoder processes (to differentiate them from the original csound phase vocoder opcodes). Opcodes for converting between time-domain audio signals and fsigs, as well as a few processing opcodes, were provided as part of the original framework by Dobson. In addition, support for a self-describing, portable, spectral file format, PVOCEX (Dobson 2002), has been added to csound, in the analysis utility program pvanal and with a file reader opcode. A library of processing opcodes, plus a spectral GEN, has been added to csound by this author. This section will explore the fsig framework, in relation to opcode development.

Fsig is a self-describing csound data type which will hold frames of DFT-based spectral analysis data. Each frame will contain the positive side of the spectrum, from 0 Hz to the Nyquist (inclusive). The framework was designed to support different spectral formats, but at the moment only an amplitude-frequency format is supported, which will hold pairs of floating-point numbers with the amplitude and frequency (in Hz) data for each DFT analysis channel (bin). This is probably the most musically meaningful of the DFT-based output formats and is generated by Phase Vocoder (PV) analysis. The fsig data type is defined by the following C structure:

typedef struct pvsdat {
/* framesize-2, DFT length */
long N;
/* number of frame overlaps */
long overlap;
/* window size */
long winsize;
/* window type: hamming/hanning */
int wintype;
/* format: cur. fixed to AMP:FREQ */
long format;
/* frame counter */
unsigned long framecount;
/* spectral sample is a 32-bit float */
AUXCH frame;
} PVSDAT;

The structure holds all the necessary data to describe the signal type: the DFT size (N), which will determine the number of analysis channels (N/2 + 1) and the frame size; the number of overlaps, or decimation, which will determine the analysis hopsize (N/overlaps); the size of the analysis window, generally the same as N; the window type, currently supporting PVS_WIN_HAMMING or PVS_WIN_HANN; the data format, currently only PVS_AMP_FREQ; a frame counter, for keeping track of processed frames; and finally the AUXCH structure which will hold the actual array of floats with the spectral data. The AUXCH structure and associated functions are provided by csound as a mechanism for dynamic memory allocation and are used whenever such an operation is required. A number of other utility functions are provided by the csound opcode API (in csdl.h), for operations such as loading, reading and writing files, accessing function tables, handling string arguments, etc. Two of these are used in the code below to provide simple error notification and handling (initerror() and perferror()).

A number of implementation differences exist between spectral and time-domain processing opcodes. The main one is that new output is only produced if a new input frame is ready to be processed. Because of this implementation detail, the processing function of a streaming PV opcode is actually registered as a k-rate routine. In addition, opcodes allocate space for their fsig frame outputs, unlike ordinary opcodes, which simply take floating-point buffers as input and output. The fsig dataspace is externally allocated, in similar fashion to audio-rate vectors and control-rate scalars; however, the DFT frame allocation is done by the opcode generating the signal. With that in mind, and observing that the type of data we are processing is frequency-domain, we can implement a spectral unit generator as an ordinary (k-rate) opcode. The following example is a frequency-domain version of the simple filter implemented in the previous sections:

#include "csdl.h"
#include "pstream.h" /* fsig definitions */
typedef struct _pvsnewfilter {
OPDS h;
/* output fsig, its frame needs to be allocated */
PVSDAT *fout;
PVSDAT *fin; /* input fsig */
/* other opcode args */
MYFLT *coef, *itype;
int mode; /* filter type */
unsigned long lastframe;
} pvsnewfilter;

int pvsnewfilter_init(ENVIRON *csound, pvsnewfilter *p)
{
long N = p->fin->N;
p->mode = (int) *p->itype;
/* this allocates an AUXCH struct, if non-existing */
if(p->fout->frame.auxp==NULL)
auxalloc((N+2)*sizeof(float), &p->fout->frame);
/* output fsig description */
p->fout->N = N;
p->fout->overlap = p->fin->overlap;
p->fout->winsize = p->fin->winsize;
p->fout->wintype = p->fin->wintype;
p->fout->format = p->fin->format;
p->fout->framecount = 1;
p->lastframe = 0;
/* check format */
if (!(p->fout->format==PVS_AMP_FREQ ||
p->fout->format==PVS_AMP_PHASE))
return initerror("wrong format\n");
/* initerror is a utility csound function */
return OK;
}

The opcode dataspace contains pointers to the output and input fsigs, as well as the k-rate coefficient and the internal variable that holds the filter mode. The init function has to allocate space for the output fsig DFT frame, using the csound opcode API function auxalloc(), checking first that it is not already there.
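The frequency-domain version of the filter works by scaling each bin's amplitude with the magnitude response of the time-domain filter y[n] = x[n] + coef*x[n-1], which is |1 + coef*e^(-jw)| = sqrt(1 + coef^2 + 2*coef*cos(w)). A quick check (plain Python, for illustration only) confirms the identity used in the bin-scaling loop:

```python
import cmath
import math

def response_direct(coef, w):
    """|H(w)| of y[n] = x[n] + coef*x[n-1], evaluated from the
    transfer function H(z) = 1 + coef*z^-1 at z = e^{jw}."""
    return abs(1 + coef * cmath.exp(-1j * w))

def response_closed_form(coef, w):
    """The closed form used for the amplitude scaling of each bin."""
    return math.sqrt(1 + coef * coef + 2 * coef * math.cos(w))

for coef in (0.5, -0.5, 0.9):
    for k in range(9):
        w = math.pi * k / 8  # bin centre frequencies from 0 to Nyquist
        assert abs(response_direct(coef, w) -
                   response_closed_form(coef, w)) < 1e-12
```

With coef > 0 the response peaks at w = 0 (lowpass); negating coef, as the itype flag does, mirrors the response toward the Nyquist frequency (highpass).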
int pvsnewfilter_process(ENVIRON *csound, pvsnewfilter *p)
{
long i, N = p->fout->N;
MYFLT cosw, pon;
MYFLT coef = *p->coef;
float *fin = (float *) p->fin->frame.auxp;
float *fout = (float *) p->fout->frame.auxp;
if(fout==NULL)
return perferror("not initialised\n");
/* perferror is a utility csound function */
if(p->mode) coef = -coef;
/* if a new input frame is ready */
if(p->lastframe < p->fin->framecount) {
/* process the input, filtering */
pon = pi/N; /* pi is global */
for(i=0; i < N+2; i+=2) {
cosw = cos(i*pon);
/* amps */
fout[i] = fin[i] * sqrt(1+coef*coef+2*coef*cosw);
/* freqs: unchanged */
fout[i+1] = fin[i+1];
}
/* update the framecount */
p->fout->framecount = p->lastframe = p->fin->framecount;
}
return OK;
}

The processing function keeps track of the frame count and only processes the input, generating a new output frame, if a new input is available. The framecount is generated by the analysis opcode and is passed from one processing opcode to the next in the chain. As mentioned before, the processing function is called every control period, but it is independent of it, only performing when needed. The only caveat is that the fsig framework requires the control period in samples (ksmps) to be smaller than or equal to the analysis hopsize. Finally, the localops OENTRY structure for this opcode will look like this:

static OENTRY localops[] = {
{"pvsnewfilter", S(pvsnewfilter), 3, "f", "fkp",
(SUBR)pvsnewfilter_init,
(SUBR)pvsnewfilter_process}
};

From the above, it is clear to see that the new opcode is called pvsnewfilter and its implementation is made of i-time and k-rate functions. It takes fsig, ksig and one optional i-time argument, and it outputs fsig data.

5 Conclusion

Csound is regarded as one of the most complete synthesis and processing languages in terms of its unit generator collection. The introduction of UDOs, plugin opcode and function table mechanisms, as well as a self-describing spectral signal framework, has opened the way for further expansion of the language. These methods provide simpler and quicker ways for customisation. In fact, one of the goals of csound 5 is to enhance the possibilities of extension and integration of the language/processing engine into other systems. It is therefore expected that the developments discussed in this article are but only the start of a new phase in the evolution of csound.

6 References

Richard Dobson. 2000. PVOCEX: File format for Phase Vocoder data, based on WAVE FORMAT EXTENSIBLE. .. html.
Rasmus Ekman. 2000. Csound Control Flow.
John Ffitch. 2000. Extending Csound. In R. Boulanger, editor, The Csound Book. Cambridge, Mass.: MIT Press.
Michael Goggins et al. 2004. The Csound API. d_8h.html.
F. Richard Moore. 1990. Elements of Computer Music. Englewood Cliffs, NJ: Prentice-Hall.
Stephen T. Pope. 1993. Machine Tongues XV: Three Packages for Software Sound Synthesis. Computer Music Journal 17 (2).
Mark Resibois. 2000. Adding New Unit Generators to Csound. In R. Boulanger, editor, The Csound Book. Cambridge, Mass.: MIT Press.
Barry Vercoe. 2004. The Csound and VSTCsound Reference Manual. und5/csound.pdf.

Q: A Functional Programming Language for Multimedia Applications

Albert Gräf
Department of Music-Informatics
Johannes Gutenberg University
55099 Mainz, Germany
ag@muwiinfa.geschichte.uni-mainz.de

Abstract

Q is a functional programming language based on term rewriting. Programs are collections of equations which are used to evaluate expressions in a symbolic fashion. Q comes with a set of extension modules which make it a viable tool for scientific programming, computer music, multimedia, and other advanced applications. In particular, Q provides special support for multimedia applications using PortAudio, libsndfile, libsamplerate, FFTW, MidiShare and OSC (including a SuperCollider interface).
The paper gives a brief introduction to the Q language and its multimedia library, with a focus on the facilities for MIDI programming and the SuperCollider interface.

Keywords: Computer music, functional programming, multimedia programming, Q programming language, SuperCollider

1 Introduction

The pseudo-acronym “Q” stands for “equational programming language”. Q has its roots in term rewriting, a formal calculus for the symbolic evaluation of expressions coming from universal algebra and symbolic algebra systems (Dershowitz and Jouannaud, 1990). It builds on Michael O’Donnell’s ground-breaking work on equational programming in the 1980s (O’Donnell, 1985) and the author’s own research on efficient term pattern matching and rewriting techniques (Gräf, 1991). In a sense, Q is for modern functional programming languages what BASIC is for imperative ones: It is a fairly simple language, thus easy to learn and use, yet powerful enough to tackle most common programming tasks; it is an interpreted (rather than compiled) language, offering adequate (though not C-like) execution speed; and it comes with a convenient interactive environment including a symbolic debugger, which lets you play with the parts of your program to explore different solution approaches and to test things out. Despite its simplicity, Q should not be mistaken for a “toy language”; in fact, it comes with a fairly comprehensive collection of libraries which in many areas surpasses what is currently available for its bigger cousins like ML and Haskell. Moreover, Q’s SWIG interface makes it easy to interface to additional C and C++ libraries if needed. The Q programming environment is GPL’ed software which has been ported to a large variety of different platforms, including Linux (which has been the main development platform since 1993), FreeBSD, Mac OS X, BeOS, Solaris and Windows.
Q also has a cross-platform multimedia library which currently comprises MIDI (via Grame’s MidiShare), audio (providing interfaces to PortAudio v19, libsndfile, libsamplerate and FFTW) and software synthesis (via OSC, the “Open Sound Control” protocol developed by CNMAT, with special support for James McCartney’s SuperCollider software). Additional modules for 3D graphics (OpenGL) and video (libxine) are currently under development. In the following we give a brief overview of the language and the standard library, after which we focus on Q’s multimedia facilities. More information about Q can be found on the Q homepage at. 2 The language At its core, Q is a fairly simple language which is based entirely on the notions of reductions and normal forms pertaining to the term rewriting calculus. A Q program or script is simply a collection of equations which establish algebraic identities. The equations are interpreted as rewriting rules in order to reduce expressions to normal forms. The syntax of the language was inspired by the first edition of Bird and Wadler’s influential book on functional pro- LAC2005 21 gramming (Bird and Wadler, 1988) and thus is similar to other modern functional languages such as Miranda and Haskell. For instance, here is how you define a function sqr which squares its argument by multiplying it with itself: sqr X = X*X; solve P Q = (-P/2+sqrt D,-P/2-sqrt D) if D >= 0 where D = P^2/4-Q; You can also define global variables using a def statement. This is useful if a value is used repeatedly in different equations and you don’t want to recalculate it each time it is needed. def PI = 4*atan 1; When this equation is applied to evaluate an expression like sqr 2, the interpreter performs the reduction sqr 2 => 2*2. 
It then goes on to apply other equations (as well as a number of built-in rules implementing the primitive operations such as arithmetic) until a normal form is reached (an expression is said to be in normal form if no more equations or built-in rules can be applied to it). In our example, the interpreter will invoke the rule which handles integer multiplication: 2*2 => 4. The resulting expression 4 is in normal form and denotes the “value” of the original expression sqr 2. Note that, as in Prolog, capitalized identifiers are used to indicate the variables in an equation, which are bound to the actual values when an equation is applied. We also remark that function application is denoted simply by juxtaposition. Parentheses are used to group expressions and to indicate “tuple” values, but are not part of the function application syntax. This “curried” form of writing function applications is ubiquitous in modern functional languages. In addition, the Q language also supports the usual infix notation for operators such as + and *. As in other modern functional languages, these are just “syntactic sugar” for function applications; i.e., X*X is just a convenient shorthand for the function application (*) X X. Operator “sections” are also supported; e.g., (+1) denotes the function which adds 1 to its argument, (1/) the reciprocal function. Equations may also include a condition part, as in the following (recursive) definition of the factorial function:

fact N = N*fact (N-1) if N>0;
       = 1 otherwise;

Functions on structured arguments are defined by “pattern matching”. E.g., the quicksort function can be implemented in Q with the following two equations. (Note that lists are written in Prolog-like syntax, thus [] denotes the empty list and [X|Xs] a list starting with the head element X and continuing with the list of remaining elements Xs. Furthermore, the ++ operator denotes list concatenation.)
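The reduce-to-normal-form process just described can be modelled by a toy evaluator. The following is an illustrative Python sketch, not the actual Q interpreter: terms are tagged tuples, and rules are tried repeatedly until none applies any more.

```python
# A toy term-rewriting evaluator illustrating reduction to normal form.
# Illustrative sketch only, not the actual Q interpreter.

def rewrite_step(term, rules):
    """Try to apply one rule somewhere in the term; return (term, changed)."""
    for rule in rules:
        result = rule(term)
        if result is not None:
            return result, True
    # Otherwise try to rewrite subterms of tuple-structured applications.
    if isinstance(term, tuple):
        for i, sub in enumerate(term):
            new_sub, changed = rewrite_step(sub, rules)
            if changed:
                return term[:i] + (new_sub,) + term[i + 1:], True
    return term, False

def normal_form(term, rules):
    """Rewrite until no equation or built-in rule applies any more."""
    changed = True
    while changed:
        term, changed = rewrite_step(term, rules)
    return term

# Rule for the equation `sqr X = X*X`: ('sqr', X) rewrites to ('*', X, X).
def sqr_rule(term):
    if isinstance(term, tuple) and len(term) == 2 and term[0] == 'sqr':
        return ('*', term[1], term[1])
    return None

# Built-in rule for integer multiplication: ('*', m, n) => m*n.
def mul_rule(term):
    if (isinstance(term, tuple) and len(term) == 3 and term[0] == '*'
            and isinstance(term[1], int) and isinstance(term[2], int)):
        return term[1] * term[2]
    return None

print(normal_form(('sqr', 2), [sqr_rule, mul_rule]))  # sqr 2 => 2*2 => 4
```

The built-in arithmetic rule plays the same role as Q's primitive operations: it is just another rewriting rule applied on the way to the normal form.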
qsort [] = [];
qsort [X|Xs] = qsort (filter (<X) Xs) ++ [X]
               ++ qsort (filter (>=X) Xs);

Higher-order functions which take other functions as arguments can also be programmed in a straightforward way. For instance, the filter function used above is defined in the standard library as follows. In this case, the function argument P is a predicate expected to return the value true if an element should be included in the result list, false otherwise.

filter P [] = [];
filter P [X|Xs] = [X|filter P Xs] if P X;
                = filter P Xs otherwise;

Another useful extension to standard term rewriting are the “where clauses” which allow you to bind local variables in an equation. For instance, the solve equation shown earlier defines a function for solving quadratic equations x² + px + q = 0. It first checks whether the discriminant D = p²/4 − q is nonnegative before it uses this value to compute the two real solutions of the equation.

In contrast to “pure” functional languages such as Haskell, Q takes the pragmatic route in that it also provides imperative programming features such as I/O operations and mutable data cells (“references”), similar to the corresponding facilities in the ML programming language. While one may argue about the use of such “impure” operations with side-effects in a functional programming language, they certainly make life easier when dealing, e.g., with complex I/O situations and thread synchronization. The || operator can be employed to execute such actions in sequence. For instance, using the built-in reads (“read string”) and writes (“write string”) functions, a simple prompt/input interaction would be written as follows:

prompt = writes "Input: " || reads;

References work like pointers to expressions. Three operations are provided: ref, which creates a reference from its initial value; put, which changes the referenced value; and get, which returns the current value.
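For readers more familiar with mainstream languages, the quicksort and filter equations shown earlier translate almost verbatim into Python (an illustrative transliteration, not part of the Q library):

```python
# Python transliteration of the Q quicksort: the head element x acts as
# the pivot, and list comprehensions play the role of `filter (<X)` and
# `filter (>=X)`.

def qsort(xs):
    if not xs:                       # qsort [] = []
        return []
    x, rest = xs[0], xs[1:]          # the [X|Xs] pattern
    return (qsort([y for y in rest if y < x])
            + [x]
            + qsort([y for y in rest if y >= x]))

print(qsort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The Q version is arguably closer to the mathematical specification: the two equations directly state what the sorted result of an empty and a non-empty list is.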
With these facilities you can realize mutable data structures and maintain hidden state in a function. For instance, the following function counter returns the next integer at each invocation, starting at zero:

def COUNTER = ref 0;
counter = put COUNTER (N+1) || N where N = get COUNTER;

Despite its conceptual simplicity, Q is a full-featured functional programming language which allows you to write your programs in a concise and abstract mathematical style. Since it is an interpreted language, programs written in Q are definitely not as fast as their counterparts in C, but they are much easier to write, and the execution speed is certainly good enough for practical purposes (more or less comparable to interpreted Lisp and Haskell). Just like other languages of its kind, Q has automatic memory management, facilities for raising and handling exceptions, constructs for defining new, application-specific data types, and means for partitioning larger scripts into separate modules. Functions and data structures using “lazy” evaluation can be dealt with in a direct manner. Q also uses dynamic typing, featuring a Smalltalk-like object-oriented type system with single inheritance. This has become a rare feature in contemporary functional languages, which usually employ a static Hindley/Milner type system to provide more safety at the expense of restricting polymorphism. Q gives you back the flexibility of good old Lisp-style ad-hoc polymorphism and even allows you to extend the definition of existing operations (including built-in functions and operators) to your own data types.

3 The library

No modern programming or scripting language is complete without an extensive software library covering the more mundane programming tasks. In the bad old times of proprietary software, crafting such a library has always been a major undertaking, since all these components had to be created from scratch.
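The ref/put/get protocol of the counter example can be emulated in a few lines of Python; this is a hedged sketch of the idea (a one-element list stands in for Q's reference cell), not the actual Q primitives:

```python
# Minimal emulation of Q's reference operations (illustrative only).

def ref(value):
    return [value]          # a one-element list acts as a mutable cell

def put(cell, value):
    cell[0] = value

def get(cell):
    return cell[0]

COUNTER = ref(0)

def counter():
    """Return the next integer at each invocation, starting at zero."""
    n = get(COUNTER)
    put(COUNTER, n + 1)
    return n

print(counter(), counter(), counter())  # 0 1 2
```

As in the Q version, the state is hidden inside the reference cell; callers of counter never see the mutation directly.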
Fortunately, nowadays there is a large variety of open source software providing more or less standardized solutions for all these areas, so that “reinventing the wheel” can mostly be avoided. This is also the approach taken with the Q programming system, which acts as a kind of “nexus” connecting various open source technologies. To these ends, Q has an elaborate C/C++ interface including support for the SWIG wrapper generator, which makes it easy to interface to existing C/C++ libraries. This enabled us to provide a fairly complete set of cross-platform extension modules which, while not as comprehensive as the facilities of other (much larger) language projects such as Perl and Python, make it possible to tackle most practical programming tasks with ease. This part of the Q library also goes well beyond what is offered with most other modern functional languages, especially in the multimedia department.

The core of the Q programming system includes a standard library, written mostly in Q itself, which implements a lot of useful Q types and functions, such as complex numbers, generic list processing functions (including list comprehensions), streams (a variant of lists featuring lazy evaluation which makes it possible to represent infinite data structures), container data structures (sets, dictionaries, hash tables, etc.), the lambda calculus, and a PostScript interface. Also included in the core is a POSIX system interface which provides, e.g., low-level I/O, process and thread management, sockets, filename globbing and regular expression matching.

In the GUI department, Q relies on Tcl/Tk. While Tk is not the prettiest toolkit, its widgets are adequate for most purposes, it can be programmed quite easily, and, most importantly, it has been ported to a large variety of platforms. Using SWIG, it is also possible to embed GTK- and Qt-based interfaces, if a prettier appearance and/or more sophisticated GUI widgets are needed.
(Complete bindings for these “deluxe” toolkits are on the TODO list, but have not been implemented yet.) For basic 2D graphics, Q uses GGI, the “General Graphics Interface”, which has been augmented with a FreeType interface to add support for advanced font handling. Moreover, a module with bindings for the ImageMagick library allows you to work with virtually all popular image file formats and provides an abundance of basic and advanced image manipulation functions. To facilitate scientific programming, Q has interfaces to Octave, John W. Eaton’s well-known MATLAB-like numerical computation software, and to IBM’s “Open Data Explorer”, a comprehensive software for doing data visualization.

Web programming is another common occupation of the contemporary developer. In this realm, Q provides an Apache module and an XML/XSLT interface (xmlsoft.org) which allow you to create dynamic web content with ease. Moreover, an interface to the Curl library enables you to perform automated downloads and spidering tasks (curl.haxx.se). If you need database access, an ODBC module can be used to query and modify RDBMSs such as MySQL and PostgreSQL.

4 MIDI programming

Q’s MIDI interface, embodied by the midi module, is based on Grame’s MidiShare library (Fober et al., 1999). We have chosen MidiShare because it has been around since the time of the good old Atari and thus is quite mature, it has been ported to a number of different platforms (including Linux, Mac OS X and Windows), it takes a unique “client graph” approach which provides flexible dynamic routing of MIDI data between different applications, and, last but not least, it offers comprehensive support for handling standard MIDI files. While MidiShare already abstracts from all messy hardware details, Q’s midi module goes one step further in that it also represents MIDI messages not as cryptic byte sequences, but as a high-level “algebraic” data type which can be manipulated easily.
For instance, note on messages are denoted using data terms of the form note_on CHANNEL NOTE VELOCITY. The functions midi_get and midi_send are used to read and write MIDI messages, respectively. For example, Fig. 1 shows a little script for transposing MIDI messages in realtime. The midi module provides all necessary data types and functions to process MIDI data in any desired way. It also gives access to MidiShare’s functions for handling standard MIDI files. In order to work with entire MIDI sequences, MIDI messages can be stored in Q’s built-in list data structure, where they can be manipulated using Q’s extensive set of generic list operations. Q’s POSIX multithreading support allows you to run multiple MIDI processing algorithms concurrently and with realtime scheduling priorities, which is useful or even essential for many types of MIDI applications.

These features make it possible to implement fairly sophisticated MIDI applications with moderate effort. To demonstrate this, we have employed the midi module to program various algorithmic composition tools and step sequencers, as well as a specialized graphical notation and sequencing software for percussion pieces. The latter program, called “clktrk”, was used by the composer Benedict Mason for one of his recent projects (felt | ebb | thus | brink | here | array | telling, performed by the Ensemble Modern with the Junge Deutsche Philharmonie at the Donaueschingen Music Days 2004 and the Maerzmusik Berlin 2005). Other generally useful tools with KDE/Qt-based GUIs can be found on the Q homepage. For instance, Fig. 2 shows the QMidiCC program, a MidiShare patchbay which can be configured to take care of your MidiShare drivers and to automatically connect new clients as soon as they show up in the MidiShare client list. QMidiCC can also be connected to other MidiShare applications to print their MIDI output and to send them MIDI start and stop messages.
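The transposition logic of Fig. 1 can be transliterated into Python to show what the algebraic message representation buys you; this is an illustrative sketch (tagged tuples stand in for Q's data terms, and it is not the actual midi module API):

```python
# MIDI messages as tagged tuples, mirroring Q's data terms such as
# `note_on CHANNEL NOTE VELOCITY`. Illustrative sketch only.

def transp(k, msg):
    """Transpose note on/off messages by k semitones; pass others through."""
    kind = msg[0]
    if kind in ('note_on', 'note_off'):
        ch, note, vel = msg[1:]
        return (kind, ch, note + k, vel)
    return msg                               # leave other messages unchanged

print(transp(12, ('note_on', 0, 60, 100)))   # ('note_on', 0, 72, 100)
print(transp(12, ('ctrl_change', 0, 7, 90))) # passed through unchanged
```

The Q version in Fig. 1 expresses the same three cases as three pattern-matching equations, with the catch-all `transp K MSG = MSG otherwise;` corresponding to the final return.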
import midi;

/* register a MidiShare client and establish I/O connections */
def REF = midi_open "Transpose",
    IO  = midi_client_ref "MidiShare/ALSA Bridge",
    _   = midi_connect IO REF || midi_connect REF IO;

/* transpose note on and off messages, leave other messages unchanged */
transp K (note_on CH N V)  = note_on CH (N+K) V;
transp K (note_off CH N V) = note_off CH (N+K) V;
transp K MSG               = MSG otherwise;

/* the following loop repeatedly reads a message, transposes it and
   immediately outputs the transformed message */
transp_loop K = midi_send REF 0 (transp K MSG) || transp_loop K
  where (_,_,_,MSG) = midi_get REF;

Figure 1: Sample MIDI script.

Figure 2: QMidiCC program.

5 Audio and software synthesis

The audio interface consists of three modules which together provide the necessary facilities for processing digital audio in Q. The audio module is based on PortAudio (v19), a cross-platform audio library which provides the necessary operations to work with the audio interfaces of the host operating system. Under Linux this module gives access to both ALSA and Jack (jackit.sf.net). The sndfile module uses Erik de Castro Lopo’s libsndfile library which allows you to read and write sound files in a variety of formats. The wave module provides basic operations to create, inspect and manipulate wave data represented as “byte strings” (a low-level data structure provided by Q’s system interface which is used to store raw binary data). It also includes operations for sample rate conversion (via libsamplerate) and fast Fourier transforms (via FFTW, www.fftw.org), as well as a function for drawing waveforms in a GGI visual. Q’s audio interface provides adequate support for simple audio applications such as audio playback and recording, and provides a framework for programming more advanced audio analysis and synthesis techniques.
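The kind of raw byte-string buffer the wave module operates on can be sketched in Python with the struct module. This is an illustrative example of building wave data as packed 16-bit samples, not the wave module's actual API:

```python
import math
import struct

# Build one cycle of a 16-bit mono sine wave as a raw byte string, the
# kind of low-level buffer a wave-processing module manipulates.
# Illustrative sketch; parameter names are our own.

def sine_wave_bytes(freq=440.0, rate=44100, amplitude=0.5, nframes=None):
    if nframes is None:
        nframes = int(rate / freq)           # roughly one full cycle
    samples = (int(amplitude * 32767 * math.sin(2 * math.pi * freq * i / rate))
               for i in range(nframes))
    # '<h' packs each sample as a little-endian signed 16-bit integer.
    return b''.join(struct.pack('<h', s) for s in samples)

data = sine_wave_bytes()
print(len(data))  # 2 bytes per frame: 2 * int(44100/440) = 200
```

Operations like sample rate conversion or an FFT then work directly on such flat buffers, which is why a compact byte-string type is a convenient common currency between audio modules.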
For these you’ll either have to provide your own C or C++ modules to do the necessary processing of the wave data, or employ Q’s osc module, which allows you to drive OSC-aware software synthesizers. We also offer an sc module which provides special support for James McCartney’s SuperCollider (McCartney, 2002). The osc module defines an algebraic data type as a high-level representation of OSC packets which can be manipulated easily. All standard OSC features are supported, including OSC bundles. The module also implements a simple UDP transport layer for sending and receiving OSC packets. In addition, the sc module offers some convenience functions to control SuperCollider’s sclang and scsynth applications. Fig. 3 shows a little Q script implementing some common OSC messages which can be used to control the SuperCollider sound server. Using these facilities in combination with the midi module, it is a relatively straightforward matter

import osc, sc;

// load a synthdef into the server
d_load NAME = sc_send (osc_message CMD_D_LOAD NAME);

// create a new synth node (add at the end of the main group)
s_new NAME ID ARGS = sc_send (osc_message CMD_S_NEW (NAME,ID,1,0|ARGS));

// free a synth node
n_free ID = sc_send (osc_message CMD_N_FREE ID);

// set control parameters
n_set ID ARGS = sc_send (osc_message CMD_N_SET (ID|ARGS));

Figure 3: Sample OSC script.

/* get MIDI input */
midiin = (TIME,MSG) where (_,_,TIME,MSG) = midi_get REF;

/* current pitch wheel value and tuning table */
def WHEEL = ref 0.0, TT = map (ref.(*100.0)) [0..127];

/* calculate the frequency for a given MIDI note number N */
freq N = 440*2^((get (TT!N)-6900)/1200+get WHEEL/6);

/* The MIDI loop: Assign voices from a queue Q of preallocated SC synth
   units in a round-robin fashion. Keep track of the currently assigned
   voices in a dictionary P. The third parameter is the MIDI event to be
   processed next. */
/* note offs: set the gate of the synth to 0 and put it at the end
   of the queue */
loop P Q (_,note_on _ N 0)
  = n_set I ("gate",0) || loop P Q midiin
    where (I,_) = P!N, P = delete P N, Q = append Q I;
  = loop P Q midiin otherwise;
loop P Q (T,note_off CH N _) = loop P Q (T,note_on CH N 0);

/* note ons: turn note off if already sounding, then get a new voice
   from the queue and set its gate to 1 */
loop P Q (T,note_on CH N V)
  = n_set I ("gate",0) || loop P Q (T,note_on CH N V)
    where (I,_) = P!N, P = delete P N, Q = append Q I;
  = n_set I ("freq",FREQ,"gain",V/127,"gate",1) || loop P Q midiin
    where [I|Q] = Q, FREQ = freq N, P = insert P (N,(I,FREQ));

Figure 4: Excerpt from a MIDI to OSC processing loop.

to implement software synthesizers which can be played in realtime via MIDI. All actual audio processing takes place in the synthesis engine; the Q script only acts as a kind of “MIDI to OSC” translator. For instance, Fig. 4 shows an excerpt from a typical MIDI processing loop. An example of such a program, called “QSCSynth”, can be found on the Q homepage (cf. Fig. 5). QSCSynth is a (KDE/Qt based) GUI frontend for the sclang and scsynth programs which allows you to play and control SuperCollider synthesizers defined in an SCLang source file. It implements a monotimbral software synth which can be played via MIDI input and other MidiShare applications (including ALSA clients, via MidiShare’s ALSA bridge). QSCSynth can also be configured to map arbitrary MIDI controller messages to corresponding OSC messages which change the control parameters of the synthesizer and effect units defined in the SCLang source file. Moreover, QSCSynth provides its own control surface (constructed automatically from the parameter descriptions found in the binary synth definition files) which lets you control synth and effect units from the GUI as well.
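The round-robin voice assignment of Fig. 4 can be sketched imperatively in Python. This is a hypothetical illustration: n_set merely records the would-be OSC commands, and the standard MIDI tuning formula replaces Fig. 4's tuning table.

```python
from collections import deque

# Illustrative sketch of Fig. 4's voice allocation: a queue of
# preallocated synth unit ids, and a dict mapping sounding MIDI notes
# to their assigned voice. `events` collects the OSC-style commands
# instead of sending them to SuperCollider.

events = []

def n_set(unit_id, **params):
    events.append((unit_id, params))   # stand-in for the real OSC n_set

class VoiceAllocator:
    def __init__(self, unit_ids):
        self.queue = deque(unit_ids)   # free voices, round-robin order
        self.playing = {}              # note -> unit id

    def note_on(self, note, velocity):
        if note in self.playing:       # retrigger: release the old voice first
            self.note_off(note)
        unit = self.queue.popleft()    # take the next free voice
        self.playing[note] = unit
        n_set(unit, freq=440 * 2 ** ((note - 69) / 12),
              gain=velocity / 127, gate=1)

    def note_off(self, note):
        unit = self.playing.pop(note, None)
        if unit is not None:
            n_set(unit, gate=0)        # close the envelope gate
            self.queue.append(unit)    # voice goes to the back of the queue

va = VoiceAllocator([1000, 1001, 1002])
va.note_on(69, 100)   # A4 -> unit 1000
va.note_on(72, 100)   # C5 -> unit 1001
va.note_off(69)       # unit 1000 released and requeued last
```

As in the Q loop, all audio processing would happen in the synthesis engine; this code only decides which unit receives which control messages.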
6 The future

While Q’s multimedia library already provides a fairly complete framework for programming multimedia and computer music applications on Linux, there still remain a few things to be done:

• Finish the OpenGL and video support.
• Provide modules for some Linux-specific libraries such as Jack, LADSPA and DSSI.
• Provide high-level interfaces for computer music applications such as algorithmic composition. There are a few lessons to be learned from existing environments here, such as Rick Taube’s Common Music (Taube, 2005), Grame’s Elody (Letz et al., 2000) and Paul Hudak’s Haskore (Hudak, 2000b).
• Add graphical components for displaying and editing music (piano rolls, notation, etc.). For this we should try to reuse parts from existing open source software, such as Lilypond (lilypond.org), the GUIDO library and Rosegarden.
• Add a “patcher”-like visual programming interface, such as the one found in IRCAM’s OpenMusic.

7 Conclusion

Functional programming has always played an important role in computer music, because it eases the symbolic manipulation of complex structured data. However, to our knowledge no other “modern-style” functional language currently provides the necessary interfaces to implement sophisticated, realtime-capable multimedia applications. We therefore believe that Q is an interesting tool for those who would like to explore MIDI programming, sound synthesis and other multimedia applications in the context of a high-level, general-purpose, non-imperative programming language. While the Q core system is considered stable, the language and its libraries continue to evolve, and it is our goal to turn Q into a viable tool for rapid application development in many different areas. We think that multimedia is an attractive playground for functional programming, because modern FP languages allow many problems in this realm to be solved in new and interesting ways; see in particular Paul Hudak’s book on multimedia programming with Haskell (Hudak, 2000a) for more examples.
As the multithreading and realtime capabilities of mainstream functional languages mature, it might also be an interesting option to port some of Q’s libraries to other environments such as the Glasgow Haskell compiler, which offer better execution speed than an interpreted language, for the benefit of both the functional programming community and multimedia application developers.

Figure 5: QSCSynth program.

References

Richard Bird and Philip Wadler. 1988. Introduction to Functional Programming. Prentice Hall, New York.

Nachum Dershowitz and Jean-Pierre Jouannaud. 1990. Rewrite systems. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, chapter 6, pages 243–320. Elsevier.

Dominique Fober, Stephane Letz, and Yann Orlarey. 1999. MidiShare joins the open sources softwares. In Proceedings of the International Computer Music Conference, pages 311–313. International Computer Music Association.

Albert Gräf. 1991. Left-to-right tree pattern matching. In Ronald V. Book, editor, Rewriting Techniques and Applications, LNCS 488, pages 323–334. Springer.

Paul Hudak. 2000a. The Haskell School of Expression: Learning Functional Programming Through Multimedia. Cambridge University Press.

Paul Hudak. 2000b. Haskore Music Tutorial. Yale University, Department of Computer Science.

Stephane Letz, Dominique Fober, and Yann Orlarey. 2000. Realtime composition in Elody. In Proceedings of the International Computer Music Conference. International Computer Music Association.

James McCartney. 2002. Rethinking the computer music language: SuperCollider. Computer Music Journal, 26(4):61–68.

Michael O’Donnell. 1985. Equational Logic as a Programming Language. Series in the Foundations of Computing. MIT Press, Cambridge, Mass.

Heinrich K. Taube. 2005. Notes from the Metalevel: Introduction to Algorithmic Music Composition. Swets & Zeitlinger. To appear.
jackdmp: Jack server for multi-processor machines

S. Letz, D. Fober, Y. Orlarey
Grame - Centre national de création musicale
{letz, fober, orlarey}@grame.fr

Abstract

jackdmp is a C++ version of the Jack low-latency audio server for multi-processor machines. It is a new implementation of the Jack server core features that aims at removing some limitations of the current design. The activation system has been changed for a data-flow model, and lock-free programming techniques for graph access have been used to obtain a more dynamic and robust system. We present the new design and the implementation for MacOSX.

Keywords: real-time, data-flow model, audio server, lock-free

1 Introduction

Jack is a low-latency audio server, written for POSIX conformant operating systems such as GNU/Linux. It can connect a number of different applications to an audio device, as well as allowing them to share audio between themselves (Vehmanen, Wingo and Davis 2003). The current code base, written in C and developed over several years, is available for GNU/Linux and MacOSX systems. An additional integration with the MacOSX CoreAudio architecture has been realized (Letz, Fober and Orlarey 2004). The system is now a fundamental part of the Linux audio world, where most music-oriented audio applications are now Jack compatible. On MacOSX, it has extended the CoreAudio architecture by adding low-latency inter-application audio routing capabilities in a transparent manner.¹

The new design and implementation aims at removing some limitations of the current version, by isolating the “heart” of the system and simplifying the implementation:

• the sequential activation model has been changed to a new graph activation scheme based on a data-flow model, which will naturally take profit of multi-processor machines;
• a more robust architecture based on lock-free programming techniques has been developed to allow the server to keep working (not interrupting the audio stream) when the client graph changes or in case of client execution failure, which is especially interesting in live situations;
• various simplifications have been made in the internal design.

Section 2 explains the requirements, section 3 describes the new design, section 4 describes the implementation, and finally section 5 describes the performances.

¹ All CoreAudio applications can take profit of Jack features without any modification.

2 Multi-processing

Taking profit of multi-processor architectures usually requires applications to be adapted. A natural way is to develop multi-threaded code; in audio applications a usual separation consists in executing audio DSP code in a realtime thread and normal code (GUI for instance) in one or several standard threads. The scheduler then activates all runnable threads in parallel on the available processors. In a Jack server like system, there is a natural source of parallelism when Jack clients depend on the same input and can be executed on different processors at the same time. The main requirement is then to have an activation model that allows the scheduler to correctly activate parallel runnable clients. Going from a sequential activation model to a completely distributed one also raises synchronization issues, which can be solved using lock-free programming techniques.

3 New design

3.1 Graph execution

In the current activation model (either on Linux or MacOSX), knowing the data dependencies between clients allows the server to sort the client graph to find an activation order. This topological sorting step is done each time the graph state changes, for example when connections are made or removed or when a new client opens or closes. This order is used by the server to activate clients in sequence.
Forcing a complete serialization of client activation is not always necessary: for example, clients A and B (Fig 1) could be executed at the same time, since they both only depend on the “Input” client. In this graph example, the current activation strategy chooses an arbitrary order to activate A and B. This model is adapted to mono-processor machines, but cannot exploit multi-processor architectures efficiently.

Figure 1: Client graph: Client A and B could be executed at the same time, C must wait for A and B end, D must wait for C end.

3.2 Data flow model

Data flow diagrams (DFD) are an abstract general representation of how data flows around a system. In particular they describe systems where the ordering of operations is governed by data dependencies, and where only the availability of the needed data determines the execution of a process. A graph of Jack clients typically contains sequential and parallel sub-parts (Fig 1). When parallel sub-graphs exist, clients can be executed on different processors at the same time. A data-flow model can be used to describe this kind of system: a node in a data-flow graph becomes runnable when all its inputs are available. The client ordering step done in the mono-processor model is not necessary anymore. Each client uses an activation counter to count the number of input clients it depends on. The state of client connections is updated each time a connection between ports is made or removed. Activation is transferred from client to client during each server cycle as they are executed: a suspended client is resumed, executes itself, propagates activation to its output clients, and goes back to sleep, until all clients have been activated.²

² The data-flow model still works on mono-processor machines and will correctly guarantee a minimum global number of context switches, like the “sequential” model.
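The counter-based activation on the graph of Fig. 1 can be simulated in a few lines of Python (an illustrative single-threaded simulation; the real server distributes this across RT threads):

```python
# Simulation of the data-flow activation scheme: each client holds a
# counter of the inputs it depends on; a client runs when its counter
# reaches zero and then decrements its output clients' counters.

graph = {                      # client -> list of output clients (Fig. 1)
    'Input': ['A', 'B'],
    'A': ['C'],
    'B': ['C'],
    'C': ['D'],
    'D': [],
}

def run_cycle(graph):
    # Reset activation counters to the number of input connections.
    counters = {c: 0 for c in graph}
    for outputs in graph.values():
        for o in outputs:
            counters[o] += 1
    order = []
    runnable = [c for c, n in counters.items() if n == 0]  # e.g. 'Input'
    while runnable:
        client = runnable.pop(0)
        order.append(client)                   # "execute" the client
        for o in graph[client]:                # propagate activation
            counters[o] -= 1
            if counters[o] == 0:
                runnable.append(o)             # last input resumes it
    return order

order = run_cycle(graph)
print(order)  # C runs only after both A and B; D only after C
```

Note that no global topological sort is computed: the ordering emerges from the counters alone, which is exactly what makes the scheme work when several clients are runnable in parallel.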
3.2.1 Graph loops

The Jack connection model allows loops to be established. Special feedback connections are used to close a loop, and introduce a one-buffer latency. We currently follow Simon Jenkins’ proposition³ where the feedback connection is introduced at the place where the loop is established. This scheme is simple but has the drawback that the activation order becomes sensitive to the connection history. More complex strategies that avoid this problem will possibly be tested in the future.

³ Discussed on the jack-dev mailing list.

3.3 Lock-free programming

In classic lock-based programming, access to shared data needs to be serialized using mutual exclusion. Update operations must appear as atomic. The standard way is to use a mutex that is locked when a thread starts an update operation and unlocked when the operation is finished. Other threads wanting to access the same data check the mutex and possibly suspend their execution until the mutex becomes unlocked. Lock-based programming is sensitive to priority inversion problems or deadlocks. Lock-free programming, on the contrary, allows to build data structures that are safe for concurrent use without needing to manage locks or block threads (Fober, Letz, and Orlarey 2002).

Locks are used at several places in the current Jack server implementation. For example, the client graph needs to be locked each time a server update operation accesses it. When the real-time audio thread runs, it also needs to access the client graph. If the graph is already locked, and to avoid waiting an arbitrary long time, the Real-Time (RT) thread generates an empty buffer for the given audio cycle, causing an annoying interruption in the audio stream. A lock-free implementation aims at removing all locks (and particularly the graph lock) and allowing all graph state changes (add/remove client, add/remove ports, connection/disconnection...) to be done without interrupting the audio stream.
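Lock-free updates of this kind are typically built on an atomic compare-and-swap. The following is a toy Python emulation (real implementations use the hardware CAS instruction; the lock here only makes the emulation itself atomic) showing how a "current state" can be flipped atomically between two preallocated graph states:

```python
import threading

# Toy emulation of compare-and-swap and a two-state switch.
# Illustrative only: a real server uses the CPU's atomic CAS.

class AtomicCell:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def cas(self, expected, new):
        """Atomically replace the value iff it still equals `expected`."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

# Two graph states: one current, one being updated by the server thread.
states = [{'connections': []}, {'connections': [('A', 'C')]}]
current = AtomicCell(0)

def switch_state(cell):
    """At cycle start: retry the CAS until the flip succeeds."""
    while True:
        cur = cell.load()
        if cell.cas(cur, 1 - cur):
            return 1 - cur

idx = switch_state(current)
print(states[idx])  # the updated state has become the current one
```

The retry loop is the characteristic CAS pattern: if another thread changed the cell between the load and the cas, the cas fails and the operation is simply attempted again, without any thread ever blocking on a graph lock.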
As described in the implementation section, this new constraint also requires some changes in the client-side threading model.

3.3.1 Lock-free graph state change

All update operations from clients are serialized through the server, thus only one thread updates the graph state. RT threads from the server and clients have to see the same coherent state during a given audio cycle. Non-RT threads from clients may also access the graph state at any time. The idea is to use two states: one current state and one next state to be updated. A state change consists in atomically switching from the current state to the next state. This is done by the RT audio server thread at the beginning of a cycle, and the RT threads of the other clients will use the same state during the entire cycle. All state management operations are implemented using the CAS⁵ operation and are described in more detail in the implementation section.

⁵ CAS is the basic operation used in lock-free programming: it compares the content of a memory address with an expected value and, on success, replaces the content with a new value.

3.4 A “robust” server

Having a robust system is especially important in live situations, where one can accept a temporary graph execution fault, which is usually better than having the system totally failing with a completely silent buffer and an audio stream interruption. In the current sequential version, the server waits for the end of the client graph execution before it can produce the output audio buffers. Thus a client that does not run during one cycle will cause the complete failure of the system. In a multi-processor context, it is interesting to have a more distributed system, where a part of the graph may still run on one processor even if another part is blocked on the other one.

3.4.1 Engine cycle

The engine cycle has been redesigned. The server no longer waits for the end of the client execution; it uses the buffers computed at the previous cycle. The server cycle is fast and takes almost constant time, since it is totally decoupled from the clients’ execution.⁴ This allows the system to keep running even if a part of the graph cannot be executed during the cycle for whatever reason (too slow client, crash of a client...). The server is more robust: the resulting output buffer may be incomplete, if one or several clients have not produced their contribution, but the output audio stream will still be produced. The server can detect abnormal situations by checking whether all clients have been executed during the previous cycle, and possibly notify the faulty clients with an XRun event.

⁴ Some operations like buffer size change will still interrupt the audio stream.

3.4.2 Latency

Since the server uses the output buffers produced during the previous cycle, this new model adds one more buffer of latency in the system.⁶ But according to the needs, it will be possible to choose between the current model, where the server is synchronized on the end of the client graph execution, and the new, more robust distributed model with higher latency.

4 Implementation

The new implementation concentrates on the core part of the system. Some parts of the API, like the Transport system, are not implemented yet.

4.1 Data structure

Accessing data in shared memory using pointers on the server and client side is usually complex: pointers have to be described as offsets relative to a base address local to each process. Linked lists, for example, are more complex to manage and usually need locked access methods in multi-threaded cases. We chose to simplify the data structures, using fixed-size preallocated arrays that are easier to manipulate in a lock-free manner.

4.2 Shared Memory

Shared memory segments are allocated on the server side. A reference (index) on the shared segment must be transferred to the client side.
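The index-based handover can be sketched with Python's multiprocessing.shared_memory module (an illustrative analogy: here the segment name plays the role of the index, whereas the real implementation passes an index to a raw POSIX/Mach shared memory segment):

```python
from multiprocessing import shared_memory

# Illustrative sketch: the "server" allocates a named segment and hands
# only the name (playing the role of the shared memory index) to the
# "client", which attaches to the same memory instead of receiving a
# process-local pointer.

server_seg = shared_memory.SharedMemory(create=True, size=16)
server_seg.buf[0] = 42                    # server writes into the segment

index = server_seg.name                   # the reference passed to the client

client_seg = shared_memory.SharedMemory(name=index)   # client attaches
print(client_seg.buf[0])                  # same underlying memory: 42

client_seg.close()
server_seg.close()
server_seg.unlink()
```

Because only the index travels between processes, each side computes its own local base address when attaching, which is exactly why pointer-free, index-based data structures are preferred inside the segment.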
Shared memory management is done using two classes:

• On the server side, the JackShmMem class overloads the new and delete operators. Objects of sub-classes of JackShmMem will be automatically allocated in shared memory. The GetShmIndex method retrieves the corresponding index, to be transferred and used on the client side.
• Shared memory objects are accessed using a standard pointer on the server side. On the client side, the JackShmPtr template class allows objects allocated in shared memory to be manipulated in a transparent manner: initialized with the index obtained from the server side, a JackShmPtr pointer can be used to access the data and (non-virtual) methods of the corresponding server shared memory object.

LAC2005 31

Shared memory segments allocated on the server are transferred from server to client when a new client is registered with the server, using the corresponding shared memory indexes.

4.3 Graph state

The connection state was previously described as a list of connected ports for a given port. This list was duplicated on both the server and client side, thus complicating the connection/disconnection steps. Connections are now managed in shared memory, in fixed-size arrays. The JackConnectionManager class maintains the state of connections. Connections are represented as an array of port indexes for a given port. Changes in the connection state are reflected at the next audio cycle. The JackGraphManager is the global graph management object. It contains a connection manager and an array of preallocated ports.

4.4 Port description

Ports are a description of the data type to be exchanged between Jack clients, with an associated buffer used to transfer data. For audio input ports, this buffer is typically used to mix the buffers of all connected output ports. Audio buffers were previously managed in an independent shared memory segment.
For simplification purposes, each audio buffer is now associated with a port. Having all buffers in shared memory will allow some optimizations: an input port used in several places with the same data dependencies could possibly be computed once and shared. Buffers are preallocated with the maximum possible size, so no re-allocation operation is needed anymore. Ports are implemented in the JackPort class.

4.5 Client activation

At each cycle, clients that only depend on the input driver, and clients without inputs, have to be activated first. To manage clients without inputs, an internal freewheel driver is used: when first activated, such a client is connected to it. At the beginning of the cycle, each client has its activation counter set to the number of input clients it depends on. After being activated, the client decrements the activation counter of all its connected outputs. The last activated input client resumes the following client in the graph (Fig 2). Each client uses an inter-process suspend/resume primitive associated with an activation counter. An implementation can be described with the following pseudo code. Execution of a server cycle follows several steps:

• read audio input buffers
• write the output audio buffers computed the previous cycle
• for each client in the client list, reset the activation counter to its initial value
• activate all clients that depend on the input driver client, or that have no input
• suspend until the next cycle

Figure 2: Example of graph activation: C is activated by the last running of its A and B inputs.

After being resumed by the system, execution of a client consists of:

• calling the client process callback
• propagating activation to output clients
• suspending until the next cycle

On each platform, an efficient synchronization primitive is needed to implement the suspend/resume operation. Mach semaphores are used on MacOSX.
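The activation-counter scheme above can be sketched as follows. All names are invented for illustration: a real implementation decrements the counters from concurrently running client processes and resumes the downstream client through an inter-process semaphore, while this single-process sketch simply records the order in which clients would run.

```cpp
#include <atomic>
#include <vector>
#include <cstddef>

// Each client holds a counter initialised to the number of input
// clients it depends on. When a client finishes, it decrements the
// counter of each downstream client; the client performing the last
// decrement "resumes" that downstream client.
struct Client {
    int initial_inputs = 0;             // dependencies at cycle start
    std::atomic<int> activation{0};
    std::vector<std::size_t> outputs;   // indexes of downstream clients
};

void run_client(std::vector<Client>& graph, std::size_t idx,
                std::vector<std::size_t>& order) {
    order.push_back(idx);               // stands in for the process callback
    for (std::size_t out : graph[idx].outputs) {
        // fetch_sub returns the previous value: the last input client
        // to finish sees 1 and resumes the downstream client.
        if (graph[out].activation.fetch_sub(1) == 1)
            run_client(graph, out, order);
    }
}

std::vector<std::size_t> run_cycle(std::vector<Client>& graph) {
    std::vector<std::size_t> order;
    for (auto& c : graph)               // reset counters at cycle start
        c.activation.store(c.initial_inputs);
    for (std::size_t i = 0; i < graph.size(); ++i)
        if (graph[i].initial_inputs == 0)   // driver-fed / input-less clients
            run_client(graph, i, order);
    return order;
}
```

With the graph of Figure 2 (A and B feeding C), C is run only after whichever of A and B finishes last, exactly as described.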
They are allocated and published by the server in a global namespace (using the Mach bootstrap service mechanism). Running clients are notified when a new client is opened, and access the corresponding semaphore. Linux kernel 2.6 features the Fast Userspace muTEX (futex), a new facility that allows two processes to synchronize (including blocking and waking) with either no or very little interaction with the kernel. It seems likely that futexes are better suited to the task of coordinating multiple processes than the FIFOs that the Linux implementation currently uses.

4.6 Lock-free graph access

Lock-free graph access is done using the JackAtomicState template class. This class implements the two-state pattern. Update methods work on the next state and read methods access the current state. The two states can be atomically exchanged using a CAS-based implementation.

• Code updating the next state is protected using the WriteNextStateStart and WriteNextStateStop methods. When executed between these two methods, it can freely update the next state and be sure that the RT reader thread cannot switch to the next state. (The programming model is similar to a lock-based model where the update code would be written inside a mutex-lock/mutex-unlock pair.)
• The RT server thread switches to the new state using the TrySwitchState method, which returns the current state if called concurrently with an update operation, and switches to the next state otherwise.
• Other RT threads read the current state, valid during the given audio cycle, using the ReadCurrentState method.
• Non-RT threads read the current state using the ReadCurrentState method and have to check that the state was not changed during the read operation (using the GetCurrentIndex method):

    void ClientNonRTCode(...)
    {
        int cur_index, next_index;
        State* current_state;
        next_index = GetCurrentIndex();
        do {
            cur_index = next_index;
            current_state = ReadCurrentState();
            ... < copy current_state > ...
            next_index = GetCurrentIndex();
        } while (cur_index != next_index);
    }

4.7 Server client communications

A global client registration entry point is defined to allow client code to register a new client (a JackServerChannel object). A private communication channel is then allocated for each client for all client requests, and remains until the client quits. A possible crash of a client is detected and handled by the server when the private communication channel is abnormally closed. A notification channel is also allocated to allow the server to notify clients: graph reorder, xrun, port registration events... Running clients can also detect that the server no longer runs, as soon as waiting on the input suspend/resume primitive fails (Fig 3). The current version uses socket-based channels. On MacOSX, we use MIG (the Mach Interface Generator), a very convenient way to define new Remote Procedure Calls (RPC) between the server and clients (both synchronous and asynchronous function calls can be defined).

Figure 3: The server defines a public "client registration" channel. Each client is linked with the server using two channels, "request" and "notification".

4.8 Server

The Jack server contains the global client registration channel, the drivers, an engine, and a graph manager. It receives requests from the global channel, handles some of them (BufferSize change, Freewheel mode...) and redirects the others to the engine.

4.8.1 Engine

The engine contains a JackEngineControl, a global shared server object also visible to clients.
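The two-state mechanism of section 4.6 can be sketched as follows. This is an illustrative reconstruction, not the actual JackAtomicState source: the real class packs a generation counter and an index into a single word and swaps them with one CAS, while this simplified version uses separate atomic flags and assumes, as the paper states, that updates are serialized through the server.

```cpp
#include <atomic>

// Simplified two-state pattern: a writer prepares states[1 - cur], the
// RT thread atomically switches the current index, and readers re-read
// the index to detect a concurrent switch.
template <typename State>
class AtomicState {
public:
    State states[2];

    State* WriteNextStateStart() {
        writing.store(true, std::memory_order_release);
        next_ready.store(false, std::memory_order_release);
        return &states[1 - cur.load(std::memory_order_acquire)];
    }
    void WriteNextStateStop() {
        writing.store(false, std::memory_order_release);
        next_ready.store(true, std::memory_order_release);
    }
    // RT thread: switch only if no update is in progress and a new
    // state has been published; otherwise keep the current state.
    State* TrySwitchState() {
        int c = cur.load(std::memory_order_acquire);
        if (!writing.load(std::memory_order_acquire) &&
            next_ready.load(std::memory_order_acquire) &&
            cur.compare_exchange_strong(c, 1 - c)) {
            next_ready.store(false, std::memory_order_release);
            return &states[1 - c];
        }
        return &states[c];
    }
    State* ReadCurrentState() {
        return &states[cur.load(std::memory_order_acquire)];
    }
    int GetCurrentIndex() { return cur.load(std::memory_order_acquire); }

private:
    std::atomic<int> cur{0};
    std::atomic<bool> writing{false};
    std::atomic<bool> next_ready{false};
};
```

A switch attempted between WriteNextStateStart and WriteNextStateStop is refused, which is exactly the guarantee the update code relies on.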
The engine does the following:

• handles requests for new clients through the global client registration channel, and allocates a server representation of new external clients
• handles requests from running clients
• activates the graph when triggered by the driver, and does various timing-related operations (CPU load measurement, detection of late clients...)

4.8.2 Server clients

Server clients are either internal clients (a JackInternalClient object) when they run in the server process space (drivers are a special subclass of internal clients), or external clients (a JackExternalClient object) as a server representation of an external client. External clients contain the local data (for example the notification channel, a JackNotifyChannel object) and a JackClientControl object to be used by the server and the client.

4.8.3 Library Client

On the client side, the current Jack version uses a one-thread model: real-time code and notifications (graph reorder event, xrun event...) are treated in a unique thread. Indeed, the server stops audio processing while notifications are handled on the client side. This has some advantages (a much simpler model for synchronization), but also some problematic consequences: since notifications are handled in a thread with real-time behaviour, a non-realtime-safe notification may disturb the whole machine. Because the server audio thread is no longer interrupted, most server notifications will typically be delivered while the client audio thread is also running. A two-thread model has to be used for clients:

• a real-time thread dedicated to the audio process
• a standard thread for notifications

The client notification thread is started in the jack_client_new call. Thus clients can already receive notifications when they are in the opened state. The client real-time thread is started in the jack_activate call.
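The two-thread client model can be sketched as follows. This is a hypothetical illustration with invented names: an ordinary std::thread and a condition variable stand in for the real notification channel, and the real RT audio thread is omitted. The point is that notifications are serviced by their own thread, started at "open" time, so they can arrive while audio keeps running.

```cpp
#include <thread>
#include <mutex>
#include <condition_variable>
#include <queue>
#include <functional>

// Hypothetical two-thread client: the notification thread starts in the
// constructor ("jack_client_new") and drains pending events before
// joining in the destructor ("jack_client_close").
class ClientThreads {
public:
    ClientThreads() {
        notifier = std::thread([this] { NotificationLoop(); });
    }
    ~ClientThreads() {
        {
            std::lock_guard<std::mutex> lk(m);
            running = false;
        }
        cv.notify_one();
        notifier.join();
    }
    void Notify(std::function<void()> event) {   // called by the "server"
        std::lock_guard<std::mutex> lk(m);
        events.push(std::move(event));
        cv.notify_one();
    }
private:
    void NotificationLoop() {
        std::unique_lock<std::mutex> lk(m);
        for (;;) {
            cv.wait(lk, [this] { return !events.empty() || !running; });
            if (!running && events.empty()) return;
            auto ev = std::move(events.front());
            events.pop();
            lk.unlock();
            ev();                 // callback runs outside the lock
            lk.lock();
        }
    }
    std::thread notifier;
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> events;
    bool running = true;
};
```

Because callbacks run on this separate thread, a slow or non-realtime-safe notification handler no longer blocks the audio path, which is the motivation given above.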
A connection manager client, for example, does not need to be activated to be able to receive notifications like graph reorder or port registration (Fig 4).

Figure 4: Client life cycle: jack_client_new moves the client from Closed to Opened (notification thread running); jack_activate moves it to Running (notification and RT threads running); jack_deactivate and jack_client_close move it back.

This two-thread model will possibly have some consequences for existing Jack applications: they may have to be adapted to allow a notification to be called while the audio thread is running. The library client (a JackLibClient object) redirects the external Jack API to the Jack server. It contains a JackClientChannel object that implements both the request and notification channels, local client-side resources, as well as access to objects shared with the server, like the graph manager or the server global state.

4.8.4 Drivers

Drivers are needed to activate the client graph. Graph state changes (new connection, port, client...) are done by the server RT thread. When several drivers need to be used, one of them, called the master, updates the graph; the others are considered as slaves. The JackDriver class implements common behaviour for drivers. Those that use a blocking audio interface (like the JackALSADriver driver) are subclasses of the JackThreadedDriver class. A special JackFreewheelDriver (a subclass of JackThreadedDriver) is used to activate clients without inputs and to implement the freewheel mode (see 4.8.5). The JackAudioDriver class implements common code for audio drivers, like the management of audio ports. Callback-based drivers (like the JackCoreAudioDriver driver, a subclass of JackAudioDriver) can directly trigger the Jack engine. When the graph is synchronized to the audio card, the audio driver is the master and the freewheel driver is a slave.
4.8.5 Freewheel mode

In freewheel mode, Jack no longer waits for any external event to begin the start of the next process cycle, thus allowing faster than real-time execution of the Jack graph. Freewheel mode is implemented by switching from the combined audio and freewheel driver synchronization mode to the freewheel driver only:

• the global connection state is saved
• all audio driver ports are disconnected, so there are no more dependencies on the audio driver
• the freewheel driver is synchronized with the end of graph execution: all clients are connected to the freewheel driver
• the freewheel driver becomes the master

Normal mode is restored with the connection state that was valid before freewheel mode was entered. Thus it is assumed that no graph state change can be done during freewheel mode.

4.9 XRun detection

Two kinds of XRun can be detected:

• XRuns reported by the driver
• XRuns detected by the server when a client has not been executed during the previous cycle: this typically corresponds to abnormal scheduler latencies

On MacOSX, the CoreAudio HAL system already contains an XRun detection mechanism: a kAudioDeviceProcessorOverload notification is triggered when the HAL detects an XRun. The notification will be redirected to all running clients. All clients that have not been executed during the previous cycle will be notified individually.

5 Performances

The multi-processor version has been tested on MacOSX. Preliminary benchmarks have been done on mono and dual 1.8 GHz G5 machines. Five jack-metro clients generating a simple bip are running.

Figure 5: Timing diagram for a two clients in sequence example.

For a server cycle, the signal date (when the client resume semaphore is activated), the awake date (when the client actually wakes up) and the finish date (when the client ends its processing and goes back to the suspended state), relative to the server cycle start date before reading and writing audio buffers, have been measured. The first slice in the graph also reflects the server behavior: the duration to read and write the audio buffers can be seen as the signal date curve offset on the Y-coordinate. After having signaled the first client, the server returns to the CoreAudio HAL (Hardware Abstraction Layer), which mixes the output buffers in the kernel driver (the offset between the first client's signal date and its awake date, Fig 5). The first client is then resumed. With all clients running at the same time, the measure is done during 5 seconds. The behavior of each client is then represented as a 5 second "slice" in the graph, and all slices have been concatenated on the X axis, allowing a global view of the system. Two benchmarks have been done. In the first one, clients are connected in sequence (client 1 is connected to client 2, client 2 to client 3 and so on), so computations are inevitably serialized. One can clearly see that the signal date of client 2 happens after the finish date of client 1, and the same behavior occurs for the other clients. Measures have been done on the mono (Fig 6) and dual machine (Fig 7).

Figure 6: Mono G5, clients connected in sequence. For a server cycle: signal (blue), awake (pink) and finish (yellow) dates. The end date is about 250 microseconds on average.

Figure 7: Dual G5. Since clients are connected in sequence, computations are also serialized, but client 1 can start earlier on the second processor. The end date is about 250 microseconds on average.

In the second benchmark, all clients are only connected to the input driver, so they can possibly be executed in parallel. The input driver client signals all clients at (almost) the same date (signaling a semaphore has a cost, which appears as the slope of the signal curve). Measures have been done on the mono (Fig 8) and dual (Fig 9) machine. When parallel clients are executed on the dual machine, one sees clearly that computations are done at the same time on the two processors and the end date is thus lowered.

Figure 8: Parallel clients on a mono G5. Although the graph can potentially be parallelized, computations are still serialized. The end date is about 250 microseconds on average.

Figure 9: Parallel clients on a dual G5. Client 1 can start earlier on the second processor before all clients have been signalled. Computations are done in parallel. The end date is about 200 microseconds on average.

Other benchmarks, with different parallel/sequence graphs to check their correct activation behavior, and a comparison with the same graphs run on the mono-processor machine, have been done. A worst case additional latency of 150 to 200 microseconds, added to the average finish date of the last client, has been measured.

6 Conclusion

With the development of multi-processor machines, adapted architectures have to be developed. The Jack model is particularly suited to this requirement: instead of using a "monolithic" general-purpose heavy application, users can build their setup from several smaller, goal-focused applications that collaborate, dynamically connecting them to meet their specific needs. By adopting a data flow model for client activation, it is possible to let the scheduler naturally distribute parallel Jack clients on the available processors, and this model works for the benefit of all kinds of client aggregation, like internal clients in the Jack server, or multiple Jack clients in an external process. A Linux version has to be completed with an adapted primitive for inter-process synchronization, as well as socket-based communication channels between the server and clients. The multi-processor version is a first step towards a completely distributed version, which will take advantage of multiple processors on one machine and could run on multiple machines in the future.

On The Design of Csound5

John ffitch
Department of Computer Science, University of Bath
Bath BA2 7AY, UK
jpff@cs.bath.ac.uk

Abstract

Csound has been in existence for many years, and is a direct descendant of the MusicV family. For a decade development of the system has continued, via some language changes, new operations and the necessary bug fixes. Two years ago a small group of us decided that rather than continue the incremental process, a code freeze and rethink was needed. In this paper we consider the design and aims for what has been called Csound5, and describe the processes and achievements of the implementation.

Keywords: Synthesis language, Csound.

1 Introduction and Background

The music synthesis language Csound (Boulanger, 2000) was produced by Barry Vercoe (Vercoe, 1993) and was available under the MIT Licence on a small number of platforms. The current author ported the code to the Windows environment in the early 1990s, whereupon a self-defining team of programmers, DSP experts and musicians emerged who have continued to maintain and extend the software package ever since. The original synthesis engine has remained largely unchanged, while a significant number of new operations (opcodes) and table creation routines have been added.
Despite various suggestions over the years, the two languages (the score language and the orchestra language) have remained unaltered until very recently, when user-defined opcodes, if..else and score looping constructs were introduced. The user base of Csound is large, and as we have maintained a free download policy we do not know how many copies there are in existence or how many are being used. What is clear from the Csound mailing lists is that the community is very varied, and while some of us think of ourselves as classical "art" composers, there are also live performers, techno and ambient composers, and many other classifications. The subject of this paper is Csound5, and in particular how its design has evolved from the current Csound. But there are two particular phenomena that have had a direct influence on the need for the re-think. The first was legal; Csound had been distributed under the MIT licence since 1986, which stipulates some freedoms and some restrictions. The freedoms are expressed as:

    Permission to use, copy, or modify these programs and their documentation for educational and research purposes only and without fee is hereby granted, provided that this copyright and permission notice appear on all copies and supporting documentation.

There was clarification that this should be taken to allow composers to use it without imposing any restriction on the resulting music. However the licence continues:

    For any other uses of this software, in original or modified form, including but not limited to distribution in whole or in part, specific prior permission from M.I.T. must be obtained.

When Csound was first made available this was considered a free licence, but with the growth of the Free Software movement, and the much wider availability of computers, the restriction stopped developers making use of Csound in larger software systems if they were intending to distribute the resulting system.
It also acted to prevent some kinds of publicity, as might be engendered by inclusion in books and magazines. Early attempts to resolve these problems failed, mainly through incomprehension. The publication of Phillips' book (Phillips, 2000) was a further call to address the problem. The change which influenced the whole approach to the development of Csound was the adoption by MIT of the Lesser GNU Public Licence. The de facto monopoly on allowing distribution was gone. The second phenomenon was the apparently remorseless improvement in technology. Csound was conceived as an off-line program, rendering a sound description over however long it took.

Figure 1: Architecture of original Csound

In the mid 1990s there was a project to recreate Csound for an embedded DSP processor (Vercoe, 1996) as a means of making a realtime synthesis system. This has been overtaken by the increase in machine speeds, and this speed has resulted in the Csound community calling for real-time performance, performer interfaces and MIDI controls. While some users had been wanting this for years, the availability of processors that were nearly capable of realtime rendering made all too clear the shortcomings of the 15-year-old design. At the end of 2002 we imposed a code freeze to allow the developer community to catch up with their modifications, and in particular to allow larger scale changes to be made on a fixed target. The previous version was still subject to bug fixes, but mainstream development ceased as we moved to Sourceforge and opened up the system even further. This paper gives one person's view of the system we are building, usually called Csound5, as we froze at version 4.23. As the system is now running largely satisfactorily, it is a good time to reflect on the aims of this major reconstruction, and on the extent to which our aspirations have been matched by our achievements.
2 Requirements

The developers had a number of (distributed) discussions of what was needed in any revision. The strongest requirement was the ability to embed Csound within other systems, be they performance systems or experimental research testbeds (ffitch and Padget, 2002). This has a number of software implications. The most significant one is perhaps the need for an agreed application programming interface (API) which would allow the controlling program access to some of the internal operations of Csound, and also separate the compilation processes from the execution. Also in the scope of the API is the possibility of adding new opcodes and generators which have access to the opcode mechanisms, memory allocation, navigation of internal structures and so on. Related to the requirement for a documented software interface is a call to make Csound re-entrant. This would allow multiple uses both serially and in parallel. The original code was written with no thought for such niceties, and there is a plethora of static variables throughout the system. Removing these would be a major step towards re-entrancy, and encapsulating the state within a single structure was the proposed solution, a structure that could also carry parts of the API. A possible lesser goal was to improve the internal reporting of errors. The original system set a global variable to indicate an initialisation error or a performance error, and this is checked at the top event loop. A simpler and more respectable process is for each initialiser and operator to return an error code; such a system can be extended to make further use of the error codes. Csound originally generated IRCAM format sound files, and AIFF. Later WAV was added, and some variants of AIFC. The code was all ad hoc, and as audio formats are continually being developed, it seemed an ideal opportunity to capitalise on the work of others, and to use an external library to provide audio filing.
In a similar way, the real-time audio output is specially written for each platform, and maintaining this reduces the time available for development and enhancement. Since Csound was written, cross-platform libraries to encapsulate real-time audio have been developed, and while using an external library for files it seemed natural to investigate the same for sound. Another aspect where there was platform-dependent code is in graphics. Csound has been able to display waveforms and spectral frames from the beginning, but there are a large number of optional files for DOS, Windows, Macintosh, SGI, SUN, X, and so forth. Using a general graphical system would move this complication into someone else's hands. It would also be useful if the graphical activity were made external, using the API, so a variety of graphical packages could be used in a fashion like embedding. This leads to the idea of providing a visible software bus to communicate between the Csound engine and the wider environment. The last component where an external library could assist is in MIDI. There have been complaints about the support for MIDI for a long time, and so in any reconstruction it was clearly something that should be addressed. The last major component that is in need of reconstruction is the orchestra parser. The original parser is an ad hoc parser very reminiscent of the late 1970s. It is hard to modify, and there are bugs lurking there that have evaded all attempts to fix them. If a new parser were to be written it could sidestep these problems and also allow things like two-argument functions, which have been requested in the past. Another possible outcome from a new parser might be the ability to experiment with alternative languages which maintain the underlying semantics. That might also incorporate the identification of a parser API.
In all this design we were mindful that Csound was, and must remain, a cross-platform synthesis system, and should behave the same on all implementations. It would also be convenient if the build system were the same or similar on all platforms, and installation should be simple: accessible to users at any computer-literate level. The other overriding requirement is that the system must not change externally, in the sense that all old music pieces must still render to the same audio. We can add new functionality, but visible things must not be removed.

3 Implementation

The previous section described the desired features of the new Csound. But they are wishes. In this section we consider the translation of these aspirations into actual code. The API is largely the work of Gogins, but there are a number of basic concepts in the solution. The implementation is by a global structure that is passed as an argument to most functions. Within the structure there are at least three groups of slots. The first group incorporates the main API functions: functions to control Csound, such as Perform, Compile, PerformKsmps, Cleanup and Reset. There are also functions in this structure to allow the controlling program to interrogate Csound, to determine the sampling rate, the current time position and so forth. These functions are also used by user-defined opcode libraries to link to the main engine. The last group are the state variables for the instantiation of Csound. The transition to a re-entrant system is largely one of moving static variables into this system-wide structure. Code simplicity is maintained by creating C macros, so access can be via the same text as previously. By adding this environment structure as an additional argument to every opcode, a great simplification of much of the code is achieved, especially for user-defined opcodes, as described in more detail below (section 4). Every opcode now returns an error code, or zero if it succeeded.
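As a hypothetical miniature of this design (all names are invented for illustration; this is not the actual Csound5 API), an engine context carrying API function slots and the state that used to live in static variables might look like:

```cpp
#include <cstring>

// Invented names throughout: the context is passed to every opcode,
// carries API slots and engine state, and opcodes return 0 on success
// or a nonzero error code, mirroring the scheme described above.
struct EngineContext;

typedef int (*OpcodeFunc)(EngineContext*, void* opcode_data);

struct EngineContext {
    // API slots: the host (or an opcode library) calls through these.
    double (*GetSampleRate)(EngineContext*);
    int    (*Message)(EngineContext*, const char* text);
    // Engine state formerly held in static variables.
    double sample_rate;
    char   last_message[128];
};

static double GetSr(EngineContext* ctx) { return ctx->sample_rate; }
static int PostMessage(EngineContext* ctx, const char* text) {
    std::strncpy(ctx->last_message, text, sizeof(ctx->last_message) - 1);
    ctx->last_message[sizeof(ctx->last_message) - 1] = '\0';
    return 0;
}

EngineContext MakeContext(double sr) {
    EngineContext ctx = {};
    ctx.GetSampleRate = GetSr;
    ctx.Message = PostMessage;
    ctx.sample_rate = sr;
    return ctx;
}

// An "opcode" that uses only the context: no global state, so two
// engine instances can run side by side (the re-entrancy goal).
int ExampleOpcode(EngineContext* ctx, void*) {
    if (ctx->sample_rate <= 0.0)
        return 1;               // error code, checked by the caller
    ctx->Message(ctx, "example opcode ran");
    return 0;                   // success
}
```

Because the opcode touches no statics, creating two contexts with different sampling rates gives two independent engines, which is exactly what the static-variable removal is meant to enable.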
This is a facility that has not been exploited yet, but it should be possible to move more of the error messages out of the main engine, and incidentally to facilitate internationalisation. The decision to use an external library for reading and writing sound files was an easy one; what was less easy was deciding which one to use. A number were considered, both the small and simple, and the all-embracing. The one we chose was Libsndfile (de Castro Lopo, 2005). The library is subject to the LGPL, but the deciding factor was the helpful attitude of the author. We have not regretted this decision, and it was moderately easy to replace the complex accumulation of AIFF, AIFC and WAV with the cleaner abstraction. The hardest part was that Libsndfile works in frames, while Csound had been written in terms of samples, or sometimes bytes. Of particular note was the removal of many lines of code that dealt with multiple formats (alaw, µlaw, signed and unsigned...). There seemed less choice with the real-time audio library; PortAudio (Bencina and Burk, 2005; Bencina and Burk, 2001) seemed obvious. As the library was in transition from version 18 to 19, we decided to look ahead and use v19. This has been a more problematic decision. For example, Csound is written with a blocking I/O model for audio, but at the time of writing this is not implemented on all platforms, and we are using a hybrid, implementing blocking via callbacks and threads on some systems, and simple blocking I/O on others. There have even been suggestions that we abandon this library, as it has not (yet) delivered the simplicity we seek. I think this can be overcome, and that the decision was correct, but there are clearly problems remaining in this area. The companion to PortAudio in the PortMusic project (Por, 2005) for MIDI is PortMIDI (Dannenberg, 2005). This was the obvious choice to support MIDI. The software models are fairly far apart, but it has been incorporated. What we do not seem to be able to find is a library for file-based MIDI. At present we are continuing to use the original Vercoe code, with what looks like duplication in places. This needs to be investigated further. There is a surfeit of graphical toolkits, at many levels of complexity. Based on previous experience, both outside Csound and inside with CsoundAV (Maldonado, 2005), FLTK was chosen. This is simple and light-weight. There are undoubtedly faster libraries, but graphics performance is not of the essence and the simplicity is worth the loss. A drawback is that this is a C++ library, whereas Csound is still at heart a C program. However, in the medium term I still intend that graphics should be moved out of the core of Csound, and that we should use the API and software bus. A contentious issue (at least within our developer community) has been a framework for common building. For most of the life of Csound there have been three major builds: for Linux, Windows and Macintosh. The Linux and Unix systems use a hand-crafted makefile; on Windows a Microsoft Visual C++ IDE was used, and on Macintosh the Codewarrior IDE. The redesign of Csound coincided with the acceptance of OSX on the Macintosh, and the availability of the MinGW package for Windows. This suggested that it should be possible to have a common tool-chain. Initial experience with the GNU tools (automake, autoconf etc.) was highly negative, with incompatibilities between platforms, and between different releases of Linux. We are now using SCons (SCo, 2005), a Python-based build system which we have found to work cleanly on our three major platforms, and to have sufficient flexibility. A first implementation of a software bus has been made, offering an arbitrary number of uni-directional audio and control buses. This facility remains to be exploited. The most problematic area of the implementation is the parser.
A Flex-based lexer and a Bison parser have been developed (the parser is not based on the earlier Bernardini parser, but was created with the support of Epigon Audiocare Pvt Ltd), and these implement most of the current Csound language. The problem of joining this front-end to the internal structures remains as a major task that has not yet been attempted. The design of the parser will allow user-defined opcodes, as is essential, as well as functions of one or more arguments. The main incompatibilities are in the enforcement of functions as functions, which is not in the original system. It does however mend known bugs in the lexing phase, and also makes better use of temporary variables.
Figure 2: Architecture of Csound5
4 User Defined Libraries One reason for the redesign was to allow third parties to provide new opcodes, either as open source or as compiled libraries that can be loaded into Csound. The user opcodes are compiled into DLLs or shared libraries, and the initialisation of Csound loads libraries as required. User libraries were introduced in Csound4, but in Csound5 they have been extensively developed. We provide C macros to allow the library to be written in much the same way as base opcodes, and proforma structures to link the opcodes into the system. We have also recently made it possible to have library-defined table generators as well. The macros wrap the usual internal Csound functions as called via the global environment structure. To prove that the mechanism works, many of the opcodes were remade as loadable code. The final decision as to which opcodes will be in the base and which loadable is not settled, but the overall structure of Csound is now changed from the architecture of figure 1 to that of figure 2. With this architecture we hope that clearer separation will make maintenance simpler. 5 Experience In many ways it is too early to make informed judgements on the new Csound.
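To make the loadable-opcode idea concrete, here is a miniature, self-contained sketch of the pattern: an entry structure with init and perform functions, and a registration call that a shared library's entry point would invoke when the host loads it. The real Csound5 macro and structure layer is considerably richer; every name below is invented for illustration.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Miniature sketch of the loadable-opcode pattern described in the text.
// The real Csound macros and proforma structures differ; names are invented.
struct OpcodeEntry {
    std::string name;
    std::function<void(std::vector<double>&)> init;     // run once at i-time
    std::function<void(std::vector<double>&)> perform;  // run every cycle
};

class OpcodeRegistry {
public:
    void add(const OpcodeEntry& e) { table_[e.name] = e; }
    const OpcodeEntry* find(const std::string& n) const {
        auto it = table_.find(n);
        return it == table_.end() ? nullptr : &it->second;
    }
private:
    std::map<std::string, OpcodeEntry> table_;
};

// What a plugin library's entry point might do when loaded by the host:
inline void registerGainOpcode(OpcodeRegistry& r) {
    OpcodeEntry gain;
    gain.name = "gain";
    gain.init = [](std::vector<double>&) { /* allocate state here */ };
    gain.perform = [](std::vector<double>& buf) {
        for (double& s : buf) s *= 0.5;  // fixed -6 dB for the sketch
    };
    r.add(gain);
}
```

The host's role then reduces to scanning a plugin directory, calling each library's entry point, and looking opcodes up by name at orchestra-compile time.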
On the other hand the system has been in a running state for many months, and on at least the Linux platform it is usable. Despite some rough edges it renders to both file and audio, and there are no appreciable performance issues. The use of Libsndfile has been a very positive experience on all platforms. PortAudio has caused some problems; with ALSA on Linux it is acceptable, but there have been latency problems on Windows and a number of ongoing problems on OSX, with lack of blocking I/O and an apparent need for multiple copying of audio data. There are enough indications from the PortAudio development community to say that this will work to our advantage eventually. It is still too soon to comment on the MIDI components. There are still questions that arise from graphics, and in particular the control of multiple threads. I believe that the solution is to use the software bus and outsource graphical activity to a controlling program. The graphics do work as well as they did in Csound4, but problems arise with the internal generation of GUIs for performance systems. The code freeze has had a number of minor positive effects; the code has been subjected to coherent scrutiny without pressure for releases. Multiple identical functions have been combined, and many small coding improvements have been made, for both stylistic and accuracy reasons. The current state is close to release. It might be possible to release before the incorporation of the parser, but this would be a disappointment to me. The other aspect that may delay release is documentation. The manual still needs updating. Basic information on the system already exists. The decision to use SCons has proved excellent. It is working on Windows and OSX as well as the usual development Linux platforms.
7 Acknowledgements My thanks go to everyone on the Csound Development mailing list for all their input, comments and reports, and also to all the Csound users who made it so clear what they wanted. Particular thanks are due to Michael Gogins for his insistence on a sane API, and to Richard Boulanger who has been a driving force behind me in the development and maintenance of Csound. References Ross Bencina and Phil Burk. 2001. PortAudio – an Open Source Cross Platform Audio API. In ICMC2001. ICMA, September. Ross Bencina and Phil Burk. 2005. PortAudio. Richard Boulanger, editor. 2000. The Csound Book: Tutorials in Software Synthesis and Sound Design. MIT Press, February. Roger B. Dannenberg. 2005. PortMIDI. Erik de Castro Lopo. 2005. Libsndfile. John ffitch and Julian Padget. 2002. Learning to play and perform on synthetic instruments. In Mats Nordahl, editor, Voices of Nature: Proceedings of ICMC 2002, pages 432–435, School of Music and Music Education, Göteborg University, September. ICMC. Gabriel Maldonado. 2005. CsoundAV. Dave Phillips. 2000. The Book of Linux Music and Sound. No Starch Press. ISBN: 1886411344. 2005. PortMusic. 2005. SCons. Barry Vercoe, 1993. Csound — A Manual for the Audio Processing System and Supporting Programs with Tutorials. Media Lab, M.I.T. Barry Vercoe. 1996. Extended Csound. In On the Edge, pages 141–142. ICMA, ICMA and HKUST. ISBN 0-9667917-4-2. 6 Conclusions In this paper I have described the thoughts behind the creation of the next incarnation of Csound. Evolution rather than revolution has been the key, but we are creating an embeddable system, a system more extensible than previously, and with clear component divisions, while preserving the operations and functionality that our users have learnt to expect.
By concentrating on an embeddable core I hope that the tendency to create variants will be discouraged, and from my point of view I will not have to worry about graphics, which interests me not at all! While the system has remained a cross-platform one, development has been mainly on Linux, and we have seen great benefits from all the tools there. When Csound5 reaches its distribution time, soon, the musical community will also see these benefits.

CLAM, an Object Oriented Framework for Audio and Music
Pau Arumí and Xavier Amatriain
Music Technology Group, Universitat Pompeu Fabra
08003 Barcelona, Spain
{parumi,xamat}@iua.upf.es

Abstract CLAM is a C++ framework that is being developed at the Music Technology Group of the Universitat Pompeu Fabra (Barcelona, Spain). The framework offers a complete development and research platform for the audio and music domain. Apart from offering an abstract model for audio systems, it also includes a repository of processing algorithms and data types as well as a number of tools such as audio or MIDI input/output. All these features can be exploited to build cross-platform applications or to build rapid prototypes to test signal processing algorithms. The framework is regularly compiled under GNU/Linux, Windows and Mac OSX using the GNU C++ compiler but also the Microsoft compiler. CLAM is Free Software and all its code and documentation can be obtained through its web page (www CLAM, 2004). Keywords Development framework, DSP, audio, music, object-oriented 2 What does CLAM have to offer? Although other audio-related environments exist —see (Amatriain, 2004) for an extensive study and comparison of most of them— there are some important features of our framework that make it somehow different: • All the code is object-oriented and written in C++ for efficiency. Though the choice of a specific programming language is no guarantee of any style at all, we have tried to follow solid design principles like design patterns (Gamma E.
and J., 1996) and C++ idioms (Alexandrescu, 2001), good development practices like test-driven development (Beck, 2000) and refactoring (Fowler et al., 1999), as well as constant peer reviewing. (Other audio-related environments exist; to cite only some of them: OpenSoundWorld, PD, Marsyas, Max, SndObj and SuperCollider.) • It is efficient, because the design decisions concerning the generic infrastructure have been taken to favour efficiency (i.e. inline code compilation, no virtual method calls in the core process tasks, avoidance of unnecessary copies of data objects, etc.). • It is comprehensive, since it not only includes classes for processing (i.e. analysis, synthesis, transformation) but also for audio and MIDI input/output, XML and SDIF serialization services, algorithms, data visualization and interaction, and multi-threading. • CLAM deals with a wide variety of extensible data types that range from low-level signals 1 Introduction CLAM stands for C++ Library for Audio and Music and is a full-fledged software framework for research and application development in the audio and music domain. It offers a conceptual model as well as tools for the analysis, synthesis and transformation of audio signals. The initial objective of the CLAM project was to offer a complete, flexible and platform-independent sound analysis/synthesis C++ platform to meet the needs of all the projects of the Music Technology Group (MTG, 2004) at the Universitat Pompeu Fabra in Barcelona. Those initial objectives have slightly changed since then, mainly because the library is no longer seen as an internal tool for the MTG but as a framework licensed under the GPL (Free Software Foundation, 2004). CLAM became public and Free in the course of the AGNULA IST European project (Consortium, 2004). Some of the resulting applications as well as the framework itself were included in the Demudi distribution. Although nowadays most of the development is done under GNU/Linux, the framework is cross-platform.
(such as audio or spectrum) to higher-level semantic structures (a musical phrase or an audio segment). • As stated before, it is cross-platform. • The project is licensed under the GPL terms and conditions. • The framework can be used either as a regular C++ library or as a prototyping tool. All the code is ANSI C++.
Figure 1: CLAM modules
In order to organise all these features CLAM is divided into different architectonic modules. Figure 1 shows the modules and submodules that exist in CLAM. The most important ones are those related to the processing kernel, with its repositories and infrastructure modules. Furthermore, a number of auxiliary tools are also included. In that sense, CLAM is both a black-box and a white-box framework (Roberts and Johnson, 1996). It is black-box because already built-in components included in the repositories can be connected with minimum programmer effort in order to build new applications. And it is white-box because the abstract classes that make up the infrastructure can be easily derived in order to extend the framework components with new processes or data classes. 2.1 The CLAM infrastructure The CLAM infrastructure is defined as the set of abstract classes that are responsible for the white-box functionality of the framework and define a related metamodel. This metamodel is very much related to the Object-Oriented paradigm and to Graphical Models of Computation, as it defines the object-oriented encapsulation of a mathematical graph that can be effectively used for modeling signal processing systems in general and audio systems in particular. The metamodel clearly distinguishes between two different kinds of objects: Processing objects and Processing Data objects. Out of the two, the first one is clearly more important, as the managing of Processing Data constructs can be almost transparent for the user. Therefore, we can view a CLAM system as a set of Processing objects connected in a graph called Network.
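The graph-of-Processings view, together with the fact that ports are typed (a point that reappears later with the NetworkEditor refusing incompatible connections), can be sketched in a few lines. Everything below — class names, port tags, the string-keyed API — is invented for illustration and is not the CLAM API.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Sketch of the metamodel: a Network is a graph of Processing nodes.
// Ports are reduced to a type tag so connect() can refuse mismatches,
// as the NetworkEditor does for typed ports. All names are invented.
struct PortSpec { std::string type; };  // e.g. "Audio", "Spectrum"

struct Node {
    std::map<std::string, PortSpec> outPorts;
    std::map<std::string, PortSpec> inPorts;
};

class Network {
public:
    void add(const std::string& name, Node n) { nodes_[name] = std::move(n); }

    // Records the edge only when both ports exist and carry the same type.
    bool connect(const std::string& src, const std::string& outPort,
                 const std::string& dst, const std::string& inPort) {
        const PortSpec* o = findPort(nodes_[src].outPorts, outPort);
        const PortSpec* i = findPort(nodes_[dst].inPorts, inPort);
        if (!o || !i || o->type != i->type) return false;
        edges_.push_back({src + "." + outPort, dst + "." + inPort});
        return true;
    }

    std::size_t edgeCount() const { return edges_.size(); }

private:
    static const PortSpec* findPort(const std::map<std::string, PortSpec>& m,
                                    const std::string& k) {
        auto it = m.find(k);
        return it == m.end() ? nullptr : &it->second;
    }
    std::map<std::string, Node> nodes_;
    std::vector<std::pair<std::string, std::string>> edges_;
};
```

An attempt to wire an Audio out-port into a Spectrum in-port simply returns false, which is the programmatic counterpart of the visual editor greying out an illegal connection.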
Processing objects are connected through intermediate channels. These channels are the only mechanism for communicating between Processing objects and with the outside world. Messages are enqueued (produced) and dequeued (consumed) in these channels, which act as FIFO queues. (The word metamodel is here understood as a "model of a family of related models"; see (Amatriain, 2004) for a thorough discussion on the use of metamodels and how frameworks generate them.) In CLAM we clearly differentiate two kinds of communication channels: ports and controls. Ports have a synchronous data-flow nature while controls have an asynchronous nature. By synchronous, we mean that messages get produced and consumed at a predictable —if not fixed— rate. And by asynchronous we mean that such a rate doesn't exist and the communication follows an event-driven schema. Figure 2 is a representation of a CLAM processing. If we imagine, for example, a processing that performs a frequency-filter transformation on an audio stream, it will have an in-port and an out-port for the incoming audio stream and the processed output stream. But apart from the incoming and outgoing data, some other entity —probably the user through a GUI slider— might want to change some parameters of the algorithm. This control data (also called events) will arrive, unlike the audio stream, sparsely or in bursts. In this case the processing would want to receive these control events through various (input) control channels: one for the gain amount, another for the frequency, etc. The streaming data flows through the ports when a processing is fired (by receiving a Do() message). Different processings can consume and produce at different velocities or, likewise, a different number of tokens. Connecting these processings is not a problem as long as the ports are of the same data type.
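The port/control distinction just described can be reduced to a minimal sketch: audio tokens arrive synchronously through a FIFO in-port, while a control value may be set at any moment, e.g. from a GUI slider, and Do() fires once per buffer. The mini-API below is invented and only mirrors the idea, not CLAM's actual classes.

```cpp
#include <cassert>
#include <deque>
#include <vector>

using Audio = std::vector<float>;

// Minimal sketch of the port/control distinction (invented mini-API):
// the in-port is a FIFO fed at a steady rate, while SetControl() may be
// called at any time, asynchronously, e.g. from a GUI slider.
class GainProcessing {
public:
    void EnqueueInput(Audio a) { inPort_.push_back(std::move(a)); }
    void SetControl(float gain) { gain_ = gain; }  // asynchronous event

    // Fire once: consume one token from the in-port, produce one token.
    bool Do() {
        if (inPort_.empty()) return false;  // not enough data to fire
        Audio buf = inPort_.front();
        inPort_.pop_front();
        for (float& s : buf) s *= gain_;
        outPort_.push_back(std::move(buf));
        return true;
    }

    std::deque<Audio>& OutPort() { return outPort_; }

private:
    std::deque<Audio> inPort_, outPort_;
    float gain_ = 1.0f;  // last control event received
};
```

Note that Do() refuses to fire without input — exactly the invariant that a scheduler must respect when it decides the firing order of a whole network.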
The connection is handled by a FlowControl entity that figures out how to schedule the firings in a way that avoids firing a processing with not enough data in its in-ports or not enough space in its out-ports. Configurations: why not just controls? Apart from the input controls, a processing receives another kind of parameter: the configurations. Configuration parameters, unlike control parameters, are dedicated to expensive or structural changes in the processing. For instance, a configuration parameter can decide the number of ports that a processing will have. Therefore, a main difference with controls is that they can only be set on a processing when it is not in the running state. Composites: static vs dynamic It is very convenient to encapsulate a group of processings that work together into a new composite processing, thus enhancing the abstraction of processes. CLAM has two kinds of composites: static or hard-coded, and dynamic or nested networks. In both cases inner ports and controls can be published to the parent processing. Choosing between static and dynamic composites is a trade-off between boosting efficiency and understandability. See the in-band pattern in (Manolescu, 1997). 2.2 The CLAM repositories The Processing Repository contains a large set of ready-to-use processing algorithms, and the Processing Data Repository contains all the classes corresponding to the objects being processed by these algorithms. The Processing Repository includes around 150 different Processing classes, classified in the following categories: Analysis, ArithmeticOperators, AudioFileIO, AudioIO, Controls, Generators, MIDIIO, Plugins, SDIFIO, Synthesis, and Transformations. Although the repository has a strong bias toward spectral-domain processing because of our group's background and interests, there are enough encapsulated algorithms and tools so as to cover a broad range of possible applications.
On the other hand, in the Processing Data Repository we offer the encapsulated versions of the most commonly used data types such as Audio, Spectrum, SpectralPeaks, Envelope or Segment. It is interesting to note that all of these classes have interesting features such as a homogeneous interface or built-in automatic XML persistence. 2.3 Tools XML Any CLAM Component can be stored to XML as long as StoreOn and LoadFrom methods are provided for that particular type (Garcia and Amatrian, 2001). Furthermore, Processing Data and Processing Configurations —which are in fact Components— make use of a macro-derived mechanism that provides automatic XML support without having to add a single line of code (Garcia and Amatrian, 2001).
Figure 2: CLAM processing detailed representation
GUI Graphical frameworks usually choose a given toolkit and add support to it, offering ways of connecting the framework under development to the widgets and other graphical tools included in the graphical framework. The CLAM team, however, aimed at offering toolkit-independent support. This is accomplished through the CLAM Visualization Module. This general Visualization infrastructure is completed by some already implemented presentations and widgets. These are offered both for the FLTK toolkit (FLTK, 2004) and the QT framework (Blanchette and Summerfield, 2004; Trolltech, 2004). An example of such utilities are convenient debugging tools called Plots. Plots offer ready-to-use independent widgets that include the presentation of the main Processing Data in the CLAM framework, such as audio, spectrum and spectral peaks. Platform Abstraction Under this category we include all those CLAM tools that encapsulate system-level functionalities and allow a CLAM user to access them transparently from the operating system or platform.
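The StoreOn/LoadFrom contract mentioned above can be illustrated with a hand-written miniature. In CLAM the methods are macro-generated; here they are written out by hand, the class and tag names are invented, and the reader is only good enough to round-trip its own output — it is not a real XML parser.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Sketch of the StoreOn/LoadFrom idea: a Component knows how to write
// itself as XML and read itself back. Invented names throughout.
struct FilterConfig {
    double frequency = 440.0;
    double gain = 1.0;

    void StoreOn(std::ostream& os) const {
        os << "<FilterConfig>"
           << "<Frequency>" << frequency << "</Frequency>"
           << "<Gain>" << gain << "</Gain>"
           << "</FilterConfig>";
    }

    // Naive reader: fine for round-tripping StoreOn's output only.
    void LoadFrom(const std::string& xml) {
        frequency = fieldValue(xml, "Frequency");
        gain = fieldValue(xml, "Gain");
    }

private:
    static double fieldValue(const std::string& xml, const std::string& tag) {
        const std::string open = "<" + tag + ">";
        const std::size_t b = xml.find(open) + open.size();
        const std::size_t e = xml.find("</" + tag + ">", b);
        return std::stod(xml.substr(b, e - b));
    }
};
```

The macro mechanism's appeal is precisely that a user never writes the boilerplate above: declaring the fields is enough to get both directions for free.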
Using these tools a number of services —such as audio input/output, MIDI input/output or SDIF file support— can be added to an application and then used on different operating systems without changing a single line of code. 3 Levels of automation The CLAM framework offers three different levels of automation to a user who wants to use its repositories, which can also be seen as different levels of use of the generic infrastructure: Library functions The user has explicit objects with processings and processing data and calls the processings' Do methods with data as parameters, much as with any function library. Processing Networks The user has explicit processing objects but streaming data is made implicit, through the use of ports. Nevertheless, the user is in charge of firing, i.e. calling a Do() method without parameters. Automatic Processing Networks This offers a higher-level interface: processing objects are hidden behind a layer called Network (see Figure 3).
Figure 3: a CLAM processing network
Thus, instantiation of processing objects is made by passing string identifiers to a factory. Static factories are a well-documented C++ idiom (Alexandrescu, 2001) that allows us to decouple the factory class from its registered classes in a very convenient way. They make the process of adding or removing processings in the repository as easy as issuing a single line of code in the processing class declaration. Apart from instantiation, the Network class offers an interface for connecting the component processings and, most important, it automatically controls the firing of processings (calling their Do method). Actually, the firing scheduling can follow different strategies, for example a push strategy that starts by firing the up-source processings, or a pull strategy where we start querying for data at the most down-stream processings, as well as being dynamic or static (fixed list of firings).
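A toy version of the push strategy shows the scheduling invariant in action: a stage is only fired when enough tokens have accumulated at its input. The stage and scheduler types below are invented stand-ins, far simpler than CLAM's FlowControl sub-classes.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy FlowControl: stages form a chain; each firing consumes `need`
// tokens and produces `yield_` tokens for the next stage. All invented.
struct Stage {
    std::string name;
    int need;    // tokens consumed per firing
    int yield_;  // tokens produced per firing
    int input = 0;
};

class FlowControl {
public:
    explicit FlowControl(std::vector<Stage> s) : stages_(std::move(s)) {}

    // Fire stage i once; refuse if it lacks input tokens.
    bool fire(std::size_t i) {
        Stage& s = stages_[i];
        if (s.input < s.need) return false;
        s.input -= s.need;
        if (i + 1 < stages_.size()) stages_[i + 1].input += s.yield_;
        return true;
    }

    // Push strategy: sweep from the source downstream, firing whatever
    // can fire, until no stage makes progress. Returns total firings.
    int runPush() {
        int firings = 0;
        bool progress = true;
        while (progress) {
            progress = false;
            for (std::size_t i = 0; i < stages_.size(); ++i)
                if (fire(i)) { ++firings; progress = true; }
        }
        return firings;
    }

    const Stage& stage(std::size_t i) const { return stages_[i]; }

private:
    std::vector<Stage> stages_;
};
```

With a source holding four pre-loaded tokens feeding a consumer that needs two per firing, the source fires four times and the consumer twice: six firings in all, and never a firing on an under-filled port.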
See (Hylands and others, 2003; www Ptolemy, 2004) for more details on scheduling dataflow systems. To accommodate all this variability CLAM offers different FlowControl sub-classes which are in charge of the firing strategy, and are pluggable into the Network processing container. (A remaining problem is that Jack is callback-based while the current CLAM I/O is blocking-based, so we should build an abstraction that would hide this peculiarity and would show those sources and sinks as regular ones.) LADSPA plugins: The LADSPA architecture is fully supported by CLAM. On one hand, CLAM can host LADSPA plugins. On the other hand, processing objects can be compiled as LADSPA plugins. LADSPA plugins transform buffers of audio while receiving control events. Therefore these plugins map very well to CLAM processings that have exclusively audio ports (and not other data-type ports) and controls. CLAM takes advantage of this fact in two ways. The LADSPA-loader gets a .so library file and a plugin name and automatically instantiates a processing with the corresponding audio ports and controls. On the other hand, we can create new LADSPA plugins by just compiling a C++ template class called LadspaProcessingWrapper, where the template argument is the wrapped processing class. DSSI plugins: Although CLAM still does not have support for DSSI plugins, the recent development of this architecture, allowing graphical user interfaces and audio instruments, is very appealing for CLAM. Thus additions in this direction are very likely. Since CLAM provides visual tools for rapidly prototyping applications with graphical interfaces, these applications are very suitable to be DSSI plugins. 4.1 What can CLAM be used for? The framework has been tested on —but has also been driven by— a number of applications, for instance: SMSTools, an SMS analysis/synthesis (Serra, 1996) graphical tool; Salto (Haas, 2001), a sax synthesizer; and Rappid (Robledo, 2002), a real-time processor used in live performances.
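The LadspaProcessingWrapper idea — any processing with exclusively audio ports and controls can be exposed through LADSPA's connect-the-ports-then-run style — can be sketched as follows. This does not use the real ladspa.h descriptor table; it only mirrors its buffer-pointer convention, and every name is invented.

```cpp
#include <cassert>
#include <cstddef>

// Mini sketch of a LadspaProcessingWrapper-style template: the host hands
// the plugin raw buffer pointers per port, then calls run() per block.
// Not the real LADSPA API; invented names throughout.
template <class P>
class LadspaStyleWrapper {
public:
    void connectInput(const float* in) { in_ = in; }
    void connectOutput(float* out) { out_ = out; }
    void connectControl(const float* c) { control_ = c; }

    void run(std::size_t nFrames) {
        if (control_) processing_.SetControl(*control_);
        processing_.Do(in_, out_, nFrames);
    }

private:
    P processing_;
    const float* in_ = nullptr;
    float* out_ = nullptr;
    const float* control_ = nullptr;
};

// A processing with exclusively audio ports and one control, the case
// the text says maps well onto LADSPA.
class SimpleGain {
public:
    void SetControl(float g) { gain_ = g; }
    void Do(const float* in, float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) out[i] = in[i] * gain_;
    }
private:
    float gain_ = 1.0f;
};
```

Because the wrapper is a template, exporting a new plugin really does reduce to one instantiation, `LadspaStyleWrapper<SimpleGain>`, which is the convenience the paper attributes to the real wrapper class.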
Other applications using CLAM developed at the research group include: audio feature-extraction tools, time-stretching plugins, voice-effect processors, etc. Apart from being a programmers' framework for developing applications, the latest developments in CLAM have brought important features that fall into the black-box and visual-builder categories. That lets a user concentrate on the research of algorithms, forgetting about application development. And, apart from research, it is also valuable for rapid prototyping of applications and audio plugins. 4 Integration with the GNU/Linux audio infrastructure CLAM input/output processings can deal with different kinds of device-abstraction architectures. On the GNU/Linux platform, CLAM can use audio and MIDI devices through the ALSA layer (www ALSA, 2004), and also through the PortAudio and PortMidi (www PortAudio, 2004; www PortMidi, 2004) layers. ALSA: ALSA low-latency drivers are very important to obtain real-time input/output processing. CLAM programs using a good soundcard in conjunction with ALSA drivers and a well-tuned GNU/Linux system —with the real-time patch— obtain back-to-back latencies around 10 ms. Audio file libraries: Adding audio file writing and reading capability to CLAM has been a very straightforward task since we could delegate it to other good GNU/Linux libraries: libsndfile for uncompressed audio formats, libvorbis for the ogg-vorbis format and finally libmad and libid3 for the mp3 format. Jack: Jack support is one of the big to-dos in CLAM. It's planned for the 1.0 release or before —so in a matter of months.
Figure 4: NetworkEditor, the CLAM visual builder
Figure 5: the QT GUI designer tool
In the patching panel, processings are viewed as little boxes with attached inlets and outlets representing their ports and controls. The application allows all the typical mouse operations: select, move, delete and, finally, connecting ports and controls.
5 Rapid Prototyping in CLAM 5.1 Visual Builder Another important pattern that CLAM uses is the visual builder, which arises from the observation that in a black-box framework, when connecting objects, the connection script is very similar from one application to another. Acting as the visual builder, CLAM has a graphical program called NetworkEditor that allows one to generate an application —or at least its processing engine— by graphically connecting objects. Another application called Prototyper acts as the glue between an application GUI designed with a graphical tool and the processing engine defined with the NetworkEditor. 5.2 An example Here we will show how we can set up a graphical stand-alone program in just a few minutes. The purpose of this program is to make some spectral transformations in real time on the audio taken from the audio card, apply the transformations and send the result back to the audio card. The graphical interface will consist of a simple pane with different animated representations of the result of the spectral analysis, and three sliders to change transformation parameters. First step: building the processing network (Figure 4). Patching with NetworkEditor is a very intuitive task. We can load the desired processings by dragging them from the left panel of the window onto the patching panel. Since CLAM ports are typed, not all out-ports are compatible with all in-ports. For example in Figure 4, the second processing in the chain is called SMSAnalysis and receives audio samples and produces: sinusoidal peaks, the fundamental, and several spectrums (one corresponding to the original audio and another corresponding to the residual resulting from subtracting the sinusoidal component). Connected to the SMSAnalysis out-ports we have placed three processings to perform transformations: one for controlling the gain of the sinusoidal component, another to control the gain of the residual component and the last one for shifting the pitch. The latter modifies both the sinusoidal and residual components. Then the signal chain gets into SMSSynthesis, which outputs the resynthesized audio ready to feed the AudioOut (which makes the audio card sound). Before starting the execution of the network, we can right-click on any processing view to open a dialog with its configuration. For instance, the SMSAnalysis configuration includes the window type and window size parameters, among many others. Another interesting feature of the NetworkEditor is that it allows loading visual plot widgets for examining the flowing data at any out-port, and also slider widgets to connect to
5.2 An example Here we will show how we can set up a graphical stand-alone program in just few minutes. The purpose of this program is to make some spectral transformations in real-time with the audio taken from the audio-card, apply the transformations and send the result back to the audiocard. The graphical interface will consist in a simple pane with different animated representations of the result of the spectral analysis, and three sliders to change transformation parameters. First step: building the processing network (Figure 4) Patching with NetworkEditor is a very intuitive task to do. See Figure 4. We can load the desired processings by dragging them from the left panel of the window. Once in the patching panel, processings are viewed as LAC2005 48 This program is in charge to load the network from its xml file —which contains also each processing configuration parameters— and create objects in charge of converting QT signals and slots with CLAM ports and controls. And done! we have created, in a matter of minutes, a prototype that runs fast C++ compiled code without compiling a single line. 6 Conclusions Figure 6: the running prototype the in-control inlets. Once the patch is finished we are ready to move on directly to the graphical user interface. Second step: designing the program GUI (Figure 5) The screen-shot in Figure 5 is taken while creating a front end for our processing network. The designer is a tool for creating graphical user interfaces that comes with the QT toolkit (Blanchette and Summerfield, 2004; Trolltech, 2004). Normal sliders can be connected to processing in-ports by just setting a suited name in the properties box of the widget. Basically this name specify three things in a row: that we want to connect to an in-control, the name that the processing object have in the network and the name of the specific in-control. 
On the other hand we provide the designer with a CLAM Plots plugin that offers a set of plotting widgets that can be connected to out-ports. In the example in Figure 5 the black boxes correspond to different plots for spectrum, audio and sinusoidal-peak data. Now we just have to connect the plot widgets by specifying —as we did for the sliders— the out-ports we want to inspect. We save the designer .ui file and we are ready to run the application. Third step: running the prototype (Figure 6). Finally we run the Prototyper program (Figure 6). It takes two arguments: on one hand, the XML file with the network specification, and on the other hand, the designer .ui file. 6 Conclusions CLAM has already been presented at other conferences like OOPSLA'02 (Amatriain et al., 2002b; Amatriain et al., 2002a) but since then a lot of progress has been made in different directions, especially in making the framework more black-box with visual builder tools. CLAM has proven useful in many applications and is becoming easier and easier to use, and so we expect new projects to begin using the framework. Anyway it has still not reached the stable 1.0 release, and some improvements need to be done. See the CLAM roadmap on the web (www CLAM, 2004) for details on things to be done. The most prominent are: library binaries and separate submodules, since at this moment modularity is mostly conceptual and at the source-code organization level; finishing the audio feature-extraction framework, which is work in progress; simplifying parts of the code, especially the parts related to processing data and configuration classes; and having working nested networks. 7 Acknowledgements The authors wish to recognize all the people who have contributed to the development of the CLAM framework. A non-exhaustive list should at least include Maarten de Boer, David Garcia, Miguel Ramírez, Xavi Rubio and Enrique Robledo.
Some of the work explained in this paper has been partially funded by the AGNULA European Project num. IST-2001-34879. References A. Alexandrescu. 2001. Modern C++ Design. Addison-Wesley, Pearson Education. X. Amatriain, P. Arumí and M. Ramírez. 2002a. CLAM, Yet Another Library for Audio and Music Processing? In Proceedings of the 2002 Conference on Object Oriented Programming, Systems and Applications (OOPSLA 2002) (Companion Material), Seattle, USA. ACM. X. Amatriain, M. de Boer, E. Robledo, and D. Garcia. 2002b. CLAM: An OO Framework for Developing Audio and Music Applications. In Proceedings of the 2002 Conference on Object Oriented Programming, Systems and Applications (OOPSLA 2002) (Companion Material), Seattle, USA. ACM. X. Amatriain. 2004. An Object-Oriented Metamodel for Digital Signal Processing. Universitat Pompeu Fabra. K. Beck. 2000. Test Driven Development by Example. Addison-Wesley. J. Blanchette and M. Summerfield. 2004. C++ GUI Programming with QT 3. Pearson Education. AGNULA Consortium. 2004. AGNULA (A GNU Linux Audio Distribution) homepage. FLTK. 2004. The Fast Light Toolkit (FLTK) homepage: fltk.org. M. Fowler, K. Beck, J. Brant, W. Opdyke, and D. Roberts. 1999. Refactoring: Improving the Design of Existing Code. Addison-Wesley. Free Software Foundation. 2004. GNU General Public License (GPL) terms and conditions. E. Gamma, R. Helm, R. Johnson and J. Vlissides. 1996. Design Patterns – Elements of Reusable Object-Oriented Software. Addison-Wesley. D. Garcia and X. Amatriain. 2001. XML as a means of control for audio processing, synthesis and analysis. In Proceedings of the MOSART Workshop on Current Research Directions in Computer Music, Barcelona, Spain. J. Haas. 2001. SALTO – A Spectral Domain Saxophone Synthesizer. In Proceedings of the MOSART Workshop on Current Research Directions in Computer Music, Barcelona, Spain. C. Hylands et al. 2003. Overview of the Ptolemy Project.
Technical report, Department of Electrical Engineering and Computer Science, University of California, Berkeley, California. D. A. Manolescu. 1997. A Dataflow Pattern Language. In Proceedings of the 4th Pattern Languages of Programming Conference. MTG. 2004. Homepage of the Music Technology Group (MTG) of the Universitat Pompeu Fabra. D. Roberts and R. Johnson. 1996. Evolve Frameworks into Domain-Specific Languages. In Proceedings of the 3rd International Conference on Pattern Languages for Programming, Monticello, IL, USA, September. E. Robledo. 2002. RAPPID: Robust Real Time Audio Processing with CLAM. In Proceedings of the 5th International Conference on Digital Audio Effects, Hamburg, Germany. X. Serra, 1996. Musical Signal Processing, chapter Musical Sound Modeling with Sinusoids plus Noise. Swets & Zeitlinger Publishers. Trolltech. 2004. Qt homepage by Trolltech. www ALSA. 2004. ALSA project home page. www CLAM. 2004. CLAM website. www PortAudio. 2004. PortAudio homepage. www PortMidi. 2004. Port Music homepage. www Ptolemy. 2004. Ptolemy project home page.

"Made in Linux" — The Next Step
Ivica Ico Bukvic
College-Conservatory of Music, University of Cincinnati
3346 Sherlock Ave. #21, Cincinnati, OH, U.S.A., 45220
ico@fuse.net

Abstract It's been over half a decade since Linux audio began to shape into a mature platform capable of impressing even the most genuine cynic. Although its progress remains unquestionable, the increasing bleed-over of GNU software onto other platforms, fragmentation of the audio software market, as well as wavering hardware support, pose a significant threat to its long-term prosperity. "Made in Linux" is a newly proposed initiative to create a non-profit foundation that will bridge the gap between the Linux audio community and the commercial audio market in order to ensure its long-term success.
2 The Momentum Keywords Foundation, Initiative, Exposure, Commercial, Incentives 1 Introduction While no single event is necessarily responsible for the sudden proliferation of the Linux audio software, it is undeniable that the maturing of the ALSA and JACK frameworks were indispensable catalysts in this process. Yet, what really made this turning point an impressive feat was the way in which the Linux audio community, amidst the seemingly “standardless anarchy,” was able to not only acknowledge their quality, but also wholeheartedly embrace them. Although some users are still standing in denial of the obvious advantages heralded by these important milestones, nowadays they are but a minority. Since, we've had a number of software tools harness the power of the new framework, complementing each other and slowly shaping Today, while we still enjoy the momentum generated by these important events, increasing worldwide economic problems, bleedover of the GNU software to closed dominant platforms, as well as the cascading sideeffects, such as the questionable proaudio hardware support, now stand as formidable stepping stones to the long term success of this platform. Even though the economic hardship would suggest greater likelihood of Linux adoption for the purpose of cutting costs, this model only works in the cases where Linux has already proven its worth, such as the server market. And while I do not mean to imply that Linux as a DAW has not proven its worth in my eyes (or should I say ears?), it is unquestionable that its value is still a great unknown among the common users who are, after all, the backbone of the consumer market and whose numbers are the most important incentive for the commercial vendors. 
In addition, Linux audio and multimedia users as well as potential newcomers still face some significant obstacles, such as the impressive but unfortunately unreliable support of the ubiquitous VST standard via the WINE layer, or the lack of a truly complete all-in-one DAW software. The aforementioned platform transparency of the GNU model is a blessing and a curse. While it may stimulate the user of a closed platform to delve further into the offering of the open-source community, contribute to it, and perhaps even switch to an open-source platform, such behavior is generally still the exception. Let us for a moment consider the potential contributors from the two dominant platforms: Microsoft and Apple. The dominant Microsoft platform is architecturally too different, so the contributions from the users of this platform will likely be mostly porting-related and as such will do little for the betterment of the software's core functionality (as a matter of fact, they may even cause an increase in the software maintenance overhead). Similarly, Apple users, as adoring followers of their own platform, usually yield similar contributions. Naturally, the exceptions to either trend are noteworthy, yet they remain exactly that: exceptions. While it is not my intention to trivialize such contributions nor to question the premise upon which the GNU doctrine has been built, it is quite obvious that the cross-platform model of attracting new users to the GNU/Linux platform currently does not work as expected and therefore should not be counted on as a recipe for long-term success. What is unfortunate in this climate of dubious cross-platform contributions and dwindling economic prospects through fragmentation of the audio software industry is the fact that it generates a cascading set of side-effects.
One such effect is the recently disclosed lack of interest from RME, a long-term supporter of Linux audio efforts, in providing ALSA developers with the specifications for its FireWire audio hardware due to IP concerns. Even though such a decision raises some interesting questions, delving any further into the issue is certainly outside the scope of this paper. Yet the fact remains that without proper support for the pro-audio interfaces of tomorrow, no matter how good the software, Linux audio has no future. The situation is certainly not as grim as it seems, as there are many other audio devices that are, and will continue to be, supported. Nonetheless, this may become one of the most important stumbling blocks in the years to come and should therefore be taken as a warning that may very well require a preemptive (re)action from the Linux audio community before it becomes too late.

3 Counter-Initiatives

Amidst these developments, our community has certainly not been dormant. There have been numerous initiatives, usually spawned by individuals or small groups of like-minded enthusiasts, in order to foster greater cooperation among the community members and attract attention from outsiders, such as the Linux Audio Consortium of libre software and companies, whose primary purpose is to help steer further developments as well as serve as a liaison between the commercial audio world and the Linux audio community. Another example is the “Made with Linux” CD, which is to include a compilation of works made with Linux and whose dissemination would be used as a form of publicity as well as for fundraising purposes. Other examples include numerous articles and publications in reputable magazines that expose the true power of Linux, as well as recently increased traffic on the Linux Audio User and Linux Audio Developer mailing lists concerning works made using Linux. These are by no means the only gems of such efforts. Nonetheless, their cumulative effect has to this day made but a small dent in exposing the true power of Linux as a DAW. To put this statement into perspective, even the tech-savvy and generally pro-Linux audience of the Slashdot technology news site is still largely ignorant of Linux's true multimedia content creation and production potential. All this points to the fact that the Linux audio community has reached the point of critical mass at which all involved need to take the next step in organizing efforts towards expanding the audio software market exposure, whether for reasons of personal gratification, financial gain, or simply for the benefit of the overall community. After all, if Linux audio supporters do not take these steps themselves, others certainly cannot be expected to follow, much less to take these steps in their stead.

4 “Made in Linux” to the Rescue

“Made in Linux” is an initiative carrying a deliberate syntactic pun in its title to separate itself from other similar programs and/or initiatives that may have already taken place and/or are associated with the more generalized form of evangelizing Linux. The title obviously plays on the labels commonly found on commercial products in order to identify their country of assembly, or less commonly the country of their origin. Such ubiquitous practice nowadays makes it nearly impossible to find a commercial product without such a label. Considering that the initiative I am proposing should be as vigilant and as all-encompassing as the aforementioned labels, I felt that the title certainly fit the bill.
The initiative calls for the formation of a non-profit foundation whose primary concern will be to oversee the proliferation of Linux as a DAW through widespread publicity of any marketable multimedia work that has utilized Linux, through monetary incentives and awards, and perhaps most importantly through the establishment of reliable communication channels between the commercial pro-audio market and the Linux audio developers, artists, and contributors. With such an agenda there are superficial but nonetheless pronounced similarities with the function and purpose behind the Linux Audio Consortium. However, as we will soon find out, there are some distinguishing differences as well. One of the most important long-term goals of the “Made in Linux” foundation will be to accumulate an operating budget through fundraising. Such a budget would enable the foundation to provide incentives towards the development of the most sought-after audio-related software features and/or tools, sponsoring competitions and awards for the recognition of the most important contributions to the community, media exposure, music-oriented incentives (i.e. composition competitions), and beyond. Depending upon the success of the initial deployment, the foundation's programs could easily expand to encompass other possibilities, such as yearly publications of works in the form of a CD compilation similar to the aforementioned “Made with Linux” collection, as well as other incentives that may help the foundation become more self-sufficient while at the same time allowing it to further expand its operations. While the proposal of creating an entity that would foster the proliferation of Linux as a DAW certainly sounds very enticing, let us not be deceived. The Linux audio market is currently a niche within a niche, and as such does not suggest that such a foundation would boast a formidable operating budget.
Nonetheless, it is my belief that in time it may grow into a strong liaison between the commercial world and our, whether we like it or not, still widely questioned “GNU underground.”

5 Streamlining Exposure

In order to expedite and streamline the aforementioned efforts, the “Made in Linux” program also calls for the establishment of a clearly distinguishable logo which is to be embedded into any audio software that has been conceived on a Linux platform and is voluntarily endorsing this particular initiative. The idea of encouraging all contributors, developers and artists alike, to seal their work with a clearly identifying logo is a powerful advertising mechanism that should not be taken lightly, especially considering that it suggests their devotion, if not indebtedness, to the GNU/Linux audio community, whether by developing software tools primarily for this platform or by using those tools in their artistic work. More importantly, if a software application were to be ported to another platform, the logo's required persistence would clearly and unquestionably reveal its origins, likely elevating curiosity among users oblivious to the Linux audio scene. Although we already have several logos for the various Linux audio related groups and/or communities, most of them are denominational and as such may not prove to be the best convergent solution. Therefore, I would like to propose the creation of a new logo that would preferably be used by anyone who utilizes Linux for multimedia purposes.
The following example, being a mere suggestion, a starting point if you like, is distributed under the GNU GPL license (a larger version is freely available and downloadable from the author's website; please see references for more info). It is of utmost importance, once the logo's appearance has been finalized and ratified, that it remains constant and its use consistent, in order to enable end-users to familiarize themselves with it and grasp the immensity and versatility of the Linux audio offering. Naturally, software applications that do not offer a graphical user interface could simply resort to incorporating the title of the incentive. With these simple yet important steps, Linux multimedia software would, to the benefit of the entire community, attain a greater amount of exposure and publicity. Contrary to the aforementioned foundation, this measure is neither hard to implement nor should it generate a significant impact from the developer's standpoint, yet it poses a powerful statement to the GNU/Linux cause. Just like Linux's “Tux” mascot, which now dominates the Internet, this too can become a persistent image associated with Linux audio software. However, in order to attain this seemingly simple goal, it is imperative that the Linux audio community extend widespread (if not unanimous) support and cooperation towards this initiative. Only then will this idea yield constructive results. Needless to say, this prerequisite will not only test the appeal of the initiative, but through its fruition will also assess the community's interest (or lack thereof) in instituting the aforementioned foundation. Once the foundation assumes its normal day-to-day operations, this measure would become an integral part of the foundation's agenda in its efforts to widen the exposure of the rich Linux audio software offering.
6 Linux Audio Consortium Concerns

By now it is certainly apparent that the aforementioned initiative bears resemblance to the Linux Audio Consortium agenda, which has been in existence for over a year now. After all, both initiatives share the same goal: proliferation of the Linux audio scene by offering means of communication as well as representation. However, there are some key differences between the two initiatives. In its current state the Linux Audio Consortium could conceivably sponsor, or at least serve as the host for, the “Made in Linux” initiative. Yet, in order for the consortium to be capable of furnishing the full spectrum of programs covered by this initiative, including the creation of the aforementioned foundation, there is an unquestionable need for a source of funding. Currently, the consortium does not have the facilities that would enable such a steady source of income. As such, should the additional programs proposed as part of this initiative be implemented under the patronage of the consortium, they would require reasonably substantial alterations to its bylaws and day-to-day operations. Naturally, it would be unfortunate if the two initiatives were to remain separate, as such a situation would introduce unnecessary fragmentation of an already humbly-sized community. Nonetheless, provided that the “Made in Linux” program creates an adequately-sized following, it may become necessary, at least in the initial stages, for the two programs to remain separate until the logistical and other concerns of their merging are ironed out.

7 Conclusion

It is undeniable that the Linux audio community is facing some tough decisions in the imminent future. These decisions will not only test the community's integrity, but will likely determine the very future of Linux as a software DAW.
Introducing new and improving existing software, while a quintessential factor for success in the commercial market, unfortunately may not help solve some of the fundamental issues, such as the dubious state of pro-audio hardware support. As such, this sole incentive will not ensure the long-term success of the Linux platform. Furthermore, whether one harbors interest in a joint effort towards promoting Linux also may not matter much in this case. After all, if Linux fails to attract professional audio hardware vendors, then no matter how good the software offering becomes, it will be useless without the proper hardware support. Therefore, it is the formation of the foundation (or the restructuring of the existing Linux Audio Consortium) and its relentless promotion of the Linux audio efforts that may very well be the community's only chance to push Linux into the mainstream audio market, where once again it will have a relatively secure future as a professional and competitive DAW solution.

8 Acknowledgements

My thanks, as always, go to my beloved family who, through all these years of hardship and many sleepless nights troubleshooting unstable Linux kernels and modules, X session setups, and audio xruns, stood by me. Big thanks also go to all the members of the Linux audio community, without whose generous efforts none of this would have been possible.

References

ALSA website (visited on January 10, 2005).
JACK website (visited on January 10, 2005).
Linux Audio Consortium website (visited on January 10, 2005).
Linuxaudiodevelopers (LAU) website (visited on January 10, 2005).
“Made in Linux” logo (GIMP format).
Slashdot website (visited on January 10, 2005).
Steinberg/VST website (visited on January 10, 2005).
WINE HQ website (visited on January 10, 2005).
Linux Audio Usability Issues
Introducing usability ideas based on Linux audio software

Christoph Eckert
Graf-Rhena-Straße 2
76137 Karlsruhe, Germany
mchristoph.eckert@t-online.de

February 2005

Abstract

The pool of audio software for Linux based operating systems offers very powerful tools which grant even average computer users the joy of making music in conjunction with free software. However, there are still some usability issues. Basic usability principles will be introduced, while examples will substantiate where Linux audio applications could benefit from usability ideas. One of the basic questions is how much a user needs to know before he is able to use a tool in order to perform a certain task. The amount of required knowledge can be reduced by clever application design which grants an average user an immediate start. Last but not least, clever software design reduces the amount of documentation needed; developers dislike writing documentation just as users dislike reading it.

Keywords

Usability & application development

1. Introduction

Free audio software has become very powerful during the last years. It is now rich in features and mature enough to be used by hobbyists as well as by professionals. In the real world, day by day we encounter some odds and ends which are able to make us unhappy or frustrated. May it be a screwdriver which does not fit a screw, a door which does not close properly or a knife which simply is not sharp enough. This is also valid for software usage. There often are lots of wee small things which sum up and prevent us from doing what we originally wanted to do. Some of these circumstances can be avoided by applying already existing usability rules to Linux audio software. Usability usually is associated with graphical user interface design, mainly on Mac OS or even Microsoft Windows operating systems. Surprisingly, most of the rules apply to any software on any operating system, including command line interfaces. A number of documents covering this area of programming are available from various projects and companies, e.g. the GNU project, the Gnome desktop environment, the wxWidgets project[1], Apple Computer or individuals.

2. Usability Terms

Besides the classical paper »Mac OS 8 Human Interface Guidelines« by Apple Computer, Inc.[2], the Gnome project has published an excellent paper covering usability ideas[3]. Joel Spolsky has written a book called »User Interface Design for Programmers«. It is based on Mac and Windows development, but most ideas are also valid for Linux. Reduced in size, it is also available online[4]. To get familiar with the topic, some of the commonly found usability terms will be introduced. Included examples will concretize the ideas. Most of the examples do not belong to only one usability principle but to several.

2.1 Knowing the Audience

In order to write software which enables a user to perform a certain task, it is necessary to know the audience. It makes a difference whether the software system will teach children to read notes or enable a musician to create music. Different groups use a different vocabulary, want to perform different tasks and may have different computer knowledge. Each user may have individual expectations on how the software will work. The expectations are derived from the task he wants to perform. The user has a model in mind, and the better the software model fits the user model, the more the user will benefit. This can be achieved by creating use cases, virtual users or asking users for feedback, so some applications include feedback agents. To fit the user's expectations is one of the most important and most difficult things. If the same question appears again and again in the user mailing lists, or even has been manifested in a list of frequently asked questions (known as FAQ), it is most likely that the software model does not fit the user model.
The target group of audio applications are musicians. They vary in computer skills, the music and instruments they play and finally the tasks they want to perform using the computer. Some want to produce electronic music, others want to do sequencing, hard disk recording or prepare scores for professional printing. An audio application of course uses terms found in the world of musicians. On the other hand, overly specialized terms confuse the user. A piano teacher, for example, who gets interested in software sequencers, gets an easier start if the software uses terms he knows. A tooltip that reads »Insert damper pedal« can be understood more easily than »Create Controller 64 event«. As soon as a user starts audio software, he might expect that the software is able to immediately output sound to the soundcard. As we are on Linux, however, the user first needs to know how to set up the MIDI and maybe the JACK connections. An application therefore could make this easily accessible, or at least remember the settings persistently until the next start and try to reconnect automatically. MusE for example is already doing so.

2.2 Metaphors

Metaphors are widely used to make working on computers more intuitive to the user, especially (but not only) in applications with graphical user interfaces (also known as GUI applications). Metaphors directly depend on the audience, because they often match things the user knows from the real world. Instead of saving a URL, a bookmark gets created while surfing the web. On the other hand, choosing bad metaphors is worse than using no metaphor at all. Well chosen metaphors reduce the need to understand the underlying system and therefore ensure that the user gets an immediate start. In the following example, a new user will most probably be confused by the port names. A metaphor makes it easier for the user to understand what is meant, maybe »Soundcard input 1 and 2« and »Soundcard output 1 through 6« instead of alsa_pcm:capture or alsa_pcm:playback. Patchage[6] is a nice attempt to use metaphors to visualize the data flow in a manner the audio user is familiar with, compared to connecting real world musical devices. It could become the right tool for intuitively making MIDI connections as well as JACK audio connections. If it contained a LADSPA and DSSI host, it could be a nice tool to easily control most aspects of a Linux audio environment. An example of a misleading metaphor is an LED emulation used as a switch. An LED in real life displays a status or activity. A user will not understand that it is a control to switch something on and off. In MusE, LEDs are used to enable and disable tracks for recording, and this often is not clearly understood by users. Replacing the LEDs by buttons which include an LED to clearly visualize the recording status of each track would make it easier for the user to understand.

2.3 Accessibility

All over the world, there are citizens with physical and cognitive limitations. There are blind people as well as people with hearing impairments or limited movement abilities. On a Linux system it is easily possible to design software which can be controlled using a command line interface. When designing GUI software it is still necessary to include the most important options as command line options. This way even blind users are able to use a GUI software synthesizer by starting it with a certain patch file, connecting it to the MIDI input and playing it using an external keyboard controller. Free software often gets used all over the world. It is desirable that software is prepared to easily get translated and localized for different regions. By including translation template files in the source tarball and the build process, users are able to contribute by doing translation work. Besides internationalization and localization (also known as i18n and l10n), accessibility includes paying attention to cultural and political aspects of software. Showing flags or parts of human beings can cause unexpected results, maybe as soon as an icon showing the hand of a western human is seen by users in Central Africa. Keyboard shortcuts enable access for users who have problems using a mouse. The Alt key is usually used to navigate menus, while often needed actions are directly accessible by Ctrl key combos. The tabulator key is used to cycle through the controls of dialogs etc.

2.4 Consistency

Consistency is divided into several aspects of an application. It includes consistency with (even earlier versions of) itself, with the window manager used, with the use of metaphors, and consistency with the user model. Most of the time, users do not use only one application to realize an idea. Instead, many applications are needed to perform a job, maybe to create a new song. The user tries to reapply knowledge about one application while using another. This also includes making an application consistent with other applications even if something has been designed wrong in those applications. If an application is a Gnome application, the user expects the file open dialog to behave the same as in other Gnome applications. Writing a new file request dialog in one of the applications will certainly confuse the user even if it were better than the generic Gnome one. Consistency does not only affect GUI programs but command line programs as well. Some Linux audio programs can be started with an option to use JACK for audio output using a command line parameter. As soon as applications behave differently, the user cannot transfer knowledge about one program to another one. There are also programs which do not read any given command line parameters. Invoking such a program with the help parameter will simply cause it to start instead of printing some useful help on the screen.
The user tries to reapply The GNU coding standards[5] recommend to let a program at least understand certain options. To ask a program for version and help information should really included in any program. Even if there is no help available, it is helpful to clearly state this and point the user to a README file, the configuration file or an URL where he can get more information. If a GUI application contains a help menu, it is useful if it at least contains one entry. It is better to have a single help page clearly stating that there is no help at all and pointing to the project homepage or to a README file than having no help files installed at all. Programs containing uppercase characters in the name of the binary file confuse the user. The binary file of KhdRecord for example really reads as »KhdRecord« making it difficult for the user to start it from a command line, even if he remembers the name correctly. Another example are program's where the binary file name does not fit the application's name exactly, as found on the virtual LAC2005 59 keyboard. The user has to guess the binary representation, and this causes typing errors: GUI programs can add a menu entry to the system menu so the user is able to start the program the same way as other programs and doesn't need to remember the application's name. Therefore, GUI applications must not depend on command line options to be passed and need to print important error messages not only on the command line, but also via the graphical user interface. For consistency reasons, such desktop integration shouldn't be left to the packagers. The user should always find an application on the same place in the menu system, regardless which distribution he is running. Users are used to click on documents to open these in the corresponding application. So it is useful if an audio application registers the used filetypes. MusE for example saves its files as *.med files. 
Due to the lack of filetype registration, KDE recognizes these files as text files. Clicking on »Jazzy.med« will open it in a text editor instead of starting MusE and loading it: This way the user doesn't have to guess the status of the system. The simpler a notification is, the better the user will be able to understand and to remember it after it is gone. Providing information which enables the user to solve a problem and to avoid it in the future reduces disappointment and frustration. For the same reason message texts should be designed in a manner that the software instead of the user is responsible for errors. When starting MusE without a JACK soundserver already running, the user gets prompted by an alert clearly explaining the problem: The user now has a good starting point and learns how to avoid this error in the future. A further example for good feedback is found in qarecord[7]. It clearly shows the input gain using a nice level meter the user may know from real recording equipment: Consistency also includes the stability of an application, security aspects and its data compatibility with itself as well as other applications. Even in the world of free software it is often not simple for the user to exchange data between different applications. 2.5 Feedback A computer is a tool to enter, process and output data. If a user has initiated a certain task, the system should keep him informed as long as there is work in progress or if something went wrong. On the other hand, the user gets no feedback if currently audio is recorded or not. If no recording is in progress it makes no sense to press the pause and stop buttons, so both need to appear inactive. As soon as recording is in progress it makes no sense to press the record button again, so the record button needs to be set inactive. If the recording is paused, the pause or record button needs to be disabled. 
Qarecord needs to be started in conjunction with some command line parameters defining the hardware which should be used to capture audio. If qarecord included a graphical soundcard and an input source selector as well as a graphical gain control, it would perfectly fulfill further usability ideas like direct access and user control. The user was able to do all settings directly in qarecord instead of using different applications for a simple job. LAC2005 60 To offer feedback, it is necessary to add different checks to a software system. A user starting an audio application might expect it will immediately be able to output sound. On Linux based systems, there are some circumstances which prevent an application from outputting audio. This for example happens as soon as one application or soundserver blocks the audio device while another one also needs to access it. In this case, there are applications which simply seem to hang or even do not appear on the screen instead of giving feedback to the user. Actually, the application simply waits until the audio device gets freed. In the following example, xmms gets started to play an audio file. After that, Alsa Modular Synth (AMS) gets started, also directly using the hardware device, while xmms still is blocking the device. Unfortunately, AMS does not give any feedback, neither on the command line nor in the graphical user interface: without any user interaction as soon as a disconnect occures. 2.6 Straightforwardness To perform a task, the user has to keep several things in mind, may it be a melody, a drum pattern etc. There are external influences like a ringing telephone or colleagues needing some attention. So, the user only spends a reduced amount of his attention to the computer. This is why applications need to behave as straightforward as possible. One of the goals is that the computer remembers as many things as possible, including commands, settings etc. 
On Linux, the advantages of the command line interface are well known, but it is also known how badly commands and options are remembered if these are not used or needed often. This is why typed commands are replaced by menus and buttons in end user software whenever possible. Users dislike reading manuals or dialog texts and developers dislike writing documentation. Keeping the interface and dialogs straightforward will reduce the need of documentation. It is also true that it is difficult to keep documentation in sync with the software releases. It is also important to create a tidy user interface by aligning objects properly, grouping them together and giving equal objects the same size. Disabling elements which do not fit the current situation helps the user to find the right tools at the right time. Ordering UI elements to fit the user's workflow reduces to write or to read documentation. Think about analogue synthesizers: Mostly, the elements are ordered to fit the signal flow from the oscillators through the mixer and filter sections to the outputs. According to Dave Phillips, who asked the developers to write documentation, an additional thing is to reduce the amount of documentation needed. The best documentation is the one which does not need to be written and the best dialogs are the ones which do not need to appear. Both will reduce the time of project members spent on writing documentation and designing dialogs. The user will benefit as well as the project members. After invoking an application, the user likes to start immediately to concentrate on the task to perform. He might expect a reasonable preconfigured application he immediately can start to use. When starting, Hydrogen opens a default template file. Further demo files are included, easily accessible via the file menu: The user will only notice that AMS does not start. As soon xmms will be quit, even if it were hours later, AMS will suddenly appear on the screen. 
Some feedback on the command line and a graphical alert message would help the user to understand, solve and avoid this situation in the future. Concerning JACK clients, it is an integral part of the philosophy that applications can be disconnected by jackd. As soon as this happens, some applications simply freeze and need to be killed and restarted. MusE shows an informational dialog instead of simply freezing. Of course it would be even better if jackified applications did not need to be restarted at all and could try to automatically reconnect to JACK without any user interaction as soon as a disconnect occurs.

Hydrogen optionally remembers the last opened file to restore it during the next start, and automatically reconnects to the last used JACK ports. Alsa Modular Synth includes a lot of example files, too, but it does not load a template file when starting, and there is no easy access to the example files. The user needs to know that there are example files, and where these files are stored.

Another example is the fact that there are different audio subsystems on a Linux box. An audio application which wants to work straightforwardly includes various output plugins. One application which already has a surprising number of output plugins (including a JACK plugin) is xmms. A further idea to make it even more straightforward would be for xmms to run some checks whenever the preconfigured device is not available during startup and to choose an alternative plugin automatically. If it did these checks in an intelligent order (maybe JACK, esound, aRts, DMIX, ALSA, OSS), xmms would most probably match the user's expectations. If no audio system was found, an error message would be printed. Introducing such behaviour could improve other Linux audio applications, too. Some example code in the ALSA Wiki pages[8] could be a good starting point for future audio developers.

Average human beings tend to start counting at 1. In the software world, counting sometimes starts at zero for technical reasons.
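The fix for the off-by-one presentation problem is a pair of trivial conversions at the presentation layer, so that 0-based values only ever exist internally. A minimal sketch in Python, using the ranges mentioned in the text:

```python
def to_display(value):
    """Map an internal 0-based number (card index, MIDI channel,
    program change) to the 1-based number shown to the user."""
    return value + 1

def to_internal(value):
    """Map a 1-based number entered by the user back to the 0-based value."""
    return value - 1

# MIDI channels 0..15 appear as 1..16, program changes 0..127 as 1..128.
assert [to_display(c) for c in range(16)] == list(range(1, 17))
assert to_internal(to_display(7)) == 7
```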
Software makes it possible to hide this in the user interface. This includes the numbering of sound cards (1, 2, 3 instead of 0, 1, 2) as well as program changes (1, 2, 3 ... 128 instead of 0, 1, 2 ... 127) or MIDI channels (1, 2, 3 ... 16 instead of 0, 1, 2 ... 15). Imagine a musician who has configured his keyboards to send notes on channels 1 through 4: if the software starts the numbering at zero, the user has to struggle with the setup, while a more human readable numbering matches the user's expectations much better.

2.7 User Control

Every human wants to control his environment. Using software, this includes the ability to cancel long lasting operations as well as the possibility to configure the system, like preferred working directories and the preferred audio subsystem. An application that behaves the way the user expects makes him feel that he is controlling the environment. It also needs to balance contrary things, allowing the user to configure the system while preventing him from doing risky things.

As soon as a musician gets no audio in a real studio, he starts searching for the point where the audio stream is broken. In a software based system, he also needs controls to do so. Such controls are not only needed for audio but also for MIDI connections. An LED in a MIDI sequencer indicating incoming data, or a level meter in an audio recording program indicating audio input, is very helpful for debugging a setup. On some hardware synthesizers corresponding controls exist: a Waldorf Microwave, for example, uses an LED to indicate that it is receiving MIDI data. Rosegarden displays a small bar clearly showing that there is incoming MIDI data on the corresponding track. Similarly, xmms offers feedback as soon as the configured output destination is not available during startup.

A further thing worth some attention is not to urge the user to make decisions. SysExxer is a small application to send and receive MIDI system exclusive data (also known as sysex). After SysExxer has received sysex data, the user must decide that the transmission is finished by clicking the OK button. After that, the user has to decide whether the data is saved as one single file or as multiple split files. SysExxer then asks for a location to store the file. SysExxer thus urges the user to make three consecutive decisions, so the user does not feel like he is controlling the situation; instead, the application controls the user.

Some Qt based audio programs forget the last used document path. The user opens a file somewhere on the disk, but as soon as he wants to open another one, the application has forgotten the path he used just before and jumps back to the home directory. Most probably the user expected to be put back into the last used directory. Applications can offer preferences for file open and file save paths or, even better, persistently remember the last used paths for the user.

The Linux operating system does not depend on file extensions. On the other hand, file extensions are widely used to assign a filetype to applications. If an application has chosen to use a certain file extension, it should ensure that the extension gets appended to saved files even if the user forgot to specify it. A file open dialog can filter the directory contents so the user only gets prompted with files matching the application. On the other hand, a file open dialog should also make it possible to switch this filter off, so the user is able to open a file even if it is missing the correct extension.
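Both halves of the extension problem, appending a missing suffix on save and optionally filtering on open, reduce to a few lines. A sketch (the `.h2song` extension is used as an example here; the logic is generic):

```python
def ensure_extension(filename, ext):
    """Append the application's file extension if the user omitted it."""
    if filename.lower().endswith(ext.lower()):
        return filename
    return filename + ext

def list_openable(entries, ext, filter_on=True):
    """Return the directory entries to show in a file-open dialog.
    With the filter switched off, every file is offered."""
    if not filter_on:
        return list(entries)
    return [e for e in entries if e.lower().endswith(ext.lower())]

assert ensure_extension("mysong", ".h2song") == "mysong.h2song"
assert ensure_extension("mysong.h2song", ".h2song") == "mysong.h2song"

files = ["a.h2song", "b.wav", "c"]
assert list_openable(files, ".h2song") == ["a.h2song"]
assert list_openable(files, ".h2song", filter_on=False) == files
```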
If an application does not ensure that the extension gets appended when saving a file and does not enable the user to switch the filter off when trying to reopen it, the file will not appear in the file open dialog even though it resides in the chosen directory, so the user is unable to open the desired file.

2.8 Forgiveness

A software system which allows the user to easily undo more than one of the last actions performed will encourage him to explore it. The classical undo and redo commands are even found on hardware synthesizers like the Access Virus[9]. Some applications additionally allow taking snapshots, so the user can easily restore a previous state of the system, as for example on audio mixing consoles. As soon as the user wants to perform an irreversible action, an alert should ask for confirmation.

On Linux, configuration options are usually stored in configuration files. A user normally does not care about configuration files, because settings are made using GUI elements. On the other hand, sometimes things go wrong: maybe a crashing application trashes its own configuration file, or an update has problems reading the old one. Therefore, command line options that report the currently used configuration and data files are sometimes very helpful. The same information can be displayed on a tab in the about box. This enables the user to inspect and modify the files if needed.

User control includes enabling the user to configure the base audio environment. Maybe he wants to rename soundcards and ports according to more realistic device names like »Onboard Soundcard Mic In« or similar. If a MIDI device which offers multiple MIDI ports is connected to the computer, it is useful to be able to name the ports according to the instruments connected to them. If there is more than one soundcard connected, the user may like to reorder them in a certain manner, perhaps to make the new USB 5.1 card the default device instead of the onboard soundchip. Furthermore, he may want to grant more than one application access to a soundcard, in order to listen to ogg files while a software telephone is still able to ring the bell.

In the last few years, alsaconf has done a really great job, but meanwhile it seems a little outdated: it is unable to configure more than one soundcard or to read an existing configuration, it cannot configure the DMIX plugin or put the cards in a certain user defined order, and it still configures ISA but no USB cards. A replacement seems to be a very desirable thing. Such a script, used by configuration front ends, would bring many of the existing but mostly unused features to more users. A further script could be created to help configure JACK, maybe by running some tests based on the hardware used and automatically creating a base JACK configuration file.

2.9 Direct Access

In graphical user interfaces, tasks are performed by manipulating graphically represented objects. It is a usability design goal to make accessing the needed commands as easy as possible. Options an application supports should be made dynamically accessible, even during runtime. AMS, for example, supports a monophonic as well as a polyphonic mode via a command line option. Unfortunately, it is not possible to enter the desired polyphony persistently via the preferences or to change it during runtime. As soon as the user forgets to pass the desired polyphony to AMS during startup, AMS needs to be quit and restarted with the correct polyphony applied. Then the user has to reopen the patch file and redo all MIDI and audio connections. There is therefore no direct access to the polyphony settings.

When a user changes the preference settings of an application, the program should change its behaviour immediately. It is also important to write the preferences file immediately after changes have been made. Otherwise, a crash will make the application forget the settings the user has made.
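Writing the preferences file immediately after every change, rather than only at a clean shutdown, costs only a few lines and makes the settings crash-proof. A sketch using a simple key=value format (real applications may prefer XML or their toolkit's own settings mechanism):

```python
import os
import tempfile

class Preferences:
    """Preferences that are flushed to disk on every change."""

    def __init__(self, path):
        self.path = path
        self.values = {}

    def set(self, key, value):
        self.values[key] = value
        self._save()  # persist immediately: a later crash loses nothing

    def _save(self):
        # Write to a temporary file first, then rename atomically, so a
        # crash during the write cannot trash the existing file either.
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            for k, v in sorted(self.values.items()):
                f.write(f"{k}={v}\n")
        os.replace(tmp, self.path)

path = os.path.join(tempfile.mkdtemp(), "prefs")
p = Preferences(path)
p.set("last_dir", "/home/user/songs")
assert open(path).read() == "last_dir=/home/user/songs\n"
```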
In gAlan, for example, preference changes make it necessary to restart the program.

Tooltips and »What's this« help are very useful things in GUI applications. Both grant the user direct access to documentation on demand and reduce the need for separate documentation. On the other hand, if an application offers tooltips and »What's this« help, both need to contain reasonable contents. Otherwise, the user may come to believe that similar controls will be just as useless in other applications.

A further way to grant users direct access is to give them the needed options and controls where and when needed, even if the controls are external applications. A Linux audio user often needs to access the soundcard's mixer as well as a MIDI or JACK connection tool. A sequencer application offering a button to launch these tools from within its user interface therefore grants the user direct access. There are different tools like kaconnect, qjackconnect or qjackctl, so preference settings should make it possible for the user to enter the programs he likes to use. Rosegarden, for example, allows the user to enter the preferred audio editor; in the same way, external MIDI and JACK connection tools as well as a mixer could be made available. These fields need to be well preconfigured during the first application startup. One possibility is to set the application's defaults to reasonable values at development time; unfortunately, this is impossible, because the configuration of the user's system is not known at that moment. Therefore, Rosegarden could try to check for the existence of some well known tools like qjackctl, qjackconnect, kaconnect, kmix, kamix or qamix and enter them into the matching fields in case these are empty at application startup. An application could even offer the possibility to make MIDI and audio connections directly from its interface with UI elements of its own. As long as the connections reappear in aconnect and jack_connect, this fulfills both the usability requirements of direct access and consistency.

A further issue concerning direct access is setting a soundcard's mixing controls. On a notebook with an AC '97 chip, alsamixer usually shows all controls the chip has to offer, regardless of whether all abilities of the chip are accessible from the outside of the computer or not. Of course, ALSA cannot know which pins of the chip are connected to the chassis of the computer. The snd-usb module handles a huge and growing number of hotpluggable devices, which simply makes it impossible for ALSA to offer a reasonable interface for each particular card. Currently, alsamixer for a Terratec Aureon 5.1 USB shows controls whose purpose is hard to guess: PCM1 does not seem to do anything useful, while auto gain is not a switch (as the user might expect) but looks like a fader instead; it cannot be faded, of course, but only muted or unmuted to switch it on and off. There are two controls called 'Speaker': one of them controls the card's direct throughput, while the second one controls the main volume. The average user has no chance to understand this mixer until he plays around with the controls and checks what results are caused by different settings.

Qamix tries to solve this by building the UI from an XML file matching the card's name. The user configures a card by adjusting this file. This is a nice attempt, but qamix still gets too little information from ALSA to make a really good UI.

A further usability goal is consistency. All the points mentioned above require that ALSA remains the central and master instance for all user applications. One idea to realize this is to make ALSA able to give applications more and better information about the hardware. As soon as a soundcard gets configured, a matching, human created configuration file could be selected manually or via a card ID, so the controls of a particular card could be given more human readable names.
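Such a per-card file could be as simple as a list of raw-name to readable-name mappings. A sketch (the entries below are invented for illustration and do not represent an actual ALSA file format):

```python
# Hypothetical per-card naming file, as plain "raw=readable" text lines.
NAMING_FILE = """\
PCM1=Digital Output Level
Speaker=Direct Throughput
Speaker 1=Main Volume
"""

def load_names(text):
    """Parse a per-card naming file into a raw-name -> readable-name map."""
    names = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("#"):
            raw, readable = line.split("=", 1)
            names[raw.strip()] = readable.strip()
    return names

def label(names, raw):
    """Fall back to the raw driver name when no mapping exists."""
    return names.get(raw, raw)

names = load_names(NAMING_FILE)
assert label(names, "Speaker 1") == "Main Volume"
assert label(names, "Mic") == "Mic"  # unmapped controls keep their raw name
```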
If the configuration file were in Unicode format, it could even contain names in different languages. Introducing a similar system for ALSA device and MIDI port names also seems desirable. This requires some configuration files to be written; keeping the file format as simple Unicode text or XML files stored at a certain location would make it easy for users to contribute.

3. The Developer's Point of View

After having discussed a lot of usability issues from a user's point of view, it is also necessary to look at them from a developer's point of view. There are several reasons why it is difficult to introduce usability into Linux audio software projects. First of all, the question is whether there is any reason why a developer of free software should respect usability ideas at all. If someone has written an application for his own needs and has given it away as free software, applying usability ideas has a minor priority. Nobody demands that an application someone gave away for free has to include anything.

In the beginning of a software project, it makes little sense to think about the user interface; at first, the program needs to include some useful functionality. As soon as a project grows, and maybe several developers are working on it, the user base might increase. In this case, the ambition of the project members to make the application better and more user friendly usually increases as well. Even then, it is often difficult to achieve a more user friendly design for technical reasons. Applications are often designed to be started with command line parameters that make them behave as desired. If the design of the software is made to take options at application startup, it is likely that these cannot easily be changed during runtime. The same is valid for preference settings read from a configuration file during an application's startup: the application often does not expect the values to be changed dynamically.
Adjusting this behaviour at a later point in the development process is sometimes difficult and can cause consecutive bugs. Keeping this in mind, it is necessary to design the software system to keep the different values as variable as possible from the very beginning of the development process, because changing this afterwards can be difficult.

A further point is the fact that developers tend to be technically interested. A developer might have been working on new features for a long time, having done research on natural phenomena and having created some well designed algorithms to simulate the results of that research in software. The developer is now proud of having created a great piece of software, but simply lacks the time to make this valuable work more accessible to the users. He has put a lot of time and knowledge into the software and then unfortunately limits the user base by saving time on the user interface. Sometimes this is also caused by a lack of interest in user interface design. There have always been people who are more interested in backends and others who are more interested in frontends; developers who are interested in both cannot be found often. It is important to accept this as a given fact. Not every project has the luck that one of its members is a usability expert, so it is useful if all project members keep usability aspects in mind.

Of course, bugs need to be fixed, and even the existing user base tends to beg more for new features than to ask for improvements of the existing user interface. This is understandable, because the existing user base already knows how to use the program. A software system will never be finished, so developers who want to introduce usability improvements need to decide clearly whether they want to spend some time on it at a certain point.
Working on usability improvements needs time, and time is a strongly limited resource, especially in free software projects which are mainly based on the work of volunteers. Everything depends on the project members and whether they are interested in spending some time on usability issues or not.

4. Summary

Linux is surely ready for the audio desktop. Most applications a musician needs are available, at least with their base functionality. Furthermore, there is free audio software which does things musicians have never heard about on other operating systems. Keeping some usability ideas in mind will therefore make more semi-skilled users enjoy free software. More users mean more acceptance of free software, and this will cause more people to participate.

Usability is not limited to graphical user interfaces; it also affects command line interfaces. Linux is known to be a properly designed operating system, and respecting some basic usability rules helps to continue this tradition. On Linux there are many choices of which environment to work in: there are different window managers and desktop environments as well as different toolkits for graphical user interface design. Working and developing on a heterogeneous system like Linux does not mean that it is impossible or useless to apply usability rules; it simply means that it is a special challenge to address and work on usability ideas.

Paying attention to usability issues is not only important to make the user's life easier or to improve his impression of the software. It is also important for broadening the user base in order to get more bug reports and project members. And finally, it helps spreading Linux by surprising musicians of average computer skills with what cool applications are available as free software.

5. License

The copyright of this document is held by Christoph Eckert 2005. It has been published under the terms and conditions of the GNU free documentation license (GFDL).

6. Resources

[1] The wxWidgets toolkit wxGuide:
[2] The Apple human interface principles: pdf/HIGuidelines.pdf
[3] The Gnome user interface guidelines:
[4] User Interface Design by Joel Spolsky: /navLinks/fog0000000247.html
[5] The GNU coding standards:
[6] Patchage:
[7] Various software of M. Nagorni:
[8] The ALSA wiki pages:
[9] The Access Virus synthesizers:

Updates of the WONDER software interface for using Wave Field Synthesis

Marije A.J. BAALMAN
Communication Sciences, Technische Universität Berlin
Sekr. EN8, Einsteinufer 17
Berlin, Germany
baalman@kgw.tu-berlin.de

Abstract

WONDER is a software interface for using Wave Field Synthesis for audio spatialisation. Its intended user group is composers of electronic music and sound artists. The program provides a graphical interface as well as the possibility to control it externally using the OpenSoundControl protocol. The paper describes improvements and updates to the program since last year.

Keywords

Wave Field Synthesis, spatialisation

1 Introduction

Wave Field Synthesis (WFS) is a technique for sound spatialisation that overcomes the main shortcoming of other spatialisation techniques: it provides a large listening area and has no “sweet spot”. In recent years, WFS has become usable with commercially available hardware. This paper describes the further development of the WONDER program, which was designed to make the WFS technique available and usable for composers of electronic music and sound artists.

Figure 1. The Huygens' principle (left) and the Wave Field Synthesis principle (right).

2 Short overview of WFS and WONDER

WFS is based on the principle of Huygens, which states that a wave front can be considered as an infinite number of point sources that each emit waves; their wavefronts add up to the next wavefronts.
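The geometry behind this principle can be sketched numerically: for a virtual point source, each loudspeaker of the array is driven with a delay proportional to its distance from the source and with an attenuated gain. This is a simplified illustration only; real WFS driver functions, including the ones WONDER precomputes, contain further correction factors not shown here:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second

def driver_params(source, speakers):
    """Per-speaker (delay in seconds, linear gain) for a virtual point source.

    Each speaker is delayed by its distance to the source; the gain uses a
    1/sqrt(r) roll-off, a common 2.5D approximation."""
    params = []
    for sx, sy in speakers:
        r = math.hypot(source[0] - sx, source[1] - sy)
        params.append((r / SPEED_OF_SOUND, 1.0 / math.sqrt(max(r, 1e-6))))
    return params

# Eight speakers, 10 cm apart on the x axis; source 1 m behind the array.
speakers = [(0.1 * i, 0.0) for i in range(8)]
params = driver_params((0.3, -1.0), speakers)
delays = [d for d, _ in params]
assert delays.index(min(delays)) == 3  # the nearest speaker radiates first
```

Summed over all speakers, these delayed and attenuated copies approximate the wavefront the virtual source would have produced.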
With Wave Field Synthesis, using a discrete, linear array of loudspeakers, one can synthesize correct wavefronts in the horizontal plane (Berkhout et al. 1993). See figure 1 for an illustration of the technique.

WONDER is an open source software program to control a WFS system. The program provides a graphical user interface and allows the user to think in terms of positions and movements, while the program takes care of the necessary calculations for the speaker driver functions. The program is built up of three parts: a grid definition tool, a composition tool and a play interface. Additional tools allow the user to manipulate grids or scores, or to view filter data. For the actual realtime convolution, WONDER relies on the program BruteFIR (Torger). This program is capable of doing the amount of filter convolutions that is necessary to use WFS in realtime. On the other hand, BruteFIR has the drawback that all the filters need to be calculated beforehand and stored in RAM during runtime. This limits the flexibility for realtime use; it is for this reason that a grid of points needs to be defined in advance, so that the filters for these points can be calculated beforehand and stored in memory. For a more complete description of the WONDER software and the WFS technique in general, I refer back to previous papers (Baalman, 2003/2004).

3 Updates to WONDER

Since its initial release in July 2004, WONDER has gone through some changes and updates. New tools have been implemented and the functionality of some of the old tools has been improved. Also, some parts were reimplemented to allow for easier extension in the future, resulting in a cleaner and more modular design.
The graphical overviews, which display the spatial layout of the speakers and the source positions, have been made consistent with each other and now all provide the same functionality, such as choosing the view point (a stage view or an audience view), whether or not to show the room, setting the limits and displaying a background image. The room definition has been improved: it is now possible to define an absorption factor for each wall, instead of one for all walls.

3.1 Grid tools

The grid definition tool allows a user to define a grid consisting of various segments. Each segment can have a different spacing of points within its specified area and different characteristics, such as inclusion of high frequency damping and room parameters for reflections.

The menu “Tools” now provides two extra tools to manipulate grids: it is possible to merge several grids into one and to transform a grid. The merge tool puts the points of several grids into one grid, allowing the user to use more than one grid in an environment. The transform tool applies several spatial transformations to the segments of a grid and then calculates the points resulting from these transformed segments. Such a transformation can be useful if a piece will be performed on another WFS system which has a different geometry of the speaker setup, so that the coordinates of the grid need to be transformed.

The filter view (fig. 2) is a way to verify a grid graphically. It shows the coefficients of all the filters of a source position in a plot. The plot shows the loudspeakers on the horizontal axis and time in the vertical direction (zero at the top). The intensity indicates the absolute value of the volume. Above the graph, the input parameters of the grid point are given, as well as some parameters of the grid as a whole.

Figure 2. Filter view tool of WONDER. At the top, information about the grid and the current grid point is given. In the plot itself, the horizontal direction represents the speakers and the vertical direction time. The intensity (contrast can be changed with the slider at the bottom right) is an indication of the strength of the pulse. The view clearly shows the reflection pattern of the impulse response.

This filter overview can be useful for verification of calculations or for educational purposes. Another way to verify a grid is to use the grid test mode during playback, with which you can step through the points of a grid and listen to each point separately.

3.2 Composition tools

With the composition tool the user can define a spatial composition of the sound source movements. For each source, the movement can be divided into sections in time, and the spatial parameters can be given segment-wise per section. In the composition definition dialog it is also possible to transform the current composition: the user can define a set of spatial transformations that are to be applied to the sources and sections specified. After the transformations have been applied, the user can continue working on the composition. This tool is especially handy when one source has to make a movement similar to that of another: just make a copy of the one source and apply a transformation to the copy.

After a score has been created (either with the composition tool or by recording a score), there are four tools available in the “Tools” menu to manipulate scores. “Clean score” is handy for a recorded score: it cleans up any double time information and rounds the times to the minimum time step (default: 25 ms). “Merge scores” enables you to merge different scores into one; it allows a remapping of sources per included score. “Transform score” allows you to apply transformations to different sources in a score. The last tool is the “timeline view”, which shows the x and y components in a timeline (fig. 4) and shows the selected time section as a path in an x-y view.
It is also possible to manipulate time points in this view. While playing, it shows a playhead to indicate the current time. The concept of this timeline view is inspired by the program “Meloncillo” (Rutz). The timeline view allows for a different way of working on a composition: the user can directly manipulate the score.

Figure 4. The timeline view of WONDER. The selected time section is shown as a path in the positional overview. The user can edit the score by moving the breakpoints or adding new ones.

3.3 Play interface

WONDER provides a graphical interface to move sound sources or to view their movement. The movement of sound sources can also be controlled externally with the OSC protocol (Wright, 2003). Score control is possible using the transport controls (fig. 3), which have been reimplemented.

Figure 3. Transport control of WONDER. The slider enables the user to jump to a new time. The time in green (left) indicates the running time, the time in yellow (right) the total duration of the score. The caption of the window indicates the name of the score file.

The sound input and output can be chosen to be OSS, ALSA, JACK or a sound file; in the first three cases, the input has to be equal to the output. The program BruteFIR (Torger) is used as the audio engine. The communication between BruteFIR and WONDER can be verified using a log view which displays the output of BruteFIR. Due to some changes in the command line interface of BruteFIR, the communication between WONDER and BruteFIR could be improved, and there is no longer a problem with writing and reading permissions for the local socket. The log view records (in red) the messages that are shown in the status bar of WONDER and shows the feedback from the play engine BruteFIR in black. It is possible to save the log to a file.
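The external OSC control mentioned above uses ordinary OSC packets; encoding one by hand shows how little is involved. A sketch (the address pattern below is invented for illustration and is not WONDER's actual OSC namespace):

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message carrying float arguments."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)

    msg = pad(address.encode())
    msg += pad(("," + "f" * len(floats)).encode())  # type tag string
    for f in floats:
        msg += struct.pack(">f", f)  # OSC floats are 32-bit big-endian
    return msg

packet = osc_message("/source/position", 1.5, -0.5)
assert len(packet) % 4 == 0
assert packet.startswith(b"/source/position\x00")
```

Sent over UDP to the port WONDER listens on, a packet like this is all an external sequencer or patch needs to move a source.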
3.4 Help functions

The manual of the program is accessible via the “Contents” item in the “Help” menu. This displays a simple HTML browser with which you can browse through the documentation; alternately, you can use your own favourite browser to view the manual. Additionally, in most views of WONDER, hovering above a button gives a short explanation of that button's functionality.

4 External programs to control WONDER

There are two examples available which show how to control WONDER from another program: a SuperCollider class and an example MAX/MSP patch. Another program that can be used for external control is the Java program SpaceJockey (Kneppers), which was designed to enable the use of (customizable) movement patterns and to provide MIDI control over the movements.

5 Conclusion

WONDER has improved in the last year and has become more stable and more usable. Several changes have been made to facilitate further development of the program. Current work is to create an interface to EASE for more complex room simulation and to integrate the use of SuperCollider as an optional engine for the spatialisation. It is expected that SuperCollider can provide more flexibility, such as removing the necessity to load all filter files into RAM and the possibility to calculate filter coefficients during runtime. Current research is focused on how to implement a more complex sound source definition.

Download

A download is available at:

References

Baalman, M.A.J., 2003, Application of Wave Field Synthesis in the composition of electronic music, International Computer Music Conference 2003, Singapore
Baalman, M.A.J., 2004, Application of Wave Field Synthesis in electronic music and sound installations, Linux Audio Conference 2004, ZKM, Karlsruhe, Germany
Baalman, M.A.J. & Plewe, D., 2004, WONDER, a software interface for the application of Wave Field Synthesis in electronic music and interactive sound installations, International Computer Music Conference 2004, Miami, Fl., USA
Berkhout, A.J., Vries, D. de & Vogel, P., 1993, Acoustic Control by Wave Field Synthesis, Journal of the Acoustical Society of America, 93(5):2764-2778
Kneppers, M., & Graaff, B. van der, SpaceJockey
Rutz, H.H., Meloncillo
Torger, A., BruteFIR
Wright, M., Freed, A. & Momeni, A., 2003, “OpenSoundControl: State of the Art 2003”, 2003 International Conference on New Interfaces for Musical Expression, McGill University, Montreal, Canada, 22-24 May 2003, Proceedings, pp. 153-160

Development of a Composer's Sketchbook

Georg BÖNN
School of Electronics, University of Glamorgan
Pontypridd CF37 1DL, Wales, UK
gboenn@glam.ac.uk

Abstract

The goal of this paper is to present the development of an open source and cross-platform application written in C++ which serves as a sketchbook for composers. It describes how to make use of music analysis and object-oriented programming in order to model personal composition techniques. The first aim was to model main parts of my composition techniques for future projects in computer and instrumental music. Then I wanted to investigate them and to develop them towards their full potential.

Keywords

Music Analysis, Algorithmic Composition, Fractals, Notation, Object-Oriented Design

0 Introduction

Computer-Assisted Composition (CAC) plays an important role in computer music production. Many composers nowadays use CAC applications that are able to support and enhance their work. Applications for CAC help composers to manage the manifold of musical ideas, symbolic representations and musical structures that build the basis of their creative work. Maybe it is time to put a flashlight on CAC again and to look at examples where user-friendly interfaces meet efficient computation and interesting musical concepts.

What are the advantages of CAC? A composer is able to use the PC as an intelligent assistant that represents a new kind of sketchbook and a well of ideas. Not only is it possible to quickly input and save musical data; the work in CAC results in a great freedom of choice between possible solutions of a given compositional problem. Maybe a network of different algorithms works together; then only little changes of initial parameters can trigger surprising twists within the final result. Moreover, you can take those outcomes and scrutinise their value by direct and immediate comparison. Thus, a composer can take his time and always look for alternatives. The manifold of structures and ideas becomes manageable through CAC software. Also, one should take into account the thrill of surprise that well designed CAC algorithms might fuel into one's work-flow.

This paper wants to discuss one specific compositional problem, namely the invention and modelling of melodic structures. The proposed software that resolves that particular problem shall represent the germ of an open source (under the GNU Public License) and free CAC application that is planned to grow as the number of compositional algorithms increases in the near future. It is intended that this software is easy to learn and to use and that it benefits from proven concepts of object-oriented design and programming. Therefore, the author hopes that the ideas presented will find the interest and maybe also the support of the Linux Audio Developer's community.

Of course, personal preferences and musical experience influence the work of a composer. Those, together with intuition, imagination, phantasy and the joy to play, including the joy of intellectual challenges, are, in my view, the driving forces behind musical creativity. Would it be possible to define a set of algorithms that would match that experience? Is finding an algorithm not also often a creative process?

1 Music Analysis

The fundamental idea in Composer's Sketchbook is the use of a user-defined database that contains the building blocks for organic free-tonal musical structures. Those building blocks are three-note cells which stem from my personal examination of the major and minor third within their tonal contexts (harmonics, tuning systems, Schenker's Ursatz), as well as within atonal contexts (Schoenberg's 6 kleine Klavierstücke op. 19, Ives' 4th violin sonata). Although in my system tonality is hardly recognisable anymore because of constant modulations, I cannot deny the historic dimension of the third, nor can I neglect or avoid its tonal connotations. I suppose it helped me to create an equilibrium of free tonality where short zones of tonality balance out a constant stream of modulations.

Analysis of scores by Arnold Schoenberg and Charles Ives led me to a very special matrix of three-note cells. Further analysis revealed that it is possible to use a simple logic to concatenate those cells. Algorithms using this logic were created which are able to render an unprecedented variety of results based on a small table of basic musical material. In order to generate the table, a small set of generative rules is applied to a set of primordial note cells. The general rule followed in this procedure is to start with very simple material, then to apply combinatorics unfolding its inner complexity, and finally to generate a manifold of musical variations by using a combination of fractal and stochastic algorithms. The first four cells forming the matrix are variations of the simple melodic figure e-d-c. The chromatic variations are e-flat-d-c, e-flat-d-flat-c and e-d-flat-c (see Figure 2, A-D).
One of the reasons why I chose these figures is that they represent primordial, archetypical melodic material that can be found everywhere in music of all periods and ages. An exceptional example of the use of the Ursatz melody is the slow movement of Charles Ives' 4th violin sonata. This movement quotes the chorale "Yes, Jesus loves me" and uses its three-note ending e-d-c ("he is strong") as a motive in manifold variations throughout the piece. Analysis reveals that Ives uses the major and minor versions of the motive and all its possible permutations (3-2-1, 2-1-3, 1-3-2).

2 The Matrix
The matrix I developed uses exactly the same techniques: four different modes (A-D) of the 'Ur'-melody are submitted to all three possible permutations (see Figure 2). This process yields 12 primordial cells. Each one of those 12 cells is then submitted to a transformation called "partial inversion".

Figure 2: The matrix of 36 cells

Partial inversion means: invert the first interval but keep the second one untouched, or keep the first interval of the cell original and invert the second one. This process of partial inversion can be found extensively used by Arnold Schönberg in his period of free atonality. As a perfect example, have a look at his Klavierstück op. 19/1, bar 6 with up-beat in bar 5. The reason for using partial inversion is that it produces a greater variety of musical entities than the plain methods of inversion and retrograde. At the same time, partial inversion guarantees consistency and coherence within the manifold of musical entities. Applying partial inversion to the 12 cells yields another 24 variants of the original e-d-c melody. The final matrix of 36 cells contains an extraordinarily complex network of relationships. Almost every cell in the matrix has a partner which is the very same cell in a different form; the cell may be inverted, retrograde, permutated or inverse retrograde.
Yet, each one of these is closely related to the original 'Ur'-melody. Going back to Schönberg's op. 19/1, it came as a surprise that every single note in this piece is part of one or more groups of three notes which can be identified as a member of the matrix of 36 cells. The discovery of an element of my own language in the matrix gave me yet another good reason to further investigate its application. I call it "chromatic undermining": breaking into the blank space left behind by a local melodic movement. Cells which belong to that element are the partial inversions of Ursatz B and C, left column.
The matrix of cells forms the basis of all further work with algorithms. It can be regarded as a database of selected primordial music cells. Thus it is clear that a user interface should make it possible to replace that database by any other set of entities, where it is totally in the hands of the user to decide which types of entities should belong to the database. The user interface should also allow the user to add to, delete from or edit the database at runtime and make the database persistent.

3 Algorithms
The algorithms that are used to concatenate cells from the database are as follows:

3.1 Fractal Chaining
Beginning with two notes, the fractal algorithm seeks to insert a new note between them. It detects the original interval between the two notes, then it queries the database whether there exists a cell which contains that interval between its first and last note. If so, the middle note of the cell is inserted between the two notes of our beginning (see Figure 3 a). We now have a sequence of three notes whose interval structure is equal to the cell that was chosen from the database. The pitch-classes of our first two notes are preserved. This algorithm can be applied recursively. Starting now with three notes from our result, the algorithm steps through the intervals and replaces each one of them by two other intervals that were obtained from another cell within our matrix database (see Figure 3 b).
Of course, there are multiple solutions for the insertion of a note according to a given interval, because there are several cells within the matrix that would pass the check. The choice made by the program depends on a first-come-first-served basis. In order to make sure that each cell has an equal chance of getting selected, the order of the matrix is always scrambled the moment before the query is sent to the database. The fractal algorithm needs a minimum input of two notes, but it can work on lists of notes of any length. Fractal chains maintain the tendency of the original structure at the beginning. Therefore, this interval structure is the background structure of the final result after the algorithm has been called a few times.

Figure 3: Sequence of Fractal Chaining

3.2 Chain overlapping 2 notes
The chaining algorithms differ from the fractal algorithm because their goal is to add notes to the tail of a given list rather than inserting them between every two notes. The chaining method overlapping two notes looks at the last interval of a melody. It then searches the matrix for a matching three-note cell whose first interval is equal to that last interval of the melody. If a match is found, then the second interval of the cell is added to the melody, and so a new note is added; the melody expands (see Figure 4).

Figure 4: Scheme of Algorithm 3.2

The reason for using overlapping cells was a result of music analysis: one note may belong to more than just one cell, thus providing a high degree of coherence of the inner musical structure.

3.3 Chain overlapping 1 note
This method simply takes a random cell from the database and adds it to the end of the melody by taking the melody's last note as the first note of the cell, thus adding two new notes to the melody (see Figure 5).

Figure 5: Scheme of Algorithm 3.3

3.4 Chain combining Algorithms 3.2 & 3.3
The algorithm takes the last interval of the melody, finds, if possible, a match with a cell from the database, adds the last note of that cell to the melody, and then takes this note as the starting point of a randomly chosen cell from the database (see Figure 6).

Figure 6: Scheme of Algorithm 3.4

3.5 Option: check history
This option can be switched on or off and triggers a statistical measurement of the pitch-class content of the melody. It ensures that only those cells are chosen from the database whose interval structure generates new pitch-classes when added to the melody. This option leads to the generation of totally chromatic fields and ensures that every pitch-class has an equal chance to occur within the melody. The history check can be modified in order to meet other requirements; e.g. the algorithm can be told to choose only cells adding pitch-classes to the melody which belong to a specific key or mode previously defined by the user.

Figure 7: Scheme for Algorithm 3.5

3.6 Chain with no overlap
The algorithm chooses the first interval of a randomly chosen cell and adds the interval to the melody. Then it takes another random cell and adds its content to the melody.
It is also possible to use the chaining algorithms in order to build up chord structures. When using a database containing major and minor thirds (or other intervals), it is easy to imagine that the algorithm "Chain overlapping 1 note" with the history check switched on returns all possible chords containing 3 up to 12 notes using nothing but major and minor thirds (or other intervals). We are also not restricted to the well-tempered 12-note scale. The software is open to more notes per octave, and it is also possible to use micro-intervals.

3.7 Chain using different cell contours
The term contour means that each cell has a certain profile of up- or down-movements. These contours are automatically measured within the database. The chaining using contour comparisons takes the last three-note group of the melody and measures its contour. Then it looks up the database in order to find a cell which has one of the four possible contours: up/down, down/up, up/up or down/down. The user decides which one of those criteria has to be met, and the matching cell is added to the melody.

3.8 Using and extending the program
By using different chaining methods, possibly in a sequence of processes, the user has an enormous freedom of choice and variety in modelling a melodic line and the structure of a sequence. It will be possible to use within the program an editor for context-free grammars in order to let the user define a set of rules by which musical entities are chosen from the database. It is easy to imagine that the matrix of 36 cells can be extended and that more variations of the e-d-c melody could join the matrix. For instance, I added 10 more modes of the 'Ur'-melody, resulting in a database of 126 cells.

4 Program Development
The software development uses the C++ language and started as an application for Windows using MFC. In order to port the program to Linux and Mac OS X, the development switched to the wxWidgets framework (formerly known as wxWindows; the name has been changed to wxWidgets).

4.1 User-Interface
The user can input notes via the computer keyboard, which automatically mimics the keys of a piano keyboard: 'Q'=c, '2'=c#, 'W'=d, '3'=d#, 'E'=e, 'R'=f, '5'=f# and so on; the lower octave starts with 'Y'=c, 'S'=c#, etc. By using the up- and down-arrow keys, the user can switch to the next octave up or down. The notes are played instantly via the MIDI interface and printed directly on screen in western music notation. After a melody has been entered, it is possible to transform the input in a number of ways. For example, rubber-band-style selections can be made, and notes can be transposed or deleted just like with a text editor. Series of notes can be inverted and reversed, and the aforementioned fractal and chaining algorithms can be applied. These commands work on both selections and the entire score. A "Refresh" command exists for the fractal algorithms. It automatically reverts the transformation and simply gives "another shot", stepping through all possible alternatives. Of course, selections or the entire score can be played and stopped.
In future versions, all commands described here shall run in their own worker thread, not in the user-interface thread, so the user interface will not be blocked by any calculations. The program supports export of standard MIDI files. It is planned for future versions to support MIDI, XML and CSV file import in order to give more choices for the replacement of the cell database.

4.2 Classes
Following the above description of the algorithms and the user interface, it is evident that we had to implement the classic Model-View-Controller paradigm. Frameworks like MFC or wxWidgets are built around this paradigm, so it makes sense to use them. Since wxWidgets supports all popular platforms, it was chosen as the framework for my development. The representation of music data as objects makes it necessary to design fundamental data structures and base classes. The software mainly uses two different container classes. The first is an Array class, which is used to organise and save the note-cell data of our database. The Arrays save notes as MIDI note numbers (ints), and they automatically generate information about the intervals they contain, which is a list of delta values. The whole database is kept as an Array of Arrays in memory.
In order to facilitate ordering of the cells and random permutations of the database, pointers to the note-cell Arrays are kept in a Double-Linked List. The Double-Linked List is a template-based class which manages pointers to object instances. The melody input from the user is also kept in an instance of the Double-Linked List class, where the elements consist of pointers to a note table that can be shared by other objects as well. The algorithms described in this paper work on a copy of the melody note-list. Since the note-list is of type Double-Linked List, only pointers are copied. The Algorithm class initialises its own Array of notes, which itself creates a delta-list of intervals. Pointers to the delta-list elements are then stored inside a Double-Linked List object, so the algorithms can easily work on the interval list from the user input. This is done because a lot of comparison of interval sizes is going on. The history check that was described as an option is implemented as a Visitor of the Algorithm. This object calculates a history table of all pitch-classes that have been used so far. It uses the Double-Linked List containing the intervals from the Algorithm object it is visiting. All events that are sent to the MIDI interface are also managed by a linked-list container. In order to build the editor for note display and to implement user interaction, the Composite design pattern will be used. All graphic elements will be either Component or Composite instances. For instance, a Staff would be a Composite that contains other Components like Notes, Clefs, Barlines, etc.

5 Conclusion
The use of design patterns like Composite and Visitor allows us to achieve very robust code that is both easy to maintain and to extend. The paper also showed that it is possible to model composition techniques using object-oriented design of musical data.
An application has been created that has the flexibility to extend knowledge gained from music analysis and personal experience. The initial goal of creating a sketchbook for composers has been achieved.

6 References
J. Dunsby and A. Whittall. 1988. Music Analysis in Theory and Practice. Faber, London.
E. Gamma, R. Helm, R. Johnson and J. Vlissides. 1995. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA.
Bruno R. Preiss. 1999. Data Structures and Algorithms with Object-Oriented Design Patterns in C++. Wiley, New York.
M. Neifer. 2002. Porting MFC applications to Linux.

SoundPaint – Painting Music
Jürgen Reuter
Karlsruhe, Germany
reuter@ipd.uka.de

Abstract
We present a paradigm for synthesizing electronic music by graphical composing. The problem of mapping colors to sounds is studied in detail from a mathematical as well as a pragmatic point of view. We show how to map colors to sounds in a user-definable, topology-preserving manner. We demonstrate the usefulness of our approach on our prototype implementation of a graphical composing tool.

Keywords
electronic music, sound collages, graphical composing, color-to-sound mapping

1 Introduction
Before the advent of electronic music, the western music production process was clearly divided into three stages: instrument craftsmen designed musical instruments, thereby playing a key role in sound engineering; composers provided music in notational form; performers realized the music by applying the notational form on instruments. The diatonic or chromatic scale served as the commonly agreed interface between all participants. The separation of the production process into smaller stages clearly has the advantage of reducing the overall complexity of music creation. Having a standard set of instruments also enhances the efficiency of composing, since experience from previous compositions can be reused.
The introduction of electro-acoustic instruments widened the spectrum of available instruments and sounds, but in principle did not change the production process. With the introduction of electronic music in the middle of the 20th century, however, the process changed fundamentally. Emphasis shifted from note-level composing and harmonics towards sound engineering and creating sound collages. As a result, composers started becoming sound engineers, taking over the instrument craftsmen's job. Often, a composition could not be notated with traditional notation, or, even worse, the composition was strongly bound to a very particular technical setup of electronic devices. Consequently, the composer easily became the only person capable of performing the composition, thereby often eliminating the traditional distinction of production stages. At least, new notational concepts were developed to alleviate the problem of notating electronic music.
The introduction of MIDI in the early 80s was in some sense a step back to electro-acoustic, keyed-instrument music, since MIDI is based on a chromatic scale and a simple note on/off paradigm. Basically, MIDI supports any instrument that can produce a stream of note on/off events on a chromatic scale, like keyed instruments, wind instruments, and others. Also, it supports many expressive features of non-keyed instruments like vibrato, portamento or breath control. Still, in practice, mostly keyboards with their limited expressive capabilities are used for note entry.
The idea of our work is to break these limitations in expressivity and tonality. With our approach, the composer creates sound collages by visually arranging graphical components into an image, closely following basic principles of graphical notation. While the graphical shapes in the image determine the musical content of the sound collage, the sound itself is controlled by color.
Since in our approach the mapping from colors to actual sounds is user-definable for each image, the sound engineering process is independent from the musical content of the collage. Thus, we resurrect the traditional separation of sound engineering and composing. The performance itself is done mechanically by computation, though. Still, the expressive power of graphics is directly translated into musical expression.
The remainder of this paper is organized as follows: Section 2 gives a short sketch of image-to-audio transformation. To understand the role of colors in a musical environment, Section 3 presents a short survey on the traditional use of color in music history. Next, we present and discuss in detail our approach of mapping colors to sounds (Section 4). Then, we extend our mapping to aspects beyond pure sound creation (Section 5). A prototype implementation of our approach is presented in Section 6. We already gained first experience with our prototype, as described in Section 7. Our generic approach is open to multiple extensions and enhancements, as discussed in Section 8. In Section 9, we compare our approach with recent work in related fields, and we finally summarize the results of our work (Section 10).

2 Graphical Notation Framework
In order to respect the experience of traditionally trained musicians, our approach tries to stick to traditional notation as far as possible. This means that, when interpreting an image as a sound collage, the horizontal axis represents time, running from the left edge of the image to the right, while the vertical axis denotes the pitch (frequency) of sounds, with the highest pitch located at the top of the image. The vertical pitch ordinate is exponential with respect to the frequency, such that equidistant pitches result in equidistant musical intervals. Each pixel row represents a (generally changing) sound of a particular frequency. Both axes can be scaled by the user with a positive linear factor.
The color of each pixel is used to select a sound. The problem of how to map colors to sounds is discussed later on.

3 Color in Musical Notation History
The use of color in musical notation has a long tradition. We give a short historical survey in order to show the manifold applications of color and to provide a sense for the effect of using colors. Color was perhaps first applied as a purely notational feature by Guido von Arezzo, who invented colored staff lines in the 11th century, using yellow and red colors for the do and fa lines, respectively. During the Ars Nova period (14th century), note heads were printed with black and red color to indicate changes between binary and ternary meters (Apel, 1962). While in medieval manuscripts color had been widely applied in complex, colored ornaments, with the new printing techniques rising up in the early 16th century (most notably Petrucci's Odhecaton in 1501), extensive use of colors in printed music was hardly feasible or just too expensive, and thus became rare. Mozart wrote a manuscript of his horn concert K495 with colored note heads, serving as a joke to irritate the hornist Leutgeb, a good friend of his (Wiese, 2002). In liturgical music, red color as contrasted to black color kept playing an extraordinary role by marking sections performed by the priest as contrasted to those performed by the community, or just as a means of readability (black notes on red staff lines). Slightly more deliberate application of color in music printings emerged in the 20th century with technological advances in printing techniques: the advent of electronic music stimulated the development of graphical notation (cp. e.g. Stockhausen's Studie II (Stockhausen, 1956) for the first electronic music to be published (Simeone, 2001)), and Wehinger uses colors in an aural score (Wehinger, 1970) for Ligeti's Articulation to differentiate between several classes of sounds.
For educational purposes, some authors use colored note heads in introductory courses on musical notation (Neuhäuser et al., 1974). There is even a method for training absolute hearing based on colored notes (Taneda and Taneda, 1993). Only very recently, the use of computer graphics in conjunction with electronic music has led to efforts in formally mapping colors to sounds (for a more detailed discussion, see the Related Work Section 9).
While Wehinger's aural score is one of the very few notational examples of mapping colors to sounds, music researchers started much earlier to study relationships between musical and visual content. Especially with the upcoming psychological research in the late 19th century, the synesthetic relationship between hearing and viewing was studied more extensively. Wellek gives a comprehensive overview of this field of research (Wellek, 1954), including systems of mapping colors to keys and pitches. Painters started trying to embed musical structures into their work (e.g. Klee's Fugue in Red). Similarly, composers tried to paint images, as in Mussorgsky's Pictures at an Exhibition. In jazz music, synesthesis is represented by coinciding emotional mood from acoustic and visual stimuli, known as the blue notes in blues music.

4 Mapping Colors to Sounds
We now discuss how colors are mapped to sounds in our approach. For the remainder of this discussion, we define a sound to be a 2π-periodic, continuous function s : ℝ → ℝ, t ↦ s(t). This definition meets the real-world characteristic of oscillators as the most usual natural generators of sounds, and the fact that our ear is trained to recognize periodic signals. Non-periodic natural sources of sounds, such as bells, are out of the scope of this discussion. We assume normalization of the periodic function to 2π periodicity in order to abstract from a particular frequency.
According to this definition, the set of all possible sounds, the sound space, is represented by the set of all 2π-periodic functions. Next, we define the color space C following the standard RGB (red, green, blue) model: the set of colors is given by a three-dimensional real vector space ℝ³, or, more precisely, a subset thereof. Assuming that the valid range of the red, green and blue color components is [0.0, 1.0], the color space is the subset of ℝ³ given by the cube spanned by the corners (0, 0, 0), (1, 0, 0), (0, 1, 0), and (0, 0, 1). Note that the color space is not a vector space, since it is not closed with respect to addition and multiplication by scalar. However, this is not an issue as long as we do not apply operations that result in vectors outside of the cube. Also note that there are other possibilities to model the color space, such as the HSB (hue, saturation, brightness) model, which we will discuss later.
Ideally, for a useful mapping of colors to sounds, we would like to fulfill the following constraints:
• Injectivity. Different colors should map to different sounds in order to utilize the color space as much as possible.
• Surjectivity. With a painting, we want to be able to address as many different sounds as possible; ideally, all sounds.
• Topology preservation. Most important, similar colors should map to similar sounds. For example, when there is a color gradation in the painting, it should result in a sound gradation. There should be no discontinuity effect in the mapping. Also, we want to avoid noticeable hysteresis effects in order to preserve reproducibility of the mapping across the painting.
• User-definable mapping. The actual mapping should be user-definable, as research has shown that there is no general mapping that applies uniquely well to all individual humans.
Unfortunately, there is no mapping between the function space of 2π-periodic functions and ℝ³ that fulfills all of the first three constraints.
Pragmatically, we drop surjectivity in order to find a mapping that fulfills the other constraints. Indeed, dropping the surjectivity constraint does not hurt too much if we assume that the mapping is user-definable individually for each painting and that a single painting does not need to address all possible sounds: rather than mapping colors to the full sound space, we let the user select a three-dimensional subspace S of the full sound space. This approach also compensates for our mapping not being surjective: since for each painting a different sound subspace can be defined by the composer, effectively the whole space of sounds is still addressable, thus retaining surjectivity in a limited sense.
Dropping the surjectivity constraint, we now focus on finding a proper mapping from the color space to a three-dimensional subset of the sound space. Since we do not want to bother the composer with mathematics, we just require the basis of a three-dimensional sound space to be defined. This can be achieved by the user simply defining three different sounds that span a three-dimensional sound space. Given the three-dimensional color space C and a three-dimensional subspace S of the full sound space, a bijective, topology-preserving mapping can be easily achieved by a linear mapping via a matrix multiplication,

M : C → S, x ↦ y = Ax, x ∈ C, y ∈ S    (1)

with A being a 3 × 3 matrix specifying the actual mapping. In practice, the composer would not need to specify this vector space homomorphism M by explicitly entering some matrix A. Rather, given the three basis vectors of the color space C, i.e. the colors red, green, and blue, the composer just defines a sound individually for each of these three basis colors. Since every other color can be expressed as a linear combination of the three basis colors, the scalars of this linear combination can be used to linearly combine the three basis sounds that the user has defined.
5 Generalizing the Mapping

As exciting as this approach may sound at first, we are quickly thrown back to reality: a pure linear combination of sounds results in nothing else but cross-fading waveforms, which quickly turns out to be too limited for serious composing. However, we can still extend the linear combination of sounds to further parameters that influence the sound in a non-linear manner. Most notably, we can apply non-linear features to sounds such as vibrato, noise content, resonance, reverb, echo, hall, detune, disharmonic content, and others. Linear aspects such as panning or frequency-dependent filtering may also improve the overall capabilities of the color-to-sound mapping. In general, any scalar parameter that represents some operation applicable to arbitrary sounds can be used to add new capabilities. Of course, with respect to our topology preservation constraint, all such parameters should respect continuity of their effect, i.e. no noticeable discontinuity should arise when slowly changing such a parameter. Again, we do not want to burden the composer with explicitly defining a mapping function. Instead, we extend the possibilities of defining the three basis sounds by adding scalar parameters, e.g. in a graphical user interface by providing sliders in a widget for sound definition.

So far, we assumed the colors red, green and blue to serve as basis vectors for our color space. More generally, one could accept any three colors, as long as they form a basis of the color space. Changing the basis of the color space can be compensated by adding a basis change matrix to our mapping M:

M′ : C′ → S,  x ↦ y = A φ_{C′→C} x = A′ x,    (2)

assuming that φ_{C′→C} is the basis change matrix that converts x from space C′ to space C.

Specifically, composers may want to prefer the HSB model over the RGB model: traditionally, music is notated with black or colored notes on white paper. An empty, white paper is therefore naturally associated with silence, while a sheet of paper heavily filled with numerous musical symbols typically suggests dense music. Probably more important, when mixing colors, most people think in terms of subtractive rather than additive mixing. Conversion between HSB and RGB is just another basis change of the color space.

When changing the basis of the color space, care must be taken with respect to the range of the vector components. As previously mentioned, the subset of R³ that forms the color space is not a vector space, since the subset is not closed with respect to addition and multiplication by a scalar. By changing the basis in R³, the cubic shape of the RGB color space in the first octant generally transforms into a different shape that possibly covers different octants, thereby changing the valid range of the vector components. Therefore, when operating with a different basis, vectors must be carefully checked for correct range.

6 SoundPaint Prototype Implementation

In order to demonstrate that our approach works, a prototype has been implemented in C++. The code currently runs under Linux, using wxWidgets (Roebling et al., 2005) as GUI library. The GUI of the current implementation mainly focuses on providing a graphical frontend for specifying an image, and for parameterizing and running the transformation process, which synthesizes an audio file from the image file. An integrated simple audio file player can be used to perform the sound collage after transformation.

Figure 1: Mapping Colors to Sounds

Currently, only the RGB color space is supported, with the three basis vectors red, green, and blue. The user defines a color-to-sound mapping by simply defining three sounds to be associated with the three basis colors. Figure 1 shows the color-to-sound mapping dialog. A generic type of wave form can be selected from a list of predefined choices and further parameterized, as shown in Figure 2 for the type of triangle waves. All parameters that go beyond manipulating the core wave form, namely pan, vibrato depth and rate, and noise content, are common to all types of wave forms, such that they can be linearly interpolated between different types. Parameters such as the duty cycle, however, only affect a particular wave form and thus need not be present for other types of wave forms.

Figure 2: Parameterizing a Triangle Wave

Some more details of the transformation are worth mentioning. When applying the core transformation as described in Section 2, the resulting audio file will typically contain many crackling sounds. These annoying noises arise from sudden color or brightness changes at pixel borders: a sudden change in sound produces high-frequency peaks. To alleviate these noises, pixel borders have to be smoothened along the time axis. As a very simple method of antialiasing, SoundPaint horizontally divides each image pixel into sub-pixels down to audio resolution and applies a low-pass filter along the sub-pixels. The filter characteristics can be controlled by the user via the Synthesize Options widget, ranging from a plain overall sound with clearly noticeable clicks to a smoothened, almost reverb-like sound.

Best results are achieved when painting only a few colored structures onto the image and keeping the remaining pixels in the color that produces silence (i.e., in the RGB model, black). For performance optimization, it is therefore useful to handle these silent pixels separately, rather than computing a complex sound with an amplitude of 0. Since, as an effect of the aforementioned pixel smoothing, often only very few pixels are exactly 0, SoundPaint simply assumes an amplitude of 0 if the amplitude level falls below a threshold value. This threshold value can be controlled via the gate parameter in the Synthesize Options widget.

7 Preliminary Experience

SoundPaint was first publicly presented in a workshop during the last Stadtgeburtstag (city's birthday celebrations) of the city of Karlsruhe (Sta, 2004). Roughly 30 random visitors were given the chance to use SoundPaint for a 30 minute slot each. A short introduction was presented to them, with emphasis on the basic concepts from a composer's point of view and the basic use of the program. They were instructed to paint on a black background and to keep the painting structurally simple in order to achieve best results. For the actual process of painting, XPaint (as default) and Gimp (for advanced users) were provided as external programs. Almost all users were immediately able to produce sound collages, some of them with very interesting results. What turned out to be most irritating for many users is the additive interpretation of mixed colors. Also, some users started with a dark gray rather than black image background, such that SoundPaint's optimization code for silence regions could not be applied, resulting in much slower conversion. These observations strongly suggest introducing the HSB color space in SoundPaint.

8 Future Work

Originally stemming from a command-line tool, SoundPaint still focuses on converting image files into audio files. SoundPaint's GUI mostly serves as a convenient interface for specifying conversion parameters. This approach is, from a software engineering point of view, a good basis for a clean software architecture, and can be easily extended, e.g. with scripting purposes in mind.
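The two tricks described above, sub-pixel smoothing and the silence gate, can be combined into one small routine. The following is a hypothetical sketch, not SoundPaint's actual implementation: a row of pixel amplitudes is expanded to sub-pixel (audio) resolution, a simple one-pole low-pass smooths the hard steps at pixel borders, and amplitudes below a gate threshold are forced to exact silence so that they can be skipped cheaply later on.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of sub-pixel smoothing plus silence gating.
// alpha in (0, 1] is the low-pass coefficient; gate is the threshold
// below which amplitudes are treated as exact silence.
std::vector<double> smoothRow(const std::vector<double>& pixelAmps,
                              int subPixelsPerPixel,
                              double alpha,
                              double gate) {
    std::vector<double> out;
    double state = 0.0;
    for (double amp : pixelAmps)
        for (int s = 0; s < subPixelsPerPixel; ++s) {
            state += alpha * (amp - state);            // one-pole low-pass
            out.push_back(state < gate ? 0.0 : state); // gate to silence
        }
    return out;
}
```

With alpha = 1 the filter is disabled and the hard pixel steps survive; smaller values of alpha trade click suppression against the reverb-like smearing mentioned above.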
A composer, however, may prefer to create a sound collage in a more interactive way, rather than creating a painting in an external application and repeatedly converting it into an audio file in a batch-style manner. Hence, SoundPaint would undoubtedly benefit from integrating painting facilities into the application itself. Going a step further, with embedded painting facilities, SoundPaint could be extended to support live performances. The performer would simply paint objects ahead of the cursor of SoundPaint's built-in player, assuming that the image-to-audio conversion can be performed in real time. For Minimal Music like performances, the player could be extended to play in loop mode, with integrated painting facilities allowing for modifying the painting for the next loop. Inserting or deleting multiple objects following predefined rhythmical patterns with a single action could be a challenging feature.

Assembling audio files generated from multiple image files into a single sound collage is desirable when the surjectivity of our mapping is an issue. Adding this feature to SoundPaint would ultimately turn the software into a multi-track composing tool. Having a multi-track tool, integration with other notation approaches suggests itself. For example, the recent development of LilyPond's (Nienhuys and Nieuwenhuizen, 2005) GNOME back-end suggests integrating traditional notation in separate tracks into SoundPaint. The overall user interface of such a multi-track tool could finally look similar to the arrange view of standard sequencer software, but augmented by graphical notation tracks.

9 Related Work

Graphical notation of music has a rather long history. While the idea of graphical composing as the reverse process is obvious, practically usable tools for off-the-shelf computers emerged only recently. The most notable tools are presented below.
Iannis Xenakis was perhaps the first who started designing a system for converting images into sounds, in the 1950s, but it took him decades to present the first implementation of his UPIC system in 1978 (Xenakis, 1978). Like SoundPaint, Xenakis uses the coordinate axes following the metaphor of scores. While SoundPaint uses a pixel-based conversion that can be applied to any image data, the UPIC system assumes line drawings, with each graphical line being converted into a melody line.

Makesound (Burrell, 2001) uses the following mapping for a sinusoidal synthesis with noise content and optional phase shift:

  x position  →  phase
  y position  →  temporal position
  hue         →  frequency
  saturation  →  clarity (inverse noise content)
  luminosity  →  intensity (amplitude)

In Makesound, each pixel represents a section of a sine wave, thereby somewhat following the idea of a spectrogram rather than graphical notation. Color has no effect on the wave shape itself.

EE/CS 107b (Suen, 2004) uses a 2D FFT of each of the RGB layers of the image as the basis for a transformation. Unfortunately, the relation between the image and the resulting sound is not at all obvious.

Coagula (Ekman, 2003) uses a synthesis method that can be viewed as a special case of SoundPaint's synthesis with a particular set of color-to-sound mappings. Coagula uses a sinusoidal synthesis, using the x and y coordinates as time and frequency axis, respectively. Noise content is controlled by the image's blue color layer. Red and green control stereo sound panning. Following Coagula's documentation, SoundPaint should show a very similar behavior when assigning 100% noise to blue, and pure sine waves to the colors red and green, with the red color's pan set to left and the green color's pan set to right. Just like Coagula, MetaSynth (Wenger and Spiegel, 2005) maps red and green to stereo panning, while blue is ignored.
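The Coagula-like configuration described above can be written down as a small preset structure. This is purely illustrative; the field names are hypothetical and do not reflect SoundPaint's actual API.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of the Coagula-like preset described above: the
// three RGB basis sounds are a pure sine panned hard left (red), a pure
// sine panned hard right (green), and 100% noise (blue).
struct BasisSound {
    std::string waveform; // "sine" or "noise"
    double pan;           // -1.0 = left, +1.0 = right
    double noiseContent;  // 0.0 .. 1.0
};

struct ColorToSoundPreset {
    BasisSound red, green, blue;
};

ColorToSoundPreset coagulaLikePreset() {
    return {
        {"sine", -1.0, 0.0},  // red: sine, panned hard left
        {"sine", +1.0, 0.0},  // green: sine, panned hard right
        {"noise", 0.0, 1.0},  // blue: pure noise
    };
}
```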
Small Fish (Furukawa et al., 1999), presented by the ZKM (ZKM, 2005), is an illustrated booklet and a CD with 15 art games for controlling animated objects on the computer screen. Interaction of the objects creates polytonal sequences of tones in real time. Each game defines its own particular rules for creating the tone sequences from object interaction. The tone sequences are created as MIDI events and can be played on any MIDI compliant tone generator. Small Fish focuses on the conversion of movements of objects into polytonal sequences of tones rather than on graphical notation; still, shape and color of the animated objects in some of the games map to particular sounds, thereby translating basic concepts of graphical notation into an animated real-time environment.

The PDP (Schouten, 2004) extension for the Pure Data (Puckette, 2005) real-time system follows a different approach in that it provides a framework for general image or video data processing, producing data streams by serialization of visual data. The resulting data stream can be used as an input source for audio processing.

Finally, it is worth mentioning that the visualization of acoustic signals, i.e. the opposite conversion from audio to image or video, is frequently used in many systems, among them Winamp (Nullsoft, 2004) and Max/MSP/Jitter (Cycling '74, 2005). Still, these kinds of visualization, which are often implemented as real-time systems, typically work on the audio signal level rather than on the level of musical structures.

10 Conclusions

We presented SoundPaint, a tool for creating sound collages based on transforming image data into audio data. The transformation follows to some extent the idea of graphical notation, using the x and y axes for time and pitch, respectively. We showed how to deal with the color-to-sound mapping problem by introducing a vector space homomorphism between color space and sound subspace. Our tool mostly hides the mathematical details of the transformation from the user without imposing restrictions on the choice of parameterization of the transformation. First experience with random users during the city's birthday celebrations demonstrated the usefulness of our tool. The result of our work is available as open source (…/soundpaint/).

11 Acknowledgments

The author would like to thank the Faculty of Computer Science of the University of Karlsruhe for providing the infrastructure for developing the SoundPaint software, and the department for technical infrastructure (ATIS) and Tatjana Rauch for their valuable help in organizing and conducting the workshop at the city's birthday celebrations.

References

Willi Apel. 1962. Die Notation der polyphonen Musik 900–1600. Breitkopf & Härtel, Wiesbaden.
Michael Burrell. 2001. Makesound, June.
Cycling '74. 2005. Max/MSP/Jitter.
Rasmus Ekman. 2003. Coagula.
Kiyoshi Furukawa, Masaki Fujihata, and Wolfgang Münch. 1999. Small fish: Kammermusik mit Bildern für Computer und Spieler, volume 3 of Digital arts edition. Cantz, Ostfildern, Germany. 56 S.: Ill. + CD-ROM.
Meinolf Neuhäuser, Hans Sabel, and Richard Rudolf Klein. 1974. Bunte Zaubernoten. Schulwerk für den ganzheitlichen Musikunterricht in der Grundschule. Diesterweg, Frankfurt am Main, Germany.
Han-Wen Nienhuys and Jan Nieuwenhuizen. 2005. LilyPond, music notation for everyone.
Nullsoft. 2004. Winamp.
Miller Puckette. 2005. Pure Data.
Robert Roebling, Vadim Zeitlin, Stefan Csomor, Julian Smart, Vaclav Slavik, and Robin Dunn. 2005. wxWidgets.
Tom Schouten. 2004. Pure Data Packet. URL: …/overview.html.
Nigel Simeone. 2001. Universal edition history.
2004. Stadtgeburtstag Karlsruhe, June.
Karlheinz Stockhausen. 1956. Studie II.
Jessie Suen. 2004. EE/CS 107b. URL: …/~chia/EE107/.
Naoyuki Taneda and Ruth Taneda. 1993. Erziehung zum absoluten Gehör. Ein neuer Weg am Klavier. Edition Schott, 7894. B. Schott's Söhne, Mainz, Germany.
Rainer Wehinger. 1970. Ligeti, György: Articulation. An aural score by Rainer Wehinger. Edition Schott, 6378. B. Schott's Söhne, Mainz, Germany.
Albert Wellek. 1954. Farbenhören. MGG – Musik in Geschichte und Gegenwart, 4:1804–1811.
Eric Wenger and Edward Spiegel. 2005. Metasynth 4, January. URL: …/DOCS PUBLIC/MS4 Tutorials.pdf.
Henrik Wiese. 2002. Preface to Concert for Horn and Orchestra No. 4, E flat major, K495. Edition Henle, HN 704. G. Henle Verlag, München, Germany. URL: …/Vorwort/0704.pdf.
Iannis Xenakis. 1978. The UPIC system. URL: …/INSTRUMENT/DIGITAL/UPIC/UPIC.htm.
ZKM. 2005. Zentrum für Kunst und Medientechnologie.

System design for audio record and playback with a computer using FireWire

Michael Schüepp, BridgeCo AG, michael.schuepp@bridgeco.net
Rene Widtmann, BridgeCo AG, rene.widtmann@bridgeco.net
Rolf "Day" Koch, BridgeCo AG, rolf.koch@bridgeco.net
Klaus Buchheim, BridgeCo AG, klaus.buchheim@bridgeco.net

Abstract

This paper describes the problems and solutions that enable solid, high-quality audio transfer to and from a computer with external audio interfaces, and looks at the different elements that need to come together to allow high-quality recording and playback of audio from a computer.

Keywords

Recording, playback, IEEE1394

1 Introduction

Computers, together with the respective digital audio workstation (DAW) software, have become powerful tools for music creation, music production, post-production, and editing. More and more musicians turn to the computer as a tool to explore and express their creative ideas. This tendency is observed for both professional and hobby musicians. Within the computer music market, a trend towards portable computers can be observed as well. Laptops are increasingly used for live recordings outside a studio as well as mobile recording platforms. And, with more and more reliable system architectures, laptops and computers are also increasingly used for live performances.

However, making music on a computer faces the general requirement to convert the digital music into analogue signals, as well as to digitize analogue music to be processed on the computer. Therefore the need for external audio interfaces, i.e. interfaces located outside of the computer housing, is increasing. This paper describes a system architecture for IEEE1394 based audio interfaces, including the computer driver software as well as the audio interface device.

1.1 System Overview

When discussing the requirements for an audio interface, it is important to understand the overall system architecture, to identify the required elements and the environment in which those elements have to fit. The overall system architecture can be seen in the following figure:

Illustration 1: Computer audio system overview

The overall system design is based on the following assumptions: A player device receives m audio channels from the computer (connection 3) and plays them out. In addition, it plays out data to i MIDI ports. A recorder device records n audio channels and sends the data to the computer (connection 4). In addition, it records data from j MIDI ports and sends their data to the computer. In both directions, the data (audio and MIDI) exchanged with the computer form a compound stream. A device can send or receive a synchronisation stream (connections 1 and 2). Typically, one of the attached audio devices is the clock master for the synchronisation stream. The player and recorder functions can be standalone devices or integrated into the same device.

1.2 What is there?
In the set-up above, the following elements already exist and are widely used:

On the computer:
- Digital audio workstation software such as Cubase and Logic, with their respective audio APIs (ASIO and CoreAudio)
- The operating system (here Windows XP and Apple Mac OS X)
- Computer hardware such as graphic cards, OHCI controllers, the PCI bus, etc.

On the audio interface:
- Analogue/digital converters with I2S interfaces

All of the above elements are well accepted in the market, and any solution needs to work with those elements in order to be accepted in the market place.

1.3 What is missing?

The key elements that are missing in the above system are the following:

1. The driver software on the computer that takes the audio samples to/from a hardware interface and transmits/receives them to/from the audio APIs of the music software.
2. The interface chip in the audio interface that transmits/receives the audio samples to/from the computer and converts them to the respective digital format for the converter chips.

The paper will now focus on these two elements and show what is required on both sides to allow for a high-quality audio interface. In a first step we will look at the different problems we face, and then at the proposed solutions.

2 Issues to resolve

To allow audio streaming to/from a computer, the following items have to be addressed:

2.1 Signal Transport

It has to be defined how the audio samples get from the music software tools to the audio interface and back. The transfer protocol has to be defined, as well as the transfer mode. Additionally, precautions to reduce the clock jitter during the signal transport have to be taken. The latency in the overall system also has to be addressed.

2.2 Synchronization

In a typical audio application there are many different clock sources for the audio interface. Therefore we have the requirement to be able to synchronize to all of those clock sources and to have means to select the desired clock source.

2.3 Signal Processing

For low latency requirements and specific recording set-ups, it is required to provide the capability for additional audio processing in the audio interface itself. An example would be a direct monitor mixer that mixes recorded audio onto the audio samples from the computer.

2.4 Device Management

Since we have the requirement to sell our product to various different customers as well as for various different products with a short time-to-market, it is necessary to provide a generic approach that reduces the customization efforts on the firmware and driver. Therefore it was necessary to establish a discovery process that allows the driver to determine at least the device audio channels and formats on-the-fly. This would reduce the customization efforts significantly. Hence, means to represent the device capabilities within the firmware, and to parse this information in the driver, have to be found.

2.5 User Interface

It must be possible to define a user interface on the device, on the computer, or as a mix of both. Therefore it is required to provide means to supply control information from both ends of the system.

2.6 Multi-device Setup

It is believed that it must be possible to use several audio interfaces at the same time to provide more flexibility to end-users. This puts additional requirements on all the above issues. To avoid sample rate conversion in a multi-device setup, it is mandatory to allow only a single clock source within the network. This requirement means selecting the master clock within the multi-device setup, as well as propagating the clock information within the network so that all devices are slaved to the same clock.

3 Resolution

Very early in the design process it was decided to use the IEEE1394 (also called FireWire) standard [6] as the base for the application.
The IEEE1394 standard brings all the means to allow isochronous data streaming, it is designed as a peer-to-peer network, and the respective standards are in place to transport audio samples across the network. It was also decided to base any solution on existing standards, to profit from already defined solutions. However, it was also clear that the higher layers of the desired standards were not good enough to solve all of our problems. Therefore, efforts have been undertaken to bring our solutions back to the standardization organisations. Overall, the following standards are applied to define the system:

Illustration 2: Applied standards

As we can see, a variety of standards on different layers and from different organisations are used.

3.1 Signal Transport

The signal transport between the audio interfaces and the computer is based on the isochronous packets defined in the IEEE1394 standard. The format and structure of audio packets is defined in the IEC 61883-6 standard [6], which is used here as well. However, a complex audio interface requires transmitting several audio and music formats at the same time. This could e.g. be PCM samples, SPDIF framed data and MIDI packets. An additional requirement is synchronicity between the different formats. Therefore it was decided to define a single isochronous packet, based on an IEC 61883-6 structure, that contains audio and music data of different formats. Such a packet is called a compound packet. Isochronous streams containing compound packets are called compound streams. Compound streams are used within the whole system to transfer audio and music data to/from the audio interface. The exact IEC 61883-6 packet structure can be found in [6].

Illustration 3: IEC 61883 packet structure

The samples in such a packet are synchronized, since the time stamp for the packet applies to all the audio data within the packet. Blocking mode is our preferred mode for data transmission on the IEEE1394 bus. In case data is missing to complete a full packet on the source side, empty packets are sent. An empty packet consists of only the header block and does not contain data; its SYT field in the CIP1 header is set to 0xffff.

The following rules to create an IEC 61883 compliant packet are applied: A packet always begins with a header block consisting of a header and two CIP header quadlets. M data blocks follow the header block; Table 1 defines M and its dependency on the stream transfer mode. In blocking mode, the number of data blocks is constant; if insufficient samples are available to fill all the data blocks in a packet, an empty packet will be sent. In non-blocking mode, all the available samples are placed in their data blocks and sent immediately; the number of data blocks is not constant.

Table 1: Number of data blocks depending on the sampling frequency

  Sampling Frequency (FDF) [kHz] | Blocking Mode | Non-Blocking Mode
  32                             | 8             | 5-7
  44.1                           | 8             | 5-7
  48                             | 8             | 5-7
  88.2                           | 16            | 11-13
  96                             | 16            | 11-13
  176.4                          | 32            | 23-25
  192                            | 32            | 23-25

The header information and structure for an isochronous IEC 61883 packet is defined as follows:

Illustration 4: IEC 61883 packet header

Table 2 describes the different elements and their definition within the packet header:

Table 2: IEC 61883 packet header fields

  Field       | Description
  Data Length | Length in bytes of the packet data, including CIP1 and CIP2 header.
  Channel     | Isochronous channel to which the packet belongs.
  SY          | "System". Can be of interest if DTCP (streaming encryption) is used.
  SID         | "System Identification". Contains the IEEE1394 bus node id of the stream source.
  DBS         | "Data Block Size". Contains information about the number of samples belonging to a data block.
  DBC         | "Data Block Count". A counter for the number of data blocks that have already been sent. It can be used to detect multiply sent packets, or to define the MIDI port to which a sample belongs.
  FMT         | "Format". The format of the stream. For an audio stream this field is always 0x10.
  FDF         | The nominal sampling frequency of the stream. See Table 1 for value definitions.
  SYT Cycles  | This field, in combination with the SYT Offset field, defines the point in time when the packet should be played out. Value range: 0 – 15.
  SYT Offset  | This field, in combination with the SYT Cycles field, defines the point in time when the packet should be played out. Value range: 0 – 0xBFF.

Within an IEC 61883 packet, the data blocks follow the header. For the data block structure we applied the AM824 standard as defined in [6]. An audio channel is assigned to a slot within the data block:

Illustration 5: Data block structure

The following rules apply to assemble the data blocks:

1. The number of samples (N) within a data block of a stream is constant.
2. The number of samples should be even (padding with ancillary no-data samples, see [6]).
3. The label is 8 bits and defines the sample data type.
4. The sample data are MSB aligned.
5. The channel to slot assignment is constant.

The channel to data block slot assignment is user defined. To create a compound packet, a data structure had to be defined to place the different audio and music formats within the data blocks. No current standard defines the order in which such user data must be placed within a data block; the current standard [6] simply provides a recommended ordering.
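The relationships in Tables 1 and 2 can be captured in a few helper functions. The following sketch is illustrative only (function names are our own): it returns the blocking-mode data block count M for a given nominal sampling frequency, and packs the SYT Cycles field (4 bits, 0-15) and the SYT Offset field (12 bits, 0-0xBFF) into a 16-bit SYT value, with 0xFFFF reserved for empty packets as described above.

```cpp
#include <cassert>
#include <cstdint>

// Blocking-mode data block count M per nominal sampling frequency,
// derived from Table 1 above.
int blockingModeDataBlocks(double fdfKHz) {
    if (fdfKHz <= 48.0) return 8;   // 32, 44.1, 48 kHz
    if (fdfKHz <= 96.0) return 16;  // 88.2, 96 kHz
    return 32;                      // 176.4, 192 kHz
}

// Pack SYT Cycles (high 4 bits) and SYT Offset (low 12 bits) into the
// 16-bit SYT field of the CIP header. The bit layout is our assumption
// based on the field widths given in Table 2.
uint16_t packSyt(unsigned cycles, unsigned offset) {
    assert(cycles <= 15 && offset <= 0xBFF);
    return static_cast<uint16_t>((cycles << 12) | offset);
}

// SYT value of an empty packet (header only, no data).
const uint16_t SYT_EMPTY_PACKET = 0xFFFF;
```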
We applied this recommendation for our application and made the following data block structure mandatory for streaming audio and music information:

Illustration 6: User data order

The following rules are applied to create the data blocks of a compound packet:

1. A region within a data block always contains data of the same data type.
2. Not every region type must exist in a packet.

The following region order is used:

1. SPDIF: IEC 60958 (2 channels)
2. Raw Audio: Multi-Bit Linear Audio (N channels)
3. MIDI: MIDI Conformant Data
4. SMPTE Time Code
5. Sample Count

MIDI data is transferred, like audio data, within channels of a compound data block. Because of the low transfer rate of one MIDI port, the data of 8 MIDI ports, instead of just one, can be transferred in one channel. As shown in Illustration 7, one data part of one MIDI port is transferred per data block and channel. This method of splitting data is called multiplexing.

Illustration 7: MIDI data multiplexing

For the two main elements in this system, the driver and the interface processor, it is required to assemble the data packets correctly when sending data, as well as to receive and disassemble the packets. Based on the dynamics of the system, with different channel counts and formats, the final packet structure has to be derived from the configuration of the interface, such as the number of channels per format and the sample rate. Overall, it is required to keep the latency in the system low, so the framing and deframing processes have to be done as efficiently as possible.

3.2 Synchronization

The system synchronization clock for an IEEE1394 based audio interface can normally be retrieved from four different sources:

1. The time information in the SYT header field of an incoming isochronous stream.
2. The 8 kHz IEEE1394 bus Cycle Start Packet (CSP).
3. The time information from an external source like Word Clock or SPDIF.
4. A clock generated by an XO or VCXO in the device.

Illustration 8: Possible synchronization sources for an IEEE1394 based audio interface

3.3 Signal Processing

The specific architecture of the BridgeCo DM1x00 series is designed to enable signal processing of the audio samples once they have been deframed:

Illustration 9: Architecture of the BridgeCo DM1000 processor

Since the on-board ARM processor core can access every audio sample before it is either sent to the audio ports or sent to the IEEE1394 link layer,
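The MIDI multiplexing scheme described above can be illustrated with a one-line demultiplexer. This is a sketch of our reading of the scheme, not code from the paper: since eight MIDI ports share one channel and each data block carries the data part of one port in turn, the receiver can recover the port index from the running Data Block Count (DBC) in the packet header. The round-robin-by-DBC assignment is our assumption of how the ports cycle.

```cpp
#include <cassert>

// Hypothetical sketch: eight MIDI ports are multiplexed onto one channel
// of the compound stream, one port's data part per data block.
constexpr int kMidiPortsPerChannel = 8;

// Recover which MIDI port a data block's MIDI slot belongs to, given the
// running data block count (DBC) from the CIP header.
int midiPortForDataBlock(unsigned dbc) {
    return dbc % kMidiPortsPerChannel; // port index cycles 0..7 with DBC
}
```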
The AV/C model is a representation of the internal control flow and audio routing: Illustration 10: Direct monitor mixer 3.4 Device Management The device management plays a key role within the overall concept. To implement a generic approach it is absolutely necessary for the driver software to determine the device capabilities such as number of audio channels, formats and possible sample rates on-the-fly. Based on that information, the driver can expose the respective interfaces to the audio software APIs. The device management and device discovery is normally defined in the AV/C standards from the 1394TA. To achieve our goals, several audio and music related AV/C standards have been used: Illustration 11: Typical AV/C model The following rules are applied to an AV/C model: 1 2 Fixed connections must not be changed. Every control command requesting such a change must be rejected. Every unclear requested connection (like Ext. IPlug0 Destination Plug) must be rejected. AV/C Music Subunit Specification V1.0 AV/C Stream Format Information Specification V1.0 AV/C Audio subunit 1.0 However the standards did not provide all means to describe and control devices as intended. Therefore two of above standards, AV/C Music Subunit and AV/C Stream Format Information are currently updated within the 1394TA organization, based on the experience and implementations from Apple Computer and BridgeCo. Using AV/C, the driver has the task to determine and query the device for its capabilities whereas the device (meaning the software running on the device) needs to provide all the information requested by the driver to allow to stream audio between the driver and the audio interface. As soon as the device is connected to the driver via IEEE1394, a device discovery process is started. The discovery process In the AV/C model we also see the control mechanism for the direct monitor mixer which can be controlled over AV/C e.g. to determine the levels of the different inputs into the mixer. 
Based on this information, the driver software can now determine the number of audio channels to be sent to the device, the number of audio channels received from the device, the different formats and expose this information to the audio streaming APIs such as ASIO or CoreAudio and expose all available sample rates to a control API. 3.5 User Interface In the described system of an audio interface connected to a computer there are two natural points to implement a user interface: A control panel on the computer Control elements on the audio interface LAC2005 92 Depending on customer demands and branding, different OEMs have different solutions/ideas of a control interface. In our overall system architecture we understand that it is impossible to provide a generic approach to all possible configurations and demands. Therefore we decided to provide APIs that can easily be programmed. Those APIs have to be present on both sides, on the driver as well as in the device software: On the driver side we expose a control API that allows direct controlling the device as well as to sent/use specific bus commands. On the device we have several APIs that allow to link in LEDs, knobs, buttons and rotaries. Illustration 12: Multi-device configuration The commands from the control API of the driver are send as AV/C commands or vendor specific AV/C commands to the device. The control API provides the correct framing of those commands whereas the application utilizing the control API needs to fill in the command content. On the device side, those AV/C commands are received and decoded to perform the desired action. This could e.g. be different parameters for the mixer routines or being translated into SPI sequences to control and set the external components. Next to the UI information, the device might need to send additional information to a control panel, e.g. peak level information of different audio samples, rotary information for programmable functions etc. 
For those high-speed data transfers, the AV/C protocol can be too slow, since it e.g. allows a timeout of about 10 msec before sending retries. Within that time frame, useful and important information might already be lost. Therefore we deployed a high-speed control interface (HSCI) that allows the device to efficiently provide information to the driver. With the HSCI, the device writes the desired information into a reserved space of its IEEE1394 address space. This allows the application on the computer to issue highly efficient asynchronous read requests to this address space to obtain the information. Since the information is structured such that nothing gets lost, the PC application can pull the information when needed.

If multiple audio interfaces are connected to a computer, we face certain limitations, imposed by the operating system, the audio software on the computer, the APIs etc.:

1. ASIO and CoreAudio based audio software (e.g. Cubase) can only work with a single device.
2. To avoid sample rate conversion, only a single clock source can be allowed for all devices.
3. All devices need to be synchronised over the network.

To overcome those limitations the driver software has to provide the following capabilities:

1. Concatenate multiple IEEE1394 devices into a single ASIO or CoreAudio device for the audio software application.
2. Allow selecting the clock source for each device.
3. Ability to transmit and receive several isochronous streams.
4. Ability to supply the same SYT values to all transmitted isochronous streams.

The device itself needs to provide the following functions:

1. Expose available clock sources to the driver software.
2. Generate correct SYT values for outgoing isochronous streams.

To synchronise multiple devices on a single clock source, which might be an external clock source for one of the devices, the following clocking scheme is used:

1. A device must be selected as clock master. This can be the computer as well.
2. If an external device is the clock master, the driver software synchronizes to the SYT time stamps within the isochronous stream from the clock master device.
3. The driver copies the received SYT time stamps from the clock master stream to its outgoing streams for all other devices.
4. All external devices except the clock master use the SYT time stamps of their incoming isochronous stream as a clock source.

3.6 Multi-device Setup

A multi-device setup is normally needed when users would like to use multiple devices to gain a higher channel count, or to use different formats that are not all available in a single audio interface.

Illustration 13: Example for a multi-device clock setup

Now, all devices are slaved across the IEEE1394 network to a single clock source. This avoids the use of word clock or similar schemes to synchronize multiple devices. Based on the above configuration, each device needs to be able to synchronize to the SYT time stamps of the isochronous packets to work in such an environment. Therefore the following features are required for the driver software:

1. Concatenate multiple devices into a single device on the audio API (ASIO or CoreAudio).
2. Allow synchronizing to the SYT time stamps from a selected isochronous stream.
3. Generate correct SYT time stamps for all isochronous streams based on the received SYT time stamps.
4. Parse the AV/C model to determine all available clock sources on a device.
5. Allow setting the clock source for each device.

5 Conclusion

Due to the wide spectrum of interpretation within the available standards, a very tight cooperation between all elements in the system is necessary. In developing such a system, it is not enough just to concentrate on and develop one element within the system. Instead it is rather required to start from a system perspective, to design an overall system concept that is initially independent of the different elements.
Then, in a second step, the individual tasks for each element can be defined and implemented. BridgeCo has chosen this approach, and with over 20 different music products shipping today has proven that the concept and the system design approach lead to a success story. BridgeCo would also like to express its gratitude to Apple Computer, which has been a great partner throughout the design and implementation process and has provided valuable input into the whole system design concept.

For the chip/firmware combination on the audio interface the following requirements must be met:

1. Allow synchronizing to SYT time stamps.
2. Expose all available clock sources on the device in the AV/C model.
3. Allow external control of the clock sources via AV/C.

6 References

[1] IEEE Standard 1394-1995, IEEE Standard for a High Performance Serial Bus, IEEE, July 22 1996
[2] IEEE Standard 1394a-2000, IEEE Standard for a High Performance Serial Bus—Amendment 1, IEEE, March 30 2000
[3] IEEE Standard 1394b-2002, IEEE Standard for a High-Performance Serial Bus—Amendment 2, IEEE, December 14 2002
[4] TA Document 2001024, "Audio and Music Data Transmission Protocol" V2.1, 1394TA, May 24 2002
[5] TA Document 2001012, "AV/C Digital Interface Command Set General Specification", Version 4.1, 1394TA, December 11 2001
[6] TA Document 1999031, "AV/C Connection and Compatibility Management Specification", Version 1.0, 1394TA, July 10 2000
[7] TA Document 1999025, "AV/C Descriptor Mechanism Specification", Version 1.0, 1394TA, April 24 2001
[8] TA Document 2001007, "AV/C Music Subunit", Version 1.0, 1394TA, April 8 2001
[9] TA Document 1999008, "AV/C Audio Subunit Specification", Version 1.0, 1394TA, October 24 2000
[10] IEC 61883-6, Consumer audio/video equipment - Digital interface - Part 6: Audio and music data transmission protocol, IEC, October 14 2002

4 FreeBob Project

Currently, there exist drivers only for the Windows and MacOS X platforms, which are of course not free.
The FreeBob project is trying to implement a completely free and generic driver for Linux-based systems. The project is still in its early stages, though the first (hardcoded) prototypes are working. For further information please visit the website of the project ().

Recording all Output from a Student Radio Station

John ffitch
Department of Computer Science
University of Bath
Bath BA2 7AY, UK, jpff@cs.bath.ac.uk

Tom Natt
Chief Engineer, URB
University of Bath
Bath BA2 7AY, UK, ma1twn@bath.ac.uk

Abstract

Legal requirements for small radio stations in the UK mean, inter alia, that the student station at Bath (University Radio Bath or URB) must retain 50 days of the station's output. In addition, as it has recently become easier to transfer data using disposable media, and general technical savvy amongst presenters has improved, there is now some interest in producing personal archives of radio shows. Because existing techniques, using audio videos, were inadequate for this task, a modern, reliable system which would allow the simple extraction of any audio was needed. Reality dictated that the solution had to be cheap. We describe the simple Linux solution implemented, including the design, sizing and some surprising aspects.

The regulations controlling such activities as student broadcasting in the UK read:

You are required to make a recording of all broadcast output, including advertisements and sustaining services. You must retain these recordings ('logging tapes') for a period of 42 days after broadcast, and make them readily available to us or to any other body authorised to deal with complaints about broadcast programmes. Failure to provide logging tapes on request will be treated seriously, and may result in a sanction being imposed.

where the bold is in the original (OffComm, 2003). In the previous state the logging was undertaken using a video player and a pile of video tapes. These tapes were cycled manually so there was a single continuous recording of all output. This system suffered from the following problems. The quality was largely unknown.
In at least the last 3 years no one has actually listened to anything recorded there; indeed it is not known if it actually works! The system required someone to physically change the tape. Hence, there were large gaps in logging where people simply forgot to do this, which would now expose the station to legal problems. Assuming that the tapes actually worked, recovering audio would be a painstaking process requiring copying it from the tapes to somewhere else before it could be manipulated in any way. Hence this was only really useful for the legal purposes, rather than for people taking home copies of their shows. Also, as far as could be determined, whilst recovering audio, the logger itself had to be taken offline. Put simply, the system was out of date.

Keywords

Audio archive, Audio logging, Radio Station, Portaudio.

1 Introduction

The University of Bath Students' Union has been running a radio station (URB, 2004) for many years, and it has a respectable tradition of quality, regularly winning prizes for its programmes (SRA, 2004). Unfortunately the improved requirements for logging of output from the station coincided with the liquidation of a major sponsor and hence a significant reduction in the station's income, so purchasing a commercial logging device was not an option. Following a chance conversation the authors decided that the task was not difficult, and a software solution should be possible. This paper describes the system we planned and how it turned out. It should be borne in mind that during development, cost was the overriding factor, in particular influencing hardware choices.

2 The Problem

The critical paragraph of the regulations on Radio Restricted Service Licences, which control such activities as student broadcasting in the UK, is quoted above. Over the last two years there has been a move to modernise URB by shifting to computers where possible, so it seemed logical to log audio in such a way that it would be easy to transmit onto the network, and be secure regarding the regulations.
3 Requirements

The basic requirement for the system is that it should run reliably for many days, or even years, with little or no manual intervention. It should log all audio output from the radio station, and maintain at least 50 days of material. Secondary requirements include the ability to recover any particular section of audio by time (and by a non-technical user). Any extracted audio is for archiving or rebroadcast, so it must be of sufficient quality to serve this purpose. There is another, non-functional, requirement: it should cost as close to zero as possible! Quick calculations show that if we were to record at CD quality (44.1KHz, 16bit stereo) then we would need 44100 × 2 × 2 bytes a second, or 44100 × 2 × 2 × 60 × 60 × 24 ≈ 14Gb each day, which translates to over 700Gb in a 50 day period. While disks are much cheaper than in earlier times, this is significantly beyond our budget. Clearly the sound needs to be compressed, and lossy compression beckons. This reduces the audio quality but, depending on compression rates, not so much that it violates our requirements. We sized the disk requirements on a conservative assumption of 1:8 compression, which suggests at least an 80Gb disk. Quick experiments suggested about a 400MHz Intel processor; the decision to use Linux was axiomatic. Given sufficient resource, a system to record DJ training sessions and demo tapes was suggested as a simple extension. We assumed software would be custom-written C, supported by shell scripts, cron jobs and the like. A simple user recovery system from a web interface would be attractive to the non-technical user, and it seemed that PERL would be a natural base for this. There are commercial logging computers, but the simple cost equation rules them out.

The only cash expenditure was a new 120Gb disk; we decided that 80Gb was a little too close to the edge and the additional space would allow a little leeway for any extensions, such as the DJ training.
4 Hardware

A search for suitable hardware allowed the creation of a 550MHz Celeron machine with 128Mb of memory, ethernet, and two old SoundBlasters retrieved from a discard pile. SuSE 9.1 (Novell, 2004) was installed with a borrowed screen, keyboard and mouse. There were two unfortunate incidents with the hardware: the disk was faulty and had to be replaced, and, following the detection of large amounts of smoke coming from the motherboard, we had to replace the main system; the best we could find was a 433MHz Celeron system. Fortunately the disk, soundcards and other equipment were not damaged, and in the process of acquiring a new motherboard and processor combination we were lucky enough to find another stick of RAM. Most important, what we lost was time for development and testing, as we needed to hit the deadline for going live at the beginning of the 2004 academic year.

Hardware               Features
433MHz Celeron         slower than our design
120Gb disk             New!
2 × SoundBlaster 16    old but working
256Mb main memory
10 Mbit ether

Table 1: Summary of Hardware Base

5 Implementation

The main point is that the suite needs to perform two main tasks: read a continuous audio stream and write compressed audio to the disk. The reading of the audio feed must not be paused or otherwise lose a sample. The current design was derived from a number of alternative attempts. We use a threaded program, with a number of threads each performing a small task, coordinated by a main loop. A considerable simplification was achieved by using PortAudio (Por, 2004) to read the input, using a call-back mechanism. We shamelessly cannibalised the test program patest_record written by Phil Burk to transfer the audio into an array in large sections. The main program then writes the raw audio in 30 second sections onto the disk. It works on a basic 5 period cycle, with specific tasks started on periods 0, 3 and 4. On period 0 a new file is used to write the raw audio, and a message is written to the syslog to indicate the time at which the file starts. On period 3 a subthread is signalled to start the compression of a raw audio file, and on period 4 the next raw audio file is named and created. By sharing out the tasks we avoid bursts of activity which could lead to audio over-runs. This is shown in figure 1. The compression is actually performed by a lower-priority subtask which is spawned by a call to system. There is no time-critical aspect of the compression as long as on average it compresses faster than the real-time input. Any local load on the machine may lead to local variation but eventually it must catch up.

There is a dilemma in the compression phase. The obvious format is OGG, for which free unencumbered software exists, but the student community is more used to MP3 format. We have experimented with oggenc (ogg, 2004), which takes 80% of elapsed time on our hardware and compresses in a ratio of 1:10, and notlame (Not, 2004), where compression is 1:11 and 74% of elapsed time. Our sources have both methods built in with conditional compilation. We have varied the period, and have decided on a minute, so each audio file represents five minutes of the station's output; this is a good compromise between overheads and ease of recovery.

The result of this program is a collection of 5 minute compressed audio files. Every day, just after midnight, these files are moved to a directory named after the day, and renamed to include the time of the start of the interval when the recording started. This is achieved with a small C program which reads the syslog files to get the times. This program could have been written in PERL but one of us is very familiar with C. A snapshot of part of the logging directory is shown in figure 2, where compressed audio, raw PCM files, unused PCM files and a compression log can be seen.
The decision to rename the actual files was taken to facilitate convenience during soak testing. We were running the system over this time as if it were live, and hence were using it to extract logs when they were requested by presenters. Cross-referencing files with the system log was a tedious task, so automatic renaming seemed the obvious choice. Using this opportunity to refile the logs in directories corresponding to the date of logging also assisted greatly in retrieval. A more long-term consideration was that renamed files would be easier to extract via a web interface, and hence this work could probably be used in the final version also.

Figure 1: Overview of Software Cycle

6 Experience

The program has now been running for a significant time. Two problems with the design have emerged. The first was the change to winter time, which happened a few days after going live. Our logs, and hence times, were based on local time, as that seemed to be closest to what the users would require. But with the clock being put backwards, times repeat. Clearly we need to maintain both times in the logs, and append the time zone to the ultimate file name, or some similar solution. But how did we manage this shift backwards without loss of data? The answer is in the second problem. We are capturing the raw station output in 44.1KHz 16bit stereo. Every five minutes a new file is started. Actually we are not starting files by time but by sample count (13230000 frames). As was predicted, the sound card was not sampling at exactly CD rate, but faster, and as a result we are drifting by 19 seconds a day. In itself this is not a problem, and indeed it rescued the possible data loss from the introduction of winter time, but it is less convenient for the student DJs who want a copy of their program. The suggestion is that the files should be aligned on five minute boundaries by the clock.
This entails monitoring the clock to decide on a change of file, which would be a considerable departure from the simplicity of the design. Exactness is not important, but we would like to be less than a minute adrift. Our revised code, not yet in service, makes the switch of files after reading the clock, and additional care is needed to avoid clock drift.

Figure 2: Part of Directory for Running System

It was this clock drift, which can be seen in figure 3, that saved the situation when we changed from summer time to winter time. If we had implemented the time alignment method then the file names would have repeated for the one hour overlap (1am to 2am is repeated in the UK time scheme), but as the soundcard had read faster, the second hour was fortuitously aligned to a different second, and so we could rescue the system manually. The zone change from winter to summer involves the non-existence of an hour and so raises no problems. Before next autumn we need to have deployed a revised system. It has been suggested that using the Linux linking mechanisms we could maintain all logging in universal time, and create separate directories for users to view.

There was one further problem. When the syslog file got large the usual logrotate mechanism started a new file. But as our renaming system reads the syslog, it subsequently missed the transfer and rename of some output. This was fixed by hand intervention, but at present we do not have a good solution to this; nasty solutions do occur to us though!
Another minor problem encountered during initial testing was with the hardware: it seems that under Linux older versions of the SoundBlaster chipset could not handle both recording from an input stream and outputting one simultaneously. The output stream took priority, so unless we specifically muted the output channels on the card, no sound was captured. This is only mentioned here in case an attempt is made to duplicate this work, and so to avoid the hours of frustration endured during our initial tests. We expect that similar minor problems will appear later as we develop the system, but the main data collection cycle seems most satisfactorily robust. Most importantly, despite being forced to downgrade our hardware, the system performs within its new limitations without loss of data during compression phases — even during periods of additional load from users (i.e. when logs are being extracted). There is sufficient slack for us to consider adding additional services.

7 Conclusions

Tests have demonstrated that our original aim, of a cheap data logging system, has been easily achieved — the whole system cost only £60 in newly purchased materials. What is also clear is that the whole structure of the Linux and Open Source movements made this much more satisfactory than we feared. The efficiency of Linux over, say, Windows meant that we could use what was in effect cast-off hardware. The ability to separate the data collection from the compression and filing allowed a great simplification in the design, and so we were able to start the logging process days before we had thought about the disposal process, and before the start of the university term. The crontab mechanism enables us to delete whole directories containing a single day after 60 days have passed. We still need to implement a web interface for extracting programs, but the availability of PERL, Apache, and all the related mechanisms suggests that this is not a major task.
Figure 3: Part of an Archive Directory

Although it is not a major problem to write, the extraction web page for the system will be the only part most users see, and hence design for ease of use must be key. Currently the idea is to incorporate this into the current URB online presence (URB, 2004), which allows members of the station to log into a members area. We will add a logging page, which presents users with a very simple interface specifying only the beginning and end points of the period required. With their user-names tied to a download destination, presenters will always find their logs in the same place, limiting the possibility of confusion. Being based on such reliable software packages, we are sure that if we ever have sufficient funds for an upgrade, for example to a digital input feed, this can easily be accommodated. We are aware that the current system lacks redundancy, and a secondary system is high on our wish-list. More importantly, we have not yet completed a physically distributed back-up in case the next machine fire does destroy the disk. We are confident that as the radio station continues to be the sound-track of the University of Bath, in the background we are listening to all the sounds, logging them and making them available for inspection. With this infrastructure in place we might even consider a "play it again" facility, if the legal obstacles can be overcome. Naturally, as the program is based on open source code, we are willing to provide our system to anyone else who has this or a similar problem.
8 Acknowledgements

Our thanks go to Simon Lee, the instructor of Bath University Students' Union T'ai Chi Club, for tolerating our discussions before (and sometimes during) training sessions.

References

Not. 2004. NotLame MP3 encoder. rsise.anu.edu.au/~conrad/not_lame/.
Novell. 2004. de-de/linux/suse.
OffComm. 2003. Office of Communications Document: Long-Term Restricted Service Licences. codes_guidelines/broadcasting/radio/guidance/long_term_rsl_notes.pdf, January.
ogg. 2004. Ogg front end. projects/oggenc/.
Por. 2004. PortAudio — portable cross-platform Audio API.
SRA. 2004. SRA: Student Radio Association.
URB. 2004. URB: University Radio Bath.

AGNULA/DeMuDi - Towards GNU/Linux audio and music

Nicola Bernardini, Damien Cirotteau, Free Ekanayaka, Andrea Glorioso
Media Innovation Unit - Firenze Tecnologia
Borgo degli Albizi 15
50122 Firenze, Italy

Abstract

AGNULA (acronym for "A GNU/Linux Audio distribution", pronounced with a strong g) is the name of a project which was funded until April 2004 by the European Commission (number of contract: IST-2001-34879; key action IV.3.3, Free Software: towards the critical mass). After the end of the funded period, AGNULA is continuing as an international, mixed volunteer/funded project, aiming to spread Free Software in the professional audio/video arena. The AGNULA team is working on a tool to reach this goal: AGNULA/DeMuDi, a GNU/Linux distribution based on Debian, entirely composed of Free Software, dedicated to professional audio research and work. This paper1 describes the current status of AGNULA/DeMuDi and how the AGNULA team envisions future work in this area.

Keywords

AGNULA, audio, Debian

1 The AGNULA project - a bit of history

In 1998 the situation of sound/music Free Software applications had already reached what could be considered well beyond the initial pioneering stage.
A website, maintained by musician and GNU/Linux2 enthusiast Dave Phillips, was already collecting all possible sound and music software running on GNU/Linux architectures. At that time, the biggest problem was that all these applications were dispersed over the Internet: there was no common operational framework, and each and every application was a case-study by itself.

1 This paper is Copyright (c) 2004 Bernardini, Cirotteau, Ekanayaka, Glorioso and Copyright (c) 2004 Firenze Tecnologia. It is licensed under a Creative Commons BY-SA 2.0 License (see legalcode).
2 Throughout the document, the term GNU/Linux will be used when referring to a whole operating system using Linux as its base kernel, and Linux when referring to the kernel alone.

A natural development followed shortly after, when musician/composer/programmer Marco Trevisani proposed to a small group of friends (Nicola Bernardini, Maurizio De Cecco, Davide Rocchesso and Roberto Bresin) to create LAOS (the acronym of Linux Audio Open Sourcing), a binary distribution of all essential sound/music tools available at the time, including website diffusion and support. LAOS came up too early, and it did not go very far. But in 2000, times were riper.

AGNULA has constituted a major step in the direction of creating a full-blown Free Software infrastructure devoted to audio, sound and music.

• The freedom to redistribute copies so you can help your neighbor (freedom 2);
• The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.

The most famous of such licenses is probably the GNU General Public License, which is the founding stone of the Free Software Foundation's effort to build a completely free operating system, GNU (GNU's Not Unix). This is not the right place to describe the concepts and the history of the Free Software movement as it would deserve.
Suffice it to say that the possibility to use, study, modify and share computer programs with other people is of paramount importance to the everyday life of creators (i.e. composers), professional users (i.e. sound engineers, performers) and researchers. This distinction is of course artificial, since all of us can be creators, professional users and researchers at specific moments of our lives. But this taxonomy can work as a simple tool to better understand the pros of using Free Software in everyday life and work:

• Creators can use tools which don't dictate what they should do, instead being easily modifiable into something that does what they want. The non-physical nature of software makes for a very convenient material to build with; even though the creator might not have the needed technical skills and knowledge to modify the program to best suit his/her needs, s/he can always ask someone else to do it; on a related note, this kind of request makes for a potentially (and in some key areas, factually) very thriving marketplace for consultants and small businesses;

• Professional users have at their disposal a series of tools which were often thought out and designed by other professional users; they can interact more easily with the software writers, asking for features they might need or reporting bugs so that they are corrected faster (some would say "at all"). They can base their professional life not on the whim of a single company whose strategies are not necessarily compatible with the professional user's own

2 Free Software and its applications in the "pro" audio domain

When describing the AGNULA project, and the AGNULA/DeMuDi distribution specifically, a natural question arises - why is it necessary or desirable to have a completely Free Software based distribution (whether based on the Linux kernel or not is not the point here) for audio professionals and research in the sound domain?
Free Software3 is the set of all computer programs whose usage and distribution licenses (think about the "EULA" or "End User Licensing Agreements" that so many users have come to know throughout the years) guarantee a precise set of freedoms:

• The freedom to run the program, for any purpose (freedom 0);
• The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this;

3 We tend to prefer this term, rather than "Libre Software", even if the former term is inherently ambiguous because of the english term "free" — which can mean "free as in free beer" or "free as in free speech". Free Software is, of course, free as in free speech (and secondarily, but not necessarily, as in free beer). Usage of the term "Libre Software" arose in the european context trying to overcome this ambiguity with a term, libre, which is correct in the french and spanish languages and is understandable in italian and other european languages. However, it is not universally accepted as an equivalent of "Free Software" and its usage can induce confusion in readers and listeners — we therefore prefer to stick to the traditional, albeit somewhat confusing, terminology.

plans, but on a shared ecosystem of software which won't easily disappear — if the original software authors stop maintaining the software, someone else can always replace them;

• Researchers can truly bend the tools they have at their disposal to their maximum extent, something which is often very hard to do with proprietary software (even with well-designed proprietary software, as it is basically impossible to understand all users' requirements in advance).
They can count on computer programs which have been deeply scrutinized by a peer-review process which finds its counterpart only in the scientific community tradition4, as opposed to the habit of proprietary software to completely obscure the "source code" of a program, and all the bugs with it. Last, not least, for all those researchers who use software programs not as simple tools but as bricks in software development (as often happens today in computer-assisted composition and more generally in sound research), the possibility to draw from an immense database of freely available, usable and distributable computer programs can prove an incredible advantage, especially when considering the cost of proprietary computer programs and the financial situation of most research institutions nowadays.5

In the end, one might ask whether creativity is truly possible without control of the tools being used — a control which Free Software guarantees and proprietary software sometimes grants, but more often than not manipulates for purely economical reasons. This is not an easy question to answer at all — there are many subtle issues involved, which span the fields of economics, psychology, engineering, sociology, etc. The AGNULA project actually believes that creativity is very

4 This is not a coincidence, as the GNU project was basically born in the Artificial Intelligence Laboratories at the M.I.T.
5 It should be noted, however, that whilst monetary costs are of course a strong variable of the equation, the central importance of Free Software in research is not related to money itself. Having free (i.e. gratis) software which is not free (i.e. not libre) can be an ephemeral panacea, but in the long run it simply means tying oneself and one's own research strategy to somebody else's decisions.
difficult without such control,6 but it's unquestionable that the subject would deserve a fairer treatment, through cross-disciplinary studies able to span the whole range of fields outlined above.

3 The AGNULA/DeMuDi framework

The framework of AGNULA/DeMuDi is the "classical" environment one can expect from a GNU/Linux system running audio applications. The first component is the Linux kernel, patched to turn it into an efficient platform for real-time applications such as audio applications. Then the ALSA drivers allow the usage of a wide range of soundcards, from consumer grade to professional quality. On top of the drivers runs the Jack server, which provides low latency, synchronicity and inter-application communication. Last but not least, the LADSPA plugin format is the standard for audio plugins on the GNU/Linux platform.

3.1 The Linux kernel

3.1.1 Is the Linux kernel suitable for audio applications?

The heart of the AGNULA/DeMuDi distribution is the Linux kernel. However, since Linux was originally written for general purpose operating systems (mainly for servers and desktop applications) as a non-preemptive kernel, it was not really useful for real-time applications. Truly showing the power of Free Software, several improvements to the kernel scheduler turned it into a good platform for a Digital Audio Workstation (DAW). To face this limitation two strategies have been adopted: the preemption patch and the lowlatency patch.

3.1.2 Preemption patch

Originally created by MontaVista and now maintained by Robert M. Love,7 this patch redesigns the kernel scheduler and redefines the spinlocks from their SMP-specific implementation to preemption locks.
This patch allows the Linux scheduler to be preemptive: when an interrupt of higher priority occurs, the kernel preempts the current task and runs the higher priority task, except inside specific critical sections (such as spinlocks, or when the scheduler itself is running). This strategy has proven its efficiency and reliability and has been included in the new stable releases of the kernel (2.6.x).

6 This belief has become a sort of mantra, as is stated on our t-shirts: "There is no free expression without control on the tools you use".

3.1.3 Lowlatency patch

Introduced by Ingo Molnar and improved by Andrew Morton, the lowlatency8 patch introduces specific conditional rescheduling points in some blocks of the kernel. Even if the concept of this patch is quite simple, it imposes a very high maintenance burden, because the conditional rescheduling points are spread all over the kernel code without any centralization.

3.1.4 Which patch is the best?

We tested the 2.4.24 kernel with the methodology of (Williams, 2002).9 We used realfeel10 while running the Cerberus Test Control System11 to stress the kernel. 5,000,000 interrupts were generated at a frequency of 2048 interrupts per second, and the scheduling latency was measured for each interrupt on an Intel Centrino 1.4 GHz with 512 MB of RAM. The result for the non-patched kernel (see Figure 1), with a maximum latency of 48.1 ms, makes this kernel unsuitable for real-time applications. The patches greatly improve the situation. The lowlatency patch provides the better results: a lower maximum latency and the highest percentage of low-latency interrupts. The optimal choice seems to be the combination of both.
The combination of the patches has also proven to be more reliable after a long uptime (see (Williams, 2002)).

Figure 2: Lowlatency + preempt 2.4.24 and preempt 2.6.5 scheduler latency.

Even if AGNULA/DeMuDi still provides a 2.4.x kernel, some preliminary tests show that the new stable kernel (2.6.x) provides a better scheduler and will therefore be very suitable for an audio platform. The preempt patch is now directly shipped with the vanilla kernel. The maximum latency measured for the 2.6.5 kernel is 0.7 ms, and the percentage of interrupts being served within 0.1 ms is significantly higher than for any version of the 2.4.24 kernel.

3.1.5 Capability patch

The third patch applied to the kernel does not improve the performance of the system, but allows non-root users to use the real-time capabilities of Linux. It is particularly useful to run the Jack audio server (see 3.3) as a normal user.

3.2 ALSA

ALSA (Advanced Linux Sound Architecture) is a modular software framework which supports a wide range of soundcards,12 from consumer grade to professional quality. The ALSA drivers also provide an OSS/Free emulation to allow compatibility with legacy applications. ALSA is now the standard audio subsystem of the 2.6.x Linux kernels (replacing OSS, which was the standard throughout the 2.4.x series). ALSA also provides an API and a user-space library (libasound).

Figure 1: Vanilla vs lowlatency and preempt 2.4.24 scheduler latency.

3.3 The Jack Audio Connection Kit

The Jack Audio Connection Kit (Jack) can be considered the real user-space skeleton of AGNULA/DeMuDi. This audio server runs on top of the audio driver (ALSA, OSS or Portaudio) and allows the different audio applications to communicate with each other.
While other audio servers exist (aRts and esd among others), Jack is the only one which has been designed from the ground up for professional audio usage: it guarantees low-latency operation and synchronicity between different client applications. Therefore it has become a de facto standard for professional audio on GNU/Linux systems, and the majority of the applications included in the AGNULA/DeMuDi distribution are Jack-compliant ("jackified", to use the relevant jargon). Another reason for Jack's success is the simple, high-level but powerful API that it provides, which has greatly facilitated the jackification of audio applications. Last but not least, Jack also provides a master transport which allows for simultaneous control of different applications (start/pause/stop).

9 We invite the reader to consult this paper for a more detailed explanation of how the kernel scheduler works and of the two patches.

12 See the (not-so-up-to-date) soundcards matrix on the ALSA web pages to have an idea of the number of soundcards supported.

Table 1: Distribution of the latency measurements for the different kernels

Kernel             | max(L) (ms) | L<0.1ms (%) | L<0.2ms (%) | L<0.5ms (%) | L<1ms (%) | L<10ms (%) | L<50ms (%)
2.4.24 vanilla     | 48.1        | 90.2182     | 97.3432     | 99.9768     | 99.9801   | 99.9983    | 100
2.4.24 lowlatency  | 1.8         | 99.1168     | 99.9679     | 99.9976     | 99.9997   | 100        | 100
2.4.24 preempt     | 4.8         | 99.5404     | 99.9115     | 99.9311     | 99.9567   | 100        | 100
2.4.24 both        | 1.8         | 99.4831     | 99.9643     | 99.9973     | 99.9998   | 100        | 100
2.6.5 preempt      | 0.7         | 99.9685     | 99.9878     | 99.9982     | 100       | 100        | 100

3.4 The LADSPA plugins

LADSPA, which stands for Linux Audio Developer's Simple Plugin API, is the VST equivalent on GNU/Linux systems. It provides a standard way to write audio plugins.
The majority of the applications included in AGNULA/DeMuDi support this format; since a number of high-quality plugins are available and non-compliant applications are "changing their mind", it is apparent that LADSPA is the de facto standard as far as audio plugins are concerned.

Sound Editors: The choice of the sound editors included in AGNULA/DeMuDi illustrates the versatility of the distribution: it goes from the complex but extremely powerful Snd to the user-friendly and straightforward Audacity for the time domain. Frequency-domain editing is possible with ceres3.

Multitracker: Considered one of the major audio applications for GNU/Linux, Ardour is not only an excellent multitrack recorder: it also "caused" the development of Jack, as the author of both programs, Paul Davis, originally developed Jack to fulfil a need he had for Ardour. Ecasound is a robust non-GUI alternative for multitrack recording.

Interactive Graphical Building Environments: Free Software is very strong in this field, with two well developed applications which have been enjoying tremendous success for years: jMax and Pure Data (better known as Pd).

Sequencers: Two sequencers amongst others are worth mentioning: Rosegarden and MusE. While originally they were pure MIDI sequencers, both now have audio capabilities which turn them into complete musical production tools.

Sound Processing Languages: A wide choice of compositional languages, like Csound, SuperCollider and Common Lisp Music, is available. It may be noticed that the first two were re-licensed, under the GNU LGPL (GNU Lesser General Public License) and the GNU GPL respectively, during the funded lifetime of the AGNULA project.
4 Applications

AGNULA/DeMuDi doesn't provide all the music/audio programs available for the GNU/Linux platform; the goal is to provide a thought-out selection of the "best" ones, allowing every kind of user to choose from consumer-grade to professional-grade applications. Even after the reduction process the original list underwent, the number of applications included in AGNULA/DeMuDi (100+) obliges us to present a restricted but representative overview. A complete list of the available applications is included either in the distribution itself or online.13

Software synthesizers: A good range of software synthesizers is provided, including tools for modular synthesis (AlsaModularSynth, SpiralSynthModular); for additive and subtractive synthesis (ZynAddSubFX); and dedicated synthesis/compositional languages, such as Csound and SuperCollider. Last but not least, fluidsynth and TiMidity++ allow sample-based synthesis. In the attempt to distribute only Free Software, a Free GUS patch set, freepat, is also provided with TiMidity++. The patch set is not complete (it still misses some instruments to cover the General MIDI map), and this raised our perception that free content (like free samples or free loops) is a crucial need in order to provide a totally Free audio platform.

Notation: The last category is particularly well represented with the professional-grade automated engraving system Lilypond. While Lilypond provides a descriptive language for musical scores, it is also a back-end for different applications, such as Rosegarden or the dedicated notation interface NoteEdit.

5.1 Development infrastructure

The development infrastructure, currently hosted on the AGNULA server,14 will be moved to Alioth,15 and A/DeMuDi source packages will be tracked using the official Debian Subversion server.
Every major Custom Debian Distribution is already registered on Alioth,16 and having them all in a single place helps to exchange code, automate common services and spawn new CDDs. Moreover, all Debian developers are members of Alioth, and having a Debian-related project registered on Alioth makes it easier for the Debian community to notice and possibly join it.

5.2 Mailing lists

All user-level discussions shall be carried out directly on official Debian mailing lists. Probably due to historical reasons, AGNULA/DeMuDi is now often perceived as a distribution different from, or derived from, Debian, as other popular projects are.17 This attitude somehow conflicts with the concept of a Custom Debian Distribution and its advantages. One of the goals of the AGNULA project was, and is, to improve the quality of Debian as far as audio and multimedia are concerned, and this effort will be carried out directly inside Debian, with the work of Debian maintainers. AGNULA/DeMuDi releases shall be considered a comfortable way to install Debian for audio and multimedia work. Every issue concerning AGNULA/DeMuDi actually concerns Debian, and it makes sense to discuss it on Debian mailing lists, where one can get in touch with and receive support from a much larger community than AGNULA. These lists are of particular interest for the AGNULA community:

5 Continuation after the end of the funded phase

AGNULA/DeMuDi gave rise to fairly large interest. Even after the end of the funded phase, users' feedback has constantly increased, as have the requests for further enhancements. AGNULA/DeMuDi being a Free Software project, these conditions naturally favoured its continuation. As a matter of fact, over the past months the distribution has kept improving, and it has now achieved most of its fundamental goals along with a certain degree of maturity and stability. Nevertheless, the project is probably encountering growth limits.
At the moment the improvement of the distribution depends almost exclusively on the "centralised" effort of the AGNULA team. As computer-based audio and multimedia processing are very wide fields, a distribution aiming to embrace them all from different perspectives needs to actively involve different communities. The time has come to exit the prototype and experimental phase and put A/DeMuDi in a wider picture. Here follow some steps that the AGNULA team is going to undertake during the next months in order to tighten the connection with all those projects/communities whose goals, spirit and people are closely related to A/DeMuDi.

Debian-Multimedia:18 for audio and multimedia Debian-specific issues.

Debian-User:19 for generic user support in English. Furthermore, other dedicated mailing lists (Debian-French, Debian-Italian, Debian-Spanish, Debian-Russian, Debian-Japanese, etc.20) offer user support in various languages.

Moreover, we encourage joining the Linux Audio mailing list21 for all discussions on using Linux for audio purposes.

5.3 Quality assurance

The AGNULA team is going to promote the birth of a quality assurance group dealing with audio and multimedia Debian packages. While AGNULA/DeMuDi had fairly large success among users, creating an active community around the project, it is remarkable that, apart from a few cases, the same thing did not happen with respect to the developers, who generally preferred to stick to Debian and coordinate themselves through the Debian-Multimedia group. Debian-Multimedia is an official Debian sub-project started by Marco Trevisani (former technical manager of AGNULA/DeMuDi) whose goals are virtually identical to those of AGNULA/DeMuDi. The activity of the group is not as intense as AGNULA/DeMuDi's, but it is constant in time, and it has achieved some high quality results (e.g. good packaging for the JACK Audio Connection Kit).
Currently Debian-Multimedia is simply a mailing list, and no web page has yet been published to describe the project, as happened for other Debian groups.22 The Debian-Multimedia sub-project not only represents the ideal door for AGNULA/DeMuDi to enter Debian, but can also be considered a reference point for other Debian-based distributions dealing with audio and multimedia (e.g. Medialinux), and it would make it possible to gather the various efforts under the same hat. Besides the tasks which Debian-Multimedia is already successfully carrying on, the group would:

• be a reference point for the audio/multimedia subset of Debian, assuring coherence and usability
• deal with a well defined set of packages
• provide bleeding-edge applications
• test packages and look for possible bugs
• discuss design and interface issues
• maintain a FAQ of the Debian-Multimedia mailing list

6 Conclusions

The AGNULA project, originally funded by the European Commission, is now continuing to pursue its goal of making Free Software the best choice for audio/video professionals on a volunteer/paid basis. The history of the AGNULA project, AGNULA/DeMuDi's current status and its foreseeable future have been shown, as well as the general philosophy and technical beliefs that are behind the AGNULA team's choices. The AGNULA team does believe that a positive feedback loop has been spawned between Debian and the fast-evolving domain of GNU/Linux audio applications. As a matter of fact, a previously weak link in the chain between audio professionals, musicians and composers on one side and Free Software developers on the other has been significantly strengthened. This result can be considered the basis of a future adoption of Free Software tools by people who formerly had no alternative to proprietary software, along with all the implications of such a process in the educational, social, artistic and scientific fields.
7 Acknowledgements

As the reader may expect, projects such as AGNULA/DeMuDi are the result of the common effort of a very large pool of motivated people. And indeed, giving credit to every deserving individual that contributed to these projects would probably completely fill the space allotted for this paper. Therefore, we decided to make an arbitrarily small selection of those without whose help AGNULA/DeMuDi would probably not exist. We would like to thank Marco Trevisani, who has been pushing the envelope of a Free audio/music system for years, Dave Phillips, Günter Geiger, Fernando Lopez-Lezcano, François Déchelle and Davide Rocchesso: all these people have been working (and still work) on these concepts and ideas since the early days. Other people that deserve our gratitude are: Philippe Aigrain and Jean-François Junger, the European Commission officials that have been promoting the idea that AGNULA was a viable project, against all odds, inside the Commission itself; Luca Mantellassi and Giovanni Nebiolo, respectively President of Firenze's Chamber of Commerce and CEO of Firenze Tecnologia, for their support: they have understood the innovative potential of Free Software much better than many so-called open-source evangelists. Finally, we wish to thank Roberto Bresin and the rest of the Department of Speech, Music and Hearing (KTH, Stockholm) for kindly hosting the AGNULA server.

References

François Déchelle, Günter Geiger, and Dave Phillips. 2001. DeMuDi: The Debian Multimedia Distribution. In Proceedings of the 2001 International Computer Music Conference, San Francisco, USA. ICMA.

Clark Williams. 2002. Linux scheduler latency. Technical report, Red Hat Inc.
SURVIVING ON PLANET CCRMA, TWO YEARS LATER AND STILL ALIVE

Fernando Lopez-Lezcano, nando@ccrma.stanford.edu
CCRMA, Stanford University

ABSTRACT

Planet CCRMA at Home [2] is a collection of packages that you can add to a computer running RedHat 9 or Fedora Core 1, 2 or 3 to transform it into an audio workstation with a low-latency kernel, current ALSA audio drivers and a nice set of music, MIDI, audio and video applications. This presentation will outline the changes that have happened in the Planet over the past two years, focusing on the evolution of the Linux kernel that is part of Planet CCRMA.

1. INTRODUCTION

Creating worlds is not an easy task, and Planet CCRMA is no exception. The last two years have seen a phenomenal expansion of the project. The history of it will reflect, I hope, part of the recent history of Linux Audio projects and kernel patching.

2. A BIT OF HISTORY

For those of you that are not familiar with Planet CCRMA [2] a bit of history is in order. At CCRMA (the Center for Computer Research in Music and Acoustics at Stanford University) we have been using Linux as a platform for research and music production since the end of 1996 or so. Besides the software available in the plain distribution I installed at the time, I started building and installing custom music software in our main server (disk space was not what it is today, and there were not that many Linux machines at that time; we were dual booting some PCs between Linux and NEXTSTEP, which was the main computing platform at CCRMA). I don't need to say that sound support for Linux in 1997 was a bit primitive. Not many sound cards were supported, and very few existed that had decent sound quality at all. Low latency was not a concern, as just getting reliable sound output at all times was a bit of a challenge.
Eventually the sound drivers evolved (we went through many transitions: OSS, ALSA 0.5 and then 0.9), and patches became available for the Linux kernel that enabled it to start working at the low latencies suitable for reliable realtime audio work, so I started building custom monolithic kernels that incorporated those patches and all the drivers I needed for the hardware included in our machines (building monolithic kernels was much easier than trying to learn the details of loadable kernel modules :-).

But over time hard disks became bigger, so that there was now more free space on the local disks, and the number of Linux machines kept growing, so the software installed on the server was going to become a network bottleneck. Also, some adventurous CCRMA users started to install and try Linux on their home machines, and wanted an easy way to install the custom software available in all CCRMA workstations. I was installing RedHat, so I started to use RPM (the RedHat Package Manager) to package a few key applications that were used in teaching and research (for example the Snd sound editor, the CM/CLM/CMN Common Lisp based composition and synthesis environment, Pd, and so on and so forth). At first I just stored those packages in a network accessible directory and told potential users: "there you are, copy the packages from that directory and install them in your machine". A simple web site with links to the packages was the next step, and installation instructions were added as I got feedback from users on problems they faced when trying to install the packages. Finally the project was made "public" with a post announcing it in the Cmdist mailing list, an email list for users of Snd and CM/CLM/CMN (although I later learned that some users had discovered the existence of the packages through search engines, and were already using them). The announcement happened on September 14th, 2001. Time flies. This changed the nature of the project.
As more people outside of CCRMA started using the packages, I started to get requests for packaging music software that I would not have thought of installing at CCRMA. The number of packages started to grow, and this growth benefited both CCRMAlites and external Planet CCRMA users alike. As the project grew bigger (and this was never an "official" project, it was a side effect of me packaging software to install at CCRMA), the need for a higher level package management solution became self-evident. The dreaded "dependency hell" of any package based distribution was a problem. More and more packages had external dependencies that had to be satisfied before installing them, and that needed to be automatic for Planet CCRMA to be really usable. At the beginning of 2002, apt for rpm (a port of the Debian apt tool by Conectiva) was incorporated into Planet CCRMA, and used for all package installation and management. For the first time Planet CCRMA was reasonably easy to install by mere mortals (oh well, mere geek mortals). Fast forward to today: there are more than 600 individual packages spanning many open source projects in each of the supported branches of RedHat/Fedora Core. You can follow the external manifestation of these changes over time by reading the online ChangeLog that I have maintained as part of the project (a boring read, to say the least).

3. AT THE CORE OF THE PLANET

Since the announcement of the project outside CCRMA in September 2001, the base distribution on which it was based (RedHat) has seen significant changes. In July 2003 RedHat stopped releasing commercial consumer products; the last RedHat consumer version was 9, released in March 2003. The Fedora Project was created, with the aim of being a community driven distribution with a fast release cycle that would also serve as a testbed for new technologies for the enterprise line of RedHat products.
Fedora Core 1 was the first release, followed by Fedora Core 2 and 3, at approximately 6-month intervals. The rapid release cycle, plus the introduction of new technologies in the releases, has made my life more "interesting". In particular, Fedora Core 2 saw the introduction of the 2.6 kernel, which created a big problem for rebuilding the Planet CCRMA package collection on top of it. The problem: a good, reliable low latency kernel did not exist. At that point in time 2.6 did not have adequate low latency performance, despite the assurances heard during the 2.5 development cycle that new infrastructure in the kernel was going to make it possible to use a stock kernel for low latency tasks. Alas, that was not possible when Fedora Core 2 was released (May 2004).

4. THE KERNELS

Up to Fedora Core 1 the base distribution used a 2.4 kernel, and Planet CCRMA provided custom kernel packages patched with the well known low latency (by A. Morton) [6] and preemptible kernel (by R. Love) [5] patches (the latter originally created by MontaVista [4]), in addition to the tiny capabilities patch that made it possible to run the Jack Audio Connection Kit server [15] and friends with realtime privileges as non-root users. Fedora Core 2 changed the equation with the introduction of the 2.6 kernel. Running a 2.4 kernel on top of the basic distribution presented enough (small) compatibility problems that I discarded the idea very early in my testing cycle. And 2.6 had very poor latency behavior, at least in my tests. As a consequence, until quite recently I still recommended using Fedora Core 1 for new Planet CCRMA installs. For the first 2.6 kernels I tested (March 2004) I used a few additional patches by Takashi Iwai [7] that solved some of the worst latency problems. But the results were not very usable. Ingo Molnar and Andrew Morton again attacked the problem, and a very good solution evolved that is now available and widely used.
Ingo started writing a series of patches for realtime preemption of the 2.6 kernel [8] (named at the beginning the "voluntary preemption" patchset). This set of patches evolved on top of the "mm" patches by Andrew Morton [9], the current equivalent of the old unstable kernel series (there is no 2.7 yet!; experimental kernel features first appear in the "mm" patches, and then the successful ones slowly migrate to the official release candidates and finally to the stable releases of the Linux kernel). Ingo did very aggressive things in his patches, and the voluntary preemption patches (later renamed realtime preemption patches) were not the most stable thing to run on your computer, if they booted at all (while tracking successive releases I must have compiled and tried out more than 40 fully packaged kernels; for details just look at the changelog in the spec files of the Planet CCRMA 2.6 kernels). I finally released a preliminary set of kernel packages on December 24th, 2004, using version 0.7.33-04 of Ingo's patches, one of the first releases that managed to boot on all my test machines :-)

What proved to be interesting and effective in Ingo's patches gradually percolated to the not-so-bleeding-edge "mm" patches by Andrew Morton, and bits and pieces of "mm" gradually made it upstream to the release candidates and then to the stable kernel tree. So, little by little, the latency performance of the stock kernel improved. By the time of the release of 2.6.10 (December 24th, 2004; again, just a coincidence) it was pretty good, although perhaps not as good as a fully patched 2.4 kernel. But keep in mind that this is the stock kernel with no additional patches, so the situation in that respect is much, much better than it was with the old stock 2.4 kernel. The end result for Planet CCRMA dwellers at the time of this writing is two sets of kernels, currently available for both Fedora Core 2 and 3.

4.1. The "stable" kernel

The current version is 2.6.10-2.1.ll.
2.6.10 turned out to be an unexpected (at least by me) milestone in terms of good low latency behavior. Finally, a stock kernel that has good low latency performance, out of the box. I would say it is close to what a fully patched 2.4 kernel could do before. The package also adds the realtime lsm kernel module; more on that later.

4.2. The "edge" kernel

Currently 2.6.10-0.6.rdt, based on Ingo Molnar's realtime preempt patch version 0.7.39-02. This is a more bleeding-edge kernel, with significantly better low latency performance, based on Ingo Molnar's realtime preemption patches. The downside of trying to run this kernel is that it still (at the time of this writing) does not work perfectly in all hardware configurations. But when it works, it works very well, and users have reported good performance with no xruns running with two buffers of 64 or even 32 samples! Amazing performance. I'm still being a bit conservative in how I configure and build this kernel, as I'm not currently using the REALTIME RT configuration option, but rather the REALTIME DESKTOP option (thus the "rdt" in the release name). The penalty in low latency behavior is worth the extra stability (at this time). I hope that the RT option (which gets the Linux kernel close to being a "hard realtime" system) will evolve and become as stable as the REALTIME DESKTOP configuration. These packages also include the realtime lsm module.

4.3. Small details that matter

But a kernel with good low latency is not nearly enough. You have to be able to run, for example, Jack, from a normal non-root account. Enter Jack O'Quinn [10] and Torben Hohn. Their efforts created a kernel module, part of the kernel security infrastructure, that enables applications run sgid to a certain group, or run by users belonging to a group, or run by any user (all of this configurable, even at runtime), to have access to realtime privileges without having to be root.
This is more restrictive and secure than the old capabilities patch and, at the time of this writing and after a very long discussion on the Linux Kernel mailing list (see [11] and [12]), has been incorporated into the "mm" kernel patches. Hopefully it will eventually percolate down to the stable kernel tree at some point in the future. It was a tough sell advocating for it on the Linux Kernel mailing list; many thanks to Jack O'Quinn, Lee Revell and others for leading that effort, and to Ingo Molnar and Con Kolivas for proposing workable alternatives (that were later discarded). When the realtime patch becomes part of the standard kernel tree, a stock kernel will not only have decent low latency performance but will also work with software that needs realtime privileges like Jack does (including the ability of applications to run with elevated SCHED_FIFO scheduling privileges and to lock down memory so that it is not paged to disk).

But this was not enough for a Planet CCRMA release. Ingo Molnar's realtime preemption patch changed the behavior of interrupt requests: the lower halves of the interrupt handlers (if I understand correctly) are now individual processes with their own scheduling class and priorities, and a vital part of tuning a system for good low latency behavior is to give them, and Jack itself, the proper realtime priorities, so that the soundcard and its associated processes have more priority than other processes and peripherals. I was trying to find a solution to this that did not involve users looking around /proc and tuning things by hand, when Rui Nuno Capela sent me a neat startup service script called rtirq that does just that: it sorts all interrupt service routines and assigns them decent priorities. Together with another small startup script I wrote that loads and configures the realtime lsm module, they make it possible to package an easy to install, turn-key solution for a low latency 2.6 based kernel.

4.4.
The core packages

The end result in Planet CCRMA is two sets of meta packages that reduce the installation and configuration of a 2.6 kernel to two apt-get invocations (installing planetccrma-core for the safer kernel and planetccrma-core-edge for the riskier one that offers better low latency performance). This, coupled with the fact that due to 2.6 both Fedora Core 2 and 3 use ALSA by default, made installing Planet CCRMA a much easier process when compared to Fedora Core 1 or RedHat 9 and their 2.4 kernels.

5. CONTINENTS AND ISLANDS

A large number of applications and supporting libraries have been added to Planet CCRMA, and updated over time, since 2003. Although not as many as I would like (just take a look at the "Pipeline" web page for packages waiting to be added to the repository). The list is almost too long, but here it goes: seq24, filmgimp (later renamed to cinepaint), fluidsynth (formerly iiwusynth), the mcp ladspa plugins, hydrogen, rezound, cinelerra, mammut, csound, qarecord, qamix, qjackctl, gmorgan, ceres, pmidi, denemo, jackeq, cheesetracker, the rev ladspa plugins, qsynth, xmms-jack, jamin, vco ladspa plugins, pd externals (percolate, creb, cxc, chaos, flext, syncgrain, idelay, fluid, fftease, dyn), tap ladspa plugins, timemachine, caps ladspa plugins, xmms-ladspa, specimen, simsam, pvoc, brutefir, aeolus, fil ladspa plugins, pd vasp externals, jaaa, tap reverb editor, jackmix, coriander, liblo, jack bitscope, dvtitler, the soundtouch library, beast, phat, sooperlooper, qmidiarp, dssi. Sigh, and that's only new packages. Many, many significant updates as well. Go to the Planet CCRMA web page for links to all these (and many other) fine software packages.

6. OTHER WORLDS

Planet CCRMA is one of many package repositories for the RPM based RedHat/Fedora family of distributions. Freshrpms [13], Dag [14], Atrpms, Dries and many others constitute a galaxy of web sites that provide easy to install software.
Planet CCRMA is in the process of integrating with several of them (the so-called RpmForge project) with the goal of being able to share spec files (the building blocks of RPM packages) between repositories. That will make my work, and that of the other packagers, easier; it will reduce the inevitable redundancy of separate projects and will increase compatibility between repositories. Another world in which I also want to integrate parts of Planet CCRMA is the Fedora Extras repository. This Fedora sponsored project opened its first CVS server a short while ago and will be a centralized and more official repository of packages, but probably exclusively dedicated to augmenting the latest Fedora Core release (as opposed to the more distribution agnostic RpmForge project). With the availability of Fedora Extras the “community” part of the Fedora Project is finally arriving and I’m looking forward to becoming a part of it. 7. PLANET FORGE A short time ago I finally got all the remaining components, and finished building a new server here at CCRMA. It is a fast dual processor machine with a lot of memory and hard disk space completely dedicated to the Planet CCRMA project. The original goal was to create a fast build machine in which to queue packages to be rebuilt, as that process was fast becoming one of my main productivity bottlenecks in maintaining Planet CCRMA. A secondary, but no less important, goal is to try to create a collaborative environment in which more people could participate in the development and maintenance of Planet CCRMA packages and associated documentation. We’ll see what the future brings. A lot of work remains to be done to port my current build environment to the new machine and create a collaborative and more open environment. 8.
FUTURE DIRECTIONS One of the many things that are requested from time to time in the Planet CCRMA lists is the mythical “single media install” of Planet CCRMA (i.e.: “do I have to download all these cdroms?”). In its current form (and on purpose), a potential user of Planet CCRMA has to first install Fedora Core, and then add the kernel, drivers and packages that make up Planet CCRMA (this additional installation and configuration work has been substantially reduced in the Fedora Core 2 and 3 releases as they use ALSA by default instead of OSS). While this is not that hard, especially with the help of meta packages and apt-get or synaptic, it appears that sometimes it is too much work :-) And I have to agree, it would be much nicer to have a single cd (hmm, actually a dvd given the size of current distributions) and at the end of the install have everything ready to go, low latency kernel active, just start the applications and make some music. I have long avoided going down this road and becoming a “distribution” because of the additional work that it would involve. It is hard enough trying to keep up to date with the very fast evolution of Linux audio software. But on and off I’ve been thinking about this idea, and lately I’ve been actually doing something about it. At the time of this writing (end of February 2005) I already have a single “proof of concept” dvd with everything in it: all of Fedora Core 2 (the distro I’ve been playing with – I obviously have to do this on Fedora Core 3 as well) plus all of Planet CCRMA. This test dvd is not small – about 3G of stuff – remember, all of Fedora Core is included! Installing Planet CCRMA from it entails booting into the dvd, selecting the Planet CCRMA installation target, customizing the packages installed if desired and pressing “Install” (while going through the normal installation choices of a stock Fedora Core system install, of course). One reboot and you are up and running.
Furthermore, the dvd creation process is pretty much automatic at this point (start a series of scripts, wait for some time and out comes a dvd iso image). Of course things are not that easy. What kernel should I select for installation? The more stable one, or the more risky one that has better latency performance? How will the idiosyncrasies of this non-standard kernel interact with the Fedora Core install process? (For example, it may happen that it will fail to boot on some machines where the original Fedora Core kernel would have succeeded – and I don’t think Anaconda, the RedHat installer, would be able to deal with more than one kernel at install time.) Hopefully some or all of these questions will have answers by the time I attend LAC2005, and conference attendees will be able to test drive an “official” alpha release of Planet CCRMA, the distro (another question to be answered: why do I keep getting into a deeper pit of support and maintenance stuff??). 9. CONCLUSION It is easy to conclude that Planet CCRMA is very cool. More seriously: Planet CCRMA as a project is alive and well. As a maintainer I’m (barely) alive, but have made it to another conference, no small feat. 10. ACKNOWLEDGEMENTS The Planet CCRMA project would never have been possible without the support for GNU/Linux and Open Source at CCRMA, Stanford University, and in particular the support of Chris Chafe, CCRMA’s Director. It goes without saying that I extend my heartfelt thanks to the hundreds of committed developers whose software projects I package. Without them Planet CCRMA would not exist and I would live in a much more boring world.
11. REFERENCES
[1] The Fedora Project.
[2] The Planet CCRMA Project.
[3] Ingo Molnar: Low latency patches for 2.2/2.4.
[4] MontaVista: The Preemptible Kernel Patch (see also [5]).
[5] Robert Love: The Preemptible Kernel Patch.
[6] Andrew Morton: Low latency patches for 2.4. akpm/linux/schedlat.html
[7] Takashi Iwai: low latency tweaks.
[8] Ingo Molnar: Realtime Preemption patches for 2.6.
[9] Andrew Morton: the “mm” patches for 2.6.
[10] Jack O’Quinn: the realtime lsm kernel module.
[11] Linux Weekly News: Merging the realtime security module.
[12] Linux Weekly News: Low latency for Audio Applications.
[13] Freshrpms: package repository.
[14] Dag: package repository.
[15] The Jack Audio Connection Kit, a low latency sound server.

Linux As A Text-Based Studio
Ecasound – Recording Tool Of Choice
Julien CLAASSEN, Abtsbrede 47a, 33098 Paderborn, Germany, julien@c-lab.de

Abstract
This talk could also be called ”ecasound textbased harddisk recording”. I am going to demonstrate a few of the most important features of ecasound and how to make good use of them in music recording and production. This talk explains what ecasound is and what its advantages are, how a braille display works, ecasound’s basic features (playback, recording, effects and controllers), and a few of ecasound’s more advanced features (real multitrack recording and playback and mastering).

Keywords
audio, console, recording, text-based

1 Introduction to Ecasound
1.1 What is Ecasound?
Ecasound is a textbased harddisk recording, effects-processing and mixing tool. Basically it can operate in two ways:
• It can work as a commandline utility. Many of its features can be used from the commandline, via a whole lot of options.
• It can also be operated from a shell-like interface. This interface accepts its own set of commands, as well as commandline options.
Ecasound supports more than your usual audio io modes:
• ALSA - Advanced Linux Sound Architecture
• Jack - Jack Audio Connection Kit
• ESD - Enlightenment Sound Daemon
• Oldstyle OSS - Open Sound System
• Arts - the Arts Sound Daemon

1.2 Advantages
1. Ecasound can easily be used in shell-scripts through its commandline options. Thus it can perform some clever processing.
2. Through its shell-interface you can access realtime controls. Via its set and get commands one can change and display controller values.
3. Because ecasound does not require an X server and a lot of other GUI overhead, it is slim and fast. On a 700 MHz processor one can run an audio-server (JACK), a software synthesizer (fluidsynth) and ecasound with 3 or more tracks without problems.
4. Ecasound is totally accessible for blind people through its various textbased interfaces. Those interfaces provide full functionality!

1.3 Disadvantages
1. Its textbased interface is not as intuitive and easy to learn as a GUI for a sighted person.
2. Its audio routing capabilities still lack certain features known to some other big linux audio tools.
3. It does not provide much support for MIDI (only ALSA rawmidi for controlling effects and starting/stopping).

2 How I Work
I work with a braille display. A braille display can display 40 or 80 characters of a screen. In textmode this is a half or full line. The braille display has navigation buttons, so you can move the focus over the whole screen, without moving the actual cursor. Usually the display tracks the cursor movement, which is very useful most of the time. For the rest of the time, you can deactivate tracking of the cursor. So the best programs to use are line-oriented. Thus tools with shell-interfaces or commandline utilities are the best tools for me. N.B.: As I heard such tools are also among the top suspects for users of speech-synthesizers.

3 Usage
This chapter will give several use cases of ecasound.
3.1 Using ecasound from the command line
As already stated ecasound can – in general – be used in two ways: from the commandline and from its interactive mode. The following examples will deal with ecasound’s commandline mode.
3.1.1 Playing files from the commandline
One of the simplest uses of ecasound is playing a file from the commandline. It can look like calling any simple player – like i.e. aplay. If the ecasound configuration is adjusted correctly it looks like this:
ecasound myfile.wav
or
ecasound -i myfile.wav
The ”-i” option stands for input. If you wish to specify your output explicitly and do not want to rely on the ecasoundrc configuration file, you can do it like that:
ecasound -i myfile.wav -o alsa,p1
alsa,p1 marks the alsa output on my system configuration running ALSA. The ”-o” option means output.
3.1.2 Recording files from the commandline
It is as simple as playing files. The only thing one needs to exchange is the place of the sound-device (ALSA device) and the file (myrecording.wav). So if one intends to record from an ALSA device called ”io1” to myrecording.wav, one would do it like that:
ecasound -i alsa,io1 -o myrecording.wav
It looks just like the example from section 3.1.1 with sound-objects exchanged.
3.2 Interactive mode
Ecasound interactive mode offers a lot more realtime control over the things you mean to do like starting, stopping, skipping forward or backward etc. Thus in most cases it is more suited to the needs of a recording musician. Below there are some simple examples.
3.3 Playing a file
This method of playing a file is much closer to what one could expect of a nice player. The syntax for starting ecasound is very similar to the one from 3.1.1:
ecasound -c -i myfile.wav [-o alsa,p1]
By pressing ”h” on the ecasound shell prompt you get some basic help. For more info – when trying it at home – there is the ecasound-iam (InterActive Mode) manual page.
3.4 Interactive recording
The simplest way to record a file is almost as simple as playing a file. The only thing is you have to specify the audio-input source. Btw.: the same syntax can be used to convert files between different formats (wave-file, mp3, ogg, raw audio data...). To do a simple interactive recording, type this:
ecasound -c -i alsa,io1 -o myrecording.wav
Again you have the interactive capabilities of ecasound to support your efforts and extend your possibilities. Besides that, it is the same as in paragraph 3.1.2.
3.5 Effects in ecasound
Ecasound has two sources for effects: internal and external via LADSPA. In the following sections both are introduced with a few examples and explanations.
3.5.1 Internal effects
Ecasound comes with a substantial set of internal effects. There are filters, reverb, chorus, flanger, phaser, etc. All effect-options start with ”e”, which is good to know when looking for them in the manual pages. Here is a demo of using a simple lowpass filter on a wave-audio file:
ecasound -i myfile.wav -efl:1000
which performs a lowpass filter with a cutoff frequency of 1000Hz on the file myfile.wav and outputs the result to the default audio device.
3.5.2 External / LADSPA effects
Ecasound can also use LADSPA effects which makes it a very good companion in the process of producing and mastering your pieces. There are two different ways of addressing LADSPA effects: by name or by unique ID.
3.5.3 Addressing by name
With analyseplugin you can determine the name of a LADSPA effect like:
babel:/usr/local/lib/ladspa # analyseplugin ./decimator_1202.so
Plugin Name: "Decimator"
Plugin Label: "decimator"
Plugin Unique ID: 1202
Maker: "Steve Harris"
Copyright: "GPL"
Must Run Real-Time: No
Has activate() Function: No
Has deactivate() Function: No
Has run_adding() Function: Yes
Environment: Normal
Ports: "Bit depth" input, control, 1 to 24, default 24
"Sample rate (Hz)" input, control, 0.001*srate to 1*srate, default 1*srate
"Input" input, audio, -1 to 1
"Output" output, audio, -1 to 1
Thus one knows that ”decimator” is the name – label – of the plugin stored in decimator_1202.so. Now you can use it like that:
ecasound -i file.wav -el:decimator,16,22050
which simulates the resampling of the file ”file.wav” at 22.05 KHz.
3.5.4 Addressing by unique ID
analyseplugin not only outputs the label of a LADSPA plugin, but also its unique ID, which ecasound can also use. Mostly this way is simpler, because there is less to type and you do not have to look for upper- and lowercase letters. With the following command you can use the decimator plugin by its unique ID:
ecasound -i file.wav -eli:1202,16,22050
This command does the same as the one before. Although it looks more cryptic to the naked eye, it is really shorter and (once you are used to it) much simpler to type – this is at least my personal experience.
3.5.5 Effect presets
Another powerful feature of ecasound are effect presets. Those presets are stored in a simple text-file, usually /usr/local/share/ecasound/effect_presets. An effect preset can consist of one or more effects in series, with constant and variable parameters. What does this mean in practice? The following illustrates the use of the metronome-effect:
ecasound -c -i null -pn:metronome,120
This provides a simple clicktrack at 120 BPM. Internally the ecasound ”metronome” effect preset consists of a sinewave at a specific frequency, a filter – for some reason – and a pulse gate. This gate closes at a certain frequency given in BPM. Would you use all those effects on the commandline directly, you would have to type a lot. Besides getting typos, you could also choose very inconvenient settings. If you use the effect preset, everything is adjusted for you. The standard preset file contains a good collection to start with: from simple examples for learning, to useful things like a wahwah, metronome, special filter constellations, etc...
3.5.6 Controllers
Ecasound also offers a few controllers which you can use to change effect parameters while your music is playing. The simplest controller is a two-point envelope. This envelope starts at a given start value and moves over a period of time to a certain endvalue. In practice it could look like this: a user wants to fade in a track from volume 0 to 100 over 4 seconds:
ecasound -i file.wav -ea:100 -kl:1,0,100,4
What does the first parameter of -kl mean? This parameter is the same for all -k* – controller – options. It marks the parameter you want to change. The amplifier (-ea) has only one parameter: the volume. Thus the first parameter is 1. The second is the start value (0), meaning the volume should start at 0, the third value is the endvalue for the envelope: volume should go up to 100. The last value is the time in seconds that the envelope should use to move from start to end value. Ecasound offers more controllers than this simple one. It has a sine oscillator and generic oscillators which can be stored in a file like effect presets. Besides that you can use MIDI controllers to do some really customised realtime controlling.
3.5.7 An interactive recording with realtime control
Now a short demonstration of the features presented so far: a short and simple recording with some realtime-controlled effects. The scenario is: one synthesizer recorded with ecasound and processed by a lowpass filter which is modulated by a sinewave. This will generate a simple wahwah effect. It might look like this:
ecasound -c -i jack_auto,fluidsynth -o my_file.wav -ef3:5000,0.7,1.0 -kos:1,800,5000,0.5,0
The -ef3 effect is a resonant lowpass filter with these parameters: cutoff frequency in Hz, resonance – from 0 to 1 (usually) – and gain. Values for gain should be between 0 and 1. The -kos controller is a simple sine oscillator with the following parameters:
1. effect-parameter – parameter of the effect to modify (first parameter of -ef3 – the cutoff)
2. start-value – lowest value for the cutoff frequency
3. end-value – highest value for the cutoff
4.
frequency in Hz – the frequency at which the cutoff should change from lowest to highest values – in this case 0.5 Hz. It takes 2 seconds.
5. iphase – initial phase of the oscillator. A sinus starts at 0 and moves upwards from there. Yet one can shift the wave to the right by supplying an iphase > 0.

4 More complex work
This chapter gives some more complex usage examples of ecasound.
4.1 Chains
4.1.1 What is a chain?
A chain is a simple connection of audio objects. A chain usually consists of:
• an audio input
• effects (optional)
• an audio output
You have already seen chains, without really knowing them, because even a simple thing like:
ecasound -i file.wav
uses a chain with the default output. To explicitly specify a chain, you need to use the -a option. The above example with an explicit chain-naming, yet still unchanged behaviour, looks like that:
ecasound -a:my_first_chain -i file.wav (-o alsa,p1)
4.1.2 What is a chain setup?
A chain setup can be seen as a map of all chains used in a session. You can perhaps imagine that you can have parallel chains – for mixing audiotracks – or even more complex structures for tedious mastering and effects processing. You can store a complete chain setup in a file. This is very useful while mastering pieces. A simple example of an implicit chain setup includes all above examples: they have been chain setups with only one chain. To store chain setups in files you can use the interactive command cs-save-as, or cs-save if you’ve modified an existing explicit chain setup.
4.2 Playing back multiple files at once
Now the user can play back a multitrack recording before having generated the actual output mixdown. It could look like this:
ecasound -c -a:1 -i track1.wav -a:2 -i track2.wav -a:3 -i track3.wav -a:1,2,3 -o alsa,p1
This also demonstrates another nice simplification: one can write something like -a:1,2,3 to say that chains 1, 2 and 3 should have something in common. In this example it could be even shorter:
ecasound -c -a:1 -i track1.wav -a:2 -i track2.wav -a:3 -i track3.wav -a:all -o alsa,p1
This line does exactly the same as the last demo. The keyword all tells ecasound to apply the following options to all chains ever mentioned on the commandline.
4.3 Recording to a clicktrack
Now one can use chains to perform an earlier recording to a clicktrack:
ecasound -c -a:1,2 -i alsa,io1 -a:1 -o track1.wav -a:3 -i null -pn:metronome,120 -a:2,3 -o alsa,p1
This does look confusing at first sight, but is not. There are three chains in total. Chains 1 and 2 get input from the soundcard (alsa,io1), chain 3 gets null input (null). Chain 2 (soundcard) and 3 (metronome) output to the soundcard so you hear what is happening. Chain 1 outputs to a file. Now you can use track1.wav as a monitor and your next track might be recorded with a line like this:
ecasound -c -a:1,2 -i alsa,io1 -a:1 -o track2.wav -a:3 -i null -pn:metronome,120 -a:4 -i track1.wav -a:2,3,4 -o alsa,p1
This extends the earlier example only by a chain with track1.wav as input and the soundcard (alsa,p1) as output. Thus you hear the clicktrack – as a good guidance for accurate playing – and the first track as a monitor.
4.4 Mixing down a multitrack session
Having several tracks on harddisk, the mixdown is due. First one can take a listen to the multitrack session and then store the result to a file. Listening to the multitrack can be achieved by issuing the following command:
ecasound -c -a:1 -i t1.wav -a:2 -i t2.wav -a:3 -i t3.wav -a:all -o alsa,p1
Now adjusting of volumes can be managed by applying -ea (amplifier effect) to each track, i.e.:
ecasound -c -a:1 -i t1.wav -ea:220 -a:2 -i t2.wav -ea:150 -a:3 -i t3.wav -ea:180 -a:all -o alsa,p1
This amplifies t1.wav by 220%, t2.wav by 150% and t3.wav by 180%. Being content with volume adjustment and possibly other effects, the only thing left is exchanging soundcard output by file output.
Meaning: exchange alsa,p1 with my_output.wav:
ecasound -c -a:1 -i t1.wav -ea:220 -a:2 -i t2.wav -ea:150 -a:3 -i t3.wav -ea:180 -a:all -o my_output.wav
Now ecasound will process the files and store the mixdown to disk. The last – optional – step is to normalize the file my_output.wav, which can be performed by ecanormalize:
ecanormalize my_output.wav
The normalized output file overwrites the original: so be careful!
5 Resume
Having in theory produced a piece ready for burning on CD or uploading to the Internet, here comes the resume. It is not the same way you would do it in a graphical environment, yet it still works fine! For me ecasound is always the tool of choice. It is a very flexible tool. Its two general modes – commandline and interactive – combined with its chain-concept make it a powerful recording and mixing program. Because ecasound has LADSPA support and can be connected to the JACK audio server, it is very simple to integrate it in a linux-audio environment. You can also use it in combination with graphical tools, if you so choose. So for those who love text interfaces, need fast and simple solutions or those who start to learn about audio-recording, ecasound can be a tool of great value. Besides that, ecasound is of course a very good example of what free software development can do: produce a very up-to-date piece of fine software which is fully accessible to blind and visually impaired people. Yet still it was not written with this audience in mind. There is a fairly large crowd relying on ecasound for very different kinds of work. Though it lacks a few things that others have, it is not said that ecasound can never get there. Meanwhile there are other ways to achieve what one needs to achieve, thanks to the flexibility of ecasound and the tools you can combine/connect with it.
6 Thanks and Acknowledgements
Thanks and acknowledgements go to:
• Kai Vehmanen and the ecasound crew at
• Of course the ALSA crew with much work from Takashi Iwai and Jaroslav Kysela at
• Richard E. Furse and companions, at, for creating LADSPA in the first place
• Steve Harris and his wonderful collection of LADSPA plugins at
• Paul Davis and friends at for jackd, our favourite realtime audio server
• fluidsynth.org, namely Josh Green, Peter Hanappe and colleagues for the soundfont-based softsynth fluidsynth
• Dave Phillips and his great collection of MIDI and audio links at
• ZKM for hosting this conference, see the official LAC webpages at
Before thanking the great bunch of people who organised and host this event, I want to mention my own webpage at. Great many thanks to Frank Neumann, Matthias Nagorni, Götz Dipper and ZKM for organising and hosting this conference! And many thanks and apologies to all those I forgot! Sorry to you, I didn’t mean to!

”terminal rasa” – every music begins with silence
Frank EICKHOFF
Media-Art, University of Arts and Design, Lorenz 15, 76135 Karlsruhe, Germany, feickhof@hfg-karlsruhe.de

Abstract
An important question in software development is: How can the user interact with the software? What is the concept of the interface? You can analyze the ”interface problem” from two perspectives. One is the perspective of the software developer. He knows the main features of the software. From that point he can decide what the interface should look like, or which areas should be open for access. On the other side is the perspective of the user. The user wants an interface with special features. The questions for audio software designed for live performance are: What should the program sound like? If software for live performance should have features like an instrument, what features does an acoustic instrument have, and what features should a computer music instrument have? The first part of this paper is concerned with music and sound, the special attributes of acoustic instruments, and the features of an average Personal Computer.
The second part of the paper presents the audio software project ”fui” as a solution to the aforementioned questions and problems.

Keywords
computer music instrument, interface problem

1 Introduction
The ”interface problem” is a very important aspect in audio software development. The interface of a machine is not the device itself, but rather the parts of the machine which are used for interaction and exchange; the area between the inner and the outer world of the machine. The ”interface problem” is the problem of interaction between human and machine. For an instrument, it is the area between sound production and sound modulation. Sound is produced through specific methods or physical phenomena. These methods of sound production can be controlled through the manipulation of various parameters. These parameters are the values which should be open for access by the user. It is possible to draw conclusions from the analysis of sound to the possibilities of sound modulation, which is interaction. Therefore, the first item to consider is music.

2 Computer Music Instrument? Music – Instrument – Computer
Silence, noise, and sound are the most basic elements of the phenomenon that is music. What music one wants to hear is an individual decision. Each has his own likes and dislikes. In the first place, music is a matter of taste. An instrument (acoustic or electronic) is a tool or a machine to make music. Any of these tools are designed with a special intention. The basis of this intention is a certain idea of sound and timbre. One can say that the instrument is a mechanical or electronical construction of a sound idea. What should my instrument sound like? How can I construct this sound?
2.1 Acoustic Sound
Sound is nothing more than pressure differences in the air. One can hear sound, but one can not easily see it or touch it. The behavior of sound in a space is complex and depends on the physical properties of the space. Thus, any visual representation of sound must remain abstract, and is necessarily a simplified model of the real situation. The special character of sound is that one can NOT see it.
2.2 Digital Sound
A computer calculates a chain of numbers which can be played by a soundcard. Acoustic waves are simulated by combinations of algorithms. Such mathematical processes are abstract and not visible. An audio application can run within a shell process or even as a background process. It does not require any visual or even statistical feedback.
2.3 Instrument = Sound + Interface
At the point where one wants direct access to the sound manipulating parameters of his software or instrument, one needs some sort of an interface. The construction of the interface is derived on one hand from the timbre of the sound. On the other hand, the interface has influence on the playability of the instrument and, thus, on the sound aesthetic. The instrument is the connection of sound with an interface.
2.3.1 Classic, Acoustic Instruments
A classical instrument like the violin or the piano is very old compared to the computer. The structure and operation of acoustic instruments has been optimized through years of usage. One could say, then, that the instrument has a balance between optimized playability and a characteristic tone colour / timbre. Every instrument has its own unique sound.
2.3.2 Universal Computer
From the start the computer was developed as a universal, flexible calculating machine. The ”universal computer” works with calculating operations and algorithms. Alan Turing proved with his invention of the ”Turing machine” that every problem which can be mechanized can be solved by a computer calculation¹; otherwise the computer ends up in an infinite loop and without result – the ”Turing machine” does not stop. It is obvious that the computer can solve a huge amount of problems. The computer interface is divided into hardware and software interface. The hardware setup of an ordinary personal computer is a keyboard, a monitor and a mouse. Software interfaces are programs which can interact with such hardware. The clarification of this concept makes it easy to deal with the complex possibilities of the computer.
¹ ”On Computable Numbers, with an Application to the Entscheidungsproblem”, Alan Turing, 1936
2.4 Computer Music Instrument
When one wants to use the computer as an instrument, one must combine the features of an instrument with the features of a computer. One needs to create a balance between playability, unique sound and the special character of the computer, that is flexibility: Sound vs Playability vs Flexibility.

3 The ”fui” Audio Application
The audio application ”fui” is a sample loop sequencer. The program is designed for ”live performance”, as such it is playable like an instrument. It is a simple tool to create short rhythmic loops. It has a minimal sound characteristic and a serial or linear rhythmic aesthetic. The user has two different interfaces. One is a terminal for keyboard commands. The other is a graphic window with a GUI (Graphical User Interface) for the interaction with the mouse.
3.1 Short Description
The user can load audio samples into a sequence. Such sequences are played in a loop directly. He can move such samples to a specific point in time within a sequence. Samples are dragged and moved multiple times until the music gets interesting. With this method it is easy to construct rhythmic patterns. Every sample can be modulated through the control of different parameters (Filter, TimePitch, PitchShift, Incremental or Decremental Loop). It is possible to create multiple sequences, and to switch between them in a song like manner. Because of the playback of the loops, the user gets a direct response to all the changes he or she makes. The music develops through improvising and listening.
3.1.1 Sound Effects
Every sample can be modulated with different effects. The effect parameters are both static and random. The ”pitch” control allows the user to manipulate the pitch of the sample. The ”position” is the playback start value within a sample. ”Loop” restarts the sample at the end and ”count” is the number of repeats. ”Incremental loop” or ”decremental loop” starts the sample at the ”position” point, and the sample length gets shorter or longer after every repeat.
3.2 Interaction – Interface
The ”fui” application uses the standard interfaces of an ordinary computer. Every interface has its advantages and disadvantages. The terminal program is specialized on keyboard control. The GUI is specialized on mouse control. The ”fui” application uses both features (see Figure 1).
Figure 1: ”fui” Interface
3.2.1 Terminal
Before the invention of the desktop computer with GUI control there was only a terminal. The terminal is one of the oldest software interfaces to the processes of the computer. The terminal works perfectly as an interface because it is incorporated on many operating systems. It operates simply on keyboard input and text output. It is difficult to implement a comparable interface in a mouse orientated GUI. When there is a terminal anyway, why shouldn’t we use it?
3.2.2 ”pet” – Pseudo emulated Terminal
The first thing which is launched by ”fui” is ”pet” (pseudo emulated terminal). The idea behind ”pet” is to use the terminal as a keyboard input and text output interface during the runtime of the program. This object uses the standard streams (stdout, stdin) for reading and writing. The user can type in simple UNIX like commands (see Table 1) for file browsing, changing the working directory, and loading files into the ”fui” software.
Table 1: ”pet” Commands
cd PATH or NUM – change directory
ls – list directory, files are numbered
start – start audio
stop – stop audio
open – open GUI
load ’name’ – load sequence
save ’name’ – save sequence
new – new sequence, generates SID
dels – delete sequence
la – list all samples, with ID and SID
get NAME or NUM – load sample
del ID – delete sample
seq SID – set current sequence
loop TIME – set loop time (ms)
loff ID – loop off
lon ID – infinite loop on
lr ID – random loop
ld ID POS NUM – decremental loop
li ID POS NUM – incremental loop
pi ID VALUE – set pitch value
The ”ls” command prints out a list of the current directory. The ”pet” object numbers all files and folders in the directory (see Figure 2).
Figure 2: list Directory
The ”get” command loads a filename into the ”pet” command parser. The command argument is the filename or the number printed out with ”ls” (see Figure 3). Some other commands like ”cd” use the same method of file identification. This method provides a simple and fast way to load files or browse directories.
Figure 3: get Filename
3.2.3 ”pet” and ”fui”
The ”fui” software uses ”pet” for file loading and file browsing. The ”pet” object numbers all files in a directory chronologically. When the user loads a sample or creates a new sequence, ”fui” creates index numbers. The sample-ID is ”ID” and the sequence-ID is ”SID”. For example, when the user wants to call a specific sample he has to know the sample-ID. The command ”la” prints out a list with all sample filenames, information about position and pitch, and the IDs of the current sequence.
3.2.4 Graphic User Interface
Sometimes the possibility of visualization, graphical feedback of statistical values, or interaction is very useful. The ”fui” GUI is rendered in OpenGL and has a very simple design. There are text buttons (strings which function like a button), text strings without any interactivity and variable numbers to adjust parameters with the mouse. Every control is listed in a simple menu (see Figure 4).
Figure 4: GUI Menu
Some text buttons have multiple states. Active GUI elements are rendered in black, inactive elements are grey (see Figure 5).
Figure 5: GUI sample
A vertical, dotted line is the loop cursor. The cursor changes the position from left to right, analog to the current time position of the loop. Audio samples are drawn as rectangles (see Figure 6). The width of the rectangle is proportional to the length of the sample and the length of the loop in seconds, which is the width of the window. The sample can be moved with the mouse within a two dimensional area (like a desktop, table or ”tabula”). Every new sequence has a blank area, a blank table (”tabula rasa”).
Figure 6: Vertical Cursor and two Samples
3.3 Example Usage
Every music begins with silence. ”fui” starts as a simple terminal application without any other window or GUI (”terminal rasa”). After startup the software waits for command line input (see Figure 7).
Figure 7: Start Screen – ”terminal rasa”
The ”open” command opens the GUI window. The ”new” command creates a new, empty sequence. ”fui” adds a new number to the ”SID LIST” in the GUI window. This number is the new ID for the current sequence. The ”start” command starts the audio playback. The loop cursor starts moving over the window. Now, the user can browse the harddisk for suitable samples. The ”get” command loads a sample into the sequence. The user can move the sample to a position within the GUI window. Every time the cursor reaches the sample, the sample will be played (see Figure 8).
3.4 Sound – Playability – Flexibility
Sound, playability and flexibility have a mutual influence on each other. The sound is determined by the implementation of audio playback and audio manipulation. Many interesting rhythms can be found by improvising and playing with the software. Different interfaces and the implementation of a powerful audio engine enhance the flexibility of ”fui”.
3.4.1 Sound
The characteristic ”fui” sound comes from the combination of short samples into rhythmic loops.
All samples are freely arranged within the time frame of one sequence. There are no restrictions imposed by time grids or ”bpm” (beats per minute) tempo parameters. The user has a simple visualization and a direct audio playback. 3.4.2 Playability The use of the UNIX like terminal and the simple GUI provide a simple and playful access to the software. Different sound effects with LAC2005 124 static or random modulation vary the sound. All changes are made intuitively by the user through listening. For example, a combination of two samples which might sound boring at first, can become very interesting with slight changes to the position of one sample within the time frame of the loop. A simple change of one parameter can have an interesting result in the music. 3.4.3 Flexibility - Audio API? In the first place the source code should be portable. This project was developed on an Apple Macintosh PISMO, G3, 500 Mhz, OSX 10.3 using the gcc compiler. Later it was ported to Linux. The whole project was written in ANSI C/C++ with OpenGL for graphic rendering. The ”Software Toolkit”2 from Perry Cook and Gary Scavone is used for realtime audio file streaming. The platform independent window class from ”plib”3 is used for creating the render window. Different audio engines are tested for the main audio streaming: ”RtAudio”4 from Gary Scavone, ”Portaudio”5 from Ross Bencina, ”FMOD”6 from Firelight Technologies and ”JACK”7 from Paul Davis ”and others”. The ”JACK” API works as a sound server within the operating system. Completely different audio applications, which are compiled as ”JACK” clients, can share audio streams with each other. Now the developer does not need to think about implementing some kind of plugin architecture in the software. Audio streams can easily be shared in a routing program. It is simply perfect for audio software developers. From that point the use of the ”JACK” API is the most flexible solution for the ”fui” audio project. 
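The loop playback described in section 3.3 — a cursor sweeping a fixed-length loop and triggering every sample whose "position" it passes — can be sketched in a few lines of C++. This is a minimal illustration only, not "fui" source code; the structure and names are assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal sketch of a "fui"-style loop: samples sit at arbitrary
// positions (in milliseconds) inside a loop of fixed length, and a
// sample is triggered whenever the loop cursor crosses its position.
struct Sample {
    double position_ms;  // start point inside the loop ("position")
};

struct Sequence {
    double loop_ms;              // loop length, set via "loop TIME"
    std::vector<Sample> samples;

    // Return indices of all samples triggered while the cursor moves
    // from absolute time t0 to t1 (half-open interval (t0, t1], ms).
    std::vector<int> triggered(double t0, double t1) const {
        std::vector<int> hits;
        for (int i = 0; i < (int)samples.size(); ++i) {
            double p = samples[i].position_ms;
            // The k-th occurrence of the sample is at k*loop_ms + p,
            // so count how many loop iterations fall inside (t0, t1].
            double k0 = std::floor((t0 - p) / loop_ms);
            double k1 = std::floor((t1 - p) / loop_ms);
            if (k1 > k0) hits.push_back(i);
        }
        return hits;
    }
};
```

Because triggering is computed from the absolute cursor time, a sample dragged to a new position simply fires at a different phase of the loop on the next pass, which matches the improvisational drag-and-listen workflow described above.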
4 Conclusions

Music is meant to be listened to. The idea of "fui" is to establish a balance between the interface and the characteristic sound of the computer as a musical instrument. When one is familiar with the special features and the historical background of acoustic instruments and computers in music, and with the general differences between the two, it is possible to say that the ideal combination of both media is a hybrid and open environment. The design of the interface is simple, minimal and experimental. The sound aesthetic is linear with nested rhythmic patterns. The user deals with the program in a playful way and the music is created through listening.

5 Acknowledgements

Götz Dipper, Frank Neumann, Anne Vortisch, Dan Santucci

6 Project Webpage

Figure 8: "fui" Screenshot

The MusE Sequencer: Current Features and Plans for the Future

Werner SCHWEER, Ludgerweg 5, 33442 Clarholz-Herzebrock, Germany, ws@seh.de
Frank NEUMANN, Bärenweg 26, 76149 Karlsruhe, Germany, beachnase@web.de

Abstract

The MusE MIDI/Audio Sequencer[1] has been around in the Linux world for several years now, gaining more and more momentum. Having been a one-man project for a long time, it has slowly attracted several developers who have been given cvs write access and continuously help to improve and extend MusE. This paper briefly explains the current feature set, gives some insight into the historical development of MusE, continues with some design decisions made during its evolution, and lists planned changes and extensions.

Keywords

MIDI, Audio, Sequencer, JACK, ALSA

1 Introduction

MusE supports the expected operations like selection, cut/copy/paste, Drag&Drop, and more. Unlimited Undo/Redo helps in avoiding data loss through accidental keypresses by your cat.

2 Historical Rundown

MusE has been developed by German software developer Werner Schweer since roughly January 2000.
Early developments started even years before that; first as a raw Xlib program, later using the Tcl/Tk scripting language because it provided the most usable API and nicest look at that time (mid-90s). It was able to load and display musical notes in a pianoroll-like display, and could play out notes to a MIDI device through a hand-crafted kernel module that allowed somewhat timing-stable playback by using the kernel's timers. MIDI data was then handed to the raw MIDI device of OSS. As the amount of data to be moved is rather small in the MIDI domain, reasonable timing could be reached back then even without such modern features as the "realtime-lsm" or "realtime-preempt" patches that we have today in 2.6 kernels. With a growing codebase, the code quickly became too hard to maintain, so the switch to another programming language was unavoidable. Since that rewrite, MusE is developed entirely in C++, employing the Qt[5] user interface toolkit by Trolltech, and several smaller libraries for housekeeping tasks (libsndfile[6], JACK[7]). In its early form, MusE was a MIDI-only sequencer, depending on the Open Sound System (OSS) by 4Front Technologies[8]. When the ALSA audio framework became stable and attractive (mid-2000), ALSA MIDI support was added, and later on also ALSA audio output. Summer 2001 saw the introduction of an important new feature, "MESS" (MusE Experimental Soft Synth). This allows for the development of pluggable software synthesizers, like the VSTi mechanism in Steinberg's Cubase sequencer software for Windows. At some point in 2003 the score editor which was so far a part of MusE was thrown out (it was not really working very well anyway) and has then been reincarnated as a new SourceForge project, named MScore[9]. Once it stabilizes, it should be able to read and export files not only in standard MIDI file format, but also in MusE's own, XML-based .med format. In October 2003 the project page moved to a new location at SourceForge.
This made maintenance of some parts (web page, cvs access, bug reporting) easier and allowed the team of active developers to grow. Since this time, MusE is undergoing a more streamlined release process with a person responsible for producing releases (Robert Jonsson) and the Linux-typical separation into a stable branch (only bug fixes here) and a development branch (new features added here). In November 2003 the audio support was reduced to JACK only. Obviously the ALSA driver code inside JACK was more mature than the one in MusE itself, and it was also easier to grasp and better documented than the ALSA API. Additionally, fully supporting the JACK model meant instant interoperability with other JACK applications. Finally, it had also become too much of a hassle for the main developer to maintain both audio backends. The data delivery model, which looked like a "push" model from outside MusE until now (though internally it had always used a separate audio thread that pulls the audio data out of the engine and pushes it into the ALSA pcm devices), has now become a clear "pull" model, with the jackd sound server collecting audio data from MusE and all other clients connected to it, and sending that data out to either pcm playback devices or back to JACK-enabled applications. It has been asked whether MusE can be used as a simple "MIDI only" sequencer without any audio support. The current CVS source contains some "dummy JACK code" which gets activated when starting MusE in debug mode. If the interest in this is high enough, it might get extended into a real "MIDI only" mode. In early 2004, the user interface underwent substantial changes when Joachim Schiele joined the team to redesign a lot of pixmaps for windows, menus, buttons and other user interface elements. Finally, MusE also received the obligatory splash screen! Another interesting development of 2004 is that it brought a couple of songs composed, recorded and mixed with MusE.
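The "pull" model described above can be illustrated with a toy mixer in plain C++. This is not MusE or JACK code — the names are invented for illustration — but it shows the essential inversion: the server invokes each client's process callback to fetch exactly the buffer it needs, instead of clients pushing data at their own pace.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Toy illustration of the "pull" model: the server asks every
// registered client for nframes of audio and mixes the results, the
// way a JACK-style sound server pulls audio out of its clients.
class ToyServer {
public:
    using Process = std::function<void(float* out, int nframes)>;

    void register_client(Process p) { clients_.push_back(std::move(p)); }

    // One server cycle: pull nframes from each client and sum them.
    std::vector<float> run_cycle(int nframes) {
        std::vector<float> mix(nframes, 0.0f);
        std::vector<float> tmp(nframes);
        for (auto& client : clients_) {
            client(tmp.data(), nframes);   // the server pulls from the client
            for (int i = 0; i < nframes; ++i)
                mix[i] += tmp[i];
        }
        return mix;
    }

private:
    std::vector<Process> clients_;
};
```

In the real system the callback runs in jackd's realtime context, which is why the sequencer's engine must always have the next buffer ready when asked.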
This seems to indicate that after years of development work, MusE is slowly becoming "ready for the masses".

3 Examples of Coding Decisions

(1) In earlier versions of MusE, each NoteOn event had its own absolute time stamp telling when this event should be triggered during playback. When a part containing a set of elements was moved to a different time location, the time stamps of all events in this part had to be offset accordingly. However, when the concept of "clone copies" was introduced (comparable to symlinks under Linux: several identical copies of a part exist, and modifying an event in one part modifies its instance in all other cloned copies), this posed a problem: the same dataset is used, but of course the timestamps have to differ. This was resolved by giving each event only a timestamp relative to the start of the part it lives in. So, by adding up the local time stamp and the offset of the part's start from the song start, correct event playback is guaranteed for all parts and their clone copies.

(2) MIDI controller events can roughly be separated into two groups: those connected to note events, and those decoupled from notes. The first group is covered by note on/off velocity and, to some degree, channel aftertouch. The second group contains the classic controllers like pitchbender, modulation wheel and more. Now, when recording several tracks of MIDI data which are all going to be played back through the same MIDI port and MIDI channel, how should recording (and later playback) of such controller events be handled? Also, when moving a part around, should the controller data accompanying it be moved too, or will this have a negative impact on another (not moved) part on another track? Cubase seems to let the user decide by asking him this, but there might be more intelligent solutions to this issue.

(3) The current development or "head" branch of the MusE development allows parameter automation at some points.
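The relative-timestamp scheme of decision (1) can be sketched like this. It is a simplified model, not MusE's actual data structures; the names are illustrative.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Sketch of decision (1): events carry ticks relative to their part's
// start, and clone copies share one event list, so moving a part (or
// editing a shared event) never requires rewriting timestamps.
struct EventList {
    std::vector<int> ticks;  // event times relative to the part start
};

struct Part {
    int start_tick;                     // offset of the part in the song
    std::shared_ptr<EventList> events;  // shared between clone copies

    // Absolute playback time of event i: part offset + local tick.
    int absolute_tick(int i) const { return start_tick + events->ticks[i]; }
};
```

Moving a part only changes its `start_tick`; editing an event through one clone is automatically visible through every other clone, because all clones point at the same event list.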
How is parameter automation handled correctly, for both MIDI and audio? Take gain automation as an example. For MIDI, when interpolating between two volumes, you normally do not want to create thousands of MIDI events for a volume ramp, because this risks flooding a MIDI cable with too much data and losing exact timing on other (neighbouring) MIDI events. For audio, a much finer-grained volume ramp is possible, but again, if the rate at which automation is applied (the so-called "control rate") is driven to extremes (reading out the current gain value at each audio frame, at audio rate), too much overhead is created. So instead the control rate is set to a somewhat lower frequency than the audio rate. One possible solution is to go for the JACK buffer size, but this poses another problem: different setups use different values for sample rate (44.1 kHz? 48 kHz? 96 kHz?) or period size, which means that the same song might sound slightly different on different systems. This is an ongoing design decision, and perhaps communication with other projects will bring some insight into the matter.

4 Weak Spots

There are a couple of deficiencies in MusE; for all of these, though, efforts are already underway to get rid of them:
• There is a clear lack of documentation on the whole software. This is already tackled, however, by a collaborative effort to create a manual through a Wiki-based approach[10].
• The developer base is small compared to the code base, and there is a steep learning curve for prospective new developers wishing to get up to speed with MusE's internals. However, this seems to be true for a lot of medium-sized or large Open Source projects these days. Perhaps better code commenting (e.g. in the Doxygen[11] style) would help to increase readability and understandability of the code.
• Not all parts of MusE are as stable as one would want. One of the reasons is that the focus has been more on features and architecture than on stability for quite some time, though since the advent of stable and development versions of MusE this focus has changed a bit and MusE is getting more stable.

5 Future Plans

The plans for the future are manifold - as can be seen very often with open-source projects, there are gazillions of TODO items and plans, but too little time/resources to implement them all. What follows is a list of planned features, separated into "affects users" and "affects developers". Some of these items are already well underway, while others are still high up in the clouds.

5.1 Planned changes on the User level

• Synchronisation with external MIDI devices. This is top priority on the list currently, and while some code for MIDI Time Code (MTC) is already in place, it needs heavy testing. MusE can already send out MMC (MIDI Machine Control) and MIDI Clock, but when using MusE as a slave, there is still some work to be done. There have been plans for a while in the JACK team to make the JACK transport control sample-precise, and this will certainly help once it is in place.
• A file import function for the old (0.6.x) MusE .med files. This has been requested several times (so there are in fact people using MusE for a while! ☺), and as all .med files (XML-based) carry a file version string inside them, it is no problem to recognize 1.0 (MusE 0.6.x) and 2.0 (0.7.x) format music files.
• Complete automation of all controllers (this includes control parameters in e.g. LADSPA plugins).
• Mapping audio controllers to MIDI controllers: this would allow using external MIDI control devices (fader boxes, e.g. from Doepfer or Behringer) to operate internal parameters and functions (transport, mixer etc).
• A feature to align the tempo map to a freely recorded musical piece (which will alter the tempo map in the master track accordingly). This would bring a very natural and "human" way of composing and recording music while still allowing to later add in more tracks, e.g. drums.
• Support for DSSI soft synths (see below).
• A configurable time axis (above the track list) whose display can be switched between "measure/beat/tick", SMPTE (minute / second / frame) and "wallclock time".
• The "MScore" program which has been separated from MusE will start to become useful for real work. It will make use of the above mentioned new libraries, AWL and AL, and will have a simple playback function built in (no realtime playback though), employing the fluidsynth software synthesizer. It will be able to work with both .mid and .med files, with the .med file providing the richer content of the two. MScore is still lacking support for some musical score features like triplets and n-tuplets, but this is all underway. A lot of code is already in place but again requires serious testing and debugging.

5.2 Planned changes on the Developer level

• Better modularisation. Two new libraries are forming which will become external packages at some point: "AWL" (Audio Widget Library), which provides widgets typically found in the audio domain, like meters, location indicators, grids or knobs, and "AL" (audio library), which features house-holding functions like conversion between a tempo map (MIDI ticks) and "wallclock time" (SMPTE: hh:mm:ss:ff).
• Also, MESS soft synths shall get detached from the MusE core application so that they can get built more easily and with fewer dependencies. This reduces the steepness of the learning curve for new soft synth developers.
• The new "DSSI"[12] (Disposable SoftSynth Interface) is another desired feature. Already now the "VST" and "MESS" classes have a common base class, and adding in support for DSSI here should not be too complicated.
• The creation of a "demosong regression test suite" will be helpful in finding weak spots of the MIDI processing engine. This could address issues like MIDI files with more than 16 channels, massive amounts of controller events in a very short time frame, SysEx handling, checks for "hung notes" and more. Getting input and suggestions from the community on this topic will be very helpful!

6 Conclusions

MusE is one of the best choices to compose music under Linux when the "classical" approach (notes, bars, patterns, MIDI, samples) is required. Thanks to the integration with the existing ALSA and JACK frameworks, it can interact with other audio applications to form a complete recording solution, from the musical idea to the final master.

7 Other Applications

There are some other MIDI/audio applications with a similar scope as MusE; some of them are:
• Rosegarden[13], a KDE-based notation software and sequencer
• Ardour[14], turning a computer into a digital audio workstation
• seq24[15], a pattern-based sequencer (MIDI-only) and live performance tool
• Cheesetracker[16], a "classic" pattern-oriented tracker application
• Beast[17], an audio sequencer with a built-in graphical modular synthesis system

Besides these, there are already numerous soft synthesizers/drum machines/samplers etc. that are able to read MIDI input through ALSA and send out their audio data through JACK; these programs can be controlled from within MusE and thus extend its sound capabilities. However, connecting them with MusE MIDI- and audio-wise is more complicated, and can cause more overhead due to context switches.

8 Acknowledgements

The authors wish to acknowledge the work of everyone helping in turning a Linux-based computer into a viable and free alternative to typical commercial audio systems. No matter whether you are developing, testing, documenting or supporting somehow – thank you!

References

[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17]

ZynAddSubFX – an open source software synthesizer

Nasca Octavian PAUL, Tg.
Mures, Romania, zynaddsubfx@yahoo.com

Abstract

ZynAddSubFX is an open source real time software synthesizer that produces many types of sounds. This document will present the ZynAddSubFX synthesizer and some ideas that are useful in synthesizing beautiful instruments, without giving too much (mathematical) detail.

Keywords

Synthesizer, bandwidth, harmonics.

1 Introduction

The ZynAddSubFX software synthesizer has polyphonic, multitimbral and microtonal capabilities. It has powerful synth engines, many types of effects (Reverberation, Echo, Chorus, Phaser, EQ, Vocal Morpher, etc.) and contains a lot of innovations. The synthesizer engines were designed to make many types of sounds possible by using various parameters that allow the user to control every aspect of the sound. Special care was taken to reduce the amount of computation needed to produce the sound, but without lowering its quality.

2 ZynAddSubFX structure

ZynAddSubFX has three synth engines and allows the user to make instrument kits. In order to make it possible to play multiple instruments at the same time, the synth is divided into a number of parts. One part can contain one instrument or one instrument kit. The effects can be connected as System Effects, Insertion Effects and Part Effects. The system effects are used by all parts, but the user can choose the amount of the effect for each part. The Insertion Effects are connected to one part or to the audio output. The Part Effects are a special kind of effect that belongs to a single part, and they are saved along with the instrument.

2.1 Synth engines

The engines of ZynAddSubFX are: ADDsynth, SUBsynth and PADsynth. Fig. 1 shows the structure of these engines:

Fig. 1 Synth engines

1) ADDsynth

The sound is generated by the oscillator. The oscillator has different kinds of parameters, like the harmonic type (sine, saw, square, etc.), the harmonic content, the modulations, waveshapers and filters. These parameters allow the oscillators to have any shape.
A very interesting parameter of the oscillator is called "adaptive harmonics". This parameter makes very realistic sounds possible, because it allows the user to control how the resonances appear at different pitches. The oscillators include a very good antialiasing filter that avoids aliasing even at the highest pitches. If the user wants, the oscillator can be modulated by another oscillator (called the "modulator") using frequency modulation, phase modulation or ring modulation. The frequency of the oscillators can be changed by the low frequency oscillators and envelopes. After the sound is produced by the oscillator, it passes through filters and amplitude changers controlled by LFOs and envelopes. An oscillator with a modulator and the amplitude/frequency/filter envelopes is called a "voice". The ADDsynth contains several voices. The outputs of the voices are added together and the result is passed through another set of amplitude/filter envelopes and an LFO. An interesting feature is that the output of a voice can be used to modulate an oscillator from another voice, thus making it possible to use modulation stacks. All the oscillators that are not modulators can pass through a resonance box.

2) SUBsynth

This module produces sound by generating white noise, filtering each harmonic out of the noise (with band pass filters) and adding the resulting harmonics. The resulting sound passes through an amplitude envelope and a filter controlled by another envelope.

3) PADsynth

This synth engine is the most innovative feature in ZynAddSubFX. It was designed following the idea that the harmonics of sounds are not simple frequencies, but rather are spread over a certain band of frequencies. This will be discussed later. First, a long sample (or a few samples) is generated according to the settings in this engine (like the frequency spread of the harmonics, the bandwidth of each harmonic, the position of the harmonics, etc.).
After this, the sample is played at a certain speed in order to achieve the desired pitch. Even though this engine is simpler than ADDsynth, the sounds generated by it are very good and make it very easy to generate instruments like pads and choirs, and even metallic noises like bells.

2.2 Instrument/Part structure

The structure of the Parts is drawn in Fig. 2.

Fig. 2

The sum of the output of the ADDsynth, SUBsynth and PADsynth engines is called a "kit item", because ZynAddSubFX allows a part to contain several kit items. These kit items can be used to make drum kits, or even to obtain multitimbrality for a single part (for dual instruments, like bell+strings or rhodes+voice). Their output can be processed by the part's effects. The instrument kit with the part effects is considered to be an instrument and is saved/loaded into the instrument banks. An instrument usually contains only one kit item.

2.3 ZynAddSubFX main structure

The main structure of ZynAddSubFX is drawn in Fig. 3.

Fig. 3

As seen from Fig. 3, the part's output is sent to the insertion effects and, after this, the signal can pass through the system effects. A useful feature of the system effects is that the output of one system effect can go to the next system effect. Finally, the sound passes through the master insertion effects (they could be an EQ, a reverberation, or any other effect) and, after this, the sound is sent to the audio output.

3 Design principles

This section presents some design principles and some ideas that were used to make the desired sounds with ZynAddSubFX.

3.1 The bandwidth of each harmonic

This considers that the harmonics of pitched sounds are spread in frequency and are not a single sine function.

Fig. 4 A narrow band harmonic vs. a wide band harmonic

This helps to produce "warm" sounds, like a choir, an orchestra or any other ensemble. So the bandwidth of each harmonic can be used as a measure of the ensemble effect. Each ZynAddSubFX module was designed to allow easy control of the bandwidth of the harmonics:
– by detuning the oscillators of the ADDsynth module and/or adding "vibrato";
– in SUBsynth, the bandwidth of each bandpass filter controls the bandwidth of the harmonics;
– the PADsynth module uses this idea directly, because the user can control the frequency distribution of each harmonic.

An important aspect of the bandwidth of each harmonic is the fact that, if you measure it in Hz, it increases for higher harmonics. For example, if a musical tone has the "A" pitch (440 Hz) and the bandwidth of the first harmonic is 10 Hz, the bandwidth of the second harmonic will be 20 Hz, the bandwidth of the third harmonic will be 30 Hz, etc.

Fig. 5 Higher harmonics have a higher bandwidth

Because of this, if the sound has enough harmonics, the upper harmonics merge into a continuous frequency band (Fig. 6).

Fig. 6 Higher harmonics merge into a continuous frequency band

3.2 Randomness

The main reason why digital synthesis sounds too "cold" is that the same recorded sample is played over and over on each keypress. There is no difference between a note played the first time and the second time. Exceptions may be the filtering and some effects, but these are not enough. In natural or analogue instruments this does not happen, because it is impossible to reproduce exactly the same conditions for each note. All three synth engines allow the user to use randomness for many parameters.

3.3 Amplitude decrease of higher harmonics on low velocity notes

All natural notes have this property, because on low velocity notes there is not enough energy to spread to the higher harmonics. In ZynAddSubFX you can do this by using a lowpass filter that lowers the cutoff frequency on notes with low velocities or, if you use FM, by lowering the modulator index.
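The scaling described in section 3.1 — a constant *relative* bandwidth per harmonic yields a bandwidth in Hz that grows linearly with the harmonic number — can be checked with a few lines of arithmetic. This is an illustrative computation, not ZynAddSubFX code.

```cpp
#include <cassert>
#include <cmath>

// If each harmonic keeps the same relative bandwidth (e.g. a fixed
// width in cents or percent of its center frequency), its bandwidth
// measured in Hz grows linearly with the harmonic number, as in the
// 440 Hz example from section 3.1.
double harmonic_bandwidth_hz(double fundamental_hz, int harmonic,
                             double relative_bw) {
    double center = fundamental_hz * harmonic;  // k-th harmonic center
    return center * relative_bw;                // width in Hz
}
```

With a fundamental of 440 Hz and a relative bandwidth chosen so that the first harmonic is 10 Hz wide, the second and third harmonics come out 20 Hz and 30 Hz wide, reproducing the example in the text.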
3.4 Resonance

If you change the harmonic content of the sound in order to produce higher amplitudes at certain frequencies, and keep those frequencies constant, the listener will perceive this as if the instrument had a resonance box, which is a very pleasant effect to the ears. In ZynAddSubFX this is done by:
• using the Resonance function in ADDsynth and SUBsynth;
• using the Adaptive Harmonics function of the Oscillators;
• using filters, EQ or Dynamic filter effects.

4 Basic blocks of ZynAddSubFX

4.1 Low Frequency Oscillators

These oscillators do not produce sounds by themselves, but rather change some parameters (like the frequency, the amplitude or the filters). The LFOs have some basic parameters like the delay, frequency, start phase and depth. These parameters are shown in Fig. 7.

Fig. 7

Another important LFO parameter is the shape. There are many LFO types according to the shape. ZynAddSubFX supports the following LFO shapes (Fig. 8):

Fig. 8

ZynAddSubFX's LFOs have other parameters, like frequency/amplitude randomness, stretch, etc. In the user interface the LFO interface is shown like this (Fig. 9):

Fig. 9 LFO interface

4.2 Envelopes

Envelopes control how the amplitude, the frequency or the filter change over time. There are three types of envelopes: Amplitude Envelope, Frequency Envelope and Filter Envelope. All envelopes have 2 modes: parameter control (like ADSR – Attack Decay Sustain Release, or ASR – Attack Sustain Release) or Freemode, where the envelope can have any shape. The ADSR envelopes control the amplitudes (Fig. 10).

Fig. 10 Envelopes

The following images show the filter envelope in parameter control mode (Fig. 11) and in freemode (Fig. 12).

Fig. 11 Filter envelope user interface

Fig. 12 Freemode envelope user interface

4.3 Filters

ZynAddSubFX supports many types of filters. These filters are:
1. Analog filters:
• Low/High Pass (1 pole)
• Low/Band/High Pass and Notch (2 poles)
• Low/High Shelf and Peak (2 poles)
2.
Arbitrary format filters
3. State Variable Filters:
• Low/Band/High Pass
• Notch

The analog filters' frequency responses are shown in Fig. 13.

Fig. 13 Analog filter types and frequency response

The filters have several parameters that allow many types of responses. Some of these parameters are the center/cutoff frequency, Q (the bandwidth of the bandpass filters, or the resonance of the low/high pass filters) and the gain (used by the peak/shelf filters). Fig. 14 shows how the Q parameter changes the filter response:

Fig. 14 "Q" parameter and filter frequency response

The Analog and State Variable filters have a parameter that allows the user to apply the filtering multiple times in order to make a steeper frequency response, as shown in Fig. 15.

Fig. 15 Applying filter multiple times

The formant filters are a special kind of filter which can produce vowel-like sounds by adding several formants together. A formant is a resonance zone around a frequency that can be produced by a bandpass filter. The user can also specify several vowels that are morphed by smoothly changing the formants from one vowel to another. Fig. 16 shows a formant filter that has an "A" and an "E" vowel, and how the morphing is done:

Fig. 16 Formant filter freq. response and morphing

5 ZynAddSubFX interaction with other programs

ZynAddSubFX receives MIDI commands from an OSS device, or it can create an ALSA port that allows other programs (like the Rosegarden or MusE sequencers) to interact with it. The audio output can be OSS or JACK.

6 Conclusion

ZynAddSubFX is an open source software synthesizer that produces sounds like commercial software and hardware synthesizers (or even better). Because it has a large number of parameters, the user has access to many types of musical instruments. Also, by using the idea of the bandwidth of each harmonic, the sounds which are produced are very beautiful.

References

[1]
This document was written as accompanying material to a presentation at the 3rd International Linux Audio Conference 2005 in Karlsruhe, Germany.

Music Synthesis Under Linux

Tim Janik
University of Hamburg, Germany
timj@gtk.org

ABSTRACT

While lots of desktop software is emerging for Linux and is being used productively by many end users, the same cannot be said of music software. Most commercial and non-commercial music is produced either without software or by using proprietary software products. With BEAST, an attempt is made to improve the situation for music synthesis. Since almost everything that is nowadays possible with hardware synthesizers can also be processed by stock PC hardware, it is merely a matter of a suitable implementation to enable professional music production based on free software. As a result, the development of BEAST focuses on multiple design goals. High quality demands are made on the mathematical characteristics of the synthesis, signals are processed on a 32-bit basis throughout the program, and execution of the synthesis core is fully realtime capable. Furthermore, the synthesis architecture allows scalability across multiple processors to process synthesis networks. Other major design goals are interoperability, so the synthesis core can be used by third-party applications, and language flexibility, so all core functionality can be controlled from script languages like Scheme. In addition, the design of all components reflects an intense focus on the graphical user interface, to allow simple and, where possible, intuitive operation of the program.

Keywords

Modular Synthesis, MIDI Sequencer, Asynchronous Parallel Processing, Pattern Editor.

1 BEAST/BSE - An Overview

BEAST is a graphical front-end to BSE, a synthesis and sequencing engine in a separate shared library. Both are released under the GPL and have been developed as free software for the best part of a decade. Since the first public release, some parts have been rolled out and reintegrated into other projects; for instance, the BSE object system became GObject in GLib. The programming interface of BSE is wrapped by a glue layer which allows for various language bindings. Currently a C binding exists which is used by BEAST, a C++ binding exists which is used to implement plugins for BSE, and there is also a Scheme binding which is used for application scripting in BEAST, or for scripting BSE through the Scheme shell bsesh.

BEAST allows for flexible sound synthesis and song composition based on the utilization of synthesis instruments and audio samples. To store songs and synthesis settings, a special BSE-specific hybrid text/binary file format is used which allows for seamless integration of audio samples, synthesis instruments and sequencing information.

Wave View Dialog

Since the 0.5 development branch, BEAST offers a zoomable time domain display of audio samples with preview abilities. Several audio file formats are supported, in particular MP3, WAV, AIFF, Ogg/Vorbis and BseWave, a hybrid text/binary file format used to store multi-samples with loop and other accompanying information. A utility for the creation, compression and editing of BseWave files is released with version 0.6.5 of BEAST. Portions of audio files are loaded into memory on demand and are decoded on the fly, even for intense compression formats like Ogg/Vorbis or MP3. This allows for the processing of very large audio files, like 80 megabytes of MP3 data, which roughly relates to 600 megabytes of decoded wave data or one hour of audio material. To save decoding processing power, especially for looped samples, decoded audio data is cached up to a couple of megabytes, employing a sensible caching algorithm that prefers trashing of easily decoded sample data (AIFF or WAV) over trashing processing-intense data (Ogg/Vorbis).
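The decode-cost-aware eviction policy just described can be sketched as follows. This is a simplified illustration, not BSE's actual algorithm or data structures, and the relative cost weights are invented for the example:

```python
class WaveCache:
    """Evict cheaply re-decodable blocks first when over budget."""

    # Invented relative re-decode costs: raw PCM is cheap to reload,
    # Vorbis/MP3 are expensive to decode again.
    DECODE_COST = {"wav": 1, "aiff": 1, "mp3": 8, "vorbis": 10}

    def __init__(self, budget):
        self.budget = budget   # cache size limit in bytes
        self.blocks = {}       # name -> (format, size_in_bytes)

    def used(self):
        return sum(size for _, size in self.blocks.values())

    def add(self, name, fmt, size):
        self.blocks[name] = (fmt, size)
        # While over budget, drop the block that is cheapest to re-decode.
        while self.used() > self.budget:
            victim = min(self.blocks,
                         key=lambda n: self.DECODE_COST[self.blocks[n][0]])
            del self.blocks[victim]

cache = WaveCache(budget=100)
cache.add("drums", "wav", 60)
cache.add("voice", "vorbis", 50)   # over budget: the WAV block is evicted
```

The point of weighting eviction by decode cost is that reloading a WAV block is nearly free, while re-decoding an Ogg/Vorbis block costs real CPU time during playback.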
The synthesis core runs asynchronously and performs audio calculations in 32-bit floating point arithmetic. The architecture is designed to support the distribution of synthesis module calculations across multiple processors, in case multiple processors are available and the operating system supports process binding. In principle the sampling rate is freely adjustable, but in practice it is limited by operating system IO capabilities. The generated audio output can be recorded into a separate wave file.

The graphical user interface of BEAST sports concurrent editing of multiple audio projects, and unlimited undo/redo functionality for all editing functions. To easily try out audio setups and for interactive alteration of synthesis setups, real-time MIDI events are processed. This allows the utilization of BEAST as an ordinary MIDI synthesizer. Since the complete programming interface of the synthesis core is available through a Scheme shell, BEAST allows the registration of Scheme scripts at startup to extend its functionality and to automate complex editing tasks.

2 Song Composition

The post-processing mechanism is currently being reworked to integrate with the audio mixer framework that started shipping in recent versions of the 0.6 development branch. In the new audio mixer, audio busses can freely be created and connected, so volume metering or adjustment and effects processing is possible for arbitrary combinations of tracks and channels. Other standard features like muting or solo playback of busses are supported as well.

Songs consist of individual tracks with instruments assigned to them, and each track may contain multiple parts. A part defines the notes that are to be played for a specific time period. The type of instrument assigned to a track is either a synthesis instrument or an audio sample. Synthesis instruments are separate entities within the song's audio project and as such need to be constructed or loaded before use.
In current versions, to enable sound compression or echo effects, post-processing of the audio data generated by a track or song is supported by assigning designated post-processing synthesis meshes to them, which simply act as ordinary audio filters, modifying the input signal before output.

Piano Roll and MIDI Event Dialog

To allow editing of parts, a zoomable piano roll editor is supplied. Notes can be positioned by means of drag-and-drop in a two-dimensional piano key versus time grid arrangement. This enables variation of note lengths and pitch through modification of note placement and graphical length. The piano keys also allow previewing of specific notes by clicking on or dragging about them. Many other standard editing features are available via the context menu or the toolbar, for instance note and event selection, cutting, pasting, insertion, quantization and script extensions. MIDI events other than notes, such as velocity or volume events, can also be edited in an event editor region next to the piano roll editor. Newer versions of BEAST even sport an experimental pattern editor mode which resembles well-known sound tracker interfaces; the exact integration of pattern mode editing with MIDI parts is still being worked out, though.

Similar to notes within parts, the individual parts are arranged within tracks via drag-and-drop in the zoomable track view. Tracks also allow links to parts, so a part can be reused multiple times within multiple tracks or a single track. The track view also offers editing abilities to select individual tracks to be processed by the sequencer, to specify the number of synthesis voices to be reserved, and to add comments.

3 Synthesis Characteristics

The graphical user interface provides simple access to the construction and editing functionality of synthesis networks.
Modules can be selected from a palette or context menu, and are freely placeable on a zoomable canvas. They are then connected at input and output ports via click-and-drag of connection lines. For each module an information dialog is available, and separate dialogs are available to edit module-specific properties; both dialogs are listed in the module context menu. Properties are grouped by functional similarity within editing dialogs, and many input fields support multiple editing metaphors, like fader bars and numeric text fields. All property and connection editing functions integrate with the project-hosted undo/redo mechanism, so no editing mistake can be finally destructive.

The synthesis facilities of the standard 0.6 development branch of the BEAST distribution roughly equate to those of a simple modular synthesizer. However, the quality and number of the supplied synthesis modules are constantly being improved. Various synthesis modules are available. Amongst the audio sources are an Audio Oscillator, a Wave Oscillator, Noise, Organ and a Guitar Strings module. Routing functionality is implemented by modules like Mixer, Amplifier, ADSR-Envelope, Adder, Summation, Multiplier and Mini Sequencer. Various effect modules are also supplied, many based on recursive filters, i.e. Distortion, Reverb, Resonance, Chorus, Echos, and the list goes on. Finally, a set of connection or IO modules is supplied for instrument input and output, MIDI input and synthesis mesh interconnection. Apart from the synthesis modules shipped with the standard distribution, BSE also supports the execution of LADSPA modules. Unfortunately, limitations in the LADSPA parameter system hinder seamless integration of LADSPA modules into the graphical user interface.

In general, the modules are implemented aliasing-free and highly speed-optimized to allow real-time applicability. Per module, multiple properties (phase in an oscillator, resonance frequency of filters, etc.) are exported and can be edited through the user interface to alter the synthesis functionality. A large part of the mutable module parameters is exported through separate input or output channels, to allow for maximum flexibility in the construction of synthesis meshes. BEAST generally does not differentiate between audio and control signals; rather, the control or audio character of a signal is determined by the way the user employs it.

3.1 Voice-Allocation

The maximum number of voices for the playback of songs and for MIDI controlled synthesis can be specified through the graphical user interface. Increasing this number does not necessarily result in an increase in processor load; it just sets an upper limit within which polyphonic synthesis is carried out. To reduce processor load most effectively, the actual voice allocation is adjusted dynamically during playback. This is made possible by running the synthesis core asynchronously to the rest of the application, and by preparing a processing plan which allows for concurrent processing of voice synthesis modules. This plan takes module dependencies into account, which allows distribution of synthesis module processing tasks across multiple processors. Execution of individual processing branches of this plan can be controlled with sample granularity. This allows suspension of module branches from inactive voices. The fine-grained control of processing activation, which avoids block quantization of note onsets, allows for precise realization of the timing specifications provided by songs.

4 User experience and documentation

Like most audio and synthesis applications, BEAST comes with a certain learning curve for the user to overcome. However, prior use of similar sequencing or synthesis applications may significantly contribute to reducing this initial effort.
The ever-growing number of language translations can also be of significant help here, especially for novice users. BEAST does not currently come with a comprehensive manual, but it does provide a “Quick Start” guide which illustrates the elementary editing functions, and the user interface is equipped with tooltips and other informative elements explaining or exemplifying the respective functionality. Beyond that, development documentation for the programming interfaces, design documents, an FAQ, Unix manual pages and an online “Help Desk” for individual user problems are provided, accessible through the “Help” menu.

5 Future Plans

Although BEAST already provides solid functionality to compose songs and work with audio projects, there is still a long list of todo items for future development. As with any free software project with an open development process, we appreciate contributions and constructive criticism, so some of the todo highlights are outlined here:

● Extend the set of standard instruments provided.
● Implement more advanced effect and distortion modules.
● Add a simple GUI editor for synthesis mesh skins.
● Implement new sound drivers, e.g. interfacing with Jack.
● New instrument types are currently being worked on, such as GUS Patches.
● Support for internal resampling is currently in the planning stage.
● Extend language bindings and interoperability.

6 Acknowledgements

Our thanks go to the long list of people who have contributed to the BEAST project over the years.

7 Internet Addresses

BEAST home page:
Contacts, mailing list links, IRC channel:
Open project discussion forums:

8 Abbreviations and References

ADSR – Attack-Decay-Sustain-Release, envelope phases for volume shaping.
BEAST – Bedevilled Audio System.
BSE – Bedevilled Sound Engine.
C++, C – Programming languages.
FAQ – Frequently Asked Questions.
GLib – Library of useful routines for C programming.
GObject – GLib object system library.
GPL – GNU General Public License.
GUI – Graphical User Interface.
GUS Patch – Gravis Ultrasound Patch audio file format.
IRC – Internet Relay Chat.
Jack – Jack Audio Connection Kit.
LADSPA – Linux Audio Developer's Simple Plugin API.
MIDI – Musical Instruments Digital Interface.
MP3, WAV, AIFF – sound file formats.
Ogg/Vorbis – open audio codec.

AGNULA Libre Music - Free Software for Free Music

Davide FUGAZZA and Andrea GLORIOSO
Media Innovation Unit - Firenze Tecnologia
Borgo degli Albizi 15
50122 Firenze
Italy
d.fugazza@miu.firenzetecnologia.it, a.glorioso@miu.firenzetecnologia.it

Abstract

AGNULA Libre Music is a part of the larger AGNULA project, whose goal as a European-funded (until April 2004) and mixed private/volunteer-driven (until today) project was to spread Free Software in the professional audio and sound domains; specifically, AGNULA Libre Music (ALM from now on) is a web-based database of music pieces licensed under a “libre content” license. In this paper,1 Andrea Glorioso (former technical manager of the AGNULA project) and Davide Fugazza (developer and maintainer of AGNULA Libre Music) will show the technical infrastructure that powers ALM, its relationship with other, similar initiatives, and the social, political and legal issues that have motivated the birth of ALM and are driving its current development.

Keywords

AGNULA, libre content, libre music, Creative Commons

1 The AGNULA project — a bit of history

In 1998 the situation of sound/music Free Software applications had already reached what could be considered well beyond the initial pioneering stage. At that time, the biggest problem was that all these applications were dispersed over the Internet: there was no common operational framework, and each and every application was a case study by itself. But, something happened.

1 This paper is Copyright © 2005 Fugazza, Glorioso and Copyright © 2005 Firenze Tecnologia. It is licensed under a Creative Commons BY-SA 2.0 License (see).
Free Ekanayaka2 is the current maintainer of the distribution. AGNULA has constituted a major step in the direction of creating a full-blown Free Software infrastructure devoted to audio, sound and music (Bernardini et al., 2004).

2 free@miu-ft.org

2 AGNULA Libre Music: socio-politics

In February 2003 Andrea Glorioso was appointed as the new technical manager of the AGNULA project, replacing Marco Trevisani, who had previously served in that position but was unable to continue contributing to the project due to personal reasons. This is not the place to explain in detail how the new technical manager of the AGNULA project tackled the several issues which had to be handled in the transition, mainly because of the novelty of the concept of “Free Software” for the European Commission (a novelty which sometimes resulted in difficulties in “speaking a common language” on project management issues) and of the high profile of the project itself, both inside the Free Software audio community — for being the first project completely based on Free Software and funded with European money — and in the European Commission — for being the first project completely based on Free Software and funded with European money (Glorioso, ).

The interesting point of the whole story — and the reason why it is cited here — is that the new Technical Manager, in agreement with the Project Coordinator (Nicola Bernardini, at the time research director of Centro Tempo Reale), decided to put more attention on the “social” value of the project, making the life of the project more open to the reference community (i.e. the group of users and developers gravitating around the so-called LA* mailing lists: linux-audio-announce,3 linux-audio-users,4 linux-audio-dev5) as well as creating an AGNULA community per se.

In September 2003, when the first idea of AGNULA Libre Music was proposed to the Project Coordinator by the Technical Manager for approval,6 the zeitgeist was ripe with the “Commons”. A number of relevant academic authors from different disciplines had launched a counter-attack against what was to be known as the “new enclosure movement” (Boyle, 2003): the attempt of a restricted handful of multinational enterprises to lobby (quite successfully) for new copyright extensions and a stricter application of neighbouring rights. The result of this strategy on behalf of the multinational enterprises of the music business was twofold: on the one hand, annoying tens of thousands of mostly law-abiding consumers with silly lawsuits that had no chance of standing in court;7,8 on the other hand, motivating even more authors to escape the vicious circle of senseless privatization that this system had taken to its extremes.

It seemed like a good moment to prove that AGNULA really wanted to provide a service to its community, and that it really had its roots (and its leaves, too) in the sort of “peer-to-peer mass production” (Benkler, 2002) that Free Software allowed and, some would argue, called for. After investing a major part of its human and financial resources in creating the project management infrastructure for working on the two GNU/Linux distributions the project aimed to produce, it was decided that a web-accessible database of music would be created, and the music it hosted would be shared and made completely open for the community at large. Davide Fugazza was hired as the chief architect and lead developer of AGNULA Libre Music, which saw the light in February 2004.9

2.1 Libre Content vs Libre Software

What might be missing in this short history of ALM is that the decision to allow the European Commission funding to be spent on this

6 The reader should remember that AGNULA, being a publicly financed project, had significant constraints on what could and could not be done during its funded lifetime — the final decision and responsibility towards the European Commission rested in the hands of the Project Coordinator.
7
8 In fact, it can be argued that the real strategic reason for these lawsuits was marketing/PR rather than substantial grounds, which does not make them less effective in the short term.
9 See

sub-project of the main AGNULA project was not an easy one, for several reasons:

• The European Commission, like all large political bodies, is under daily pressure from several different lobbies;10 the “all rights reserved” lobby, which is pressing for an extension of copyright length and of the scope of neighbouring rights, was particularly aggressive at the time the ALM project was launched (and still is, by the way). This made financing a project whose primary goal was to distribute content with flexible copyright policies questionable in the eyes of the EC (to say the least);

• Software is not content in the eyes of the European Commission, which maintains a very strict separation between the two fields in its financing programmes.11 Using money originally aimed at spreading Free Software in the professional audio/sound domain to distribute content was potentially risky, albeit the reasons for doing so had been carefully thought out;

• The licensing scheme which ALM applies, mainly based on the Creative Commons licenses,12 did not and does not map cleanly onto the licensing ontology of Free Software. Although there are striking similarities in the goals, the strategies and the tactics of the Creative Commons Corporation, the Free Software Foundation and other organizations which promote Free Software, not all the Creative Commons licenses can be considered “Free” when analyzed under the lens of “Software” (Rubini, 2004). This point is discussed in more detail in section 4.

3 AGNULA Libre Music: technique

To make a long story short, AGNULA Libre Music is a Content Management and online publishing system, optimized and specialized for the publication and management of audio files. Registered users are given complete access to their own material. The system takes care of assuring data integrity and the validation of all information according to the given specifications. Registration is free (as in free speech and in free beer) and anonymous — the only requirement is a valid e-mail address, to be used for automatic and service communications.

In the spirit of libre content promotion, no separation of functionalities between “simple users” and “authors” has been implemented: both classes of users benefit from the same features:

• Uploading and publishing of audio files with automatic metatag handling;
• Real-time download statistics;
• Creation of personalized playlists, to be exported in the .pls and .m3u formats, themselves compatible with the majority of players around (xmms,13 winamp (TM),14 iTunes (TM)15).

Other features, which are available to anonymous users too, are:

• A search engine with the possibility of choosing title, artist or album;
• An RSS 2.0 feed with enclosures, to be used with “podcasting” supporting clients;16
• For developers and for integration with other services, ALM offers a SOAP (Group, 2003) interface that allows queries to be executed remotely on the database.

3.1 The web and tagging engine

ALM uses the PostgreSQL database17 as the back-end and the PHP language18 for its web-enabled front-end. PHP also handles a page templating and caching system, through the Smarty library. File uploading on the server is handled through a form displayed in users' browsers; first HTTP handles the upload to a temporary location on the server, and then a PHP script copies the audio files to their final destination. It is in this phase that the MP3 or Ogg Vorbis metatags, if already available in the file, are read.

13 See
14 See
15 See
16 See
17 See
18 See
10 Please note that in this paper the term “lobby” is used with no moral judgement implied, meaning just a “pressure group” which tries to convince someone to apply or not apply a policy of a certain kind.
11 It could be argued that, in the digital world, the difference between data (“content”) and computer programs is rather blurred.
12 See

Besides, a form for the modification/creation of such tags is presented to the user. The system asks which license should be applied to the files — without this indication files are not published and remain in an “invisible” state, except for the registered user who uploaded them in the first place. To avoid abuses of the service and the uploading of material which has not been properly licensed for distribution, all visitors (even anonymous ones) can signal, through a script which is present on every page, any potential copyright violation to the original author. The script also puts the file into an “invisible” status until the author either reviews or modifies the licensing terms.

3.2 Metadata and license handling

To guarantee a correct usage of the files and an effective way to verify licenses, the scheme proposed by the Creative Commons project has been adopted (Commons, 2004). The scheme can be summarized as follows:

• using metatags inside files;
• using a web page to verify the license.

ALM uses the “TCOP” Copyright tag, which the ID3v2 metadata format provides (Nilsson, 2000), to show the publishing year and the URL where the licensing terms can be found. This page, which lives on the AGNULA Libre Music server, itself contains the URL of the Creative Commons licensing web page; moreover, it contains an RDF (Group, 2004) description of the work and of the usage terms. In this way it is possible:

• to verify the authenticity of the license;
• to make a standardized description available to search engines or specialized agents.

4 AGNULA Libre Music: legalities

4.1 Licensing policy

AGNULA Libre Music has decided to accept the following licenses to be applied to the audio files published and distributed through the system:

• Creative Commons Attribution-ShareAlike 2.019
• Creative Commons Attribution 2.020
• EFF Open Audio License21

The overall goal was to allow for the broadest possible distribution of music, leaving to the author the choice whether or not to apply a “copyleft” clause (Stallman, 2002a) — i.e. that all subsequent modifications of the original work should give recipients the same rights and duties that were given to the first recipient, thus creating a sort of “gift economy” (Stallman, 2002b), albeit of a very particular nature, possible only thanks to the immaterial nature of software (or digital audio files, in this case).

We chose not to allow “non-commercial uses only” licenses, such as the various Creative Commons licenses with the NC (Non Commercial) clause applied. The reasons for this choice are various, but basically boil down to the following list:

• Most of the AGNULA team comes from the Free Software arena; thus, the “non commercial” clause is seen as potentially making the work non-free. Further considerations on the difference between software and music, video or texts, and the different functional nature of the two sets, would be in order here; but until now, an “old way” approach has been followed;

• It is extremely difficult to define what “non commercial” means; this is even more true when considering the different jurisdictions in which the works will potentially be distributed, and the different meanings that the term “commercial” assumes. Besides, what authors often really want to avoid is speculation on their work, i.e. a big company using their music, while having no objection against smaller, “more ethical” entities doing so.22 However, “non commercial” licensing does not allow such fine-grained selection (Pawlo, 2004).

5 Future directions

AGNULA Libre Music is far from reaching its maximum potential. There are several key areas which the authors would like to explore; moreover — and perhaps much more interestingly for the reader — the AGNULA project has always been keen to accept help and contributions from interested parties who share our commitment to Free Software23 and the circulation of knowledge. More specifically, the areas which the ALM project is working on at the moment are:

• Integration with BitTorrent. BitTorrent24 has shown its ability to act as an incredibly efficient and effective way to share large archives (Cohen, 2003). AGNULA Libre Music is currently implementing a system to automatically and regularly create archives of its published audio files. The ALM server will act as the primary seeder for such archives.

• Integration with Open Media Streaming (OMS). Open Media Streaming25 is a free/libre software project for the development of a platform for the streaming of multimedia contents. The platform is based on full support of the IETF standards for real-time data transport over IP. The aim of the project is to provide an open solution, free and interoperable with the proprietary streaming applications currently dominant on the market. ALM is currently analyzing the necessary steps to interface its music archive with OMS, in order to have a platform completely based on Free Software and Open Standards to disseminate its contents. Besides, OMS is currently the only streaming server which “understands” Creative Commons licensing metadata, thus enabling even better interaction with the ALM metatag engine (De Martin et al., 2004).

19 See
20 See
21 See licenses/20010421 eff oal 1.0.html
22 The decision of what constitutes an “ethical” business vs a non-ethical one is of course equivalent to opening a can of worms, and will not be discussed here.
23 It should be noted that Free Software Foundation Europe holds a trademark on the name “AGNULA”; the licensing terms for usage of the trademark clearly state that only works licensed under a license considered “free” by the Free Software Foundation can use the name “AGNULA”.
24 See
25 See
6 Acknowledgements

As the reader may expect, projects such as AGNULA and AGNULA Libre Music are the result of the common effort of a very large pool of motivated people. Indeed, giving credit to every deserving individual who contributed to these projects would probably completely fill the space allotted for this paper. Therefore, we decided to make an arbitrarily small selection of those without whose help AGNULA and AGNULA Libre Music would probably not exist. First of all, we would like to thank Richard Stallman, without whose effort Free Software would not exist at all, and Lawrence Lessig, whose steadfast work on behalf of the Digital Commons has given justice to all the less known persons who worked on the subject in unriper times. Special thanks go to Roberto Bresin and to the Speech, Music and Hearing department of the Royal Institute of Technology (KTH), Sweden, for hosting the main AGNULA Libre Music server. Other people who deserve our gratitude are: Philippe Aigrain and Jean-François Junger, the European Commission officials who promoted the idea that AGNULA was a viable project against all odds inside the Commission itself; Dirk Van Rooy, later AGNULA Project Officer; Marc Leman and Xavier Perrot, patient AGNULA Project Reviewers; and Luca Mantellassi and Giovanni Nebiolo, respectively President of Firenze's Chamber of Commerce and CEO of Firenze Tecnologia, for their support.

References

Y. Benkler. 2002. Coase's penguin, or, Linux and the nature of the firm. The Yale Law Journal, 112.
N. Bernardini, D. Cirotteau, F. Ekanayaka, and A. Glorioso. 2004. The AGNULA/DeMuDi distribution: GNU/Linux and Free Software for the pro audio and sound research domain. In Sound and Music Computing 2004.
J. Boyle. 2003. The second enclosure movement and the construction of the public domain. Law and Contemporary Problems, 66:33–74, Winter-Spring.
B. Cohen. 2003. Incentives build robustness in BitTorrent. May.
Creative Commons. 2004. Using Creative Commons metadata. Technical report, Creative Commons Corporation.
J.C. De Martin, D. Quaglia, G. Mancini, F. Varano, M. Penno, and F. Ridolfo. 2004. Embedding CCPL in Real-Time Streaming Protocol. Technical report, Politecnico di Torino/IEIIT-CNR.
F. Déchelle, G. Geiger, and D. Phillips. 2001. DeMuDi: The Debian Multimedia Distribution. In Proceedings of the 2001 International Computer Music Conference, San Francisco, USA. ICMA.
A. Glorioso. Project management, European funding, Free Software: the Bermuda triangle? Forthcoming in 2005.
XML Protocol Working Group. 2003. SOAP version 1.2 part 0: Primer. Technical report, World Wide Web Consortium.
Semantic Web Working Group. 2004. RDF primer. Technical report, World Wide Web Consortium.
M. Nilsson. 2000. ID3 tag version 2.4.0 – main structure. Technical report.
M. Pawlo. 2004. International Commons at the Digital Age, chapter What is the Meaning of Non-Commercial? Romillat.
A. Rubini. 2004. GPL e CCPL: confronto e considerazioni. CCIT 2004.
R. Stallman. 2002a. Free Software, Free Society: Selected Essays of Richard M. Stallman, chapter What is Copyleft? GNU Books, October.
R. Stallman. 2002b. Free Software, Free Society: Selected Essays of Richard M. Stallman, chapter Copyleft: pragmatic idealism. GNU Books, October.

Where Are We Going And Why Aren't We There Yet?

A Presentation Proposal for LAC 2005, Karlsruhe

Dave Phillips
linux-sound.org
400 Glessner Avenue
Findlay OH USA 45840
dlphillips@woh.rr.com

Abstract

A survey of Linux audio development since LAC 2004. Commentary on trends and unusual development tracks, seen from an experienced user's perspective. Magic predictions and forecasts based on the author's experience as the maintainer of the Linux Sound & Music Applications website, as a professional journalist specializing in Linux audio, and as a Linux-based practicing musician.
Keywords: history, survey, forecast, user experience, magic

1 Introduction

Linux sound and music software developers have created a unique world populated by some remarkable programs, tools, and utilities. ALSA has been integrated with the kernel sources, the Rosegarden audio/MIDI sequencer has reached its 1.0 milestone, and Ardour and JACK will soon attain their own 1.0 releases. Sophisticated audio and GUI toolkits provide the means to create more attractive and better-performing sound and music programs, and users are succeeding in actually using them.

2 A Brief Status Report

The Linux Sound & Music Applications site is the online world's most popular website devoted to Linux audio software. Maintaining the site is an interesting task, one in which we watch the philosophy of "Let 10,000 flowers blossom!" become a reality. It can be difficult to distinguish between a trend and the merely trendy, but after a decade of development there are definite strong currents of activity. The past year has been a year of maturities for Linux audio software at both the system and application development levels. To the interested user, the adoption of the ALSA sound system into the Linux kernel means that Linux can start to provide sound services to whatever degree required. Desktop audio/video aficionados can enjoy better support for the capabilities of their soundcards. Users seeking support for more professional needs can find drivers for some pro-audio hardware. In addition to this advanced basic support there are patches for the Linux kernel that can dramatically reduce performance latency, bringing Linux into serious consideration as a viable professional-grade platform for digital audio production needs, at least at the hardware level. It is important to note that these patches are not merely technically interesting but that they are being used on production-grade systems now.
Furthermore, there is a continuing effort to reduce or eliminate the need for patching at all, giving Linux superior audio capabilities out-of-the-box. ALSA has passed its 1.0 release, as has Erik de Castro Lopo's necessary libsndfile. JACK is currently at 0.99, and the low-latency kernel patches have been well-tested in real-world application. The combined significance of these development tracks indicates that Linux is well on its way to becoming a viable contender in the sound and MIDI software arenas. Support for the LADSPA plugin API has been an expected aspect of Linux audio applications for a few years. LADSPA limits are clear and self-imposed, but users want services more like those provided by VST/VSTi plugins on their host platforms. Support for running VST/VSTi plugins under Linux has also inspired users to ask for a more flexible audio/MIDI plugin API. At this time the most likely candidate is the DSSI (Disposable SoftSynth Interface) from the Rosegarden developers. The DSSI has much to recommend it, including support for LADSPA and an interface for plugin instruments (a la VSTi plugins). In this author's opinion the union of ALSA, JACK, and LADSPA should be regarded as the base system for serious audio under Linux. However, the world of Linux audio is not defined only by the AJL alliance. Other interesting and useful projects are going on with broader intentions that include Linux as a target platform. The PortAudio/MIDI libraries have been adopted as the cross-platform solution to Csound5's audio/MIDI needs. Support for PortAudio has appeared in Hydrogen CVS sources, and it is already a nominal driver choice for JACK. GRAME's MidiShare is not a newcomer to the Linux sound software world, but it is beginning to see some wider implementation. Among its virtues, MidiShare provides a flexible MIDI multiplexing system similar to the ALSA sequencer (it can even be an ALSA sequencer client).
The system has been most recently adopted by Rick Taube's Common Music and the fluidsynth project. Sound support in Java has been useful for a few years. All too often more attention has been paid to Java's licensing issues than to its audio capabilities. Many excellent Java-based applications run quite nicely on Linux, including the jMusic software, JSynthEdit, and Phil Burk's excellent jSyn plugin synthesizer. At the level of the normal user the applications development track of Linux audio is simply amazing. Most of the major categories for music software have been filled or are being filled soon by mature applications. Ardour is designed for high-end digital audio production, Rosegarden covers the popular all-in-one Cubase-style mode, Audacity, Snd, and ReZound provide excellent editing software, Hydrogen takes care of the drum machine/rhythm programmer category, and MusE and Rosegarden cover the standard MIDI sequencer environment. Denemo and Rosegarden can be used as front-ends for LilyPond, providing a workpath for very high-quality music notation. Notably missing from that list are samplers and universal editor/librarian software for hardware synthesizers. However, the LinuxSampler project is rapidly approaching general usability, and while the JSynthEdit project's pace is slow it does remain in development. Some similar projects have appeared in the past year, but none have advanced as far as JSynthEdit. A host of smaller, more focused applications continues to thrive. Programs such as Jesse Chappell's FreqTweak and SooperLooper, Rui Capela's QJackCtl, and holborn's midirgui indicate that useful Linux audio software is becoming more easily written and that there is still a need for small focused applications. Of course the on-going development of graphics toolkits such as GTK, QT, and FLTK has had a profound effect on the usability of Linux applications. 
Csound represents yet another significant class of sound software for Linux, that of the traditional language-based sound synthesis environment. The currently cutting-edge Csound is Csound5, basically a complete reorganization and rewrite (where necessary) of the Csound code base. Improvements include extensive modularization, internal support for Python scripting, and an enhanced cross-platform build system. The Linux version of Csound5 is already remarkable, with excellent realtime audio and MIDI performance capability. One downside to the increasing capabilities of the Linux sound system is the increasing complexity of Linux itself. For most users it is decidedly uncomfortable and uninteresting to perform the necessary system modifications themselves, but happily the AGNULA/Demudi and Planet CCRMA systems have brought near-painless Linux audio system installation to the masses. However, given the resistance of said masses, we have seen the rise of the "live" Linux multimedia-optimized CD. These systems provide a safe and very effective means of introducing not only Linux audio capabilities but Linux in general, without alteration of the host system. The Fervent Software company has taken advantage of this trend and released their Studio To Go! commercially. I believe that these live CDs have enormous potential for Linux evangelization generally, and they may be a particular blessing for the expansion of interest in Linux audio capabilities.

3 Visibility Is Clear

Linux audio software is becoming a serious alternative for serious users. Composers of all sorts, pro-audio recordists, sound synthesis mavens, audio/video DJs and performance artists, all these and many other sound & music people are using this software on a productive daily basis. More "music made with Linux" has appeared in the past year than in the entire previous decade, and coverage of Linux audio regularly appears in major Linux journals.
Articles on Linux audio software have appeared in serious audio journals such as Sound On Sound and the Computer Music Journal. Some of the significant events acknowledging Linux audio software included Ron Parker's demonstrations of the viability of Ardour and JAMin in a commercial recording environment, Criscabello's announcement that he'd recorded Gilberto Gil with software libre, and the awards received for Hydrogen and JACK. Small steps perhaps, but they mark the steady progress of the development in this domain.

4 Some Problems

There is no perfection here. Lack of documentation continues to be a primary issue for many new users. Hardware manufacturers still refuse to massively embrace Linux audio development. Many features common in Win/Mac music software are still missing in their Linux counterparts. Many application types are still poorly represented or not represented at all.

Community efforts towards addressing documentation issues include various wikis (Ardour, Pd) and a few focus groups (Hydrogen, Csound5), and while many applications do have excellent docs, the lack of system-comprehensive documentation still plagues the new user, particularly when troubleshooting or attempting to optimize some aspect of an increasingly complex system. The problem is familiar and remains with us: writing good documentation is difficult, and there are too few good writers with the time and energy to spare for the work required. Nevertheless, the impact of the documentation wikis is yet to be felt; they may yet prove to be a salvation for the befuddled user.

Hardware support still remains problematic. ALSA support expanded to the Echo cards, and the AudioScience company announced native Linux driver support for their high-end audio boards, but no train of manufacturers hopped on the Linux sound support bandwagon. I'm not sure what needs to happen to convince soundcard and audio hardware manufacturers that they need to support Linux, and I believe that this issue needs some more focused discussion in the community. Limited hardware support is often worse than none at all. ALSA developers are working to provide sound services as complete as their Win/Mac counterparts, but there are still problems with regard to surround sound systems (3D, 5.1) and access to on-board DSP chipsets.

A glance through any popular music publication clearly shows that Linux audio software, wonderful as it is, definitely lacks the variety of the Win/Mac worlds. Users new to the Linux audio world often lament the absence of programs such as Acid, Fruity Loops, or Ableton Live, and I have already mentioned the dearth of editor/librarian software for hardware MIDI synthesizers. The situation is surely improving, but there are still several application types awaiting project involvement.

5 Summary Conclusions

The good news far outweighs the bad, and the bad news itself can be dealt with in productive ways. The development and user communities continue to thrive, long-term projects proceed, more people are coming into our world, and more music is being made. Coordinated efforts need to be made to bring about greater program documentation and manufacturer participation, but whatever difficulties we encounter, the history of Linux software development advises us to never say never.

6 Acknowledgements

The author thanks the entire community of Linux audio developers for their enormous contribution to music and sound artists the world over. The author also thanks the community of Linux audio software users for their experiences, advice, and suggestions regarding a thousand sound- and music-related topics. Finally, the author extends great gratitude to the faculty and staff at ZKM for their continued support for this conference.
https://www.scribd.com/document/116555005/Soft-Libre-Articulos-Electronica
Wiki polib / Home

polib

polib allows you to manipulate, create and modify gettext files (pot, po and mo files). If you find polib useful, please consider giving a donation via paypal, any contribution will be greatly appreciated.

Installation

Note: chances are that polib is already packaged for your linux/bsd system. If so, we recommend you use your OS package system; if not, then choose a method below:

Installing the latest polib version with pip

$ pip install polib

Installing the latest polib version from a source tarball

$ tar xzfv polib-x.y.z.tar.gz
$ cd polib-x.y.z
$ python setup.py build
$ sudo python setup.py install

Installing the polib development version

Note: this is not recommended in a production environment.

$ hg clone
$ cd polib
$ python setup.py build
$ sudo python setup.py install

Basic usage example

import polib

# load an existing po file
po = polib.pofile('tests/test_utf8.po')
for entry in po:
    # do something with your entry, for example:
    print entry.msgid, entry.msgstr

# adding an entry
entry = polib.POEntry(msgid='Welcome', msgstr='Bienvenue')
entry.occurrences = [('welcome.py', '12'), ('anotherfile.py', '34')]
po.append(entry)

# saving the modified po file
po.save()

# compile it to an mo file
po.save_as_mofile('tests/test_utf8.mo')

Documentation

polib is generously documented; you can browse the documentation online thanks to the amazing Read The Docs project.

Development

The bugtracker, wiki and mercurial repository can be found at the project's page. New releases are also published on the official Python Package Index.

Credits

Author: David Jean Louis.
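The basic usage example above revolves around msgid/msgstr pairs. For readers who have never looked inside a PO file, here is a deliberately naive sketch of how such entries map to a msgid-to-msgstr dictionary. This is only an illustration of the format's basic shape; polib's real parser additionally handles comments, plural forms, multi-line strings and escaping, so use polib for real files.

```python
def parse_po(text):
    """Naively map msgid -> msgstr. Illustration only; use polib for real files."""
    entries, msgid = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith('msgid '):
            msgid = line[len('msgid '):].strip('"')
        elif line.startswith('msgstr ') and msgid is not None:
            entries[msgid] = line[len('msgstr '):].strip('"')
            msgid = None
    return entries

sample = '''msgid "Welcome"
msgstr "Bienvenue"

msgid "Goodbye"
msgstr "Au revoir"'''

print(parse_po(sample))  # → {'Welcome': 'Bienvenue', 'Goodbye': 'Au revoir'}
```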
https://bitbucket.org/izi/polib/wiki/Home
A brief guide on how to write ClojureScript blog posts if you are using GitHub Pages.

by Andrea Richiardi October 19, 2015 Tags : Clojure Github ClojureScript Reagent

Ever wondered if you can embed ClojureScript in a GitHub blog? The answer is: of course you can! At the end of the day, ClojureScript just translates to plain old JavaScript that can be included in any web page. Here at Scalac, we are always trying to be innovative and creative in everything we do. This is why, when I heard that we were planning to write some blog posts on Clojure/ClojureScript, I asked myself: why can't I write a ClojureScript post in ClojureScript itself? The advantage of this is evident: our pages will be way more interactive and therefore more interesting to read, play with and learn from.

In order to start, I first want to give the reader some general notion of how the compiler works and why it is so powerful. I am not going to spend too much time on it because you can find more detailed articles out there. The most significant difference between Clojure and ClojureScript is that ClojureScript concretely isolates the code interpretation phase (reading, in Lisp terms) from the actual compilation phase (analysis and emission). The text that a programmer writes is read by the ClojureScript reader, then passed to the macro expansion stage and ultimately to the analyzer. The outcome is the abstract syntax tree (AST), a tree-like, metadata-rich version of the source. This is paramount for a bunch of obvious reasons, but mostly because you have separation of concerns between tools that understand text and tools that understand ASTs, like compilers. Indeed this is why ClojureScript can actually target disparate kinds of platforms: in order to emit instructions belonging to the X programming language you "only" need your own compiler to read the AST, prune it and emit. This is also how the Google Closure Compiler is currently employed in order to emit optimized JavaScript.
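To make that read-then-emit separation concrete, here is a toy illustration (written in Python for brevity): a reader that turns text into a nested-list "AST", and an emitter that walks the tree. This is only a sketch of the idea, not how the actual ClojureScript compiler is structured.

```python
# Read an s-expression like "(+ 1 (* 2 3))" into a nested list (the "AST"),
# then emit infix JavaScript-style output from the tree. The reader only
# understands text; the emitter only understands the tree.
def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        node = []
        while tokens[0] != ")":
            node.append(read(tokens))
        tokens.pop(0)  # drop the closing ")"
        return node
    return tok

def emit_js(node):
    if isinstance(node, list):
        op, *args = node
        return "(" + (" %s " % op).join(emit_js(a) for a in args) + ")"
    return node

ast = read(tokenize("(+ 1 (* 2 3))"))
print(emit_js(ast))  # → (1 + (2 * 3))
```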
GitHub blog posts are nothing but standard .md files that Jekyll builds locally and then serves. The Markdown syntax allows adding standard HTML tags and consequently JavaScript <script> tags. Therefore, embedding ClojureScript was relatively easy once I got past configuring it. This is mainly why I thought writing a blog post might be a good idea and save some folks' time.

I wanted to be able to plug my changes in transparently, still allowing posts to be created without Clojure. A simple start.sh was previously used to create .md files in Jekyll's _posts folder that then had to be worked on by the author and committed. Instead, I needed to create fully-fledged ClojureScript projects, potentially one per blog post, somewhere. I chose to hide them in a brand new _cljs folder as git submodules, and for this reason I added a few lines to start.sh for:

- Performing git submodule add.
- Materializing the project with lein new <template> <project-name>.
- As before, copying the .md template to Jekyll's _posts folder.

Now switching to the project folder was showing me the reassuring sight of the project.clj file. I wanted to be smarter about emitting JavaScript and tried to deploy directly inside Jekyll's scripts folder. According to my template default, the JavaScript was emitted to <project-name>/resources/public/js/compiled, so I changed my <project-name>/project.clj to:

Remember that we are inside a project folder under _cljs; therefore Jekyll's root is two directories up. Typically, only :output-to is significant for the final version, as this option contains the path of the generated (and minified) .js file. In jekyll-dev though, you can also specify :output-dir, which is where temporary files used during compilation are written, and :asset-path, which sets where to find :output-dir files at run-time. This way you have full visibility of the output. Now I was finally able to cd to my project, execute lein cljsbuild once jekyll, and see my generated .js in scripts. Hooray!
The last piece of the puzzle was to run the actual JavaScript code inside the Markdown blog page. There are many ways to do this, but the one I found most intuitive and straightforward was by using reagent. This is not a post about reagent per se (we wrote about it some time ago), but its lean and unopinionated architecture struck me as the way to go. Reagent, a React wrapper, dynamically mounts DOM elements and re-renders them when necessary, effectively hiding the complications of managing the React component life cycle. Consequently, on the HTML side I needed to: define a mounting point, include the compiled .js, and trigger the JavaScript main() which mounts my app. My .md became:

Note that it is very important to prepend a slash to script and replace dashes with underscores in the last JavaScript call. The reason is that the compiler always transforms namespace dashes into underscores. On the ClojureScript side instead I needed to ensure that the <div> with id cljs-on-gh-pages was correctly mounted:

Now every time the blog post page is shown, reagent intercepts the div and renders anything our main() returns, typically Hiccup-crafted React components, like page above. If you have had the patience of reading till the end, here is a reward for you: a ClojureScript REPL to toy with!

Thanks to highly skilled ClojureScript hackers and Clojure being a homoiconic language, it is not surprising that it can compile itself and run in a self-hosted environment. Self-hosted means that the language provides the environment in which it runs; in this case, that is the set of JavaScript functions performing the code evaluation, the JavaScript in turn being compiled from ClojureScript source. Convoluted and awesome. Not everything works in this ClojureScript-in-ClojureScript habitat at the moment. However, thanks to other inspiring implementations, here too you have access to Clojure's superpowers.
Note that the REPL has history (up to start, up/down to navigate) plus other handy shortcuts. Enjoy!
https://blog.scalac.io/2015/10/19/cljs-on-gh-pages.html
Friday, January 05, 2007

posted @ Friday, January 05, 2007 1:53 PM | Feedback (0) |

Monday, May 01, 2006

One.

posted @ Monday, May 01, 2006 12:15 PM | Feedback (2) |

Wednesday, April 19, 2006

Guys, go and check this out Thanks, a.

posted @ Wednesday, April 19, 2006 2:44 PM | Feedback (2) |

Thursday, April 13, 2006

I am really facing a tough weekend. The other day I had my Data Structures midterm exam. On Sunday I have the Micro-controllers midterm exam ( this one is really really really really hard ! especially with the 983578734 :D page resource ! ) , and the funniest thing is that I'm gonna be co-presenting in the ATLAS session on Saturday .. :D I called my father, and I told him that I might fail in university but am not gonna skip my session in MAD .. hehehe .. "wish he didnt read this post :s .. lol" a.

posted @ Thursday, April 13, 2006 4:17 PM | Feedback (1) |

Wednesday, April 05, 2006

Microsoft Academic Day "MAD" is an event arranged by .NET Clubs Champs around the country, and sponsored by Microsoft itself. The idea of such an event appeared last year, when .NET Clubs champs of Jordanian universities wanted to show off their knowledge and their ability to manage events. This year, Jordanian MAD is going to be on April 15th, in PSUT (Princess Sumaya University for Technology). It's going to be a full day (9 am - 7 pm) and the agenda will look like:

09:00 to 09:30 Microsoft keynote / .NET Clubs and theSpoke.net community
09:30 to 10:30 Developing websites using the FREE Visual Studio & SQL Express
10:30 to 11:30 Managing look, feel and layout with Visual Studio 2005 and ASP.NET
11:30 to 13:00 Lunch & coffee break
13:00 to 14:00 Vista demo / Videos
14:00 to 15:00 Creating Personalizable websites using Web Parts with Visual Studio 2005 and ASP.NET 2.0
15:00 to 15:15 Coffee break
15:15 to 16:15 DirectX 9.0 — Game development!
16:15 to 17:15 ATLAS
17:15 to 18:00 Refreshments: Meet the partners / submit your resume and feedback
Draw (Give away some exam vouchers and NFR copies of VS & SQL 2005)

Bander Al-Shrafi and I are going to present the (16:15 to 17:15) session, talking about the ATLAS framework for building ASP.NET 2.0 AJAX-based applications. Hope to see you there guys. Thanks, A.

posted @ Wednesday, April 05, 2006 10:27 AM | Feedback (4) |

Friday, March 31, 2006

Microsoft Corp. has launched a public bug database for Internet Explorer 7, which is currently in beta. Access to the Internet Explorer Feedback site requires a Passport account and signup is through Microsoft Connect. The new site is similar to Bugzilla, a bug-reporting site set up for Firefox by Mozilla Corp. "The intent of this work is to give everyone a better place to give IE7 feedback and to prepare the ground for future versions of IE," Microsoft said in its IE blog. The IE public database, launched on Friday, is not for reporting security issues, Microsoft said. Those problems should be reported through the Microsoft Security Response Center. -- Source: Information Week

posted @ Friday, March 31, 2006 3:43 PM | Feedback (0) |

Tuesday, March 28, 2006

Yes, I'm taking a Data Structures using C++ course in the university, and as you know am a Microsoft Maniac Guy - huh - thus, I decided to learn this course using C# .. hehe Have Fun

posted @ Tuesday, March 28, 2006 12:06 PM | Feedback (1) |

Friday, March 24, 2006

I was watching this video from channel9.com, and of course I was amazed by this device. Here's some pics.. watch and adore guys :P I'm wondering if there is a NOMINATION for beta testers for this device :D :D :D ... -- I would be the first applicant.
posted @ Friday, March 24, 2006 3:23 PM | Feedback (1) |

This release - the March CTP - is much different from the others: it came with new features - like gadgets and other stuff - and it also came with new documentation and resources + the new website interface : ) Actually, I did not have the time to explore the new bits, but I'm going to do this soon. I'd like to share these useful links with you guys : If you have more links guys, let me know about them, add them as comments.

posted @ Friday, March 24, 2006 2:49 PM | Feedback (0) |

Tuesday, March 07, 2006

I am facing some problems in Internet Explorer 7 Beta Two .. and here are some pics >> Tables and some text get nested ! >> In this pic, I typed something and tried to delete it by pressing backspace; you see the cursor going backwards, but nothing gets deleted unless I type again. Anyone facing these problems !?

posted @ Tuesday, March 07, 2006 1:37 PM | Feedback (2) |

Friday, March 10, 2006

CSS Properties Window for VS 2005 .. Did you try this thing before !? If you didnt try it yet, go and download it

posted @ Friday, March 10, 2006 11:01 AM | Feedback (0) |

Thursday, March 09, 2006

Here is a small piece of code for sending an email from an ASP.NET 2.0 page, using the System.Net.Mail namespace and a Gmail account. First of all we have to import this:

Imports System.Net.Mail

Then:

Dim mail As New MailMessage
Dim msgBody As String
Dim smtp As New SmtpClient

mail.From = New MailAddress("ur-gmail-account@gmail.com", "display name")
mail.To.Add("ur-email@host.com")
mail.Subject = "Subject"
mail.Body = msgBody
mail.IsBodyHtml = True ' This is to enable HTML in your email body
mail.ReplyTo = New MailAddress("reply-to-email-address") ' This is optional, it allows you to add a Reply-To email address.

smtp.Host = "smtp.gmail.com"
smtp.Port = 25
smtp.EnableSsl = True
smtp.Credentials = New System.Net.NetworkCredential("ur-gmail-account@gmail.com", "gmail-password")
smtp.Send(mail)

lblFlag.Text = "Your Message has been sent."
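As an aside, the same message construction can be sketched with Python's standard library. This is a rough, illustrative equivalent of the VB.NET snippet above: all addresses, the password and the port are placeholders, and the actual network send is left commented out so the sketch runs without a connection.

```python
# Build an HTML email with Python's stdlib, mirroring the VB.NET example:
# From/To/Subject/Reply-To headers, plus an HTML body alternative.
from email.message import EmailMessage

def build_mail():
    msg = EmailMessage()
    msg["From"] = "your-account@gmail.com"   # placeholder, like "ur-gmail-account"
    msg["To"] = "recipient@host.com"         # placeholder recipient
    msg["Subject"] = "Subject"
    msg["Reply-To"] = "reply-to@host.com"    # optional, like mail.ReplyTo
    msg.set_content("plain-text fallback")
    # add_alternative plays the role of IsBodyHtml = True
    msg.add_alternative("<b>HTML body</b>", subtype="html")
    return msg

# Sending would look roughly like this (left commented out; settings are
# placeholders, not tested against a live server):
# import smtplib
# with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
#     smtp.starttls()
#     smtp.login("your-account@gmail.com", "password")
#     smtp.send_message(build_mail())
```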
Note: We can make this email form AJAX-based by putting the content in an UpdatePanel control and adding the "send" button as a trigger.

posted @ Thursday, March 09, 2006 7:08 PM | Feedback (28) |

Wednesday, March 08, 2006

Umm, I think this is the only way that makes MSN search better than google.com !

posted @ Wednesday, March 08, 2006 1:21 PM | Feedback (0) |

Monday, March 06, 2006

I dont know .. I have been using MSN 8 for two days, umm, I noticed one thing: every time I sign in my display picture gets changed :D .. I dunno if it is a new feature to pick the pics randomly or it is a new BUG .. heheh .. Anyone facing what I face !?

posted @ Monday, March 06, 2006 5:54 AM | Feedback (8) |

It's 9:34 AM in here, I was just exploring some website, and I wanted to check out the weather forecast for today.. Usually I use Yahoo.com for this thing. But, it is my first time to see that YAHOO uses AJAX in fetching the weather! .. Guys, I'm afraid :P.. day after day we see AJAX everywhere.. I think one day our eyes will be AJAX-based, so we'll see everything and we'll never close our eyes even for seconds :P I LONG FOR POSTBACKS .. lool Have a nice day,

posted @ Monday, March 06, 2006 4:39 AM | Feedback (0) |
http://geekswithblogs.net/aymanfm/Default.aspx
A Beginner's Guide to Linear Regression Models in Python

Does your team prefer Python over R? Or are you looking to brush up on your Python skills? We'll walk through a simple example of a linear regression model using the scikit-learn library in Periscope's Python/R Integration. In this exercise, we will also follow guiding principles on creating training and testing datasets.

Here is some information from a fictional gaming company.

Let's get started with Python! In the example below, we use Python 3.6. To begin setting this up, let's first import our libraries, and define what we want our X (our explanatory variable) and Y (our response variable) to be.

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# .values.reshape(-1, 1) turns the column into the 2D array sklearn expects
X = df["total_plays"].values.reshape(-1, 1)
Y = df["total_revenue"].values

General guidance suggests that we use 70% of our data in our training dataset, and 30% for testing. It's also imperative that this assignment is random. Scikit-learn has a function called train_test_split specifically designed for this (which is why we imported it earlier)!

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3)

Great, now let's build our model using the training dataset. Python makes this simple with 2 quick lines of code.

lm = LinearRegression()
model = lm.fit(X_train, y_train)

Now let's inspect this model further. Surfacing the m and b of our model is quite simple! We just need to run a quick print statement:

print(model.coef_, model.intercept_)

This returns 0.56 for model.coef_ (which is our slope, m), and -3675 for model.intercept_ (which is our y-intercept, b). Another key metric when discussing linear regression models is the R^2 value. The closer this value is to 1, the more likely it is that the data is explained by the linear regression model. We call the R^2 value on the test dataset, as shown below.
print(model.score(X_test, y_test))

For the example above, this returns 0.81, which is a fairly strong R^2 value. We can take this further and look at the difference between the predicted y values and the actual y values. This difference is referred to as the residuals. The code below accomplishes this by (1) calculating the predicted values for Y given the values in X_test, (2) converting the X, Y and predicted Y values into a pandas dataframe for easier manipulation and plotting, and (3) subtracting the predicted from the actual y values to reach the residual value for each record in the test dataset.

predictions = lm.predict(X_test)
test = pd.DataFrame({"total_plays": X_test.flatten(), "actual_revenue": y_test.flatten(), "predicted_revenue": predictions.flatten()})
test["residuals"] = test["actual_revenue"] - test["predicted_revenue"]

Finally, we return the test dataframe back into Periscope:

periscope.output(test)

Let's look at the model, comparing it to our test data.

# plot the predicted trend line
plt.plot(X_test.flatten(), y_test.flatten(), 'bo', X_test.flatten(), predictions.flatten())
periscope.output(plt)

We can also plot the residuals by passing test into periscope.output and using the chart settings below. In this residual plot, we see that there is no trend in the residuals. This is further evidence that the data is well explained by a linear model.

New to Python? Did you find this post helpful? Let us know in the comments what you'd like to see! For more information on linear regression models in Python, I found this blog to be especially helpful! Prefer R? See the community post here for an R equivalent!
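To demystify what LinearRegression.fit computes for a single feature, the slope, intercept and R^2 have a simple closed form: m = cov(x, y) / var(x), b = mean(y) - m * mean(x), and R^2 = 1 - SS_res / SS_tot. A pure-Python sketch on toy data (not the fictional gaming dataset used above):

```python
# Closed-form simple linear regression on one explanatory variable.
def fit_line(xs, ys):
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    var = sum((x - x_mean) ** 2 for x in xs)
    m = cov / var                  # slope
    b = y_mean - m * x_mean        # intercept
    return m, b

def r_squared(xs, ys, m, b):
    y_mean = sum(ys) / len(ys)
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - y_mean) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]              # exactly y = 2x + 1
m, b = fit_line(xs, ys)
print(m, b, r_squared(xs, ys, m, b))  # → 2.0 1.0 1.0
```

On real, noisy data the R^2 would fall below 1, exactly as the 0.81 score above does.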
https://community.periscopedata.com/t/m2r2jm/a-beginners-guide-to-linear-regression-models-in-python
hi, I'm a high school student from China… my English is poor…
I know fmod from the url: my compiler is ms2003.net. I am a new learner, but I can not play mp3 files in my code… the code is as follows:

#include <windows.h>
#include <iostream>
#include "Fmod.h"
#include <conio.h>

using namespace std;

#pragma comment(lib, ".\\FmodVC.lib")

void main()
{
    char filename[255];
    cout << "please input the mod files complete paths....:";
    cin >> filename;
    cout << "press the Esc to end..." << endl;
    cout << "begin play......" << endl;

    if (!FSOUND_Init(44100, 32, 0))
    {
        return;
    }

    FMUSIC_MODULE* mod;
    mod = FMUSIC_LoadSong(filename);
    FMUSIC_PlaySong(mod);

    // start the spectrum analysis
    FSOUND_DSP_SetActive(FSOUND_DSP_GetFFTUnit(), TRUE);
    // ok but this time set it to TRUE... maybe it will work...

    while (GetAsyncKeyState(27) == 0)
    {
    }

    FSOUND_DSP_SetActive(FSOUND_DSP_GetFFTUnit(), FALSE);
    FMUSIC_StopSong(mod);
    FMUSIC_FreeSong(mod);
    mod = NULL;
}

thank you all…

- qxtianlong asked 13 years ago

You forgot to init and close FMOD using FSOUND_Init and FSOUND_Close.

- KarLKoX answered 13 years ago

Actually, it looks like you're initializing FMOD okay, but you're trying to use the FMUSIC functions to play MP3s. You need to look at the FSOUND functions. The FMUSIC functions play sequenced files, such as MIDIs, MODs, XMs, etc. FSOUND functions play sampled files, such as WAVs, MP3s, Oggs, etc. Oh, but KarLKoX is right that it doesn't look like you're shutting FMOD down properly with FSOUND_Close(). Good luck!

- Guy

- Adiss answered 13 years ago

thank you KarLKoX and Adiss…
^_^ my new code…

#include <windows.h>
#include <iostream>
#include "Fmod.h"
#include <conio.h>

using namespace std;

#pragma comment(lib, ".\\FmodVC.lib")

void main()
{
    char filename[255];
    cout << "please input the mp3 files complete paths:";
    cin >> filename;
    cout << "press the Esc to close" << endl;
    cout << "begin play......" << endl;

    if (!FSOUND_Init(44100, 32, 0))
    {
        return;
    }

    FSOUND_SAMPLE* handle;
    handle = FSOUND_Sample_Load(0, filename, 0, 0, 0);
    FSOUND_PlaySound(0, handle);

    while (GetAsyncKeyState(27) == 0)
    {
    }

    FSOUND_Sample_Free(handle);
    FSOUND_Close();
}

- qxtianlong answered 13 years ago
https://www.fmod.org/questions/question/forum-16552/
scene vs. ui

I'm working on a countdown clock with two timers. Each timer has a button, and when the button is pushed, the associated timer starts (and the other timer stops). This is a pretty simple program, but I'm wondering what the best way to do this in Pythonista would be. Should I use a ui interface or a scene interface? The ui seems more natural, but it doesn't seem to have an "update" method, so I don't know how to count time when either timer is running... any suggestions are appreciated! Thanks!
- AtomBombed

With ui, you can create your own custom view by having a class that inherits the ui.View object's attributes. With your custom view class, you can then add a draw method, which is like update from scene, but for that view specifically. The view could be the root view of your clock. Example:

import ui

class ClockView (ui.View):
    def __init__(self):
        # operates just like "setup" from the scene module. Feel free to add arguments.
        pass

    def touch_began(self, touch):
        # called when a touch on the view is recognized.
        pass

    def touch_moved(self, touch):
        # called every time the touch moves across your view.
        pass

    def touch_ended(self, touch):
        # called when the touch leaves the screen (ends).
        pass

    def draw(self):
        # operates just as "update" works in the scene module.
        pass

Hopefully that helps. Look at the following discussions to see how you can implement a clock in ui.

I feel that it is better to implement the clock using scene, and if you want that to be part of your ui application, use SceneView. If you want to know how to use SceneView, you can look at the following code. The above code is discussed here.
- chriswilson

While ui is more natural as an interface, scene has some useful methods that you can use to make a timer, and then bundle your scene in a SceneView in a ui, like @abcabc suggested. For example, scene.Scene.t is the time (in seconds) since a scene was started. Setting a variable equal to this will give you a 'timestamp' to compare the current scene.Scene.t to in the update method. In my (limited) experience, this causes fewer problems than using things like time.sleep(), ui.delay() or recursion. I hope this helps!

Thanks for the comments — very helpful. @chriswilson 's comment was going to be my next question: how often do the draw/ui or update/scene routines get called? His comment suggests that you don't know, and need to calculate it by keeping track of the time since the last call... is that right? An unrelated question: if I use the scene approach, is there a way to load a .pyui interface and use it with the scene architecture? Thanks again for all your help!
- chriswilson

I think the update() method gets called once per frame; at least 60 times per second usually, but I understand this is variable, so it's not useful in itself for counting. As regards your second question, you can certainly use a SceneView to put a scene within a ui. I'm not sure about the other way around, but I think it can be done. I'll mess around with this and let you know if I work it out!

With Scene, the default rate is 60 Hz, but you can divide that down using the frame_interval argument to run, or the attribute of SceneView. If you do some long processing in the draw methods, it could be slower, but I don't think it is ever faster. ui does not provide a timer, so you would use a threading.Timer, or ui.delay, or one of several other methods which give you some degree of control over the rate. Scene has a view attribute to which you can add other ui components. SceneViews can be added to ui.Views.

Thank you, everyone! @chriswilson @JonB @abcabc @AtomBombed
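The timestamp-comparison idea from the replies above can be kept separate from any particular framework: in plain Python, the two-timer (chess-clock) logic is just start/stop bookkeeping, which a draw or update method, or a ui.delay callback, can then display. This is a sketch of that idea only — the class and function names are my own, not part of Pythonista:

```python
import time

class CountdownTimer:
    """Count-down bookkeeping by timestamp comparison: instead of being
    ticked by an update loop, the timer records when it was started and
    subtracts the elapsed time when asked."""

    def __init__(self, seconds):
        self.remaining = float(seconds)
        self._started_at = None          # None means the timer is paused

    def start(self, now=None):
        if self._started_at is None:
            self._started_at = time.monotonic() if now is None else now

    def stop(self, now=None):
        if self._started_at is not None:
            now = time.monotonic() if now is None else now
            self.remaining -= now - self._started_at
            self._started_at = None

    def value(self, now=None):
        """Remaining seconds right now, whether running or paused."""
        if self._started_at is None:
            return self.remaining
        now = time.monotonic() if now is None else now
        return self.remaining - (now - self._started_at)

def toggle(starting, stopping, now=None):
    """Button-callback logic for a two-timer clock: starting one
    timer always stops the other."""
    stopping.stop(now)
    starting.start(now)
```

Passing explicit `now` values (as the tests below do) keeps the logic deterministic; in a real app you would omit them and let `time.monotonic()` supply the clock.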
https://forum.omz-software.com/topic/3322/scene-vs-ui
In this tutorial, you will learn the concept of C programming strings with relevant examples.

C Strings

A string is a sequence of characters which is treated as a single data item in C. It can also be described as an array of characters, and any group of characters defined between double quotation marks is a string constant. "try2program" is an example of a string.

Now, try2program is a string with a group of 11 characters. Each character occupies 1 byte of memory. These characters are placed in consecutive memory locations; after all, a string is an array of characters.

address of "t" = 1000 (say)
address of "r" = 1001
address of "y" = 1002
address of "2" = 1003
address of "p" = 1004
address of "r" = 1005
address of "o" = 1006
address of "g" = 1007
address of "r" = 1008
address of "a" = 1009
address of "m" = 1010

String and memory

As we know, a string is a sequence of characters and is placed in consecutive memory locations. To indicate the termination of a string, a null character is always placed at the end of the string in memory. So the above string is stored in memory as the characters 't' through 'm' in consecutive locations, followed by the null character '\0'.

Declaring and initializing string variables

String is not a data type in C, so character arrays are used to represent strings in C and are declared as:

char string_name [size];

The length of a string is always greater than the number of string characters by one, because when a compiler assigns a character string to a character array, it automatically supplies a null character ('\0') at the end of the string.

Initializing string arrays

Strings in C can be initialized in the following ways:

char string_name [12] = "try2program";
char string_name [] = {'t','r','y','2','p','r','o','g','r','a','m','\0'};

In the second example, the size of the array will be determined automatically by the compiler.

Note: Difference between 0, '0', '\0', "0".

0    // an integer value
'0'  // a character constant
'\0' // an escape sequence representing the null character
"0"  // a string containing one character

How to read strings from users?
Basically, there are two ways of reading a string from the user:
- using the scanf function
- using the getchar and gets functions

Using the scanf function

Reading a string with the input function scanf and the %s format specification is the most familiar method. However, with this scanf function we can only read a word, because it terminates its input at the first whitespace it encounters. For example:

char name [10];
scanf("%s", name);

Note: Normally, an ampersand is used before variable names in scanf, but in the case of character arrays we don't need an ampersand (&), because the array name itself acts as a pointer to the first element's location.

Using the getchar and gets functions

The scanf function is used to read only a word; to read a whole line of text, either getchar or gets is used. getchar can be used to read successive single characters from the input until a newline character '\n' is encountered, and then a null character is inserted at the end of the string.

Syntax of the getchar function:

char ch;
ch = getchar();

Note: the getchar function has no parameters.

Syntax of the gets function:

char name[20];
gets(name);

Example: C program to read a word and a line of text entered by the user

#include <stdio.h>

int main()
{
    char word[20];
    char line[50], ch = 0;   /* initialize ch so the first loop test is defined */
    int a = 0;

    printf("Enter name :");
    while (ch != '\n')       /* terminates when the user hits enter */
    {
        ch = getchar();
        line[a] = ch;
        a++;
    }
    line[a] = '\0';
    printf("Name =%s", line);

    printf("Enter name :");
    scanf("%s", word);       /* only reads a word and terminates at whitespace */
    printf("Name = %s \n", word);

    return 0;
}

Output

Enter name: Denise Ritchie
Name = Denise Ritchie
Enter name : Denise Ritchie
Name = Denise

Explanation of the program

In the above program, the while loop continues until the program encounters the newline character '\n'. While the user enters the string, getchar (and gets) also reads whitespace and only terminates when the user hits enter. So the whole line is printed.
However, when we read input with the scanf function, no matter how long a line the user enters, reading terminates once whitespace is encountered. As a result, with the getchar loop the program can read the whole name Denise Ritchie, while with the scanf function it can only read Denise, since there is whitespace after it. Moreover, there are certain string handling functions in C to manipulate strings, and these functions are explained in our next chapter.
http://www.trytoprogram.com/c-programming/c-programming-strings/
Nov 28 2017 05:53 AM

I was looking for a crawled property for the list item / document version — I stumbled on a few, such as _Version, Version, Version0, and ows_DocVersion. I have mapped them to the tenant's RefinableDecimal00. Of course, I have re-indexed the site collection. However, a search query for the property returns an empty value. For example, here is a test KQL query I am using in the Chrome SP Editor -> PnP JS Console:

import pnp, { SearchQuery, SearchResults } from "pnp";

pnp.sp.search(<SearchQuery>{
  Querytext: "Path:\"\" AND Title:\"My Item's Title\"",
  RowLimit: 16,
  SelectProperties: ["Title", "RefinableDecimal00", "Path"],
  SortList: [{ Property: 'Created', Direction: 1 }]
}).then((response) => {
  console.log(response.PrimarySearchResults)
})

Has anyone found a way to query for the list item version?

Nov 30 2017 07:51 AM

Posting our solution, as somebody else might find it useful:

1. We mapped the UI Version property (ows_q_TEXT__UIVersionString) to one of the Refinable Decimals.
2. We gave the refinable decimal an alias (SharePointVersion, in our case).

So we were able to sort and filter the results via KQL, i.e. SharePointVersion>=2.1 was giving us items with a version greater than or equal to 2.1 — it seems that SharePoint has successfully managed to parse the value into the decimal property.
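Once the alias is in place, it behaves like any other queryable managed property, so it can be combined with ordinary KQL conditions. A couple of illustrative queries (the alias is the one from the solution above; the title and version values are just examples):

```
Title:"My Item's Title" AND SharePointVersion>=2.1
SharePointVersion>=1 AND SharePointVersion<4
```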
https://techcommunity.microsoft.com/t5/sharepoint-developer/filter-by-item-version-in-kql/td-p/131857
I have re-looked at this issue today with the following results.

* Problem Summary: input-pending-p yields an incorrect value 50% of the time in emacs-21.1 and emacs-21.2 under some window managers.

* Result Summary: I have been unsuccessful in solving the problem. I have made all the efforts that I can think of, with the result that it appears the problem lies in code called by get_input_pending in keyboard.c. The problem only occurs when running under some window managers. Truly simple demonstration code is not obvious. The complex demonstration code is available to anyone; this complex code is simple to install and uninstall.

* My procedure was as follows.
** run under Gnome
** run one copy of emacs-21.1 as the testpiece
** run another emacs (which was emacs-21.2) running gdb, and attach the testpiece via its pid.
** do not use the mouse at all

* Following this procedure gives the sequence of outputs from input-pending-p as t t t t t ...
* Running the emacs-21.1 without gdb gives the sequence t nil t nil t ...
* The sequence from emacs-21.1 running under twm is nil nil nil nil nil ... and this is correct under the test conditions.
* The sequence from emacs-20.7 running under Gnome is nil nil nil nil nil ... and this is correct under the test conditions.
* Thus (my) using gdb modifies the problem so that I am unable to make comparisons.
* I put the following code into keyboard.c. This gives results entirely consistent with what is seen in Lisp. I conclude that the problem lies in get_input_pending.
/* this function is used below */
void
FYS (str)
     char *str;
{
  Lisp_Object dummy;
  Lisp_Object * dumref;
  dummy = build_string (str);
  dumref = &dummy;
  Fmessage (1, dumref);
}

DEFUN ("input-pending-p", Finput_pending_p, Sinput_pending_p, 0, 0, 0,
  "T if command input is currently available with no waiting.\n\
Actually, the value is nil only if we can be sure that no input is available.")
  ()
{
  if (!NILP (Vunread_command_events) || unread_command_char != -1)
    return (Qt);

  get_input_pending (&input_pending, 1);
  input_pending > 0 ? FYS ("true") : FYS ("false");
  return input_pending > 0 ? Qt : Qnil;
}

* It is possible to upset the t nil t nil t ... sequence by using "tricks", in particular by inputting two keystrokes in rapid succession ("double-clicking" on the keyboard). But this leads me to no conclusion except to reinforce the idea that the problem is a timing issue. I think that interrupt_input is always 1, causing the early return from get_input_pending, at least in the attached emacs-21.1 case.

static void
get_input_pending (addr, do_timers_now)
     int *addr;
     int do_timers_now;
{
  /* First of all, have we already counted some input?  */
  *addr = !NILP (Vquit_flag) || readable_events (do_timers_now);

  /* If input is being read as it arrives, and we have none, there is none.  */
  if (*addr > 0 || (interrupt_input && ! interrupts_deferred))
    return;

  /* Try to read some input and see how much we get.  */
  gobble_input (0);
  *addr = !NILP (Vquit_flag) || readable_events (do_timers_now);
}

* Also, I again looked into producing a simple example; I do not see how to do that. I can demonstrate, what really is deducible anyway, that the problem only occurs when I involve X: I have a toggle in the code that uses either popup frames or windows in the parent emacs; the latter continues to work perfectly.
http://lists.gnu.org/archive/html/emacs-devel/2002-03/msg00509.html
Perhaps that example is a little extreme, but it does demonstrate what most people do; code without testing. They make major changes without a clear and concrete way to verify the changes did not break existing functionality. This results in a lack of confidence in the code after making the changes, because there are not enough test cases written to cover every possible aspect.

In this article, you will be introduced to the concept of writing unit tests for your projects, and going a step further, to begin driving your development process with the test first, code later concept. And in order to introduce an interesting project to test drive with, you will be exposed to expression objects in the latter part of the article, where each action of an object does not result in a copy of itself, but merely an object representing that expression.

Motivational example

For some obscure reason, imagine there is a need in your application to have arrays of numeric values. The arrays as a whole must be able to perform multiplication by a scalar value, as well as addition and multiplication of two arrays of the same size and type. For even more obscure reasons, your project manager has decided it must be a template class so that it can be reused for other types and sizes. For further obscure reasons, he does not want to use an external library, but wants it written by you. (Gotta 'love' the management.)

Citing an example of how it should work,

array<int, 100> x, y;
x = 5 * x + x * y;

Going through the user story, a list of the following requirements can be quickly generated:

- Template array class that allows different types and sizes
- Allows multiplication of an array with a scalar value
- Allows assignment of arrays of the same size and type
- Allows natural addition expression of two arrays of same size and type
- Allows natural multiplication expression of an array with other arrays of the same size and type

Beginning development on the new object

On receiving such a requirement, most programmers' first impulse is to simply fire up their editor and start chunking out code like a factory. However, I would like to restrain you from doing so, and to instead take a step back, breathe in, and begin considering how you can test each requirement. "Why?", you may ask.
Citing the earlier introductory example, tests for each aspect of functionality are important, because they tell you that the code is still working even after the changes you just made. They also tell the customer that your code is still working, serving to boost their confidence in you. Knowing that your code is still working, you can carry on adding more new things, with their corresponding new tests. And the cycle goes on.

Writing test cases that handle each requirement also ensures that we strictly follow the requirements handed to us, and secondly, writing test cases first ensures that we do not write unnecessary code. In most cases, starting work on a class without test cases is too much leeway given to programmers. They soon get creative and start introducing unnecessary features and functions, and what should have been a slim, thin, library class becomes bloatware. Secondly, some who start coding the classes first eventually produce classes that are hard to use, and similarly hard to test.

The development platform will be Microsoft Visual C++ 2003, using CppUnit as our unit testing tool. To begin with the development, let's begin with a barebone skeleton suite for the unit test cases for our project. CppUnit is not the topic to cover here, though, so I will simply provide the code required below. Note that for demonstration purposes, namespaces will not be used for the suite of test cases or for the Array class. I would, however, strongly encourage the use of namespaces in your own development.

//--------------main.cpp----------------
#include "array_test.hpp"
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <iostream>

int main(int argc, char** argv)
{
    CppUnit::TextUi::TestRunner runner;
    CppUnit::TestFactoryRegistry& registry = CppUnit::TestFactoryRegistry::getRegistry();
    runner.addTest(registry.makeTest());
    bool success = runner.run("", false);
    std::cout << "Press enter to continue..." << std::endl;
    std::cin.get();
    return 0;
}

//--------------array_test.hpp----------------
// Generated by Henrik Stuart's .hpp generator:
#ifndef array_test_hpp_bf68b031_b047_4d13_92d8_2736b8dded2a
#define array_test_hpp_bf68b031_b047_4d13_92d8_2736b8dded2a

#include <cppunit/extensions/HelperMacros.h>

class array_test : public CppUnit::TestFixture {
public:
private:
    CPPUNIT_TEST_SUITE(array_test);
    CPPUNIT_TEST_SUITE_END();
};

CPPUNIT_TEST_SUITE_REGISTRATION( array_test );

#endif // array_test_hpp_bf68b031_b047_4d13_92d8_2736b8dded2a

After creation of the above two files, running the resulting application (after linking to the cppunit library you built with the source from the cppunit download) would show a console screen saying "OK (0 test)".

First Test Case

Let's see how we can start implementing the first requirement... oops, I mean the test for the first requirement.

Template array class that allows different types and sizes

That's actually pretty easy to write a test case for. To fulfill the requirement, we simply need to be able to declare a template array class for different types and sizes.

void test_declaration()
{
    array<int, 100> a;
    array<double, 5> b;
}

Due to the lack of full-fledged reflection in C++, we would need to, after adding this test case to the array_test class, manually add an entry to the CPPUNIT_TEST_SUITE section. The resulting CPPUNIT_TEST_SUITE section should look as follows.

CPPUNIT_TEST_SUITE(array_test);
CPPUNIT_TEST(test_declaration);
CPPUNIT_TEST_SUITE_END();

Note that in languages that support a more powerful version of reflection, like Java and C#, there is no need for this manual registration in CPPUNIT_TEST_SUITE.

There, we have our first test case. So let's compile it. Compilation fails, as expected. Why did we compile even when we knew we would fail the compilation? The compiler acts as a to-do list for us. Our job is to simply resolve all the errors (and even warnings, for those more zealous programmers out there), no more and no less.
Looking at the list of compilation errors, we can simply deduce that they are all due to the missing array class. No worries, let's start coding the array class.

//--------------array.hpp----------------
// Generated by Henrik Stuart's .hpp generator:
#ifndef array_hpp_a650bf08_e950_4ea1_99bd_a579ae1d2179
#define array_hpp_a650bf08_e950_4ea1_99bd_a579ae1d2179

#include <cstddef>
#include <algorithm>

template <typename T, std::size_t Size>
class array {
};

#endif // array_hpp_a650bf08_e950_4ea1_99bd_a579ae1d2179

You may have noted that this class does nothing. In fact, it is not even an array, but simply an empty shell! It is, however, the perfect class. It does nothing more than it should right now, which is simply to eliminate the compilation error. With the inclusion of this file in our array_test.hpp, we managed to receive zero compilation errors (though we did get two unreferenced local variable warnings). After running the test case, you should get an OK (1 test). Great, we are progressing!

Moving along

Moving along to the next requirement,

Allows multiplication of an array with a scalar value

So how are we going to test this requirement? As easy as the first case, it appears. (Remember to add the function to be tested to the CPPUNIT_TEST_SUITE section, as with all the following test functions.)

void test_scalar_multiplication()
{
    array<int, 100> a;
    a * 5;
}

And so we hit the compile button again, and the compilation error comes as no surprise. It can't find the * operator which multiplies an array with a scalar, so we go ahead and add the operator in the array class.

void operator*(T const& t) const {}

Yes, the operator is even more meaningless than the class. It simply does nothing, and returns nothing. Yet it serves its purpose for now, as the program compiles fine. Running the suite of test cases gives us the green bar. All is well, but we realize the test case is pretty dumb. We need a way to verify that the array is working. How, then, can we verify that the scalar multiplication works?
We need to assert it, of course.

void test_scalar_multiplication()
{
    array<int, 100> a;
    for (int i = 0; i < a.size(); ++i)
        a[i] = i;
    a = a * 5;
    for (int i = 0; i < a.size(); ++i)
        CPPUNIT_ASSERT(a[i] == i * 5);
}

Rethinking the testing, a new test case is developed. As expected, compilation fails. The new test case has forced us to introduce more functions to the class, but they are as we would most likely use them: a size member function, and a subscript member operator. We could just introduce them simply, but I would feel safer knowing that these functions work properly when tested independently. So, we take a step back, comment out the previous test case, and instead introduce a new test case for the size function first.

void test_size()
{
    array<int, 100> a;
    CPPUNIT_ASSERT(a.size() == 100);
}

And we compile again (I remind you of this repetitive step to reinforce what we are doing here), with the compiler complaining of the lack of the member function size. A typical implementation would be as below,

const std::size_t size() const { return Size; }

But that is only when the implementation is crystal clear in your mind. A more TDD approach would be to simply make it work for our test case. Remember, resolve the errors, no more and no less.

const std::size_t size() const { return 100; }

After running the test case, and getting the expected result, we know we have to make size work for different template arrays. So we add in another assertion in the test_size function.

array<int, 5> b;
CPPUNIT_ASSERT(b.size() == 5);

Before you make the change to the size function, run the suite of test cases first. Expect it to fail. If it does not, there is something wrong somewhere. Having it run successfully when you expect it to fail is as discomforting as having it fail when you expect it to succeed. In any case, you should have gotten a similar error message as below,

!!!FAILURES!!!
Test Results:
Run: 2  Failures: 1  Errors: 0

1) test: array_test.test_size (F) line: 22 e:\development\projects\library\array_test.hpp
"b.size() == 5"

To make the code work, we make the obvious change,

const std::size_t size() const { return Size; }

Compile and run. Green bar. Next we need the subscript operator. It should support reading and writing. An obvious test case would be as follows:

void test_subscript_read()
{
    array<int, 100> a;
    for (int i = 0; i < a.size(); ++i)
        CPPUNIT_ASSERT(a[i] == 0);
}

Compile, and fix the error. We need the subscript operator. Do we actually need to introduce the member array in the class? Not yet, actually. A simple hack would have made the compiler happy.

const T operator[](std::size_t i) const { return 0; }

Compile and run. Green bar. Now we need to test the writing part of the subscript operator. The test case should basically be a simple write, read and assert.

void test_subscript_write()
{
    array<int, 100> a;
    for (int i = 0; i < a.size(); ++i)
        a[i] = 0;
    for (int i = 0; i < a.size(); ++i)
        CPPUNIT_ASSERT(a[i] == 0);
}

Compile it, and the compiler complains of '=' : left operand must be l-value. Remember, we had returned a const T from the subscript operator, so we actually need one that returns a value that we can assign to. So do we need the internal array now? Actually, no, not yet. Why, after this long, are we still not introducing the member array variable? The reason is that we must always follow the rule of not introducing new features until we must. That way, we get away with the slimmest class interface, as well as the most return on investment during production, since there are cases where you introduce a new feature early that turns out to be unnecessary and get no return on the investment. So, to resolve our current compiler error,

public:
    T& operator[](std::size_t i) { return temp_; }
private:
    T temp_;

Compile and run. Red bar! An assertion error!

!!!FAILURES!!!
Test Results:
Run: 4  Failures: 1  Errors: 0

1) test: array_test.test_subscript_read (F) line: 28 e:\development\projects\library\array_test.hpp
"a[i] == 0"

A quick look tells us that the non-const version is actually called in both cases, and temp_ has not been properly initialized. It turns out we need a constructor for our array class after all.

explicit array() : temp_(0) {}

Compile and run. Green bar, finally. Reviewing the implementation and test case, it is obvious that the subscript is not working as intended. We need to further assert the test case.

for (int i = 0; i < a.size(); ++i)
    a[i] = i;
for (int i = 0; i < a.size(); ++i)
    CPPUNIT_ASSERT(a[i] == i);

Compile and run. Red bar. So, we need the array after all. Let's introduce the variable first, as v_, and remove temp_.

T v_[Size];

Compile first so we get a list of errors for the missing references to temp_. Remove and replace those with v_.

explicit array() { std::fill_n(v_, Size, 0); }
T& operator[](std::size_t i) { return v_[i]; }

Compile and run. Green bar. We could now move back to the scalar multiplication. But wait — why did it run properly, when we have a wrong implementation as the const version of the subscript? To get the bug to shout at us, we need to manifest it as a test case. We will add an additional assertion in test_subscript_write.

array<int, 100> const& b = a;
for (int i = 0; i < b.size(); ++i)
    CPPUNIT_ASSERT(b[i] == i);

Hooray! Red bar! Let's fix it!

const T& operator[](std::size_t i) const { return v_[i]; }

Hooray! Green bar! Back to scalar multiplication!
Under most circumstances I might generate an assignment operator that takes in a void, but that function would be meaningless in other operations. So I went ahead and let operator* returned a new array object. array operator*(T const& t) const { return array(); } Compile and run. Red bar. That is expected, because no scalar multiplication was actually performed. So we need to rework the implementation of operator* to perform a multiplication of 5. Why did I say 5, specifically? Because that is the fix that will make this test case work, so we should do that for now. array operator*(T const& t) const { array tmp; for (std::size_t i = 0; i < size(); ++i) tmp = operator[](i) * 5; return tmp; } Notice that also the function is expressed in terms of other member functions. In fact, it has given us a strong hint that we can create operator* not as a member function, but as a free function. template inline array operator*(array const& a, T const& t) { array tmp; for (std::size_t i = 0; i < tmp.size(); ++i) tmp = a * 5; return tmp; } template inline array operator*(T const& t, array const& a) { return operator*(a, t); } Compile and run. Green bar. But is it working yet? We know it's not, so we need a better assertion test. for (int i = 0; i < a.size(); ++i) a = i; a = 8 * a; for (int i = 0; i < a.size(); ++i) CPPUNIT_ASSERT(a == i * 8); Note the differing expression 8 * a, instead of the usual array * scalar format. This is to test and verify that the other operator* works. Compile and run. Red bar. Fix the scalar multiplication function to make use of the variable t now tmp = a * t; Compile and run. Green bar. Rest of the list Let's review the list again. - Wait a minute, didn't we just perform assignments of arrays earlier? Well it turns out that C++ has synthesize (as it should) the default assignment operator for us (member wise copy semantics), and it worked for our purpose. So, crossing out the to-do list, we have the following left. 
- Allows natural multiplication expression of an array with other arrays of the same size and type
- Allows natural addition expression of two arrays of same size and type

We will go with addition since it seems the easier to do (funny how one's brain always perceives multiplication as harder). The test case for addition would look like the following.

void test_array_addition()
{
    array<int, 100> a;
    array<int, 100> b;
    for (int i = 0; i < a.size(); ++i)
        a[i] = i;
    for (int i = 0; i < b.size(); ++i)
        b[i] = b.size() - i;
    a = a + b;
    for (int i = 0; i < a.size(); ++i)
        CPPUNIT_ASSERT(a[i] == i + (b.size() - i));
}

Good, a compile tells us we need the operator+ definition. Again we will build it as a free function.

template <typename T, std::size_t Size>
inline array<T, Size> operator+(array<T, Size> const& a, array<T, Size> const& b)
{
    array<T, Size> tmp;
    for (std::size_t i = 0; i < tmp.size(); ++i)
        tmp[i] = a[i] + b[i];
    return tmp;
}

Compile and run. Green bar. Next, on to multiplication of arrays.

void test_array_multiplication()
{
    array<int, 100> a;
    array<int, 100> b;
    for (int i = 0; i < a.size(); ++i)
        a[i] = i + 1;
    for (int i = 0; i < b.size(); ++i)
        b[i] = b.size() - i;
    a = a * b;
    for (int i = 0; i < a.size(); ++i)
        CPPUNIT_ASSERT(a[i] == (i + 1) * (b.size() - i));
}

Compile. It complains that we need the operator* for two arrays.

template <typename T, std::size_t Size>
inline array<T, Size> operator*(array<T, Size> const& a, array<T, Size> const& b)
{
    array<T, Size> tmp;
    for (std::size_t i = 0; i < tmp.size(); ++i)
        tmp[i] = a[i] * b[i];
    return tmp;
}

Compile and run. Green bar! To verify that what the manager cited as an example actually works, let's enter one last test case!

void test_example()
{
    array<int, 100> a;
    array<int, 100> b;
    for (int i = 0; i < a.size(); ++i)
        a[i] = i + 1;
    for (int i = 0; i < b.size(); ++i)
        b[i] = b.size() - i;
    array<int, 100> x = 5 * a + a * b;
    for (int i = 0; i < a.size(); ++i)
        CPPUNIT_ASSERT(x[i] == 5 * (i + 1) + (i + 1) * (b.size() - i));
}

Compile and run. Green bar! We're all done! Now this can be submitted to your project manager!
More changes

All was fine and great, until days later your project manager comes back and tells you that your array class is the performance bottleneck of the project. It creates and utilizes too many temporaries. Your job now is to optimize it.

Reviewing the code, it seems that we could rewrite the array class to provide operators *= and += instead, and eliminate the temporaries. However, expressions written with *= and += are not as natural as + and *, resulting in unclear code compared to their + and * counterparts. Not to mention that such a change would break all existing code that uses the array class. A solution to this problem seems impossible...

...or does it? Apparently the problem here is premature evaluation of expressions, even when they are only used as an element in another expression. Reviewing the example given by the project manager, x = 5 * x + x * y, it can be parsed as x = ((5 * x) + (x * y)), where (5 * x) is an expression object and (x * y) is another expression object, and the encompassing ((5 * x) + (x * y)) is yet another expression object, which eventually can be used on the right-hand side of an assignment to a template array object.

So let's review a new list of requirements:

- Expression object representing array multiplication with a scalar
- Expression object representing array multiplication with another array
- Expression object representing array addition with another array
- Assignment of an expression object to an array

We will pick addition to work with first.

Addition expression

An early attempt to introduce an addition expression would be to modify the operator+. But wait! Where's the test case? Well, the test case has already been defined. We are reusing the test cases defined for the previous implementation of array. All changes should still result in a green bar with the previous test cases, and since we are simply redefining operators previously defined, we can reuse the test cases as well.
template inline array_addition
https://www.gamedev.net/articles/programming/general-and-gameplay-programming/test-driving-expression-template-programming-r2114/
This is by no means ready for release, but I wanted to get a sanity check. I'm still stuck on this idea that userspace needs access to ACPI namespace. Manageability apps might use this taking inventory of devices not exposed by other means, things like X can locate chipset components that don't live in PCI space, there's even the possibility of making user space drivers.

Populating the sysfs tree didn't seem to generate as much interest as I'd hoped and I don't think it kept with the spirit of sysfs very well. So, now I present dev_acpi (name suggestions welcome). The link below is a tarball with a first stab at the driver as well as a simple proof of concept application. It should build against any 2.6 kernel as long as you have the include files available. There are no kernel changes required, thus it doesn't expose anything not already exposed as a symbol.

The basic concept of operation is that the ioctl operates on the ACPI path passed into the ioctl call. The ioctl may return the result of the operation either in the status field of the argument or use that to indicate the number of bytes available to read(2) for the result. The header file included describes the input and output for each operation. If the status field indicates a byte count to read, the calling application can easily size buffers, and call read(2) on the device file to get the results. I've also included support for write(2) that could allow writing arguments for method calls that take input (completely untested). I've limited some of the output (for instance in GET_NEXT) to try to only print out standard ACPI objects, but the filter is pretty simple (objects beginning w/ '_'). I know the completely open interface from the sysfs implementation scared some people. Non-standard objects can still be operated on, but you've got to know what to look for.

Many of the ioctls mimic the behavior of the acpi calls that are already exported. What I have now is only a start at what could be provided.
The sample, proof-of-concept app, is called acpitree. It's much like the tree app for listing files and directories. It does evaluate and print _HIDs represented by integers, but that's about it. Here's some sample output (sorry anyone not using fixed width fonts):

From an rx4640 ia64 server:

\
|-- _GPE
|-- _PR_
|-- _SB_
| |-- SBA0
| | |-- _HID (HWP0001)
| | |-- _CID
| | |-- _CRS
| | |-- _INI
| | |-- _UID
| | |-- MI0_
| | | |-- _HID (IPI0001)
| | | |-- _UID
| | | |-- _STA
| | | `-- _CRS
| | |-- PCI0
| | | |-- _UID
| | | |-- _STA
| | | |-- _BBN
| | | |-- _HID (HWP0002)
| | | |-- _CID
| | | |-- _PRT
...

From an nc6000 laptop:

\
|-- _GPE
|-- _PR_
|-- _SB_
| |-- _INI
| |-- C00C
| | |-- _HID (PNP0C01)
| | `-- _CRS
| |-- C046
| | |-- _HID (PNP0A03)
| | |-- _ADR
| | |-- C047
| | | |-- _ADR
| | | |-- C0D1
| | | | |-- _ADR
| | | | |-- _REG
| | | | |-- _S3D
| | | | |-- _S4D
| | | | |-- _DOS
| | | | |-- C0DD
| | | | | |-- _ADR
...

You can find the driver and sample app here: ... There's a brutally short README there. Caveat: the driver is hardcoded to use an experimental major number, you'll have to mknod it, see the README.

Please try it out, let me know if it sucks. I make no guarantees it won't kill your system, but it shouldn't unless you start evaluating dangerous objects (ie, if you don't know what it does, don't do it). And of course, if you have any suggestions, I welcome feedback.
http://lkml.org/lkml/2004/8/3/106
Zero to DAPP 3 of 4 for Windows (or MacOS/Linux)

In this page, you examine and modify the Animal Kingdom DApp you built in part 2. You'll review the underlying code and locate the portions of it which fulfill the requirements necessary to qualify an application for App Mining. You'll expand your knowledge of the application by extending it. Finally, you'll learn how to deploy a DApp.

This page contains the following topics:

- Understand the Animal Kingdom application code
- Add a territory
- Add the Blockstack kingdom to Other Kingdoms
- Deploy your DApp on the web
- Add your Kingdom to our Clan

Before you get started

Before you continue, make sure you can locate the key files and directories (folders) in your project. You'll need to make sure you have opened a terminal and have changed directory to the top of your Animal Kingdom project. If you find it easier to navigate, you can use the Finder as well. Just remember you'll need the command line to run your project.

Understand the Animal Kingdom application code

The Animal Kingdom application has two major components, React and Blockstack. React is used to build all the web components and interactions. You could replace React with any framework that you like; Blockstack is web framework agnostic. This section does not explain the React in any detail; the discussion focuses on the Blockstack Javascript library in the DApp instead. The Blockstack Javascript library is all a developer needs to create a DApp. It grants the application the ability to authenticate a Blockstack identity and to read and write to the user's data stored in a Gaia hub.

Authenticating user identity

The src/App.js file creates a Blockstack UserSession and uses that session's isUserSignedIn() method to determine if the user is signed in or out of the application. Depending on the result of this method, the application redirects to the src/SignedIn page or to the src/Landing.js page.
import React, { Component } from 'react'
import './App.css'
import { UserSession } from 'blockstack'
import Landing from './Landing'
import SignedIn from './SignedIn'

class App extends Component {

  constructor() {
    super()
    this.userSession = new UserSession()
  }

  componentWillMount() {
    const session = this.userSession
    if(!session.isUserSignedIn() && session.isSignInPending()) {
      session.handlePendingSignIn()
      .then((userData) => {
        if(!userData.username) {
          throw new Error('This app requires a username.')
        }
        window.location = `/kingdom/${userData.username}`
      })
    }
  }

  render() {
    return (
      <main role="main">
        {this.userSession.isUserSignedIn() ?
          <SignedIn />
        :
          <Landing />
        }
      </main>
    );
  }
}

export default App

The first time you start the application, this code determines if the user has signed into the DApp previously. If not, it opens the Landing.js page. This page offers the user an opportunity to Sign in to Blockstack.

Clicking the button ends up calling the redirectToSignIn() method which generates an authentication request and redirects the user to the Blockstack Browser to approve the sign in request. The actual Blockstack sign-in dialog depends on whether the user already has an existing session in the Blockstack Browser.

Signing in with an identity is the means by which the user grants the DApp access. Access means the DApp can read the user profile and read/write user data for the DApp. Data is encrypted at a unique URL on a Gaia storage hub.

The source code imports UserSession from the Blockstack library. Data related to a given user-session is encapsulated in the session. In a web browser, UserSession default behavior is to store session data in the browser's local storage. This means that app developers can leave management of session state to users. In a non-web browser environment, it is necessary to pass in an instance of AppConfig which defines the parameters of the current app.
App Mining Requirement: Blockstack Authentication

To participate in application mining your application must integrate Blockstack authentication.

Get and put user data to a Gaia Hub

Gaia is the Blockstack data storage hub (). Once a user authenticates, the application can get and put application data in the user's storage. After a user signs in, the SignedIn.js code checks the user's Gaia profile by running the loadMe() method.

loadMe() {
  const options = { decrypt: false }
  this.userSession.getFile(ME_FILENAME, options)
    .then((content) => {
      if(content) {
        const me = JSON.parse(content)
        this.setState({me, redirectToMe: false})
      } else {
        const me = null
        this.setState({me, redirectToMe: true})
      }
    })
}

Most of the imports in this file are locally coded React components. For example, Kingdom.js, EditMe.js, and Card.js. The key Blockstack imports are the UserSession and an appConfig which is defined in the constants.js file.

The loadMe() code uses Blockstack's UserSession.getFile() method to get the specified file from the application's data store. If the user's data store on Gaia does not have the data, which is the case for new users, the Gaia hub responds with an HTTP 404 code and the getFile() promise resolves to null. If you are using Chrome Developer Tools with the DApp, you'll see these errors in the browser's developer Console.

After a user chooses an animal persona and a territory, the user presses Done and the application stores the user data on Gaia.

saveMe(me) {
  this.setState({me, savingMe: true})
  const options = { encrypt: false }
  this.userSession.putFile(ME_FILENAME, JSON.stringify(me), options)
    .finally(() => {
      this.setState({savingMe: false})
    })
}

The Blockstack putFile() method stores the data provided in the user's DApp data store. By default, putFile() stores data in an encrypted format which means only the user that stored it can view it. You can view the URL for the data store from a user's profile.
Because this application wants other users to view the persona and territory, the data is not encrypted, so the encrypt option is set to false. If you tested your Animal Kingdom, you can see this on your profile. To see your profile, go to the Blockstack explorer and search for your ID:

App Mining Optional: Gaia Storage

Use of Gaia storage is not required for application mining. Keep in mind, using Gaia may make data storage easier as it is designed to work in the Blockstack Ecosystem.

Application configuration

Your DApp contains three pages, Animals, Territories, and Other Kingdoms, that are derived from three code elements:

- The src/constants.js file defines the application's data profile (AppConfig).
- The public\animals directory which contains images.
- The public\territories directory which contains images.

In the next section, you extend your Kingdom's configuration by modifying these files.

Add a territory

If your application is still running in localhost stop it with a CTRL-C from your keyboard.

Decide what kind of territory to add: desert, ocean, or city! This example adds Westeros, a fictional territory. Google images is a good place to find a JPEG image of Westeros.

Save the image to the public/territories folder in your Animal Kingdom project code.

Warning: The territory filename must be all lower case and have a .jpg extension. For this example, the territory image is saved in the westeros.jpg file.

Use the ls command to confirm your file appears in the territories directory and has the correct name.

PS C:\animal-kingdom-master> ls .\public\territories\

    Directory: C:\animal-kingdom-master\public\territories

Mode    LastWriteTime      Length  Name
-a      2/26/2019 6:09 AM  132814  forest.jpg
-a      2/26/2019 6:09 AM  128272  tundra.jpg
-a      2/26/2019 6:31 AM  1087534 westeros.jpg

PS C:\animal-kingdom-master>

- Open the src\constants.js file in your favorite editor. Scroll down to the section that defines the Territories.
export const TERRITORIES = [
  { id: 'forest',
    name: 'Forest',
    superpower: 'Trees!' },
  { id: 'tundra',
    name: 'Tundra',
    superpower: 'Let it snow!' }
]

Add your new territory.

export const TERRITORIES = [
  { id: 'forest',
    name: 'Forest',
    superpower: 'Trees!' },
  { id: 'tundra',
    name: 'Tundra',
    superpower: 'Let it snow!' },
  { id: 'westeros',
    name: 'Westeros',
    superpower: 'The Iron Throne!' }
]

- Save and close the constants.js file.

Back in a terminal window, restart your application.

c:\animal-kingdom-master> npm run start

- After the application starts, navigate to the Territories page and look for your Westeros territory.

Add the Blockstack kingdom to Other Kingdoms

Your Animal Kingdom only recognizes two Other Kingdoms. In this section, you add a third, the Blockstack kingdom ().

Open the src/constants.js file in your favorite editor. On Windows you can use Notepad.

Scroll down to the section that defines the Other Kingdoms.

export const OTHER_KINGDOMS = [
  { app: '', ruler: 'larry.id' },
  { app: '', ruler: 'larz.id' }
]
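As an aside, the arrays in constants.js are plain JavaScript data, and the naming rules the tutorial warns about (all-lowercase ids, each territory backed by a public/territories/&lt;id&gt;.jpg image) are easy to get wrong. A small hypothetical helper, not part of the tutorial code, could check them before you restart the app:

```javascript
// Hypothetical sanity check for constants.js territory entries.
// Mirrors the tutorial's rules: ids must be all lower case, ids must
// be unique, and each id maps to public/territories/<id>.jpg.
function validateTerritories(territories) {
  const seen = new Set()
  for (const t of territories) {
    if (t.id !== t.id.toLowerCase()) {
      throw new Error(`territory id must be all lower case: ${t.id}`)
    }
    if (seen.has(t.id)) {
      throw new Error(`duplicate territory id: ${t.id}`)
    }
    seen.add(t.id)
  }
  // Return the image paths each entry implies, for a quick eyeball check.
  return territories.map(t => `public/territories/${t.id}.jpg`)
}

const TERRITORIES = [
  { id: 'forest', name: 'Forest', superpower: 'Trees!' },
  { id: 'tundra', name: 'Tundra', superpower: 'Let it snow!' },
  { id: 'westeros', name: 'Westeros', superpower: 'The Iron Throne!' }
]

const imagePaths = validateTerritories(TERRITORIES)
```

This is only a convenience sketch; the tutorial itself simply relies on you following the naming convention by hand.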
Open source projects must provide the URL to their code. Projects with private repositories can provide their application in a package form. Before you begin, you need to build a site that is ready to deploy. - In your terminal, press CTRL-Con your keyboard to stop your npm run startbuild. Build a website from your code by entering the npm run buildcommand: npm run build When the command completes, you should have a new buildsubdirectory in your project. - Open your project in the Finder. Locate the newly created buildsubfolder. This example assumes you create an account by providing an email and password. In your email inbox, find Netlify’s welcome email and verify your account. - Log into Netlify and go to the Overview page in your browser. Drag your buildsubdirectory from the Finder into the drop zone in Netlify. After a moment, Netlify builds your code and displays the location of your new website. Click on your website name to display the website. You are prompted to sign into this new site with your Blockstack ID. Click Sign in with Blockstack. After you sign in, your website presents you with this message: You get this message because, when you authenticate, your DApp at one URL requested a resource (an identity) from another DApp, the Blockstack Browser. A request for a resource outside of the origin (your new website) is called as a cross-origin request(CORs). Getting data in this manner can be risky, so you must configure your website security to allow interactions across origins.You can think of CORS interactions as an apartment building with Security. For example, if you need to borrow a ladder, you could ask a neighbor in your building who has one. Security would likely not have a problem with this request (i.e., same-origin, your building). 
If you needed a particular tool, however, and you ordered it delivered from an online hardware store (i.e., cross-origin, another site), Security may request identification before allowing the delivery man into the apartment building. Credit: Codecademy The way you configure CORs depends on which company is serving your website. You are using Netlify for this example. Locate the cors/_headersand cors/_redirectsfiles in your project. You can use the Finder or the lscommand. Copy them into your builddirectory. To copy them with the lscommand, enter the following in the root of the animal-kingdom-masterproject. cp cors/_headers build cp cors/_redirects build The name of each file, with the underscore, is essential. Drag the buildfile back into the Netlify drop zone. After a moment, Netlify publishes your site. Check the published location, it may have changed. - Click on the link and log into your Animal Kingdom. Recreate your animal person and territory. The Animal Kingdom is identified by its location on the Internet, remember? So, the animal kingdom you created on your local workstation is different than the one you create on Netlify. Add your Kingdom to our Clan At this point, your kingdom is isolated. If you know another kingdom, you can add subjects from that kingdom but other kingdoms can’t access your subjects. In this section, you use a free GitHub account to add your kingdom to the Blockstack kingdom. - If you have a GitHub account, go to step 2 otherwise go to GitHub site and create a new account. - Go to the repository on Github. Click New Issue. The new issue dialog appears. Fill out the issue with the URL from Netlify and your Blockstack id. When you are done, your issue will look like the following: Press Submit new issue. The Blockstack team will add your Netlify kingdom to ours. When we do that, we will notify you on the issue and you’ll also get an email. 
- When you receive the email, login to the Blockstack Animal kingdom to see your kingdom under Other Kingdoms. Next steps (and a cool tshirt!) In the next part, you learn about how application mining can fund your DApp development efforts. And you will take a couple of minutes to add your Animal Kingdom DApp to App.co — the Universal App store. Completing this step earns you a limited edition t-shirt. If you have a twitter account, why not tell some folks about your progress?
https://docs.blockstack.org/develop/zero_to_dapp_3_win.html
Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: JRuby 1.6.6
- Fix Version/s: JRuby 1.7.0.pre1
- Component/s: Standard Library
- Labels: None
- Environment: OS X 10.7.3
- Number of attachments :

Description

require 'tempfile'

Tempfile.open 'test' do |io|
  io.unlink
  io.write "hi"
  io.rewind # flush
  p io.stat.size
end

When run with ruby 1.8:

$ ruby -v t.rb
ruby 1.8.7 (2010-01-10 patchlevel 249) [universal-darwin11.0]
2

When run with jruby:

$ jruby -v t.rb
Unable to find a $JAVA_HOME at "/usr", continuing with system-provided Java...
jruby 1.6.6 (ruby-1.8.7-p357) (2012-01-30 5673572) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_29) [darwin-x86_64-java]
RubyFileStat.java:115:in `setup': java.lang.NullPointerException
    from RubyFileStat.java:92:in `newFileStat'
    from Ruby.java:2894:in `newFileStat'
    from RubyFile.java:834:in `stat'
    from RubyFile$i$0$0$stat.gen:65535:in `call'
    from CachingCallSite.java:292:in `cacheAndCall'
    from CachingCallSite.java:135:in `call'
    from t.rb:9:in `block_0$RUBY$_file_'
    from t$block_0$RUBY$_file_:65535:in `call'
    from CompiledBlock.java:112:in `yield'
    from CompiledBlock.java:95:in `yield'
    from Block.java:130:in `yield'
    from RubyTempfile.java:265:in `open'
    from RubyTempfile$s$0$1$open.gen:65535:in `call'
    from DynamicMethod.java:211:in `call'
    from CachingCallSite.java:322:in `cacheAndCall'
    from CachingCallSite.java:178:in `callBlock'
    from CachingCallSite.java:187:in `callIter'
    from t.rb:3:in `_file_'
    from t.rb:-1:in `load'
    from Ruby.java:695:in `runScript'
    from Ruby.java:688:in `runScript'
    from Ruby.java:595:in `runNormally'
    from Ruby.java:444:in `runFromMain'
    from Main.java:344:in `doRunFromMain'
    from Main.java:256:in `internalRun'
    from Main.java:222:in `run'
    from Main.java:206:in `run'
    from Main.java:186:in `main'

Issue Links
- is related to JRUBY-6688 Tempfile#{unlink,delete} should warn or actually do something

Activity

So... I can fix the NPE, but this still isn't going to work.
JRuby's stat support is always based on a filename, since we have no way to get at real file descriptors (and so we can't call fstat). With the fix below, it doesn't NPE anymore, but it raises ENOENT since the path no longer points to a file on the filesystem.

diff --git a/src/org/jruby/ext/tempfile/Tempfile.java b/src/org/jruby/ext/tempfile/Tempfile.java
index 17911b5..10c6592 100644
--- a/src/org/jruby/ext/tempfile/Tempfile.java
+++ b/src/org/jruby/ext/tempfile/Tempfile.java
@@ -242,7 +242,6 @@ public class Tempfile extends RubyTempfile {
         if (!tmpFile.exists() || tmpFile.delete()) {
             referenceSet.remove(reaper);
             reaper.released = true;
-            path = null;
         }
         return context.getRuntime().getNil();
     }

Where is this pattern being used?

In mechanize a Tempfile is used for large responses. They're unlinked immediately. See: (I should refactor these!)

The length is used here:

I think it is OK to make unlink a no-op for jruby if you can't stat a file descriptor. This would be the same behavior as for CRuby Windows users. This doesn't protect Tempfile#stat from users that bypass Tempfile#unlink using File.unlink tempfile.path, but that should be an acceptable trade-off as such use is likely wrong.

PS: I've added a workaround to mechanize that jruby users can use (see the linked mechanize ticket).

Oops, see for the related mechanize ticket.

I'm going to go with the no-op. close! will still delete the file, as will close(true), and we mark the file as "close on exit" for the JVM. The only cases where the file would leak are if people are closing it without unlinking (which would leak anyway) or if someone exits the process prematurely (which would leak before if you didn't actively call #unlink or an unlinking #close form).

This will be in JRuby 1.7, so we've got the preview and release candidates to shake out any issues.
commit 43bd20f28fa185430f7bf06f529db0221f9e79c3
Author: Charles Oliver Nutter <headius@headius.com>
Date:   Mon May 14 14:11:44 2012 -0500

    Fix JRUBY-6477

    Because we can't unlink a file without making its path useless (and
    preventing other file methods that depend on path, like #stat, from
    working) we make unlink/delete a no-op. Tempfiles that are closed
    properly will still unlink, and normal JVM exit will also delete the
    file. IO's finalization also helps ensure Tempfiles that are walked
    away from still clean up.

Confirmed on master.
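The descriptor-versus-path distinction at the heart of this issue can be seen directly on CRuby, where stat goes through the open descriptor (fstat) rather than the path. The following is a sketch of that behavior, not JRuby code:

```ruby
require 'tempfile'

# Sketch of the behavior behind JRUBY-6477, as observed on CRuby:
# after unlink the directory entry is gone, but the open descriptor
# still answers fstat. JRuby (pre-1.7) could only stat by path, which
# is why unlink-then-stat failed there.
t = Tempfile.new('jruby6477')
path = t.path
t.write('hi')
t.rewind                     # seeking flushes the buffered write
t.unlink                     # remove the directory entry; handle stays open
size = t.stat.size           # fstat via the open descriptor: still 2
gone = !File.exist?(path)    # the path itself no longer resolves
t.close
```

This is exactly why the no-op compromise was workable: code that relies on stat-after-unlink is leaning on fstat semantics that JRuby's path-based stat could not provide at the time.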
http://jira.codehaus.org/browse/JRUBY-6477
Scala FAQ: How do I convert a String to Int in Scala?

Solution: Use toInt.

A Java-like solution

A Scala "String to Int" conversion function that uses Option

As I wrote in my book about Scala and functional programming, you can also write a Scala toInt function that uses Try, Success, and Failure like this:

import scala.util.{Try, Success, Failure}

def makeInt(s: String): Try[Int] = Try(s.trim.toInt)

You can also return an Option like this:

import scala.util.control.Exception._

def makeInt(s: String): Option[Int] = allCatch.opt(s.toInt)

Please see my Scala Option/Some/None idiom tutorial for more ways to use these patterns, and to use the Option and Try results.
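For comparison, the same Option-returning conversion can be written without the scala.util.control.Exception helpers, using a plain try/catch. This sketch (the name makeIntOption is mine, to avoid clashing with the definitions above) behaves like the allCatch version for well-formed and malformed input:

```scala
// A sketch of an Option-returning conversion, equivalent in spirit to
// the allCatch version above but with an explicit try/catch.
def makeIntOption(s: String): Option[Int] =
  try {
    Some(s.trim.toInt)
  } catch {
    case _: NumberFormatException => None
  }
```

Callers then pattern match or use getOrElse on the result instead of handling exceptions at every call site.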
https://alvinalexander.com/scala/how-cast-string-to-int-in-scala-string-int-conversion/
Brill, > > I don't think "," is the standard ant separator. Where does it say this? > > It doesn't say it anywhere, but its all over Ant... call it the "unwritten > rule"... > Easily broken then ... :-). In fact the current spec proposal (see File Naming Conventions) states that in the build file the path separator is ':' and the directory separator is '/'. The ';' and '\' characters are alternatives. > > Its actually better than you realise. As it stands ant will > accept a path > > specification with either Windows or Unix standard path separators and > > translate to the native platform's representation. This makes ant very > > accepting. In general I try to stick to the Unix style separators in my > > build files even though I work mostly on NT. > > The trouble with it is that we assume that we're going to get a unix or a > windows path separator... what happenes on another OS, such as JOS > ()... does anyone know what will happen on an Amega? (not that > amega is likely to ever implement a recent JDK, but the concept is sound). > My point is, that a particular token separator is used everywhere in ant, > except in a classpath... > I don't think we can get universal agreement on a path separators over all platforms. The current behaviour accepts Windows and Unix style separators and tries and be "accommodating". For interchange of build files between those two families of platforms, that is pretty cool. Nevertheless, I would be happy to fix the separators within build files to be ':' and '/'. The build files would be platform independent and it would be up to the ant core to cater for the platform differences. I would still want DOS style absolute paths to be supported when running on systems which support such paths (C:/blah). Previously my position was that I had wanted ant to be useable by people using their native platform conventions. If a person has to provide a classpath and they are used to providing it in a particular way, I wanted ant to accept that. 
The resulting build files would still have been cross platform, provided no absolute paths had been used. My view was based on Unix and Windows style platforms and I guess that view may preclude other platforms. > There is also no reason that the system classpath can't work in > conjunction > with an "ant classpath"... once the ant classpath is converted to a native > path at runtime, the system classpath could be appended if desired. > Not sure what you are getting at here. > > Looks like it would work, but you have hard coded the path separator chars > in some places... I think you can do the same thing without doing > that... in > fact I have done that very thing... here is a rough method that will be > something like what I did, although in a slightly different context... > I don't think it will compile without a little work (I didn't try > it anyway) > but you can see what I'm talking about. > > public static final File[] tokenizePaths(String working) { > ArrayList filelist = new ArrayList(); > if (working != null) { > if (working.indexOf(File.pathSeparator) > -1) { > // this is a series of paths. > // we need to tokenize the string. > if (!working.endsWith(File.pathSeparator)) { > // append a token, for the tokenizer. > working = working + File.pathSeparator; > } > StringTokenizer tok = new > StringTokenizer(working, File.pathSeparator); > while (tok.hasMoreTokens()) { > filelist.add(new File((String) > tok.nextToken())); > } > } else { > // this is a single path. > File file = new File(working); > filelist.add(file); > } > } > return (File[]) filelist.toArray(); > } > > As you can see, at no point do I use a hard-coded path separator... and > because of that, this method should work on *any* java compliant OS, > regardless of what unique path separators it uses... Your code is platform independent but it makes all build files platform dependent. A build file which works on one platform will not work on another with a different separator. 
One of the major goals of ant is to allow build files to be platform independent. I suggest you have a read of the core.html document in the spec directory. It discusses some of these issues. Cheers Conor
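As an aside to this thread, the "accepting" behaviour Conor describes, where either path separator is tokenized and slashes are translated to the native representation, could be sketched roughly as follows. This is illustrative only, not Ant's actual implementation; note in particular that a real implementation would also need to special-case DOS drive letters like C:/blah (mentioned above), which this naive version would split incorrectly on the colon:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

// Naive sketch of separator-tolerant path handling: accept ':' or ';'
// between entries and either '/' or '\' within them, translating each
// entry to the platform's native directory separator.
public class PathSketch {
    public static List<String> tokenize(String path) {
        List<String> parts = new ArrayList<>();
        // Treat both Unix and Windows path separators as delimiters.
        StringTokenizer tok = new StringTokenizer(path, ":;");
        while (tok.hasMoreTokens()) {
            String p = tok.nextToken()
                          .replace('/', File.separatorChar)
                          .replace('\\', File.separatorChar);
            parts.add(p);
        }
        return parts;
    }
}
```

The point of the thread stands either way: tolerant parsing in the tool keeps build files portable, whereas fixing the build file to the platform's native separators does not.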
http://mail-archives.apache.org/mod_mbox/ant-dev/200006.mbox/%3CNDBBIMHCHMPELMBMMFIMKEHEDJAA.conor@cortexebusiness.com.au%3E
Agenda See also: IRC log <RebeccaB> Scribe this morning is Rebecca Bergersen; Jonathan Marsh this afternoon Hugo: docs for WSDL - making progress, talking with Description WG in couple of days Mark: approval of minutes - no objections minutes - April 11th Mark: scheduled end last call May 11 ... May 11 also last day to register for Berlin F2F ... presentation on leaving Last Call ... number of requirements from process POV ... after last call comes candidate recommendation ... Process document outlines 10 steps ... 1 documentg all changes to document, both editorial and substantive ... editors keeping change log Hugo - when docs generated, diff is also generated Mark: 2 indicate if "substantive" change in doc has occured "substantive": change that invalidates an individual's review or implementation experience Mark: substantive change means doc needs to go back to Last Call ... we'll determine when we go to CR ... when we close an issue with a change we'll mark whether "substantive" <hugo> Meeting: WS Addressing F2F <hugo> Chair: Mark <hugo> Chair: MarkN Mark: 3 - formally addressing an issue is sending a substantive response to reviewer who opened issue ... response delegated by AI with cc to comments list ... ask for ack, further comment within 2 weeks ... record of issues and responses must be kept - Last Call Issues List ... issues considered before LC can't be reraised without new info ... reviewer doesn't have to accept. If firm objection may be reviewed by director when final review happens ... 4 - report formal objections ... objection to decision of WG, reported to director during transition Hugo: only W3C member can make objection Marsh: diff btwn formal objection and onjection to comment? c/onjection/objection Mark: formal objection sent as formal objection via email, tracked, collected on web page Hugo: go to director with issues list. 
He may ask if anyone unsatisfied with our resolution and we discuss our process and consideration Mark: want to keep formal objection and objection to our resolution separate ... will track on response whether acknowledgement, no ackknowledgement or dissatisfaction Hugo: will do more research on who may make objection Mark: 5 - tell whether requirements have changed ... 6 - review evidence - need to show wide review has taken place (emailfrom outside WG, external orgs, etc. ... 7 - identify features at risk ... allows hedge against features we might remove; removal of such doesn't constitute substantial change ... MUST be precisely identified; reviewers can formall object to proposed removal of features ... 8: PR Entrance requirements ... 4 implementations, not 2 ... (on #7) if feature appears to put interop at risk, may go on list of risky features ... Lock period, Implementation period - how long think to implement <hugo> re formal objections, anybody can make a formal objection: Mark: we can allow external implementations, by default, but we can constrain further if we want ... process requires we test every feature - empty test suite won 't do Anish: need to make results public immediately? Mark: we'll get there Mark: most immediately important thing will be how we address issues and how we dance the dance on responding to reviewers ... Now move into LC issues list Last call issues list: Last call issues list: Prasad: describing issue 1 Prasad: how can faults be received by sender if message id and fault endpoint reply properties not provided? Anish - if reply-to required, how would work in one-way MEP Glenn - do right thing if want right thing to happen Prasad - add text to specs that props are needed if desire to handle faults prasad: make SHOULD in text - if want to receive faults, SHOULD define properties ... In SOAP binding spec, add sentence "in order to receive these faults, message ID and fault endpoint or reply endpoint SHOULD be supplied." 
Marsh - put in first couple of para section 5 Marsh - first sentence 2nd paragraph section 5 Prfasad: no objection to adding to core if desired Mark: "in order to receive..." is conditional phrase, so this might trip up some people <scribe> ACTION: prasad write up text handling concern with conditional properly, send to self and mail comments list [recorded in] Hugo: relation to issue 50 Mark: Use lower case should, not rfc ... lower case makes it English issue, not conformance issue Marsh: Message ID and fault endpoint and reply endpoint properties facilitate delivery of fault back to originating party. Mark: Message ID and fault endpoint and reply endpoint properties facilitate the delivery of faults. Prasad: okay with text Glenn - adds flexibility Anish: Message ID doesn't help DELIVERY of fault, but correlates ] Mark: Message ID and fault endpoint and reply endpoint properties facilitate the delivery and correlation of faults. ... resolution to issue 1, to be added to section 5, is "Message ID and fault endpoint and reply endpoint properties facilitate the delivery and correlation of faults." <Marsh> ACTION: Marsh to respond to LC issue 1 [recorded in] Mark: LC 2 ... 20 minute break; restart ~10:50 Mark: section 2.2 example 2.1 ... editorial - spelling, etc. WG walks through each of the editorial problems; short discussion of changes necessary Mark: accept all suggestions, give editorial discretion where no fix supplied ... no objections to resolution Prasad: Many extension points described but not shown in pseudo-schema Marsh: strike excess any in example 2-1 lc3 closed with acceptance of Marsh's recommendation Prasad: confusing editorial note at end of status section "Note that this restricts use of Web Service Addressing to XML 1.0" but later state "...this is not requirement." Marsh: note is incorrect ... remove editorial note or fix by removing sentence at end ort clarifying to restriction of :some XML features..." 
resolution - no objections to removing the note.
Prasad: why do we need [source endpoint]?
Jonathan: [source endpoint] is kinda strange since it is not referenced anywhere else; we could change the rules or drop [source endpoint]
Marc: dropping is a better solution
Greg: MAPs are extensible, so one can add it in
Hugo: we define MAPs, we define rules for replying; someone may come up with an interaction pattern that uses [source endpoint]. I see this as harmless and potentially useful
Mark: it is relevant that in CR we have to test interop for features; this is a feature.
Marc: this comes back to DavidH's issue around MAP extensibility
Greg: one advantage is that we can prevent DoS attacks
Anish: how does it prevent DoS attacks?
Hugo: where "use From" from email comes in is in the rules for replying to a message
Glenn: expand rules to cover
Umit: source endpoint can be an anonymous EPR
Mark: substantial change if we pull source
Greg: if no implementation actually uses it ...
Glenn: some test case may use it
Anish: what rationale for not defaulting replyto?
MarcH: substantive change
Paul: fine the way it is at the moment
Mark: Prasad, acceptable if we mark as feature at risk?
... no change at this point
Anish: I do see a use for the feature
Anish: if you use src EPR and reply EPR, you may have a common proxy for a whole bunch of things
Greg: if we start suggesting use, we might want to make it a first-class citizen by including it in the rules
... is a change to the rules substantive?
Umit: no - just plugging a hole
Anish: aligns with email pattern
Hugo: wonders if it forces a return to last call
Anish: Who decides if something is substantial?
Mark: ultimately hope consensus of room; if push comes to shove, we'll see.
dhull: possible no From header but src abstract property defined?
Mark: decided at earlier issue not to go there ...
identify src endpoint property and places it exists and mark as feature at risk
Anish objects
Mark: needs test case
Marsh: test case: malformed header and see if response comes back
Glenn: real world - auditing / logging
Mark: two proposals - identify feature at risk or modify rules
MarcH: sounds like a significant change - worthwhile?
Hugo: significant change is to remove source endpoint
Mark: only identify as at risk, not remove it
Hugo: use case contemplated by someone may be prevented by changing semantics
Paul: leave it in as informative text
Paul: do as response or add something in text as "it is here"
Paul: just document it
Anish: Marking feature as something at risk takes away some of the functionality in use cases we had in mind
Mark: indicates WG needs more
Anish: we've heard use cases here ... so why mark as "at risk"?
... not mark as at risk - incorporate in rules
Mark: make the comment if that's what you feel
Umit: two problems Prasad wants to achieve - functionality and representation
Anish: message may be part of more complicated interaction - receiver may in future contact reply addr
Prasad: spec doesn't say anything about that
Anish: it identifies source of the message, and use cases exist where that knowledge is important
Paul: agrees
Marsh: can set CR criteria to see some other spec out in the wild that uses it.
Umit: what if my product is using it but no spec for it?
Marsh: spec or product, sure.
TRutt: poor man's multiple reply use case? Why not just a URI with some metadata?
Pauld: more info may be conveyed beyond just URI
Anish: supply proxy can use src EPR; message ID useful to receiver
Mark: is it useful to have this bucket or are we indulging in differing semantics
... option 1 - identify as feature at risk, PR to make sure external use cases exist; if we decide later, we can yank it
... 2: change defaulting rules so it falls back to src EPR if reply not there; concern if substantive change ...
3: status quo - our charter to provide feature for situations; we don't need to provide rationale
<abbie> can i vote
Mark: straw poll: feature at risk: 15; rules change: 13; status quo: 15
<pauld> chad, feeling ignored?
Mark: vote for one of three. Who has preference for feature at risk? answer: 10
... prefer changing rule: 9
... keep status quo: 6
Mark: changing defaulting rules to justify existence of source endpoint is funny
Glen: no - dependency issue
Swinkler: majority seems to agree not to put at risk
Marsh: putting at risk only holds feet to fire in terms of supplying use cases
... if not four implementations, we can remove it without problems
Anish: Do you prefer defaulting to src endpoint when reply and fault endpoints not supplied, as a separate issue?
Glenn: use pattern must be specified or else that way lies madness. We should specify what the pattern does.
Marsh: can always stick in reply endpoint, so why specify default action?
dhull: can we define a use case?
discussion of optimization....
Pauld: like status quo - should just leave and move on, but marking at risk gives us process advantages, so let's mark at risk and move on.
Anish: can always specify all properties, but if not all specified and a fault occurs, what do you do about it?
Glenn: when we go through LC, it's expected that comments make changes.
... minimum at end of LC, new LC can be requested
... unlikely we can get through all issues without making "substantive changes"
TRutt: have to put in replyto if expect reply. That goes away if we accept the suggested change to this issue
... that's substantive
Mark: defer resolving issue, let discussion occur ... look for owner of LC5 - Glenn volunteers to make proposal and drive discussion
Marsh: much like issue 35
... proposes text for his proposal for statements about SOAP 1.1 and 1.2 in issue 35 be added to conformance section
... addresses LC6 and LC35
Anish: Is there a better way of saying it? ...
when looking at conformance look at the target - look at the message, look at the rules and make a determination - nitpicking over the phrase
... say if it follows the rules then it's conformant
... if I get a message without a WSA header, I want to flag it as non-conformant
text being debated is: "An endpoint which conforms to this specification understands and accepts SOAP messages containing headers in the wsa namespace targeted to it, and generates reply or fault messages it may send in response according to the rules outlined in this specification."
Mark: that paragraph adds something new that is a change from Prasad's LC6 concern
discussion in several directions simultaneously
Mark: can we choose between Marsh's and Prasad's wording?
... of first two paragraphs under conformance in LC35
... take wording to mail list and make last paragraph the topic of LC35?
<scribe> ACTION: Jonathan to start discussion of endpoint conformance on mail list [recorded in]
Anish: if we include the WSDL binding in the mix, can we conclude the conformance discussion?
Anish: if we say conform, we require an endpoint to emit something that can be examined WRT the rules. If we add the WSDL binding to the mix, would it make it easier to nail things down?
Anish: E.g. following rules for reply - behavior in addition to message
Mark: tracking all comments, all changes
Marsh: extremely minor
Mark: missing prefix, wrong prefix, use of pseudo schema consistently, comment
DaveO: yea verily (no objections to acceptance)
LC8: Rationalize URI vs. IRI
Marsh: did a global search and replace of URI with IRI, but there are places where URI is appropriate
... when we talk about the type of a field, use IRI, but when talking about a string (i.e. a specific instance), use URI
Hugo: may be confusing - can use IRI everywhere, since a URI is an IRI ...
maybe at beginning of spec, say all URIs are IRIs
Hugo: All the IRIs used in the examples are URIs
Marsh: wants to look through the spec to see if there are any exceptions to Hugo's proposal
Mark: W3C to never use "URI" anymore?
... preference for Marsh's proposal: 5
... preference for Hugo's proposal: 1
... everyone else abstaining
Hugo: web arch direction - uses IRI everywhere
... may be confusing to sometimes see IRI, sometimes URI
Mark: resolution LC8 - go with Jonathan's proposal
Lunch break until 1330 (now 1236)
<Marsh> Scribe: Marsh
Mark: We have a FTF in June in Berlin
... Next region in the rotation is East Coast in mid-late July.
... Then there's the question of whether to take a holiday in August.
... Who will not be available for a substantial portion of August? (7-8)
scribe: My expectation would be that we'd have a FTF in early Sept. to get us back on track.
... That meeting will be here in the Bay Area.
... Theoretically the meeting after that would be overseas. (Late Oct?)
... Looking for volunteers to host these meetings.
... Decide on August holiday at Berlin meeting.
... Only reason we wouldn't take a holiday is if we're at a critical time in our schedule.
Bob: Informal poll on willingness of WG to meet in Japan?
Mark: Location is chair's decision; Japan is something we'd like to do. I'll be looking at Japan late this year or early next year.
... Have some other offers to host in London and Sri Lanka.
Marsh: (introduces the issue)
TonyR: messageID absolute?
DaveH: Discussed this already.
RESOLUTION: Accepted Jonathan's proposal.
<scribe> ACTION: Greg to respond to LC9 [recorded in]
Marsh: (introduces issue)
Anish: Implication that ref params have to have a namespace?
Marsh: Think that's what this text is trying to do.
RESOLUTION: Accepted Jonathan's proposal.
<scribe> ACTION: Tony to respond to LC10 [recorded in]
Mark: (introduces issue)
Anish: I have a related issue, haven't sent it.
Mark: Please send to the list.
RESOLUTION: Accepted Jonathan's proposal
<scribe> ACTION: Paul to respond to LC11 [recorded in]
Topic: LC12
Mark: (introduces the issue)
... Does anyone know what was intended here?
Glen: Seems to be about whether they appear as XML, HTTP headers, etc.
Marc: Not so much the "use of". More about serialization or encoding...
DaveH: Reification...
Marc: Amendment: Change "use" to "serialization".
... SOAP would be the protocol here...
RESOLUTION: Accepted Jonathan's proposal as amended by changing "use" to "serialization."
<scribe> ACTION: Marc to respond to LC12 [recorded in]
Marsh: word "each" no longer refers to something specific.
Hugo: EPR comparison was removed.
RESOLUTION: Accepted Jonathan's proposal.
<scribe> ACTION: DaveO to respond to LC13 [recorded in]
RESOLUTION: Accepted Jonathan's proposal.
<scribe> ACTION: Vikas to respond to LC14 [recorded in]
Glen: What if I send a message that is a combination of three? Can I reply to all of them?
DaveH: Don't want to preclude something if we don't know it's harmful.
... There may be more than is dictated by the requirements of the MEP.
Glen: Not particularly about a specific MEP.
... Possible to indicate a set of messages.
DaveH: Underlying semantics are hazy. Maybe a bug, maybe a feature.
... In request-reply there is a single reply message.
... In a different MEP there might be a different relation.
... Not so harmful to exclude for request-reply.
Glen: We could say the same thing about ReplyTo.
... You're defining the MEP in a subtle way.
Anish: You're thinking app-level response to multiple messages? ('ack')
Glen: You're going to have to understand these messages in the context...
Anish: We already limit [reply endpoint] to maxOccurs 1.
Glen: Two ways to go: 1) crisp things up, but doesn't go far enough. 2) loosen it up, say "must contain appropriate relationship properties."
DaveH: This proposal moves further towards crisp.
Glen: I'd like to see ReplyTo defined in terms of request response, not reused elsewhere.
Nilo: Use case is bulk acknowledgement.
... We call it acknowledgement at a high level; at the WSA level it's reply.
Glen: What's being teased out is that there is some delicate semantics implied by "reply message".
... Either we should be clearer about what those mean or we should remain abstract.
... reformulate as: "A message must contain zero or one relationship properties with the URI ...reply."
Mark: 'A message MUST NOT contain more than one "reply" [relationship]'
Nilo: What if I formatted some data as several messages? How do I indicate a bulk ack?
... best to invent a new URI for that purpose.
DaveH: Now we've taken a little bit more of the WSDL req-resp MEP into the Core.
Marsh: We've changed the text but lost the requirement to have one.
Hugo: What is broken if we leave it as is?
... The reply rules say you put in one.
Glen: When parties are communicating, they have in mind the pattern that's engaged. If I send you a message, and you respond with a relationship on it, we agree what that means.
... I'd like things to be crisp; we need to have a way to describe the contract implied by the relationships. You can introduce new headers for slightly different semantics.
... Not hard to introduce new headers.
Hugo: We should then clarify that this is a reply within a request-response pattern.
Anish: The idea of reply is in the description of the URI itself, yet we're not defining "reply". Seems like we should move this to the WSDL binding.
Umit: There is an agreement, but not all exchanges need to be in WSDL. Two one-ways might constitute a request reply.
... Doesn't have anything to do with WSDL binding.
Glen: One meaning is request-reply. Another is a conversation, with request, reply, reply, reply, ... end.
... Any kind of lockstep conversation. Should we have a different URI for that?
Anish: Up to you to define those URIs; why do we need that in the core?
Glen: We need some concept of a MEP somewhere (WSDL or elsewhere).
Umit: basic building block, don't take it out of the core.
... I'm in line with crisply defining that reply can only occur once.
Anish: If we try to define what reply means in absence of a WSDL MEP it becomes hazy.
Glen: Use English.
Anish: Why would we constrain it as Jonathan proposes?
Umit: Describes when reply needs to occur.
Glen: When you do reply, it's only to one.
Nilo: Is the important thing the fact that it relates to, or the semantics of how it relates?
... Leave the relationship type to some higher level.
Glen: What's the utility of defining the relationship without the patterns?
DaveH: Are these properties defined in relationship to their WSDL MEP meanings or are we abstracting out a notion of reply? We don't say which we're trying to do.
... If I see "Core" I'm expecting to see addressing in general. We've been bringing in more and more of the WSDL MEPs.
... If we do say it's bringing in the WSDL MEPs, we should quarantine it to the WSDL Binding.
Glen: There is a more general concept, in WSDL and in SOAP, of a message exchange pattern.
... If we were able to pull that into a realm where we could talk about it.
DaveH: We can talk about MEPs that don't have concepts of reply or fault.
... Let's say what we're doing.
Mark: Should we accept this proposal while we're investigating these larger issues?
Nilo: A message sent as a reply to another must contain exactly one [relationship] property consisting of the predefined reply URI and the message ID of the original message.
Anish: Same ambiguity as the first; does the constraint apply to the pair or to the relationship?
<scribe> New proposal: Keep the original text ('a relationship property'). Add "A message MUST NOT contain more than one 'reply' [relationship]."
A reply message MUST contain a [relationship] property containing the predefined reply URI and the [message ID] property of the request message. A message MUST NOT contain more than one [relationship] containing the predefined reply URI.
RESOLUTION: LC15 closed: "A reply message MUST contain a [relationship] property containing the predefined reply URI and the [message ID] property of the request message. A message MUST NOT contain more than one [relationship] containing the predefined reply URI."
<scribe> ACTION: Hugo to respond to LC15. [recorded in]
Marc: Alternate proposal: delete the paragraph. Also see LC54.
Mark: Any objections to dropping it?
Umit: Like keeping this, as rationale for the design.
Marc: We can drop because we already say what these properties are for; dispatching is none of our business.
Umit: We've had a lot of discussion in WSDL about dispatch; like to see some rationale here.
Mark: What if we change "facilitate" to "may facilitate"?
Glen: Why do we need to say "dispatch"?
... The important part is what you send, not how you process what you receive.
Marc: Kinda lame.
RESOLUTION: LC16 closed with Jonathan's proposal, amended by "may facilitate".
... LC54 also closed with LC16 resolution.
<scribe> ACTION: Nilo to respond to LC16, LC54. [recorded in]
20 min break
--resuming--
Marc: The previous section talks about NAT and stuff; it seems like this is still talking about replies of some description. Are we prohibiting the use of anonymous in the To?
Tom: anonymous is the default for wsa:To, implies it's allowed.
Marc: Might need to do some work on the preceding sentence as well.
Vikas: The word anonymous means what in relation to NAT and DHCP, which are not anonymous?
Greg: The true address is not visible. It's unknown outside the NAT.
DaveH: Two things going on: it is the backchannel, and it's also anything you can't name. We need to nail down the intent.
Marc: Does it mean backchannel?
... or use some underlying protocol.
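The LC15 resolution recorded above is a checkable constraint, and can be sketched as a small validator. This is a hedged illustration only: the list-of-pairs message model and the function name are invented here, and the reply relationship URI shown is the value from the later 2005/08 Recommendation, which may differ from the draft under discussion.

```python
# Minimal sketch of the LC15 resolution: a reply MUST carry exactly one
# [relationship] property with the predefined reply URI, and no message
# may carry more than one such [relationship].

REPLY_URI = "http://www.w3.org/2005/08/addressing/reply"  # 2005/08 Rec value (assumption)


def check_relationships(relationships, is_reply):
    """relationships: list of (relationship_type_uri, related_message_id) pairs.

    Returns True if the message satisfies the LC15 constraints.
    """
    reply_rels = [r for r in relationships if r[0] == REPLY_URI]
    if len(reply_rels) > 1:
        return False  # MUST NOT contain more than one 'reply' [relationship]
    if is_reply and len(reply_rels) != 1:
        return False  # a reply MUST contain exactly one
    return True


assert check_relationships([(REPLY_URI, "urn:uuid:1")], is_reply=True)
assert not check_relationships([], is_reply=True)
assert not check_relationships(
    [(REPLY_URI, "urn:uuid:1"), (REPLY_URI, "urn:uuid:2")], is_reply=False
)
```

Note that other relationship types (e.g. a hypothetical bulk-ack URI, as Nilo suggested inventing) are deliberately unconstrained by this check, matching the resolution's scope.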
Marsh: Specific to the underlying protocol.
DaveH: Means "you know what to do with it". Seems like some interop issues there.
Glen: Because I'm using request reply, I'm saying get it back to me.
DaveH: Think the semantics aren't very well defined.
... Perhaps define a backchannel URI to mean backchannel.
Mark: Seems like an issue on the preceding text. Please raise an issue if you want.
... Ok with the resolution to this issue?
Marc: change to say "To allow these anonymous endpoints to send and receive messages..."
Marsh: OK by me.
RESOLUTION: Accept Jonathan's proposal for LC17.
<scribe> ACTION: Anders to respond to LC17. [recorded in]
RESOLUTION: Accept Jonathan's proposal for LC18
<scribe> ACTION: Rebecca to respond to LC18 [recorded in]
Anish: WebArch problems with simple string compare?
Marsh: Probably not - XML namespaces work like this.
RESOLUTION: Accept Jonathan's proposal for LC19
<scribe> ACTION: Anish to respond to LC19 [recorded in]
Glen: Likes the first part, not the second.
DaveH: Distinguish backchannel from other semantics.
... Some transports have backchannels, others don't. There might be more that needs to be captured from the protocol.
... If your transport defines this, it should define it as a sensible place to put replyTo and faults.
... In the case of HTTP the backchannel is the sensible place for replies.
Tom: What does it mean to put it in the destination?
DaveH: Assume it means send the message up the backchannel.
Glen: Unless you aren't in a situation with a reasonable binding, in which case you barf.
... We all know what we want to happen.
Marc: Seems like an echo of the logical vs. physical discussion, which we never really closed on satisfactorily.
Glen: Continue to dance around the MEP issues.
DaveH: Considerably more tangible than log/phys.
Marc: Only because you're tying it to backchannel responses. Anonymous is fine for one-way if we delegate to the binding.
Glen: That in a particular case it means a particular thing doesn't preclude other cases.
... You don't even need a To in some cases.
DaveH: There are some transports that define a backchannel; we want to say use the backchannel.
Glen: Can generalize more: some transports can do more. What if I have a binding that is a single "wire"?
DaveH: Can we say "anonymous" with a replyto or faultto means the backchannel - an error if there isn't a backchannel?
<umit2> This reminds me of what was in Message Delivery. Here is what was the description:
<umit2> A special URI value "" MAY be used to indicate destinations that either do not have a WSDL service description (such as Web service clients) or destinations that do not have a dereferenceable endpoint. The underlying transport mechanisms, such as HTTP connections, may be used to distinguish such destinations.
Marc: For SOAP/HTTP, for a request sent in an HTTP entity body, anonymous means the backchannel.
Glen: There is a call for a middle ground that isn't HTTP-specific so you don't have to rewrite this thing over and over again.
DaveH: Backchannels are an important fact of life (BEEP, JMS). We want to capture it and give it its own name.
... We shouldn't use the same name for non-backchannel use.
Tony: Should be "binding in use" instead of "transport in use"? (yes)
Tom: Gudge said "you know what to do with it"
DaveH: dangerous
Glen: Is there a way to more crisply define what that means?
DaveH: I have app logic that I want to get my reply directly back. I'd like to pass that to the infrastructure and have it do the right thing.
... If the infrastructure doesn't know what that means, I get breakage. Things can move under your feet.
Marc: I'm with you if "backchannel" was "connection". You might want to use "connection" for subsequent messages in a conversation.
... Destination would be anonymous; replies to that message would be anonymous.
Umit: "transport-specified"
DaveH: In HTTP you know what it means; in other bindings you might not.
... If I don't know the binding in advance I can't use the value safely.
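The backchannel semantics being debated here can be sketched concretely. This is an illustrative sketch under stated assumptions: the function and parameter names are hypothetical, and the anonymous URI shown is the value the 2005/08 Recommendation eventually adopted, not necessarily the one in the draft under discussion.

```python
# Illustrative only: what "anonymous means the backchannel" could look like
# in a sender's dispatch logic for SOAP over HTTP, per Marc's formulation
# above. Function names are invented for this sketch.

ANONYMOUS = "http://www.w3.org/2005/08/addressing/anonymous"  # assumption


def route_reply(reply_to_address, http_response_channel, open_connection):
    """Pick where a reply goes based on the [reply endpoint] address.

    http_response_channel: the HTTP response of the request being replied
    to (the backchannel), or None if the binding in use has none.
    open_connection: callable that opens a new connection to a named URI.
    """
    if reply_to_address == ANONYMOUS:
        # For a request sent in an HTTP entity body, anonymous means
        # "use the HTTP response". If the binding has no backchannel,
        # this is an error ("you barf", per Glen).
        if http_response_channel is None:
            raise ValueError("anonymous ReplyTo but binding has no backchannel")
        return http_response_channel
    # Otherwise open a separate connection to the named endpoint.
    return open_connection(reply_to_address)


assert route_reply(ANONYMOUS, "http-backchannel", lambda a: a) == "http-backchannel"
assert route_reply("http://example.org/cb", None, lambda a: a) == "http://example.org/cb"
```

DaveH's interop worry is visible in the `ValueError` branch: a sender that doesn't know the binding in advance cannot know whether the anonymous value is safe to use.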
Glen: At the app level, yes. Which layer is sticking this message into the envelope? A lot of frameworks may do this work for you.
DaveH: There's a monster behind the door.
Hugo: The spec says with the use of anonymous no claim is being made. Could we say it's the "no claim" URI?
<GlenD> Hugo: We talk about the behavior being unconstrained in this spec (in the rules). Can we reuse text like this for the anonymous URI?
<umit2> To be clear, I was not suggesting changing the URI to transport-specified to anonymous. Suggestion is to use the last statement as I quoted from Message Delivery
<dhull> I don't think that changes much either way
Umit: Do we like the first part of the proposal?
DaveH: I don't
Tony: It's appropriate to say "go look in the binding"
DaveH: We don't define any semantics in the core.
Glen: We either say what it means at a high abstract level, and allow it to be clarified at the binding level.
... Or we don't define it at all.
(discussion leaks a bit...)
Mark: Take it to the list.
Marc: +1
<RebeccaB> +1
Mark: Step forward on the discussion we just had.
RESOLUTION: Accept Jonathan's proposal for LC21.
<scribe> ACTION: Arun to respond on LC21 [recorded in]
<RebeccaB> +1 on proposal for LC22
Glen: I could use another header with mU to override this if I needed to.
RESOLUTION: Accept Jonathan's proposal for LC22
<scribe> ACTION: Tom to respond on LC22 [recorded in]
Mark: Continuation of URI/IRI for SOAP doc.
RESOLUTION: Accept Jonathan's proposal for LC23
<scribe> ACTION: Bob to respond on LC23 [recorded in]
RESOLUTION: Accept Jonathan's proposal for LC24
<scribe> ACTION: DaveH to respond on LC24 [recorded in]
Anish: Does it make sense to have things in the same namespace for versioning purposes?
... Different namespaces and schemas would help versioning (independent of how many specs)
DaveO: Leaning towards doing it.
Rebecca: Might open up new problems. E.g.
Anish's versioning problem, making it more difficult to avoid the implication that this is SOAP-specific.
Marc: I wonder if we should improve the separation by putting the XML representation in the SOAP binding.
Paul: Abstracting from XML even.
... How do you apply the XML infoset to MQSeries?
Rebecca: Positioning problem.
DaveO: Majority case is SOAP; let's make it simple.
Mark: In the extreme we'd separate abstract, XML, and SOAP. But I'm reminded of RFC 2045, which buried this in media types.
... In the end we had to pull that out so we could reference it in other contexts.
Marc: We could separate the abstract from the SOAP infoset. That would make it clearer by moving some of the XML infoset stuff to the SOAP binding.
Anish: Isn't the XML representation useful to more than SOAP?
Marc: Knock yourself out.
Mark: Thinks it's OK to have a single doc per our charter.
Marc: Prefers two docs.
... Would like to move the XML representation to the SOAP spec.
Mark: Straw poll: Option 1: Maintain the current split: 11; Option 2: Zip together: 3; Abstain: 7
RESOLUTION: LC25 closed with no change
<scribe> ACTION: Prasad to respond on LC25 [recorded in]
Hugo: Would this align with SOAP 1.1?
... Action is not aligned (SHOULD vs. MUST) so we need to do it differently in SOAP 1.1.
<scribe> ACTION: Marsh to look into the impacts on SOAP 1.1. [recorded in]
<hugo> Discussion about action alignment:
Marsh: pedantic
Glen: Could say "Each property that is of type IRI MUST be serialized as an absolute IRI in the SOAP header for that Message Addressing Property."
... Could say "Each property that is of type IRI MUST be serialized as an absolute IRI in the corresponding SOAP header representation for that Message Addressing Property."
RESOLUTION: Accept Jonathan's proposal for LC27, as amended by Glen.
<scribe> ACTION: Glen to respond on LC27 [recorded in]
<Marsh> RESOLUTION: Accept Jonathan's proposal for LC29
<Marsh> ACTION: Pete to respond on LC29 [recorded in]
<Marsh>
Marc: IsReferenceParameter is SOAP-specific. We should define it there.
Marsh: Putting the definition in the Core is not central to the proposal. We could put it somewhere in the SOAP spec instead.
RESOLUTION: Accept Jonathan's proposal for LC30, amended by putting the definition in the SOAP spec somewhere instead of in the Core.
<Marsh> ACTION: Umit to respond on LC30 [recorded in]
Glen: We could fix on the presence instead of the value.
<Marsh> RESOLUTION: Accept Jonathan's proposal for LC31
<Marsh> ACTION: Steve to respond on LC31 [recorded in]
<Marsh>
Glen: Is there any other SOAP-specific markup that might be added?
Marc: Any other SOAP markup will already be there.
RESOLUTION: Accept Jonathan's proposal for LC32
<Marsh> ACTION: Jeff to respond on LC32 [recorded in]
<Marsh>
Marc: Is there a problem when a node plays multiple roles?
<Marsh> ... We're making something illegal but there's a way to get around it.
Marc: If you write the text from the POV of the receiver, if you get more than one header targeted to you, you fault.
Glen: SOAP says you union the roles you play and then iterate over those.
DaveH: What does it mean for an intermediary to have a ReplyTo?
Marc: Amendment: change "ultimate receiver" to "recipient".
<Marsh> ... "targeted to the same role"
Greg: Do you have to distinguish "next" and "ultimate receiver"?
Glen: We do itinerary-based routing, and like to be able to target sets at different nodes along a path.
Marc: Each node gets one MAP.
Glen: Don't know beforehand what distinguishes A and B.
Marc: If we don't preclude it we have to specify what happens.
DaveH: Where do we talk about this?
Marsh: SOAP section 3.3 after the bullet list is where I propose this goes.
Mark: Choose between "node" and "role".
Greg: Why not change "ultimate receiver" to "node"?
Glen: Precludes smart processing of multiple roles by a node.
Marsh: Can an entity act as if it were two nodes?
Glen: Yes...
DaveH: Can we define a new fault for this?
Glen: Nice to have another fault for this case - an intermediary might add the duplicate header.
Marc: reword without MUST - it's not a testable assertion.
<Marsh> ACTION: Marsh to rework proposal, adding a DuplicateProperty fault. [recorded in]
Mark: Get your issues in.
<Marsh> ... We'll start again at 9AM.
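The duplicate-header check being proposed above (still a proposal at this point, pending Marsh's rework) can be sketched as follows. This is a hypothetical illustration: the data model, function name, and the idea of returning the duplicated header names are all invented here; only the rule itself comes from the discussion.

```python
# Hypothetical sketch: a receiver that finds more than one WS-Addressing
# header of the same name targeted to a role it plays would generate a
# fault (the proposed DuplicateProperty fault).

from collections import Counter


def find_duplicates(headers, my_roles):
    """headers: list of (header_name, targeted_role) pairs.

    Returns the sorted names of headers that occur more than once among
    those targeted to a role this node plays; any such name would
    trigger the proposed DuplicateProperty fault.
    """
    mine = [name for name, role in headers if role in my_roles]
    return sorted(name for name, count in Counter(mine).items() if count > 1)


hdrs = [
    ("MessageID", "ultimateReceiver"),
    ("ReplyTo", "ultimateReceiver"),
    ("ReplyTo", "ultimateReceiver"),  # duplicate -> would fault
    ("ReplyTo", "logger"),            # targeted to another role, ignored
]
assert find_duplicates(hdrs, {"ultimateReceiver"}) == ["ReplyTo"]
```

The per-role filtering reflects Glen's point that SOAP nodes union the roles they play and iterate over headers targeted to that union, so a header aimed at a role the node does not act in is not a duplicate for this check.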
http://www.w3.org/2002/ws/addr/5/04/19-ws-addr-minutes.html
http://www.roseindia.net/tutorialhelp/comment/98752
What is an Event Handler in SharePoint? How to use it in a SharePoint site?

This article explains event handlers in SharePoint. An event handler performs an action, defined by your code, in response to an event on your site. Here I provide the code and the steps for creating an event handler that prevents deletion from a document library in your SharePoint site.

Follow these steps for the event handler:

Open Visual Studio, File - New - Project. Choose C# as your language and "Class Library" from the template section.

Before writing the code you need to add a few references. Right-click on your solution and select Add Reference... From the .NET tab select "Windows SharePoint Services" and "Microsoft SharePoint".

Add this namespace and code in your class library:

using Microsoft.SharePoint;

namespace EventhandlerCode
{
    public class eventhandlersample : SPItemEventReceiver
    {
        public override void ItemDeleting(SPItemEventProperties properties)
        {
            properties.Cancel = true;
            properties.ErrorMessage = "Sorry! You can't delete items from this library";
        }
    }
}

Now build your application. It should succeed; if yes, then fine. (Do not run your application now.)

Open the Project menu in Visual Studio and select your project name from the menu. In the new window select the Signing tab, check "Sign the assembly", and from the drop-down list select New and provide a name. Leave the other options as they are. Build your solution again.

Locate this path: C:\Documents and Settings\Administrator\My Documents\Visual Studio 2008\Projects\Your project name\your project name\bin\Debug

Drag the .dll file of your project into C:\Windows\assembly.

Go to this location: Program Files/Common Files/Microsoft Shared/web server extensions/12/TEMPLATE/FEATURES

Create a folder named EventHandler. In this folder create 2 XML files.
Feature.xml

<Feature Scope="Web"
         Title="Event Handler To avoid Delete"
         Id="1F579D91-4F4B-4389-B03B-7A92EF2EE210"
         xmlns="">
  <ElementManifests>
    <ElementManifest Location="Element.xml"/>
  </ElementManifests>
</Feature>

Element.xml

<Elements xmlns="">
  <Receivers ListTemplateId="101">
    <Receiver>
      <Name>No Deletion</Name>
      <Type>ItemDeleting</Type>
      <SequenceNumber>22500</SequenceNumber>
      <Assembly>DeletingEventHandler1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4646756f239ef28b</Assembly>
      <Class>EventhandlerCode.eventhandlersample</Class>
      <Data></Data>
      <Filter></Filter>
    </Receiver>
  </Receivers>
</Elements>

Open a command prompt and change to C:/Program Files/Common Files/Microsoft Shared/web server extensions/12/BIN

Type the commands below one by one:

- stsadm -o installfeature -filename EventHandler\Feature.xml
- stsadm -o activatefeature -filename EventHandler\Feature.xml -url yoursiteurl
- iisreset

Your task is completed now. Open your SharePoint site, create a new document library, add a few documents and try to delete any one of them. It will give you a warning message that you cannot delete items.

Note: It will work for all document libraries of the specified URL.

Do not forget to give your valuable feedback.

SharePoint Event Handlers Manager allows admins to browse, add, edit and remove SharePoint event handlers from any list or web. This SharePoint solution provides two features that enable admins to play with event handlers from within the SharePoint interface.
http://www.dotnetspider.com/resources/43548-what-event-handler-sharepoint-how-use.aspx
hurry.jqueryform 2.47.1

hurry.resource style resources for the jQuery Form Plugin.

Introduction

This library packages the jQuery Form Plugin for hurry.resource.

How to use?

You can import jqueryform from hurry.jqueryform and .need() it where you want these resources to be included on a page:

    from hurry.jqueryform import jqueryform

    .. in your page or widget rendering code, somewhere ..

    jqueryform.need()

This requires integration between your web framework and hurry.resource, and making sure that the original resources (shipped in the jqueryform-build directory in hurry.jqueryform) ...

2.47.1 (2010-09-06)

- Initial public release, using jQuery Form Plugin 2.47

Download

- Author: Martijn Faassen
- Keywords: hurry.resource jquery
- License: MIT
- Package Index Owner: faassen
- DOAP record: hurry.jqueryform-2.47.1.xml
http://pypi.python.org/pypi/hurry.jqueryform/2.47.1
Automating the Destruction of Bind Shells

Author: Brennan Turner @BLTSEC [Enterprise Security]

Scenario: A web developer goes to his local coffee shop, connects to the open WiFi network and starts browsing for Bootstrap website themes. An attacker has deauthenticated clients from the legitimate coffee shop network and jammed the coffee shop's access point in order to redirect the clients to his malicious network, which has the same name as the legitimate network. The attacker is running ARP spoofing and DNS spoofing attacks on the network that redirect traffic from HTTP domain names containing "bootstrap" to his malicious site, which offers free Bootstrap themes. The site has a link to "try" Bootstrap, and once the link is clicked the client downloads a macOS .dmg that contains what appears to be a "Bootstrap Installer" app. The app executes the attacker's malicious payload and spawns 4 bind shells on the target while staying fully undetectable by the host's Sophos anti-virus. A bind shell allows an attacker to gain access to the target by connecting to the listening port established by the payload.

The client issues a Dictation command which starts a macOS Automator workflow. This workflow gives the client audio feedback as a Python script executes. The Python script is a port scanner that looks for listening ports; once a port is found, it finds the associated process id and terminates that process, which kills the bind shell.

Disclaimer: This is a test environment, and the methods shown below shouldn't be used in a production environment without proper authorization and documentation. Use these techniques responsibly and ethically, and always test every flag, switch, command, and exploit in your own test environment prior to running them in a customer's environment.
I would also like to state that even though quite a few Red Team techniques are used in the attack phase of this scenario I will not be covering how to perform these attacks. This is an article on Enterprise Security so I will not deviate from that topic but I did want to provide context by showing a real-world example of how quickly an unsuspecting user could become compromised. Breakdown: This week I’ve been looking at macOS’s Automator application. Automator is a tool included with OS X which allows you to build custom workflows to perform both simple and complex tasks, such as renaming files in a folder, combining multiple PDF documents, or converting movies from one format to another using QuickTime. In this example, I created a simple Dictation workflow that executes a Python script and provides audio feedback. Before you start creating a Dictation Automator workflow you will need to enable Dictation enhanced commands. The screenshot below shows the entire workflow which was created after opening Automator -> File -> New and then selecting the Dictation document type. Once you save the document a Dictation command is added to the available commands to call when you open the Dictation prompt. The Python script is a port scanning tool that looks for open ports that are listening for incoming connections and once the port is found the script will find the associated process id and terminate the process which will kill the malicious bind shell established by the attacker’s payload when the “Bootstrap Installer” app executed. 
#!/usr/bin/python3
#
# bind_destroyer - Python listening port scanner and destroyer
#
# Written by: Brennan Turner (@BLTSEC)
#
# Usage: python bind_destroyer.py
# Usage: python3 bind_destroyer.py
#

from subprocess import Popen, PIPE
from os import system
import argparse
import os
import signal
import socket

flag = 1
ports = ""

# reads the contents of the port whitelist
def read_whitelist(whitelist):
    with open(whitelist) as f:
        acceptable_ports = f.read().splitlines()
    return acceptable_ports

# uses the macOS lsof command to find open listening ports
def get_process(i):
    p1 = Popen(['lsof', '-n', '-i4TCP:{}'.format(i)], stdout=PIPE)
    p2 = Popen(['grep', 'LISTEN'], stdin=p1.stdout, stdout=PIPE)
    return (p1, p2)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--whitelist', dest='file',
                        help='Path to the whitelisted ports file')
    args = parser.parse_args()
    file = args.file
    if file:
        ports = read_whitelist(file)

    # loops through a range of ports
    for i in range(10000, 12000):
        # skip whitelisted ports
        if str(i) in ports:
            system('say -v Daniel Skipping whitelisted port {}'.format(i))
            continue

        # creates a client socket
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        result = s.connect_ex(('127.0.0.1', i))

        # if the result returns 0 the port is open
        if result == 0:
            flag = 0
            system('say -v Daniel Port {} is open'.format(i))
            # finds the associated pid and terminates the process
            try:
                p1, p2 = get_process(i)
                pid = str(p2.communicate()[0]).split(" ")[2]
                os.kill(int(pid), signal.SIGTERM)
                system('say -v Daniel Process {} was terminated'.format(pid))
            # disconnecting from the socket may cause the process to die on its own, e.g. netcat
            except IndexError:
                system('say -v Daniel The process died automatically.')
            # the pid may be at a different index in the lsof output; this finds it
            except ValueError:
                inc = 1
                while pid == '':
                    p1, p2 = get_process(i)
                    pid = str(p2.communicate()[0]).split(" ")[2 + inc]
                    inc += 1
                if pid != '':
                    os.kill(int(pid), signal.SIGTERM)
                    system('say -v Daniel Process {} was terminated'.format(pid))
            system('say -v Daniel The scan will now continue.')

        # closes the socket
        s.close()

    if flag == 1:
        system('say -v Daniel No anomalies were detected.')
    system('say -v Daniel The port scan has completed.')

Please watch the video below for a demonstration: see the attacker establish a foothold on the target, and see the client's Automator job destroy the bind shells established by the malicious macOS app.

Sources:

- BLTSEC.NINJA's "Free WiFi" article that includes tips on staying secure
- VPN Comparison Chart
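The detector's core primitive is socket.connect_ex, which returns 0 when a TCP port accepts a connection instead of raising an exception. As a minimal, self-contained sketch (the function and variable names here are illustrative, not taken from the script above), the check can be verified against a throwaway listener we stand up ourselves:

```python
import socket

def find_listening_ports(host, candidate_ports):
    """Return the subset of candidate_ports that accept TCP connections."""
    open_ports = []
    for port in candidate_ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex returns 0 on success instead of raising an error
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # stand up a throwaway listener so the scan has something to find
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0: the OS picks a free port
    server.listen(1)
    port = server.getsockname()[1]
    print(find_listening_ports("127.0.0.1", [port]) == [port])  # True
    server.close()
```

Binding to port 0 lets the OS choose a free port, so the sketch never collides with a service already running on the machine.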
https://medium.com/@BLTSEC/automating-the-destruction-of-bind-shells-bafde2396aaa
Using Oracle BPM Object Methods in Script Tasks (OBPM 12.1.3)

By Venugopal Mangipudi on Jul 03, 2014

With Oracle BPM 12.1.3 becoming GA, I wanted to try out some of the new features added in this release. There are 2 features that are very helpful to BPM developers:

- Groovy scripting support
- BPM Data Object methods

These are very good additions to the product, as they provide a lot of flexibility to the developer (for developers who come from ALBPM or OBPM this is priceless...). In this blog post, I wanted to explore the following:

- Create a BPM Data Object with attributes and methods
- Use the Data Object method from a Script task

For this simple scenario, I created a process that accepts 2 integers as input and returns the result of adding the 2 integers. The following are the high-level steps to implement the process:

- Create the BPM process (Synchronous BPM Process type).
- Implement the Start event and define the interface, which accepts 2 input arguments (input1Arg and input2Arg of type Int).
- Implement the End event and define the interface, which returns 1 output argument (outputArg of type Int).
- Add a Script task to the process: CalculateTotal.
- In the Business Catalog add a new module, MyModule.
- Add a new BPM Data Object, ArithmeticHelper, to the module. Define ArithmeticHelper by adding the following:
  - Attribute: input1, Type: Int
  - Attribute: input2, Type: Int
  - Method: calculateSum, Parameters: none, Return: Integer
- Implement the calculateSum method with the following Groovy script code:

      def result = 0;
      result = input1 + input2;
      return result;

- In the BPM process, create 2 process data objects:
  - v_arithmeticHelper, Type: ArithmeticHelper
  - v_output, Type: Integer
- Map the Start input arguments to the attributes in the process data object v_arithmeticHelper.
- Map the End output arguments to the process data object v_output.
- Implement the Script task CalculateTotal.

To implement Groovy scripting on a BPM Script task, we can navigate to the script editor by right-clicking the Script task and selecting the option Go To Script. In the script editor, add the following Groovy script:

      v_output = v_arithmeticHelper.calculateSum();

Once the project is compiled and deployed, we can test the composite from EM. The result of the testing should show the total of the 2 input integers that were entered.

You can find the complete BPM project here to run the sample in your environment. Hope this blog helps in demonstrating a simple scripting example in Oracle BPM 12.1.3 which you can use to implement your own requirements!

Comment: "i didn't understand about this" — Posted by Narayan on July 29, 2014 at 04:35 PM IST
https://blogs.oracle.com/VenugopalMangipudi/entry/using_oracle_bpm_object_methods
hibernate firstExample not inserting data - Hibernate

Hello all, I followed ... The problem is that the data is not being inserted into the DB even though the program executes ... for more information. Thanks.
http://www.roseindia.net/tutorialhelp/comment/20871
python-cipclient

A Python-based socket client for communicating with Crestron control processors via the Crestron-over-IP (CIP) protocol.

NOTICE: This module is not produced, endorsed, maintained or supported by Crestron Electronics Incorporated. 'XPanel', 'Smart Graphics' and 'SIMPL Windows' are all trademarks of Crestron. The author is not affiliated in any way with Crestron with the exception of owning and using some of their hardware.

This is a Python-based socket client that facilitates communications with a Crestron control processor using the Crestron-over-IP (CIP) protocol. Familiarity with and access to Crestron's development tools, processes and terminology are required to configure the control processor in a way that allows this module to be used.

Installation

This module is available through the Python Package Index, and can be installed using the pip package-management system:

    pip install python-cipclient

Usage and API

This module works by connecting to an "XPanel 2.0 Smart Graphics" symbol defined in a SIMPL Windows program. Once the control processor has been programmed accordingly, you can communicate with it using the API as described below.

Getting Started

Here is a simple example that demonstrates setting and getting join states using this module.
    import cipclient

    # set up the client to connect to hostname "processor" at IP-ID 0x0A
    cip = cipclient.CIPSocketClient("processor", 0x0A)

    # initiate the socket connection and start worker threads
    cip.start()

    # you can force this client and the processor to resync using an update request
    cip.update_request()  # note that this also occurs automatically on first connection

    # for joins coming from this client going to the processor
    cip.set("d", 1, 1)    # set digital join 1 high
    cip.set("d", 132, 0)  # set digital join 132 low
    cip.set("a", 12, 32456)  # set analog join 12 to 32456
    cip.set("s", 101, "Hello Crestron!")  # set serial join 101 to "Hello Crestron!"
    cip.pulse(2)    # pulses digital join 2 (sets it high then immediately sets it low again)
    cip.press(3)    # emulates a touchpanel button press on digital join 3 (stays high until released)
    cip.release(3)  # emulates a touchpanel button release on digital join 3

    # for joins coming from the processor going to this client
    digital_34 = cip.get("d", 34)   # returns the current state of digital join 34
    analog_109 = cip.get("a", 109)  # returns the current state of analog join 109
    serial_223 = cip.get("s", 223)  # returns the current state of serial join 223

    # you should really subscribe to incoming (processor > client) joins rather than polling
    def my_callback(sigtype, join, state):
        print(f"{sigtype} {join} : {state}")

    cip.subscribe("d", 1, my_callback)  # run 'my_callback' when digital join 1 changes

    # this will close the socket connection when you're finished
    cip.stop()

Detailed Descriptions

start() should be called once after instantiating a CIPSocketClient to initiate the socket connection and start the required worker threads. When the socket connection is first established, the standard CIP registration and update request procedures are performed automatically.

stop() should be called once when you're finished with the CIPSocketClient to close the socket connection and shut down the worker threads.
update_request() can be used while connected to initiate the update request (two-way synchronization) procedure.

set(sigtype, join, value) is used to set the state of joins coming from the CIPSocketClient as seen by the control processor. sigtype can be "d" for digital joins, "a" for analog joins or "s" for serial joins. join is the join number. value can be 0 or 1 for digital joins, 0 - 65535 for analog joins or a string for serial joins.

press(join) sets a digital join high using special CIP processing intended for joins that should be automatically reset to a low state if the connection is broken or times out unexpectedly.

release(join) sets a digital join low. Used in conjunction with press(join).

pulse(join) sends a momentary pulse on a digital join by setting the join high then immediately setting it low again.

get(sigtype, join, direction="in") returns the current state of the specified join as it exists within the CIPSocketClient's state machine. (Join changes are always sent from the control processor to the client at the moment they change. The client tracks all incoming messages and stores the current state of every join in its state machine.) sigtype can be "d", "a" or "s" for digital, analog or serial signals. join is the join number. direction is an optional argument, which is set to "in" by default to retrieve the state of incoming joins. If you need to get the last state of a join that was sent from the client to the control processor, you can specify direction="out".

subscribe(sigtype, join, callback, direction="in") is used to specify a callback function that should be called any time the specified join changes state. sigtype, join and direction function the same as in the get method described above. callback is the name of the function that should be called on each change. sigtype, join and state will be passed to the specified callback in that order. See the example above in the Getting Started section.
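The state-machine-plus-callbacks model described above is easy to picture with a toy sketch. To be clear, this is an illustration only, not cipclient's actual internals; the class and names here are invented:

```python
class JoinStateMachine:
    """Toy illustration of the state machine described above: it stores the
    latest value of each (sigtype, join) pair and fires subscriber callbacks
    whenever a value changes. NOT the cipclient implementation."""

    def __init__(self):
        self._states = {}      # (sigtype, join) -> current value
        self._callbacks = {}   # (sigtype, join) -> list of callbacks

    def subscribe(self, sigtype, join, callback):
        self._callbacks.setdefault((sigtype, join), []).append(callback)

    def update(self, sigtype, join, state):
        """Record an incoming join change and notify subscribers on change."""
        if self._states.get((sigtype, join)) != state:
            self._states[(sigtype, join)] = state
            for cb in self._callbacks.get((sigtype, join), []):
                cb(sigtype, join, state)

    def get(self, sigtype, join):
        return self._states.get((sigtype, join))


sm = JoinStateMachine()
events = []
sm.subscribe("d", 1, lambda t, j, s: events.append((t, j, s)))
sm.update("d", 1, 1)   # simulate an incoming join change from the processor
print(sm.get("d", 1))  # 1
print(events)          # [('d', 1, 1)]
```

This is also why subscribing beats polling: the callback fires at the moment the join changes, whereas get() only reports whatever the state machine last recorded.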
https://pypi.org/project/python-cipclient/
Opened 5 years ago. Closed 4 years ago.

#18336 closed Bug (fixed)

Static files randomly fail to load in Google Chrome

Description

I've noticed while using Google Chrome that frequently (once every 2 or 3 refreshes) some of my static files fail to load; it can be anything: an image, a .js file, etc. Opening that file directly always works, and reloading the page usually works as well. I couldn't reproduce this behavior with Firefox. This is happening with the default options for the runserver of django.contrib.staticfiles.

I figured this probably had to do with either a timeout or the size of the request queue, possibly both. I tried setting request_queue_size to 10 (default is 5) as demonstrated in the attached file and it completely solves the issue. I then tried setting it to 1 and it makes the issue systematic.

I tried to find how many concurrent requests Chrome does and found the following: unless I'm missing something, Chrome actually uses fewer concurrent requests than Firefox. A value of 10 for request_queue_size does seem to solve my problem completely, but I wouldn't know what the actual best value should be.

Attachments (1)

Change History (17)

Changed 5 years ago by

comment:1 Changed 5 years ago by

comment:2 Changed 5 years ago by

comment:3 Changed 5 years ago by
I made a test project to demonstrate the behavior; here, with a request_queue_size of 1, each refresh seems to load a different set of pictures.

comment:4 Changed 4 years ago by
I'm curious if this has still been a problem for anyone. I've been noticing this lately with recent versions of Chrome, and can confirm that the attached patch solves it (though, it feels like a bandaid to me). I can reliably reproduce this, so if there's anything I can do to help in debugging this, please let me know.

comment:5 Changed 4 years ago by
Yes, this is a problem for me as well with Chrome 23 on OS X Mountain Lion when there are a lot of concurrent staticfiles requests.

comment:6 Changed 4 years ago by
I had the same problem.
It happened exactly at the specific location and only with Chrome. I managed to get runserver working by adjusting request_queue_size. It's not a problem for other browsers.

comment:7 Changed 4 years ago by
Can the users that are experiencing this issue report their OS? Maybe it's a Mac OS X-specific thing?

comment:8 Changed 4 years ago by
OP here, I'm indeed running OS X.

comment:9 Changed 4 years ago by
Chrome has marked this as a wontfix:

comment:10 Changed 4 years ago by
I'm also on Mac OS X. To do any actual development, I have to patch basehttp.py every time I upgrade Django, in each virtualenv. If there were a way to at least override this in some form globally, I could live with it, but for the moment, it's a complete pain and leads to odd, sometimes very subtle bugs.

comment:11 Changed 4 years ago by

comment:12 Changed 4 years ago by
I've also been experiencing this (Django 1.5, Mac OS 10.8, Chrome 25). In some cases it completely locks up Chrome (the tab dies). Other times random static files fail to load. In addition, I've seen very similar behavior in App Engine's dev_appserver (also based on Python's SocketServer module), and I'm inclined to think request_queue_size is the culprit.

comment:13 Changed 4 years ago by
BTW I got this fix merged into django-devserver, which is a drop-in replacement for runserver in dev. This bug is probably going to be a WontFix on both the Chrome and Django ends.

comment:14 Changed 4 years ago by
For anyone looking for a quick fix that doesn't require patching Django or installing a third party app: you can monkey patch WSGIServer from settings.py. A queue size of 10 has worked really well for me since I opened this ticket, but with this in settings.py you can easily increase the value to match your needs.
from django.core.servers.basehttp import WSGIServer
WSGIServer.request_queue_size = 10

comment:15 Changed 4 years ago by
I could eventually reproduce this, on OS X 10.8 with Chrome, with the test project provided by Loic in comment 3 and WSGIServer.request_queue_size set to 1. I'm not sure why the original patch and devserver resort to monkey-patching instead of simply defining request_queue_size on WSGIServer. Wow, what a weird bug; accepting in general, but I have to look into this further.
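For background, request_queue_size comes from Python's standard socketserver module: the TCP server passes it to socket.listen() as the connection backlog when the server is activated, which caps how many pending connections the OS will queue before refusing new ones. A minimal standalone sketch of the same knob, using plain socketserver with no Django involved:

```python
import socketserver


class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # echo back whatever the client sent
        self.request.sendall(self.request.recv(1024))


class PatientServer(socketserver.TCPServer):
    # The same attribute the ticket patches on Django's WSGIServer.
    # TCPServer's server_activate() passes this value to socket.listen()
    # as the backlog, so raising it lets more connections wait in the
    # OS queue instead of being refused during a burst of requests.
    request_queue_size = 10


# port 0 lets the OS pick a free port
server = PatientServer(("127.0.0.1", 0), EchoHandler)
print(server.request_queue_size)  # 10
server.server_close()
```

This explains the observed behavior: Chrome opens a burst of connections for static files, and with a backlog of 1 most of them are refused, while a backlog of 10 absorbs the burst.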
https://code.djangoproject.com/ticket/18336?cversion=0&cnum_hist=7
I started to prepare two posts, one about Page Sessions and the other about reducing QueryString, ViewState and Session boilerplate. But I realized that both solutions were implemented on a BasePage. So I felt that I first needed to write about the importance of implementing a BasePage in a project.

What is a BasePage?

I bet you have noticed that all your pages inherit from Page. This implementation has three goals:

- The ASP.NET Runtime knows how to instantiate a Page class and interact with it.
- The Page class does all the hard work for us, nothing less than turning our page into HTML.
- It provides us with tools that make us more productive. For instance, it provides an easy way to persist data in ViewState or helps us determine if we are in a PostBack or not.

What if we make our own Page class so we can add more tools to our toolbox?

How can we do it?

The implementation is simple. We just need a class that inherits from Page.

public class BasePage : Page
{
}

Then we need to go page by page (ok, you can use replace all), changing all your pages so they inherit from BasePage instead of Page. So we turn this:

public class Default : Page

Into this:

public class Default : BasePage

What can we add to our toolbox?

There are many things you can do on this BasePage. As I mentioned before, I will write a few posts about some solutions, such as PageSessions and reducing QueryString boilerplate. But, for now, let me give you some ideas.

Make user data available across the board

Do you need the user's full name on every page? Move that boilerplate to your BasePage.

public class BasePage : Page
{
    public string UserFullName
    {
        get { return Session["userfullname"].ToString(); }
        set { Session["userfullname"] = value; }
    }
}

Change the thread culture

If you want to serve users around the globe, you need to take their culture into consideration.
According to this Wikipedia article, if you render the date 1/2/2018, around 600 million people will read "January second, 2018", whereas 3,565 million people will read "February first, 2018". If you know the user's culture you can set it in the current thread. Then, all ToString methods will use that culture to render dates and numbers properly.

Thread.CurrentThread.CurrentCulture =
    Thread.CurrentThread.CurrentUICulture = new CultureInfo(userCulture);

Communication between the server and the browser

Let's say you have a cart application and need to change the style. This could be because the cart is filled, the user is logged in, or maybe the user is from another country. You can set classes on the form element and then use them in your CSS stylesheet.

Form.Attributes["class"] = Form.Attributes["class"] + " " +
    (HasItemsInCart() ? "items-in-cart" : string.Empty);

And then in your CSS you can do this:

.items-in-cart .some-cart-div {
    display: block;
}

You can also declare javascript variables you need across the board. Let's say you know the user's culture and you want to take advantage of it, not only on the server side but also on the client side. You could declare a javascript variable on all your pages.

ScriptManager.RegisterClientScriptBlock(
    this, GetType(), "currentCulture",
    $"var currentCulture = '{userCulture}';", true);

Generic methods

I bet you've written a method called FindControlInAllControls, which tries to find a control in a Controls array using a recursive loop. A BasePage is a great place to implement this.

Compress your ViewState

You could implement this ViewState compression in your base page. This was also recommended by Scott Hanselman.

PageSession

PageSessions? What's that? I'm going to write a post about page sessions and you'll be able to implement it on your BasePage.

Reduce QueryString/Session/ViewState boilerplate

I hate the boilerplate used to sync and initialize variables from QueryString, Session and ViewState.
I'll post a few ideas to eliminate those boilerplates using... yes, you guessed it: your BasePage.

Final words

Dependency Injection and extension methods may have become popular in modern development, but as we've seen in this BasePage exercise, inheritance still has a lot to offer.

Originally posted on harkoded.com

Posted by: Darío Kondratiuk, Microsoft MVP and .NET developer with 15+ years working on web projects.

Discussion

FindControlInAllControls sample? Posts about PageSessions and reducing QueryString/Session/ViewState boilerplate?

Will do!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/hardkoded/implementing-a-basepage-in-your-aspnet-webforms-project--16g9
I am working on an assignment in a C++ class where I have to convert a C program to C++. I don't have a background in C and I am having some trouble with a calculation and an if statement (highlighted below in bold). I just don't understand the symbols and what they mean. I tried the C tutorials on this site and figured some of it out but not all. Any help would be appreciated.

Here is the program in C:

Code:
/* Convert this C program into C++ style.
   This program computes the lowest common denominator. */
#include <stdio.h>

int main(void)
{
    int a, b, d, min;

    printf("Enter two numbers: ");
    scanf("%d%d", &a, &b);

    min = a > b ? b : a;

    for (d = 2; d < min; d++)
        if (((a % d) == 0) && ((b % d) == 0))
            break;

    if (d == min) {
        printf("No common denominators\n");
        return 0;
    }

    printf("The lowest common denominator is %d\n", d);
    return 0;
}

Here is what I have coded:

Code:
#include <iostream>
using namespace std;

int main()
{
    int a;
    int b;
    int d;
    int mn; // calculated lowest common denominator

    cout << "Enter 2 numbers: ";
    cin >> a, b;

    min = 0;
    for (d = 2; d < min; d++)
    {
        if () break;
        if (d == min)
        {
            cout << "No common denominators" << endl;
            cout << "Press Enter to continue." << endl;
            cin.ignore(1); // Ignore leftover Enter key.
            cin.get();     // press to continue
            return 0;
        }
        cout << "The lowest common denominator is :" << d << endl;
        cout << "Press Enter to continue." << endl;
        cin.ignore(1); // Ignore leftover Enter key.
        cin.get();     // press to continue
        return 0;
    }
}
https://cboard.cprogramming.com/cplusplus-programming/87807-converting-c-cplusplus-printable-thread.html
Understanding XML Schemas

July 1, 1999

Editor's note: since the publication of this article the W3C has made significant progress on the XML Schema specification. For an updated reference please see Using W3C XML Schema, published on XML.com November 29, 2000.

Introduction

In May, the W3C published working drafts of the XML Schema specification. The following sections cover specific topics in more detail. The sections are independent, so you can read them in whatever order suits you.

Schemas

A schema is a model for describing the structure of information. It's a term borrowed from the database world to describe the structure of data in relational tables. In the context of XML, a schema describes a model for a whole class of documents. The model describes the possible arrangement of tags and text in a valid document. A schema might also be viewed as an agreement on a common vocabulary for a particular application that involves exchanging documents.

Schemas may sound a little technical, but we use them to analyze the world around us. For example, suppose I ask you, "is this a valid postal address?"

<address>
  <name>Namron H. Slaw</name>
  <street>256 Eight Bit Lane</street>
  <city>East Yahoo</city>
  <state>MA</state>
  <zip>12481-6326</zip>
</address>

Mentally, you compare the address presented with a schema that you have in your head for addresses. It probably goes something like this: a postal address consists of a person, possibly at a company or organization, one or more lines of street address, a city, a state or province, a postal code, and an optional country. So, yes, this address is valid.

In schemas, models are described in terms of constraints. A constraint defines what can appear in any given context. There are basically two kinds of constraints that you can give: content model constraints describe the order and sequence of elements, and datatype constraints describe valid units of data.
For example, a schema might describe a valid <address> with the content model constraint that it consist of a <name> element, followed by one or more <street> elements, followed by exactly one <city>, <state>, and <zip> element. The content of a <zip> might have a further datatype constraint that it consist of either a sequence of exactly five digits or a sequence of five digits, followed by a hyphen, followed by a sequence of exactly four digits. No other text is a valid ZIP code.

The purpose of a schema is to allow machine validation of document structure. Every specific, individual document which doesn't violate any of the constraints of the model is, by definition, valid according to that schema. Using the schema described (informally) above, a parser would be able to detect that the following address is not valid:

<address>
  <name>Namron H. Slaw</name>
  <street>256 Eight Bit Lane</street>
  <city>East Yahoo</city>
  <state>MA</state>
  <state>CT</state>
  <zip>blue</zip>
</address>

It violates two constraints of our schema: it does not contain exactly one <state> and the ZIP code is not of the proper form. A formal definition of this schema for addresses is presented in the syntax section.

The ability to test the validity of documents is going to be an important aspect of large web applications that are receiving and sending information to and from lots of sources. If you're receiving XML transactions over the web, you don't want to process the content into your database if it's not in the proper schema. The earlier, and easier it is, to catch this sort of error, the better off you'll be. (You wouldn't want to issue someone a refund check because you allowed them to order -4 hammers, would you?)

DTDs

XML inherited Document Type Definitions (DTDs) from SGML. DTDs are the schema mechanism for SGML. XML Schemas are the first wide-spread attempt to replace DTDs with something "better". DTDs have well-known limitations: they are written in a syntax that is not itself XML, they offer almost no datatypes, and they predate namespaces. XML Schemas overcome these limitations and are much more expressive than DTDs.
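The ZIP-code datatype constraint above can also be checked mechanically outside any schema language. As an illustration only (this is Python, not XML Schema syntax), a small checker for exactly that rule:

```python
import re

# The constraint described above: exactly five digits, optionally
# followed by a hyphen and exactly four digits. Nothing else is valid.
ZIP_PATTERN = re.compile(r"^\d{5}(-\d{4})?$")


def is_valid_zip(text):
    """Return True if text satisfies the ZIP-code datatype constraint."""
    return ZIP_PATTERN.match(text) is not None


print(is_valid_zip("12481-6326"))  # True  (the valid address above)
print(is_valid_zip("blue"))        # False (the invalid address above)
```

XML Schema's contribution is that such a datatype rule can be stated declaratively in the schema itself, so a validating parser rejects "blue" without any hand-written checking code.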
The additional expressiveness will allow web applications to exchange XML data much more robustly without relying on ad hoc validation tools. Although XML Schema is poised to replace DTDs, in the short term DTDs still have a number of advantages:

- Widespread tools support. All SGML tools and many XML tools can process DTDs.
- Widespread deployment. A large number of document types are already defined using DTDs: HTML, XHTML, DocBook, TEI, J2008, CALS, etc.
- Widespread expertise and many years of practical application. Warts and all, DTDs are well understood by a large community of SGML and XML programmers and consultants.

Features

XML Schemas offer a range of new features.

Richer datatypes. Part 2 of the Schema draft defines booleans, numbers, dates and times, URIs, integers, decimal numbers, real numbers, intervals of time, etc. In addition to these simple, predefined types, there will be facilities for creating other types and aggregate types (although the mechanisms have not been finalized as of the 06 May 1999 draft).

User defined types, called archetypes in the draft. An archetype allows you to define your own named datatype. For example, you might define a "PostalAddress" datatype and then define two elements, "ShippingAddress" and "BillingAddress", to be of that type. This is more powerful than simply defining the two elements to have the same structure because the shared archetype information is available to the processor.

Attribute grouping. It's not uncommon to have several attributes that "go together". For example, common attributes that apply to all elements, or several attributes that augment graphic or table elements. Attribute grouping allows the schema author to make this relationship explicit. In DTDs, the grouping can be achieved with a parameter entity, simplifying the process of authoring a DTD, but the information is not passed on to the processor.

Refinable archetypes, or "inheritance".
This is probably the most significant new feature in XML Schemas. A content model defined by a DTD can be described as "closed": it describes all and only what may appear in the content of the element. XML Schemas admit two other possibilities: "open" and "refinable". In an open content model, all required elements must be present, but it is not an error for additional elements to also be present. A refinable content model is the middle ground: additional elements may be present, but only if the schema defines what they are. (Consider a schema that extends another: it might refine the content model of some element type to add new elements.)

Namespace support. Since the introduction of Namespaces in XML, validation has become much more difficult. In fact, until the XML Schema work is completed, it just isn't practical to validate documents that use namespaces. The XML Schema WD describes mechanisms for schema composition (allowing schemas for multiple namespaces to be combined in a rational way so that validation can be performed) and support for namespaces.
https://www.xml.com/pub/a/1999/07/schemas/
How to load a custom scene Texture?

Thanks for the suggestion, but the image is in the same directory. It is really odd that it does work for you, but not for me. I am on iOS 13.5 (17F75) with an iPad 7 and Pythonista 3.3 (330025).

Try to restart Pythonista.

After testing with a bigger image (overworld_tileset.png, 565x564 px, 44kb) I got the same result. Restarting Pythonista did not help, restarting the iPad did not help. I now imported a photo I took with the iPad (JPG) via the little plus in the bottom left -> Import... -> Photo Library and it worked fine! Maybe there is a problem using PNGs?

You could use io.BytesIO in a context manager so you don't need to save and open the images. And when you use ui.Image.from_data(image_data[, scale]) and set scale to your iOS device's pixel/point ratio (most commonly 2; the ratios are 1:1, 2:1, 3:1, ...), this will scale your image properly for the screen. You can get this ratio by calling scene.get_screen_scale(). Here's an example:

import io

import scene
import ui
import Image

cache = {
    "img1": "./image1.png",
    "img2": "./image2.png",
}

for k, v in cache.items():
    with Image.open(v) as img:
        resized_img = img.resize(
            (int(img.size[0] / 2), int(img.size[1] / 2)),
            Image.ANTIALIAS)
        with io.BytesIO() as byteStream:
            resized_img.save(byteStream, "tiff")
            cache[k] = scene.Texture(
                ui.Image.from_data(byteStream.getvalue(),
                                   scene.get_screen_scale()))

@Moe sure that sprite.png is in the same folder as your script? For me, it is ok:

import ui, scene
image = ui.Image('sprite.png')
image.show()  # this works
texture = scene.Texture(image)  # this works
texture = scene.Texture('sprite.png')  # this works

From my understanding, shouldn't it be ui.Image.named('sprite.png')?

From my understanding, shouldn't it be ui.Image.named('sprite.png')?

You're right, but try it, it works also without .named.

Well look at that... I think it's all been a lie... lol, jk. But I do wonder what the method named might be doing that may be a benefit?
https://forum.omz-software.com/topic/6439/how-to-load-a-custom-scene-texture/11
On May 10th, ICS presented a Qt Developer Conference in Waltham, Massachusetts. Over 100 developers were on hand to hear presentations from Havaard Nord (TrollTech CEO), Jasmin Blanchette (TrollTech uber-developer) and Matthias Kalle Dalheimer (Klarälvdalens Datakonsult CEO).

It was a somewhat uneven conference. There were some great technical talks and then there were some that were too high-level and slow moving. Jasmin Blanchette did not have enough time for his talk on Qt 4, causing him to rush and leaving little time for Q&A. Havaard Nord's talk on the soul of Qt was inspiring and included some interesting results from a recent customer survey. A low point was probably the talk on GUI design, which really had nothing specific to Qt. This time would have been better spent having Matthias Kalle Dalheimer demonstrate KD Executor, which was very popular during the breaks.

Joe Longson (from Walt Disney) demonstrated an amazing Qt application that Disney uses to manage production of animated feature films. Gregory Seidman's talk on using an app-global relay object (publish/subscribe pattern) to centralize connecting signals and slots was very useful. As of this writing, I could not find an online version of the speakers' slides, but I believe ICS intends to post them.

TrollTech Customer Survey

Havaard Nord's presentation included data from a March 2004 customer survey. The survey was sent to 6,000 licensees and had a 25% response rate. Some of the results:

Target OS       Now    Planning   Change
Mac OS X        14%    25%        79%
Windows 2003    13%    19%        46%
GNU/Linux       67%    72%        21%
Windows XP      71%    65%        -9%
Windows 2000    70%    53%        -24%

Licensees were asked what OS they are targeting now and what OS they are planning to target. While this table is based on the 1,250 responses TrollTech got, it is not clear if all 1,250 responses answered this question, nor was any attempt made to quantify the customer base of each respondent. So take these results with a grain of salt.
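As an aside, the app-global relay object from Seidman's talk is a generic publish/subscribe pattern that is easy to sketch in any language. Here is a minimal Python illustration (the names are invented for this sketch, not taken from the talk):

```python
class Relay:
    """App-global relay: objects publish named signals, and interested
    parties subscribe by name, so publishers and subscribers never need
    direct references to each other."""

    def __init__(self):
        self._subscribers = {}  # signal name -> list of callbacks

    def subscribe(self, signal, callback):
        self._subscribers.setdefault(signal, []).append(callback)

    def publish(self, signal, *args):
        # notify every subscriber registered for this signal name
        for callback in self._subscribers.get(signal, []):
            callback(*args)


relay = Relay()
received = []
relay.subscribe("document_opened", lambda path: received.append(path))
relay.publish("document_opened", "report.txt")
print(received)  # ['report.txt']
```

Centralizing connections in one relay like this is the point of the talk: instead of N widgets each knowing about each other to wire signals to slots, every widget only knows about the relay.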
Qt 4.0

An article in the recent issue of Qt Quarterly has a list of the changes coming in Qt 4 (pronounced "cute four"). TrollTech developer Jasmin Blanchette, co-author of the recent TrollTech/Prentice Hall Qt book, gave a great talk on Qt 4. Some things that stuck out for me from the talk: an alpha / developer-preview release of Qt 4 is planned for May or June of 2004.

One thing I'm very interested in knowing about is the possibility of a C#/.NET implementation of Qt. I'm no fan of Microsoft, but the more I look at .NET, particularly Mono, the more I become convinced of its potential. My biggest hope out of the conference was the announcement of Novell purchasing Trolltech, or at least special rights to Qt, and porting it to Mono. This of course didn't happen, but I still hold out hope for either Novell coming to their senses that this is the route to take OR Trolltech deciding to explore porting Qt themselves (which I'm sure they've done some of). Was there any hint of this or buzz among attendants?

Eron

I didn't hear much discussion of .NET/Qt. Someone did ask if Qt was going to support things like Avalon / Longhorn (whatever that means) and the answer was yes.
the libraries of the CLR alone with the improved syntax of C# could make Qt development much more pleasant. Plus, the CLR includes many useful libraries that make me turn to Python usually, like math, XML, networking, and cryptography. Even if Qt provides a lot of overlap to the CLR (like it does in PyQt), the fact of not having to use C++ alone is very appealing. Qt is having to provide a lot of this anyways (look at Qt's STL replacement types, properties, data access, runtime introspection, etc), so it should move to a more modern platform to provide these capabilities at a lower-level. I suppose the biggest appeal is using Qt instead of GTK+ to provide the GUI facilities under Mono. That coupled with Trolltech's fantastic documentation standards... Wishful thinking, I suppose. What is it that is so fantastic with C#? I've heard a lot of times that C# is Java done right, without any futher explanations. What is it that is so right about C#? To me it looks like C# is as different from Java as Java 1.4 is to Java 1.5. By the way, when will XAML, which so far got support for rotating a widget in realtime (given you have a 6GHz computer), be able to compete on any other platforms than Longhorn? As far as I know, QT runs on anything that's got a processor, It looks completely consistent on my KDE desktop which is a lot more than you can say about any MS product. My MS office package looks completely different from my Internet Explorer and not to mention 3rd party software, which looks like it runs by accident. This is the same on all versions of Windows. What is there to compete with? QT is already Superior. You have to remember that Qt is a framework, whereas C# is a language. What can be done is an implementation of Qt *in* C# to provide a better alternative than C++. It is very unlikely that Qt will be reimplemented in C#, as there is so much code that depends on it. This is arguably a good thing. 
Contrary to the hype which is around C#/.NET these days, you never hear compelling advantages of C# over C++. You often hear garbage collection, which is available for C++ too, and you hear properties which are, first, of dubious value and, second, available with C++ if you use QT too. On the other hand what is missing from C# is eminent, just some examples: Support for generic programming is weak, and "generics" are planned first for C# v2.0. By consequence, generic library components are inferior. There is no Support for "real" RAII, garbage collection is forced on you. There is one unnecessary base class for every type. There is no multiple inheritance (probably only a minor point). There is no standardized preprocessor. I do not want to let it sound as if C# was a complete catastrophe, if you do like the language it is of course you choice to use it (use, however, a free implementation). But do not try to fool others into thinking that C# is better than C++. If you have something you do not like about C++, say specifically what it is (and check if the next version of the standard will probably have it). Well, people are already trying (bindings, anyways). That there are no compelling features of C#, I found this article to catch my attention: I know some GC support is available for C++, but like most things, seems tacked on as an afterthought. And how do you mean that GC is forced upon you in C#? For properties, I see this as important; in fact, you'll notice Qt 4 moving more to properties itself. Qt itself sports a base object, QObject, which has real and important functionality. Qt also only support single-inheritance models, so how would that hurt if using C# as well? Usually multiple inheritance is only good for mixins, whose need can be removed with a well-designed class structure (again, see Qt itself). I just don't see the arguments. I suppose I just see C++ as slow-moving (improvement-wise) and past its prime. 
Sometimes it is better to start all over again. Perhaps if someone could point out all the exciting things C++ is planning to offer I'd change my mind. Because C# doesn't support this or that yet doesn't convince me. The language is still quite young and new stuff is being added at a good pace (similar to the pace Python adds language and API changes). With Mono in the mix now adding value to the current implementation and extensions to mscorlib.System, there should be some competition (hopefully). I aggre with El Pseudonymo, that C# is not that much better than C++. a) In an OO world memory management ist way easier than in the procedural world. additionally to GC smart pointer can provide a safe and fast solution. b) not all classes in Qt are derived from QObject. c) C++ may have a few dark corners, but on the other hand you get seemles interoperability with C libraries, which is a big plus especially in the Unix world that seems to be stuck in the C-ages d) a new c++ standard is supposed to be out in 2005(introducing GC), 2002 was the ABI standardisation, 1998 was ISO standardarsation. so there is stuff going on. For a language of the importance of C++ you just cant change things all the time. e) many things are being developed in libraries instead of the language, which fits into the philisophy of C++ being low overhead. I wouldnt call Qt slow moving ;) Marc a) Smart pointers and constructors/destructors make memory management certainly easier but still not as easy as with garbage collection, while smart pointers introduce overhead that is comparable to GC, while often only providing partial solutions to automated memory management. Also memory management of shared resources with constructors/destructors is as cumbersome as in C. You have to be careful with your interface if the class handles all memory by itself or you delegate responsibility to a client which is dangerous. 
The .NET garbage collector is well integrated and you don't need to use some special syntax, something which cannot be said of most C++ GCs. The design can focus on algorithmic challenges instead of worrying about proper memory handling. b) I consider a common base class a good thing(TM). c) Interoperability with C libraries is well handled in most higher level languages. Take Python for example which provides many bindings to C libraries. .NET also allows easy interoperability with legacy code via P/Invoke. d) The ISO C++ standard won't introduce GC AFAIK. What you are referring to is the C++/CLI integration that is on a separate ISO track (while some of its members are also taking part in ISO C++). In fact C++/CLI is a targeting C++ to the CLI (which is a subset of .NET) thereby introducing GC as well. However if you consider this as a good development I don't understand the critique of .NET/C#. Besides I agree with you that C++ is (slowly) evolving and there are reasons tor the slow pace. Anyway C++ is falling behind in terms of features and the legacy burden forces lots of questionable syntax extensions making the complex C++ even more complex. a) smart pointers do not cause the same overhead as a GC, because in case of smart pointers there is only overhead for the objects managed by a smart pointer, while with GC _ALL_ objects have GC overhead even if they dont need it. b) Not always you want to derive every class from a heavywheight Object class if you dont need the extra functionality. Why should QChar for instance derive from QObject? Just imagine what kind of overhead that would be for the QString class!!! c) In C++ you can use C libs directly, without any wrappers, or special C API's. P/Invoke is probably not as complicated as Pythons C API or Javas JNI. d) Garbage collection is planned to go in the ISO standard. Even Stroustrup is ok with that. But you will be able to switch it off, of course. 
Btw, my critique wasn't that C#/.NET is a bad language; I just said it's not that much better than C++.

a) Sorry for not stating my point clearly enough. What I wanted to point out is that GC is no slower or more inefficient than most semi-automatic memory handling, or even than completely manual memory handling. You may want to refer to Hans Boehm's web site and the articles mentioned there.

b) .NET still allows you to create and use efficient types. Read something about boxing/unboxing on the net. I can treat an int simply as an integer value type (one whose sizeof() is 4 bytes) and as an object if I box it (often implicitly). As long as I simply use it as an integer there is no overhead involved.

c) P/Invoke is certainly not complicated but easy to use and straightforward. Microsoft had to ensure that the transition from legacy code is not too painful, since it wanted to attract lots of developers.

d) GC in ISO C++ is new to me. Need to check this out.

I am not sure what people mean when they say that C# has no compelling features. It seems to me that people are confusing the runtime environment of .NET and the language C#. C# is a *nice* language, but currently the only way to run C# is on top of a virtual machine with all the associated overheads (extra memory for VM instantiation, runtime checks, etc.). Many of the things that people say are great/bad about C# are in fact properties of the .NET runtime, not the C# language. I'm sure if somebody made a native compiler for C# that used the Qt libraries, people wouldn't complain so much. In fact, it seems as though that's exactly what Trolltech is moving towards with moc and language extensions such as the "foreach" construct.

Likewise, the implication that C++ is "slow-moving" is quite meaningless. Once again, are they implying that the language should evolve faster, or the platform? Qt is moving quite rapidly (as observed) and making very interesting strides comparable to .NET.
C++ is a fairly mature language and IMHO *shouldn't* radically alter itself every two years. As for platform niceties such as GC (which Qt currently doesn't offer) and runtime security checking, it is true that VM platforms such as Java and .NET offer a more integrated environment. However, it is arguable that such things can be added to a platform without all the associated overhead of a virtual machine environment, through a combination of static analysis and conservative collectors.

Basically, I believe it is important for people to 1) separate the language and platform when talking about things like C++, Qt, C#, .NET, etc., and 2) realize that just because something runs on top of a virtual machine doesn't make it better.

I think the first point on the page you mentioned illustrates what I mean: there are many restrictions ("No concept of inheriting a class with a specified access level", "No global functions or constants, everything belongs to a class"), and there are "improvements" on C++ where in C++ the feature is actually there, like thread support. So you have some real improvements, but you also have quite some deficiencies. So my opinion remains: changing to C# is no real advantage.

There are *not* many restrictions in C#. There are some features (access-specified inheritance is one of them) and some questionable or error-prone constructs left out. The "no global functions" restriction is something you can live with, since static methods are possible. Threading in particular is well supported in .NET/C#. You fail to specify the deficiencies that outweigh the advantages. Here is my incomplete advantage list (using C# version 2.0):

- Feature-rich framework that presents lots of stuff a programmer needs in a coherent and well documented form.
- Garbage collection frees your mind from memory handling.
- Safety. No buffer overruns, segmentation faults, etc.
- Reflection capabilities.
- Some handy syntax sugar (anonymous methods, iterators, properties, indexers).
- Type-safe and more powerful callbacks (delegates).
- Enumerations introduce a new scope.
- Versioning via new/override. Every declared virtual method is the root of dispatch.
- Unified type system with a common base class, bridging the gap with boxing/unboxing.
- No distinction between pointers and references. No need for -> * &.
- Definite assignment of variables. The crystal clear ref and out syntax.
- Checked arithmetic.
- Rectangular arrays.
- ...

I leave it here. There is certainly some point that I unintentionally left out. Maybe you can come up with your disadvantage list ;-)

I do not want to post a list of advantages or disadvantages. Programmers clearly have different opinions on whether a particular feature is good or bad, so just putting a list together will only perpetuate an unhelpful discussion. I do, however, want to pick on one feature of C# you mentioned: "Type-safe and more powerful callbacks (delegates)". Why is that a new feature of C#? You just have to look at boost::signal and boost::function. Both of them provide delegating and similar capabilities. The core language of C++ is powerful enough to make a library implementation of this possible. This is a key difference. The support for programming generic components in C# is still not powerful enough (and won't be with C# v2.0, presumably) to achieve similar things in C#. It enables little more than writing generic containers and doing simple method dispatch.

* More libraries and frameworks for C++ than for C#.
- Garbage collection frees your mind from memory handling.
* Handling memory allocations based on specific cases gives smaller/faster programs.
- Safety. No buffer overruns, segmentation faults, etc.
* The possibility to use pointers when needed.
- Reflection capabilities.
* Templates, generic programming.
- Some handy syntax sugar (anonymous methods, iterators, properties, indexers).
* A lot of compilers, direct compatibility with C, and fast bindings for every existing language.
Ability to add inline assembly for fully utilizing your CPU when needed.
- Type-safe and more powerful callbacks (delegates).
* Ability to evict type safety when needed.
- Enumerations introduce a new scope.
* Namespaces.
- Versioning via new/override. Every declared virtual method is the root of dispatch.
* Dunno if this is a feature :)
- Unified type system with a common base class, bridging the gap with boxing/unboxing.
* You can have it too in C++; it's your decision.
- No distinction between pointers and references. No need for -> * &.
* What's the feature here?
- Definite assignment of variables. The crystal clear ref and out syntax.
* ?
- Checked arithmetic.
- Rectangular arrays.
* You can find a myriad of C/C++ libs for handling arrays.
- ...
* Compiled language, without the burden of a VM and GC, with an _optional_ GC to come.

I just don't understand why people are so hyped about languages that use a VM. Portability does not work there either; see Java. The VM makes them heavyweight and makes your new PC look like a 386.

"More libraries and frameworks for C++ than for C#"

You point out the crux with C++. The standard library leaves a lot to be desired, and so you have a lot of libraries trying to close the gap. Unfortunately most of them are redundant feature-wise, and deciding between the zillions of libraries essentially doing the same thing for a given purpose but differing in their API styles and concepts makes life harder for programmers. Why do you think so many people are interested in Qt? It's because Qt offers a unified framework that offers a lot more than just GUI functionality and that is badly needed for C++. I prefer one coherent and well-thought-out framework I can rely on. You know, less is more sometimes.

"Handling memory allocations based on specific cases gives smaller/faster programs."

This claim needs to be proved. Besides, contrary to the usual layperson's belief, GC does not have to yield slower and bigger programs.
"The possibility to use pointers when needed"

This possibility exists in C# as well.

"Templates, generic programming"

Those features certainly do not outweigh the lack of reflection. Besides, I referred to version 2 of C#, which directly supports generics.

"A lot of compilers, direct compatibility with C and fast bindings for every existing language. Ability to add inline assembly for fully utilizing your CPU when needed."

There are at least 3 .NET/C# implementations at this time, and this is an implementation detail that has nothing to do with language capabilities. What you could mean by "bindings for every existing language" is beyond me; it is certainly not easy to access e.g. Python from C++ (and Python is one of the easier ones). Invoking C/C++ functions is straightforward from C# via P/Invoke.

"Ability to evict type safety when needed"

First time someone tries to present this as a feature to me.

Concerning the unified type system: your claim is that I can have it in C++ too, and this is definitely wrong. C++ doesn't have a unified type system. There are basic types (e.g. int, float) and user-defined ones (via classes). This gap can't be closed in C++.

"I just don't understand why people are so hyped about languages that use a VM. Portability does not work there either; see Java. The VM makes them heavyweight and makes your new PC look like a 386."

The hype is not about VMs. It is about coherent frameworks and simple languages that don't impose baroque syntaxes and semantics caused by backward-compatibility desires. Portability with VMs is possible and works fine with Java.

Is there any reason for me to develop anything in C#? I am starting a project with C/C++; though time consuming, I am building a GUI with a complex database and a new file format. Is there anything I can save some time on and build with C#? Or is it as limited as it looks? Translated code has never worked for me in the past. Should I try again?
Thanks, Albert

I forgot to answer: "And how do you mean that GC is forced upon you in C#?" Easily answered: you have unpredictable disposal of resources in C# if you do not do disposal yourself.

If your class implements the IDisposable interface (basically requiring you to write a Dispose() method), you might want to use the 'using' construct, which calls Dispose() automatically even in the case of exceptions. So your resources are freed at the earliest possible time.

You are right. I never heard about that. So one main point of my critique is alleviated. But problems still remain, because in order to use "using" you have to declare a variable. This isn't possible everywhere, and in many places it would make the code harder to read. And as far as I know, there is no way to "remove" "using" from a variable.

> What is it that is so fantastic with C#?

Two years from now, how many developers do you think will be using C# and .NET? I think it will be a lot. Having Qt C# bindings will lower the barriers for all these developers to participate in developing free software (or, more likely, make their proprietary software work on KGX). It's not about what is the best technology; rather, it's about the marketing reality of Microsoft that will push many developers to .NET, and lowering barriers helps those developers and their apps work on KGX. Anyway, that's how I see it ...

There will probably be a lot of C# developers two years from now that still just take it for granted that C# will save the world :) I'm not against making Qt C# bindings. I think one should add support for C# in KDevelop, so that all these new developers will be able to use a real IDE, not to mention run their CLRs at twice the speed in DotGNU. I also think that this new language 'D' (announced on Slashdot two weeks ago) should be integrated into KDevelop as well as get Qt bindings.

I was also very impressed with "D" - it should be the successor to C++, but that doesn't mean it will be.
I'm afraid that unless the C++ committee powers drop their egos and adopt it (unlikely), or some big software company adopts it into a major IDE effort, D will just float around out there as another great technology with no real support. Most programmers know C, C++, Java, and maybe now C#. Students are learning C, C++, Java, and C#. It takes something really big to alter that entrenched skill base.

Snif, snif. I'm sad :-( because I don't know why there is so much attention on languages like C# or Java. Try Smalltalk! It's been there since the 80's and has a lot of features still not available in C# or Java. (Here is some ST propaganda ;-) ) I would like to see some open source Smalltalk implementation with support for native widgets like Qt or GTK (Squeak is nice... but doesn't have support to create "normal" desktop applications). But, if I have to use Java... I would like to see an implementation of SWT with Qt. The Linux implementation of SWT is GTK only :-( ouch! I think that making Java applications look like native KDE applications would be great. (Sorry, my English is not good.)

Smalltalk is not really a competitor to C# or Java, rather to Ruby and Python. It is not a statically type-safe language. It's in a different league (I leave it to you whether it is better or worse for system programming). Besides that, Smalltalk's syntax is weird in a world where people are used to C-style and Basic-like languages. That alone is enough reason to rule out widespread adoption.

I agree Smalltalk is still one of the best languages for developing GUIs, apart from the unusual syntax if you aren't used to it, and the non-native look and feels. But there are issues with the Squeak license (Debian doesn't include it for that reason), and I think GNU Smalltalk would be a better basis for a Qt/KDE language binding.

"But, if I have to use Java... I would like to see an implementation of SWT with Qt. The Linux implementation of SWT is GTK only :-( ouch!
I think that making Java applications look like native KDE applications would be great."

Why do you think SWT is some sort of improvement over QtJava or the Koala KDE Java API? SWT still has clunky event-listener subclassing like the hard-to-use Swing framework, rather than slots and signals. You have to dispose() of all resources with no help from the garbage collector, and it's pretty limited for creating new custom widgets.

Actually IBM did the port, but it was never released due to licensing problems. This was touched on a bit in the interview with the Trolltech guys on the dot recently. Basically, for such a port to exist and be useful it would need to be possible to run applications written in SWT regardless of license; however, this would mean running proprietary apps using a GPL'ed Qt, which is a no-go.

Uhmm!!! Why do we need a C# implementation of Qt??? It's so much work!!! But I have to admit that having a .NET implementation of Qt would be really good!!! In that sense, why not do a modification of Qt to compile it with C++.NET (yes, I know we don't have C++.NET for Linux, but we could!!!)? That would be less work than doing a C# implementation (I think!!), and now that C++.NET will be standardised, it would be really good to have a C++.NET compiler for Linux and a Qt.NET implementation!!! And this would be better than the bindings that are already in progress; it would have fewer dependencies!!!

Regarding memory allocation optimisations, I wonder if they have plans to support something like memory pools. The idea behind pools is that allocating an object has an expensive cost in time. If you have to make many allocations of small objects, you should rather allocate big chunks of memory, keep them in a pool, and let the objects use the memory of the pool. This could certainly be used where one allocates many instances of small objects. It could for example be an option of QList, or for QListViewItem.
I like this news :-) How far will the optimizations in Qt influence KDE performance? How big (if necessary) are the needed modifications to KDE?
Question: I have made a file called time.hs. It contains a single function that measures the execution time of another function. Is there a way to import the time.hs file into another Haskell script? I want something like:

    module Main where

    import C:\Haskell\time.hs

    main = do
        putStrLn "Starting..."
        time $ print answer
        putStrLn "Done."

where time is defined in time.hs as:

    module time where

    import <necessary modules>

    time a = do
        start <- getCPUTime
        v <- a
        end <- getCPUTime
        let diff = (fromIntegral (end - start)) / (10^12)
        printf "Computation time: %0.3f sec\n" (diff :: Double)
        return v

I don't know how to import or load a separate .hs file. Do I need to compile the time.hs file into a module before importing?

Solution 1:

Time.hs:

    module Time where
    ...

script.hs:

    import Time
    ...

Command line:

    ghc --make script.hs

Solution 2:

If the module Time.hs is located in the same directory as your "main" module, you can simply type:

    import Time

It is possible to use a hierarchical structure, so that you can write import Utils.Time. As far as I know, the way you want to do it won't work. For more information on modules, see Learn You a Haskell, Making Our Own Modules.

Solution 3:

Say I have two files in the same directory: ModuleA.hs and ModuleB.hs.

ModuleA.hs:

    module ModuleA where
    ...

ModuleB.hs:

    module ModuleB where

    import ModuleA
    ...

I can do this:

    ghc -I. --make ModuleB.hs

Note: the module name and the file name must be the same; otherwise it can't compile, and an error like "Could not find module '...'" will occur.
Programming Idioms

I just read Jeff Yearout's recent post titled The Beginner's Garden of Concepts. Not directly related, but it got me thinking about programming idioms. I've been using the phrase "programming idiom" for years to describe a short, useful, recurring code construct. I didn't realize that it was officially "a thing" until doing a web search on the phrase years later.

As our students grow from newbies on, I think it's helpful for them to see recurring and related patterns, and "programming idioms" gives us a name to apply to many beginner patterns.

An early idiom might be "finding the smallest in a list":

```python
dataset = [5, 3, 8, 12, 2, 7]
min_index = 0
for i in range(1, len(dataset)):
    if dataset[i] < dataset[min_index]:
        min_index = i
```

Another is the very similar and more general "do something on every item in a list":

```python
for i in range(len(dataset)):
    # do something to or with dataset[i]
```

By talking about constructs like these as idioms, we help students see and develop coding patterns. It also helps them build mental abstractions. Each of the above idioms is a few lines of code, but each is also a single concept. Students learn to think of them as the concept. When students learn about list comprehensions in Python they'll rewrite the "do something…" more like this:

```python
[f(x) for x in dataset]
```

but the pattern or idea is the same.

Other early idioms might include swapping variables:

```python
tmp = a
a = b
b = tmp
```

and loops until an exit condition is met:

```python
while not_exit_condition:
    # do stuff
    # modify the variables checked by the exit condition
```

Even more difficult concepts like recursion can be described in an idiomatic way:

```python
def f(x):
    if BASE_CASE:
        return something
    else:
        new_x = modify_to_eventually_get_to_base_case(x)
        return f(new_x)
```

Patterns like these, or idioms, come up over and over again. We don't have to explicitly mention them in our teaching, but I think it's helpful to our students if we do.
ABC: System for Sequential Logic Synthesis and Formal Verification

ABC is always changing, but the current snapshot is believed to be stable.

Compiling:

To compile ABC as a binary, download and unzip the code, then type make. To compile ABC as a static library, type make libabc.a.

When ABC is used as a static library, two additional procedures, Abc_Start() and Abc_Stop(), are provided for starting and quitting the ABC framework in the calling application. A simple demo program (file src/demo.c) shows how to create a stand-alone program performing DAG-aware AIG rewriting by calling APIs of ABC compiled as a static library.

To build the demo program:

- Copy demo.c and libabc.a to the working directory
- Run gcc -Wall -g -c demo.c -o demo.o
- Run gcc -g -o demo demo.o libabc.a -lm -ldl -rdynamic -lreadline -ltermcap -lpthread

To run the demo program, give it a file with the logic network in AIGER or BLIF. For example:

    [...] ~/abc> demo i10.aig
    i10 : i/o = 257/ 224 lat = 0 and = 2396 lev = 37
    i10 : i/o = 257/ 224 lat = 0 and = 1851 lev = 35
    Networks are equivalent.
    Reading = 0.00 sec
    Rewriting = 0.18 sec
    Verification = 0.41 sec

The same can be produced by running the binary in the command-line mode:

    [...] ~/abc> ./abc
    UC Berkeley, ABC 1.01 (compiled Oct 6 2012 19:05:18)
    abc 01>.

or in the batch mode:

    [...] ~/abc> ./abc -c "r i10.aig; b; ps; b; rw -l; rw -lz; b; rw -lz; b; ps; cec"
    ABC command line: .

Compiling as C or C++

The current version of ABC can be compiled with a C compiler or a C++ compiler.

- To compile as C code (default): make sure that CC=gcc and ABC_NAMESPACE is not defined.
- To compile as C++ code without namespaces: make sure that CC=g++ and ABC_NAMESPACE is not defined.
- To compile as C++ code with namespaces: make sure that CC=g++ and ABC_NAMESPACE is set to the name of the requested namespace.
For example, add -DABC_NAMESPACE=xxx to OPTFLAGS.

Building a shared library

- Compile the code as position-independent by adding ABC_USE_PIC=1.
- Build the libabc.so target:

    make ABC_USE_PIC=1 libabc.so

Bug reporting:

Please try to reproduce all the reported bugs and unexpected features using the latest version of ABC available from

If the bug still persists, please provide the following information:

- ABC version (when it was downloaded from BitBucket)
- Linux distribution and version (32-bit or 64-bit)
- The exact command line and error message when trying to run the tool
- The output of the ldd command run on the executable (e.g. ldd abc)
- Versions of relevant tools or packages used

Troubleshooting:

- If compilation does not start because of the cyclic dependency check, try touching all files as follows: find ./ -type f -exec touch "{}" \;
- If compilation fails because readline is missing, install the 'readline' library or compile with make ABC_USE_NO_READLINE=1
- If compilation fails because pthreads are missing, install the 'pthread' library or compile with make ABC_USE_NO_PTHREADS=1
- See for pthreads on Windows
- Precompiled DLLs are available from
- If compilation fails in file "src/base/main/libSupport.c", try the following:
  - Remove "src/base/main/libSupport.c" from "src/base/main/module.make"
  - Comment out calls to Libs_Init() and Libs_End() in "src/base/main/mainInit.c"
- On some systems, readline requires adding '-lcurses' to the Makefile.

The following comment was added by Krish Sundaresan: "I found that the code does compile correctly on Solaris if gcc is used (instead of g++ that I was using for some reason). Also readline, which is not available by default on most Sol10 systems, needs to be installed. I downloaded the readline-5.2 package from sunfreeware.com and installed it locally. Also modified CFLAGS to add the local include files for readline and LIBS to add the local libreadline.a.
Perhaps you can add these steps in the readme to help folks compiling this on Solaris."

The following tutorial is kindly offered by Ana Petkovska from EPFL:

Final remarks:

Unfortunately, there is no comprehensive regression test. Good luck!

This system is maintained by Alan Mishchenko alanmi@berkeley.edu. Consider also using the ZZ framework developed by Niklas Een:

This file was last modified on June 18, 2014
PTS provides a way of internationalizing (i18n'ing) and localizing (l10n'ing) software for Zope 2.

PlacelessTranslationService - Zope Corporation and Contributors

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. See license.txt.

What is PlacelessTranslationService?

PTS is a way of internationalizing (i18n'ing) and localizing (l10n'ing) software for Zope 2. It's based on the files supported by the GNU gettext set of utilities. A good source of information and background reading is the gettext documentation:

Installation

PTS is installed as a normal Zope product. This is usually done by unpacking the distribution into the Products directory of your INSTANCE_HOME and restarting Zope. More information can be found in the Zope Book:

Using PlacelessTranslationService

PTS is used in the following steps:

- i18n your software
- Prepare a translation template
- Prepare translations of the template
- Install translations

Each of these is explained below.

Internationalizing Your Software

A good overview of this can be found at:

Preparing a Translation Template

A translation template is an empty Portable Object file as defined by the gettext standard, with a special header block. The PO format is described in detail here:

The header block is fairly self-explanatory and can be seen in the sample.pot file included in this directory. All phrases in capitals, the language code, language name and (optionally) the content type and preferred encodings should be replaced with their correct values.

There are several ways to prepare a PO template:

- By hand: This can be done by copying the blank.pot included in this directory, replacing the sample values as described above, and then manually adding msgid and empty msgstr pairs for each of the msgid's used in your software.
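For illustration, a minimal (hypothetical) German translation file in this format might look like the following; treat the exact header field names as an assumption and take them from the blank.pot shipped with PTS rather than from this sketch::

    # de.po -- hypothetical example, not shipped with PTS
    msgid ""
    msgstr ""
    "Project-Id-Version: sample\n"
    "Language-Code: de\n"
    "Language-Name: German\n"
    "Domain: sample\n"
    "Content-Type: text/plain; charset=utf-8\n"

    msgid "Welcome"
    msgstr "Willkommen"

Each msgid is the phrase as it appears in your software, and each msgstr is its translation; the empty msgid/msgstr pair at the top carries the header block described above.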
- Using i18ndude: i18ndude is a tool that is useful when all your software is in the form of ZPTs stored in files on the filesystem. It can be downloaded from:

Prepare Translations of the Template

Preferably, find a translation company that can handle the gettext standards and send them your .pot file. They should send back .po files for the languages you require.

If you're doing it yourself, copy the .pot file to a file named after the language you're translating to, with a .po extension. Then go through that file and fill in the msgstr sections. Finally, update all the metadata fields at the top of the file so they are correct for the translation you have just completed.

At this point, you should have a .pot file and a collection of .po files.

Install Translations

PTS will look in folders called 'i18n' for .po files to use as translations. These 'i18n' folders will be searched if they are in the INSTANCE_HOME or in the directories of any of the Products you have installed. Copy your .po files to an 'i18n' folder of your choice in one of these locations. Once that's done, restart Zope.

Changelog

2.0.7 (2017-02-12)

Bug fixes:

- Fix import from Globals that is removed in Zope 4. [pbauer]

2.0.6 (2016-08-11)

Fixes:

- Use zope.interface decorator. [gforcada]

2.0.5 (2014-04-16)

- .gitignore added. [tisto]

2.0.4 (2013-08-13)

- Add module security declarations. Prevent publishing of the translate method. (Fixes from PloneHotfix20130618.) [davisagli]

2.0.3 (2011-11-24)

2.0.2 - 2010-10-27

- Fixed chameleon incompatibility in catalog_broken.pt. [swampmonkey]

2.0.1 - 2010-08-04

- Made initialize2 function compatible with Zope 2.13. [hannosch]

2.0 - 2010-07-18

- No changes.

2.0b6 - 2010-06-13

- Removed the .registration.cache file from i18n directories. Since the majority of po files in plone.app.locales moved to locales folders, this optimization isn't worth it anymore. [hannosch]
- Avoid using the deprecated five:implements directive.
[hannosch]

2.0b5 - 2010-03-26

- Fixed incompatibility in TEMPLATE_LANGUAGE support if you have specified an explicit language list via zope_i18n_allowed_languages. [hannosch]

2.0b4 - 2010-03-01

- Made sure that the request language memoization patch doesn't throw errors if the request cannot be adapted to IAnnotations. This closes. [hannosch, davisagli]

2.0b3 - 2010-02-04

- Refactored the i18n folder discovery mechanism to be independent of the persistent product registry. [hannosch]

2.0b1 - 2010-01-24

- Extend our hack of the negotiator a bit more and introduce the concept of a template language. This one will always be allowed no matter if there's a po file for it for every domain or not. This closes. [hannosch]
- Added missing zope.annotation dependency. [hannosch]

2.0a2 - 2009-11-13

- Merged in the PTSLanguages class from Products.Five.i18n and enabled it in our overrides.zcml. [hannosch]
- Fixed deprecation warnings for use of Globals. [hannosch]
- Cleaned up package metadata and declared package dependencies. [hannosch]

2.0a1 - 2008-10-16

- Finished support for handling of po files inside i18n folders in normal Python packages. They need to be registered as a Zope2 product but don't need to be in the Products.* namespace anymore. [hannosch]
- Removed the _compile_locales_dir method and patch. Compiling mo files is now handled by zope.i18n itself. [hannosch]
- Added first version of a registration cache for i18n folders. We track the modification time, number of files and header information of the files and write those out to a cache file. On startup we read the cache file and use the information as long as it is still current, instead of reparsing the po file headers. The cache file is called '.registration.cache'. [hannosch]
- Minor optimization in initialize code. [hannosch]
- Optimized loading of po files from i18n files. We only parse the header of the files, when all we need is the language header. This requires a new version of python-gettext.
[hannosch]

- Optimized startup logging. Now we don't spam the debug level anymore. [hannosch]
- Deprecated all translation domain and service related methods and classes. [hannosch]
- Removed the var/pts mo file cache in favor of compiling all mo files in place. Also removed registration of catalogs with PTS and register them as Zope3 ITranslationDomain utilities instead. [hannosch]
- Removed our own copy of the msgfmt module and use the one from the python-gettext package instead. [hannosch]
- Stopped adding BrokenMessageCatalog objects to PTS. [hannosch]
- Removed self-recreation code when updating to newer versions of PTS. [hannosch]
- Removed deprecated methods. [hannosch]

1.5.4 - 2010-03-01

- Fix the PTS_LANGUAGES and LazyGettextMessageCatalog optimization. A boolean check was inverted. This closes. [hannosch]

1.5.3 - 2009-06-11

- Fix support for merging multiple message catalogs for the same domain. Previously this only worked in test-land. [witsch]
- Add test layer properly initializing the package so that the tests can also pass with the eggified version. [witsch]

1.5.2 - 2009-05-13

- Create unique catalog names for translation files found in packages. This closes. [hannosch]
- Deferred our own initialization to the package load time, so the persistent product registry is populated with the product entries for all packages. This allows all translations to be registered at the first startup of a new instance and closes. [hannosch]

1.5.1 - 2009-02-22

- Uppercased the readme.txt file to README.txt. Some platforms don't seem to like an all-lowercase name here. [hannosch]

1.5 - 2009-02-20

- Reformatted changelog and updated package metadata. [hannosch]
- Patched zope.i18n.zcml.registerTranslations in order to backport Hanno's work on merging po files from the same domain. [tarek]
- Added some tests for the registerTranslations patch. [tarek]

1.4.14

- Fixed setup.py that was referring to a non-existing file.
[maurits]

1.4.13 - August 18, 2008

- Reworked the PTS loading code to not rely on the persistent product registry for file paths anymore, since that can get out of sync too easily and cause problems with multiple ZEO clients on different file paths connected to the same database. This closes and. [hannosch]

1.4.12 - June 17, 2008

- Finished support for handling of po files inside i18n folders in normal Python packages. They need to be registered as a Zope2 product but don't need to be in the Products.* namespace anymore. [hannosch]
- Added some missing ZCML statements, which allow using PTS in a Zope-only environment. Thanks to Martijn Jacobs for the patch. [hannosch]

1.4.11 - April 28, 2008

- Work around a bug in addCatalog that would fail on broken message catalogs. This closes. [hannosch]

1.4.10 - April 20, 2008

- Switched the mo file cache to store files under the client home instead of relying on the var folder to be present inside the instance home. This should fix permission errors for effective-user installs. This refs. [hannosch]
- Do not use the lazy message catalog at all when the list of languages is restricted via PTS_LANGUAGES, as the advantage in memory footprint will no longer exist, but the tiny lookup penalty would still be there. [hannosch]
- Added support for a new environment variable called PTS_LANGUAGES. If this variable is specified and contains a space-separated list of language codes, only those languages will be registered in the Zope instance. This can help in reducing the memory footprint and number of ZODB objects generated by PTS. For locales folders this also avoids compiling po files to mo files. [hannosch]

1.4.9 - March 26, 2008

1.4.8 - January 5, 2008

- Fixed a bug in the persistent translation service creation code. It registered the wrapper with a _path of ('TranslationService', ) at first. After a restart that would be corrected to the correct one: ('', 'Control_Panel', 'TranslationService').
  This should fix a couple of bugs in the Plone bug tracker. [hannosch]

1.4.7 - December 24, 2007

- Raise a ValueError when the Zope3 translation utilities get passed an
  invalid context argument. Translations in Zope3 work against the request
  alone, and while the keyword is called context it was too easily confused
  with a contentish context. [hannosch]

1.4.6 - December 2, 2007

- Catch PoSyntaxError when loading translation files from locales folders
  and output a warning instead of preventing Zope from starting up.
  [hannosch]
- Backed out handling of PTS as a global utility again. It turns out that
  registering a persistent object as a global utility is as bad as
  registering it as a module-level global. So we use the PTSWrapper again,
  which stores only the physical path to the PTS and loads it on every
  access. This fixes the ConnectionStateErrors witnessed in Plone 3.0 and
  closes. [hannosch]
- Backported LazyGettextMessageCatalog from the trunk and use it instead of
  the standard zope.i18n GettextMessageCatalog. This improves startup time
  and memory footprint, as only those catalog files which are actually used
  will be parsed and loaded into memory. [hannosch]

1.4.5 - October 7, 2007

- Guard against sporadic ConnectionStateErrors in the PTS utility
  implementation. [hannosch]

1.4.4 - July 9, 2007

- Added a new memoize function, which is used to patch the Zope3 negotiator
  to store the results of the language negotiation on the request. [hannosch]
- Various minor updates to msgfmt.py. [hannosch]

1.4.3 - May 1, 2007

- Added new mo file generation logic, which will automatically generate and
  update the mo files in all locales folders instead of in the var/pts
  cache, so these can be picked up by the Zope3 translation machinery
  directly. You need to make sure that the user running the Zope process has
  write permissions in all locales folders for this feature to work. Folders
  following the i18n folder layout will be treated the same way as before.
  [hannosch]
- Removed mo files for the PTS domain. [hannosch]

1.4.2b2 - March 23, 2007

- Commented out the five:registerPackage for now, as it led to ugly
  ConnectionStateErrors during tests, as PTS would have been set up as part
  of the ZCML layer. [hannosch]

1.4.2b1 - March 5, 2007

- Small optimization. Check if the context passed to the translate function
  is already a request, so we don’t need to acquire it from the context.
  [hannosch]
- Added IPTSTranslationDomain interface and utility. These can be used to
  proxy a translation domain that is still handled by PTS to make it
  available as a Zope3 translation domain as well, so it can be used in pure
  Zope3 page templates for example. [hannosch, philiKON]

1.4.1 - February 10, 2007

1.4.0 - October 25, 2006

- Removed the tracker functionality of automatically recording missing
  translations. This turned out to be quite resource-intensive. [hannosch]
- Fixed the translate method to work in an environment where the context is
  not acquisition-wrapped. [hannosch]
- Fixed one more deprecation warning in GettextMessageCatalog. [hannosch]
- Removed PatchStringIO completely, it apparently wasn’t needed anymore.
  [hannosch]
- Removed the FasterStringIO module and the accompanying monkey patch. These
  are part of CMFPlone/patches now. [hannosch]
- Clarified some docstrings on the utranslate methods; these are identical
  to the translate methods now, don’t use them anymore. [hannosch]
- Cleaned up the PatchStringIO a bit; as we require Zope 2.10 now, we always
  have the Zope3 TAL machinery around and we should suppress the annoying
  deprecation warnings. [hannosch]
- Deprecated the RequestGetAccept language negotiation handler, as it
  interferes with forms that include a field called language. We do not
  register the handler in 1.4 anymore. This closes. [hannosch]
- Cleaned up tests and removed the custom testrunner (framework/runalltests).
  [hannosch]
- All translation domains which are registered with the Zope3 translation
  service are now ignored by PTS, as PTS wouldn’t be queried for these
  anyway. [hannosch]
- PTS’s own translations (for the management screens) are now set up to use
  the Zope3 translation service. Quite ironic, you may think, but this
  emphasizes even more the path PTS will take. [hannosch]
- Converted PTS’s own translation to the new-style locales folder layout.
  [hannosch]
- Changed the translate method of PTS to return Unicode by default to work
  better with Zope 2.10+, which uses the Zope3 tal and pagetemplate
  machinery that expects Unicode in all places. [hannosch]

1.3.6 - April 22, 2007

- Yet another Unicode error was fixed, which was caused by non-Unicode
  characters in page template source (utf encoded strings in page template
  source). This closes. [naro, hannosch]

1.3.5 - January 27, 2006

- The recent change to return Unicode exposed another place in the TAL
  interpreter that combines text, which wasn’t yet patched to allow a
  mixture of Unicode and utf-8 encoded text. A new monkey-patch has been
  introduced to fix this problem. This closes. [hannosch]

1.3.4 - December 13, 2006

- Changed the translate method of PTS to return Unicode by default. This was
  needed for Plone 2.5 in order to get sensible behaviour with the
  FiveTranslationService. This release is probably not compatible with
  Plone 2.1. [hannosch]

1.3.3 - September 29, 2006

- Provided some more nice fallbacks in the interpolate function for
  situations where you mixed encoded strings or Unicode in the mapping dict
  compared to the text itself. We handle utf-8 encoded strings gracefully in
  all cases now. [hannosch]

1.3.2 - September 8, 2006

- Made the logging of broken message catalogs more verbose. Now both the
  filename and path are logged, so you actually have a chance of finding
  those files. Thx limi for the suggestion.
  [hannosch]
- Fixed bugs in the interpolate function, where mixing of Unicode and
  encoded strings failed when the Unicode string contained only ASCII
  characters. This will work now. Nonetheless you should update your code to
  use Unicode internally, as support for translating non-Unicode strings
  will go away once we switch to a Zope3-based TranslationService.
  [hannosch]

1.3.1 - June 1, 2006

- Also apply our evil hack that allows mixing utf-8 encoded strings and
  Unicode to the Zope3 versions of pagetemplate and talinterpreter, so
  current Plone works under Zope 2.10. Note that PTS is slated for
  destruction and you should really start to update all your code to use
  Unicode internally, especially for output through TAL. [hannosch]

1.3.0 - May 15, 2006

- Fixed another problem in the interpolate function, where variables were
  not replaced if the string was an old-style normal string and not Unicode.
  This closes. [hannosch]
- Fixed a UnicodeDecodeError bug in the interpolate function, when either
  the mapping or the text was Unicode but the other one was not. The
  function expects Unicode both as the text and for all entries of the
  mapping, as it has no way to guess the encoding of any of them. [hannosch]
- Sanitized the interpolate function. It had various major bugs and was just
  unbelievably slow. This closes. [hannosch]
- Removed OpenTal support in anticipation of having to support Zope3
  zope.tal for Zope 2.10. We don’t want to support three tal
  implementations ;) [hannosch]
- Big general spring cleaning. Moved to the logging module instead of zLOG.
  The logging module is included in Python starting with 2.3. Running an
  older version of Python is therefore not supported anymore. This goes
  likewise for Zope < 2.7. [hannosch]
- Include the filename of the po file in the missing-domain error message.
  [wichert]

1.2.7 - March 19, 2006

- Fixed a bug in msgfmt.py noted by Andrey Lebedev. All comments starting
  with ‘#,’

1.2.6 - February 25, 2006

- Removed some Python 2.1 BBB and unused code.
  [hannosch]
- Removed the home-grown MessageID implementation. Using Zope 3 MessageIDs
  is now possible with Zope 2.8 / Five 1.1 or Zope > 2.9. [hannosch]
- Moved changes.txt from the doc subfolder to the main folder and renamed it
  to HISTORY.txt to comply with the standard layout. [hannosch]
- Changed the standard logging level to BLATHER instead of INFO so the
  startup process isn’t bombarded with useless messages. [hannosch]
- Added an environment variable “DISABLE_PTS” to entirely disable loading of
  translation files and registration of PTS as a translation service without
  removing the product from the ‘Products’ directory. HINT: One easy way to
  set environment variables is to use the <environment> ‘zope.conf’
  directive. [dreamcatcher]

1.2.5 - 2005-12-06

- Fix problems with folder layouts where INSTANCE_HOME.startswith(ZOPE_HOME)
  is True, as reported in. Thanks to ymahe for the patch, which I have
  slightly modified. [hannosch]

1.2.4 - 2005-11-16

- Removed some Python 2.1 compatibility code and added a first very basic
  test for loading po files. [hannosch]
- Made some filesystem access code a bit more robust by additionally
  catching OSErrors. This fixes. [hannosch]
- Increased the class version again and wrote a test to ensure matching
  class version and version in version.txt. [hannosch]

1.2.3 - 2005-10-17

- Fixed - upgrade from 2.1 to 2.1.1 breaks all message catalogs. We now
  increment the internal class version of PTS, which will result in a
  recreation of the translation_service object in the ZODB, so all contained
  internal poFile objects get removed and freshly recreated. [hannosch]

1.2.2 - 2005-10-08

- Replaced storing the persistent PTS at the module level in __init__.py
  with a PTSWrapper object. Added an isRTL method to PTSWrapper. Should fix
  the connection issues. [alecm]
- Merged missing fix from the 1.0 branch.
  Its changelog entry was: “Fixed issue with multiple ZEO clients at
  different filesystem locations.” This was done by longsleep on Feb 9,
  2005. [hannosch]

1.2.1 - 2005-08-07

- Fresh tarball for Plone 2.1rc2 (without .svn directories) [batlogg]
- Added Greek translation [thx to Nikos Papagrigoriou] [hannosch]

1.2.0 - 2005-07-28

- Purge the mo file cache when PTS is recreated [tiran]

1.2-rc3

- Fixed id generation for po files located in the “locales” directory
  [tiran]
- Added a mo file cache which stores the compiled files in
  INSTANCE_HOME/var/pts/${catalog_id}.mo [tiran]

1.2-rc2 … 1.2-rc1 - 2004-09-08

- New feature: RTL support and an RTL API for right-to-left languages. Po
  files may contain a header called X-Is-RTL with either yes, y, true or 1
  for an rtl language, or no, n, false or 0 for an ltr language (the default
  value). The product module also contains a new method isRTL which is
  available TTW.

1.1-rc1 - 2004-07-15

- New feature: msgid tracker (thanks to ingeniweb). It tracks untranslated
  msgids inside the PTS. You can easily download them as a po file. See the
  ZMI for more information.
- Set MessageCatalog isPrincipiaFolderish to false to avoid infinite
  recursion of dtml-tree inside the ZMI.

1.0-rc8

- This version is no longer a fork, but is the official version now. Thanks
  to Lalo Martins for his tireless efforts in writing the original product.
- Disabled usage of SESSION
- Re-enabled .missing logging
- Added documentation section, including details of how to use .missing
  logging to generate .pot files

1.0fork-rc7 - 2004-05-11

- Reenabled the getRequest patch to avoid some ugly problems

1.0fork-rc6 - 2004-05-05

- Cleaned up all Python files, realigned the code and removed spaces

1.0fork-rc5 - 2004-04-22

- Changed logging to use the methods and vars from utils.py
- Cleaned up the imports and separated them into Python, Zope and PTS
  imports
- Removed the dependency and auto-loading of the get_request patch.
  Now it’s loaded only when using the MessageID module, when applying
  Unicode to FasterStringIO (shouldn’t happen!) or as a fallback when PTS
  can’t get a valid context (REQUEST). The last two cases will break the
  first time after a (re)start of Zope. If your software depends on
  get_request(), apply the patch manually:

    from Products.PlacelessTranslationService.PatchStringIO import applyRequestPatch
    applyRequestPatch()

  NOTE: FOR THIS RELEASE THE get_request PATCH IS ENABLED BY DEFAULT!
- Better debugging message for PoSyntaxErrors

1.0fork-rc4 - 2004-04-05

- Changed po file id creation:
  - id is MyProducts.i18n-pofile or MyProducts.locales-pofile for po files
    loaded from a product directory
  - id is GlobalCatalogs-pofile for po files loaded from INSTANCE_HOME/i18n/
- Always append fallback catalogs to the catalogs used for translation
- Support INSTANCE_HOME/locales/
- Move GlobalCatalogs from INSTANCE_HOME/i18n/ and INSTANCE_HOME/locales/ to
  the beginning of the catalogs used for translation
- Cache catalog names in the REQUEST using the domain and language as key

1.0fork-rc3 - 2004-03-09

- Added a product identifier to the control panel catalog id to allow the
  same po filenames in different locations:
  - Catalog ids are now like Products.CMFPlone.i18n.plone-de.po
  - Catalogs not coming from a Product (eg from INSTANCE_HOME) are named
    like before (plone-de.po)
- Fixed collector issue #910529. Thanks to Nicolas Ledez for the report and
  the patch

1.0fork-rc2 - 2004-03-01

- Fixed bug in FasterStringIO that added new lines to the output
- Added Zope 3-like locales directory support:
  Products/MyProduct/locales/${lang}/LC_MESSAGES/${domain}.po

1.0fork-rc1 - 2004-02-11

- Fixed minor problems with Python 2.1 compatibility

1.0fork-beta5 - 2004-02-03

- Added utranslate method
- Added negotiator chains and two new easy negotiators
- Added Zope 3-like MessageID and MessageIDFactory
- Updated API and cleaned up code:
  - added security to classes
  - moved some classes to utils.py to avoid method-level imports
  - added getTranslationService() method to get the PTS instance in other
    products

1.0fork-beta4 - 2004-01-28

- Read all files with “rb” in msgfmt.py
- Display broken Message Catalogs in the ControlPanel as “broken”
- Synced with these latest PTS changes from savannah:
  - added as_unicode argument to translate
  - cleaned up msgfmt.py

1.0fork-beta3 - 2004-01-07

- Added a builtin mo compiler based on the msgfmt tool from the Python
  source package. No need to compile the po files to mo files. Thanks to
  Christian ‘Tiran’ Heimes <tiran@cheimes.de>
- No longer load mo files on startup. Catalogs are automatically compiled.

1.0fork-beta2 - 2003-11-24

- No longer register a persistent service to the Zope translation service
  registry. Instead wrap PTS with a non-persistent class
- Added a de (German) translation for the PTS ZMI
- Reimplemented the hook to register an own negotiation method into the
  Negotiator, which was stripped out in 1.0beta1 (now works with
  PloneLanguageTool again)
- Python 2.1 compatibility

1.0beta1 - 2003-10-??

- Internationalized our own page templates (for the ZMI) and added a pt_BR
  translation
- Generalized the Negotiator so that it may negotiate any header in the
  “accept” format

1.0alpha2 - 2003-09-26

- Some primitive DTML support
- Fixed persistence issues that were arising from having the same object
  stored in the ZODB and in a module-level global var (thanks to Sidnei)

1.0alpha1 - 2003-08-27

- Removed dependency on PAX
- Now PTS looks for an “i18n” subdirectory under each Product package, which
  makes it easier to package/install i18n-aware products. The i18n dir in
  INSTANCE_HOME is still kept; you can use it for local overrides
- Improvements to the ZMI usability

0.5 - 2003-03-31

- Now we have a ZMI (Zope Management Interface) in Zope’s Control Panel.
  You can use it to refresh catalogs without restarting, and to test
  installed catalogs
- Some functions at module level are exported for use in Python Scripts and
  Page Templates (Open or Z): negotiate(), translate(), getLanguages(),
  getLanguageName()
- Added a “hotfix” to StringIO that should make PTS work with ZPT without
  UnicodeError being raised constantly

0.4 - 2003-02-03

- Relicensed to GPL
- Now it really works with ZPT (thoroughly tested)
- If used with OpenPT, it will use the output encoding negotiation hooks
- The Negotiator now uses a cache (stored in the request) to speed things up
- Can now use multiple catalogs for the same domain (but the order in which
  they are checked is a bit random)
- Special thanks to Magnus Heino for the ZPT support hints and patches

0.3 - 2003-01-02

- This release marked the split of PlacelessTranslationService into its own
  package, and the initial attempts at making it compatible with ZPT.

0.2 - 2002-09-22

- Updated release

0.1 - 2002-08-24

- Initial release
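The X-Is-RTL po header described in the 1.2-rc1 entry above maps a small set of string values to a boolean, defaulting to left-to-right. A minimal sketch of that mapping (illustrative only, not the actual PTS implementation):

```python
# Sketch of interpreting the X-Is-RTL po-file header described in the
# 1.2-rc1 changelog entry above. Illustrative, not the real PTS code.

RTL_VALUES = {"yes", "y", "true", "1"}

def is_rtl(headers):
    """Return True if the catalog headers mark the language right-to-left.

    `headers` maps po header names to values. A missing X-Is-RTL header,
    or any value outside the documented set (no, n, false, 0, ...),
    falls back to left-to-right, which is the stated default.
    """
    value = str(headers.get("X-Is-RTL", "")).strip().lower()
    return value in RTL_VALUES

print(is_rtl({"X-Is-RTL": "yes"}))  # True
print(is_rtl({"X-Is-RTL": "no"}))   # False
print(is_rtl({}))                   # False
```

The header name and accepted values come straight from the changelog entry; the function name mirrors the isRTL method it mentions but is otherwise a stand-in.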
I know this topic has been posted a bunch of times, but I honestly don't know what I am doing wrong. It seems that the points displayed aren't what the user entered.

here is my code
------------------------------------------------------------------------------------
#include <windows.h> // wasn't sure what headers I needed
#include <stdlib.h>  // so I included a bunch!
#include <stdio.h>
#include <conio.h>
#include <iostream>

using namespace std;

void gotoxy(int X, int Y) // you need this whenever you use gotoxy
{
    COORD coord;
    coord.X = X;
    coord.Y = Y;
    HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
    SetConsoleCursorPosition(hConsole, coord);
}

void draw(int x1, int y1, int x2, int y2) // drawing function:
{                                         // just draws a "." at the points
    gotoxy(x1, y1);
    cout << ".";
    gotoxy(x2, y2);
    cout << ".";
}

int main()
{
    int x1, y1, x2, y2;

    cout << "Enter your coordinates" << endl;
    cout << "Point 1" << endl;
    cin >> x1 >> y1;
    cout << "Point 2" << endl;
    cin >> x2 >> y2;

    gotoxy(0, 0);
    system("CLS");
    draw(x1, y1, x2, y2);
    cout << "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n";
    return 0;
}
------------------------------------------------------------------------------------
Like I said this kinda almost sort of works, but the actual output seems to not be what the user enters. Funky.
sorry to both, but it's a person thing. i love use properties instead Get\Set. sorry

Quote:

    The big question I have about all of this is - why?? :confused:
    Your class test has the class variables public, so that any user of this
    class can set/get the contents of these variables at will (ok, with a nod
    towards the restrictions), but basically these are public class variables.
    The whole point of having private class variables and public getter/setter
    functions is to separate the underlying representation of the data from
    the public use of that data. So if, at a later date, it is required to
    change the underlying private data representation, this can be done
    without making changes to the public interface. Indeed, users of the class
    need never know that the underlying data representation has changed. In
    this case, if the test class data representation is changed, then all the
    code that uses that data has to be changed as well!

    Take a simple example. In class test you are storing the age as an
    integer. What about later, if you want to store the age as, say, the
    number of days since 1900? As age is public, all uses of age have to be
    found and changed. Using class get/set functions GetAge(..) and
    SetAge(..), just these two functions would need to be changed to
    accommodate the data representation change.
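The quoted argument (change the stored representation without touching the public interface) works the same way with properties. A hypothetical sketch, shown here in Python since it has built-in properties; the class and attribute names are made up for illustration:

```python
# Hypothetical illustration of the quoted argument: a property keeps the
# public interface (person.age) stable while the private representation
# changes from "years" to "days". Callers never notice the switch.

class Person:
    def __init__(self, age):
        self.age = age  # routed through the property setter below

    @property
    def age(self):
        # Public read: convert the internal day count back to whole years.
        return self._age_days // 365

    @age.setter
    def age(self, years):
        if years < 0:
            raise ValueError("age cannot be negative")
        # Internal representation is days, but users still assign years.
        self._age_days = years * 365

p = Person(30)
print(p.age)        # 30
p.age = 31
print(p._age_days)  # 11315
```

The same effect is achieved in C++ with GetAge()/SetAge() member functions, as the quoted post says; properties just make the call sites look like plain attribute access.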
collective.plonetruegallery 3.4.8

A gallery/slideshow product for Plone that can aggregate from Picasa (add collective.ptg.picasa) and Flickr (add collective.ptg.flickr) or use Plone images.

collective.plonetruegallery Documentation

Introduction

collective.plonetruegallery is a Plone add-on that implements a very customizable and sophisticated gallery.

Plone Version Compatibility

Works with Plone 5.0 and earlier.

How It Works

collective.plonetruegallery adds a Gallery View to Folders and Collections. For any Folder or Collection containing or showing images, use the Display toolbar menu and select Gallery View. Once that is done, a Gallery Settings toolbar menu is enabled for the type. With this, you can customize the various settings for the Gallery.

Supported Display Types

To install any of the various extra display types, you need to add the dependent package in buildout:

- galleria (included in the default installation of collective.plonetruegallery)
- contact sheet (collective.ptg.contactsheet)
- thumbnail zoom gallery (collective.ptg.thumbnailzoom)
- presentation (collective.ptg.presentation)
- galleriffic (collective.ptg.galleriffic)
- highslide (collective.ptg.highslide)
- fancybox (collective.ptg.fancybox)
- pikachoose (collective.ptg.pikachoose)
- s3slider (collective.ptg.s3slider)
- nivo slider (collective.ptg.nivoslider)
- nivo gallery (collective.ptg.nivogallery)
- content flow (collective.ptg.contentflow)
- supersized (collective.ptg.supersized)

Buildout configuration

eggs =
    ...
    collective.plonetruegallery
    collective.ptg.highslide
    collective.ptg.fancybox
    collective.ptg.galleriffic
    collective.ptg.s3slider
    collective.ptg.pikachoose
    collective.ptg.nivoslider
    collective.ptg.nivogallery
    collective.ptg.contentflow
    collective.ptg.supersized
    collective.ptg.thumbnailzoom
    collective.ptg.contactsheet
    ...

Installing all galleries

If you want to install all available galleries, you could add

eggs =
    ...
    collective.plonetruegallery
    collective.ptg.allnewest
    ...
to buildout’s egg section. This will also install some galleries that are “under development”.

Features

- Flickr and Picasa support!
- Dexterity “Lead Image behaviour” support
- Works with ‘Image’, ‘News Item’ and other content types that have an Image field (provide IImageContent)
- Also works with redturtle.smartlink and collective.contentleadimage (install )
- Customize gallery size, transitions (limited transitions right now), timed and other settings
- Can use nested galleries
- Searching and category selection for nested galleries
- Galleria, Galleriffic, Highslide JS, s3slider, Pikachoose and Fancybox display types
- Display a gallery inline
- Products.Collage integration
- Compatible with new-style Plone collections
- Provides a base settings configlet

Flickr and Picasa Web Album Support

- To add support for these types of galleries you must install additional packages
- Install collective.ptg.flickr for Flickr support
- Install collective.ptg.picasa for Picasa Web Album support (tested with 1.3.3 and 2.0.12)
- On Plone 3.x you must also manually install hashlib for Picasa support
- These can just be added to your buildout or installed with easy_install, or you can add the package to your egg section like

Displaying Gallery inline

A view (@@placegalleryview) can be used to place the gallery inside other content.

Pop-up effect

You could do this:

1) Install
2) Mark the link to the gallery with the "prettyPhoto" style (which has now been added) from Kupu or TinyMCE

Inline Gallery

For showing a gallery in another page, try something like this:

<object data="path/to/gallery/@@placegalleryview" height="400" width="500">
  <param name="data" value="path/to/gallery" />
</object>

Notes for successful inline object tag usage:

- You will have to “whitelist” <object> and <param> in portal_transforms safe-html.
- When editing in Plone 4.2 you will have to switch your editor to Kupu, since TinyMCE fracks up the object tag into a flash item.
- If testing without Apache in front of your Plone, you will need to make sure that the “path/to/gallery” path from the example above includes any levels above the Plone object in the Zope instance (e.g. if your Plone object is inside a folder named “version1”, and the name of your gallery is “mygallery”, then the path should read “/version1/Plone/mygallery”). Of course, you will need to remove the “/version1/Plone” part when you put Apache in front of your Plone.

Or you can do the same with an iframe.

Re-use gallery in page template

If you want to place the gallery in another page template, you can re-use the entire HTML as-is:

<tal:gallery tal:

This has the advantage, over <object> embedding, that a modal (pop-up) showing the enlarged image will take up the entire screen, instead of just the <object> area.

Troubleshooting safe-html

If you have trouble, do this: go to safe_html in the portal_transforms tool and make sure param and object are valid tags (not nasty tags). After that, you should flush the ZODB cache by going to:

1. Zope root app ZMI
2. Control Panel
3. Database
4. main (or whatever zodb you have)
5. Flush Cache tab
6. Press the “Minimize” button

This will remove all cooked texts from the ZODB cache. This procedure is mentioned at the top of safe_html in portal_transforms.

Upgrading

From 0.8*

The upgrade to version 0.8* is an important and large update. Basically, it gets rid of the Gallery type and replaces it with the regular Folder type along with a new view applied to the folder, namely the “Gallery View.” You can only successfully upgrade from the 0.8* series by first upgrading to a 1.x series release and then upgrading to the 2.x series.

From 1.x to 2.x

No longer supports the Slideshow 2 gallery, which has been replaced with galleria.

From * to 3.x

You’ll be required to change your respective collective.js dependencies to collective.ptg dependencies in buildout and re-run buildout.
Installation

Since this product depends on plone.app.z3cform, you’ll need to add a few version overrides for products in your buildout if you aren’t using recent versions of Plone. Good news: if you’re using any other product that uses plone.app.z3cform, you’ll already be good to go. Basically, you’ll need to add these to your buildout versions section ONLY IF you’re running Plone < 4.1.

For Plone 4.0:

[versions]
z3c.form = 2.3.2
plone.app.z3cform = 0.5.0
plone.z3cform = 0.6.0
zope.schema = 3.6.0

and Plone 3.x:

[versions]
z3c.form = 1.9.0
plone.app.z3cform = 0.4.8
plone.z3cform = 0.5.10
zope.i18n = 3.4.0
zope.testing = 3.4.0
zope.component = 3.4.0
zope.securitypolicy = 3.4.0
zope.app.zcmlfiles = 3.4.3

These are not the exact versions plonetruegallery requires; it’s just a known working set. If you already have plone.app.z3cform installed under different versions or wish to upgrade versions, you’re fine doing so.

Then, once you run buildout with this configuration, install collective.plonetruegallery via the add-on product configuration. Also, make sure Plone z3cform support is installed too. If you experience issues where no settings appear in the Gallery Settings tab, reinstall Plone z3cform support.

Uninstall

First uninstall the collective.plonetruegallery product just like you would any other product. Then, go to portal_setup in the ZMI and click on the Import tab. Once there, select the collective.plonetruegallery Uninstall Profile and run all the steps. Once that is done, you can remove the egg from your buildout.

Fetching of Images Explained

- When rendering a Picasa or Flickr gallery, it checks if the images have been fetched within a day. If they have not, then it re-fetches the images for the gallery.
-.
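The refresh rule described above (remote images are re-fetched only when the last fetch is more than a day old) can be sketched as follows. This is a stand-alone illustration: the class and method names are hypothetical, not the add-on's actual API.

```python
# Sketch of the "re-fetch after a day" rule described above. Names are
# hypothetical; this is not collective.plonetruegallery's real code.
import time

ONE_DAY = 24 * 60 * 60  # seconds

class RemoteGalleryCache:
    def __init__(self, fetcher, now=time.time):
        self._fetcher = fetcher   # callable returning the remote image list
        self._now = now           # injectable clock, handy for testing
        self._images = None
        self._fetched_at = None

    def images(self):
        """Return cached images, re-fetching when the cache is a day old."""
        stale = (
            self._fetched_at is None
            or self._now() - self._fetched_at >= ONE_DAY
        )
        if stale:
            self._images = self._fetcher()
            self._fetched_at = self._now()
        return self._images
```

Injecting the clock as a parameter makes the staleness rule easy to test without waiting a day: pass a fake `now` and advance it past `ONE_DAY` to force a re-fetch.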
License Notes

This Plone product is under the GPL license; however, the Highslide JS display type uses the Creative Commons Attribution-NonCommercial 2.5 License and is only for non-commercial use unless you have purchased a commercial license from the Highslide website. The collective.ptg.pixelentity gallery (under construction) also requires a license.

Credits

Coding Contributions

- Patrick Gerken - huge help with the 0.8 release
- Espen Moe-Nilssen
- Harald Friessnegger
- Sylvain Bouchard

Translations

- French - Sylvain Boureliou
- Norwegian - Espen Moe-Nilssen
- Brazilian Portuguese - Diego Rubert
- Finnish - Ilja Everila
- German - Jens W. Klein, Harald Friessnegger
- Italian - Mirto Silvio Busico
- Spanish - Enrique Perez Arnaud
- Dutch - Rob Gietema, Martijn Schenk, Fred van Dijk

SDG

Changelog

3.4.8 (2017-02-26)

- Document re-using gallery in page template [khink]
- Mention Plone 5.0 compatibility, tweak README, add screen shots [tkimnguyen]
- Refactored __getattr__ in settings, for clarity and possibly some speed. [maurits]
- Respect folder sort order for showing subgalleries on the galleryview, just as we do for images in the normal galleryview. Subgallery ordering was semi-random until now, because no order was passed into the catalog query for subgalleries. [fredvd]
- Don’t show the message “There are no images in this gallery.” if we enable showing subgalleries and there are actual subgalleries to display. [fredvd]
- Add option to disable the random lead image from galleries, so that always the first image is returned. Also useful with subgalleries. [dveeze, fredvd]
- Updated Dutch translations. [fredvd]
- Updated vocabulary so it works with Plone 5

3.4.7 (2016-02-01)

- Use the root navigation path for finding the gallery for portlets. That fixes the portlet with plone.app.multilingual. [bsuttor]

3.4.6 (2015-11-04)

- Added dexterity folder to classes implementing IGallery to work with dexterity types and Plone 5. [sandrarum]
- Updated Portuguese pt-br translation.
[lccruz]

3.4.5 (2014-11-28)

- Add destinations to old upgrade-steps to prevent steps all -> all. [pbauer]

3.4.4 (2014-06-05)

- Exclude our own sizes when building the size vocabulary. [witsch]

3.4.3 (2014-05-12)

- Remove requirements for plone.app.contenttypes, which likely broke a lot of buildouts on a minor version bump. [vangheem]

3.4.2 (2014-05-11)

- fix thumbnails… [vangheem]

3.4.1 (2014-04-30)

- Remove plone.app.contenttypes version fix. [thet]

3.4.0 (2014-02-08)

- Added behavior [jaroel]
- Support plone.app.contenttypes’ Image [jaroel]
- Drop support for Plone 3.3 and 4.0. [hvelarde]
- The Topic type is now deprecated [ale-rt]

3.3.2 (2013-07-05)

- fix character encoding in portlet image titles (so it works with images on Plone and Flickr) [kysr]

3.3.1 (2013-05-31)

- give site administrator manage galleries permission [vangheem]
- fix character encoding in portlet image titles [bouchardsyl]
- add portlet methods to return all images [bouchardsyl]

3.3.1b1 (2013-05-06)

- fix getSite [espen]

3.3.1a2 (2013-04-04)

- provide “download_url” in image data [vangheem]

3.3.0a1 (2013-03-18)

- provide “original_image_url” image data [vangheem]
- add ability to provide custom css for gallery to override styles [vangheem]
- add integration with collective.ptg.galleryimage [vangheem]
- restore plone 3 compatibility [vangheem]
- explicitly close the iframe tag in the embedded portlet gallery; fixes some browsers borking on the tag [vangheem]

3.2a (2012-11-07)

- moved picasa and flickr support to their own products [espenmn]
- added vocabulary for image sizes [espenmn]

3.1 (2012-10-12)

- be able to show copyright information [eehmke]

3.0 (2012-10-08)

- make final release

3.0b4 (2012-10-01)

- fix collage support [vangheem]

3.0b3 (2012-07-24)

- brown bag previous release [vangheem]

3.0b2 (2012-07-24)

- get portal root without the getSite hook [vangheem]
- dexterity compatible changes [vangheem]

3.0b1 (2012-07-04)

- no longer use collective.js packages since they caused more problems
  and confusion than anything. All gallery dependencies will now be collective.ptg.* namespaced. [vangheem]
- move to using collective.ptg.galleria
- move to using collective.ptg.contactsheet
- move to using collective.ptg.contentflow
- move to using collective.ptg.fancybox
- move to using collective.ptg.galleriffic
- move to using collective.ptg.highslide
- move to using collective.ptg.nivogallery
- move to using collective.ptg.nivoslider
- move to using collective.ptg.pikachoose
- move to using collective.ptg.presentation
- move to using collective.ptg.supersized
- move to using collective.ptg.thumbnailzoom

2.4b3 (2012-06-25)

- backward compatible way to use Collection (4.2) [vangheem]

2.4b2 (2012-06-21)

- Plone 4.1 conditional zcml [Mikko]

2.4b1 (2012-06-19)

- respect limiting number of items for collections [vangheem]
- add supersized gallery [espen]
- Added Basque (eu) translation [erral]
- Regenerated i18n files [erral]
- Fixed some i18n issues removing duplicated msgids [erral]
- added more settings for contactsheet and modified contactsheet to use the “speed” setting for how long the effect takes. It is now possible to use thumbnail sizes [espen]
- added more settings for thumbnailzoom [espen]
- added custom css settings for s3slider [espen]
- Add ability to have default settings control panel [espen]
- fixed Thumbnailzoom, Contactsheet and Presentation to use the Batch Size setting.
- compatible with new-style collections [vangheem]

2.3.1 (2012-05-11)

- place gallery iframe fixes [espen]

2.3.0b2 (2012-05-11)

- style fixes [vangheem]
- portlet fixes [vangheem]

2.3.0b1 (2012-05-09)

- Added settings for background position for the presentation gallery type [espen]
- move collection text field rendering to below gallery [vangheem]
- add content flow display type [vangheem]

2.2.0 (2012-05-02)

- Add presentation display type.
[vangheem] 2.1b2 (2012-04-27) - be able to position overlay controls of highslide gallery [domruf] 2.1b1 (2012-04-24) - add contact sheet and thumbnail zoom gallery [espen] 2.1a2 (2012-02-28) - more nivo slider themes [espen] 2.1a1 (2012-02-24) - nivo slider and gallery integration [espen] - Products.Collage integration(taken from collective.collage.plonetruegallery) [vangheem] 2.0a2 (2012-02-22) - allow you to place full gallery in portlet [vangheem] - added option for background color for pikachoose [espen] 2.0a1 (2012-02-22) - Remove Slideshow 2 display type (depends on mootools and has loads of conflicts) [vangheem] - switch to using collective.js.galleriffic [vangheem] - switch to using collective.js.highslide [vangheem] - switch to using collective.js.fancybox [vangheem] - finally remove remains to gallery content type. Can not upgrade directly to this product version now. [vangheem] - fix error with unicode-titled images [silviot] - added pikachoose support [espen] - added s3slider support [espen] 1.3.3 (2011-09-28) - fix placegalleryview [vangheem] - fix highslide gallery not auto-playing when pagination is enabled. [vangheem] 1.3.2 (2011-09-20) - set thumbnail height on galleriffic 1.3.1 (2011-09-20) - fix size and scale problems with galleriffic 1.3.0 (2011-09-20) - no longer use silly unique zcml to register display types - Add Galleriffic slideshow display type - Change the way the display types are used so that they can now be customized through portal_view_customizations 1.2.1 (2011-07-06) - add translation for pt_BR [rafabazzanella] 1.2.0 (2011-06-30) - Add option to set size for thumbnail images. TODO: Take available scales from plone.app.imaging () [hink] 1.1.0 ~ (2011-06-22) - fixes for Slideshow 2 in IE9 If you’re using custom styles for the Slideshow 2 gallery, please test this upgrade as some styling changes have been made. [vangheem] 1.0.5 ~ (2011-04-17) - fix plone 4.1 compatibility issue. 
Closes [vangheem] - reference all css and js with absolute urls [vangheem] - no longer server slideshow js from js registry [vangheem] 1.0.4 ~ (2011-03-14) - Add a Gallery Setting for Slideshow type to allow omitting the link to images. () [khink] 1.0.3 ~ (2011-02-20) - remove the restriction on requiring picasa web album accounts to end with ‘@gmail.com’. Fixes [vangheem] 1.0.2 ~ (2011-01-12) - Enable re-use of view template macro. collective.collage.plonetruegallery uses this. [khink] 1.0.1 ~ 2010-12-31 - added spanish translation [Enrique Perez Arnaud] - use ViewPageTemplateFile since you can get UnicodeDecodeError with non-ascii characters in the title and description. [Enrique Perez Arnaud] 1.0 - fix picasa support on Plone 4 1.0rc2 - made the menuitem, the settings action and tabs translatable and added German translations. [fRiSi] - move translation files to locales folder and added script to rebuild and sync the po(t) files and compile mo files (see) [fRiSi] - select the random image for subgalleries out of the subgallery’s images fixes [fRiSi] 1.0rc1 - Do not show “There are no images in this gallery” in case there are sub-galleries. [fRiSi] - added a placeful layout for adding galleries through kupu and prettyphoto [espen] - add description and text to rendered gallery page so people can have introductions to galleries [vangheem] - add hide controls options for gallery portlet. closes [vangheem] - fixed issue where portlets wouldn’t work properly when there were more than one on a page. Fixes [vangheem] - handle returns and quotes in descriptions. Fixes [vangheem] - gallery portlet now sets title and alt attribute of anchor tag for image. fixes [vangheem] 0.9.1rc5 - use plone.app.contentmenu.interfaces.IDisplayViewsMenu instead of plone_displayviews for menu declaration since it doesn’t work with plone.app.contentmenu > 2.0b3 and zope.browsermenu installed. 
[vangheem] 0.9.1rc4 - import Batch directly from PloneBatch since with Zope 2.13 Batch is not available at the package level when plonetruegallery is loaded. [vangheem] 0.9.1rc3 - made the basic gallery not store it’s cached images since it would never be able to really cache them anyways. This fixes the zodb potentially growing forever on sites that use the gallery portlet since it needed to calculate the gallery on every new image request, which would cause a new write to the database. FYI, packing the database brings it back down to it’s normal size. [vangheem] 0.9.1rc2 - changed added large plone folder view to code since in Plone 4 it is no longer available. fixes [vangheem] 0.9.1rc1 - Update to fancybox 1.3.1–should fix from showing up any longer [vangheem] - added easing and scrolling plugins to fancybox so it’s nicer now. [vangheem] 0.9.0b1 - use getAllowedSizes from plone.app.imaging.utils instead [vangheem] - fixed plone.app.imaging incompatibility with patches it uses–fixes [vangheem] - fixed fancy box not showing correctly occasionally [vangheem] - restructured display type code to be a little more compatible with templating. It was a little messy the way it was done before so it is now slightly less customizable for the sake of being more compatible and modular. If any gallery types were defined in the old fashion, they may no longer work without slight modification. That is what this is now tagged as a 0.9 release. [vangheem] - compatible with cmf.pt now–Chameleon. [vangheem] - gallery portlet now forces the height of the image so it doesn’t flicker if for some reason the image hasn’t finished loading yet. 
fixes [vangheem] - added plone.app.z3cform as dependency profile fixes [vangheem] - fixed css on gallery portlet to show title properly [vangheem] 0.8.2b4 ~ March 16 - annoying extra release since the previous one included extra “._” po files… [vangheem] 0.8.2b3 ~ March 10 - fixed gallery portlet js to work with Plone 4 [vangheem] - fixed max-width screwing up slideshow transition with some css [vangheem] - gallery is now plone.app.imaging aware, reflecting sizes specified there. [vangheem] - fixed translations not being added correctly [vangheem] 0.8.2b2 ~ February 10, 2010 - fixed page template traversal issue with plone 4 described here [vangheem] 0.8.2b1 ~ February 4, 2010 - Make compatible with Plone 4–fixes page template rendering and css issues [vangheem] 0.8.1b2 ~ January 27, 2010 - Adapting BasicImageInformationRetriever to IObjectManager does not work in Plone4 anymore. Use IBaseFolder instead which is also generic for ATFolder and ATBTreeFolder. [thet] - Added Italian translation [Mirto Silvio Busico] - Added z3c.autoinclude support–no more zcml entry in buildout on newer versions of plone. [vangheem] - Override button apply method instead of __call__ method to set the status for user warning and to set setting changes. This fixes issue with newer version of plone.z3cform not showing updated status message. [vangheem] - Added hashlib to list of install requires for picasa since some versions of gdata fail without it. [vangheem] 0.8.1b1 ~ December 17, 2009 - add german translation [jensens] - add extra requires to setup.py: Now one can set as dependency “collective.plonetruegallery[flickr], collective.plonetruegallery[picasa]” or collective.plonetruegallery[all] – [jensens] - removed logging statement in porltet js code [vangheem] - fixed bug with upgrading older versions during version check [vangheem] - fixed unicode decode error with picasa albums that have none-standard letters in them. 
[vangheem] 0.8.1a3 ~ December 3, 2009 - added gallery portlet [vangheem] 0.8a2 - fixed slideshow gallery css so that the green bar does not get covered up by the gallery when logged in. [vangheem] - fixed sub-gallery css issues [vangheem] - fixed ordering of images in gallery–now gallery images reorder when they are reordered in the container. [vangheem] - links now point to the view view of images if a user is logged in [vangheem] - fixed ?start_image parameter to work with batching. [vangheem] - highslide and fancybox slideshow will start slideshow automatically only if number of images fits in one batch page [do3cc] - highslide image slides now have a title that consists of the image title and link to the image [do3cc] 0.8a1 - removed Gallery content type - allows you to display gallery for Folder, Large Folder, and Collections - moved to using plone.app.z3cforms - remove event subscriptions and do not cook basic galleries - removed classic display type–don’t feel like maintaining anymore.. - added fancybox and highslide display types - slideshow 2 now pans without zooming in on image and distorting it - added more styling to slideshow 2’s type - fixed issue with slideshow 2 gallery type where image would show up a little blurry because of image scaling… - updated flickr size settings - no longer support private picasa albums(I don’t want to store passwords obviously…) 0.7.1 - fixed tests - added finnish translations[Ilja Everila] - added translatable sub-images 0.7rc1 - added “Refresh Gallery” button in case you change a gallery and need to re-cook the gallery images before it automatically does it for you. Especially useful for reordering of images in a basic gallery. - added go to image support via url like /url/to/gallery?start_image=theTitle. Not exactly perfect, but should work most of the time. No other way to know what image it is since I don’t keep ids on flickr and picasa galleries. 
make sure to url encode the title though..4 - fixed dependencies to be more flexible 0.6b2.3 - fixed really dumb basic image sizing problem 0.6b2.2 - added French translation(thanks to Sylvain Boureliou) - Author: Nathan Van Gheem - Keywords: gallery plone slideshow photo photos image images picasa flickr highslide nivoslider nivogallery pikachoose fancybox supersized quicksandgalleriffic galleria - Categories - Development Status :: 5 - Production/Stable - Environment :: Web Environment - Framework :: Plone - Framework :: Plone :: 4.1 - Framework :: Plone :: 4.2 - Framework :: Plone :: 4.3 - Framework :: Plone :: 5.0 - License :: OSI Approved :: GNU General Public License (GPL) - Operating System :: OS Independent - Programming Language :: Python - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Topic :: Software Development :: Libraries :: Python Modules - Package Index Owner: collective, tkimnguyen, vangheem, frisi, espen, witsch, bsuttor - DOAP record: collective.plonetruegallery-3.4.8.xml
Does anyone know if you can make a ray sensor search for more than just one property or material? I am trying to set up a camera/character system like in the Resident Evil 2 remake. Any help would be appreciated!

Have multiple ray sensors, each searching for one property, connected to a single "OR" controller.

Better to use Python for it. Example if needed:

def raycast(cont):
    own = cont.owner

    start_vec = own
    end_vec = own.worldPosition + own.worldOrientation.col[1]  # y direction
    distance = 10

    objects_to_look_for = ['object1', 'object2', 'object3']  # object names to look for

    ray = own.rayCast(end_vec, start_vec, distance)

    if ray[0]:  # ray hit something
        hit_obj = ray[0]

        if hit_obj.name in objects_to_look_for:
            print('ray hit:', hit_obj)

            if hit_obj.name == 'object1':
                pass  # do something with object1 here
            elif hit_obj.name == 'object2':
                pass  # do something with object2 here
            elif hit_obj.name == 'object3':
                pass  # do something with object3 here

How did you learn all of this? I really want to learn Python but I keep putting it off. Do you have any suggestions for a tutorial series? And is the main Python the same as the BGE scripting?

Self-taught through YouTube, Google, and the occasional question on this forum. Just start somewhere, I would say. In this case you need raycasting, so a simple search like 'blender game engine ray cast python' will uncover a lot of details that you can then read and try out. On YouTube, search for 'blender game engine python tutorials' or something similar and follow those. Python is not hard to learn; it is hard to master, but for the BGE you don't need to master it, the basics are just fine. Not every line you write is identical between plain Python and Blender, because in Blender you need to get hold of objects from within the engine (just like any other program you would script with Python), but yeah, Python is Python. Learn Python in the BGE and you get the basics to use Python for anything you like, really.
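The snippet above filters by object *name*, while the original question asked about multiple *properties*. A minimal sketch of a property-based variant: in the BGE a `KX_GameObject` supports `in` for property lookup, so the same membership test works on the hit object directly. The property names and the `pick_matching_props` helper are my own illustrative inventions, not part of the BGE API, and the matching logic is kept as a plain function so it can be tested outside Blender:

```python
# Hypothetical property names the ray should react to.
WANTED_PROPS = ['camera_zone_1', 'camera_zone_2', 'trigger']

def pick_matching_props(obj_props, wanted):
    """Return the wanted properties present on the hit object.

    `obj_props` can be anything supporting `in` -- a KX_GameObject
    works in-game, and a plain list of property names works for testing.
    """
    return [p for p in wanted if p in obj_props]

def on_ray(cont):
    # Logic-brick controller callback, same shape as the post above.
    own = cont.owner
    end_vec = own.worldPosition + own.worldOrientation.col[1]  # y direction
    hit_obj, point, normal = own.rayCast(end_vec, own, 10)
    if hit_obj:
        for prop in pick_matching_props(hit_obj, WANTED_PROPS):
            print('ray hit object carrying property:', prop)
```

With one ray cast you can then branch on whichever of the properties the hit object carries, instead of stacking several ray sensors.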
YouTube sadly doesn't have many BGE Python tutorials.
- Here has some great BGE Python tutorials, but the creator isn't active anymore, sadly.
- Some more good tutorials. They're all in German, sadly (use YouTube's translated subtitles).
- YouTube user Arsenal RSL had really great, simple tutorials, but he removed all but 2 of his videos for some reason (I think YouTube banned his account or something).
Flask's Latest Rival in Data Science

Streamlit Is The Game-Changing Python Library That We've Been Waiting For

Developing a user interface is not easy. I've always been a mathematician, and for me coding was a functional tool to solve an equation and to create a model, rather than a way to provide the user with an experience. I'm not artsy, and nor am I actually that bothered by it. As a result, my projects always remained, well, projects. It's a bit of a problem.

As one's own journey goes, I often need to do a task that's outside of my domain: usually to deploy code in a manner by which other people can use it. Not even to launch the next 'big thing', but just to have my mother, sister or father use a cool little app I built that recommends new places to eat. The answer always required more effort than I desired to put into it, which used to be:

1. Develop a novel solution (my speciality) [This I can do]
2. Design a website using a variety of frameworks that require months of education [This I cannot do]
3. Deploy code to a server on some web domain [This I can do]

So (2) is where I always lacked motivation, because in reality it wasn't my speciality. Even if I did find the motivation to deploy some code, the aesthetics of my work would render it unusable. The problem with using a framework like Flask is that it just requires way too much from the individual. Check this blog here: clearly it's ridiculous to have to navigate all that just to build a small, nifty website that can deploy some code. It would literally take ages. And that's why Streamlit is here.

On the comparison between Flask and Streamlit: a reader noted that Flask has capabilities in excess of Streamlit. I appreciate this point and would encourage users to look at their use cases and use the right technology. For users who require a tool to deploy models for their team or clients, Streamlit is very efficient; however, for users who require more advanced solutions, Flask is probably better.
Competitors of Streamlit would include Bokeh and Dash.

Streamlit

This is where Streamlit comes into its own, and why they just raised $6m to get the job done. They created a library off the back of an existing Python framework that allows users to deploy functional code. It is kind of similar to how TensorFlow works: Streamlit adds a new widget to its UI for each corresponding function called in the Python script. Take, for example, the following 6 lines of code, where I append a "title" method, a "write" method, a "selectbox" method and another "write" method (from Streamlit):

import streamlit as st
st.title('Hello World')
st.write('Pick an option')
keys = ['Normal', 'Uniform']
dist_key = st.selectbox('Which Distribution do you want?', keys)
st.write('You have chosen {}'.format(dist_key))

Save that into a file called "test.py", then run "streamlit run test.py" and it produces the following in your browser on /:

Code above produced this. Fantastic how efficient Streamlit's library makes UI programming.
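To show the selectbox actually driving output, here is a slightly extended sketch of the same idea, assuming NumPy is available. The `draw_samples` and `render` names are mine, not Streamlit's; the Streamlit calls themselves (`selectbox`, `line_chart`, `write`) are standard API. In a real script the body of `render` would sit at module level, because `streamlit run` re-executes the whole file every time a widget changes:

```python
import numpy as np

def draw_samples(dist_key, n=500, seed=0):
    """Draw n samples from the distribution picked in the selectbox."""
    rng = np.random.default_rng(seed)
    if dist_key == 'Normal':
        return rng.normal(size=n)
    return rng.uniform(size=n)

def render():
    # In a real app these lines live at module level in test.py;
    # each widget interaction reruns the script with the new state.
    import streamlit as st
    dist_key = st.selectbox('Which Distribution do you want?',
                            ['Normal', 'Uniform'])
    samples = draw_samples(dist_key)
    st.line_chart(samples)  # chart redraws whenever the choice changes
    st.write('Sample mean: {:.3f}'.format(samples.mean()))
```

The rerun-on-interaction model is the design choice that makes this so short: there is no callback wiring, the script is the UI.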
OpenAerialMap/Meeting Feb 19, 2015

Feb 19 19:00:01 Cristiano: Good morning, we're are about to start the OAM weekly meeting
Feb 19 19:00:10 Cristiano: (and good evening)
Feb 19 19:00:10 BlakeGirardot: Good morning
Feb 19 19:00:47 Cristiano: Here's the link to the agenda:
Feb 19 19:00:58 smathermather: Hi.
Feb 19 19:01:13 Cristiano: Hi smathermather!
Feb 19 19:01:30 Cristiano: Please add anything else that you would like to talk about
Feb 19 19:02:26 dodobas: I'm semi-AFK... will keep an eye on the chat... might not respond immediately
Feb 19 19:02:31 smathermather: Just reading through now.
Feb 19 19:03:24 Cristiano: Hi jj0hns0n!
Feb 19 19:03:32 jj0hns0n: hi Cristiano
Feb 19 19:03:38 Cristiano: Is wildintellect here?
Feb 19 19:04:50 Cristiano: Well, let's start. First, here's the link to the published tech challenge:
Feb 19 19:05:47 Cristiano: Please help to spread it. There's a two week period for proposals
Feb 19 19:06:21 BlakeGirardot: Did we share it on the hot and osm email lists?
Feb 19 19:06:37 smathermather: Will share.
Feb 19 19:06:55 Cristiano: Not yet, please go ahead Blake
Feb 19 19:07:00 smathermather: (in response to christiano
Feb 19 19:07:01 BlakeGirardot: Ok, will do.
Feb 19 19:07:19 Cristiano: I asked OSGeo to post on their @jobs list, but haven't received approval yet
Feb 19 19:07:41 Cristiano: but I guess we can post on @discuss if the job list is not used anymore
Feb 19 19:08:02 Cristiano: (there hasn't been any post since October)
Feb 19 19:08:52 Cristiano: Anyway, any other community of open source devs that may be interested, please circulate, thanks
Feb 19 19:09:59 Cristiano: jj0hns0n: I'm sure you'll forward to Geonode and Django's :-)
Feb 19 19:10:21 jj0hns0n: most of the geonode folks have already looked at it, not sure if there is a geodjango specific list anymore?
Feb 19 19:10:38 jj0hns0n: I guess there is, not that active
Feb 19 19:10:52 jj0hns0n: will alert the eoxserver guys
Feb 19 19:11:03 Cristiano: Cool, looking forward to their proposals!
Feb 19 19:11:49 jj0hns0n: Im curious if its intended to for only people who want to do the work to make proposals?
Feb 19 19:12:42 Cristiano: No, of course anyone's proposal is welcome. Here, through the list or GitHub
Feb 19 19:12:57 jj0hns0n: ok, sounds good
Feb 19 19:13:50 Cristiano: I would also like to get your input on how to evaluate proposals
Feb 19 19:14:16 wildintellect: technical and practical
Feb 19 19:14:33 Cristiano: I would like to make that process as open and collaborative as possible, but making sure to preserve sensitive information
Feb 19 19:14:39 jj0hns0n: yeah, 2 separate things as wildintellect mentions
Feb 19 19:14:47 wildintellect: just assign them some score on a set scale for each property
Feb 19 19:15:00 wildintellect: technical - do they have the background and skills
Feb 19 19:15:20 wildintellect: practical - work history, prev community experience, likelihood of completion
Feb 19 19:15:47 Cristiano: yes, I was not talking about the evaluation process per se, but how to form the evaluation group
Feb 19 19:15:49 wildintellect: you can put the proposals with Names redacted into a a google doc folder
Feb 19 19:15:56 jj0hns0n: and I think it makes sense to consider the techical aspects of the proposal separately
Feb 19 19:16:19 wildintellect: then it's up to you and HOT to pick a group to score each section
Feb 19 19:16:21 jj0hns0n: is it a sound implementation, is it likely to be sustainable over time
Feb 19 19:16:51 jj0hns0n: fwiw, I have used for a few projects to evaluate proposals
Feb 19 19:17:02 Cristiano: I would like the process to be participatory, so I will think of the best method to involve you guys
Feb 19 19:17:03 wildintellect: so I would suggest Cristiano you pick or invite people to score each section
Feb 19 19:17:32 Cristiano: That's a good idea, I will check it out
Feb 19 19:17:40 jj0hns0n: Im sure they would give us a free subscription if we want
Feb 19 19:17:50 jj0hns0n: code for america folks
Feb 19 19:17:58 wildintellect: obviously the people here are a portion of possible scorers
Feb 19 19:18:14 jj0hns0n: I guess the question is how many proposals do you expect?
Feb 19 19:18:28 wildintellect: with only 2 weeks to submit there might not be many
Feb 19 19:18:38 jj0hns0n: I realize maybe its a bit too late in the game, but its good to do a kind of "Expression of Interest" first
Feb 19 19:19:27 Cristiano: Yeah, unfortunately we needed to get things rolling, so hopefully we get enough good proposals in two weeks
Feb 19 19:19:49 Cristiano: and we may decide to extend it if we need to
Feb 19 19:20:04 jj0hns0n: yeah, makes sense to wait and see what you get
Feb 19 19:20:17 Cristiano: The application should not take long
Feb 19 19:21:05 Cristiano: 1 page for describing the proposal, 1 page resume and 1 page cover letter
Feb 19 19:21:49 Cristiano: Anyway, now unless there's any other comment about the tech challenge, should we start brainstorming the API?
Feb 19 19:23:28 Cristiano: wildintellect: please go ahead and edit the agenda with your proposal
Feb 19 19:23:57 Cristiano: or should we have a separate doc for dumping and sketching ideas?
Feb 19 19:27:06 Cristiano: wildintellect: I'm looking your proposed design in the agenda how do you define "node"?
Feb 19 19:27:07 wildintellect: its fine where it is
Feb 19 19:27:20 wildintellect: Node is a discrete computer
Feb 19 19:27:32 jj0hns0n: I would say single container whether virtualized or bare metal
Feb 19 19:27:38 wildintellect: yes container
Feb 19 19:27:39 Cristiano: in my understanding there could be catalog and processing on the same computer
Feb 19 19:27:44 wildintellect: yes
Feb 19 19:27:49 Cristiano: (for the portable options)
Feb 19 19:27:52 wildintellect: but they are independent
Feb 19 19:27:55 wildintellect: of each other
Feb 19 19:27:58 Cristiano: OK cool
Feb 19 19:28:25 Cristiano: because I feel node may be confused with instance
Feb 19 19:29:19 jj0hns0n: wildintellect "Distributed tile processing" you mean tile generation right?
Feb 19 19:29:39 wildintellect: not sure I just moved it
Feb 19 19:30:07 wildintellect: but in the old OAM there was some processing to convert images into most usable format
Feb 19 19:30:09 jj0hns0n: well, the question is more whether you would want to do actual imagery processing here or simply tile generation
Feb 19 19:30:17 wildintellect: not necessarily tile seed
Feb 19 19:30:19 jj0hns0n: ok, that would be image pre-processing
Feb 19 19:30:26 jj0hns0n: not done on the tiles but on raw imagery
Feb 19 19:30:38 Cristiano: I would not include other image processing other than reprokjecting and tiling
Feb 19 19:30:40 jj0hns0n: reprojection, stretches, clipping etc
Feb 19 19:31:06 jj0hns0n: so, Im going to edit this
Feb 19 19:31:07 wildintellect: right, the classic example is the conversion of from RGB to YC...
Feb 19 19:31:21 wildintellect: to get huge file size savings
Feb 19 19:31:27 jj0hns0n: hows that?
Feb 19 19:31:44 wildintellect: you familiar with the telascience NAIP 2009
Feb 19 19:31:46 jj0hns0n: there you go
Feb 19 19:31:48 jj0hns0n: yes :)
Feb 19 19:31:55 wildintellect: that processes and stuff like it
Feb 19 19:31:56 jj0hns0n: you should ask winkey all the stuff he had to do
Feb 19 19:32:04 jj0hns0n: yeah, I think I listed the main ones now
Feb 19 19:32:11 wildintellect: I plan to since new NAIP is out
Feb 19 19:32:25 Cristiano: any process that is not fully automatic should not be part of the processing node
Feb 19 19:32:38 Cristiano: it should be done before by the submitter
Feb 19 19:32:49 jj0hns0n: nah no way
Feb 19 19:33:02 jj0hns0n: you cannot ask them to give you files in the optimum format for tiling
Feb 19 19:33:05 wildintellect: Cristiano, we're talking about optimizations
Feb 19 19:33:07 jj0hns0n: they will give you jp2k or mrsid
Feb 19 19:33:07 smathermather: Any need / intent to handle feathering / seam matching / histogram matching... ?
Feb 19 19:33:08 jj0hns0n: et
Feb 19 19:33:09 Cristiano: ideally we only ingest ready-imagery (or almost ready) and just compress it, convert it, reproj and tile it
Feb 19 19:33:19 smathermather: (between datasets)
Feb 19 19:33:30 jj0hns0n: ready imagery can be a huge jp2k that is in a strange projection
Feb 19 19:33:45 jj0hns0n: and you still have to convert that to the most optimal format for quickly rendering tiles
Feb 19 19:33:48 Cristiano: that's fine, then it's all automatic processing to make it ready
Feb 19 19:34:04 jj0hns0n: yeah, it more or less goes in the order we have there under Uploads
Feb 19 19:34:12 Cristiano: right. I'm saying, we should not include functions that require user intervention
Feb 19 19:34:17 jj0hns0n: oh no
Feb 19 19:34:32 smathermather: There are good out of the box tools for automatic translation of format and projection -- not for all use cases / uploads, but for a lot.
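The pre-processing chain being discussed here (decompress JP2K/MrSID into a render-friendly GeoTIFF, apply the RGB-to-YCbCr JPEG compression wildintellect mentions for the file size savings, reproject, then tile) can be sketched as a fixed sequence of standard GDAL command-line tools. This is only a sketch of what such a processing node might run: the function composes the commands without executing them, and the file names under `workdir` are placeholders of my own choosing:

```python
def preprocessing_commands(src, workdir, dst_srs='EPSG:3857'):
    """Compose the GDAL command chain a processing node might run."""
    rgb = workdir + '/rgb.tif'
    warped = workdir + '/warped.tif'
    return [
        # decompress e.g. JP2K/MrSID into a tiled GeoTIFF for fast reads,
        # using JPEG-in-TIFF with YCbCr photometric for the size savings
        ['gdal_translate', '-of', 'GTiff', '-co', 'TILED=YES',
         '-co', 'COMPRESS=JPEG', '-co', 'PHOTOMETRIC=YCBCR', src, rgb],
        # reproject onto the grid the tiles will be cut in
        ['gdalwarp', '-t_srs', dst_srs, rgb, warped],
        # build internal overviews so low zoom levels render quickly
        ['gdaladdo', '-r', 'average', warped, '2', '4', '8', '16'],
        # cut the actual tile pyramid
        ['gdal2tiles.py', warped, workdir + '/tiles'],
    ]
```

Nothing here needs user intervention, which is the point made below: the node should be able to take whatever proper orthoimagery is thrown at it and just go.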
Feb 19 19:34:38 jj0hns0n: whether uploading an individual file or indexing a directory, it should just 'go' from there
Feb 19 19:35:07 jj0hns0n: yeah, crschmidt figured out the really most optimal formats for rendering tiles as quickly as possible, gdal can do everything necessary
Feb 19 19:35:45 jj0hns0n: its very inefficient to render off of things like mrsid or jp2k especially if reprojecting
Feb 19 19:37:27 wildintellect: right you'd have to decompress it first
Feb 19 19:37:47 jj0hns0n: yep
Feb 19 19:37:56 jj0hns0n: so all of that stuff goes on in pre-processing
Feb 19 19:38:03 jj0hns0n: but requires no user intervention if done correctly
Feb 19 19:39:02 jj0hns0n: you should make it clear that a tile 'node' could also just be something like S3
Feb 19 19:39:14 smathermather: So if there are standards for upload format that are restrictive, I assume there will be good, easy to use tools to point people to for client side pre-processing... .
Feb 19 19:39:29 jj0hns0n: smathermather I think that would be backward though
Feb 19 19:39:49 jj0hns0n: to put the onus on the imagery submitter, better the processing node is smart and can deal with whatever is thrown at it
Feb 19 19:40:04 jj0hns0n: if its proper orthoimagery, regardless of format, projection etc
Feb 19 19:40:04 smathermather: Oh good -- agreed.
Feb 19 19:40:11 smathermather: Yup.
Feb 19 19:40:15 Cristiano: well, that could be challenging
Feb 19 19:40:20 jj0hns0n: nah, not really
Feb 19 19:40:41 Cristiano: well, we should def suggest formats for upload
Feb 19 19:40:41 wildintellect: of course we'd still make suggestions on easiest things to upload
Feb 19 19:40:42 jj0hns0n: it becomes an iterative thing ... if someone throws something at it that doesnt work, you write a test that demonstrates the failure, fix it and ask them to re-upload
Feb 19 19:41:12 Cristiano: and encourage development of plugins for other tools (e.g. QGIS, Pix4D, ODM, etc)
Feb 19 19:41:26 jj0hns0n: I think we should also be clear in the processing node that you could also just point it at a dropbox, s3 dir, ftp dir etc
Feb 19 19:41:36 wildintellect: Cristiano, sure that's the point of the api
Feb 19 19:41:39 jj0hns0n: and the processing node would slurp that or if possible access it over FTP
Feb 19 19:41:42 jj0hns0n: http I mean :)
Feb 19 19:42:06 jj0hns0n: i.e. with vsicurl
Feb 19 19:42:22 Cristiano: right, or what if we to upload ready-tiles (eg in Dropbox)? What's the best workflow there? Compress for transfer and then deflate on server side?
Feb 19 19:42:47 jj0hns0n: thats on the other side
Feb 19 19:42:56 Cristiano: assuming that the contributor already run gdal2tiles
Feb 19 19:42:58 jj0hns0n: its commonly called "clip and ship" still
Feb 19 19:43:09 jj0hns0n: ah, thats different I guess, I would not encourage that
Feb 19 19:43:19 jj0hns0n: better they give oam the raw imagery then some tiles of unknown origin
Feb 19 19:43:45 jj0hns0n: it would be rare to have someone that had tiles and not the raw imagery (which would be smaller and easier to transport)
Feb 19 19:43:56 Cristiano: Well, If someone already has a QGIS plugin to prepare and tile imager, then they could just send it up to an OAM instance along with a metadata file
Feb 19 19:43:58 jj0hns0n: Im thinking someone saying "here is my dropbox folder with a bunch of raw imagery in it"
Feb 19 19:44:14 jj0hns0n: but by definition the tiles will be much bigger than the raw imagery
Feb 19 19:44:23 wildintellect: I think the QGIS plugin would be an upload to OAM plugin
Feb 19 19:44:24 jj0hns0n: better they send raw imagery
Feb 19 19:44:50 Cristiano: well, yes, that's prob the most common situation. But would be nice if we can offload some of the processing on the client side :)
Feb 19 19:44:53 jj0hns0n: ah, so then in theory you could run the processing node _inside_ qgis and just upload
Feb 19 19:44:55 jj0hns0n: that is better
Feb 19 19:45:05 Cristiano: right
Feb 19 19:45:11 wildintellect: sure that is an option, since anyone can run a processing node
Feb 19 19:45:15 jj0hns0n: but I would not be in favor of accepting a random zip of tiles that someone made
Feb 19 19:45:29 jj0hns0n: well, one more +1 to do this in python then ... so it runs as a qgis plugin :)
Feb 19 19:45:42 smathermather: hehe.
Feb 19 19:46:22 wildintellect: if someone does use a processing node locally then there are only 2 steps after that - 1. push them to a tile node, 2. notify the Catalog once they are on the tile node
Feb 19 19:46:24 jj0hns0n: so, does the use case I mention make sense though? "here is a remote directory of raw imagery, please download it and make tiles from it"
Feb 19 19:46:30 jj0hns0n: correct wildintellect
Feb 19 19:46:31 wildintellect: yes
Feb 19 19:46:46 jj0hns0n: and again "push to tile node" could just be to S3
Feb 19 19:46:51 wildintellect: I expect plenty of people will just have a dir of images somewhere
Feb 19 19:46:57 jj0hns0n: and then you just let the catalog node know "they are here"
Feb 19 19:46:58 wildintellect: ie thats how we pull NAIP
Feb 19 19:47:02 jj0hns0n: right
Feb 19 19:47:09 jj0hns0n: thats the dream to just point it at NAIP and say "go"
Feb 19 19:47:50 Cristiano: Or the L8 on S3 CHolmes just mentioned :)
Feb 19 19:48:03 jj0hns0n: yeah, but they are going to preprocess all that
Feb 19 19:48:20 jj0hns0n: we should just be able to add that to the catalog someday
Feb 19 19:48:29 smathermather: So does OAM ever optionally return raw imagery in cases where only tiles have been uploaded?
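wildintellect's two remaining steps after local processing (push the tiles to a tile node, then notify the catalog they exist) could look something like the sketch below. The OAM API did not exist yet at the time of this meeting, so every URL and field name here is hypothetical; the only real machinery is stdlib `urllib`, and the request is composed but never sent:

```python
import json
from urllib import request

def notify_catalog(catalog_url, tile_url, bbox, source_url):
    """Build a POST telling the catalog node where a tile set lives.

    All field names are illustrative; a real OAM catalog would define
    its own schema. `source_url` links back to the raw imagery, which
    the discussion agrees the catalog should record but not serve.
    """
    record = {
        'tiles': tile_url,    # e.g. an S3 bucket serving z/x/y tiles
        'bbox': bbox,         # [minx, miny, maxx, maxy] in lon/lat
        'source': source_url, # link back to the raw imagery
    }
    return request.Request(
        catalog_url,
        data=json.dumps(record).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
    )
```

Step 1 ("push to a tile node") needs no API at all when the tile node is plain S3 or any static file host, which is exactly the point made above.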
Feb 19 19:48:42 jj0hns0n: I would say that is a totally secondary concern
Feb 19 19:49:03 jj0hns0n: I think by and large it is not the goal of OAM to ever return 'raw' imagery
Feb 19 19:49:13 jj0hns0n: right?
Feb 19 19:49:19 Cristiano: It would be nice (in the future)
Feb 19 19:49:43 Cristiano: so that users can do stuff with the imagery other than looking at it or tracing
Feb 19 19:50:02 jj0hns0n: ok, but if you _really_ want to do that kind of thing, you should just look at OMAR and OSSIM
Feb 19 19:50:18 jj0hns0n: OAM should be primarily about tiles IMO
Feb 19 19:50:27 Cristiano: I guess so :)
Feb 19 19:50:35 jj0hns0n: omar/ossim or eoxserver
Feb 19 19:50:56 jj0hns0n: that is a very different use case right?
Feb 19 19:51:25 wildintellect: L8 would still be we just want the RGB, and to process that into tiles
Feb 19 19:51:30 jj0hns0n: i.e. having access to the raw imagery is important to an analyst or scientist, and not to the ordinary user who just wants tiles
Feb 19 19:51:38 Cristiano: Yes, but it would be nice if there's a list some metadata entry so that people can eventually go find the raw data if needed
Feb 19 19:51:39 smathermather: Can those tools ingest a set of tiles and do processing on them?
Feb 19 19:51:57 wildintellect: Cristiano, sure the Catalog should say where the data came from Feb 19 19:52:06 jj0hns0n: yeah, I would make sure that whenever possible the link to the raw imagery should be stored in the catalog, but it should not be the goal of OAM to actually serve that stuff up Feb 19 19:52:18 jj0hns0n: yeah, both can Feb 19 19:52:58 tomkralidis: catalogue is simply yellow pages w/ links Feb 19 19:53:01 jj0hns0n: omar/ossim in particular is a very robust automated image processing toolchain and catalog Feb 19 19:53:15 Cristiano: ideally the "central" OAM should be a place where people can find, view and use tiles, but also just find/link free imagery that is available anywhere in the world Feb 19 19:53:21 jj0hns0n: right, tomkralidis what I mean is that OAM should link back to the raw imagery, but not be concerned with trying to serve it up Feb 19 19:53:33 tomkralidis: exactly Feb 19 19:53:33 jj0hns0n: i.e. OAM is not a WCS Feb 19 19:53:49 jj0hns0n: if you want a WCS, use eoxserver or omar or whatever Feb 19 19:54:18 Cristiano: Right, so that brings us to the catalog and metadata. How much PyCSW can we just use straight off? Feb 19 19:54:27 jj0hns0n: 100% of it Feb 19 19:54:28 jj0hns0n: :) Feb 19 19:55:09 Cristiano: I'm not 100% familiar with it, so how do we plug it in and how do we use its API Feb 19 19:55:16 Cristiano: ? Feb 19 19:57:07 dodobas: pycws sohuld be just another 'endpoint' Feb 19 19:57:15 tomkralidis: Cristiano: pycsw's a Python library that you deploy standalone CGI or mod_wsgi (latter is better), or embed into your own framework like Django, Flask, etc. Feb 19 19:57:20 tomkralidis: like GeoNode did Feb 19 19:57:22 jj0hns0n: yeah, it has to have a datastore too Feb 19 19:57:33 jj0hns0n: but then yeah, you just have to stick http in front of it somehow Feb 19 19:57:35 dodobas: that will 'publish' OAM metadata... Feb 19 19:57:36 jj0hns0n: various ways to do that Feb 19 19:57:45 tomkralidis: jj0hns0n: agree. 
Either u can use pycsw's datastore OR it can bind to an existing datastore.
Feb 19 19:57:46 dodobas: no need to integrate anything
Feb 19 19:57:49 jj0hns0n: right
Feb 19 19:57:59 jj0hns0n: in the smallest possible case it can just be a sqlite db right?
Feb 19 19:58:10 tomkralidis: jj0hns0n: yup
Feb 19 19:58:10 jj0hns0n: and uses sqlalchemy in that case? I cant remember
Feb 19 19:58:27 tomkralidis: yes, sqlalchemy, lxml, OWSLib, geolinks are the deps.
Feb 19 19:58:56 jj0hns0n: so yeah, its pretty straightforward IMO
Feb 19 19:59:05 Cristiano: PyCSW handles metadata and other catalog harvesting right? How do you search all that and use it through a Web UI? Is it straightforward or is there some other middleware?
Feb 19 19:59:05 jj0hns0n: tomkralidis is eoxserver using pycsw now too?
Feb 19 19:59:48 Cristiano: I'm saying, can we just expose PyCSW API for all catalog functions?
Feb 19 19:59:48 tomkralidis: Cristiano: yes, harvesting and transactions via HTTP. OWSLib is typically the Python based CSW client used.
Feb 19 20:00:00 jj0hns0n: Cristiano the interface is a csw which can be queried, but you have to put some UI on it, its just an http endpoint that returns json or xml
Feb 19 20:00:27 jj0hns0n: tomkralidis to do harvesting you have to spin a new process in a cron or some queue right?
Feb 19 20:00:39 tomkralidis: for webui you can put simple js together to do the ops
Feb 19 20:01:05 Cristiano: OK, cool. So there's no need of things like elasticsearch to handle searches right?
Feb 19 20:01:15 tomkralidis: jj0hns0n: to do harvesting, you harvest a resource, and set a time interval by which the harvester fetches/refreshes
Feb 19 20:01:26 tomkralidis: Cristiano: IMHO no.
Feb 19 20:01:50 jj0hns0n: tomkralidis the question there is how well does that scale?
Feb 19 20:02:10 tomkralidis: ask data.gov :)
Feb 19 20:02:10 Cristiano: Right, that was my next question :)
Feb 19 20:02:15 jj0hns0n: tomkralidis how do you daemonize the harvester or is it in a cron
Feb 19 20:02:27 jj0hns0n: in data.gov the db is in postgres?
Feb 19 20:02:41 jj0hns0n: are you using postgres full text search or something similar?
Feb 19 20:02:42 tomkralidis: jj0hns0n: cron
Feb 19 20:03:01 jj0hns0n: tomkralidis ok, but that could also be done with something like django-celery or any other queue right?
Feb 19 20:03:03 tomkralidis: jj0hns0n: data.gov is PostgreSQL + PostGIS + PostgreSQL fts, yes.
Feb 19 20:03:06 tomkralidis: jj0hns0n: sure
Feb 19 20:03:11 jj0hns0n: ok, so yeah, fts then for sure
Feb 19 20:03:25 jj0hns0n: so yeah, I would say that obviates the need for elasticsearch et al
Feb 19 20:03:31 Cristiano: and is there a way to sync two or more PyCSW on different OAM instances?
Feb 19 20:03:46 tomkralidis: Cristiano: harvesting
Feb 19 20:04:00 Cristiano: so, they harvest each other?
Feb 19 20:04:08 jj0hns0n: yeah, its got to be the same methodology
Feb 19 20:04:19 Cristiano: I thought harvesting was one way
Feb 19 20:04:32 jj0hns0n: tomkralidis how is it now handling a distributed search across several catalogs that may not ever be in sync?
Feb 19 20:05:34 tomkralidis: distributed search can be sync or async, it's a CSW thing
Feb 19 20:06:18 jj0hns0n: yeah, I guess the real question in my mind is can we do _everything_ within the formal csw protocol or is there something else necessary to make it easier for mere mortals to understand
Feb 19 20:06:26 jj0hns0n: in geonode at least, we have a separate more simple API
Feb 19 20:06:36 jj0hns0n: using tastypie
Feb 19 20:07:03 tomkralidis: IMHO it can, but people have disagreed with me on that :)
Feb 19 20:08:01 jj0hns0n: this is much easier to work with than csw :)
Feb 19 20:08:14 Cristiano: And the OAM metadata schema will also be somehow simpler than OGC EO
Feb 19 20:08:16 tomkralidis: jj0hns0n: FYI pycsw supports OpenSearch. Which is dead easy.
Feb 19 20:08:37 jj0hns0n: yeah, I guess it boils down to who is going to consume the API
Feb 19 20:09:00 tomkralidis: IMHO you want a cross cutting offering. for the specialists/GIS crowd CSW, for mass market OpenSearch.
Feb 19 20:09:39 jj0hns0n: yep
Feb 19 20:09:48 jj0hns0n: in geonodes case we use this other api to drive the UI in a lot of places
Feb 19 20:09:57 jj0hns0n: but it all comes from the same data source
Feb 19 20:10:41 jj0hns0n: thats the real key is binding whatever apis you are going to provide to the same datasource and not keep multiple copies of the same catalog data on the same node/instance whatever
Feb 19 20:10:46 tomkralidis: true. Single data source is important. And so is single search design pattern.
You don't want someone to use CSW with diff search results than, say UI
Feb 19 20:10:59 jj0hns0n: I think we have found that not to be a real problem so far
Feb 19 20:11:00 Cristiano: The set of metadata to index will be very small compared to other catalogs with full text searches and complex spatial geometries
Feb 19 20:11:41 jj0hns0n: yeah, should be
Feb 19 20:11:52 tomkralidis: I think pycsw + sqlite3 covers the light use case
Feb 19 20:12:09 Cristiano: the only spatial data in the database will be the dataset footprint, either BBOX or polygon coverage
Feb 19 20:12:42 tomkralidis: so the other part here is the metadata model
Feb 19 20:12:59 jj0hns0n: yeah, but should we not just follow ISO completely?
Feb 19 20:13:31 nhv: maybe BBOX, and CRS
Feb 19 20:13:37 Cristiano: I would at least do a subset. And then extend with specific OAM namespaces
Feb 19 20:14:18 Cristiano: Here's the initial draft:
Feb 19 20:14:42 jj0hns0n: you should try to shove everything into an existing formal standard as much as possible
Feb 19 20:15:07 tomkralidis: Cristiano: the OAM namespaces: need to be aware that extending the metadata model with 'local' elements is fine, but then that affects queryables by standards based things like CSW.
Feb 19 20:15:21 tomkralidis: agree w/ jj0hns0n on leveraging an existing standard
Feb 19 20:15:31 jj0hns0n: ebRIM would be pushing things too far IMO
Feb 19 20:15:45 tomkralidis: that's like a sledgehammer trying to crack a nut
Feb 19 20:15:48 Cristiano: Yes, that was more an abstract level list. It would be nice to fit it to a standard :)
Feb 19 20:15:58 tomkralidis: Dublin Core is a good start.
Feb 19 20:16:13 jj0hns0n: yeah, but just going ISO is the best right?
Feb 19 20:16:19 jj0hns0n: I mean thats the tack we take in geonode for sure
Feb 19 20:16:21 jj0hns0n: sure
Feb 19 20:17:05 tomkralidis: jj0hns0n: vanilla ISO is best/safe, albeit a bit more complex than DC.
Feb 19 20:17:24 tomkralidis: like we do in GeoNode yes
Feb 19 20:17:29 jj0hns0n: yeah, but not sure you can cram everything we need in DC right?
Feb 19 20:18:15 jj0hns0n: but -0 on doing anything that extends an existing profile unless there is a very very good reason for doing so
Feb 19 20:18:40 tomkralidis: agree. vanilla ISO like we do in GeoNode would yield better results.
Feb 19 20:19:03 tomkralidis: -1 on extending, push into ISO 19115/19139 proper
Feb 19 20:19:14 jj0hns0n: yep, I would not be -1 but certainly -0
Feb 19 20:19:21 tomkralidis: based on, I don't see any gaps in vanilla ISO
Feb 19 20:19:27 jj0hns0n: I didnt either
Feb 19 20:20:12 jj0hns0n: then the real question in my mind is do you also stick another API or do everything through CSW and OpenSearch
Feb 19 20:20:29 jj0hns0n: but I think that can be thought through when building a UI and not before
Feb 19 20:20:31 tomkralidis: how important are facets here?
Feb 19 20:20:39 jj0hns0n: could potentially be very important
Feb 19 20:20:41 tomkralidis: jj0hns0n: agreed, true.
Feb 19 20:21:48 Cristiano: So, how to include e.g. datasource (raw files, WMS, TMS, MBTiles) would be defined when building the UI?
Feb 19 20:22:11 jj0hns0n: not sure I understand that question
Feb 19 20:22:49 Cristiano: or is that something that you can already define in vanilla ISO to start?
Feb 19 20:23:19 jj0hns0n: you mean how would you reference those kinds of attributes in vanilla iso?
Feb 19 20:23:24 jj0hns0n: I think thats totally doable
Feb 19 20:23:26 Cristiano: I'm just wondering if we can define the complete metadata schema before implementing the UI and the other components
Feb 19 20:23:35 Cristiano: OK
Feb 19 20:23:50 jj0hns0n: tomkralidis may get irritated at me for saying this ;) but some people are now shoving a json dict into something like a supplemental_information field
Feb 19 20:23:59 jj0hns0n: and then you can do what you like to 'extend' if you need something for the UI
Feb 19 20:24:04 jj0hns0n: without breaking ISO
Feb 19 20:26:14 nhv: mumbles something about if we also consider video STANAG 4609 maybe worth investigating
Feb 19 20:25:29 jj0hns0n: nhv wouldnt we just go the ossim/omar route if we wanted to do that?
Feb 19 20:25:30 Cristiano: That's probably OAM 2 :)
Feb 19 20:25:43 nhv: that is the OMAR route :-)
Feb 19 20:25:51 nhv: or part of it
Feb 19 20:25:52 Cristiano: live earth video feed?
Feb 19 20:26:14 nhv: yes
Feb 19 20:26:14 jj0hns0n: yeah, or doing something with all the go pro video from every DJI user out there :)
Feb 19 20:27:03 jj0hns0n: that would be something to get STANAG 4609 from a gopro video and a pixhawk log :)
Feb 19 20:27:03 Cristiano: All right, lots of good ideas, thank you guys! We should wrap up
Feb 19 20:27:34 jj0hns0n: I gotta get going too, thanks for leading cris
Feb 19 20:27:37 tomkralidis: jj0hns0n: json in supplemental_information yikes! But yes doable
Feb 19 20:27:48 Cristiano: Please send your comments and ideas to the list or open it in github
Feb 19 20:27:49 jj0hns0n: yeah, better to do that than break ISO :)
Feb 19 20:28:24 Cristiano: Thank you all and see you next week!
Feb 19 20:28:40 BlakeGirardot: Thank you all !
Feb 19 20:28:44 smathermather: thanks!
http://wiki.openstreetmap.org/wiki/OpenAerialMap/Meeting_Feb_19,_2015
I'm trying to set up multiple click functions from one script, then add that script to the various game objects. I am trying to set up a script that checks variable 1 (in this case it's called techPoints in my script). If they have 50, I want to be able to buy a virus and subtract those points. I got it working, but I realized that I had to keep inheriting back to check for the variables that I was using as the currency to purchase the new ones. I thought it might be easier to set it all up in one script, but when I did that nothing seems to work. I added the script to the 2 different game objects, and selected the different variables on each one. Here is my script. Can anyone tell me what I'm doing wrong, or if there is a much better way to do this?

    using UnityEngine;
    using System.Collections;
    using UnityEngine.UI;

    public class BuildVirus : TPB {
        public UnityEngine.UI.Text TechPoints;
        public static float techPoints = 0.00f;
        public float techPointsPerClick = 1;
        public int click;
        public int VirusCost;
        public UnityEngine.UI.Text VirusCount;
        public float virus = 0.00f;
        public int virusPerClick = 1;

        void Update() {
            // VirusCount.text = "Virus Count: " + virus;
            // TechPoints.text = "Tech Points: " + techPoints;
            Debug.Log("techPoints: " + techPoints);
        }

        // Buys one virus when the buy button is clicked, if enough tech points exist.
        public void Clicked() {
            VirusCost = 50;
            if (techPoints >= VirusCost) {
                techPoints -= VirusCost;
                virus += 1;
            }
        }

        // Earns tech points when the tech-point button is clicked.
        public void TechPointsClicked() {
            techPoints += techPointsPerClick;
        }
    }

Follow-up: I was going through my game outline, and realized that it might work best if I just make one script that has all the variables defined, and then any time I make a new one I can have it inherit from that first script, and then have the first one inherit MonoBehaviour. Would that be the best option? Hmm, that seems to still bring up all the options in the editor interface. Not sure, what do I try? lol

Answer: To make variables public so that you can access them in other scripts but don't want them visible in the editor, type this directly above the variable: [HideInInspector]
https://answers.unity.com/questions/938093/can-you-use-multiple-click-functions-with-one-scri.html
You've read the hype surrounding Agile software development. You've persuaded your boss that the business will benefit from adopting it and should bring in some expert consultants to help. Your team has embraced the advantages of frequent releases, iterative development, and daily stand-ups. Everything is going fine.

But as the team grows and the systems mature, you start to run into problems. The codebase is too large for collective ownership. People forget details, and no documentation exists. Developers complain that the build takes too long. Some even check in code before the build completes, and sometimes this breaks it. You start to think that the expensive Agile consultants you hired were armchair generals who knew nothing about being a foot soldier.

This story is all too common. Sadly, some teams then go back to the waterfall approach. Some become "watergile." But a few continue being highly Agile and prosper. I've had the good fortune to work as a developer with one such department in a large telecommunications company. They've been delivering software in an Agile way for nearly a decade, long after the Agile consultants left. This article tells how they do it. I'll give you a picture of the work environment, the processes used, and the challenges — both met and so far unmet. I can't offer an Agile blueprint (no one can), but the lessons learned from my experience can, I hope, help other organizations in their battles to become truly Agile.

Team and system configuration

Teams of database administrators, system administrators, data warehouse administrators, and network administrators are all essential to system operations at the telecoms company. But when it comes to being Agile, it's all about (to quote Steve Ballmer) developers, developers, developers.

Development team composition

All 50 or so developers are based on one floor and work in about 10 teams of 6 or fewer.
With teams this size, there's little chance of miscommunication, and daily stand-ups take no more than 15 minutes. Each team is responsible for writing and managing a handful of small applications that address a discrete area of business. Apart from agreeing on its interfaces with other teams, each team is an autonomous unit that decides its own way of doing things.

Although the applications under development have a clear architecture, we use no architects. All architectural decisions are made by consensus amongst the developers who work on the code face. Each team does, however, have a business analyst. He or she acts as a proxy for the customer, who might not be located in the same city. The business analyst feeds requirements for each iteration to the developers (who are careful not to overcommit). They are the persons the developers see who are closest to what can be described as a manager. In addition to the business analyst, each team has its own dedicated user-acceptance tester. Both the tester and the analyst attend the developer stand-ups.

In addition to the developer stand-ups, we hold a daily stand-up that a representative of each development team attends. Cross-cutting concerns such as expiring certificates and licenses are discussed here.

The architecture

The architecture of all the applications reflects the team composition. All the apps talk to one another via XML. The transfer protocol is sometimes Java™ Message Service (JMS) but more often HTTP. In keeping with autonomous teams, the choice of web server is irrelevant, and large Enterprise Service Bus solutions have so far been avoided. This approach largely works well, as long as some guidelines are followed.

The first guideline is to bake versioning into the messages sent between apps by including it either in the URL or in the XML payload. This way, the server application can fail quickly if it detects it has been given incomplete data.
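The fail-fast half of that guideline can be sketched in a few lines. The sketch below is illustrative only: the VersionCheck class, the order element, the version attribute, and the SUPPORTED_VERSION constant are all invented for the example and are not taken from the team's actual message schemas.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class VersionCheck {

    // Hypothetical constant: the one payload version this server understands.
    static final String SUPPORTED_VERSION = "2";

    // Fail fast: reject a message before any business logic runs if the
    // version attribute is missing or not one we support.
    static void validate(String xmlPayload) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            xmlPayload.getBytes(StandardCharsets.UTF_8)));
            // getAttribute returns "" when the attribute is absent,
            // so a missing version also fails the equality check.
            String version = doc.getDocumentElement().getAttribute("version");
            if (!SUPPORTED_VERSION.equals(version)) {
                throw new IllegalArgumentException(
                        "Unsupported payload version: '" + version + "'");
            }
        } catch (IllegalArgumentException e) {
            throw e;
        } catch (Exception e) {
            throw new IllegalArgumentException("Unparseable payload", e);
        }
    }

    public static void main(String[] args) {
        validate("<order version=\"2\"><id>42</id></order>"); // accepted
        try {
            validate("<order><id>42</id></order>"); // no version attribute
            System.out.println("accepted");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

A client whose payload predates or postdates the supported version is told so immediately, instead of failing obscurely somewhere in the middle of the business logic.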
Another is to give client teams the XML schema (XSD) for the XML payload. This simple gesture elicits surprisingly large amounts of gratitude from the client team, which then feels more comfortable about what it is their code is asking for.

A third is to use the free and open source Yatspec for documentation that runs as part of the test suite (see Resources). If documentation becomes out of date, then the build breaks until it is updated. The documentation Yatspec produces uses human-readable web pages to describe the XML that is passed back and forth for a variety of paths through the system. At the end of every iteration it can be given to the client. It is far more readable than a Web Services Description Language (WSDL) file. I'll discuss Yatspec more later.

Perfect harmony?

With such small, self-organizing teams following sensible guidelines, what can possibly go wrong? Lots, actually.

Intrateam friction

Egoless programming can only truly be achieved in a world of egoless beings. Wherever that world is, it's not Earth. Although conditions can minimize friction between strong personalities, breakdowns will inevitably occur when a sufficiently large number of persons are interacting. One instance I saw was when half a team was adding Spring to the codebase while another was busy removing it. In cases like this, management must be strong and break the deadlock by whatever means are necessary. Sometimes, there's no alternative but to swap people out of a team that is suffering from a civil war.

Incompatible architectures

Teams that make their own architectural decisions leave little room for prima donnas. But architectural decisions that one team makes can impact others. For instance, one team decided to use a NoSQL, in-memory database. Later, they decided that in fact the data must survive a crash. So they asked the team from which they took the data to store it for them.
This was a suboptimal idea for the team providing the data, which had not planned for this work. They felt that it was architecturally repugnant for business concerns to span multiple apps (and that the team consuming the data had indulged in Career Driven Development). The teams turn to an arbitrator when situations like this arise. The arbitrator is a developer who does not sit in any one team; he or she is not necessarily the best developer on the floor but one who is commonly regarded as having good people skills and the ability to hear both sides of an argument impartially.

Release-night blues

Unfortunately, we are currently not leveraging our federated environment of small apps quite as much as we could. On the night of the release, one application that fails to function causes the release of all applications to roll back. This necessity is not simply due to a difference in interfaces between releases. (Because of version numbers in the messages sent between systems, an application can be prepared for that contingency.) Rather, it's due to the possibility that messages might disappear.

Consider the scenario where one successfully deployed application starts up and passes a message to another newly deployed app. The first app then considers its work done. Then, it is noticed that the second application that consumed the message is faulty, so both its deployment and its database are rolled back. The first application knows nothing of the rollback of its collaborator and hence has no interest in trying to resend its message.

The first approach to this problem was to break down the applications into different families. It was hoped that this would at least mean that only a subset of applications would be rolled back if a problem occurred.
For instance, it seemed that the workflow of customer ordering (the client first wants to be connected to our network), provisioning (the client's line needs to be physically connected at the telephone exchange), and service assurance (checking the client's connectivity) were totally separate domains. But it later became clear that a support engineer who wants to know why a new client's telephone line is not working crosses all domains. By necessity, therefore, the apps need to communicate with one another.

A further problem with this superficial analysis is that applications even in the same family didn't necessarily talk to one another. Although the families seemed obvious from a business viewpoint, they failed to capture the dependencies between apps. For example, there was no common work flow in checking a landline and checking a connection from a wireless hotspot. The two applications that handled this flow did not talk to each other, nor did they share common dependencies despite being part of the putative assurance family.

Alas, we have not yet managed to produce a solution to all-or-nothing releases, although we are lobbying for more equipment so we can thoroughly test in a production-like environment. Until we solve this problem, either all applications must be deployed or all must be rolled back. A single application that fails to deploy will then jeopardize the whole release — a very expensive eventuality.

Reinventing the wheel

Although the pain of rewriting applications is reduced when they are small, not all rewrites are painless. Certain cross-cutting concerns keep getting rewritten. An example is an authentication system that affects all apps. Lack of long-term planning has caused it to go through several rewrites that affect almost all applications. Unfortunately, we have not yet solved this problem.

The folly of crowds

It is still perfectly possible to make architectural mistakes even when two teams agree on an approach.
A case in point is when a team short of work agreed to take on common functionality from an overworked team. The idea was for both teams to work on a shared library. But this quickly led to one team inadvertently breaking the build of the other as they fixated on their own build monitors. The other team's build was a strange land they did not wish to explore. In retrospect, the first team should have offered some sort of XML/RPC service to the second. Because the first team was overworked, they could have taken members from the second. This painful experience is still ongoing after six months.

Agile/waterfall impedance mismatch

The whole floor operates on iterations that last two weeks. On iterations that have odd numbers, the apps are promoted to the environment just before production but not into production itself. This gives the system administrators and database administrators a dummy run. On iterations that have even numbers, the apps go live. All apps are released on the same night, ensuring that they all work together.

The problem is that there are teams and systems outside the department that do not follow the same cycle or perhaps don't even use the iterative releases of Agile at all. They might belong to different departments or even different companies. Because we don't have control over our clients, there's a limit to what we can do. But we have discovered that providing the client with a "primer" application for each client-facing app helps their adoption. These primer apps stub out back-end systems that our client-facing app talks to. The primers have a simple web front end that enables our client to set up the systems in whatever configuration they want to test against. Because releases of these primers are in lockstep with the production releases, they too are available in the test environment with every incremental build. This enables our clients to play with the applications as they progress through the iteration.
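Conceptually, a primer is just a configurable stub standing in for a back-end system. The sketch below shows that idea in miniature; the PrimerStub class, its method names, and the scenario keys are hypothetical, and the real primers expose this configuration through a simple web front end rather than in-process calls.

```java
import java.util.HashMap;
import java.util.Map;

public class PrimerStub {
    // Scenario name -> canned XML response, as configured by the client team.
    private final Map<String, String> cannedResponses = new HashMap<>();

    // The client team sets up whatever configuration they want to test against.
    public void configure(String scenario, String xmlResponse) {
        cannedResponses.put(scenario, xmlResponse);
    }

    // The client-facing app under test calls this instead of the real back end.
    public String respondTo(String scenario) {
        String response = cannedResponses.get(scenario);
        if (response == null) {
            throw new IllegalStateException(
                    "No canned response for scenario: " + scenario);
        }
        return response;
    }

    public static void main(String[] args) {
        PrimerStub billing = new PrimerStub();
        billing.configure("new-customer", "<status version=\"1\">OK</status>");
        System.out.println(billing.respondTo("new-customer"));
    }
}
```

Because such a stub ships in lockstep with the real app, the client can exercise each incremental build against whatever back-end configuration they choose, without waiting on the real systems.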
Falling between the cracks

It is easier for some work to fall between the cracks if each team is self-contained in its own silo. For instance, one application relied heavily on Apache ActiveMQ, but who was responsible for it was a constant source of argument. The developers saw it as infrastructure. The system admins argued that the developers were better placed to deal with messaging issues because they knew the software much better. Only trial-and-error plus a little horse-trading resulted in an agreement.

Tricks of the trade

By sometimes being far-sighted — but more often through trial and error — we have come up with some best practices to help us deliver. These recognize that the code is only part of a successful iteration. Tests, builds, deployment, and documentation all have their own processes that must be tuned over time. Here, I'll share some techniques that work for us. Your mileage might vary, but if you adopt and adapt them your productivity could increase.

Builds

It is essential to have fast builds if the developers' attention is to be maintained. Fast builds also enable them to be brave when making changes, because they can very quickly test they haven't broken anything. Build times are naturally lower if applications are smaller. But another way to reduce the build time is to make the build multithreaded. Listing 1 shows part of an Apache Maven project object model (POM) file that uses the Maven Surefire Plugin (see Resources) for running multithreaded tests:

Listing 1. Snippet from a Maven pom.xml for multithreaded tests
    ...
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <version>2.11</version>
          <configuration>
            <parallel>classes</parallel>
            <threadCount>2</threadCount>
            <perCoreThreadCount>true</perCoreThreadCount>
          </configuration>
          <dependencies>
            <dependency>
              <groupId>org.apache.maven.surefire</groupId>
              <artifactId>surefire-junit47</artifactId>
              <version>2.11</version>
            </dependency>
          </dependencies>
        </plugin>
    ...

A full build for the application from which this excerpt was lifted takes less than a minute. You should heed two pieces of advice when making your builds multithreaded. The first is that doing so on a mature project often does not work out-of-the-box. Tests that run serially can mysteriously fail when running in parallel. It is therefore better to make builds multithreaded from the start. Second, thought must be given to how the tests are to be run. For instance, it's common for some tests to clean the database before they start. These tests are not appropriate for multithreading, because one might be tearing down the data while another is populating the table.

Releases

A cross-cutting concern for our many federated applications is their release into the various environments. To this end, we have written and open-sourced an Ant library called Conduit that facilitates the release process (see Resources). It has helpful macros that, for instance, scp the artifact to an appropriate environment and start it remotely using SSH. One nice side-effect of having a uniform way of starting and stopping applications is that writing integration tests that must start and stop whole sets of applications is much easier. Each script is the same. Another advantage of a library that deploys the application for you is that you can release to your many environments many times a day. The more often you do so, the less chance of surprises when you do the real release.
One word of warning, however: Conduit is still somewhat immature, and the error messages can be rather esoteric.

Living documentation

Agile consultants often advise avoiding documentation, saying that it quickly falls out of date. But this is not true if the documentation is part of the build. The department I work in uses Yatspec (see Resources), a tool that they wrote and open-sourced. It runs tests written in Java and turns the results and the source code into HTML documents. The output looks something like Figure 1:

Figure 1. Yatspec documentation

The test method that generated the output in Figure 1 can look as simple as the code in Listing 2:

Listing 2. A Yatspec test method to generate documentation

    import org.junit.Test;
    ...

    @Test
    public void pingAWifiServiceWithARouterAndCheckItIsReachable() throws Exception {
        givenOpusHasA(aWifiServiceWithARouter());
        andTheRouterCanBePingedSuccessfully();
        whenOpusReceivesFromTheOperator(aRequestForRouterStatusForTheWifiService());
        thenOpusCallsPinguWithTheIpAddressOfTheRouter();
        thenOpusSendsASuccessToTheOperator();
    }

Each method in Listing 2 makes assertions and highlights data that is particularly interesting. Yatspec can even automatically generate sequence diagrams like the one in Figure 2:

Figure 2. Automatically generated sequence diagram

Capturing the actions in the sequence diagram is simply a matter of injecting a test-code listener into your production code.

Source code as documentation

Ultimately, the source code is the documentation for developers. And sooner or later another team is going to have to look at yours. I've found that you will receive the undying admiration of your neighboring table of developers if you release your source code with each build. This is easy to do in Maven using just a standard plug-in (see Resources). Listing 3 shows you how to use the Maven Source Plugin:

Listing 3. Snippet from a Maven pom.xml for deploying source code
    ...
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-source-plugin</artifactId>
          <version>2.1.2</version>
          <executions>
            <execution>
              <id>attach-sources</id>
              <phase>verify</phase>
              <goals>
                <goal>jar-no-fork</goal>
              </goals>
            </execution>
            <execution>
              <id>attach-test-sources</id>
              <phase>verify</phase>
              <goals>
                <goal>test-jar-no-fork</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
    ...

Release-night lifesavers

Imagine this scenario: It's 3.30 a.m. on the night of a release, and your application fails to start. You wonder why, because you've spent most of a whole month writing tests. You're tired, you want to go home, and your colleagues are looking at you with growing annoyance. You double-check the code, but nothing seems to be wrong with it....

It's easy to forget that configuration is as much a part of your application as your code is. For instance, one application of ours went three whole releases without a single code bug. However, it was plagued with configuration issues that made release night a nail-biting experience. To avoid this unpleasantness, we wrote a class that would sanity-check all the properties. This simple JUnit test checks all the values in the property files for all the environments. It might execute a logical check of the value, maybe just checking that it matches a regular expression. Or, it might simply check to ensure that the value actually exists. This kind of test has prevented no end of late-night panics.

Conclusion

Agile methodologies recognize that projects never end. That's how software development rolls in the real world. Consequently, even after nearly a decade of Agile development, new issues (such as a coarse-grained release process and agreement on cross-cutting concerns) are still hitting us. But having spent 16 years as a developer, I believe that we're in better shape than most other companies. Releases are rarely rolled back. Production issues tend to be minor. Team spirit is mostly high.
Agile methodologies will vastly improve your delivery of software, but you must still be realistic about what can be achieved and how quickly. Any consultant who tells you that Agile effortlessly fixes all the problems with your process is probably trying to sell you something. To paraphrase Winston Churchill, Agile is the worst methodology — except for all the others.

Resources

Learn

- Agile Java Man: See Phillip Henry's blog for his musings on Java, software architecture, and Agile methodologies.
- "Three deadly pitfalls to avoid on agile implementations" (Paul Gorans, developerWorks, January 2012): Find out how inexperience, lack of a plan, and limited executive sponsorship can doom an Agile project.
- Agile DevOps: Read this article series on developerWorks to find out how collaborative development and operations teams can improve software-delivery processes.
- Apache Maven plug-ins: Learn more about the Maven Source Plugin and Maven Surefire Plugin.

Get products and technologies

- Yatspec (Yet Another Test Specification Library): Yatspec lets tests stay maintainable while producing human-readable documentation.
- Conduit: Conduit is a library of Ant targets that can be imported into your project to make releasing your code easier.
http://www.ibm.com/developerworks/library/a-agiletrenches/index.html
Introduction

To start, I'd like to note that I'm not using the word error in this talk to describe error types. Instead, it's to encompass anything in your code that is wrong, or not on the ideal code path.

In development, we want code that is stable, behaves correctly, and does what we intend it to do. This allows us to continue delivering new features and improvements at a fast pace. We also want to anticipate any "unexpected errors" and minimize the impact this has on the user. For example, if you aren't able to read part of the user's data from the database, we wouldn't want to just crash and give up. We also don't want to resort to having the user contact customer service support, only to be asked to reinstall the app. Not only is this an awful user experience, it can mean users losing their own data, and their trust in the app.

This is what I'd like to cover to achieve the above:

- How to write maintainable code
- How to write testable code
- What's our goal regarding correct behavior?
- Could it become incorrect in the future?

Input

In order to write correct code and prevent errors from occurring, we need to be able to handle all input. Inputs can be reduced into two types:

- Explicit
- Implicit

Explicit input, by nature of being explicit, will stand out to the reader, more clearly communicate intent, and be the parameters to your functions. Implicit input is any other data accessible to the function; it's usually referred to as state. In this very simple example, the parameter id would be explicit and settings would be implicit, as it's not a part of the method signature.

struct Foo {
    func bar(id: String) -> String {
        return settings[id] ?? ""
    }
}

State

State typically consists of variables and constants at global scope. Any outer scope accessible to a function can be a potential source of state, such as singletons.
If your function depends upon the values of any of these, the output can differ even with the same explicit input. Functions that neither depend on nor affect outside state are known as pure functions.

A temporal state is the state of the program being executed at any time. This is more of an issue when dealing with concurrent programming, but it's a usable concept here. A temporal state is partially defined by the order of statements in imperative code, and because the order of code tends to be taken for granted, it's classified as mostly implicit.

The order code needs to be run in can be made more explicit. The most basic way to accomplish this is to write a comment. For example, "A must be called before B". But if you want to communicate this to the compiler as well, you should make your code not compilable when rearranged in any other order.

Here is an example with Grand Central Dispatch. This will execute the print statements independently of each other, without a defined order:

let queue = DispatchQueue.global()
queue.async { print("1") }
queue.async { print("2") }
print("3")

You can think of the possible states of execution order being multiplied as the threads of execution interweave each other in different patterns. The print statements share the same serialized output. Because there are three places to fill, it's three factorial, in other words six potential outcomes.

Here is another example, without the concurrency:

func f() -> Int {
    let a = foo()
    let b = bar("b")
    doSomething()
    let c = bar(a)
    return b + c
}

Each statement here is evaluated in a top-down order, and there is a relationship enforced here by the compiler. I cannot place the line for let c first even if I wanted to, because we don't have the value of a yet.

Real world state/input

A "real-world" state or input isn't a distinct type of state/input but rather describes a property of it.
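The "three factorial" claim above is easy to verify mechanically. Here is a quick enumeration in Python (an illustration of the counting argument, not code from the talk) of every order in which three independent statements can land on the shared serialized output:

```python
from itertools import permutations

# Three print statements with no ordering constraints between them can be
# serialized onto shared output in any order: 3! = 6 possible outcomes.
statements = ["1", "2", "3"]
orderings = sorted(set(permutations(statements)))

print(len(orderings))  # 6
for order in orderings:
    print(" -> ".join(order))
```

Adding a fourth unconstrained statement would multiply the outcomes to 4! = 24, which is why unordered concurrency gets hard to reason about so quickly.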
For example, in a database we may assume a field only has non-negative numbers because we made sure not to insert a negative value. But at one point someone may have stored a -1 or a null to indicate that the data was out of sync. You can see how easily this would get out of hand. Code handling this kind of state shouldn't be modified haphazardly.

Real-world state example bug

In the Line app, there is a feature called Theme. It lets the user change the look of the app. My task was to migrate the string format of the setting that holds which Theme is currently in use by the user.

import Foundation

final class ThemeSubsystem {
    init() {
        let theme = UserDefaults.standard.string(forKey: "theme")
        // ...
    }
}

I soon found there is a problem with this code. The theme subsystem was inadvertently being initialized before my migration code, and because the new version of this code expects the newer format, it failed to initialize correctly with the value from user defaults. However, once the migration code ran, subsequent launches found the setting and it initialized properly. The bug was created because of the shared state and the ambiguous temporal relationship, or dependency, between the migration and the theme subsystem initialization.

How do we go about fixing this? One way is to make the relationship between the two explicit at compile time using the type system. In other words, if you don't have an instance of a type, you cannot call a function that requires it as input. Using this, we can create types that represent a transition in the state. Requiring instances of those types as parameters to our functions allows us to specify the state as a prerequisite to calling a function.

A basic way to implement this fix would be by creating a SetupComplete type and adding it to the initializer's parameters. All that's left is to create the instance of SetupComplete only when that state occurs, then pass it along.
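The talk's fix is written in Swift (shown next), but the token-type idea itself is language-agnostic. Here is the same pattern sketched in Python; the rendering, the stand-in `settings` dict, and the `run_migration` helper are my illustrative assumptions, only the SetupComplete name comes from the talk:

```python
class SetupComplete:
    """Token proving migration has run. Only run_migration() may mint one."""
    def __init__(self):
        raise TypeError("obtain a SetupComplete token via run_migration()")

def _mint_token():
    # Bypass __init__ so a token can only be created inside this module.
    return object.__new__(SetupComplete)

settings = {}  # stand-in for UserDefaults

def run_migration():
    # Migrate the stored theme setting to the new format, then mint the token.
    settings["theme"] = "new-format:default"
    return _mint_token()

class ThemeSubsystem:
    def __init__(self, proof: SetupComplete):
        # Requiring the token makes "migration before init" part of the API.
        assert isinstance(proof, SetupComplete)
        self.theme = settings.get("theme", "")

token = run_migration()          # must happen first...
themes = ThemeSubsystem(token)   # ...before the subsystem can be built
```

Python can't enforce this at compile time the way Swift does, but any call path that skips the migration now fails loudly instead of silently reading the old format.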
import Foundation

final class ThemeSubsystem {
    init(_: SetupComplete) {
        let theme = UserDefaults.standard.string(forKey: "theme")
        // ...
    }
}

Error vs Optional

I'd like to switch gears and talk about the actual error type in Swift. More specifically, when to use the error type over an optional.

An optional type typically represents a right or wrong value, or the presence or absence of a value. When used as input, it tends to represent optional semantics, as the name implies. Swift's syntax makes it fairly easy and straightforward to safely handle these optionals.

The error type has similar semantics of "wrong" or unable to complete successfully. I find the obvious and most common use of error types is for fatal or catastrophic errors. Well known examples of these would be trying to connect to a corrupt database or a failed IO. I find it helpful to use Swift's error type to handle problematic situations that have a low probability of occurring.

As an example, I've written a struct to represent ChatIDs in a specific string format. The struct validates the string and can provide some additional information on the type of chat it represents. At first, I represented this in code as a struct with a failable initializer. Then, I decided to use try-statements so that I could provide custom errors to ease troubleshooting later. Only knowing why I couldn't parse a ChatID string wasn't good enough. I wanted to know the overall context in which this occurred for error monitoring purposes. This is the type of situation that I feel the error type is best suited for.
But if there are several failure points or you want to add more context, using error propagation with your catch statement works well. About the content This talk was delivered live in March 2017 at try! Swift Tokyo. The video was recorded, produced, and transcribed by Realm, and is published here with the permission of the conference organizers.
https://academy.realm.io/posts/christopher-rogers-lessons-swift-error-handling-resilience-try-swift-2017/
In the past, I showed you how to parse XML using the SAX approach and how to boost your Android XML parsing with XML Pull. Both these techniques work, but are rather boring since they qualify as "plumbing code". In this tutorial I am going to show you how to perform XML binding in Android using the Simple framework.

XML data binding is quite popular in Java and there are multiple frameworks that allow binding. Solutions like JAXB and XStream are well established and heavily used. However, these libraries come with a large footprint, something that makes them inappropriate for use in the resource-constrained world of mobiles. The good news is that there is a new kid on the block, the Simple framework. As its name implies, it strives to bring some simplicity to the bloated world of XML. Very nice. You can get started with the framework by visiting the documentation page, and you should also read how JAXB compares to Simple. Simple could change the way you handle XML in your Java applications, so give it a try.

The big question is whether Simple is supported in Android's JVM. Android uses Dalvik, a specialized virtual machine optimized for mobile devices. Additionally, Dalvik uses a subset of Apache's Harmony project for the core of its Class Library. Not all Java core classes are supported. For example, some of the javax.xml.* subpackages are not included. Well, Simple CAN work with Android. More specifically, I managed to use version 2.3.2, which can be found on Simple's download page. The corresponding JAR has a size of 287KB. The release notes for that version mention:

- Addition of DOM provider so that StAX is not a required dependency
- Fix made to ensure property defaults are applied correctly to classes

The first issue is very important because the StAX API is not included in Android's SDK. Note that the latest versions of Simple (after v2.3.2) also work and can be used for our purposes.
Let's cut to the chase and see how to perform the binding. As an example, I will use an XML document that is returned as a response from the TMDb API, which I use in the sample full Android application I build. Here is the document: a movies search for "Transformers" and (year) "2007". The response example can also be found here.

First of all, download Simple version 2.3.2 and include it in your project's classpath. Then take a quick look at the Simple framework Javadocs. The most important thing is to create our model objects and map them appropriately to the XML formatted document. If we take a look at the XML file, we shall see that the root element is called OpenSearchDescription and it includes a Query element, a "totalResults" element and a number of movies. Here is how our main model class looks:

package com.javacodegeeks.xml.bind.model;

import java.util.List;

import org.simpleframework.xml.Element;
import org.simpleframework.xml.ElementList;
import org.simpleframework.xml.Root;

@Root
public class OpenSearchDescription {

    @Element(name="Query")
    public Query query;

    @Element
    public int totalResults;

    @ElementList
    public List<Movie> movies;

}

The Root annotation denotes that the specific class represents a root XML element. We also use the Element and ElementList annotations for the nested elements. Note that Simple can handle both "getters/setters" and "public fields" approaches. I use the latter in this example. One thing to be aware of is that we use the name field (for "Query") in order to provide the corresponding XML element name. This should be done when the XML element has a different name than the Java field, since Simple by default looks for an element with the same name as the field.
Let's now see the Query class:

package com.javacodegeeks.xml.bind.model;

import org.simpleframework.xml.Attribute;
import org.simpleframework.xml.Element;

@Element
public class Query {

    @Attribute
    public String searchTerms;

}

This class contains only an attribute called "searchTerms" so the relevant field is annotated with Attribute. Very easy until now. Let's check the Movie class:

package com.javacodegeeks.xml.bind.model;

import java.util.List;

import org.simpleframework.xml.Element;
import org.simpleframework.xml.ElementList;

@Element(name="movie")
public class Movie {

    @Element(required=false)
    public String score;

    @Element(required=false)
    public String popularity;

    @Element(required=false)
    public String name;

    @Element(required=false)
    public String id;

    @Element(required=false)
    public String biography;

    @Element(required=false)
    public String url;

    @Element(required=false)
    public String version;

    @Element(required=false)
    public String lastModifiedAt;

    @ElementList
    public List<Image> images;

}

The only new thing is that the required field is used in order to declare that a field is not required (can be null). This is done because some fields are empty in the API response. Let's see the Image class:

package com.javacodegeeks.xml.bind.model;

import org.simpleframework.xml.Attribute;
import org.simpleframework.xml.Element;

@Element(name="image")
public class Image {

    @Attribute
    public String type;

    @Attribute
    public String url;

    @Attribute
    public String size;

    @Attribute
    public int width;

    @Attribute
    public int height;

    @Attribute
    public String id;

}

This class includes only attributes so we annotate the fields accordingly. The final step is to read the source XML and let Simple wire all the classes and populate the fields. This is done by using the Persister class, which provides an implementation of the Serializer interface. We shall use its read method, which reads the contents of the XML document from a provided source and converts it into an object of the specified type.
Note that we have disabled the strict mode. Here is how it looks inside an Android Activity:

package com.javacodegeeks.xml.bind;

import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.HttpStatus;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;
import org.simpleframework.xml.Serializer;
import org.simpleframework.xml.core.Persister;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
import android.widget.Toast;

import com.javacodegeeks.xml.bind.model.OpenSearchDescription;

public class SimpleExampleActivity extends Activity {

    private static final String url = "";

    private DefaultHttpClient client = new DefaultHttpClient();

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        try {
            String xmlData = retrieve(url);
            Serializer serializer = new Persister();
            Reader reader = new StringReader(xmlData);
            OpenSearchDescription osd = serializer.read(OpenSearchDescription.class, reader, false);
            Log.d(SimpleExampleActivity.class.getSimpleName(), osd.toString());
        } catch (Exception e) {
            Toast.makeText(this, "Error Occured", Toast.LENGTH_LONG).show();
        }
    }

    public String retrieve(String url) {
        HttpGet getRequest = new HttpGet(url);
        try {
            HttpResponse getResponse = client.execute(getRequest);
            final int statusCode = getResponse.getStatusLine().getStatusCode();
            if (statusCode != HttpStatus.SC_OK) {
                return null;
            }
            HttpEntity getResponseEntity = getResponse.getEntity();
            if (getResponseEntity != null) {
                return EntityUtils.toString(getResponseEntity);
            }
        } catch (IOException e) {
            getRequest.abort();
            Log.w(getClass().getSimpleName(), "Error for URL " + url, e);
        }
        return null;
    }

}

This is a typical Android Activity. We retrieve the XML document as an internet resource (check my tutorial on how to use the HTTP API) and then create a StringReader from the response. We feed the Serializer with that and then let Simple perform its magic and return a full class with the appropriate fields and embedded classes all populated.
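For readers who want to see the annotation-to-object mapping idea outside of Java, the same binding concept can be sketched in a few lines of Python with the standard library. This is purely illustrative; the tiny XML sample below is made up in the spirit of the TMDb response, and the article's actual framework is Simple for Java:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass
from typing import List

# A tiny, made-up XML sample mirroring the OpenSearchDescription shape.
XML = """
<OpenSearchDescription>
  <Query searchTerms="Transformers"/>
  <totalResults>1</totalResults>
  <movies>
    <movie><name>Transformers</name><id>1858</id></movie>
  </movies>
</OpenSearchDescription>
"""

@dataclass
class Movie:
    name: str
    id: str

@dataclass
class SearchResult:
    search_terms: str
    total_results: int
    movies: List[Movie]

def bind(xml_text: str) -> SearchResult:
    """Hand-rolled binding: walk the tree and populate typed objects."""
    root = ET.fromstring(xml_text)
    movies = [Movie(m.findtext("name"), m.findtext("id"))
              for m in root.find("movies")]
    return SearchResult(
        search_terms=root.find("Query").get("searchTerms"),
        total_results=int(root.findtext("totalResults")),
        movies=movies,
    )

result = bind(XML)
```

What Simple buys you over this hand-rolled version is exactly the plumbing: the annotations declare the mapping once, and the framework writes the `bind` function for you.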
The specific app will just dump the classes string representations to the system’s log, which can be monitored in the DDMS view. That’s all guys. No more manual XML parsing for me. As always, you can download the Eclipse project created for this tutorial. Happy Android coding! And don’t forget to share! Related Articles: 05-18 12:05:35.832: W/System.err(930): org.simpleframework.xml.stream.NodeException: Document has no root element Great tutorial, and thanks a lot. Please, I have a question. If I want to post from my android app the same data to be stored in an external database using the web service, how do I do this? Job well done! Although SimpleXml can work on Android, it’s still a little heavy-weight for resource limited mobile device, anyway, SimpleXml originally was not designed with Android in mind. I’ve built a light-weight xml binding framework tailored for Android platform, its called Nano, and I have adapted the author’s original movie search full android application to use Nano binding instead, if you are interested, you can find Nano and the full adapted movie search application by following links below: how can i read xml from assets folder ? >> XML is still important in the area of web services even though REST has gained significant attention lately. REST does not imply JSON. You can have a REST API using XML.
http://www.javacodegeeks.com/2011/02/android-xml-binding-simple-tutorial.html/comment-page-1/
Pants uses the popular Pytest test runner to run Python tests. You may write your tests in Pytest-style, unittest-style, or mix and match both.

Benefit of Pants: runs each file in parallel

Each file gets run as a separate process, which gives you fine-grained caching and better parallelism. Given enough cores, Pants will be able to run all your tests at the same time.

This also gives you fine-grained caching. If you run ./pants test ::, and then you only change one file, then only tests that depended on that changed file will need to rerun.

Examples

# Run all tests in the repository.
./pants test ::

# Run all the tests in this target.
./pants test helloworld/util:test

# Run just the tests in this file.
./pants test helloworld/util/lang_test.py

# Run just one test.
./pants test helloworld/util/lang_test.py -- -k test_language_translator

Pytest version and plugins

To change the Pytest version, set the version option in the [pytest] scope. To install any plugins, add the pip requirement string to pytest_plugins in the [pytest] scope, like this:

[pytest]
version = "pytest>=5.4"
pytest_plugins.add = [
  "pytest-django>=3.9.0,<4",
  "pytest-rerunfailures==9.0",
]

Alternatively, if you only want to install the plugin for certain tests, you can add the plugin to the dependencies field of your python_tests. See Third-party dependencies for how to install Python dependencies. For example:

pytest-django==3.10.0

python_tests(
    name="tests",
    # Normally, Pants infers dependencies based on imports.
    # Here, we don't actually import our plugin, though, so
    # we need to explicitly list it.
    dependencies=["//:pytest-django"],
)

Testing Python 2 code? Use Pytest 4.x

By default, Pants uses Pytest 6.x, which only supports Python 3 code. If you need to run Python 2 tests, set the option version in the [pytest] scope to pytest>=4.6,<5.

Avoid the pytest-xdist plugin

We do not recommend using this plugin because its concurrency conflicts with Pants' own parallelism.
Using Pants will bring you similar benefits to pytest-xdist already: Pants will run each test target in parallel.

Tip: plugins for better Pytest output

Add pytest-icdiff and pygments to the option pytest_plugins for better error messages and diffs from Pytest.

Controlling output

By default, Pants only shows output for failed tests. You can change this by setting --test-output to one of all, failed, or never, e.g. ./pants test --output=all ::.

You can permanently set the output format in your pants.toml like this:

[test]
output = "all"

Tip: Use Pytest options to make output more or less verbose

See "Passing arguments to Pytest". For example:

$ ./pants test project/app_test.py -- -q

You may want to permanently set the Pytest option --no-header to avoid printing the Pytest version for each test run:

[pytest]
args = ["--no-header"]

Passing arguments to Pytest

To pass arguments to Pytest, put them at the end after --, like this:

$ ./pants test project/app_test.py -- -k test_function1 -vv -s

You can also use the args option in the [pytest] scope, like this:

[pytest]
args = ["-vv"]

Tip: some useful Pytest arguments

See the Pytest documentation for more information.

- -k expression: only run tests matching the expression.
- -v: verbose mode.
- -s: always print the stdout and stderr of your code, even if a test passes.

How to use Pytest's --pdb option

You must run ./pants test --debug for this to work properly. See the section "Running tests interactively" for more information.

Ignore the cache with --force

To force your tests to run again, rather than reading from the cache, run ./pants test --force path/to/test.py.

Running tests interactively

Because Pants runs multiple test targets in parallel, you will not see your test results appear on the screen until the test has completely finished. This means that you cannot use debuggers normally; the breakpoint will never show up on your screen and the test will hang indefinitely (or timeout, if timeouts are enabled).
Instead, if you want to run a test interactively, such as to use a debugger like pdb, run your tests with ./pants test --debug. For example:

def test_debug():
    import pdb; pdb.set_trace()
    assert 1 + 1 == 2

$ ./pants test --debug test_debug_example.py

===================================================== test session starts =====================================================
platform darwin -- Python 3.6.10, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /private/var/folders/sx/pdpbqz4x5cscn9hhfpbsbqvm0000gn/T/.tmpn2li0z
plugins: cov-2.8.1, timeout-1.3.4
collected 6 items

test_debug_example.py
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB set_trace (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /private/var/folders/sx/pdpbqz4x5cscn9hhfpbsbqvm0000gn/T/.tmpn2li0z/test_debug_example.py(11)test_debug()
-> assert 1 + 1 == 2
(Pdb) 1 + 1
2

If you use multiple files with test --debug, they will run sequentially rather than in parallel.

Tip: using ipdb in tests

ipdb integrates IPython with the normal pdb debugger for enhanced features like autocomplete and improved syntax highlighting. ipdb is very helpful when debugging tests. To be able to access ipdb when running tests, add this to your pants.toml:

[pytest]
pytest_plugins.add = ["ipdb"]

Then, you can use import ipdb; ipdb.set_trace() in your tests.

Tip: using the IntelliJ/PyCharm remote debugger in tests

First, add the following target in some BUILD file (e.g., the one containing your other 3rd-party dependencies):

python_requirement_library(
    name = "pydevd-pycharm",
    requirements=["pydevd-pycharm==203.5419.8"],  # Or whatever version you choose.
)

You can check this into your repo, for convenience. Now, use the remote debugger as usual:

- Start a Python remote debugging session in PyCharm, say on port 5000.
- Add the following code at the point where you want execution to pause and connect to the debugger:

import pydevd_pycharm
pydevd_pycharm.settrace('localhost', port=5000, stdoutToServer=True, stderrToServer=True)

Run your test with ./pants test --debug as usual.

Note: The first time you do so you may see some extra dependency resolution work, as pydevd-pycharm has now been added to the test's dependencies, via inference. If you have dependency inference turned off in your repo, you will have to manually add a temporary explicit dependency in your test target on the pydevd-pycharm target.

Using timeouts

Pants can cancel tests which take too long. This is useful to prevent tests from hanging indefinitely.

To add a timeout for a particular python_tests target, set the timeout field to an integer value of seconds, like this:

python_tests(
    name="tests",
    timeout=120,  # seconds.
)

This timeout will apply to each file belonging to the python_tests target, meaning that test_f1.py will have a timeout of 120 seconds and test_f2.py will have a timeout of 120 seconds. If you want finer-grained timeouts, create a dedicated python_tests target for each file:

python_tests(
    name="test_f1",
    sources=["test_f1.py"],
    timeout=20,
)

python_tests(
    name="test_f2",
    sources=["test_f2.py"],
    timeout=35,
)

You can also set a default value and a maximum value in pants.toml:

[pytest]
timeout_default = 60
timeout_maximum = 600

If a target sets its timeout higher than --pytest-timeout-maximum, Pants will use the value in --pytest-timeout-maximum.

Tip: temporarily ignoring timeouts

When debugging locally, such as with pdb, you might want to temporarily disable timeouts. To do this, set --no-pytest-timeouts:

$ ./pants test project/app_test.py --no-pytest-timeouts

conftest.py

Pytest uses conftest.py files to share fixtures and config across multiple distinct test files. The default sources value for a python_tests target includes conftest.py.
So, if you declare a BUILD file like this, the conftest.py will be included:

python_tests(
    name="tests",
    timeout=120,
    # We leave off `sources` to use the default value.
)

Otherwise, you can explicitly include the conftest.py in the sources field of a python_tests() target.

Pants will also infer dependencies on any conftest.py files in the current directory and any ancestor directories, which mirrors how Pytest behaves. This requires that each conftest.py has a target referring to it. You can verify this is working correctly by running ./pants dependencies path/to/my_test.py and confirming that each conftest.py file shows up. (You can turn off this feature by setting conftests = false in the [python-infer] scope.)

Depending on test utilities, resources, and built packages

Depending on test utilities

Use the target type python_library for test utilities, rather than python_tests. For example:

python_library(
    name="testutils",
    sources=["testutils.py"],
)

# We leave off the `dependencies` field because Pants will infer
# it based on import statements.
python_tests(name="tests")

...

@contextmanager
def setup_tmpdir(files: Mapping[str, str]) -> Iterator[str]:
    with temporary_dir() as tmpdir:
        ...
        yield rel_tmpdir

from helloworld.testutils import setup_tmpdir

def test_app() -> None:
    with setup_tmpdir({"f.py": "print('hello')"}):
        assert ...

Depending on resources

Refer to Resources for how to include resource files in your tests. It's often most convenient to use files and relocated_files targets in your test code, although you can also use resources.

Depending on packages

For integration tests, you may want to include the result of ./pants package in your test, such as a generated .pex file. You can then, for example, use it with subprocess.run() or unzip the package.
To depend on a built package, use the runtime_package_dependencies field on the python_tests target:

pex_binary(
    name="bin",
    sources=["say_hello.py"],
)

python_tests(
    name="tests",
    runtime_package_dependencies=[":bin"],
)

print("Hello, test!")

import subprocess

def test_say_hello():
    assert b"Hello, test!" in subprocess.run(
        ['helloworld/bin.pex'], capture_output=True, check=True
    ).stdout

Coverage

To report coverage using Coverage.py, set the option --test-use-coverage:

$ ./pants test --use-coverage helloworld/util/lang_test.py

Or to permanently use coverage, set in your config file:

[test]
use_coverage = true

Failure to parse files?

Coverage defaults to running with Python 3.6+ when generating a report, which means it may fail to parse Python 2 syntax and Python 3.8+ syntax. You can fix this by changing the interpreter constraints for running Coverage:

# pants.toml
[coverage-py]
interpreter_constraints = [">=3.8"]

However, if your repository has some Python 2-only code and some Python 3-only code, you will not be able to choose an interpreter that works with both versions. So, you will need to set up a .coveragerc config file and set ignore_errors = True under [report], like this:

# .coveragerc
[report]
ignore_errors = True

# pants.toml
[coverage-py]
config = ".coveragerc"

ignore_errors = True means that those files will simply be left off of the final coverage report. There's a proposal for Pants to fix this by generating multiple reports when necessary. We'd appreciate your feedback.

Coverage will report data on any files encountered during the tests. You can filter down the results by using the option --coverage-py-filter and passing the name(s) of modules you want coverage data for. Each module name is recursive, meaning submodules will be included.
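The integration-test pattern above, run a packaged binary and assert on its output, can be exercised outside of Pants too. The sketch below substitutes a Python one-liner for the .pex so it is self-contained; the helper name and the stand-in command are assumptions for illustration:

```python
import subprocess
import sys

def run_packaged_binary(argv):
    """Run a packaged executable and return its captured stdout (bytes)."""
    result = subprocess.run(argv, capture_output=True, check=True)
    return result.stdout

# In a real Pants test this would be ['helloworld/bin.pex']; here we use a
# stand-in command so the sketch runs anywhere Python does.
out = run_packaged_binary([sys.executable, "-c", "print('Hello, test!')"])
```

Note the `capture_output=True`: without it, `subprocess.run(...)` returns a CompletedProcess whose `stdout` is None, so the membership assertion would never see the binary's output.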
For example:

$ ./pants test --use-coverage helloworld/util/lang_test.py --coverage-py=helloworld.util

$ ./pants test --use-coverage helloworld/util/lang_test.py --coverage-py='["helloworld.util.lang", "helloworld.util.lang_test"]'

Coverage will not report on unencountered files

Coverage will only report on files encountered during the tests' run. This means that your coverage score may be misleading; even with a score of 100%, you may have files without any tests. This is a shortcoming of Coverage itself.

Pants will default to writing the results to the console, but you can also output in HTML, XML, JSON, or the raw SQLite file:

[coverage-py]
report_type = ["raw", "xml", "html", "json", "console"]

You can change the output dir with the output_dir option in the [coverage-py] scope.

You may use a custom .coveragerc config file by setting the option config in the [coverage-py] scope. You must include relative_files = True in the [run] section for Pants to work.

[coverage-py]
config = ".coveragerc"

[run]
relative_files = True
branch = True

When generating HTML, XML, and JSON reports, you can automatically open the reports through the option --test-open-coverage.

Saving JUnit XML results

Pytest can generate JUnit XML result files. This allows you to hook up your results, for example, to dashboards.

To save JUnit XML result files, set the option junit_xml_dir in the [pytest] scope, like this:

[pytest]
junit_xml_dir = "dist/pytest_results"

You may also want to set the option junit_family in the [pytest] scope to change the format. Run ./pants help-advanced pytest for more information.
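Dashboards typically consume these JUnit XML files by parsing the testsuite element. As an illustration, here is a minimal consumer; the inlined XML follows the standard JUnit report shape and is not a real Pants-generated file:

```python
import xml.etree.ElementTree as ET

# A minimal JUnit-style result file, inlined for the sketch.
JUNIT_XML = """
<testsuite name="pytest" tests="2" failures="1" errors="0">
  <testcase classname="helloworld.util.lang_test" name="test_ok"/>
  <testcase classname="helloworld.util.lang_test" name="test_bad">
    <failure message="assert 1 == 2"/>
  </testcase>
</testsuite>
"""

def summarize(xml_text):
    """Return (total, failed) test counts from a JUnit XML report."""
    suite = ET.fromstring(xml_text)
    cases = suite.findall("testcase")
    failed = [c for c in cases if c.find("failure") is not None]
    return len(cases), len(failed)

total, failed = summarize(JUNIT_XML)
```

A real integration would glob `dist/pytest_results/*.xml` (the directory configured above) and aggregate the per-suite counts.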
https://www.pantsbuild.org/docs/python-test-goal
Politics & Current Affairs Society Politics & Current Affairs Society Просмотров: 18 123010 Gilley Motion to Compel Memorandum People vs Espina Criminal Procedure 2010 U.S. versus Vadim Mikerin, Judge Theodore Chuang, 40 page sentencing transcript dated Aug 31st 2015 United States v. Benitez, 4th Cir. (2011) United States v. Kaydahzinne, 10th Cir. (2009) RA 8353 USA v. Vallmoe Shqaire - Order Re Judicial Removal of Defendant From United States United States v. George W. Cermark, 622 F.2d 1049, 1st Cir. (1980) United States v. Michael Rafael Collins, 141 F.3d 1186, 10th Cir. (1998) Timothy Willie Sweetwine v. State of Maryland and the Warden of the Maryland House of Correction, 769 F.2d 991, 4th Cir. (1985) Dead Pledge George W. Del Vecchio v. Illinois, 474 U.S. 883 (1985) United States v. Robert White, 4th Cir. (2013) Bustos vs Lucero United States v. Ishmael Santiago, 4th Cir. (2012)) United States v. Harris, 4th Cir. (1996) United States v. Hill, 4th Cir. (2011) United States v. Larry Jarome Rogers, 848 F.2d 166, 11th Cir. (1988) 10 Поиск в документе PEOPLE OF THE PHILIPPINES, plaintiff-appellee, vs. NIGEL RICHARD GATWARD, and U AUNG WIN,accused, NIGEL RICHARD GATWARD, accused-appellant. REGALADO, J.: The accession into our statute books on December 31, 1993 of Republic Act No. 7659, [1] which authorized the re-imposition of the death penalty and amended certain provisions of the Revised Penal Code and the Dangerous Drugs Act of 1972, raised the level of expectations in the drive against criminality. 
As was to be expected, however, some innovations therein needed the intervention of this Court for a judicial interpretation of amendments introduced to the dangerous drugs law.[2] The same spin-off of novelty, this time by the new provision fixing the duration of reclusion perpetua which theretofore had not been spelled out with specificity in the Revised Penal Code, produced some conflicting constructions, more specifically on whether such penalty is divisible or indivisible in nature. That is actually the major issue in these cases, the factual scenario and the culpability of both accused having been. The antecedents being undisputed, and with a careful review and assessment of the records of this case having sustained the same, we reproduce hereunder the pertinent parts of the decision of the trial court jointly deciding the criminal cases separately filed against each of the accused. Although only one of them, Nigel Richard Gatward, has appealed his conviction to us, for reasons hereinafter explained we shall likewise include the disposition by the court a quo of the case against U Aung Win. 1. The lower court stated the cases against the accused, the proceedings therein and its findings thereon, as follows: In Criminal Case No. 94-6268, the accused is charged with violating Section 4 of Republic Act No. 6425, the Dangerous Drugs Act of 1972, allegedly in this manner: That on or about the 31st (sic) day of August 1994, in the vicinity of the Ninoy Aquino International Airport, Pasay City, x x x , the above-named accused not being authorized by law, did then and there wilfully, unlawfully and feloniously transport heroin (2605.70 grams and 2632.0 grams) contained in separate carton envelopes with a total weight of 5237.70 grams which is legally considered as a prohibited drug. (Information dated Sept. 14, 1994) In Criminal Case No. 
94-6269, the accused is indicted for transgressing Section 3 of the Dangerous Drugs Act of 1972, purportedly in this way: That on or about the 30th day of August 1994, at the arrival area of Ninoy Aquino International Airport, Pasay City, x x x, the above-named accused not being authorized by law, did, then and there wilfully, unlawfully and feloniously import and bring into the Philippines 5579.80 grams of heroin which is legally considered as a prohibited drug. (Information also dated Sept. 14, 1994) Accused Nigel Richard Gatward in Criminal Case No. 94-6268 pleaded not guilty of the charge when arraigned. On the other hand, accused U Aung Win in Criminal Case No. 94-6269, assisted by Atty. Willy Chan of the Public Attorney's Office of the Department of Justice, entered a plea of guilty of the crime charged upon his arraignment. Since it is a capital offense, the Court asked searching questions to determine the voluntariness and the full comprehension by the accused of the consequences of his plea. The accused manifested that he was entering a plea of guilty voluntarily without having been forced or intimidated into doing it. The nature of the charge was explained to him, with emphasis that the offense carries with it the penalty of reclusion perpetua to death and his pleading guilty of it might subject him to the penalty of death. The accused answered that he understood fully the charge against him and the consequences of his entering a plea of guilty. The defense counsel likewise made an assurance in open court that he had explained to U Aung Win the nature of the charge and the consequences of his pleading guilty of it. Having been thus apprised, the accused still maintained his plea of guilty of the offense charged against him.
Since the offense admitted by him is punishable by death, the case was still set for trial for the reception of the evidence of the prosecution to prove the guilt and the degree of culpability of the accused and that of the defense to establish mitigating circumstances. Upon motion of the prosecution without any objection from the defense, these two cases were consolidated and tried jointly, since the offenses charged arose from a series of related incidents and the prosecution would be presenting common evidence in both. At about 3:30 in the afternoon of August 30, 1994, accused U Aung Win, a passenger of TG Flight No. 620 of the Thai Airways which had just arrived from Bangkok, Thailand, presented his luggage, a travelling bag about 20 inches in length, 14 inches in width and 10 inches in thickness, for examination to Customs Examiner Busran Tawano, who was assigned at the Arrival Area of the Ninoy Aquino International Airport (NAIA) in Pasay City. The accused also handed to Tawano his Customs Declaration No. 128417 stating that he had no articles to declare. When Tawano was about to inspect his luggage, the accused suddenly left, proceeding towards the direction of Carousel No. 1, the conveyor for the pieces of luggage of the passengers of Flight No. 620, as if to retrieve another baggage from it. After having inspected the luggages of the other incoming passengers, Tawano became alarmed by the failure of U Aung Win to return and suspected that the bag of the accused contained illegal articles. The Customs Examiner reported the matter to his superiors. Upon their instructions, the bag was turned over to the office of the Customs Police in the NAIA for x-ray examination where it was detected that it contained some powdery substance. When opened, the bag revealed two packages containing the substance neatly hidden in between its partitions. 
Representative samples of the substance were examined by Elizabeth Ayonon, a chemist of the Crime Laboratory Service of the Philippine National Police (PNP) assigned at the Arrival Area of the NAIA, and by Tita Advincula, another chemist of the PNP Crime Laboratory Service at Camp Crame, and found to be positive for heroin. The two chemists concluded that the entire substance, with a total weight of 5,579.80 grams, contained in the two packages found in the bag of U Aung Win, is heroin. A manhunt was conducted to locate U Aung Win. The personnel of the Bureau of Immigration and Deportation in the NAIA were asked to place the accused in the hold order list. The offices of the different airlines in the airport were also alerted to inform the Enforcement and Security Service and the Customs Police Division of the NAIA of any departing passenger by the name of U Aung Win who would check in at their departure counters. A team was likewise sent to the Park Hotel in Belen St., Paco, Manila, which accused U Aung Win had indicated in his Customs Declaration as his address in the Philippines. But the accused was not found in that hotel. At about 7:45 p.m. of the same date of August 30, 1994, Rey Espinosa, an employee of the Lufthansa Airlines, notified the commander of the NAIA Customs Police District Command that a certain Burmese national by the name of U Aung Win appeared at the check-in counter of the airline as a departing passenger. Immediately, a team of law enforcers proceeded to the Departure Area and apprehended the accused after he had been identified through his signatures in his Customs Declaration and in his Bureau of Immigration and Deportation Arrival Card. Customs Examiner Tawano also positively identified the accused as the person who left his bag with him at the Arrival Area of the NAIA. 
During the investigation of U Aung Win, the agents of the Customs Police and the Narcotics Command (NARCOM) gathered the information that the accused had a contact in Bangkok and that there were other drug couriers in the Philippines. Following the lead, a team of lawmen, together with U Aung Win, was dispatched to the City Garden Hotel in Mabini St., Ermita, Manila, to enable U Aung Win to communicate with his contact in Bangkok for further instructions. While the police officers were standing by, they noticed two persons, a Caucasian and an oriental, alight from a car and enter the hotel. U Aung Win whispered to Customs Police Special Agent Edgar Quiñones that he recognized the two as drug couriers whom he saw talking with his contact in Bangkok named Mau Mau. The members of the team were able to establish the identity of the two persons as accused Nigel Richard Gatward and one Zaw Win Naing, a Thailander, from the driver of the hotel service car used by the two when they arrived in the hotel. It was gathered by the law enforcers that Gatward and Zaw Win Naing were scheduled to leave for Bangkok on board a KLM flight. On August 31, 1994, operatives of the NAIA Customs Police mounted a surveillance operation at the Departure Area for Gatward and Zaw Win Naing who might be leaving the country. At about 7:45 p.m. of the same date, Special Agent Gino Minguillan of the Customs Police made a verification on the passenger manifest of KLM Royal Dutch Airlines Flight No. 806, bound for Amsterdam via Bangkok, which was scheduled to depart at about 7:55 that evening. He found the name GATWARD/NRMR listed therein as a passenger for Amsterdam and accordingly informed his teammates who responded immediately. Customs Police Captain Juanito Algenio requested Victorio Erece, manager of the KLM airline at the NAIA, to let passenger Gatward disembark from the aircraft and to have his checked-in luggage, if any, unloaded.
The manager acceded to the request to off-load Gatward but not to the unloading of his check-in bag as the plane was about to depart and to do so would unduly delay the flight. However, Erece made an assurance that the bag would be returned immediately to the Philippines on the first available flight from Bangkok. Upon his disembarkment, Gatward was invited by the police officers for investigation. At about 3:00 o'clock in the afternoon of September 1, 1994, Gatward's luggage, a travelling bag almost of the same size as that of U Aung Win, was brought back to the NAIA from Bangkok through the Thai Airways, pursuant to the request of Erece which was telexed in the evening of August 31, 1994, to the KLM airline manager in Bangkok. Upon its retrieval, the law enforcers subjected the bag to x-ray examinations in the presence of accused Gatward and some Customs officials. It was observed to contain some powdery substance. Inside the bag were two improvised envelopes made of cardboard each containing the powdery substance, together with many clothes. The envelopes were hidden inside the bag, one at the side in between a double-wall, the other inside a partition in the middle. Upon its examination by Chemists Ayonon and Advincula pursuant to the request of Police Senior Inspector John Campos of the NARCOM, the powdery substance contained in the two cardboard envelopes, with a net weight of 5,237.70 grams, was found to be heroin.[3] The court below made short shrift of the defense raised by herein appellant. Apart from the well-known rule on the respect accorded to the factual findings of trial courts because of the vantage position they occupy in that regard, we accept its discussion thereon by reason of its clear concordance with the tenets of law and logic. Again we quote: Accused Gatward denied that the bag containing the heroin was his luggage.
However, that the said bag belongs to him is convincingly shown by the fact that the serial number of the luggage tag, which is KL 206835, corresponds to the serial number of the luggage claim tag attached to the plane ticket of the accused. Moreover, as testified to by Manager Erece of the KLM airline, the luggage of Gatward located in Container No. 1020 of KLM Flight No. 806 was the same luggage which was returned to the NAIA on September 1, 1994, on board Thai Airways TG Flight No. 620, pursuant to the request made by him to the KLM manager in Bangkok. The testimony of Erece should be given weight in accordance with the presumption that the ordinary course of business has been followed. (Sec. 3(q), Rule 131, Revised Rules on Evidence). No circumstance was shown by the defense which would create a doubt as to the identity of the bag as the luggage of Gatward which he checked in for KLM Flight No. 806 for Amsterdam with stopover in Bangkok. Accused Gatward was present during the opening of his bag and the examination of its contents. He was also interviewed by some press reporters in connection with the prohibited drug found in the bag. Gatward did not then disclaim ownership of the bag and its heroin contents. His protestations now that the bag does not belong to him should be deemed as an afterthought which deserves no credence. Gatward posited that he checked in a different bag when he boarded KLM Flight No. 806, explaining that upon his apprehension by the agents of the NAIA Customs Police, he threw away the claim tag for the said luggage. He alleged that the said bag contained, among other things, not only important documents and papers pertaining to his cellular phone business in the pursuit of which he came to the Philippines, but also money amounting to £1,500.00. Gatward stressed that the bag did not have any illegal articles in it.
If this were so, it was unusual for him, and certainly not in accordance with the common habit of man, to have thrown away the claim tag, thereby in effect abandoning the bag with its valuable contents. Not having been corroborated by any other evidence, and being rendered unbelievable by the circumstances accompanying it as advanced by him, the stand of accused Gatward that his luggage was different from that which contained the 5,237.70 grams of heroin in question commands outright rejection.[4] The trial court was also correct in rejecting the challenge to the admissibility in evidence of the heroin retrieved from the bag of appellant. While no search warrant had been obtained for that purpose, when appellant checked in his bag as his personal luggage as a passenger of KLM Flight No. 806 he thereby agreed to the inspection thereof in accordance with customs rules and regulations, an international practice of strict observance, and waived any objection to a warrantless search. His subsequent arrest, although likewise without a warrant, was justified since it was effected upon the discovery and recovery of the heroin in his bag, or in flagrante delicto. The conviction of accused U Aung Win in Criminal Case No. 94-6269 is likewise unassailable. His culpability was not based only upon his plea of guilty but also upon the evidence of the prosecution, the presentation of which was required by the lower court despite said plea. The evidence thus presented convincingly proved his having imported into this country the heroin found in his luggage which he presented for customs examination upon his arrival at the international airport. There was, of course, no showing that he was authorized by law to import such dangerous drug, nor did he claim or present any authority to do so. 2. It is, however, the penalties imposed by the trial court on the two accused which this Court cannot fully accept.
This is the presentation made, and the rationalization thereof, by the court below: According to Section 20 of the Dangerous Drugs Act of 1972, as amended by Republic Act No. 7659, the penalties for the offenses under Sections 3 and 4 of the said Act shall be applied if the dangerous drugs involved, with reference to heroin, is 40 grams or more. Since the heroin subject of each of these two cases exceeds 40 grams, it follows that the penalty which may be imposed on each accused shall range from reclusion perpetua to death. To fix the proper penalty, it becomes necessary to determine whether any mitigating or aggravating circumstance had attended the commission of the offenses charged against the accused. With respect to Gatward, no aggravating or mitigating circumstance was shown which might affect his criminal liability. Relative to U Aung Win, no aggravating circumstance was likewise established by the prosecution. However, the voluntary plea of guilty of the said accused, which was made upon his arraignment and therefore before the presentation of the evidence of the prosecution, should be appreciated as a mitigating circumstance. Under Article 63 of the Revised Penal Code, which prescribes the rules for the application of indivisible penalties, in all cases in which the law prescribes a penalty composed of two indivisible penalties, the lesser penalty shall be applied, if neither mitigating nor aggravating circumstances are present in the commission of the crime, or if the act is attended by a mitigating circumstance and there is no aggravating circumstance. However, this rule may no longer be followed in these cases, although the penalty prescribed by law is reclusion perpetua to death, since reclusion perpetua, which was an indivisible penalty before, is now a divisible penalty with a duration from 20 years and one (1) day to 40 years, in accordance with Article 27 of the Revised Penal Code, as amended by Republic Act No. 7659. 
Consequently, the penalty of reclusion perpetua to death should at present be deemed to fall within the purview of the penalty prescribed which does not have one of the forms specially provided for in the Revised Penal Code, the periods of which shall be distributed, applying by analogy the prescribed rules, in line with Article 77 of the Revised Penal Code. Pursuant to this principle, the penalty of reclusion perpetua to death shall have the following periods: Death, as the maximum; thirty (30) years and one (1) day to forty (40) years, as the medium; and twenty (20) years and one (1) day to thirty (30) years, as the minimum. As there is no mitigating or aggravating circumstance shown to have attended the commission of the offense charged against Gatward, the penalty to be imposed on him shall be within the range of the medium period. On the other hand, since U Aung Win is favored by one mitigating circumstance without any aggravating circumstance to be taken against him, the penalty which may be imposed on him shall be within the range of the minimum period. (Art. 64(1) & (2), Revised Penal Code) The accused in these cases may not enjoy the benefit of Act No. 4103, the Indeterminate Sentence Law, for under Section 2 of the said Act, its provisions shall not apply to those convicted of offenses punished with life imprisonment, which has been interpreted by the Supreme Court as similar to the penalty of reclusion perpetua as far as the non-application of the Indeterminate Sentence Law is concerned. (People vs. Simon, G.R. No. 93028, July 29, 1994)[5] On those considerations, the trial court handed down its verdict on March 3, 1995 finding both accused guilty as charged, thus: WHEREFORE, in Criminal Case No. 94-6268, accused Nigel Richard Gatward is found guilty beyond reasonable doubt of transporting, without legal authority therefor, 5,237.70 grams of heroin, a prohibited drug, in violation of Section 4 of Republic Act No. 
6425, otherwise known as the Dangerous Drugs Act of 1972, as amended by Republic Act No. 7659; and there being no aggravating or mitigating circumstance shown to have attended the commission of the crime, he is sentenced to suffer the penalty of imprisonment for thirty-five (35) years of reclusion perpetua and to pay a fine of Five Million Pesos (P5,000,000.00). In Criminal Case No. 94-6269, accused U Aung Win is found guilty beyond reasonable doubt of importing or bringing into the Philippines 5,579.80 grams of heroin, a prohibited drug, without being authorized by law to do so, contrary to Section 3 of Republic Act No. 6425, the Dangerous Drugs Act of 1972, as amended by Republic Act No. 7659; and in view of the presence of one (1) mitigating circumstance of voluntary plea of guilty, without any aggravating circumstance to offset it, he is sentenced to suffer the penalty of imprisonment for twenty-five (25) years of reclusion perpetua and to pay a fine of One Million Pesos (P1,000,000.00). The heroin involved in these cases is declared forfeited in favor of the government and ordered turned over to the Dangerous Drugs Board for proper disposal. With costs de oficio.[6] It is apropos to mention at this juncture that during the pendency of this appeal, and while awaiting the filing of appellant's brief on an extended period granted to his counsel de parte, the Court received on September 5, 1995 a mimeographed form of a so-called Urgent Motion to Withdraw Appeal. It bears the signature of appellant but without the assistance or signature of his counsel indicated thereon. No reason whatsoever was given for the desired withdrawal and considering the ambient circumstances, the Court resolved on September 27, 1995 to deny the same for lack of merit.[7] On June 10, 1996, a letter was received from one H.M. Consul M.B.
Evans of the British Embassy, Consular Section, Manila, seeking an explanation for the aforesaid resolution and with the representation that a convicted person who did not, on reflection, wish to continue with an appeal would not need to prove merit but could simply notify the courts of his wish to withdraw and that would be the end of the matter. To be sure, this is not the first time that members of foreign embassies and consulates feel that they have a right to intrude into our judicial affairs and processes, to the extent of imposing their views on our judiciary, seemingly oblivious or arrogantly disdainful of the fact that our courts are entitled to as much respect as those in their own countries. Such faux pas notwithstanding, a reply was sent to Mr. Evans informing him that, while there is no arrangement whereby a foreign consular officer may intervene in a judicial proceeding in this Court but out of courtesy as enjoined in Republic Act No. 6713, the unauthorized pleading of appellant was made under unacceptable circumstances as explained in said reply; that it is not mandatory on this Court to dismiss an appeal on mere motion of an appellant; that the Court does not discuss or transmit notices of judicial action except to counsel of the parties; and that, should he so desire, he could coordinate with appellant's counsel whose address was furnished therein.[8] In a resolution dated June 19, 1996, appellant's counsel was ordered to show cause why he should not be disciplinarily dealt with or held for contempt for his failure to file appellant's brief. On July 24, 1996, said counsel and the Solicitor General were required to comment on the aforestated motion of appellant to withdraw his appeal, no brief for him having yet been filed. Under date of September 6, 1996, the Solicitor General filed his comment surprisingly to the effect that the People interposed no objection to the motion to withdraw appeal.
Appellant's counsel, on the other hand, manifested on November 4, 1996 that he was willing to file the brief but he could not do so since appellant asked for time to consult his pastor who would later inform said counsel, but neither that pastor nor appellant has done so up to the present. It would then be worthwhile to restate for future referential purposes the rules in criminal cases on the withdrawal of an appeal pending in the appellate courts. The basic rule is that, in appeals taken from the Regional Trial Court to either the Court of Appeals or the Supreme Court, the same may be withdrawn and allowed to be retracted by the trial court before the records of the case are forwarded to the appellate court.[9] Once the records are brought to the appellate court, only the latter may act on the motion for withdrawal of appeal.[10] In the Supreme Court, the discontinuance of appeals before the filing of the appellee's brief is generally permitted.[11] Where the death penalty is imposed, the review shall proceed notwithstanding withdrawal of the appeal as the review is automatic and this the Court can do without the benefit of briefs or arguments filed by the appellant.[12] In the case at bar, however, the denial of the motion to withdraw his appeal by herein appellant is not only justified but is necessary since the trial court had imposed a penalty based on an erroneous interpretation of the governing law thereon. Thus, in People vs. Roque,[13] the Court denied the motion of the accused to withdraw his appeal, to enable it to correct the wrongful imposition by the trial court of the penalty of reclusion temporal to reclusion perpetua for the crime of simple rape, in clear derogation of the provisions of Article 335 of the Revised Penal Code and the Indeterminate Sentence Law.
Similarly, in another case,[14] the motion to withdraw his appeal by the accused, whose guilt for the crime of murder was undeniable and for which he should suffer the medium period of the imposable penalty which is reclusion perpetua, was not allowed; otherwise, to permit him to recall the appeal would enable him to suffer a lesser indeterminate sentence erroneously decreed by the trial court which imposed the minimum of the penalty for murder, that is, reclusion temporal in its maximum period. In the cases at bar, the same legal obstacle constrained the Court to deny appellant's motion to withdraw his appeal. The trial court had, by considering reclusion perpetua as a divisible penalty, imposed an unauthorized penalty on both accused which would remain uncorrected if the appeal had been allowed to be withdrawn. In fact, it would stamp a nihil obstantium on a penalty that in law does not exist and which error, initially committed by this Court in another case on which the trial court relied, had already been set aright by this Court. 3. As amended by Republic Act No. 7659, the respective penalties imposable under Sections 3 and 4 of the Dangerous Drugs Act, in relation to Section 20 thereof, would range from reclusion perpetua to death and a fine of P500,000.00 to P10,000,000.00 if the quantity of the illegal drug involved, which is heroin in this case, should be 40 grams or more. In the same amendatory law, the penalty of reclusion perpetua is now accorded a defined duration ranging from twenty (20) years and one (1) day to forty (40) years, through the amendment introduced by it to Article 27 of the Revised Penal Code.
This led the trial court to conclude that said penalty is now divisible in nature, and that (c)onsequently, the penalty of reclusion perpetua to death should at present be deemed to fall within the purview of the penalty prescribed which does not have one of the forms specially provided for in the Revised Penal Code, and the periods of which shall be distributed by an analogous application of the rules in Article 77 of the Code. Pursuant to its hypothesis, the penalty of reclusion perpetua to death shall have the following periods: death, as the maximum; thirty (30) years and one (1) day to forty (40) years, as the medium; and twenty (20) years and one (1) day to thirty (30) years, as the minimum.[15] We cannot altogether blame the lower court for this impasse since this Court itself inceptively made an identical misinterpretation concerning the question on the indivisibility of reclusion perpetua as a penalty. In People vs. Lucas,[16] the Court was originally of the view that by reason of the amendment of Article 27 of the Code by Section 21 of Republic Act No. 7659, there was conferred upon said penalty a defined duration of 20 years and 1 day to 40 years; but that since there was no express intent to convert said penalty into a divisible one, there having been no corresponding amendment to Article 76, the provisions of Article 65 could be applied by analogy. The Court then declared that reclusion perpetua could be divided into three equal portions, each portion composing a period. In effect, reclusion perpetua was then to be considered as a divisible penalty.
In a subsequent re-examination of and a resolution in said case on January 9, 1995, occasioned by a motion for clarification thereof,[17] the Court en banc realized the misconception, reversed its earlier pronouncement, and has since reiterated its amended ruling in three succeeding appellate litigations.[18] The Court, this time, held that in spite of the amendment putting the duration of reclusion perpetua at 20 years and 1 day to 40 years, it should remain as an indivisible penalty since there was never any intent on the part of Congress to reclassify it into a divisible penalty. This is evident from the undisputed fact that neither Article 63 nor Article 76 of the Code had been correspondingly altered, to wit: Verily, if reclusion perpetua was reclassified as a divisible penalty, then Article 63 of the Revised Penal Code would lose its reason and basis for existence. To illustrate, the first paragraph of Section 20 of the amended R.A. No. 6425 provides for the penalty of reclusion perpetua to death whenever the dangerous drugs involved are of any of the quantities stated therein. If Article 63 of the Code were no longer applicable because reclusion perpetua is supposed to be a divisible penalty, then there would be no statutory rules for determining when either reclusion perpetua or death should be the imposable penalty. In fine, there would be no occasion for imposing reclusion perpetua as the penalty in drug cases, regardless of the attendant modifying circumstances. This problem revolving around the non-applicability of the rules in Article 63 assumes serious proportions since it does not involve only drug cases, as aforesaid. Under the amendatory sections of R.A. No.
7659, the penalty of reclusion perpetua to death is also imposed on treason by a Filipino (Section 2), qualified piracy (Section 3), parricide (Section 5), murder (Section 6), kidnapping and serious illegal detention (Section 8), robbery with homicide (Section 9), destructive arson (Section 10), rape committed under certain circumstances (Section 11), and plunder (Section 12). In the same resolution, the Court adverted to its holding in People vs. Reyes, [19] that while the original Article 27 of the Revised Penal Code provided for the minimum and the maximum ranges of all the penalties therein, from arresto menor to reclusion temporal but with the exceptions of bond to keep the peace, there was no parallel specification of either the minimum or the maximum range of reclusion perpetua. Said article had only provided that a person sentenced to suffer any of the perpetual penalties shall, as a general rule, be extended pardon after service thereof for 30 years. Likewise, in laying down the procedure on successive service of sentence and the application of the three-fold rule, the duration of perpetual penalties is computed at 30 years under Article 70 of the Code. Furthermore, since in the scales of penalties provided in the Code, specifically those in Articles 25, 70 and 71, reclusion perpetua is the penalty immediately higher than reclusion temporal, then its minimum range should by necessary implication start at 20 years and 1 day while the maximum thereunder could be co-extensive with the rest of the natural life of the offender. However, Article 70 provides that the maximum period in regard to service of the sentence shall not exceed 40 years. Thus, the maximum duration of reclusion perpetua is not and has never been 30 years which is merely the number of years which the convict must serve in order to be eligible for pardon or for the application of the three-fold rule. 
Under these accepted propositions, the Court ruled in the motion for clarification in the Lucas case that Republic Act No. 7659 had simply restated existing jurisprudence when it specified the duration of reclusion perpetua at 20 years and 1 day to 40 years. The error of the trial court was in imposing the penalties in these cases based on the original doctrine in Lucas which was not yet final and executory, hence open to reconsideration and reversal. The same having been timeously rectified, appellant should necessarily suffer the entire extent of 40 years of reclusion perpetua, in line with that reconsidered dictum subsequently handed down by this Court. In passing, it may be worth asking whether or not appellant subsequently learned of the amendatory resolution of the Court under which he stood to serve up to 40 years, and that was what prompted him to move posthaste for the withdrawal of his appeal from a sentence of 35 years. 4. The case of U Aung Win ostensibly presents a more ticklish legal poser, but that is not actually so. It will be recalled that this accused was found guilty and sentenced to suffer the penalty of reclusion perpetua supposedly in its minimum period, consisting of imprisonment for 25 years, and to pay a fine of P1,000,000.00. He did not appeal, and it may be contended that what has been said about the corrected duration of the penalty of reclusion perpetua which we hold should be imposed on appellant Gatward, since reclusion perpetua is after all an indivisible penalty, should not apply to this accused. Along that theory, it may be asserted that the judgment against accused U Aung Win has already become final.
It may also be argued that since Section 11(a) of Rule 122 provides that an appeal taken by one accused shall not affect those who did not appeal except insofar as the judgment of the appellate court is favorable and applicable to the latter, our present disposition of the correct duration of the penalty imposable on appellant Gatward should not affect accused U Aung Win since it would not be favorable to the latter. To use a trite and tired legal phrase, those objections are more apparent than real. At bottom, all those postulations assume that the penalties decreed in the judgment of the trial court are valid, specifically in the sense that the same actually exist in law and are authorized to be meted out as punishments. In the case of U Aung Win, and the same holds true with respect to Gatward, the penalty inflicted by the court a quo was a nullity because it was never authorized by law as a valid punishment. The penalties which consisted of aliquot one-third portions of an indivisible penalty are self-contradictory in terms and unknown in penal law. Without intending to sound sardonic or facetious, it was akin to imposing the indivisible penalties of public censure, or perpetual absolute or special disqualification, or death in their minimum or maximum periods. This was not a case of a court rendering an erroneous judgment by inflicting a penalty higher or lower than the one imposable under the law but with both penalties being legally recognized and authorized as valid punishments. An erroneous judgment, as thus understood, is a valid judgment.[20] But a judgment which ordains a penalty which does not exist in the catalogue of penalties or which is an impossible version of that in the roster of lawful penalties is necessarily void, since the error goes into the very essence of the penalty and does not merely arise from the misapplication thereof. Corollarily, such a judgment can never become final and executory.
Nor can it be said that, despite the failure of the accused to appeal, his case was reopened in order that a higher penalty may be imposed on him. There is here no reopening of the case, as in fact the judgment is being affirmed but with a correction of the very substance of the penalty to make it conformable to law, pursuant to a duty and power inherent in this Court. The penalty has not been changed since what was decreed by the trial court and is now being likewise affirmed by this Court is the same penalty of reclusion perpetua which, unfortunately, was imposed by the lower court in an elemental form which is non-existent in and not authorized by law. Just as the penalty has not been reduced in order to be favorable to the accused, neither has it been increased so as to be prejudicial to him. Finally, no constitutional or legal right of this accused is violated by the imposition upon him of the corrected duration, inherent in the essence and concept, of the penalty. Otherwise, he would be serving a void sentence with an illegitimate penalty born out of a figurative liaison between judicial legislation and unequal protection of the law. He would thus be the victim of an inadvertence which could result in the nullification, not only of the judgment and the penalty meted therein, but also of the sentence he may actually have served. Far from violating any right of U Aung Win, therefore, the remedial and corrective measures interposed by this opinion protect him against the risk of another trial and review aimed at determining the correct period of imprisonment. WHEREFORE, the judgment of the court a quo, specifically with regard to the penalty imposed on accused-appellant Nigel Richard Gatward in Criminal Case No. 94-6268 and that of accused U Aung Win in Criminal Case No. 94-6269, is hereby MODIFIED in the sense that both accused are sentenced to serve the penalty of reclusion perpetua in its entire duration and full extent. 
In all other respects, said judgment is hereby AFFIRMED, but with costs to be assessed against both accused in all instances of these cases.
Red Hat Bugzilla – Bug 427617: Python.h unconditionally (re)defines _XOPEN_SOURCE
Last modified: 2008-01-31 18:08:39 EST

Description of problem:
The Python.h header unconditionally defines _XOPEN_SOURCE, even if it's already defined. This causes redefinition errors in kdebindings (PyKDE4) with GCC 4.3.

Version-Release number of selected component (if applicable): python-2.5.1-19.fc9

How reproducible: Always

Steps to Reproduce:
1. Try building kdebindings in Rawhide with GCC 4.3.

Actual results:

    In file included from /usr/include/python2.5/pyconfig.h:6,
                     from /usr/include/python2.5/Python.h:8,
                     from /usr/include/python2.5/sip.h:29,
                     from /builddir/build/BUILD/kdebindings-3.97.0/x86_64-redhat-linux-gnu/python/pykde4/sip/solid/sipAPIsolid.h:28,
                     from /builddir/build/BUILD/kdebindings-3.97.0/x86_64-redhat-linux-gnu/python/pykde4/sip/solid/sipsolidpart1.cpp:24:
    /usr/include/python2.5/pyconfig-64.h:944:1: error: "_XOPEN_SOURCE" redefined
    <command-line>: error: this is the location of the previous definition

Expected results: No error

1. Python isn't exactly "well namespaced".
2. This file is providing information you can't get any other way, about what Python thinks the environment is.
3. python*/*.h can/do trigger off these headers, so if you don't include Python.h _first_ (so the same features are visible) you are taking your compiles into your own hands IMO.
...feel free to file upstream, but this isn't something I want to change.

You're misunderstanding me: all I'm asking for is to change this:

    #define _XOPEN_SOURCE 600

to:

    #undef _XOPEN_SOURCE
    #define _XOPEN_SOURCE 600

or:

    #ifndef _XOPEN_SOURCE
    #define _XOPEN_SOURCE 600
    #endif

or maybe, to make it absolutely correct:

    #if !defined(_XOPEN_SOURCE) || _XOPEN_SOURCE < 600
    #undef _XOPEN_SOURCE
    #define _XOPEN_SOURCE 600
    #endif

all three of which will have absolutely no effect on programs which are already working right now, but will all fix kdebindings.
Oh, and to implement the #undef solution, adding:

    #undef _XOPEN_SOURCE

to the top of the multilib hack /usr/include/python2.5/pyconfig.h (before including the wordsize-specific version) should be enough.

If you think that'll fix your problem, and cause no other problems, just do the #undef manually before including pyconfig.h.

To expand on point #3: it doesn't really matter what the header _changes_ _XOPEN_SOURCE to after features.h has been included, and other parts of the python headers can be relying on those features being enabled. So IMNSHO, anything using pyconfig.h should be including it first ... so that features.h gets set up correctly, or alternatively set everything up manually with the required #define's and #undef's (good luck with that). I understand that just putting this hack in pyconfig.h now will probably not do anything bad, but I'd have to make sure it continues to not do anything bad for each new upstream change we take ... and again, it gives people an unrealistic impression of how they can/should be using pyconfig.h.

1. This is generated code; we'd have to hack the sip generator to add the #undef. There's probably no way it can be fixed in kdebindings (except possibly by filtering out the -D_XOPEN_SOURCE from the command line, but I don't even know where this is set, probably a global cmake or KDE setting).
2. The sources _are_ including pyconfig.h first; the existing _XOPEN_SOURCE definition comes from the command line, not some other header.
3. I believe the correct solution is really to fix this in Python. A header must not blindly assume that feature macros like this are not set; many projects set these on the command line.
And 4. I don't understand this resistance to a change which will change absolutely nothing except to make projects which currently have redefinition errors with GCC 4.3 compile again (and make them do the exact same thing they always did with GCC 4.1).
I can reassign this against sip, of course, but I believe adding the #undef in sip would be a crude hack around a bug in Python, which is likely to affect other Python-using software too, not just sip-generated bindings, and which is trivial to fix correctly.

Ok, if you can show me one other thing that puts -D_XOPEN_SOURCE on the command line ... I'll work around it for that case, but that's a pretty poor thing to do IMNSHO.

Googling for "-D_XOPEN_SOURCE" (with the quotes) returns thousands of results. Also, searching for "-D_XOPEN_SOURCE" Python.h in Google shows that at least (some versions of) SuperKaramba and mod_python are using both -D_XOPEN_SOURCE on the command line and #include <Python.h>. They may have their own workarounds for this issue, though, and AFAIK this issue also only affects C++ (it's still a warning in C).

> Googling for "-D_XOPEN_SOURCE" (with the quotes) returns thousands of results.

That's fine; any apps not including pyconfig.h can do that without any problems. SuperKaramba isn't in Fedora, and I've just done a mod_python mock build and it doesn't use _XOPEN_SOURCE on any files AFAICS.

FWIW, kdebindings 4.0.1 now builds with GCC 4.3, so this must have been worked around somewhere in SIP or kdebindings itself.
In the world of computer science, recursion refers to the technique of defining a thing in its own terms. In other words, a recursive function calls itself for processing. In this article, we will understand the concept of the recursive function in Python, a widely used programming language of the 21st century.

What is Python Recursion?

In Python, a group of related statements that performs a specific task is termed a 'function.' So, functions break your program into smaller chunks. And it is common knowledge that a function can call other functions in Python. But some functions can call themselves. These are known as recursive functions. Consider two parallel mirrors placed facing one another. Any object kept between the mirrors would be reflected recursively. Let us go into detail about the recursive function to understand its working clearly.

The Recursive Function

We know that a recursive function in Python calls itself, as it is defined via self-referential expressions, i.e., in terms of itself. It keeps repeating its behaviour until a particular condition is met to return a value or result. Let us now look at an example to learn how it works.

Also read: Python Interview Questions & Answers

Suppose that you want to find the factorial of an integer. The factorial is the product of all numbers from 1 up to that integer. For instance, the factorial of 5 (written as 5!) is 1*2*3*4*5, i.e., 120. We have a recursive function calc_factorial(x), which is defined as follows:

    def calc_factorial(x):
        # Recursive function to find an integer's factorial
        if x == 1:
            return 1
        else:
            return x * calc_factorial(x - 1)

What would happen if you call this function with a positive integer like 4? Each function call adds a stack frame until we reach the base case (when the number reduces to 1). The base condition is required so that the recursion ends and does not go on indefinitely.
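To see the call stack build up and unwind, the factorial function can be instrumented with a trace. This is an illustrative sketch: the depth parameter and the print calls are additions for demonstration and are not part of the article's original example.

```python
def calc_factorial(x, depth=0):
    # depth is only used to indent the trace output
    indent = "  " * depth
    print(indent + "calc_factorial(" + str(x) + ") called")
    if x == 1:
        result = 1
    else:
        result = x * calc_factorial(x - 1, depth + 1)
    print(indent + "returning " + str(result))
    return result

print(calc_factorial(4))  # prints the nested call trace, then 24
```

Each level of indentation in the output corresponds to one stack frame added by a recursive call.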
So, in the given case, the value 24 will be returned after the fourth call.

Implementation of Recursive Functions in Python

There can be varied applications of recursive functions in Python. For instance, suppose you want to make a graphic with a repeated pattern, say a Koch snowflake. Recursion can be used for generating fractal patterns, which are made up of smaller versions of the same design. Another example is game-solving: you can write recursive algorithms for solving Sudoku and numerous complex games. Recursion is most commonly used in searching, sorting, and traversal problems. A striking feature of the technique is that recursive implementation allows backtracking. So, recursion is all about building a solution incrementally and discarding partial solutions that do not satisfy the problem constraints at any stage. Two things are necessary to achieve this: maintaining state and a suitable data structure. Read on to get familiar with these terms.

Read: Python Developer Salary in India

Maintaining the state

Each recursive call in Python has its own execution context. While dealing with recursive functions in Python, you have to thread the state through each recursive call. With this, the current state becomes a part of the current call's execution context. You can also keep the state in global scope. For example, suppose you are using recursion to calculate 1+2+3+...+10. Here, the current number you are adding and the sum accumulated so far form the state that you need to maintain. Maintaining state by threading involves passing the updated current state as an argument through each call. Here's how you can do it:

    def sum_numbers(current_number, accumulated_sum):
        # Base case: return the final state
        if current_number == 11:
            return accumulated_sum
        # Recursive case: thread the state through the recursive call
        else:
            return sum_numbers(current_number + 1, accumulated_sum + current_number)

Alternatively, you can use a global mutable state.
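Before moving on to the global-scope alternative, the argument-threading version described above can be exercised as a self-contained script. Starting from current_number=1 and accumulated_sum=0, it returns 1 + 2 + ... + 10:

```python
def sum_numbers(current_number, accumulated_sum):
    # Base case: past 10, return the accumulated total (the final state)
    if current_number == 11:
        return accumulated_sum
    # Recursive case: pass the updated state along through the call
    return sum_numbers(current_number + 1, accumulated_sum + current_number)

total = sum_numbers(1, 0)
print(total)                       # 55
print(total == sum(range(1, 11)))  # True: matches the iterative built-in
```

Note that no variable outside the function is touched; all state travels through the arguments.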
To maintain state using this method, you keep the state in global scope:

    current_number = 1
    accumulated_sum = 0

    def sum_numbers():
        global current_number
        global accumulated_sum
        # Base case
        if current_number == 11:
            return accumulated_sum
        # Recursive case
        else:
            accumulated_sum = accumulated_sum + current_number
            current_number = current_number + 1
            return sum_numbers()

Recursive Data Structures

A data structure is considered recursive if it can be defined in terms of smaller and simpler versions of itself. Examples of recursive data structures include lists, trees, hierarchical structures, dictionaries, etc. A list can have other lists as elements. A tree has sub-trees, leaf nodes, and so on. It is important to note here that the structure of recursive functions is often modeled after the data structures they take as inputs. So, recursive data structures and recursive functions go hand in hand.

Recursion in Fibonacci Computation

The Italian mathematician Fibonacci first defined the Fibonacci numbers in the 13th century to model the population growth of rabbits. He deduced that, starting from one pair of rabbits in the first year, the number of rabbit pairs born in a given year equals the sum of the numbers of rabbit pairs born in each of the last two years. This can be written as:

    F(n) = F(n-1) + F(n-2), with base cases F(0) = 1 and F(1) = 1.

When you write a recursive function to compute a Fibonacci number, naive recursion can result: the definition of the recursive function is followed literally, and you end up recomputing values unnecessarily. To avoid recomputation, you can apply the lru_cache decorator to the function. It caches the results and keeps the process from becoming inefficient.

Read more: Top 10 Python Tools Every Python Developer Should Know

Pros and Cons of Recursion

Recursion helps simplify a complex task by splitting it into sub-problems. Recursive functions make for cleaner code and also simplify sequence generation.
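The memoization mentioned in the Fibonacci section above can be sketched with functools.lru_cache; the fib name and the F(0) = F(1) = 1 convention follow the article, while the cache_info check is an added illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Base cases follow the article's convention: F(0) = 1, F(1) = 1
    if n < 2:
        return 1
    # Each distinct n is computed once; repeated calls hit the cache
    return fib(n - 1) + fib(n - 2)

print(fib(10))                    # 89 under this convention
print(fib.cache_info().hits > 0)  # True: the cache was actually used
```

Without the decorator, the same call would recompute small subproblems exponentially many times.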
But recursion does not come without its limitations. Sometimes the calls may prove expensive and inefficient, as they use up a lot of time and memory. Recursive functions can also be difficult to debug.

Wrapping Up

In this article, we covered the concept of Python recursion, demonstrated it using some examples, and also discussed some of its advantages and disadvantages. With all this information, you can easily explain recursive functions in your next Python interview.
Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Won't Fix
- Affects Version/s: 2.0.0
- Fix Version/s: None
- Component/s: core/search
- Labels: None
- Lucene Fields: New, Patch Available

Description.

Activity

Also tried this with DOCS_NUM == 1,000,000. Right after index creation it was:

    [junit] 1687ms elapsed for DocCaching sort
    [junit] 1922ms elapsed for FieldCache'd sort

But the next 2 times I ran it (without index setup) the timings were near this:

    [junit] 94ms elapsed for DocCaching sort
    [junit] 1797ms elapsed for FieldCache'd sort

fork="true" maxmemory="105m" attributes need to be added to the <junit> task for the test to be runnable with DOCS_NUM == 1,000,000.

Artem: I've only skimmed your patch briefly, but I have a few comments:

1) Since Lucene sorting has historically been based on indexed fields, and this new patch results in fields being sorted on the stored values, you should definitely point this out in the javadocs of your DocCachingSortFactory and DocCachingFieldComparatorSource classes ... in big bold letters ... I would go even so far as to name the classes StoredFieldComparatorSource and StoredFieldSortFactory instead of the names you currently use.

2) Your use of WeakHashMap with keys that are never referenced elsewhere, to ensure the cache is purged on every GC, is not something I've ever seen before ...

    +public class WeakDocumentsCache {
    +  private Map cache = Collections.synchronizedMap(new WeakHashMap());
    +
    +  public Document getById(int docId)
    +
    +  public void put(int docId, Document document)

... have you tested out the performance of this approach with various GC implementations? Skimming the javadocs for WeakReference/WeakHashMap, it seems like perhaps SoftReference would be better suited for your purposes.

3) Making your new DocCachingIndexReader and WeakDocumentsCache classes part of the Lucene public API seems to be a little outside the scope of this change ...
perhaps they should be left as private static inner classes inside the individual classes they are used by (DocCachingFieldComparatorSource and DocCachingIndexReader, respectively) ... even if these classes are left public, DocCachingIndexReader should probably subclass FilterIndexReader to reduce the amount of duplication.

4) Your check for FieldSelector usage in DocCachingIndexReader doesn't check that the FieldSelector used is the same every time, which means you can't trust your cache ... fixing this could be complicated, and serves as another reason why it would be easier if DocCachingIndexReader were made a private inner class of DocCachingFieldComparatorSource, where you know exactly how it's going to be used.

5) Speaking of FieldSelector, your use case seems like a perfect example of when a FieldSelector would make sense, to read only the field(s) that are needed for sorting.

The test case uses only tiny documents, and the reported timings for multiple searches with FieldCache make it appear that the version of Lucene used contains the bug that caused FieldCaches to be frequently recomputed unnecessarily. I suggest trying the test with much larger documents, of realistic size, and using current Lucene source. I'm sure the patch will make things much slower with the current implementation. As Hoss suggests, performance would be improved considerably by using a FieldSelector to obtain just the sort field, but even so it will be slow unless the sort field is arranged to be early in the documents, ideally the first field, and a LOAD_AND_BREAK FieldSelector is used. Another important performance variable will be the number of documents retrieved in the test query. If the number of documents satisfying the query is a sizable percentage of the total collection size, I'm pretty sure the patch will be much slower than using FieldCache.
Why would DocCaching sort be so much faster than FieldCache sort the second time on the same IndexReader? Using a cached FieldCache entry for sorting involves an array lookup... how do you improve on that? Or do you open a new reader for each test? Also, you specify the size of the index, but not the size of the number of documents to be sorted (that match the query). DocCacheSorting should use much more memory than the FieldCache (and be slower) if the number of documents to be sorted is large, right? Sorry for some of the redundant comments... Chucks comment wasn't visible to me for some strange reason when I left mine. Hi guys! Thanks for value comments. What a feedback! I'd like to stress the point of my fix - to avoid costly FieldCache population with field values from the whole index. Your point that it will be slower for cases when filtered sets be nearly as large as the whole index is valid. But is it a practical point? Lucene shines on big indexes and queries resulting with full index are not very useful I guess. I think it's good idea to hide the caching reader class and utilize FieldSelector mechanism to make the fix more effective. However do you think this improvement worth doing? You are strong opposition and I'm not feeling up to an endless fight I'm serious, let me know what you think. This fix will have its limitations by no means but I think the above OutOfMemory scenario with current sorting mechanism alone makes this fix legitimate. Renamed classes as Hoss proposed. Tried to hide DocFieldCachingIndexReader, no luck - IndexReader members access rights problems raised. FieldSelector is now verified to be the same and is used by StoredFieldComparatorSource for DocFieldCachingIndexReader creation. Timings didn't change much - they probably would if documents in index were larger. I have this same issue with a constantly changing large index where users needs a current view. 
The first search after each frequent IndexReader reopen is slow, due primarily to the requirement to rebuild the FieldCache for sort fields. I don't believe this patch, or any continuation along these lines, will help my issue. Documents are large and queries frequently return large result sets, say 20% of the entire multi-million-document index or more. Hundreds of thousands of document() retrievals, even with a fast LOAD_AND_BREAK FieldSelector finding sort fields at the beginning of each Document, are not going to beat FieldCache's single traversal of the postings for the sort fields.

Another approach I've looked at is Robert Engel's IndexReader.reopen(). I think this direction is more promising. Artem, you might want to look at this. At least the version I've seen is not integrated with FieldCache, but it seems this would be feasible. Segments to the left of the first changed segment maintain their doc-ids, so an improved FieldCache could iterate over just the postings in the first changed segment and those to the right. Unless somebody else does this first, it's on my list to improve IndexReader.reopen() with this optimization and to make other enhancements my app needs (e.g., support for ParallelReader; the current implementation fails in this case).

A specific comment on the new patch: the introduction of FieldSelectors is too restrictive. The same doc-id may be retrieved using multiple FieldSelectors in different calls to IndexReader.document(). Any implementation of the cache needs to support this.

Ok guys, I think I'm finished with this. Feel free to include it in Lucene or not. I'm quite happy already using it in my app (Sharehound); it does solve the problem for me.
Artem: while I agree with Yonik's and Chuck's comments about your performance tests probably not being realistic in the general case, what I really like about your patch is that it makes no attempt to change the default behavior of sorting in a way that would hurt users by default. Users would only get this behavior if they choose to use it, and while the "typical" case may not benefit from it, I'm sure there are plenty of situations where people know their index is big, and know that they are doing a search that should have a small number of results. Adding something like this doesn't preclude future work on making sorting using FieldCaches less prohibitive (i.e., an IndexReader.reopen approach).

What does concern me about this patch is that without better javadocs explaining exactly what it does and when it's useful, it could easily be misused by people who stumble upon it. I also don't understand why, in your updated version of the patch, you aren't making an attempt to use the FieldSelector version of IndexReader.document(), since it should always be faster in this use case, and would result in your memory cache taking up less space.

I also don't understand your "IndexReader members access rights problems raised" ... a subclass of IndexReader should be able to live freely in any package, including as a private static class inside of another class. Perhaps you ran into problems because you are attempting to subclass methods you don't really need to worry about subclassing? ... Yet another reason to subclass FilterIndexReader and save yourself some headaches.

Robert, could you attach your current implementation of reopen() as well? The attachment did not come through in your java-dev message today, or the one from 12/11. I'd like to look at an incremental implementation of reopen() for FieldCache. Thanks.

The IndexReaderUtils I posted is not compilable; there are a few more classes needed. These are unnecessary for understanding the technique.
It was written this way to minimize the dependencies on Lucene, and not have to apply patches to my local codebase.

Refactored the fix according to Hoss's recommendations. Now only the StoredFieldSortFactory class is left public; a FieldSelector is always used to fetch the single needed field from documents. Guys, please remove all the attachments except #7; things get messy. Btw, I've integrated the modified fix into Sharehound: a 4000-document sorted search improved from 0.4s to 0.1s. That's great, thanks guys for your time and consideration!

Removed several inner classes and the documents cache from StoredFieldSortFactory. Now the whole class is pretty clean and simple. Checked the timings in the test; they remain pretty much the same. Checked it in Sharehound with a larger index (~1 million documents); it sorts a result set with 4000 docs in ~0.2s now, taking 70M of RAM, and that's fine by me. With the standard new Sort(field, false) it takes (for the first search on a field) about 30-40s and quite a lot of memory (after several sorted searches on different fields it took about 500M).

SPRING_CLEANING_2013: We can reopen if necessary. Think this code has been extensively reworked anyway.

I've tried the test with DOCS_NUM == 10,000,000. DocCaching sort took about 1s with a standard amount of memory (-Xmx80m), while the FieldCache'd sort hit OutOfMemoryError even with 1G (-Xmx1000m). The resulting index size was 640M; its creation took at least 7hrs.
All the client really has to know about the remote object is its remote interface. Everything else it needs (for instance, the stub classes) can be loaded from a web server (though not an RMI server) at runtime using a class loader. Indeed, this ability to load classes from the network is one of the unique features of Java. This is especially useful in applets: the web server can send the browser an applet that communicates back with the server, for instance, to allow the client to read and write files on the server. However, as with any time that classes are loaded from a potentially untrusted host, they must be checked by a SecurityManager.

Unfortunately, while remote objects are actually quite easy to work with when you can install the necessary classes in the local client class path, doing so when you have to dynamically load the stubs and other classes is fiendishly difficult. The class path, the security architecture, and the reliance on poorly documented environment variables are all bugbears that torment Java programmers. Getting a local client object to download remote objects from a server requires manipulating all of these in precise detail. Making even a small mistake prevents programs from running, and only the most generic of exceptions is thrown to tell the poor programmer what they did wrong. Exactly how difficult it is to make the programs work depends on the context in which the remote objects are running. In general, applet clients that use RMI are somewhat easier to manage than standalone application clients. Standalone applications are feasible if the client can be relied on to have access to the same .class files as the server has. Standalone applications that need to load classes from the server border on impossible.

Example 18-6 is an applet client for the Fibonacci remote object. It has the same basic structure as the FibonacciClient in Example 18-5.
However, it uses a TextArea to display the message from the server instead of using System.out.

import java.applet.Applet;
import java.awt.*;
import java.awt.event.*;
import java.rmi.*;
import java.math.BigInteger;

public class FibonacciApplet extends Applet {

  private TextArea resultArea = new TextArea("", 20, 72, TextArea.SCROLLBARS_BOTH);
  private TextField inputArea = new TextField(24);
  private Button calculate = new Button("Calculate");
  private String server;

  public void init() {
    this.setLayout(new BorderLayout());
    Panel north = new Panel();
    north.add(new Label("Type a non-negative integer"));
    north.add(inputArea);
    north.add(calculate);
    this.add(resultArea, BorderLayout.CENTER);
    this.add(north, BorderLayout.NORTH);
    Calculator c = new Calculator();
    inputArea.addActionListener(c);
    calculate.addActionListener(c);
    resultArea.setEditable(false);
    server = "rmi://" + this.getCodeBase().getHost() + "/fibonacci";
  }

  class Calculator implements ActionListener {
    public void actionPerformed(ActionEvent evt) {
      try {
        String input = inputArea.getText();
        if (input != null) {
          BigInteger index = new BigInteger(input);
          Fibonacci f = (Fibonacci) Naming.lookup(server);
          BigInteger result = f.getFibonacci(index);
          resultArea.setText(result.toString());
        }
      } catch (Exception ex) {
        resultArea.setText(ex.getMessage());
      }
    }
  }
}

You'll notice that the rmi URL is built from the applet's own codebase. This helps avoid nasty security problems that arise when an applet tries to open a network connection to a host other than the one it came from. RMI-based applets are certainly not exempt from the usual restrictions on network connections. Example 18-7 is a simple HTML file that can be used to load the applet from the web browser.
<html>
<head>
<title>RMI Applet</title>
</head>
<body>
<h1>RMI Applet</h1>
<p>
<applet align="center" code="FibonacciApplet" width="300" height="100">
</applet>
<hr />
</p>
</body>
</html>

Place FibonacciImpl_Stub.class, Fibonacci.class, FibonacciApplet.html, and FibonacciServer.class in the same directory on your web server. Add this directory to the server's class path and start rmiregistry on the server. Then start FibonacciServer on the server. For example:

% rmiregistry &
% java FibonacciServer &

Make sure that both of these are running on the actual web server machine. Many web server farms use different machines for site maintenance and web serving, even though both mount the same filesystems. To get past the applet security restriction, both rmiregistry and FibonacciServer have to be running on the machine that serves the FibonacciApplet.class file to web clients. Now load FibonacciApplet.html into a web browser from the client. Figure 18-2 shows the result.

For applications, it's much easier if you can load all the classes you need before running the program. You can load classes from a web server running on the same server the remote object is running on, if necessary. To do this, set the java.rmi.server.codebase Java system property on the server (where the remote object runs) to the URL where the .class files are stored on the network. For example, to specify that the classes can be found at, you would type:

% java -Djava.rmi.server.codebase= FibonacciServer &
Fibonacci Server ready.

If the classes are in packages, the java.rmi.server.codebase property points to the directory containing the top-level com or org directory rather than the directory containing the .class files themselves. Both servers and clients will load the .class files from this location if the files are not found in the local class path first. Loading classes from the remote server makes the coupling between the server and the client a little less tight.
However, any client program you write will normally have to know quite a bit about the system it's talking to in order to do something useful. This usually involves having at least the remote interface available on the client at compile time and runtime. Even if you use reflection to avoid that, you'll still need to know the signatures and something about the behavior of the methods you plan to invoke. RMI just doesn't lend itself to truly loose coupling like you might see in a SOAP or, better yet, RESTful server. The RMI design metaphor is more running one program on several machines than it is having several programs on different machines that communicate with each other. Therefore, it's easiest if both sides of the connection have all the code available to them when the program starts up.
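The examples in this excerpt refer to a Fibonacci remote interface and a server-side implementation that are not shown. A minimal sketch of what they might look like follows; the iterative calculation and the class name are assumptions, and a real server object would additionally need to be exported (for example, by extending UnicastRemoteObject) and bound in the registry under the name "fibonacci".

```java
import java.math.BigInteger;
import java.rmi.Remote;
import java.rmi.RemoteException;

// The remote interface the applet casts to after Naming.lookup().
interface Fibonacci extends Remote {
    BigInteger getFibonacci(BigInteger n) throws RemoteException;
}

// Sketch of the server-side calculation only; a real implementation
// would also be exported and registered with rmiregistry.
public class FibonacciSketch implements Fibonacci {
    public BigInteger getFibonacci(BigInteger n) {
        BigInteger a = BigInteger.ZERO;
        BigInteger b = BigInteger.ONE;
        // Iterate n times; afterwards, a holds fib(n) with fib(0) = 0.
        for (BigInteger i = BigInteger.ZERO; i.compareTo(n) < 0; i = i.add(BigInteger.ONE)) {
            BigInteger next = a.add(b);
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(new FibonacciSketch().getFibonacci(BigInteger.TEN));
    }
}
```

Using BigInteger rather than long matches the applet above and lets the server answer for arbitrarily large indices.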
Source: https://flylib.com/books/en/1.135.1.121/1/
What's the potential for incorporating Grasshopper into Compute? e.g. is there a plan for Grasshopper 'components' to be incorporated into the API?

Hello, @dekanifisher! It's not only possible, but there is already a prototype! Have you heard of Resthopper? It was a project at the Thornton Tomasetti AEC Tech hackathon that McNeel kindly incorporated into their compute.rhino3d repo. @will just finished updating the public server at compute.rhino3d.com to reflect the latest changes. The route that you are looking for is compute.rhino3d.com/grasshopper. It accepts a POST request with three fields:

{
  "algo": "Your base64-encoded string containing the entire GHX file goes here",
  "pointer": "If algo is null, an optional public URL to your GHX goes here",
  "values": [ A list of Resthopper data trees which serves as your input goes here ]
}

So, essentially, you are passing your entire GHX file (either as a string in the POST body or a public URL) and a list of Resthopper data trees that mimics the Grasshopper data tree structure and contains serialized RhinoCommon geometry. Here's what the route looks like. And this is the I/O necessary to talk to it.

Finally, there is a formatting standard that defines how the Grasshopper file has to be organized. Each Resthopper tree has a ParamName field. This lets the compute server know which param in the definition each data tree from the "values" section belongs to. In the definition itself, all you have to do is create a group around each input param and call it "RH_IN:<type_enum>:<unique_name>". For instance, consider this definition: your input schema will include two trees, one named "RH_IN:108:Crv1" and another one named "RH_IN:108:Crv2". The 108 enum simply lets the server know that you are passing a curve.
The complete list of type enums is available here: You will then receive back the same schema, however the “algo” and the “pointer” fields would be empty, and the values section will contain one DataTree of points with ParamName set to “RH_OUT:102:OutPt”. You can have any number of inputs / outputs as long as the names remain unique. Let me know if you have any questions! Hope this helps, Best! Sergey wow thank you. very excited about this! Does resthopper work with the javascript compute api? Also does resthopper allow for third party grasshopper components? (I suppose ghx files would have to store third party components for this to work?) At the moment Resthopper doesn’t really have a client-side API. You would have to serialize and pass along C# objects with RhinoCommon geometry inside. But this is a very interesting thought - could we potentially accept a JavaScript version of the DataTree with 3dm.io objects inside… Probably not in the nearest future, but that is definitely something for us to keep on the radar! With regards to the third-party plugins, that depends on the server-side setup. In order to run, Compute needs an installed version of Rhino 7 (WIP), so basically, whatever GH plugins are installed on that machine are the plugins that you can call from Resthopper. The GHX file doesn’t really contain any executable code - it is just an XML markup that tells Grasshopper which components are located where, what connects to what and which assemblies each component corresponds to. The assemblies themselves have to be present on the server. @will will have a better idea of what additional plugins (if any) are currently available on compute.rhino3d.com. If you do want to use plugins that are not available on the public Compute server, you can always install Rhino WIP, then clone the repo and spin up your own server. Rhino 7 can be downloaded from and will share a license with your Rhino 6 installation. 
If you have any questions about the setup process, I'd be happy to walk you through.

There aren't any third-party plug-ins on compute.rhino3d.com right now for either Rhino or Grasshopper. It's something we'd like to support in the future though!

Okay cool, and how about having multiple instances running? If there is one installed version of Rhino 7 running on a server with one license, does this mean that only one user can be accessing the Compute functionality at any given time?

We only run one instance of compute per VM, with a load balancer to distribute requests amongst the VMs. Mostly requests are quick enough that you don't need to worry about requests timing out in the queue, but it is possible to lock up the machine if you feed it, say, a big mesh boolean operation to calculate.

Do you have an example of what the "list of Resthopper data trees which serves as your input" looks like in the POST request to compute.rhino3d.com/grasshopper? Thanks!

Hi @kylafarrell! Yes, certainly. The funny thing is that when I looked through the old POST requests in order to send you a sample, I noticed that there is a serialization hiccup that we've missed, haha. Everything still works, but we are inadvertently doubling up on the input data (which just shows how very alpha all of this is at the moment). We'll try to fix this shortly. But here is your sample: ResthopperSample.zip (12.1 KB)

So, again, the three main parts here are "algo", "pointer" and "values". Algo and Pointer are interchangeable. "algo" houses a base64-encoded string representing the entirety of the GHX file. If "algo" is null, Resthopper will look at the "pointer" in search of a public URL that houses a GHX accessible via a GET request (say, on an S3 bucket). Then the "values" section contains the actual list of Resthopper DataTrees. Each DataTree has a "ParamName" field that corresponds to the name of the group the target input param lives inside (see above).
The inner tree is the dictionary of Grasshopper Path as string to serialized Rhino3dmIo geometry. The "Keys" and "Values" fields are unnecessarily reiterated, as I mentioned in the beginning. So here we're sending a simple script with a point at "RH_IN:102:0001" and a double at "RH_IN:105:0001". You should get something like this back: ResthopperSampleResponse.zip (1.4 KB) Hope that helps!

@enmerk4r A follow-up question. I have been searching the forums but I cannot find an example of what utility or Rhino3dm function to use to generate the dictionary of the grasshopper path and/or serialize the Rhino3dm geometry. Ideally I would like to use the Rhino3dm Python library. Is this currently only supported in C#?

@kylafarrell Sure, if you wanted to construct a tree in C# you would do something like this. (I am writing this directly here, so there might be typos, haha - but this is the general gist).

To send a flat list of geometries:

// Create a Resthopper Data Tree and give it a param name
ResthopperDataTree myAwesomeTree = new ResthopperDataTree();
myAwesomeTree.ParamName = "RH_IN:108:0001";

// Create a Resthopper path
int pathInteger = 0;
GhPath path = new GhPath(pathInteger);

foreach (Polyline pline in myPolylineList)
{
    // Convert each geometry to a Resthopper object
    // simply by feeding it to the constructor
    ResthopperObject rhObj = new ResthopperObject(pline);

    // Add to data tree at path
    myAwesomeTree.Append(rhObj, path);
}

// Add tree to schema
schema.Values.Add(myAwesomeTree);

To send a grafted tree: in order to send more complex trees, you would have to increment the value of the path above. So, for instance, you could do GhPath path = new GhPath(pathInteger++) inside the foreach loop in order to put each ResthopperObject on its own branch. You can create trees of any complexity by feeding an integer array to the GhPath constructor. So adding an object at any arbitrary path such as new GhPath(new int[] { 0, 0, 1, 3 }); is totally possible.
You just have to figure out a way of incrementing the path that serves your business logic. Do we have any helper functions? No. But we should. We'll try to look into adding some helper functions that can easily convert a nested list to a ready-to-go RH Data Tree at some point in the future. The only limitation is that in C# you can only have items in the leaves of the nested data structure, whereas data trees allow you to put objects anywhere in the nesting. For instance, you can't have a new List<List<string>>() { new List<string>(), "someRandomString" }, because C# doesn't allow a List<string> and a string to coexist at the same level. But this is something you could do in a Data Tree. So, if you need to work with some edge-case complicated tree structure, you'll have to do it by hand regardless.

Python? Right now Resthopper is C# only, with most of the development focused on getting everything to work smoothly on that front. A Python API would be cool and would definitely expand RH's user base, but that's not something we are considering in the nearest future. Good luck! Let me know if you have any further questions.

Here are a few more RESTHopper samples (edited with latest versions and live examples):
- Delaunay Mesh - (src) - Passes a list of points which get meshed via a gh definition.
- Extrusions - (src) - Uses html sliders as input to a gh definition solved with compute.
- updated broken links…

Thanks a lot Luis

Abi

@fraguada : I've tried to use rhino3dm and compute_rhino3d in Python to compute a Grasshopper model using Rhino Compute Service.
When I try to execute a POST request to the REST API with the following JSON:

{
  "algo": decoded,
  "pointer": None,
  "values": [
    {
      "ParamName": "RH_IN:Point",
      "InnerTree": {
        "{0;}": [
          { "type": "Rhino.Geometry.Point3d", "data": "{\"X\":0.0,\"Y\":0.0,\"Z\":0.0}" }
        ]
      },
      "Keys": [ "{0;}" ],
      "Values": [
        [ { "type": "Rhino.Geometry.Point3d", "data": "{\"X\":0.0,\"Y\":0.0,\"Z\":0.0}" } ]
      ]
    },
    {
      "ParamName": "RH_IN:Size",
      "InnerTree": {
        "{0;}": [
          { "type": "Rhino.Geometry.Point3d", "data": "{\"X\":0.0,\"Y\":0.0,\"Z\":0.0}" }
        ]
      },
      "Keys": [ "{0;}" ],
      "Values": [
        [ { "type": "Rhino.Geometry.Point3d", "data": "{\"X\":0.0,\"Y\":0.0,\"Z\":0.0}" } ]
      ]
    }
  ]
}

I get the response code 500 (Internal server error). I mention that I've been following the tutorial with the voronoi.ghx file provided here: and the corresponding JSON. That POST request ran properly, giving a 200 OK response. Could you tell me, please, if you think there is something wrong with my JSON. I mention also that this JSON is for a box model in Grasshopper.

Try the latest version of the compute-rhino3d package from PyPI. It has a few helper classes/functions that mean you don't have to construct the JSON yourself. Here's a sample.

import compute_rhino3d.Util
import compute_rhino3d.Grasshopper as gh
import rhino3dm
import json

compute_rhino3d.Util.authToken = ADD_TOKEN_HERE

pt1 = rhino3dm.Point3d(0, 0, 0)
circle = rhino3dm.Circle(pt1, 5)
angle = 20

# convert circle to curve and stringify
curve = json.dumps(circle.ToNurbsCurve().Encode())

# create list of input trees
curve_tree = gh.DataTree("RH_IN:curve")
curve_tree.Append([0], [curve])

rotate_tree = gh.DataTree("RH_IN:rotate")
rotate_tree.Append([0], [angle])

trees = [curve_tree, rotate_tree]

output = gh.EvaluateDefinition('workshop_step5.ghx', trees)
print(output)

Thank you! I will try.

Based on the discussion here I was able to load a Grasshopper file in Unity, so I would like to share the process, which I gathered in a video. You can also download the project data.

Hi Will!
Do you have the full python script for this one somewhere? Also showing how you get the output lines? Rick

Hey @r.h.a.titulaer, the full sample is on GitHub and I've updated it to extract the output from the grasshopper definition and write it to a 3dm file. It's not pretty, but it works.
Source: https://discourse.mcneel.com/t/grasshopper-within-compute-rhino3d/77656
Post your Comment

- Java Word Occurrence Example: In this example we will discuss how... can count the occurrences of each word in a file. In this example we will use... will demonstrate how to count occurrences of each word in a file.
- Java String Occurrence in a String: In this program you will learn how to find the occurrence...: C:\unique>java StringCount 2
- Java count frequency of words in the string: In this tutorial, you will learn how to count the occurrence of each word in the given string... of words in a string. In the given example, we have accepted a sentence.
- USE ARRAY TO FIND THE LARGEST NUMBER AND OCCURRENCE: count the occurrence of the largest number entered from the keyboard. For example, you entered 4, 6, 3, 2, 5, 6, 6, 6, 1 and 6. Your program will display that the largest number is 6 and the occurrence count is 5. Also display...
- Java count occurrence of number from array: Here we have created an example that will count the occurrence of numbers and find the number which has the highest occurrence. Now for this, we have created an array of 15 integers.
- how to delete a letter in a word?: For example, if I enter roseindia and I want to delete 's', then the output is roeindia.
- open word document: How to open a word document? Please go through the following link: Java Read word document file. The above link will provide an example that reads the document file using the POI library in Java.
- Word Count: This example counts the number of occurrences of a specific word... \kodejava>java WordCountExample 3 occurrences of the word 'you' in 'How r you? R you'
- Display the data to MS word: ...from the database (say I'm searching using an id) and it should be displayed in MS Word; I want it to be in a good format. For example, my Word doc has to be Name... a Word doc would help a lot! Thank you.
- from number to word: I want to know whether there is any method that can be used for changing a value from number to word. Example: if I write ten thousand, it will automatically be written as 10000. Java convert number
- Java Word Processor: Problem: Design and implement a class called... number of words (a hyphenated word is one word), number of sentences (ends...). You can break up a String into words or "tokens" by using Java's Scanner.
- Count instances of each word: I am working on a Java project that reads a text file from the command line and outputs an alphabetical listing of the words preceded by the occurrence count. My program compiles and runs.
- Question on reversing word of a sentence: Write a function that accepts a sentence as a parameter, and returns the same with each of its words.... Demonstrate the usage of this function from a main program. Example: Parameter
- Convert Text To Word: In this example, you will learn how to convert a text file to a Word file. Here, we are going to discuss the conversion of text to a Word file.
- java word counting - Java Beginners: Hi, I want code in Java that replaces a word with another when it occurs independently and ignores the case of the text... but this will change all the occurrences even if it is part of another word.
- export to word document - Java Beginners: Hi sir, when I click on a button under the JTable, for example a print button, I want to print that JTable in a Word document automatically; please provide a program. Hi Friend, try...
- display co-occurrence words in a file: How to write a Java program for counting co-occurring words in a file?
- return the position of last occurrence of an element: A method "findLast(E element)" returns the position of the last occurrence of the given element. If this element is not in the list, then the null reference should...
- Finding word using regular expression: This example describes the way to find the word from... library "Java is now Java": declaring text in which the word Java is to be found.
- Determining the Word Boundaries in a Unicode String: In this section, you will learn how to determine the word boundaries in a unicode string. Generally, we... to break the string into words. In the given example, we have invoked the factory...
- Java Word Count - Word Count Example in Java: This example illustrates how... Download Word count Example
- Final Key Word in Java: In Java, final is a keyword used in several different contexts. It is used before... will generate an error. Example: public class FinalField { public final int
- Java search word from text file: In this tutorial, you will learn how to search a word from a text file and display data related to that word. Here, we have... students. The example prompts the user to enter any student's name.
- Finding a given word using regular expression: This example describes the way to find a given word in a String and also the number of times the word occurs, using regular...
- parsing word xml file using SAX parser - XML: I am parsing Word 2003's XML file using SAX. My question is, I want to write some tag elements which... example
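Most of the threads listed above reduce to the same task: counting how often each word occurs in some text. A minimal sketch in plain Java (the sample sentence is taken from the Word Count entry above; the class name and tokenizing regex are this sketch's own choices):

```java
import java.util.Map;
import java.util.TreeMap;

public class WordOccurrence {

    // Count occurrences of each word, case-insensitively,
    // returning them in alphabetical order via a TreeMap.
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new TreeMap<>();
        // Split on runs of non-word characters (spaces, punctuation).
        for (String w : text.toLowerCase().split("\\W+")) {
            if (!w.isEmpty()) {
                counts.merge(w, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("How r you? R you"));
    }
}
```

To count occurrences in a file instead of a literal string, the same method can be fed the file's contents read via java.nio.file.Files.readString.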
Source: http://roseindia.net/discussion/47535-Java-Word-Occurrence-Example.html
Red Hat Bugzilla – Bug 808413
Dependency on python-argparse
Last modified: 2012-04-21 16:59:58 EDT

Dependency on python-argparse needs to be added to python-keystoneclient; it's imported in shell.py:

[root]# grep "import argparse" keystoneclient/shell.py
import argparse

Hmm, on my F16 box:

$ rpm -qf /usr/lib64/python2.7/argparse.py
python-libs-2.7.2-5.2.fc16.x86_64

So no explicit dep needed AFAICT. Closing as NOTABUG, please reopen if I missed something.

Sorry Cole, the bug should have been against EPEL 6, not F17. I'll see if I can update the version in the description; more details below:

[root@rhel62dh ~]# yum install -y --enablerepo=epel-testing python-keystoneclient
[root@rhel62dh ~]# rpm -qa | grep -i keystone
python-keystoneclient-2012.1-0.5.e4.el6.noarch
[root@rhel62dh ~]# keystone
Traceback (most recent call last):
  File "/usr/bin/keystone", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 2655, in <module>
[root@rhel62dh ~]#

;a=commitdiff;h=71a135bc2e2de0f38c533f9e57b04c121faaa9d4

python-keystoneclient-2012.1-1.el6 has been submitted as an update for Fedora EPEL 6.

Package python-keystoneclient-2012.1-1.el6:
* should fix your issue,
* was pushed to the Fedora EPEL 6 testing repository,
* should be available at your local mirror within two days.

Update it with:
# su -c 'yum update --enablerepo=epel-testing python-keystoneclient-2012.1-1.el6'
as soon as you are able to. Please go to the following url: then log in and leave karma (feedback).

python-keystoneclient-2012.1-1.el6 has been pushed to the Fedora EPEL 6 stable repository. If problems still persist, please make note of it in this bug report.
Source: https://bugzilla.redhat.com/show_bug.cgi?id=808413
- sending mail using jsp: Please give me the detailed procedure and code for sending mail through a jsp program. Please visit the following links: http
- sending mail with attachment in jsp - JSP-Servlet: Hi, can anyone please tell me how to send email with an attachment in jsp. I know how to send mail without...://
- iphone mail sending problem: Hi, I'm receiving the following error ... while sending mail in my iphone application: Terminating app due to uncaught... getting this error and how to solve it? Thanks in advance! Hi all, I get...
- Sending mail - JavaMail: Need a simple example of sending mail in Java. Hi, to send email you need a local mail server such as Apache James. You first... java.lang.String subject = ""; public java.lang.Exception error; public Mail
- sending mail - JSP-Servlet: Hi, what is the code for sending mail automatically without user intervention? Thanks in advance.
- Introduction of Java Mail: ... of electronic mail, for example composing, reading, sending text mails and also with attached files. Introduction to Java Mail API
- java mail sending with images: I need to send images through java mail without giving the content path (i.e. we don't want to hard-code the image path). Can you tell me the idea? Please visit the following links: http
- sending mails - JSP-Servlet: Sending mail using the smtp protocol; while running, I got an error: javax.mail.SendFailedException. What is this error?
- Introduction to Java Mail API: The Java Mail API allows developers to add mailing... type of java application. Composing, reading and sending electronic mail.
- Sending email without authentication: Hi sir, I am doing a project in JSP; in that I want to send mail without any authentication of password. /mail/sending-an-email-in-jsp.shtml...
- mail: I wrote a program to send mail using smtp. But there is an error message "could not connect to smtp.gmail.com port 465". What is the problem and how can I solve it? ... Please visit the following link: Java Mail API
- Sending images through java mail: Am trying to develop greeting... and by clicking one image the control goes to the mail sending page; there the image should be added to the body part and the mail sent to recipients... please give me any idea.
- sending mail with attachment in jsp - JSP-Servlet: Hi Experts
- java mail: (); out.println("Thanks for sending mail!"); } catch(Exception e... this code is showing an error, couldn't connect to smtp host, please help <%@ page language="java" import="javax.naming.*,java.io.
- mail problem: Sir, I tried your send mail example but it gives the following error: "Could not connect to SMTP host: localhost, port: 25"
- regarding sending mesage - JavaMail: I have tried the following program... properties = System.getProperties(); // Setup mail server........"); } }*/ but could not compile due to the following error /* C:\jdk\bin>javac
- Sending emails and insert into trable: I have created a form, once... doing anything wrong? I'm not getting any error by the way. <?php include..."; function died($error) { echo "We
- Sending an email in JSP: In this section, you will learn how to send an email in jsp. Following is a simple JSP page for sending...
- JAVA MAIl - JavaMail: Hi, I am trying to send a mail using the java mail api. But I am getting the following error: javax.mail.SendFailedException: Sending failed... this error? Hi Friend, please visit the following link.
- Java Mail - JMS: For solving the problem visit :... running very well, creating no other problems. Now I am trying to learn the java mail service. I have written a sample mail program to just test the code. The API...
- java mail API: <html> <head> <title>Mail API</title> ... <h1>Mail API</h1> <tr>
- Programming error - JSP-Servlet: the following links: http... it. Actually I want to send a mail to my clients with a small text message. Please...
- ejb - EJB: (setenv and serwlsenv) I am getting the error: hellohome1.java: cannot resolve... a same type of error; actually, it's not detecting the Remote interface file, please guide.
- values of the "from address & to addresses" while sending a mail to localhost using javamail: ... in JavaMail on your website. I tried the 1st program for sending the mail. It's... What do I have to mention in the from address field and to address field while sending?
- Email queue: While sending mail using the Struts class, can I maintain a queue of mails being sent from a JSP page in a DB to get its status?
- Reply to the mail(import files error): Hi! Thank you. That error has... -to-the-mail(import-files-error).html Thanks... as org.hibernate.MappingException: Error reading resource: contact.hbm.xml
- Identify EJB 2.0 container requirements: Extension (for sending mail only) JAXP 1.0 Visit http... Chapter 1. EJB Overview. Identify EJB 2.0 container...
- Introduction of Java Mail: java mail :// Thanks. Hi, the Java Mail API is now open source and you can download the source code and read it. If you want to understand the working of Java Mail, download the source code from here.
- Cmp Bean - EJB: It will give the error, Operation 'pingConnectionPool' failed in 'resources' Config... do I have to set the class path? Hi, I am sending code, it will help you.
- mail with multiple attachments: Code for sending mail with multiple attachments in jsp.
- please solve this error Warning: mail() [function.mail]: "sendmail_from" not set in php.ini or custom "From:" header missing
- j2ee - EJB: I want to know the ejb 2.0 architecture by diagram and also the ejb 3.0 architecture; I want to know the flow in ejb 2.0 and ejb 3.0. Hi friend, I am sending you a link. This link will help you. Please visit.
- php send mail with attachment: Syntax of sending email with an attachment in PHP.
- java mail: (MailClient.java:90) help me clear this error.. send suggestions to mail...) throws MessagingException, AddressException { // Setup mail... ("mail.smtp.host", mailServer); // Get a mail session Session session
- Sending Emails In Java: I want to send emails from within a java program. I saw some online java programs to do that, but they needed me to enter... with how to write the code? Please visit the following link: Java Mail
- Cheking birthday excel sheet everyday and Sending Automated MAIL to birthday boy on his birthday: ... can anyone help me out with the procedure for sending automated mail on the birthday... = systemdate, 2) Auto mail should be sent to the particular person with all the other...
- sending email code - JSP-Servlet: How to send emails using jsp? Hi friend, I am sending you a link. This link will help you. Please visit for more information.
- Technology - EJB: Sending link. You can learn more about ejb, Spring and Hibernate. http... I want to learn further, so please tell me whether I should learn EJB 3.0.
- Introduction To Enterprise Java Bean(EJB). WebLogic 6.0 Tutorial: to EJB Section (Learn to Develop World Class... Enterprise Java Bean architecture is the component...
- Sending email with read and delivery requests: Hi there, I am sending emails using JavaMail in Servlets on behalf of a customer from the website..., Gareth. Please visit the following link: JSP Servlet Send Mail
- Java Mail API Tutorial: I have to write programs using the Java Mail API. I want to learn the Java Mail API. Let's know the urls to learn Java Mail. Check the tutorial: Introduction of Java Mail. Thanks.
- sending commands through RxTx: I am trying to call lightOn..."); if(commPortIdentifier.isCurrentlyOwned()) { System.out.println("Error: Port is currently in use...("Error: Only serial ports are handled by this example
- api - JavaMail: and Tutorials on Mail visit : mail api Hi, please give me detailed information on how... points: 1. Download the Apache mail server (James).
- mail api - Development process: Hi to all, it's Abhishek. Actually I am using the mail api for sending mail with attachments (audio and video)... suppose I am sending a video which will be opened in Real Player 3.21.. so I want...
- send mail using smtp in java: How to send mail using smtp in java?
- Sending mail in JSP - SMTP: HTML for sending, updating and sending a mail. HTML.... Introduction to HTML Here, we will introduce... a title tag name aligning Images. Send E-mail
- Problem in EJB application creation - EJB: Hi, I am new to EJB 3.0... getting the following error message when running an enterprise application. Deployment error: The Sun Java System Application Server could not start. More...
- code for sending email using j2me: Could someone tell me why when I... an error. What does the smtpclient.java code look like? I've checked to see if all libs... for sending a file attachment to a gmail account. Sending an email in JSP: In this section, you will learn how to send...
- An introduction to spring framework: SPRING Framework... AN INTRODUCTION..., wrote a book titled 'J2EE Development without using EJB' and had introduced... EJB as unduly complicated and not susceptible to unit-testing. Instead of EJB...
- java mail - Java Beginners: How to send a mail without authentication from one domain.... Hi, you have to use the Java mail api. You need to use the SMTP... here are tutorials on the Java Mail API: http
- error while inserting millions of records - EJB: Hello, I am using ejb cmp... e.printStackTrace(); } } The following error occurs: java.rmi.ServerError: Unexpected Error; nested exception...
- JAVA Mail - JavaMail: Hi! I am trying to send mail using gmail's SMTP. The program compiled successfully; when I try to execute it, it shows the error --- Must issue STARTTLS command first... How to avoid...
- Im trying to send email from my jsp: I've already tried sample code from google, for sending email to my office 'smtp... attached the whole coding as below: For input: index.jsp Sending...
- Sending query with variable - JSP-Servlet: While displaying pages in the frames concept, one page contains links and the other page contains messages for those links... database. Using userno within single quotes shows an error.
- i have split a string by space, done some operation and now how to properly order them for sending a mail in java or jsp - JSP-Servlet: Dear sir, I have... an arraylist and used that string as an argument, passed to a mail-sending method. Now my problem is that after sending a mail, when the receiver got that mail, why...
- why mails are sending from the linux server - JavaMail: Mails are not sending from the linux server. We have 3 systems. The server system is windows 2003 server... java application, mails are not sending. The mail server is installed on windows 2003.
- Introduction To Enterprise Java Bean(EJB). Developing web component: Introduction To Java Beans... and the ejb components separately on the web server and the application...
- sending email using smtp in java: Hi all, I am trying to send an email through my company mail server. Following is my code: package com.tbss... IHGWTMEX07.SerWizSol.com Microsoft ESMTP MAIL Service ready at Wed, 21 Dec 2011 14...
- mail: How to send mail using jsp? Please visit the following links: JSP Send Mail, Java Mail Tutorials
- Programming Error - JSP-Servlet: A simple, single part, text/plain e-mail public class TestEmail { public... = "zzzzz@gmail.com"; // SUBSTITUTE YOUR ISP'S MAIL SERVER HERE!!! String...(Message.RecipientType.TO, address); msg.setSubject("Test E-Mail through:
- Sending message using Java Mail: This example shows you how to send a message...
= System.getProperties(); // Setup mail server   how to send a mail - JSP-Servlet when a receiver gets a mail ,the matter will be shown in a single line . I am sending a following matter Dear Harini , U r bonus is 2000 . Regards Hr. But when a receiver got a this mail as a Dear Harini,U r bonus ejb ejb what is ejb ejb is entity java bean ejb ejb why ejb components are invisible components.justify that ejb components are invisible is electronic mail format and API for sending, receiving and electronic mail...Email - Electronic mail In this article we will understand the E-mail and see... internet users. We prefer EJB EJB How is EJB different from servlets EJB what is an EJB container? Hi Friend, Please visit the following link: EJB Container Thanks what kind of web projects requires ejb framework
http://www.roseindia.net/tutorialhelp/comment/87451
CC-MAIN-2014-41
refinedweb
2,376
65.73
#include <AFMotor.h>

#define FAILSAFE 2000

unsigned long last;
AF_DCMotor motor(1);
AF_DCMotor turn(2);

void setup() {
  Serial.begin(115200);  // set up Serial library at 115200 bps
  Serial.println("Let's Drive");
  last = millis();
  // turn on motor
  motor.setSpeed(255);
  turn.setSpeed(255);
  motor.run(RELEASE);
  turn.run(RELEASE);
}

void motor_fwd() {
  motor.run(FORWARD);
  motor.setSpeed(255);
}

void motor_back() {
  motor.run(BACKWARD);
  motor.setSpeed(255);
}

void turn_r() {
  turn.run(FORWARD);
  turn.setSpeed(255);
}

void turn_l() {
  turn.run(BACKWARD);
  turn.setSpeed(255);
}

void turn_s() {
  turn.run(RELEASE);
  turn.setSpeed(0);
}

void motor_s() {
  motor.run(RELEASE);
  motor.setSpeed(0);
}

void loop() {
  if (Serial.available()) {        // is there anything to read?
    char getData = Serial.read();  // if yes, read it
    last = millis();
    switch (getData) {
      case 'F':  // Forward
        turn_s();
        motor_fwd();
        Serial.println("Forward");
        break;
      case 'B':  // Reverse
        turn_s();
        motor_back();
        Serial.println("Reverse");
        break;
      case 'L':  // Left
        turn_l();
        motor_s();
        Serial.println("Left");
        break;
      case 'R':  // Right
        turn_r();
        motor_s();
        Serial.println("Right");
        break;
      case 'S':  // Stop
        motor_s();
        turn_s();
        Serial.println("Stop");
        break;
    }
  } else {
    if (millis() - last > FAILSAFE)
      motor_s();
    turn_s();
  }
}

What is controlling your steering? It sounds as if you're using a proportional motor control for that, which seems like a pretty strange approach. What hardware is used to control the steering, and how is it connected to the Arduino?

I have tried to switch pins, and the pin number in the code, but I still can't get both motors to work at the same time. When I switched pins, the steering would work, but not the other, and vice versa when I switched back.

Quote from: redkite on Oct 20, 2012, 12:40 pm:
"I have tried to switch pins, and the pin number in the code, but I still can't get both motors to work at the same time. When I switched pins, the steering would work, but not the other, and vice versa when I switched back."

Getting both motors working at the same time was not the goal. The goal was to confirm that both motors worked, and both channels on the motor drive shield worked, and both software controllers worked. Unfortunately, since you haven't told us exactly which configurations you tested and what worked in each configuration, we have no idea what your test proves.

AF_DCMotor motor(1);
AF_DCMotor motor(2);

When I changed the channel, I also changed the code.

#include <AFMotor.h>

AF_DCMotor Lmotor(3);
AF_DCMotor Rmotor(4);

void setup() {
  Serial.begin(115200);  // set up Serial library at 115200 bps
  // turn on motor
  Lmotor.setSpeed(200);
  Rmotor.setSpeed(200);
}

void move_fwd() {
  Lmotor.run(FORWARD);   // turn it on going forward
  Rmotor.run(FORWARD);   // turn it on going forward
}

void move_back() {
  Lmotor.run(BACKWARD);  // turn it on going BACKWARD
  Rmotor.run(BACKWARD);  // turn it on going BACKWARD
}

void move_left() {
  Lmotor.run(FORWARD);   // turn it on going forward
  Rmotor.run(BACKWARD);  // turn it on going BACKWARD
}

void move_right() {
  Lmotor.run(BACKWARD);  // turn it on going BACKWARD
  Rmotor.run(FORWARD);   // turn it on going forward
}

void loop() {
  if (Serial.available()) {        // is there anything to read?
    char getData = Serial.read();  // if yes, read it
    switch (getData) {
      case 'F':  // Forward
        move_fwd();
        break;
      case 'B':  // Reverse
        move_back();
        break;
      case 'L':  // Left
        move_left();
        break;
      case 'R':  // Right
        move_right();
        break;
      default:
        Lmotor.run(RELEASE);
        Rmotor.run(RELEASE);
    }
  }
}
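To reason about what the first sketch is supposed to do independently of the motor-shield hardware, here is a small Python mock of its command-dispatch and failsafe logic. This is a hypothetical illustration, not code from the thread: the class name, state strings, and timing values are made up, and the 2-second FAILSAFE mirrors the sketch's `#define FAILSAFE 2000` (milliseconds) in seconds.

```python
FAILSAFE = 2.0  # seconds without a command before the rover stops

class RoverController:
    """Pure-Python mock of the sketch's command dispatch and failsafe logic."""

    def __init__(self, now=0.0):
        self.drive = "stop"     # state of the drive motor
        self.steer = "center"   # state of the steering motor
        self.last = now         # time of the last received command

    def handle(self, cmd, now):
        """Mirror the switch(getData) block: one motor runs, the other is released."""
        self.last = now
        if cmd == "F":
            self.steer, self.drive = "center", "forward"
        elif cmd == "B":
            self.steer, self.drive = "center", "backward"
        elif cmd == "L":
            self.steer, self.drive = "left", "stop"
        elif cmd == "R":
            self.steer, self.drive = "right", "stop"
        elif cmd == "S":
            self.steer, self.drive = "center", "stop"

    def tick(self, now):
        """Failsafe: stop everything if no command arrived within FAILSAFE."""
        if now - self.last > FAILSAFE:
            self.drive = "stop"
            self.steer = "center"

rover = RoverController()
rover.handle("F", now=0.5)   # drive forward, steering released
rover.tick(now=1.0)          # still within the failsafe window
print(rover.drive)           # forward
rover.tick(now=3.0)          # no command for more than FAILSAFE seconds
print(rover.drive)           # stop
```

Note that, as in the sketch, each command drives only one motor and releases the other, which is consistent with the symptom being discussed: drive and steering never run at the same time.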
http://forum.arduino.cc/index.php?topic=128849.0;prev_next=next
CC-MAIN-2015-06
refinedweb
565
52.56
Prerequisites:
- A Windows, Linux or OSX machine running the latest Java JDK or JRE.
- A GitHub account. You can fork the example project repo, or use a project of your own if you prefer.
- If you want to set up a webhook that will trigger automatic builds, Jenkins will need to be accessible from outside your network. You will have to set up port forwarding in your router.

Example project repo:

Installing Jenkins

Jenkins comes in two versions: Long-Term Support (LTS) and weekly releases. If you want a stable version, choose LTS. Jenkins also requires Java, so make sure that you have the appropriate version installed (by the time I'm writing this it requires ...).

- On Mac OS: brew install jenkins-lts
- On CentOS, add the Jenkins repo (sudo rpm --import), then install it: yum install jenkins

Then start the service: brew services start jenkins-lts

Change the default port (if needed, Homebrew only)

Jenkins runs by default on port 8080, but I have another app running there so I had to change the default port. Edit the Homebrew plist file as follows (replace 2.222.1 with the actual installed version):

/usr/local/Cellar/jenkins-lts/2.222.1/homebrew.mxcl.jenkins-lts.plist

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>homebrew.mxcl.jenkins-lts</string>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/libexec/java_home</string>
      <string>-v</string>
      <string>1.8</string>
      <string>--exec</string>
      <string>java</string>
      <string>-Dmail.smtp.starttls.enable=true</string>
      <string>-jar</string>
      <string>/usr/local/opt/jenkins-lts/libexec/jenkins.war</string>
      <string>--httpListenAddress=0.0.0.0</string>
      <string>--httpPort=8082</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>

Line 18: change the port to 8082.
Line 17: change httpListenAddress from 127.0.0.1 to 0.0.0.0. This is necessary if you want to access Jenkins from the Internet, outside of the internal network.
Now run the server as a service: brew services start jenkins-lts

Install the necessary plug-ins: Jenkins -> Manage Jenkins -> Manage Plugins

Create the first pipeline

Create New Item -> Pipeline -> Pipeline Script From SCM, and put in the Git repository link.

Create a Jenkinsfile with the pipeline steps

The whole pipeline should be wrapped in pipeline { }.

A few words about pipeline syntax:

Agent is declared at the very beginning of the pipeline. This instructs Jenkins to allocate an executor (on a node) and a workspace for the entire Pipeline. An agent is typically a machine, or container, which connects to a Jenkins master and executes tasks when directed by the master.

Stage is part of a Pipeline, used for defining a conceptually distinct subset of the entire Pipeline, for example: "Build", "Test", and "Deploy". Stages are used by many plugins to visualize or present Jenkins Pipeline status/progress.

Step is a single task; fundamentally, steps tell Jenkins what to do inside of a Pipeline or Project.

The full glossary can be found in the Jenkins documentation.

Let's get started by creating a pipeline file in the example project folder: ./jenkins/pr.groovy

pipeline {
  agent any
  tools { nodejs "SparkJS" }
  stages {
    stage('Cloning Git Repo') {
      steps {
        git ''
      }
    }
    stage('Install dependencies') {
      steps {
        echo '######################'
        echo 'Building...'
        echo '######################'
        sh '/usr/local/bin/yarn install'
      }
    }
    stage('Running Tests') {
      steps {
        echo '######################'
        echo 'Running tests ...'
        echo '######################'
        sh '/usr/local/bin/yarn test'
      }
    }
  }
  post {
    always {
      echo 'Starting server ...'
      sh '/usr/local/bin/yarn clean; /usr/local/bin/yarn build-prod; /usr/local/bin/yarn build-prod-ssr;'
      sh '/usr/local/bin/pm2 start ./server-build/server-bundle.js -f'
    }
  }
}

What we just did:
– line 2: told Jenkins that it could run this pipeline on any agent. The agent basically allows you to specify where the task is to be executed. It could be Docker, a node, or any agent.
– line 6: we defined our stages: Cloning Git Repo, Install dependencies, Running Tests.
– line 32: finally, after all stage scripts have passed, we defined the post script to run the server.

I'm using pm2 (a process manager and launcher) for running the app, so if you don't have it installed you should install it using npm or yarn:

npm install pm2@latest -g
or
yarn global add pm2

Running the pipeline task

So now everything is set up; let's test the pipeline. Navigate to the pipeline and from the vertical menu on the right select "Build Now". If everything is good you should see the pipeline stages with progress bars filling out. After the execution you can navigate to the log (Build History on the right side -> select the last job -> Console Output). There you can see a log of all stage executions, including the snapshot tests:

Test Suites: 2 passed, 2 total

Setting up Jenkins to listen to a GitHub webhook and trigger automatic builds on every commit

This is probably the best part, and the trickiest one to make work. We are going to add a GitHub webhook that will make a POST request to Jenkins every time we push a code change; this will trigger our pipeline, rebuild the app and redeploy it. We are building a so-called continuous integration (CI) process.

Adding an API key to the admin user

Select the current (admin) user from the top right, then on the left vertical menu choose "Configure". Navigate to the "API Token" section and click "Add new token".

Important!!! Copy the token and save it somewhere safely, because you won't be able to see it again.

Navigate back to the pipeline that we created and click "Configure" from the left vertical menu. Scroll down to "Build Triggers" and check "Trigger builds remotely (e.g., from scripts)". Paste the authentication token in the field and copy the example URL below the text field where it says: "Use the following URL to trigger build remotely". We will need this to add it into the GitHub webhook. Click "Save" at the bottom.
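The trigger URL that Jenkins displays has a predictable shape, so it can also be exercised from a script instead of a webhook. A minimal Python sketch of composing that URL; the host, job name, token and port below are placeholders, not values from this post:

```python
from urllib.parse import urlencode, urlunsplit

def build_trigger_url(host, job, token, port=8082):
    """Compose the remote-trigger URL that Jenkins shows under
    'Trigger builds remotely' (host, job and token are placeholders)."""
    path = f"/job/{job}/build"
    query = urlencode({"token": token})
    return urlunsplit(("http", f"{host}:{port}", path, query, ""))

url = build_trigger_url("jenkins.example.com", "example-pipeline", "MY_SECRET_TOKEN")
print(url)
# A webhook (or a script) would then POST to this URL, e.g.:
#   import urllib.request
#   urllib.request.urlopen(urllib.request.Request(url, method="POST"))
```

Anything that can make that POST request, a cron job, a git hook, or GitHub's webhook below, can kick off the build.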
Setting up the GitHub webhook

Navigate to the example project in your GitHub space, select "Settings" and then "Webhooks". Click on "Add webhook" and you will see this screen:

In the Payload URL field, put the URL that we copied from Jenkins -> Build Triggers above.

Important!!! Make sure that you replace 'JENKINS_URL' with the actual IP of the machine where Jenkins is running (or the hostname if you set one up), and replace the token with the actual token that we generated for the 'admin' user.

On the "Content type" dropdown below, select "application/json". Leave the "Secret" field empty. The next setting, "Which events would you like to trigger this webhook?", is up to you, but for simplicity I just left the default "Just the push event". Make sure that "Active" is checked and click "Add webhook".

At this point, if you commit some changes to the example project and push them, the webhook should fire and make a POST request to your Jenkins instance, notifying it that there are code changes and triggering the pipeline process ... but when I did this and looked at the response I saw:

"403 No valid crumb was included in the request"

This simply means that Jenkins requires another token to be sent in the headers, to make sure that only authorized sites (GitHub in this example) will trigger the pipeline process. This is the most obscure and unclear part of setting up webhooks. I googled it for quite some time and figured out that there is no way to send custom header parameters (like Jenkins-Crumb) from GitHub, so the only option was to disable this security feature ... which I think is fine, since the pipeline is already protected with the API key that we added.

Disabling CSRF protection in Jenkins

The CSRF protection setting lives in "Manage Jenkins" under "Configure Global Security", but it looks like the latest Jenkins releases don't have an option to disable it there, so the only alternative is to do it through a Groovy script. Go to Manage Jenkins -> Script Console and paste the code below in the console.
import jenkins.model.Jenkins

def instance = Jenkins.instance
instance.setCrumbIssuer(null)

Click Run. You will see an empty result, which is expected. Go to the example project, commit some change and push it again:

git commit . -m "Testing push webhook"; git push

Navigate to Jenkins and you will observe that the new task is queued in the "Build Executor Status" in the bottom left.

Test it

Let's do it again: make some code changes, commit and push, and observe how Jenkins runs the pipeline, tests and deploys the project!

Cheers!
https://www.toni-develops.com/2020/04/24/adding-continuous-integration-with-jenkins-pipeline-and-github-webhooks/?utm_source=rss&utm_medium=rss&utm_campaign=adding-continuous-integration-with-jenkins-pipeline-and-github-webhooks
CC-MAIN-2021-43
refinedweb
1,420
60.45
On 11/07/2010 02:42 PM, Gene Cooperman wrote:
> I'd like to add a few clarifications, below, about DMTCP concerning
> Oren's comments. I'd also like to point out that we've had about 100
> downloads per month from sourceforge (and some interesting use cases
> from end users) over the last year (although the sourceforge numbers
> do go up and down :-) ). In general, I think we'll all understand the
> situation better after having had the opportunity to talk offline.
> Below are some clarifications about DMTCP.
> ===
>> For example, in your example, you'd need to wrap the library calls
>> (e.g. of MPI implementation) and replaced them to use TCP/IP or
>> infiniband. Wrapping on system calls won't help you.
>
> We do not put any wrappers around MPI library calls. MPI calls things
> like open, close, connect, listen, execve({"ssh", ...}, ...), etc.
> At this time, DMTCP adds wrappers _only_ around calls to libc.so
> and libpthread.so. This is sufficient to checkpoint a distributed
> computation like MPI.

Of course. And you don't need syscall virtualization for this.
Zap did it already many years ago :) Only problem with the above
is that, conveniently enough, you _left out_ the context:

>> For example,
>> if a distributed computation runs over infiniband, can we migrate to a TCP/IP
>> cluster. For this, one needs the flexibility of wrappers around system calls.

Do you also support checkpointing a distributed app that uses an
infiniband MPI stack and restarting it with a TCP based MPI stack?
Can you do it with only syscall wrapping, and without knowledge
of the MPI implementation and some MPI-specific logic in the
wrappers? I'm curious how you do that without wrapping around
MPI calls, or without a c/r-aware implementation of MPI.
Again, this is unrelated to how you do the core c/r work. I think
we both agree that _this_ kind of app-wrappers/app-awareness is
useful for certain uses of c/r.

[snip]

>> So I'll repeat the question I asked there: is re-implementing
>> chunks of kernel functionality and all namespaces in userspace
>> the way to go?
>
> If you're referring to interposition here, that takes place essentially
> in the wrappers, and the wrappers are only 3000 lines of code in DMTCP.
> Also, I don't believe that we're "re-implementing chunks of kernel
> functionality", but let's continue that discussion offline.

The interposition itself is relatively simple (though not atomic).
The problem is the logic to "spy" on and "lie" to the applications.
Examples: saving ptrace state, saving the FD_CLOEXEC flag, correctly
maintaining a userspace pid-ns, etc.

[...]

>> ... (yes, transparent means that
>> it does not require LD_PRELOAD or collaboration of the application!
>> nor does it require userspace virtualizations of so many things
>> already provided by the kernel today), more generic, more flexible,
>> provides more guarantees, cover more types or states of resources,
>> and can perform significantly better.
>
> I still haven't understood why you object to the DMTCP use of LD_PRELOAD.
> How will the user app ever know that we used LD_PRELOAD, since we remove
> LD_PRELOAD from the environment before the user app libraries and main
> can begin? And, if you really object to LD_PRELOAD, then there are
> other ways to capture control.

I don't object to it per se - it's actually pretty useful oftentimes.
But in our context, it has limitations. For example, it does not
cover static applications, nor apps that call syscalls directly
using int 0x80. Also, it conflicts with LD_PRELOAD possibly needed
for other software (like valgrind) - for which again you would need
yet another per-app wrapper, at the very least.

> Similarly, I'll have to understand better
> what you mean by the _collaboration of the application_. DMTCP operates
> on unmodified application binaries.

I mean that the application needs to be scheduled and to run to
participate in its own checkpoint. You use syscall interposition
and signal games to do exactly that - gain control over the app
and run your library's code. This has at least three negatives:
first, some apps don't want to or can't run - e.g. ptraced, or
swapped (think incremental checkpoint: why swap everything in?!);
second, the coordination can take significant time, especially if
many tasks/threads and resources are involved; third, it modifies
the state of the app - if something goes wrong while you use c/r
to migrate an app, you impact the app.
(While 'ptrace' relieves you from the need for "collaboration"
of processes, it doesn't address the other problems, and it adds
its own issues.)

> Basically, if _transparent_ means
> that one is not allowed to use anything at all from userland, then I
> agree with you that no userland checkpointing can ever be transparent.
> But, I think that's a biased definition of _transparent_. :-)

"Transparent" c/r means "invisible" to the user/apps, i.e. that
you don't restrict the user or the app in what they do and how
they do it.
Did you ever try to 'ltrace skype'? There exists useful and
popular software that doesn't like being spied after...

Oren.
https://lkml.org/lkml/2010/11/7/142
CC-MAIN-2016-44
refinedweb
830
55.64
Blender 3D: Noob to Pro/Advanced Tutorials/Python Scripting/Import scripts

Importing objects into Blender is not that different from exporting. However, there are a few additional things to take care of. Firstly, all references to "export" in the header should be changed to "import". Secondly, instead of simply writing out data that Blender provides to us, we are responsible for giving data to Blender and ensuring that it is properly formatted. Although Blender is flexible, allowing us to ignore things like vertex indices, we do need to be careful that we do things in a sensible order.

Additionally, there is a bit of housekeeping to deal with. We should be in edit mode while modifying the mesh data. We also need to link up our newly created data to the scene, after it has been properly constructed, so that Blender can see it and maintain it. This makes it visible to the user, as well as ensuring that it gets saved along with the scene.

Importing a Mesh

Here is a simple script that can import an OBJ file created by the export script.
import Blender

def import_obj(path):
    Blender.Window.WaitCursor(1)
    name = path.split('\\')[-1].split('/')[-1]
    mesh = Blender.NMesh.New(name)  # create a new mesh

    # parse the file
    file = open(path, 'r')
    for line in file:
        words = line.split()
        if len(words) == 0 or words[0].startswith('#'):
            pass
        elif words[0] == 'v':
            x, y, z = float(words[1]), float(words[2]), float(words[3])
            mesh.verts.append(Blender.NMesh.Vert(x, y, z))
        elif words[0] == 'f':
            faceVertList = []
            for faceIdx in words[1:]:
                faceVert = mesh.verts[int(faceIdx)-1]
                faceVertList.append(faceVert)
            newFace = Blender.NMesh.Face(faceVertList)
            mesh.addFace(newFace)

    # link the mesh to a new object
    ob = Blender.Object.New('Mesh', name)  # Mesh must be spelled just this--it is a specific type
    ob.link(mesh)  # tell the object to use the mesh we just made
    scn = Blender.Scene.GetCurrent()
    for o in scn.getChildren():
        o.sel = 0
    scn.link(ob)  # link the object to the current scene
    ob.sel = 1
    ob.Layers = scn.Layers
    Blender.Window.WaitCursor(0)
    Blender.Window.RedrawAll()

Blender.Window.FileSelector(import_obj, 'Import')

This will load an OBJ file into Blender, creating a new mesh object. Let's take a look at the more interesting portions.

Blender.Window.WaitCursor(1)

Turn on the wait cursor so the user knows the computer is importing.

name = path.split('\\')[-1].split('/')[-1]
mesh = Blender.NMesh.New(name)  # create a new mesh

Here, we create a new mesh datablock. The name is taken from the filename part of the path.

ob = Blender.Object.New('Mesh', name)
ob.link(mesh)

Next, we create a new object and link it to the mesh. This instantiates the mesh.

scn = Blender.Scene.GetCurrent()
scn.link(ob)  # link the object to the current scene
ob.sel = 1
ob.Layers = scn.Layers

Finally, we attach the new object to the current scene, making it accessible to the user and ensuring that it will be saved along with the scene. We also select the new object so that the user can easily modify it after import.
Copying the scene's layers ensures that the object will occupy the scene's current view layers.

Blender.Window.WaitCursor(0)
Blender.Window.RedrawAll()

Now the finishing touches. We turn off the wait cursor. We also redraw the 3D window to ensure that the new object is initially visible. If we didn't do this, the object might not appear until the user changes the viewpoint or forces a redraw in some other way.
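The OBJ parsing at the heart of the script does not depend on Blender at all, so it can be tested outside Blender. A sketch of the same v/f parsing in plain modern Python (a hypothetical helper, not part of the tutorial; like the script above, it handles only plain `f i j k` faces, not `i/j/k` index triplets):

```python
def parse_obj(text):
    """Parse 'v x y z' and 'f i j k ...' lines the same way the import
    script does; faces index into the vertex list (OBJ is 1-based)."""
    verts, faces = [], []
    for line in text.splitlines():
        words = line.split()
        if not words or words[0].startswith('#'):
            continue
        if words[0] == 'v':
            verts.append((float(words[1]), float(words[2]), float(words[3])))
        elif words[0] == 'f':
            # convert from OBJ's 1-based indices to 0-based list indices
            faces.append([int(idx) - 1 for idx in words[1:]])
    return verts, faces

sample = """# a single triangle
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""
verts, faces = parse_obj(sample)
print(len(verts), faces)  # 3 [[0, 1, 2]]
```

Inside Blender, the `verts` tuples would become `Blender.NMesh.Vert` objects and each face's index list would be resolved against `mesh.verts`, exactly as in the script above.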
https://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Advanced_Tutorials/Python_Scripting/Import_scripts
CC-MAIN-2015-40
refinedweb
597
59.6
Earlier in this chapter, we worked with Office 2003 documents and converted them to an XML format. Using a schema allowed us to specify the element names and structures for the documents that we created. Schemas also helped to determine if data in the XML document was valid.

In this section, we'll create both Document Type Definitions (DTDs) and schemas. Collectively, we call DTDs and schemas document models. Document models provide a template and rules for constructing XML documents. When a document matches these rules, it is a valid document. Valid documents must start by being well formed. Then they have to conform to the DTD or schema.

The rules contained in DTDs and schemas usually involve the following:

- Specifying the names of elements and attributes
- Identifying the type of content that can be stored
- Specifying hierarchical relationships between elements
- Stating the order for the elements
- Indicating default values for attributes

Before you create either a DTD or schema, you should be familiar with the information that you're using and the relationships between different sections of the data. This will allow you to create a useful XML representation. I find it best to work with sample data in an XML document and create the DTD or schema once I'm sure that the structure of the document meets my needs.

It's good practice to create a DTD or schema when you create multiple XML documents with the same structure. Document models are also useful where more than one author has to create the same or similar XML documents. Finally, if you need to use XML documents with other software, there may be a requirement to produce a DTD or schema so that the data translates correctly.

If you're writing a one-off XML document with element structures that you'll never use again, it's probably overkill to create a document model. It will certainly be quicker for you to create the elements as you need them and make changes as required without worrying about documentation.
The DTD specification is older than XML schemas. In fact, DTDs predate XML documents and have their roots in Standard Generalized Markup Language (SGML). Because the specification is much older than XML, it doesn't use an XML structure. On the other hand, schemas use XML to provide descriptions of the document rules. This means that it's possible to use an XML editor to check whether a schema is a well-formed document. You don't have this kind of checking ability with DTDs.

Schemas provide many more options for specifying the type of data for elements and attributes than DTDs. You can choose from 44 built-in datatypes so, for example, you can specify whether an element contains a string, datetime, or Boolean value. You can also add restrictions to specify a range of values, for example, numbers greater than 500. If the built-in types don't meet your needs, you can create your own datatypes and inherit details from existing datatypes. The datatype support within XML schemas gives you the ability to be very specific in your specifications. You can include much more detail about elements and attributes than is possible in a DTD. Schemas can apply more rigorous error checking than DTDs.

Schemas also support namespaces. Namespaces allow you to identify elements from different sources by providing a unique identifier. This means that you can include multiple schemas in an XML document and reuse a single schema in multiple XML documents. Organizations are likely to work with the same kinds of data, so being able to reuse schema definitions is an important advantage when working with schemas.

One common criticism of XML documents is that they are verbose. As XML documents, the same criticism could be leveled at schemas. When compared with DTDs, XML schemas tend to be much longer. It often takes several lines to achieve something that you could declare in a single line within a DTD. Table 3-1 shows the main differences between DTDs and schemas.
A DTD defines an XML document by providing a list of elements that are legal within that document. It also specifies where the elements must appear in the document, as well as the number of times each element should appear. You create or reference a DTD with a DOCTYPE declaration; you've probably seen these at the top of XHTML and HTML documents. A DTD can either be stored within an XML document or in an external DTD file.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML Basic 1.0//EN" "">

The simplest DOCTYPE declaration includes only a reference to the root element of the document:

<!DOCTYPE phoneBook>

This declaration can also include other declarations, a reference to an external file, or both. DTD declarations are listed under the XML declaration:

<?xml version="1.0"?>
<!DOCTYPE documentRoot [element declarations]>

All internal declarations are contained in a DOCTYPE declaration at the top of the XML document. This includes information about the elements and attributes in the document. The element declarations can be on different lines:

<!DOCTYPE documentRoot [
  <!ELEMENT declaration 1>
  <!ELEMENT declaration 2>
]>

External file references point to declarations saved in files with the extension .dtd. They are useful if you are working with multiple documents that have the same rules. External DTD references are included in an XML document with:

<!DOCTYPE documentRoot SYSTEM "file.dtd">

DTDs contain declarations for elements, attributes, and entities. You declare an element in the following way:

<!ELEMENT elementName (elementContents)>

Make sure that you use the same case for the element name in both the declaration and the XML document. Elements that are empty (that is, that don't have any content) use the word EMPTY:

<!ELEMENT elementName EMPTY>

Child elements appear in a list after the parent element name.
The order within the DTD indicates the order for the elements in the XML document:

<!ELEMENT elementName (child1, child2, child3)>

Elements can also include modifiers to indicate how often they should appear in the XML document. Children that appear once or more use a plus (+) sign as a modifier:

<!ELEMENT elementName (childName+)>

The pipe character (|) indicates a choice of elements. It's like including the word or.

<!ELEMENT elementName (child1|child2)>

You can combine a choice with other elements by using brackets to group elements together:

<!ELEMENT elementName ((child1|child2),child3)>
<!ELEMENT elementName (child1, child2|(child3,child4))>

Optional child elements are shown with an asterisk. This means they can appear any number of times or not at all.

<!ELEMENT elementName (childName*)>

A question mark (?) indicates child elements that are optional but that can appear a maximum of once:

<!ELEMENT elementName (childName?)>

Elements that contain character data include #PCDATA as content:

<!ELEMENT elementName (#PCDATA)>

You can also use the word ANY to indicate that any type of data is acceptable:

<!ELEMENT elementName ANY>

The element declarations can be quite complicated. For example:

<!ELEMENT elementName ((child1|child2|child3),child4+,child5*,#PCDATA)>

This declaration means that the element called elementName contains character data. It includes a choice between the child1, child2, or child3 elements, followed by child4, which can appear once or more. The element child5 is optional.

Table 3-2 provides an overview of the symbols used in element declarations.

Attribute declarations come after the elements. Their declarations are a little more complicated:

<!ATTLIST elementName attributeName attributeType defaultValue>

The elementName is the element that includes this attribute. Table 3-3 shows the main values for attributeType. Most commonly, attributes are of the type CDATA. The defaultValue indicates a default value for the attribute.
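A useful way to think about the element modifiers above is that a DTD content model is a regular expression over child-element names: the comma is concatenation, the pipe is alternation, and +, *, and ? keep their usual quantifier meanings. A hypothetical Python illustration (not from the book) of checking a child sequence against the model ((child1|child2),child3+):

```python
import re

# The DTD content model ((child1|child2),child3+) expressed as a regex
# over a concatenation of child-element names.
model = re.compile(r"^(child1|child2)(child3)+$")

def matches(children):
    # Join the child names in document order and test the whole sequence.
    return bool(model.match("".join(children)))

print(matches(["child1", "child3"]))            # True: choice, then one child3
print(matches(["child2", "child3", "child3"]))  # True: child3 repeats
print(matches(["child3"]))                      # False: the choice is required
```

This is exactly the check a validating parser performs for each element against its declared content model.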
In the following example, the XML element <address> will have an addressType attribute with a default value of home. In other words, if the attribute isn't included in the XML document, a value of home will be assumed.

<!ATTLIST address addressType CDATA "home">

Using #REQUIRED will force a value to be set for the attribute in the XML document:

<!ATTLIST address addressType CDATA #REQUIRED>

You can use #IMPLIED if the attribute is optional:

<!ATTLIST address addressType CDATA #IMPLIED>

If you always want to use the same value for an attribute and don't want it to be overridden, use #FIXED:

<!ATTLIST address addressType CDATA #FIXED "home">

You can also specify a range of acceptable values separated by a pipe character ( | ):

<!ATTLIST address addressType (home | work | mailing) "home">

You can declare all attributes of a single element at the same time within the same ATTLIST declaration:

<!ATTLIST address
  addressType (home | postal | work) #REQUIRED
  addressID CDATA #IMPLIED
  addressDefault (true | false) "true">

The declaration lists a required addressType attribute, which has to have a value of home, postal, or work. The addressID is a CDATA type and is optional. The final attribute, addressDefault, can have a value of either true or false with the default value being true. You can also declare attributes separately:

<!ATTLIST address addressType (home | postal | work) #REQUIRED>
<!ATTLIST address addressID CDATA #IMPLIED>
<!ATTLIST address addressDefault (true | false) "true">

Entities are a shorthand way to refer to something that you want to use in more than one place or in more than one XML document. You also use them for specific characters on a keyboard. If you've worked with HTML, you've probably used entities for nonbreaking spaces ( &nbsp; ) and the copyright symbol ( &copy; ). You declare an entity as follows:

<!ENTITY entityName "entityValue">

Whenever you want to use the value of the entity in an XML document, you can use &entityName; .
In the following example, I've declared two entities, email and author:

<!ENTITY email "sas@aip.net.au">
<!ENTITY author "Sas Jacobs, AIP">

I could refer to these entities in my XML document using &email; or &author; . The entities mean sas@aip.net.au and Sas Jacobs, AIP. Entities can also reference external content; we call these external entities. They are a little like using a server-side include file in an HTML document.

<!ENTITY address SYSTEM "addressBlock.xml">

The XML document would use the entity &address; to insert the contents from the addressBlock.xml file. You could also use a URL. The advantage here is that you only have to update the entity in a single location and the value will change throughout the XML document. The following listing shows a sample inline DTD. The DTD describes our phone book XML document:

<!DOCTYPE phoneBook [
  <!ELEMENT phoneBook (contact+)>
  <!ELEMENT contact (name, address, phone)>
  <!ELEMENT name (#PCDATA)>
  <!ELEMENT address (#PCDATA)>
  <!ELEMENT phone (#PCDATA)>
  <!ATTLIST contact id CDATA #REQUIRED>
]>

I've saved the XML document containing these declarations in the resource file addressDTD.xml. Figure 3-36 shows this file validated within XMLSpy. The file addressEDTD.xml refers to the same declarations in the external DTD. If you open the resource file phoneBook.dtd you'll see that it doesn't include a DOCTYPE declaration at the top of the file. This listing shows the content:

<!ELEMENT phoneBook (contact+)>
<!ELEMENT contact (name, address, phone)>
<!ELEMENT name (#PCDATA)>
<!ELEMENT address (#PCDATA)>
<!ELEMENT phone (#PCDATA)>
<!ATTLIST contact id CDATA #REQUIRED>

This DTD declares the root element phoneBook. The root element can contain a single element, contact, which can appear one or more times. The contact element contains three elements, name, address, and phone, each of which must appear exactly once. The data in these elements is of type PCDATA, or parsed character data.
The DTD includes a declaration for the attribute id within the contact element. The type is CDATA, and it is a required attribute. Designing DTDs can be a tricky process, so you will probably find it easier if you organize your declarations carefully. You can add extra lines and spaces so that the DTD is easy to read. An XML schema is an XML document that lists the rules for other XML documents. It defines the way elements and attributes are structured, the order of elements, and the datatypes used for elements and attributes. A schema has the same role as a DTD. It determines the rules for valid XML documents. Unlike DTDs, however, you don't have to learn new syntax to create schemas because they are another example of an XML document. Schemas are popular for this reason. Some people find it strange that DTDs use a non-XML approach to define XML document structure. At the time of writing, the current recommendation for XML schemas, along with its Datatypes section and the working drafts for XML Schema version 1.1, was available on the W3C site. Schemas offer several advantages over DTDs. Because schemas can inherit from each other, you can reuse them with different document groups. It's easier to use XML documents created from databases with schemas because they recognize different datatypes. You write schemas in XML so you can use the same tools that you use for your other XML documents. You can embed a schema within an XML document or store it within an external XML file saved with an .xsd extension. In most cases, it's better to store the schema information externally so you'll be able to reuse it with other XML documents that follow the same format. An external schema starts with an optional XML declaration followed by a <schema> element, which is the document root. The <schema> element contains a reference to the default namespace. The xmlns declaration shows that all elements and datatypes come from the http://www.w3.org/2001/XMLSchema namespace.
In my declaration, elements from this namespace should use the prefix xsd.

<?xml version="1.0"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">

As with a DTD, a schema describes the document model for an XML document. This can consist of declarations about elements and attributes and about datatypes. The order of the declarations in the XSD document doesn't matter. You declare elements as either simpleType or complexType. They can also have empty, simple, complex, or mixed content. Elements that have attributes are automatically complexType elements. Elements that only include text are simpleType. I've included a sample schema document called addressSchema.xsd with your resources to illustrate some of the concepts in this section. You'll probably want to have it open as you refer to the examples that follow. You can see the complete schema at the end of this section. In the sample schema, you'll notice that the prefix xsd is used in front of all elements. This is because I've referred to the namespace with the xsd prefix, that is, xmlns:xsd="http://www.w3.org/2001/XMLSchema". Everything included from this namespace will be prefixed in the same way. Simple type elements contain text only and have no attributes or child elements. In other words, simple elements contain character data. The text included in a simple element can be of any datatype. You can define a simple element as follows:

<xsd:element name="elementName" type="xsd:string"/>

In our phone book XML document, the <name>, <address>, and <phone> elements are simple type elements. The definitions in the XSD schema document show this:

<xsd:element name="name" type="xsd:string"/>
<xsd:element name="address" type="xsd:string"/>
<xsd:element name="phone" type="xsd:string"/>

There are 44 built-in simple types in the W3C Schema Recommendation. You can find out more about these types in the Datatypes part of the recommendation. Common simple types include string, integer, float, decimal, date, time, ID, and boolean.
Attributes are also simple type elements and are defined with

<xsd:attribute name="attributeName" type="xsd:string"/>

All attributes are optional unless their use attribute is set to required:

<xsd:attribute name="attributeName" type="xsd:string" use="required"/>

The id attribute in the <contact> element is an example of a required attribute:

<xsd:attribute name="id" type="xsd:integer" use="required"/>

A default or fixed value can be set for simple elements by using

<xsd:attribute name="attributeName" type="xsd:string" default="defaultValue"/>

or

<xsd:attribute name="attributeName" type="xsd:string" fixed="fixedValue"/>

You can't change the value of a simple type element that has a fixed value. Complex type elements include attributes and/or child elements. In fact, any time an element has one or more attributes it is automatically a complex type element. The <contact> element is an example of a complex type element. Complex type elements have different content types, as shown in Table 3-4. It's a little confusing. An element can have a complex type with simple content, or it can be a complex type element with empty content. I'll go through these alternatives next. A complex type element with empty content such as <recipe id="1234"/> is defined in a schema with

<xsd:element name="recipe">
  <xsd:complexType>
    <xsd:attribute name="id" type="xsd:positiveInteger"/>
  </xsd:complexType>
</xsd:element>

The <recipe> element is a complexType but only contains an attribute. In the example, the attribute is declared. We could also use a ref attribute to refer to an attribute that is already declared elsewhere within the schema. A complex type element with simple content like <recipe id="1234">Omelette</recipe> is declared in the following way:

<xsd:element name="recipe">
  <xsd:complexType>
    <xsd:simpleContent>
      <xsd:extension base="xsd:string">
        <xsd:attribute name="id" type="xsd:positiveInteger"/>
      </xsd:extension>
    </xsd:simpleContent>
  </xsd:complexType>
</xsd:element>

In other words, the complex element called <recipe> has a complex type but simple content. The content has a base type of string. The element includes an attribute called id that is a positiveInteger. Complex types have content that is either a sequence, a list, or a choice of elements. You must use either <sequence>, <all>, or <choice> to enclose your child elements.
Attributes are defined outside of the <sequence>, <all>, or <choice> elements. A complex type element with complex content such as <recipe><food>Eggs</food></recipe> is declared as follows:

<xsd:element name="recipe">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="food" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>

A complex type element with mixed content such as <recipe>Omelette<food>Eggs</food></recipe> is defined with

<xsd:element name="recipe">
  <xsd:complexType mixed="true">
    <xsd:sequence>
      <xsd:element name="food" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>

The mixed attribute is set to true so that the <recipe> element can contain a mixture of both child elements and text or character data. If an element has children, the declaration needs to specify the names of the child elements, the order in which they appear, and the number of times that they can be included. The sequence element specifies the order of child elements:

<xsd:element name="contact">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="name" type="xsd:string"/>
      <xsd:element name="address" type="xsd:string"/>
      <xsd:element name="phone" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>

You can replace sequence with all where child elements can be written in any order but each child element must appear only once:

<xsd:all>
  <xsd:element name="name" type="xsd:string"/>
  <xsd:element name="address" type="xsd:string"/>
  <xsd:element name="phone" type="xsd:string"/>
</xsd:all>

The element choice indicates that only one of the child elements should be included from the group:

<xsd:choice>
  <xsd:element name="address" type="xsd:string"/>
  <xsd:element name="phone" type="xsd:string"/>
</xsd:choice>

The number of times an element appears within another can be set with the minOccurs and maxOccurs attributes:

<xsd:element name="phone" type="xsd:string" minOccurs="0" maxOccurs="1"/>

In the previous example, the element is optional but if it is present, it must appear only once. You can use the value unbounded to specify an unlimited number of occurrences:

<xsd:element name="contact" maxOccurs="unbounded"/>

When neither of these attributes is present, the element must appear exactly once.
If you're not sure about the structure of a complex element, you can specify any content:

<xsd:element name="elementName">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:any minOccurs="0"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>

The author of an XML document that uses this schema will be able to create an optional child element. You can also use the element anyAttribute to add attributes to an element:

<xsd:element name="elementName">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="childName" type="xsd:string"/>
    </xsd:sequence>
    <xsd:anyAttribute/>
  </xsd:complexType>
</xsd:element>

You can use annotations to describe your schemas. An <annotation> element contains a <documentation> element that encloses the description. You can add annotations anywhere, but it's often helpful to include them underneath an element declaration:

<xsd:element name="elementName">
  <xsd:annotation>
    <xsd:documentation>
      A description about the element
    </xsd:documentation>
  </xsd:annotation>
  ... more declarations
</xsd:element>

You can include a schema in an XML document by referencing it in the document root. Schemas always include a reference to the XMLSchema namespace. Optionally, they may include a reference to a target namespace:

<phoneBook xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:noNamespaceSchemaLocation="addressSchema.xsd">

The reference uses noNamespaceSchemaLocation because the schema document doesn't have a target namespace. The topic of schemas is very complicated. There are other areas that I haven't discussed in this chapter. An example that relates to the phone book XML document should make things a little clearer. This listing shows the complete schema from the resource file addressSchema.xsd:

<?xml version="1.0"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="phoneBook">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref="contact" maxOccurs="unbounded"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="contact">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="name" type="xsd:string"/>
        <xsd:element name="address" type="xsd:string"/>
        <xsd:element name="phone" type="xsd:string"/>
      </xsd:sequence>
      <xsd:attribute name="id" type="xsd:integer" use="required"/>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>

The schema starts by declaring itself as an XML document and referring to the XMLSchema namespace. The first element defined is <phoneBook>. This is a complexType element that contains one or more <contact> elements. The attribute ref indicates that I've defined <contact> elsewhere in the document. The <contact> element contains the simple elements <name>, <address>, and <phone> in that order.
Each child element of <contact> can appear only once and is of type string. The <contact> element also contains a required attribute called id that is an integer type. The schema is saved as resource file addressSchema.xsd. The XML file that references this schema is addressSchema.xml. You can open the XML file in XMLSpy or another validating XML editor and validate it against the schema. We haven't covered everything there is to know about XML schemas in this section, but there should be enough to get you started.
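The chapter validates documents with XMLSpy. Python's standard library cannot validate against a DTD or schema, but as a sketch of what those phone book rules actually assert, here is a hand-rolled structural check. The helper name check_phone_book and the sample data are made up for illustration; a real validator handles far more cases.

```python
import xml.etree.ElementTree as ET

# A well-formed phone book document that follows the chapter's rules.
PHONE_BOOK = """<?xml version="1.0"?>
<phoneBook>
  <contact id="1">
    <name>Sas Jacobs</name>
    <address>123 Example Street</address>
    <phone>555-0100</phone>
  </contact>
</phoneBook>"""

def check_phone_book(xml_text):
    """Return a list of violations of the phone book document model."""
    root = ET.fromstring(xml_text)
    errors = []
    if root.tag != "phoneBook":
        errors.append("root element must be phoneBook")
    contacts = list(root)
    if not contacts:
        errors.append("phoneBook must contain at least one contact (contact+)")
    for contact in contacts:
        if contact.tag != "contact":
            errors.append("unexpected element: " + contact.tag)
            continue
        if "id" not in contact.attrib:
            errors.append("contact is missing the required id attribute")
        if [child.tag for child in contact] != ["name", "address", "phone"]:
            errors.append("contact must contain name, address, phone in order")
    return errors

print(check_phone_book(PHONE_BOOK))  # a valid document produces []
```

Dropping the id attribute or reordering the children makes the corresponding error appear, which mirrors what a validating editor reports against the DTD or schema.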
https://flylib.com/books/en/1.350.1.33/1/
Hello, I am using python to draw my layout, and for some reason I do not manage to use the round corners function. Please help! At first I was using Box but it appears that it does not inherit the Round Corners from Polygon, so I changed it. Here's the relevant code I'm using, an example for a single box-polygon:

```python
layout = pya.Layout()
top = layout.create_cell("TOP")
Layer1 = layout.layer(1, 0)

def CreateRect(x1, y1, x2, y2):
    BL = pya.Point(x1, y1)
    BR = pya.Point(x2, y1)
    TR = pya.Point(x2, y2)
    TL = pya.Point(x1, y2)
    RECT = pya.Polygon([BL, BR, TR, TL])
    return RECT

rec = CreateRect(-30, -30, 30, 30)
top.shapes(Layer1).insert(rec.round_corners(10, 1, 100))
layout.write("test.gds")
```

When I run this code, I get the same result as if I just use insert(rec), without the round corners. Note that in the GUI, after that is done, I can go Edit > Selection > Round Corners > 10 / 1 / 100 and it works perfectly. Any chance you know why this doesn't work through python?

Hi, first thing is that your coordinates are pretty small. With the integer types (Polygon, Point, ...), the coordinates are in database units (typically 1nm) and arithmetic is in integers. You have given an inner radius of 10 and an outer radius of 1. As 1 is the smallest possible radius, it will eventually just notch your corners a little, but that's it. If you want micrometer units, use the floating-point types (DPoint, DPolygon, ...). You'll see rounded edges with a 1µm radius on the corners then. Matthias

Dear Matthias, it seems that DPolygon does not have the function round_corners. Could you explain how to use this function there? The rounding that currently happens with Polygon really does need better resolution... Thank you! Shoval.

If you need a better resolution, use a smaller DBU. If the curve gets fuzzy, use fewer points.
DPolygon is just an intermediate container - finally you'll always need to turn it back into integers, because that is what the layout DB accepts. So even if it had round_corners that wouldn't be helpful. Matthias

Dear Matthias, that is the solution that I used, indeed, dbu changed to 0.1um instead of 1um. But doing that means that you have to manually multiply all the numbers in your script by 10 (for example, if previously you had a 50x50 box, now it is 500x500). Is there no easier way to fix this? Changing the dbu also made the file significantly larger and with longer reload times. The smallest part in my layout is 5um, and so, a 1um resolution is just fine for me if we don't count the round_corner thing... Thanks, Shoval.

Hi Shoval, changing the DBU requires changes of the values, that's right. But a well-made script should use variables. If you hardcode the values in many places it will be difficult to maintain the script anyway. Typical DBU values are much smaller than micrometers. GDS is usually written with nm resolution (DBU = 0.001). The file size of GDS will not change with DBU. OASIS, CIF and DXF may get a little larger, but that's it. If you see a file growing much bigger with a smaller DBU that's probably because more points are resolved. So that's eventually what you want to achieve. Matthias
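A sketch of the variable-based approach Matthias suggests: keep the DBU in one place, write all dimensions in micrometers, and convert once at the boundary. The helper name um is made up for illustration, and the pya calls are omitted so the arithmetic can be shown on its own.

```python
DBU = 0.001  # database unit in micrometers (1 nm)

def um(value_um):
    """Convert a dimension in micrometers to integer database units."""
    return int(round(value_um / DBU))

# A 60 x 60 um rectangle with a 10 um corner radius, written once in
# micrometers; changing DBU no longer requires touching these numbers.
x1, y1, x2, y2 = um(-30), um(-30), um(30), um(30)
radius = um(10)
print(x1, y1, x2, y2, radius)  # -30000 -30000 30000 30000 10000
```

The converted integers can then be fed to pya.Point and round_corners exactly as in the original script, and switching to a finer DBU only means editing the one constant.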
https://www.klayout.de/forum/discussion/1092/rounding-corners
When I was making the Topsy Turvy Clock I created a non-blocking delay class so that the Arduino could be doing other things whilst waiting for a timeout. This avoids the use of the delay function, which otherwise "blocks" the operation until the timeout is completed. I realised that this same technique could be used to control the flashing of the RGB LED. This will need to be expanded to handle the colour cycling which will indicate that the Wifi settings need configuring.

Code: Example Usage

Here's an example of the blinking code using the LED on pin 13.

```cpp
#include "Blink.h"

Blinker b(1, 0);

void setup() {
  b.Blink(Short_Blink);
  pinMode(13, OUTPUT);
}

void loop() {
  digitalWrite(13, b.Level());
  // Do other things here as desired
}
```

Virtual Prototyping

As an experiment the code was uploaded to 123D Circuits. This allowed the example to be tested from a web page without the need for actually having an Arduino plugged into the computer.

Next: Swapping out the bridge
https://www.element14.com/community/community/design-challenges/enchanted-objects/blog/2015/06/18/enchanted-design-challenge--blinker
We’ll use the graphviz library to generate DOT-formatted data, and the dot command to generate an image:

```python
import subprocess
import graphviz

_RENDER_CMD = ['dot']
_FORMAT = 'png'

def build():
    comment = "Test comment"
    dot = graphviz.Digraph(comment=comment)

    dot.node('P', label='Parent')
    dot.node('G1C1', label='Gen 1 Child 1')
    dot.node('G1C2', label='Gen 1 Child 2')
    dot.node('G2C1', label='Gen 2 Child 1')
    dot.node('G2C2', label='Gen 2 Child 2')

    dot.edge('P', 'G1C1')
    dot.edge('P', 'G1C2')
    dot.edge('G1C2', 'G2C1')
    dot.edge('G1C2', 'G2C2')

    return dot

def get_image_data(dot):
    cmd = _RENDER_CMD + ['-T' + _FORMAT]

    p = subprocess.Popen(
            cmd,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE)

    (stdout, stderr) = p.communicate(input=dot.encode('utf-8'))
    r = p.wait()
    if r != 0:
        raise ValueError("Command failed (%d):\n"
                         "Standard output:\n%s\n"
                         "Standard error:\n%s" %
                         (r, stdout, stderr))

    return stdout

dot = build()
dot_data = get_image_data(dot.source)

with open('output.png', 'wb') as f:
    f.write(dot_data)
```

Note that we can provide labels for the edges, too. However, they tend to crowd the actual edges and it has turned out to be non-trivial to add margins to them.

Note that there are other render commands available for different requirements; the graphviz homepage lists the layout engines (dot, neato, twopi, circo, fdp, and sfdp). This is an example of sfdp with the “-Goverlap=scale” argument with a very large graph (zoomed out).

If you’re running OS X, I had to uninstall graphviz, install the gts library, and then reinstall graphviz with an extra option to bind it:

```shell
$ brew uninstall graphviz
$ brew install gts
$ brew install --with-gts graphviz
```

If graphviz hasn’t been built with gts, you will get the following error:

Standard error: Error: remove_overlap: Graphviz not built with triangulation library
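Under the hood, graphviz.Digraph is just accumulating DOT text and exposing it as .source before anything is rendered. A dependency-free sketch of that output format follows; the helper to_dot is made up for illustration, and the real library handles quoting, attributes, and escaping far more thoroughly.

```python
def to_dot(nodes, edges, comment=""):
    """Emit DOT text roughly like graphviz.Digraph.source produces.

    `nodes` maps node id -> label; `edges` is a list of (tail, head) pairs.
    """
    lines = []
    if comment:
        lines.append("// " + comment)
    lines.append("digraph {")
    for node_id, label in nodes.items():
        lines.append('\t%s [label="%s"]' % (node_id, label))
    for tail, head in edges:
        lines.append("\t%s -> %s" % (tail, head))
    lines.append("}")
    return "\n".join(lines)

source = to_dot(
    {"P": "Parent", "G1C1": "Gen 1 Child 1"},
    [("P", "G1C1")],
    comment="Test comment",
)
print(source)
```

Seeing the text form makes it clear why any of the render commands can be swapped in: they all consume the same DOT input on stdin.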
https://dustinoprea.com/2015/06/04/simple-graphsdigraphs-with-graphviz/
Getting Started With Kubernetes: Networking

In previous installments, we examined how to deploy applications. However, we only touched on how applications talk to each other inside and outside the cluster. Whether you are building a modern application or modernizing a legacy application, understanding how resources and components talk to each other is essential. In this installment, we’ll examine networking in Kubernetes. Networking in a distributed environment can be complicated. Let’s use a hypothetical example of a web application deployed as microservices in Kubernetes. Containers must be able to communicate with each other; for example, we have a Java app running on Tomcat served by Nginx for the application’s frontend. Pods must be able to connect to other pods. In our example, the frontend sends requests to a REST API connected to a database deployed as a StatefulSet in another Pod. Kubernetes assigns each Pod an IP address, which lets them send requests to each other. However, as Pods are destroyed and created, the IP addresses will change. Kubernetes uses the Service object as an abstraction to define a logical set of Pods, using labels and selectors to access them. Our web application also needs to be available outside the cluster. We’ll examine how the application receives and responds to requests from outside the cluster. Container to container networking A Pod is a logical host, and each Pod is assigned a network namespace, which is a logical networking stack that provides isolation between network devices. Containers in a Pod have the same IP address assigned to the Pod’s networking namespace, which means that containers within a Pod can communicate using localhost. In our example application, the nginx container at port 80 is a reverse proxy for the Java application at port 8080. At this point, neither the Java application nor the reverse proxy is available outside the cluster.
We could define a nodePort, a LoadBalancer, or Ingress to expose the pod outside of the cluster, but we’ll address that later. What does this look like in code?

```typescript
// Create the frontend Pod.
const pod = new k8s.core.v1.Pod("frontend", {
    spec: {
        restartPolicy: "OnFailure",
        containers: [
            {
                name: "nginx",
                image: "nginx",
                resources: { requests: { cpu: "50m", memory: "50Mi" } },
                ports: [{ containerPort: 80 }],
            },
            {
                name: "webapp",
                image: "java_app",
                resources: { requests: { cpu: "50m", memory: "50Mi" } },
                ports: [{ containerPort: 8080 }],
                command: ["catalina.sh", "run"],
            },
        ],
    },
}, { provider: provider });
```

Pod to Pod networking In our web application example, the REST microservice and the database are deployed in separate pods. The frontend pod sends requests to the REST microservice, which sends requests to the database and returns data to the frontend application. Pods can communicate with any Pod on any Node without a NAT (Network Address Translation). Pod to Pod networking is accomplished through a virtual ethernet device on the Node. Each Pod connects to a virtual ethernet device (called vethx) that acts as a tunnel between the Pod and the Node. A network bridge (called cbr0) connects the Pod’s network stack to the Node’s network by creating a single aggregate network from multiple networks. A Pod can communicate with Pods on different Nodes. If the Pod isn’t found on the Node, the network bridge goes to the cluster level to map the IP address ranges to nodes. Once the Pod with the matching address range is found, the request is sent to the target Pod. In our example, the webapp sends a request to the database (10.0.20.1) on another node. Pod to Service networking Pods are ephemeral: Kubernetes adds or destroys Pods when scaling up or down, applications crash, and runtimes reboot. Pod IP addresses are dynamic; Services are built to ensure that network traffic reaches the correct Pod by managing the state of Pods.
Kubernetes assigns a virtual IP address called a clusterIP when a Service is created. Multiple Pods can be associated with a service, and Kubernetes load-balances requests to Pods associated with a Service. Kubernetes keeps track of changing IP addresses through these three methods: iptables, IPVS, and DNS. Let’s take a look at iptables, which is a common case. Each node has a kube-proxy which maintains a table of IP addresses for nodes. The table keeps track of how to reach Pods inside or outside your cluster. Kube-proxy watches for the addition and removal of Service and Endpoint objects, which are the list of addresses (IP and port) of endpoints that implement a Service. Iptables uses fewer system resources because traffic is handled by Linux netfilter, a packet filtering framework inside the Linux kernel. This means that routing traffic doesn’t require switching into the user space where kube-proxy is running; it uses the host system’s kernel network stack instead, which also makes it more reliable. In iptables mode, if the first Pod selected fails to respond, the connection fails. You can use readiness probes to tell kube-proxy which pods are healthy as in this example. There are other methods of Pod to Service networking, such as:

- userspace with kube-proxy.
- IP Virtual Server (IPVS), an in-cluster load balancer that uses more efficient hash tables that scale better than iptables.
- DNS running as a Kubernetes service

We’ve reviewed how packets are routed between Pods and Services, but how does this work in practice? A Service is a logical collection of pods with a policy that defines access among them. Pods are grouped into a Service with labels and selectors. In our web application example, we can label Pods with “webapp” and declare the service’s selector. Below is an example of how to create a service with code.
```typescript
const frontendLabels = { app: "frontend" };

const frontendDeployment = new k8s.apps.v1.Deployment("frontend", {
    spec: {
        selector: { matchLabels: frontendLabels },
        replicas: 3,
        template: {
            metadata: { labels: frontendLabels },
            spec: {
                containers: [
                    {
                        name: "nginx",
                        image: "nginx",
                        resources: { requests: { cpu: "100m", memory: "100Mi" } },
                        env: [{ name: "GET_HOSTS_FROM", value: "env" }],
                        ports: [{ containerPort: 80 }],
                    },
                    {
                        name: "webapp",
                        image: "java_app",
                        resources: { requests: { cpu: "100m", memory: "100Mi" } },
                        env: [{ name: "GET_HOSTS_FROM", value: "env" }],
                        ports: [{ containerPort: 8080 }],
                    },
                ],
            },
        },
    },
});

const frontendService = new k8s.core.v1.Service("frontend", {
    metadata: {
        labels: frontendDeployment.metadata.labels,
        name: "frontend",
    },
    spec: {
        type: "LoadBalancer",
        ports: [{ name: "http", port: 80 }, { name: "app", port: 8080 }],
        selector: frontendDeployment.spec.template.metadata.labels,
    },
});
```

Egress and Ingress Egress is moving data and requests from the cluster to the Internet. Kubernetes can accomplish egress with an Internet gateway that routes packets from inside the cluster by performing network address translation (NAT). NAT maps a Node’s internal IP address to an external IP address on the public Internet. However, Pods have their own IP addresses, which virtual machines or Nodes outside the cluster can’t reach. If we trace a packet’s route, it starts from the Pod’s namespace and connects to the root namespace of the Node via veth to the network bridge, cbr0. Iptables replaces the source IP of the Pod with the Node IP address to get from the Node to the gateway. Changing the source IP is called a source NAT (SNAT), and it lets the packet travel from the Node to the Internet gateway. Ingress is the process of routing Internet traffic to the Kubernetes cluster. There are two solutions for getting Internet traffic into your cluster: a Service LoadBalancer and an Ingress controller. A Service load balancer is also called a Layer 4 LoadBalancer.
The name refers to Layer 4, or the transport layer of the OSI network model. At Layer 4, the transport layer routes packets and resends them if they are not received. You can specify a LoadBalancer when creating a Service. When the Service is created, it advertises the IP address for the load balancer. As an end-user, you can start directing traffic to the load balancer to begin communicating with your Service. The load balancer is not aware of Pods or containers, so it sends packets to the Nodes in the cluster. The iptables rules in each Node will send the packets to the Pod. The Pod’s response uses the Pod’s IP, but iptables rewrites the correct IP on the return with NAT. The other way of routing packets to a pod from a gateway is with a Layer 7 Ingress Controller. Layer 7 is the application layer of the OSI network model. Layer 7 Ingress works on the HTTP/HTTPS portion of the network stack. Similar to a Layer 4 LoadBalancer, it is part of a Service. You must open a port in the Service with the NodePort Service type, and Kubernetes will allocate a port from a specified range. Traffic routed to the Node’s port will be sent to the service by iptables rules. An Ingress object exposes a Node’s port to the Internet. An Ingress load balancer maps HTTP requests to Kubernetes Services. The Ingress method differs depending on how the Kubernetes provider implements it. HTTP load balancers, like Layer 4 network load balancers, only understand Node IPs (not Pod IPs), so traffic routing uses the internal load-balancing in iptables. This process is similar to a Layer 4 Load Balancer, but the significant difference is that the Application Load Balancer is HTTP-aware and can perform host- or path-based routing. Building on the previous example, we can expose the frontend by adding the following.
```typescript
export let frontendIp: pulumi.Output<string>;
frontendIp = frontendService.status.loadBalancer.ingress[0].ip;
```

Summary The goal of this article is to provide an overview of the Kubernetes Networking model. The implementation of the Container Network Interface (CNI) varies from provider to provider. You can read more about CNI implementations on the Kubernetes documentation site. Each article in this series is intended to be independent of each other. However, we build upon concepts introduced in previous articles. If some concepts or terminology are unfamiliar, I encourage reading the earlier articles:
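The article builds resources with Pulumi's TypeScript SDK; to make the Layer 7 behaviour concrete, here is a sketch of the kind of raw Ingress manifest such a controller consumes, with hypothetical host and path rules (the names, host, and the rest-api Service are made up for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  rules:
    - host: shop.example.com        # host-based routing
      http:
        paths:
          - path: /api              # path-based routing to the REST service
            pathType: Prefix
            backend:
              service:
                name: rest-api
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```

An HTTP-aware controller reads these rules and forwards matching requests to the named Services, which is exactly the host- and path-based routing a Layer 4 load balancer cannot do.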
https://www.pulumi.com/blog/getting-started-with-k8s-part5/
FiPy on Telia Norway LTE-M1

Hello, Norway based Telia employee here! I just got the GPy and FiPy in the mail today and I am trying to connect to a base station which is M1-enabled for testing purposes. I copied the example from the documentation but it does not seem to work. lte.isconnected() returns False no matter what I try? I saw that LTE.init() takes a cid parameter, should it be set to 1 for all operators except Verizon (that's how I read the documentation)? There are some posts about a debug mode where I can see some more information and perhaps AT commands. How do I enable this? There are also some posts about the module needing to be certified with different operators. This has not been a problem for any of the others that I have tried (Ublox, Telit, Quectel, Simcom) and after asking around no one seems to know about us requiring any certification to allow devices in our network.

- Fredrik Andersson last edited by

@GeirFrimann Hello, I'm just starting out, can you share your settings for Telia?

- GeirFrimann last edited by

@albert Well I'm living in Sweden, and after updating the modem firmware with the latest file (upgdiff_33080-to-41065.dup) I finally got the device connected:

```python
from network import LTE
import time

lte = LTE()

def connect_internet():
    MyApn = "lpwa.telia.iot"
    lte = LTE()
    lte.init()
    lte.attach(apn=MyApn)
    time.sleep(1)
    print("LTE coverage:", lte.ue_coverage())
    while not lte.isattached():
        time.sleep(0.5)
        print('Attaching...')
    lte.connect()
    while not lte.isconnected():
        time.sleep(0.5)
        print('Connecting...')
    print("\nConnected to LTE-M:", MyApn)

#---------------------------------------------------------
# MAIN
#---------------------------------------------------------
connect_internet()
client = connect_MQTT_Broker()
```

Output:

```
Connecting to COM3...
LTE coverage: False
Attaching...
Attaching...
Connected to LTE-M: lpwa.telia.iot
Connected to MQTT Broker: mymqttbroker.com
```

I wish there were an easier way to upgrade the modem firmware though...
- jmarcelino last edited by
Hi @colateral, I'm not at Pycom anymore; the last manual I had access to is the one published for Cat M1. Maybe one of the current team can help, sorry.
@jmarcelino I found it in the docs, but the manual seems not to be for NB-IoT.
@jmarcelino Where can we find the Sequans manual for AT commands? The above link from xykon is not working anymore.
Hi! Today I finally got a breakthrough and managed to get the FiPy online on NB-IoT! Pycom are still working on LTE-M, but at least we are one step further! This is what needs to be done:
- Upgrade the Pycom firmware to at least version 1.18.1.r2 using the Pycom firmware update tool. At the time of writing, this version is only available through manual download via the forum: [LINK NO LONGER AVAILABLE]. Because of this, you cannot use the automatic download in the updater tool but need to download the firmware and select "Upgrade from file".
- Upgrade the Sequans firmware to at least version NB1–38729. This is the LTE modem inside the Pycom module. Note that only NB-IoT is working at the moment; Pycom are still working on LTE-M.
Follow the steps here:
- Below is an example sending and receiving data from an echo server, in this case echo.ublox.com:

from network import LTE
import socket

lte = LTE()
lte.send_at_cmd('AT+CFUN=0')
lte.send_at_cmd('AT!="clearscanconfig"')
lte.send_at_cmd('AT!="addscanfreq band=20 dl-earfcn=6252"')
lte.send_at_cmd('AT!="zsp0:npc 1"')
lte.send_at_cmd('AT+CGDCONT=1,"IP","lpwa.telia.iot"')
lte.send_at_cmd('AT+CFUN=1')

print("Attaching...")
while not lte.isattached():
    pass

print("Connecting...")
lte.connect()
while not lte.isconnected():
    pass

print("Creating socket...")
s = socket.socket()
address = '195.34.89.241'  # echo.ublox.com
port = 7
try:
    s.connect((address, port))
except OSError:
    print("Failed to open socket!")
    exit(0)

print("Sending message...")
s.send(b"Hello world\r\n")
print("Getting response:")
print(s.readline())
print(s.readline())

print("Closing socket and disconnecting...")
s.close()
lte.disconnect()
lte.dettach()
print("Test complete!")

- jmarcelino last edited by jmarcelino
Thanks for your interest. I've reached out by e-mail because we need to collect some details regarding your network as you're running a test setup - we have no information on what Telia Norway is using for Cat M1.
https://forum.pycom.io/topic/2366/fipy-on-telia-norway-lte-m1
CC-MAIN-2022-21
refinedweb
708
59.9
pyMCFSimplex is a Python Wrapper for MCFSimplex
Project description
pyMCFSimplex
* - Version 0.9.1 *
- Johannes Sommer, 2013 *
- What?
pyMCFSimplex is a Python wrapper for the C/C++ MCFSimplex solver class from the University of Pisa. MCFSimplex is a class that solves big sized Minimum Cost Flow Problems very fast. See also [1] for a comparison.
- How?
pyMCFSimplex was made with SWIG. Don't ask for the time I spent on figuring out how SWIG works. With more knowledge in C++ I would have been faster - but I'm not a C++ guy! I want it in Python!
- Who?
The authors of MCFSimplex are Alessandro Bertolini and Antonio Frangioni from the Operations Research Group at the Dipartimento di Informatica of the University of Pisa [2]. pyMCFSimplex is brought to you by Johannes from the G#.Blog. Feel free to contact me: info(at)sommer-forst.de
- Installation
Installation prerequisites are:
- Python 2.7 or Python 2.6 (only Windows)
- numpy (tested with 1.6.1)
- a build environment if you want to install from the source distribution
4.1 Windows
Select the appropriate MSI package for your installed Python version (2.6 or 2.7) and simply execute the installer.
4.2 Linux
Untar the binary dist package pyMCFSimplex-0.9.1.linux-x86_64.tar.gz with
tar xfvz pyMCFSimplex-0.9.1.linux-x86_64.tar.gz
It will install into /usr/local/lib/python2.7/dist-packages/.
4.3 Source Distribution
Grab the pyMCFSimplex-0.9.1_src_dist.zip file, extract it and run
a) Linux: sudo python setup.py install
b) Windows: start a command line as Administrator and run python setup.py install
- Usage
Here is a first start. "sample.dmx" must be in the same location as your Python script. With these lines of code you can parse a minimum cost flow problem in DIMACS file format and solve it.

from pyMCFSimplex import *
print "pyMCFSimplex Version '%s' successfully imported." % version()
mcf = MCFSimplex()
print "MCFSimplex Class successfully instantiated."
FILENAME = 'sample.dmx'
print "Loading network from DIMACS file %s.." % FILENAME
f = open(FILENAME, 'r')
inputStr = f.read()
f.close()
mcf.LoadDMX(inputStr)

print "Setting time.."
mcf.SetMCFTime()
print "Solving problem.."
mcf.SolveMCF()
if mcf.MCFGetStatus() == 0:
    print "Optimal solution: %s" % mcf.MCFGetFO()
    print "Time elapsed: %s sec " % (mcf.TimeMCF())
else:
    print "Problem unfeasible!"
    print "Time elapsed: %s sec " % (mcf.TimeMCF())

If you want to load a network not from a DIMACS file, you'll have to call LoadNet() while passing C arrays to the method. C arrays in Python? Yes - don't worry. There are helper methods in pyMCFSimplex that'll do this for you. Look at the following piece of code.

mcf = MCFSimplex()
print "MCFSimplex Class successfully instantiated."
print "Reading sample data.."

'''
Problem data of a MCFP in DIMACS notation

c Problem line (nodes, links)
p min 4 5
c
c Node descriptor lines (supply+ or demand-)
n 1 4
n 4 -4
c
c Arc descriptor lines (from, to, minflow, maxflow, cost)
a 1 2 0 4 2
a 1 3 0 2 2
a 2 3 0 2 1
a 2 4 0 3 3
a 3 4 0 5 1
'''

# MCFP problem transformed to integers and lists
nmx = 4  # max number of nodes
mmx = 5  # max number of arcs
pn = 4   # current number of nodes
pm = 5   # current number of arcs
pU = [4,2,2,3,5]     # column maxflow
pC = [2,2,1,3,1]     # column cost
pDfct = [-4,0,0,4]   # node deficit (supply/demand)
pSn = [1,1,2,2,3]    # column from
pEn = [2,3,3,4,4]    # column to

# Call LoadNet() with the return values of the helper methods.
# E.g. CreateDoubleArrayFromList(pU) takes a Python list and returns a pointer
# to a corresponding C array, which is passed as an argument to LoadNet().
mcf.LoadNet(nmx, mmx, pn, pm, CreateDoubleArrayFromList(pU),
            CreateDoubleArrayFromList(pC), CreateDoubleArrayFromList(pDfct),
            CreateUIntArrayFromList(pSn), CreateUIntArrayFromList(pEn))

print "Setting time.."
mcf.SetMCFTime()
mcf.SolveMCF()
if mcf.MCFGetStatus() == 0:
    print "Optimal solution: %s" % mcf.MCFGetFO()
    print "Time elapsed: %s sec " % (mcf.TimeMCF())
else:
    print "Problem unfeasible!"
    print "Time elapsed: %s sec " % (mcf.TimeMCF())

Please check out the sample script gsharpblog_mcfsolve_test.py for more information.
- Good to know
I changed the original source code of MCFClass.h a little bit for SWIG compatibility. All changes are marked by the comment line "//Johannes Sommer". This included:
- LoadDMX() accepts in pyMCFSimplex a string value (original: C++ iostream). The original LoadDMX method is omitted.
- As SWIG cannot deal with nested classes, I pulled the classes Inf, MCFState and MCFException out of the main class MCFClass.
Perhaps the above mentioned changes to the original source are not necessary if you know SWIG very well. But I could not figure out how to get these things to work in the SWIG interface file. Useful hints are very welcome.
[1]
[2]
https://pypi.org/project/pyMCFSimplex/
CC-MAIN-2022-21
refinedweb
835
60.41
See also: IRC log <hhalpin> PROPOSED: to approve SWXG WG Weekly -- 20 May 2009 as a true record <hhalpin> RESOLVED: approved SWXG WG Weekly -- 20 May 2009 as a true record <hhalpin> PROPOSED: to meet again Wed, 3 June. scribe volunteer? <hhalpin> Scribe? <adam> i can do june 10th +1 <hhalpin> OK, let's have adam provisionally as scribe for next meeting. <hhalpin> RESOLVED: move to a scribing list <hhalpin> apassant is a SPARQL liasion in need of other ones ? <hhalpin> Do we have any volunteers? <rreck> are the groups that need liasons listed on the wiki? <hhalpin> Yes. <tinkster> I've put myself down for Microformats. are in need of liasons <jsalvachua> i may try to interface with dataportability.org <hhalpin> Could you add yourselves to the wiki? jsalvachua: to be liaison with dataportability.org toby to be liason with the microformats community <claudio> +039011228aahh is claudio <hhalpin> C'mon no volunteers :( <petef> I volunteered <petef> for Social Network Portability Group List <jsalvachua> i may help with other groups, with the vcard ietf group renanto is on the wiki as the liason with "Policy Language Interest Group" <AlexPassant> Uldis Bojars or John Breslin can be SIOC liaisons <AlexPassant> her's uldis <hhalpin> OK - could you check on them. <hhalpin> ACTION: AlexPassant to see about SIOC liason [recorded in] <trackbot> Created ACTION-15 - See about SIOC liason [on Alexandre Passant - due 2009-06-03]. <uldis> sounds good AlexPassant to check to see if we can find a SIOC liaison <petef> I have added myself to wiki as volunteer liason for data portability, diso and social network portability <hhalpin> ACTION: [DONE] danbri sketch a 5 line template for interaction with other groups (cf InvitedExperts, DiscussionTopics) [recorded in] <hhalpin> ACTION: [DONE] karl to produce a template for TF deliverables. [recorded in] <hhalpin> Karl - do you wish to explain your template? 
Karl put together a template for the user stories karl has put up 2 templates <rreck> are user stories aka use cases? so user stories seemed too long <rreck> i am developing a use case atm karl suggests we should be concise and to the point <cperey> can someone put the URI to the templates into IRC <tinkster> Karl's template - goal of the templates are to speed up the writing <cperey> thanks Toby! templates will give us a common look and feel Karl is open to comments/modifications re: the templates hhalpin happy with the user stories templates <petef> how come petef.a not petef? hhalpin would like a template for the final report to be put up on the wiki hhalpin asks if anyone has any experience in this ? <hhalpin> Does anyone have a final report template as well? <hhalpin> Does anyone want to take that action - i.e. finding a template for final deliverables? <tinkster> microformats.org write all specs on mediawiki - perhaps useful? karl stated that someone should take an action to port the final report template to the wiki can we not look at other XGs? <AlexPassant> can a chair close this action -> <adam> i can take a stab at it <hhalpin> ACTION: adam to find a good final report template and port it to the wiki [recorded in] <trackbot> Created ACTION-16 - Find a good final report template and port it to the wiki [on Adam Boyet - due 2009-06-03]. hhalpin: adam seemed to suggest that he would look into the template hhalpin asks if anyone has any comments re: the organisation so we can move onto the task force issue Task force issues: should we merge context / privacy ? <hhalpin> Context and Privacy Task Force (Karl Dubost)? For Portability and Architectures Task Force (@@)? so, harry is asking if people like proposed task force titles <petef> Portability and Architectures - jsalvachua and petef volunteered last telecon. cperey: has no preference for times, but thinks we need critical mass and we need a clear agenda <petef> signing up to task forces where? 
<hhalpin> Maybe we could remind people to sign up for task forces cperey participation has gone down since the start of XG, cperey thinks we should find out how many people are interested in each Task force <jsalvachua> petef : we both may start together to push the task force <karl> karl *MAY* be the task force leader ;) should we find out how many are interested in each task force <tinkster> Also interested in portability/arch but not leading. <cperey> I agree <hhalpin> Wiki page for each of these task forces? <petef> where? <hhalpin> I don't think we do. <tinkster> I can set up template wiki pages for them. <petef> I will draft one for portability and architectures <hhalpin> ACTION: tinkster to draft wiki pages for task forces [recorded in] <trackbot> Created ACTION-17 - Draft wiki pages for task forces [on Toby Inkster - due 2009-06-03]. toby to set up wiki page re task forces, and people should add what they are thinking portability task force, so hhalpin is interested in how the W3C can promote how data can be made portable <rreck> i would be happy to contribute but would like to have the discussion on the list <tinkster> I can just hear typing. <tinkster> jsalvachua: said that he will try and populate the wiki, regarding a roadmap for the portability task froce <tinkster> <hhalpin> 4. Invited Guests I will put some stuff on the wiki for privacy task force <hajons> -q <hhalpin> Anyone else? invited speaker lists have been populated for the widget topic and the Vcard topic, does anyone else have any ideas on this ? hajons, asks if people should just write to the wiki, if they want to join a task force ? 
<petef> Yes, just sign up to task force on wiki page or is there some other protocol <hajons> how do we join the task forces <hajons> ok <adam> maybe once the task force pages are created, each person can add themselves as a member <adam> of the task force <hhalpin> +1 adam hhalpin states that we should just add names to the list of task forces on the wiki <rreck> i will join the privacy task force ;) <rreck> i dont feel capable of leading it are we going to have invited guest for privacy and context ? harry wodners if there is any mobile interest from the group <hajons> yes, I will propose a guest on context / mobile <hhalpin> tim gave his regrets and daniel applequist can't make this meeting in particular :( <hhalpin> Perhaps you can explain what the OSLO group is? <hhalpin> Does it have a web-page? <hhalpin> Open Sharing of LOcations christine, is interested the mobile technologies, and christine also contacted someone (?) external, and they are not interested in developing protocols OSLO group announced start earlier this year, and they are NOT interested with speaking to W3C <tinkster> Open Sharing of Location-based Objects (OSLO) - <hhalpin> Christine, perhaps stay in touch with them? <caribou> there are others which could speak to the w3c but we need some specific questions <hhalpin> Maybe brainstorm on the wiki what specific questions would be relevant to these mobile operators? so that christine could approach people <hhalpin> Or what is part of their problems? <hhalpin> What could they need? 
nokia people did context and mobile stuff, and I could ask Mor Naaman to talk about the Zonetag project but i am out of touch with the mobile stuff harry asked if we could have a wiki page / content regarding the mobile space <hhalpin> ACTION: cperey to add mobile companies to to Invited Guests and to brainstorm what exact questions or topics would be most interesting [recorded in] <trackbot> Created ACTION-18 - Add mobile companies to to Invited Guests and to brainstorm what exact questions or topics would be most interesting [on Christine Perey - due 2009-06-03]. christine would like an agenda to take to the mobile experts before contacting them <hhalpin> any more use-cases? there is some work i emailed round from a chap from cambridge which had some good examples of how privacy in the social web tends to look like karl thinks that we should add some more user stories, so that we can get a feel for what the XG should be looking at <rreck> yes i will add a story <hhalpin> I feel we are still missing some use-cases regarding businesses and developers <rreck> i wanted to get privacy classes but no one answered my email ah excellent point <karl> karl: we should take the current user stories and check them against the actual social networks such as frienfeed, facebook, etc. <karl> then we can see if our cases make sense <karl> and then we can identify if there are missing ones. i posted about this <rreck> great idea <tinkster> I've got a developer story - I'll create an action for myself. harry says that we should have a matrix <rreck> social network matrix <tinkster> ACTION tinkster to document developer stories on wiki. <trackbot> Created ACTION-19 - Document developer stories on wiki. [on Toby Inkster - due 2009-06-03]. 
showing how social networks uphold privacy <caribou> almost all the user stories that we have are related to privacy/data protection <karl> ACTION: karl to create the matix to be filled [recorded in] <trackbot> Created ACTION-20 - Create the matix to be filled [on Karl Dubost - due 2009-06-03]. we have developed at garlik <rreck> can someone help me find classes of privacy but it is for government institutions about what they do with your data <hhalpin> mischa - could we expland the matrix to deal with commercial social networks? i could put to a similar oen for social networking sites <hhalpin> Maybe we could look through alexa to get out the top social networking sites. <hhalpin> I can do that... karl says that he will put up a matrix based on the current user stories <hhalpin> ACTION: To retrieve top X social networking sites from the top 500 sites of Alexa [recorded in] <trackbot> Sorry, couldn't find user - To <hhalpin> ACTION: hhalpin to retrieve top X social networking sites from the top 500 sites of Alexa [recorded in] <trackbot> Created ACTION-21 - Retrieve top X social networking sites from the top 500 sites of Alexa [on Harry Halpin - due 2009-06-03]. so that people can look at each individual social networking sites <cperey> Karl, what criteria do you want for this list? <cperey> the "top" social networks? <karl> cperey, Alexa traffic <cperey> irrelevant <cperey> for mobile <karl> cperey, what would be your criteria? :) <hhalpin> how could we get the top X social networking sites for mobile? <cperey> Number of unique users per month <hhalpin> is that list available anywhere? <cperey> more relevant for all types of social networks <cperey> I have a list (not published) <rreck> wouldnt you just select out of the top 500 alexa sites? <hhalpin> I thin that list on wikipedia uses the company's own data, right? 
there has been some work from cambridge were they looked into <karl> I will do Mixi (social network in Japan) because I have an account and they don't open account to people outside Japan social networking sites T&Cs i would like to invite the phd student which did the work <cperey> this is a metrics question <rreck> humming is back looking for the persons link <hhalpin> I'm happy to go through alexa if someone else will merge it with wikipedia and Christine's list <cperey> how do you measure a social network <hhalpin> this is returning to the metric question <cperey> OK <karl> +1 for merging +1 merging <rreck> +1 merging <cperey> 30-50 social networks <rreck> yes that sounds reasonable <hhalpin> So we merge list from alexa\wikipedia with Christine's list? <cperey> by country? worldwide? <rreck> worldwide <hhalpin> I would assume world-wide at first, and then later we can break it down by country hhalpin: asked if we could merge christine's list with the alexa rankign and the wikipedia list so as to pick 30 social networking sites <hhalpin> unless your data is already broken down by country Christine <cperey> I can work with Harry on this <cperey> zakim unmute me <melvster> I would suggest you also need IM based networks such as Skype, GTalk, XMPP, if they are not included already, as they have significant usage and maturity cperey's data is broken down by country cperey: on/deck and business models. <hhalpin> 30 or 50? <hhalpin> Start with 30 and then build if needed? cperey would be happy with 30 +1 <adam> +1 cperey: will make a list of the last 30 <rreck> top based on number of users? i would like us to check their Terms and Conditions and see if they actually abide by it <hhalpin> ACTION: cperey to make list of top 30 to do profiles on, to merge with hhalpin's list on alexa [recorded in] <trackbot> Created ACTION-22 - Make list of top 30 to do profiles on, to merge with hhalpin's list on alexa [on Christine Perey - due 2009-06-03]. 
<karl> I'll have to find someone for <hajons> Christine, do you list mobile access as a feature too? <hhalpin> we do it as more of long-term action cperey: data has information regarding social networking sites, and their features not terms and conditions <hhalpin> does your list have the feature? <hhalpin> feature-criteria like instant-messaging, and all of that? all of the social networking sites on christine's list are mobile centric <hhalpin> cperey: "PC-centric" vs. "mobile-centric" and then a continuum inbetween. <karl> what is a mobile access? specific software for mobile devices? <rreck> my cell does the same things as my PC <hhalpin> sounds good to me cperey: will get her mobile-centric list and we should look at it <hhalpin> yes rreck, but some sites don't well with mobile phones if you don't have a gphone/iphone/other dataphone and add an pc-centric social networkings site to the cperey's list <hhalpin> can we check to see if any of these sites take advantage of user-context, like geolocation from mobile phone? <hhalpin> Sounds great! matrix of social networking site and their features <cperey> where does this live? <hhalpin> we should a manufacture a wiki page for the list we need a wiki page for this list <cperey> yes, please <cperey> yes, I will fill in <cperey> I need to sign off and go to another meeting, bye all <petef> bye cperey vcard and portability issues will have its own discussion after this call <hhalpin> any more comments before we move to vcard? <hhalpin> such as things we are missing? does anyone have any issues, there are lots of quiet people about? have we looked over anything <tinkster> Dammit. My phone's gone dead. <tinkster> Still on IRC though. are we missing anything obvious ? <rreck> im not sure this should be an ongoing process if you think we are missing something <hhalpin> 6. 
Invited Guest Telecon: VCard in RDF let the mailing list know <rreck> im trying to find classes of privacy and i cant be the first person to want them rreck: look at this guys work ? I have to leave this call, it is 3 :( <hhalpin> <rreck> mischat: ty <hhalpin> That's the older vCard in RDF format <tinkster> Aren't rdf:Bag/rdf:Alt/rdf:Seq generally seen as poor cousins of rdf:List these days? <petef> I have to leave too, bye <hhalpin> 1) No use of containers in vCard at all <hhalpin> 2) Using *only* rdf:List <rreck> why isnt rdf:bag enough <tinkster> If the order of items is important, use rdf:List, otherwise don't use a container at all. <hhalpin> 3) Letting it all be a free for all. i have to leave this chat now :( i am sorry <hhalpin> Someone else can scribe? right i am sorry i am off now, you are less a scribe now bye all <hhalpin> is there use for RDF containers? <hhalpin> PeterMika: vCard is 99 percent hCard <Norm> Really? Surely there are gobs of vcards out there never rendered in HTML at all <hhalpin> PeterMika: vCard in RDF is fairly minimal <hhalpin> PeterMika: Lots of hCard - between 1-2 billion URLs <hhalpin> Can you share the stats on the usage of the attribute? <tinkster> hhalpin: But 98% of those 1-2 billion URLs are presumably on a handful of domain names. Just a few script tweaks could change everything. <hhalpin> should we merge or not do data-typring? <hhalpin> so for complex structure is the older version better? <hhalpin> do we need or/want to substructure? <hhalpin> I thin Renato had some concern for the subset that wasn't hCard. <jsalvachua> sorry i have to leave now, sorry, see you. <hhalpin> And there was a big argument over round-tripping in SWIG a while back... <hhalpin> timbl: I use this for modelling my addresses <hhalpin> timbl: these are proper vCards <hhalpin> timbl: but I can see working with well-defined subset, but would like round-tripping in this subset <tinkster> There are some parts of vCard which are pretty useless. 
<hhalpin> timbl: made contact ontology <hhalpin> norm: we should stick faithfully to vCard spec <hhalpin> Here is some of the structuring I think: <hhalpin> <vCard:EMAIL rdf: <rdf:value> corky@qqqfoo.com </rdf:value> <rdf:type rdf: <Norm> More precisely: I said that if we claim to model vCard, we should model it. If not, we shouldn't claim to be modeling it. <hhalpin> You CAN do that with the newer hCard <tinkster> The CLASS property is useless. <hhalpin> It's just it's a bit confusing because we then use v:EMAIL as a subject, not a predicate <hhalpin> but we can type predicates <hhalpin> this I think leads to problems with OWL-DL. <tinkster> MAILER is pretty useless too. <hhalpin> But I'm OK with that. <tinkster> (And has been removed in latest drafts.) <timbl> CLASS was for what groups it is in? <tinkster> CLASS has three allowed values: PRIVATE, CONFIDENTIAL and PUBLIC. <timbl> You don't need a bifg process to make a note obsolete, i think -- jsut change the Status Of This Document. <hhalpin> no process issue? <timbl> no process issue. <hhalpin> So, then we can just merge it with Renato's? <hhalpin> So, no process <hhalpin> I would like to NOT have more than one URI for vCard <Norm> The two are these: <Norm> <Norm> <hhalpin> Then I would like to put the SIMPLE stuff up front in Renato's, and put more difficult things involving data-structuring and rdf:List towards the end of the spec <hhalpin> There's also a silly difference in capitalization <hhalpin> I prefer lower-case <tinkster> Newer URI comes up #1 on Google for me, searching "vcard rdf". <tinkster> (without quotes) <tinkster> Norm's spec==namespace URI. <hhalpin> I prefer having spec URI == namespace URI and then use conneg <tinkster> Rennato's namespace = <hhalpin> I mean, one option <hhalpin> I mean one option is that we re-use Renato's URI, then use as the namespace URI. 
<hhalpin> And if one requests "text/html" for <tinkster> +1 to timbl's suggestion of marking old one as obsolete and recommending new. <hhalpin> That's the issue with TR. <tinkster> hCard GRDDL profile <> uses 2006 namespace. <hhalpin> Well, Norm, I think this is at least part of the community. <Norm> Fair enough <hhalpin> What is way forward here? <tinkster> +1 to just an "obsolete" note, as long as it's clear. <timbl> Proposed ACTION: Harry to check with Renato he is OK with: <ivan> action on harry: would refer to the new version, there will be a 'previous version' link to the current one <trackbot> Sorry, couldn't find user - on <tinkster> ACTION hhalpin to would refer to the new version, there will be a 'previous version' link to the current one <trackbot> Created ACTION-23 - would refer to the new version, there will be a 'previous version' link to the current one [on Harry Halpin - due 2009-06-03]. <hhalpin> that works for me and I think Renato will agree with it. <timbl> ... would be kept as the "latest-version" URI, and Reanto's veion woul dbe linke dfrom the new one as a "previous version". <hhalpin> I guess the other question is we keep the "2006" namespace? <hhalpin> <tinkster> My vote: keep both namespaces but only recommend 2006. <timbl> That is Renato's namespace URI. <hhalpin> Ivan: suggests namespace URIs that tend to use version causes version <hhalpin> Ivan: So let's use "2006" <timbl> I agree that it is unwise to use a version number in the URI <timbl> The year has no semantics <tinkster> It's based on vCard 3.0. <hhalpin> I am also a bit against years in URIs, but that's a minority opinion. <timbl> But you can use /ns/vcard if you want <hhalpin> Ivan: would prefer to vCard <hhalpin> PeterMika: I would agree with "2006" <timbl> or /ns/pim/adr <hhalpin> For the time being let's use "2006" <tinkster> Doesn't that just create yet another URI to include in SPARQL queries, etc? <tinkster> We already have one too many. 
<timbl> Ok, so keep the same 2006 ns <ivan> <hhalpin> RESOLVED: keep <hhalpin> The vCard ontology needs examples. At least one that explains you can attach the properties to URIs for people, not just cards! <hhalpin> we need a list of examples <hhalpin> from experience. <hhalpin> PeterMika: Transforming hCard into RDF <hhalpin> PeterMika: vCard represents both a person and organization <hhalpin> PeterMika: hCard is value of organization and fn are the same, the hCard is actually representing an organization and not a person <timbl> That is not RDF <hhalpin> PeterMika: The equivalent properties determine type of object <timbl> So the hcrad > vcard mapping has to do some mapping <hhalpin> PeterMika: so address and whatnot can all apply to person <hhalpin> PeterMika: AND organization <hhalpin> PeterMika: Does person have vCard etc. <hhalpin> PeterMika: And then the vCard have an address etc. <hhalpin> PeterMika: These are two main points people struggle with <hhalpin> TimBL: The last one is the major one. <hhalpin> TimBL: Are we modeling a file or person <hhalpin> PeterMika: The documentation leads this question open <hhalpin> I'm noting unclarity about this is WHY there's no examples :) <hhalpin> I could not consensus on this. <timbl> I agree that one should moddl the person not the card. <timbl> Like you model a book, not a library card. <adam> yes <hhalpin> toby: 4.0 includes a property "kind" that demarcates between people organization and group <hhalpin> +1 vCard 4.0 <hhalpin> timbl: which would be a functional mapping to a RDF class <hhalpin> toby: individual or pre-defined group or organization <hhalpin> toby: also maybe one for "place" <timbl> Those shoudl defintely map to classses. <hhalpin> how stable is vCard 4.0? <hhalpin> Should we track it? <tinkster> Not especially stable. <tinkster> yes, certainly - should be able to attach these to Person/Organisation URIs. 
<hhalpin> ivan: hhalpin said having the same person with several vCard is an edge-case <hhalpin> ivan: since I have two addresses <hhalpin> ivan: so in my phone I have two entries for my name <tinkster> I think vCard 4.0 drafts have ways of representing multiple sets of contact information in one card. <hhalpin> I am liking vCard 4.0 :) <tinkster> e.g. this phone, this fax and this address are for one set of uses; and this phone and this address are for another. <tinkster> I've not really studied that part of the syntax though. <hhalpin> timbl: so you'll miss the fact that these two vCards are not the same in RDF. <uldis> a person may have different vCards which they "give" to people same as one may have different versions of a business card <hhalpin> timbl: about the same person <hhalpin> timbl: does this mean ontology is broken/ <hhalpin> ivan: vCard represents me or my address, I think it represents my address <hhalpin> timbl: we do not walk about what a vCard represents <tinkster> [ a vcard:Vcard] vcard:sameOwnerAs [a vcard:Vcard ] . <tinkster> (No, vCard doesn't have a sameOwnerAs property, but we could always define such a term.) <hhalpin> so we don't drop the vCard class <hhalpin> we keep it and keep domains pretty open-ended <libby> I think that's because the spec's a bit ambivalent norm <hhalpin> but in our examples we use People and Organizations <hhalpin> This would make it clear to users, since most users will just look at examples <hhalpin> timbl: we should be default not give it any class. <hhalpin> should we add this to the GRDDL and the spec, this weird hCard algorithm? <hhalpin> to determine people and organization? <pmika> if fn=org => organization is not possible to express in OWL <hhalpin> so it should be in GRDDL? <hhalpin> Not express it in OWL, but *mention* it in RDF spec and then implement it in GRDDL. 
<pmika> yes, it has to be in the hcard-to-rdf conversion <hhalpin> we should make a test case here <hhalpin> org class only has name and unit. <hhalpin> PeterMika: So these properties should be extended to organization class <hhalpin> PeterMika: ALL properties can be extended to organization class <hhalpin> PeterMika: for example, "adr". Strictly, in vCard people have addresses,not organizations, but people use addresses directly on organizations in hCard. <tinkster> contact:SocialEntity ~= foaf:Agent. <tinkster> orgname, orgunit <tinkster> orgunit can be repeated. <tinkster> [ a v:vCard ; v:fn "Tim Berners-Lee" ; v:org [ a v:Organization ; v:organization-name "MIT" ; v:organization-unit "CISAL" ] ] <tinkster> is how it currently works. <hhalpin> OK, am a bit confused. <tinkster> Organisations are messy in vCard RDF because they're messy in vCard. <timbl> org MIT unit CSAIL means memberOf [ a Unit; name "CSAIL"; partOf [ a Org; name "MIT"]] <tinkster> There are organisation properties which "hang off" people, plus the convention of fn==org whih means that the entire vCard represents an organisation. <hhalpin> PeterMika: allow both, have examples for both cases <timbl> org MIT means memberOf [ a Org; name "MIT"] <hhalpin> PeterMika: show an organization where the unit is not used, just give it name and some other properties. <hhalpin> PeterMika: A case where the vCard is using to describe an organization <tinkster> FOAF's model for people, orgs, membership is a lot more sensible. <hhalpin> TimBL: You need to spot these patterns <hhalpin> PeterMika: Yes, you need to do that in transform. <hhalpin> Perhaps we can add that to GRDDL. <tinkster> foaf:member <hhalpin> Norm? <timbl> Has FOAF got org is part of biggerOrg? 
<hhalpin> skos:widerThan :)
<tinkster> no, but dublin core has "hasPart/isPartOf"
<libby> foaf:Organization
<timbl> I wonder whether a group is a SocialEntity
<libby> don't see any properties tho
<AlexPassant>
<libby> "This is a more 'solid' class than foaf:Group, which allows for more ad-hoc collections of individuals. These terms, like the corresponding natural language concepts, have some overlap, but different emphasis."
<hhalpin> ok
<libby> not sure if there's any formal subclassing
<libby> probably not
<timbl> Maybe SocialEntity should be explicitly allowed to include a group.
<libby> of group, that is
<AlexPassant> may be used together with foaf:member ? (":csail foaf:member :mit")
<libby> not very clear
<tinkster> AlexPassant: not sure of :csail foaf:member :mit.
<timbl> Good Q Alex
<uldis> re. earlier mention of SIOC - for representing organisations FOAF would be more appropriate than SIOC
<pmika> there is no equivalent of organization-unit in FOAF and I would not be in favor of bringing in a single FOAF class into the vCard spec
<tinkster> :microsoft foaf:member :w3c .
<timbl> FOAF more appropriate than SIOC?
<hhalpin> Re FOAF and vCard, I think we first fix the vCard RDF spec, because that's relatively easy, and then see what the future of FOAF is in the next telecon.
<libby> sounds good
<AlexPassant> previous thread on foaf:Group / foaf:Organisation
<Norm> Good luck to all! See you on the next telecon
<ivan> thanks to norm
<libby> cheers AlexPassant
<libby> did you get a response? doesn't look like it AlexPassant
<libby> perhaps bump the thread?
<AlexPassant> libby: unfortunately, no answer - I sent a similar one a year later but no answer as well
<hhalpin> republish it for next week?
<hhalpin> Formal note, it's an IG note, not a SWXG note.
<hhalpin> XGs can't do Notes even :)
<hhalpin> send us examples
<hhalpin> PeterMika will send examples.
<hhalpin> Meeting adjourned
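The fn==org convention discussed above (a card whose formatted name equals its organisation name describes an organisation rather than a person) cannot be expressed in OWL, but it is easy to apply in a conversion step. A minimal sketch of how an hcard-to-rdf converter might implement the heuristic; the dict layout and class names here are illustrative, not taken from any spec:

```python
def classify_hcard(hcard):
    """Pick an RDF class for a parsed hCard using the fn == org heuristic.

    `hcard` is assumed to be a dict with optional 'fn' and 'org' keys.
    Returns 'Organization' when the formatted name is exactly the
    organisation name, and 'Person' otherwise.
    """
    fn = hcard.get("fn")
    org = hcard.get("org")
    if fn is not None and org is not None and fn == org:
        return "Organization"
    return "Person"

# A card for a person affiliated with an organisation:
print(classify_hcard({"fn": "Tim Berners-Lee", "org": "MIT"}))  # Person
# A card where the whole vCard stands for the organisation itself:
print(classify_hcard({"fn": "W3C", "org": "W3C"}))              # Organization
```

A transform like GRDDL would run this check per card before choosing which class to emit in the output RDF.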
http://www.w3.org/2009/05/27-swxg-minutes.html
Euler problems/11 to 20
From HaskellWiki
Revision as of 14:07, 2 December 2011

Using unboxed types and parallel computation:

import Control.Parallel
import Data.Word

collatzLen :: Int -> Word32 -> Int
collatzLen c 1 = c
collatzLen c n = collatzLen (c+1) $ if n `mod` 2 == 0 then n `div` 2 else 3*n+1

pmax x n = x `max` (collatzLen 1 n, n)

solve xs = foldl pmax (1,1) xs

main = print soln
  where
    s1   = solve [2..500000]
    s2   = solve [500001..1000000]
    soln = s2 `par` (s1 `pseq` max s1 s2)

Even faster solution, using an Array to memoize length of sequences:

import Data.Array
import Data.List
import Data.Ord (comparing)

problem_20 = sum $ map Char.digitToInt $ show $ product [1..100]
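The Array-based Haskell solution memoizes Collatz chain lengths so that shared suffixes of sequences are only walked once. The same idea in Python, as an illustrative sketch rather than a translation of the wiki code, caches each computed length in a dict:

```python
def collatz_lengths(limit):
    """Return a dict mapping n -> Collatz chain length for 1 <= n < limit."""
    cache = {1: 1}

    def length(n):
        # Walk until we hit a value whose length is already known,
        # remembering the path so we can fill the cache on the way back.
        path = []
        while n not in cache:
            path.append(n)
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        total = cache[n]
        for m in reversed(path):
            total += 1
            cache[m] = total
        return total

    for n in range(1, limit):
        length(n)
    return cache

lengths = collatz_lengths(10000)
print(lengths[13])  # 10: 13 -> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
print(lengths[27])  # 112
```

As in the Haskell version, the chain length counts both the starting number and the final 1.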
https://wiki.haskell.org/index.php?title=Euler_problems/11_to_20&diff=43329&oldid=13926
Software Used in Configuring JBossWs
- JBoss Application Server 4.0.5.GA
- Eclipse Europa (WTP all in one pack)
- JDK 1.5.x

Pre-requirements to learn JBossWs and follow this article
- Should have Java knowledge.
- Should know how to use Eclipse. (Creating web projects in Eclipse)
- Should have basic knowledge of web services.

Where to get JBossWs from?
You can download the software from the following URL:
- JBoss Application Server 4.0.5.GA
- Eclipse Europa (WTP all in one pack)
- JDK 1.5.x

Defining the JBoss server in Eclipse
The first thing you have to do is define the JBoss server in Eclipse. The steps below explain how to define the JBoss server in Eclipse.
- Step 1: Open Eclipse WTP all in one pack in a new workspace.
- Step 2: Change the perspective to the J2EE perspective if it is not currently in the J2EE perspective.
- Step 3: Once the perspective is changed to J2EE, you can see a tab called Servers in the bottom right panel along with Problems, Tasks, Properties.
- Step 4: If the Servers tab is not found, go to the Eclipse menu: Window > Show View and click on Servers, so that the Servers tab will be displayed.
- Step 5: Go to the Servers tab window and right click the mouse. You will get a pop-up menu called "New".
- Step 6: Clicking on the New menu you will get one more pop-up called "Server". Click on it.
- Step 7: Now you will get the Define New Server wizard.
- Step 8: In the wizard there are options to define many servers. One among them is JBoss. Click on JBoss and expand the tree.
- Step 9: Select JBoss v4.0 and click Next.
- Step 10: Now give the JDK directory and the JBoss home directory. Click Next.
- Step 11: Now the wizard will show you the default address, port, etc. Leave them as they are and click Next.
- Step 12: Click on Finish.
- Step 13: Now you can see the JBoss server listed in the Servers window, and the status is Stopped.
- Step 14: The JBoss server is now defined in Eclipse and is ready to use from within the Eclipse IDE.
Creating a Dynamic Web Application Project

Now it is time to create a web application in order to expose a method as a web service. Create a Dynamic Web Application project in Eclipse, selecting the JBoss server we defined in the Eclipse IDE as the default server for the project. (We assume that whoever is reading this article knows how to create a dynamic web application in Eclipse, so that part is not detailed here.) Once the JBoss server is selected as the server for the web application, all the libraries existing in JBoss will be selected and used by Eclipse in the build path, so there is no need to add any extra jar files for our work.

Now we will start with the Java code. This is a simple Java class and does not have anything to do with web services yet.

JBossWs code sample without annotations: (TestWs.java)

Our Java code will have a single method called "greet". Its functionality is just to accept a string and return the same prefixed with "Hello".

package com.test.dhanago;

public class TestWs
{
    /**
     * This method will accept a string and prefix it with Hello.
     *
     * @param name
     * @return
     */
    public String greet( String name )
    {
        return "Hello" + name;
    }
}

We will add annotations to the above code and modify it like below:

JBossWs code sample with annotations: (TestWs.java)

package com.test.dhanago;

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;

/**
 * This is a webservice class exposing a method called greet which takes an
 * input parameter and greets the parameter with hello.
 *
 * @author dhanago
 */
/*
 * @WebService indicates that this is a webservice interface and the name
 * indicates the webservice name.
 */
@WebService(name = "TestWs")
/*
 * @SOAPBinding indicates binding information of soap messages. Here we have a
 * document-literal style of webservice and the parameter style is wrapped.
 */
@SOAPBinding
(
    style = SOAPBinding.Style.DOCUMENT,
    use = SOAPBinding.Use.LITERAL,
    parameterStyle = SOAPBinding.ParameterStyle.WRAPPED
)
public class TestWs
{
    /**
     * This method takes an input parameter and appends "Hello" to it and
     * returns the same.
     *
     * @param name
     * @return
     */
    @WebMethod
    public String greet( @WebParam(name = "name") String name )
    {
        return "Hello" + name;
    }
}

JBossWs Annotations Walk Through

@WebService(name = "TestWs")

Here, @WebService indicates that this is a webservice class, and name = "TestWs" indicates the webservice name.

@SOAPBinding
(
    style = SOAPBinding.Style.DOCUMENT,
    use = SOAPBinding.Use.LITERAL,
    parameterStyle = SOAPBinding.ParameterStyle.WRAPPED
)

Here, @SOAPBinding indicates the binding information of the soap messages. The properties below it indicate the style of the web service; here it is document-literal style, and the parameter style is wrapped.

@WebMethod indicates this is a method exposed as a web service, and @WebParam indicates the parameter name to be used in the soap message.

JBossWs Deployment Descriptor

Once the code is ready and compiled, you have to modify the web.xml file located in the WEB-INF folder, like below.
(web.xml)

<?xml version="1.0" encoding="UTF-8"?>
<web-app>
 <display-name>TestWS</display-name>
 <servlet>
  <servlet-name>TestWs</servlet-name>
  <servlet-class>com.test.dhanago.TestWs</servlet-class>
  <load-on-startup>1</load-on-startup>
 </servlet>
 <servlet-mapping>
  <servlet-name>TestWs</servlet-name>
  <url-pattern>/TestWs</url-pattern>
 </servlet-mapping>
 <session-config>
  <session-timeout>30</session-timeout>
 </session-config>
 <welcome-file-list>
  <welcome-file>index.html</welcome-file>
  <welcome-file>index.htm</welcome-file>
  <welcome-file>index.jsp</welcome-file>
  <welcome-file>default.html</welcome-file>
  <welcome-file>default.htm</welcome-file>
  <welcome-file>default.jsp</welcome-file>
 </welcome-file-list>
</web-app>

Deploying the JBoss web service application

Once this is done, it is time to build and deploy the application in the JBoss application server. If you have enabled the auto-build functionality of the Eclipse IDE, you are already done with building the application once everything compiles without errors. If the auto-build functionality of Eclipse is not enabled, then right click on the project and build it using the build option.

Go to the Servers window, right click on the JBoss server listed there and select Run. Wait for the server to start. Once it starts, right click on the server listing. You can find an option called "Add and Remove Project". Click on the option. You will get a wizard where you can select your project and move it to the right to configure it with the server. Once you have moved your project, click on Finish. Once that is done, you will find that the project is built again and moved to the server's default deployment folder automatically. The console will display something like below.
Buildfile: D:\ec2\eclipse\plugins\org.eclipse.jst.server.generic.jboss_1.5.102.v20070608\buildfiles\jboss323.xml
deploy.j2ee.web:
 [jar] Building jar: D:\validation\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\Tws.war
 [move] Moving 1 file to D:\MyBoss\jboss-4.0.5.GA_ws121\server\default\deploy
BUILD SUCCESSFUL
Total time: 10 seconds

The dynamic web application I created is named "Tws", so the build has created Tws.war and moved it to the default deploy folder of the JBoss server. To make sure the web service has started once it is deployed, you can find a log line like the one below in the JBoss console.

13:57:52,306 INFO [ServiceEndpointManager] WebService started:

To view the WSDL follow the link http://<machine name>:8080/Tws/TestWs?wsdl

To see the list of web services deployed in your JBoss application server, follow the link. This browser console will have links to see your deployed web services and their WSDL files.

JBossWs Browser Console. Clicking on "View a list of deployed services" will list the deployed web services. In our case we will get the following screen, where we can see the registered service endpoints. In this screen you can see the ServiceEndpointAddress link, which will take you to the WSDL file.

You can also find the WSDL file in the following path:

<jboss_path>\server\default\data\wsdl\<project_name>.war\<filename>.wsdl

You can generate the client stubs using this file and access the web service. Creating the client stubs to access the web service is out of scope of this article.
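Although generating client stubs is out of scope, it is worth seeing what a request to a document/literal wrapped service like this one looks like on the wire: the parameters are simply wrapped in an element named after the operation. A rough sketch of building such an envelope by hand; the target namespace URI below is a placeholder, since the real one comes from the generated WSDL:

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"
# Placeholder namespace: the real value is the WSDL's targetNamespace.
TNS = "http://dhanago.test.com/"

def build_greet_request(name):
    """Build a document/literal wrapped SOAP request for the greet operation."""
    ET.register_namespace("soapenv", SOAP_ENV)
    envelope = ET.Element("{%s}Envelope" % SOAP_ENV)
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_ENV)
    # Wrapper element named after the operation, one child per @WebParam.
    greet = ET.SubElement(body, "{%s}greet" % TNS)
    ET.SubElement(greet, "{%s}name" % TNS).text = name
    return ET.tostring(envelope, encoding="unicode")

xml = build_greet_request("World")
print(xml)
```

POSTing this envelope to the service endpoint URL with Content-Type text/xml is what the generated stubs do for you under the covers.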
The WSDL file generated using JBossWs is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<definitions name="TestWsService" targetNamespace="">
 <message name="TestWs_greet">
  <part name="greet" element="tns:greet"/>
 </message>
 <message name="TestWs_greetResponse">
  <part name="greetResponse" element="tns:greetResponse"/>
 </message>
 <portType name="TestWs">
  <operation name="greet" parameterOrder="greet">
   <input message="tns:TestWs_greet"/>
   <output message="tns:TestWs_greetResponse"/>
  </operation>
 </portType>
 <binding name="TestWsBinding" type="tns:TestWs">
  <soap:binding/>
  <operation name="greet">
   <soap:operation/>
   <input>
    <soap:body/>
   </input>
   <output>
    <soap:body/>
   </output>
  </operation>
 </binding>
 <service name="TestWsService">
  <port name="TestWsPort" binding="tns:TestWsBinding">
   <soap:address/>
  </port>
 </service>
</definitions>

Summary

This article is just a quick start for developers who want to get going with JBoss web services. It is up to the developer's interest to leverage this and proceed further. This is not the only procedure to expose a web service in JBoss; there are a lot of ways to do it, and this is one of them. So don't stop here, and continue exploring.
http://www.javabeat.net/creating-webservice-using-jboss-and-eclipse-europa/3/
Subject: Re: [boost] [review] Multiprecision review scheduled for June 8th - 17th, 2012
From: Vicente J. Botet Escriba (vicente.botet_at_[hidden])
Date: 2012-05-31 01:43:25

Le 29/05/12 23:08, Jeffrey Lee Hellrung, Jr. wrote:
> Hi all,
>
> The review of the proposed Boost.Multiprecision library authored by John
> Maddock and Christopher Kormanyos has been scheduled for
>
> June 8th - June 17th, 2012
>
> and will be managed by myself.
>
> I hope everyone interested can reserve some time to read through the
> documentation, try the code out, and post a formal review, either during
> the formal review window or before.

Hi, glad to see that the library will be reviewed soon. I have spent some hours reading the documentation. Here are some comments:

* As all the classes are in the multiprecision namespace, why name the main class mp_number and not just number?

typedef mp::number<mp::mpfr_float_backend<300> > my_float;

* I think that the fact that operands of different backends can not be mixed in the same operation limits some interesting operations:

I would expect the result of unary operator-() to always be signed. Is this operation defined for signed backends? I would expect the result of binary operator-() to always be signed. Is this operation defined for signed backends? What is the behavior of mp_uint128_t(0) - mp_uint128_t(1)?

It would be great if the tutorial could show that it is however possible to add a mp_uint128_t and a mp_int256_t, or isn't it possible? I guess this is possible, but a conversion is needed before adding the operands. I don't know if this behavior is not hiding some possible optimizations. I think it should be possible to mix backends without too much complexity, and that the library could provide the mechanism so that the backend developer could tell the library how to perform the operation and what the result should be.
* Anyway, if the library authors don't want to open up this feature, the limitation should be stated more clearly, e.g. in the reference documentation: "The arguments to these functions must contain at least one of the following: An mp_number. An expression template type derived from mp_number." There is nothing there that lets one think mixing backends is not supported.

* What about replacing the second bool template parameter by an enum class expression_template {disabled, enabled}, which would be more explicit? That is

typedef mp::mp_number<mp::mpfr_float_backend<300>, false> my_float;

versus

typedef mp::mp_number<mp::mpfr_float_backend<300>, mp::expression_template::disabled> my_float;

* As I posted on this ML already, I think that allocators and precision are orthogonal concepts and the library should allow one to be associated with fixed precision. What about adding a 3rd parameter to state whether it is fixed or arbitrary precision?

* Why doesn't cpp_dec_float have a template parameter to give the integral digits? Or, as in the C++ standard proposal from Lawrence Crowl (), take the range and resolution as template parameters?

* What about adding a Throws specification to the mp_number and backend requirements operations documentation?

* Can the user define a backend for fixed int types that needs to deal with overflow?

* Why is bit_set a free function?

* I don't see anything about overflow for cpp_dec_float backend operations. I guess it is up to the user to avoid overflow, as for integers. What would be the result on overflow? Could this be added to the documentation?

* Can we convert from a cpp_dec_float_100 to a cpp_dec_float_50? If yes, which rounding policy is applied? Do you plan to let the user configure the rounding policy?

BTW, I see in the reference: "Type mp_number is default constructible, and both copy constructible and assignable from: ... Any type that the Backend is constructible or assignable from." I would expect to have this information in some way in the tutorial.
I will also appreciate it if the section "Constructing and Interconverting Between Number Types" says something about the convert_to<T> member function. If not, what about an mp_number_cast function taking a rounding policy as parameter?

* Does the cpp_dec_float backend satisfy any of the Optional Requirements? The same question for the other backends.

* Is there a difference between implicit and explicit construction?

* On C++11 compilers providing explicit conversion, couldn't the convert_to function be replaced by an explicit conversion operator?

* Are implicit conversions possible?

* Do you plan to add constexpr and noexcept to the interface? After thinking a little bit, I'm wondering whether this is possible when using 3pp library backends that don't provide them.

* Why do you allow the argument of left and right shift operations to be signed and throw an exception when it is negative? Why not just forbid it for signed types?

* Why can the "Non-member standard library function support" be used only with floating-point Backend types? Why not with fixed-point types?

* What is the type of boost::multiprecision::number_category<B>::type for all the provided backends? Could the specialization for boost::multiprecision::number_category<B>::type be added in the documentation of each backend? And why not also add B::signed_types, B::unsigned_types, B::float_types, B::exponent_type?

* Why have you chosen the following requirements for the backend?
- negate instead of operator-()
- eval_op instead of operator op=()
- eval_convert_to instead of explicit operator T()
- eval_floor instead of floor

Optimization? Is this optimization valid for short types (e.g. up to 4/8 bytes)?

* As the developer needs to define a class with some constraints to be a model of backend, what are the advantages of requiring free functions instead of member functions?

* Couldn't these be optional if the backend defines the usual operations?
* Or could the library provide a trivial backend adaptor that requires the backend just to provide the usual operations instead of the eval_xxx functions?

* How will the performance of mp_number<this_trivial_adaptor<float>, false> compare with float?

* I don't see in the reference section the relation between files and what is provided by them. Could this be added?

* And last, I don't see anything related to rvalue references and move semantics. Have you analyzed whether their use could improve the performance of the library?

Good luck for the review. A really good work.

Vicente

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
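The rounding question raised above (converting a cpp_dec_float_100 down to a cpp_dec_float_50) is essentially a question about which policy applies when precision is lost. For comparison only, Python's standard decimal module (not Boost.Multiprecision) makes this explicit by tying precision and rounding policy to a context:

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN, ROUND_DOWN

# A 100-significant-digit value, analogous to a cpp_dec_float_100.
ctx100 = Context(prec=100)
x = ctx100.divide(Decimal(2), Decimal(3))

# "Converting" down to 50 digits is a re-rounding under a chosen policy.
ctx50_even = Context(prec=50, rounding=ROUND_HALF_EVEN)
ctx50_down = Context(prec=50, rounding=ROUND_DOWN)

y = ctx50_even.plus(x)  # unary plus re-rounds to the context's precision
z = ctx50_down.plus(x)

print(str(y)[-1])  # 7 (rounded up)
print(str(z)[-1])  # 6 (truncated)
```

Whatever cpp_dec_float does on a narrowing conversion, documenting the policy (and whether it is configurable) would answer the reviewer's question.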
http://lists.boost.org/Archives/boost/2012/05/193534.php
Created on 2014-02-21.17:29:41 by zyasoft, last changed 2015-04-15.20:48:19 by zyasoft.

As seen in the socket-reboot project, it's necessary to wrap calls to bound methods with a function so that they can be directly used by single method interfaces, e.g.:

def workaround_jython_bug_for_bound_methods(_):
    self._notify_selectors()

future.addListener(workaround_jython_bug_for_bound_methods)

The likely solution is to add a PyMethod#__java__, taking into account being bound or not, comparable to PyFunction#__java__.

Target beta 4

Bound methods are supported. Per my commit message:

Note that any callable object should have this behavior, since there is no possibility of ambiguity; however, this change did not add such support for __call__, since it seems to be a bit more complex in how it interacts with class derivation. But a test, currently skipped, has been added for such future support.
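The workaround in the report generalizes: until a bound method can be passed directly where a single-method interface is expected, wrapping it in a plain function restores the expected behavior. A small pure-Python sketch of that adapter pattern; the listener and future classes here are invented stand-ins for illustration:

```python
class Selector:
    def __init__(self):
        self.notified = 0

    def _notify_selectors(self):
        self.notified += 1

def as_listener(bound_method):
    """Wrap a bound method in a plain one-argument function, mirroring the
    workaround used for Jython's single-method-interface conversion."""
    def listener(_event):
        bound_method()
    return listener

class Future:
    """Toy stand-in for a Netty-style future that invokes its listeners."""
    def __init__(self):
        self._listeners = []

    def add_listener(self, fn):
        self._listeners.append(fn)

    def complete(self):
        for fn in self._listeners:
            fn(self)

sel = Selector()
fut = Future()
fut.add_listener(as_listener(sel._notify_selectors))
fut.complete()
print(sel.notified)  # 1
```

The fix described in the issue makes the wrapper unnecessary by letting the bound method itself convert to the interface type.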
http://bugs.jython.org/issue2115
Cigarette Taxes

TL;DR Raising the cigarette tax by one cent in every state will save us about $1 billion per year. In the grand scheme of health care costs, it isn't that much money.

So with this post, I wanted to get back to my roots as an economist. I decided that I wanted to write a blog post similar to the economics papers that got me so gosh darn interested in the field of economics. When I was a tender undergraduate majoring in physics, I ended up taking a class called "The economics of drugs, sex, and crime". It was supposed to be my social science general credit. Unfortunately, that class derailed me from my path as a physicist. Oh well, I never really got an intuitive grasp on Maxwell's equations, so probably for the best. At any rate, the thing that fascinated me about this class was how it blended all sorts of thorny things together. You had politics, in bed with rational policies, making out with culture, and crime, all wrapped up in the comfortable blanket of mathematics. It was like a drug for me, I couldn't turn away. I had to have more. The thought that something as simple as increasing or decreasing a tax could change people's behavior, so that they smoked, or not. Or how the money to be had in the drug trade is an illusion, and it is really about indebtedness and aspirational goods, which makes the kids selling dope on the street corner look more like the middle class than you might think. Just delicious.

So I started down this path, and it became fascinating to me to see how policy levers can be pulled to affect actual real life behavior. And that's where this analysis falls. Today, I want to answer the question, "What would happen to health expenditures if we raised cigarette taxes by 1 percent?" So I jumped onto the CDC's website and pulled down the data related to cigarette taxes and tobacco-related health expenditures. You can get this data from here. And you can follow along at home.
Effect Identification Strategy

So this is a blog post and not an academic article. I'm not trying to pretend that it rises to the level of rigor that would be required of a peer reviewed academic article, but I see a lot of blog posts out there that I look at and go, well that's probably not the right answer. To that end, I want to take some time and talk about the panel data method that I am using and why I think that it will get us pretty close to the right answer when it comes to this question.

The basic idea for why panel regressions give the correct answer comes down to quasi-experiments. The idea is that you have some entity, a person, state, whatever, that you are going to measure at various points in time. Concretely, in our example, we'll be looking at states every year from 2005 to 2009. It isn't a huge dataset, but again, I want to approach believability, not academic peer review rigor, so this will do fine.

So each of these entities acts as a potential control for the others. Also, looking over several time periods acts as a control within a state. How? Basically, we can turn variables up or down over time and see what effect they have independent of the state by washing out any state-level fixed effects and any temporal fixed effects. I think a picture is worth a thousand words to explain what I'm talking about here. More on how to generate this figure below.

Each line in this figure represents a state. You will notice that some states played around with their cigarette taxes during this time. Other states kept their taxes exactly the same. Nature has essentially handed us a perfect experiment. Some of our states are going to act as a control group; the treatment group is the set of states that made changes to their cigarette tax laws during this time period. Moreover, some of the treatment group lowered the tax, some increased it, others did both. So we should be able to see what happened to their health expenditures over time.
This is the essence of why I think that this method will work to get me the right answer.

Clean the Data

So let's start by cleaning the data. It came in two separate files, so we know there will be merging involved. Also, some of our numerical values are encoded as strings, so we'll need to convert those to numbers, which may involve some fancy footwork with regular expressions, as some of the numbers have words such as "per pack" appended to the end or other strings like that. We'll also do some grouping, because the tax is recorded every quarter, but health expenditures are only recorded for the year. So we'll take the average tax for the year. If this were an academic paper, we'd be examining the assumption that the average over the year was appropriate; since it is not, we're just going to say that we're good. We're also going to drop missing values. That's probably a bad idea too, but let's go with it because it makes life easier for a humble blogger.

import pandas as pd
import re
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm

temp1 = pd.read_csv('~/Documents/tobacco/Expenditures.csv')
temp2 = pd.read_csv('~/Documents/tobacco/Taxes.csv')
temp3 = temp2[(temp2['ProvisionDesc'] == 'Cigarette Tax ($ per pack)')]

tax = []
for obj in temp3['ProvisionValue']:
    try:
        tax.append(re.findall('^\d*[0-9](|.\d*[0-9]|,\d*[0-9])?$', obj)[0])
    except:
        tax.append(np.nan)

temp3['tax'] = tax
temp3['tax'] = temp3['tax'].astype('float64')
temp4 = temp3.groupby(['Year','LocationAbbr'])['tax'].mean().reset_index()
df = pd.merge(
    temp1[temp1['Variable']=='Total'],
    temp4,
    how='inner',
    on=['Year','LocationAbbr']
)
df = df[['Year','LocationAbbr','Data_Value','tax']]

Great! Now we have data in a usable format. Let's take a look at some preliminary visualization to see if things make sense. I already introduced this chart, and why it is important, so I won't belabor that point any longer.
I will, however, show you the code that generates it:

for state in df['LocationAbbr']:
    plt.plot(df[df['LocationAbbr']==state]['Year'], df[df['LocationAbbr']==state]['tax'])
plt.title('Variation in Tobacco Tax By State')
plt.ylabel('Tax ($ per pack)')
plt.xlabel('Year')
plt.show()

Here is a similar plot, this time for our response variable. The interesting thing is that we see a clear upward trend in all states. Some do trend upward faster than others, but it points to the fact that we'll need to control for time somehow, so we'll use temporal fixed effects in addition to state-level fixed effects. It was generated very similarly, so similar that I probably should have written a function. I'll kick myself for not doing that later.

for state in df['LocationAbbr']:
    plt.plot(df[df['LocationAbbr']==state]['Year'], df[df['LocationAbbr']==state]['Data_Value'])
plt.title('Variation in Expenditures By State')
plt.ylabel('Expenditures (Millions of Dollars)')
plt.xlabel('Year')
plt.show()

So now all we need to do is generate our fixed effects, and then get started building a model.

df['year_index'] = pd.factorize(df['Year'])[0]
df['state_index'] = pd.factorize(df['LocationAbbr'])[0]
df['tax'] = [obj if obj!=0 else 0.0001 for obj in df['tax']]
df['logtax'] = np.log(df['tax'])
df['logExpense'] = np.log(df['Data_Value'])
df.dropna(inplace=True)

I did some stuff here. The year and state indices that I created using pandas' factorize method are going to let us get our fixed effects in place when we build the model. Also, I want to take the log of taxes and expenditures. This will allow me to estimate the elasticity of health expenditure with respect to taxes, henceforth the elasticity. This is also where I drop the missing values. Now let's build a model.
with pm.Model() as model:
    state_fixed_effects = pm.Flat('State_Fixed', shape=len(df['LocationAbbr'].unique()))
    time_fixed_effects = pm.Flat('Time_Fixed', shape=len(df['Year'].unique()))
    tax_beta = pm.Flat('beta')
    lm = pm.Deterministic('mu',
        tax_beta*np.array(df['logtax'])
        + state_fixed_effects[np.array(df['state_index'])]
        + time_fixed_effects[np.array(df['year_index'])])
    sigma = pm.Flat('sigma')
    sigma2 = pm.Deterministic('sigma2', sigma**2)
    obs = pm.Normal('Observed', mu=lm, sd=sigma2, observed=df['logExpense'])
    trace = pm.sample(1000, tune=1000)

This will set up and run the model. I used flat priors so that I wasn't making too many assumptions. We could have set stronger priors, or we could even have set up a hierarchical model. But that is just adding fanciness to this model that doesn't do much for us unless our prior is strong to the point of affecting the outcome. We can take a look at the posterior distribution for the elasticity by looking at the trace plot. We can do that with a simple line of code.

pm.traceplot(trace, varnames=['beta'])
plt.show()

And that should get you something that looks like this:

This figure clearly shows that the elasticity is somewhere around -0.0035. That would mean that a 1% increase in the cigarette tax should lead to a 0.0035% decrease in expenditures related to tobacco. That's pretty inelastic, but then again, think about it: smoking is an addiction, so we should expect it to be inelastic. A one cent increase in the tax is, on average, about a 2.3% increase in the tax per pack. So given this information, on average, raising the cigarette tax by one cent should decrease expenses by something like 0.0081%, which turns out to be about $19.29 million per state per year, or $964.57 million nationally per year. That's a huge number, but it basically rounds up to about a billion dollars per year for a one cent per pack increase nationally in the cigarette tax.
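The back-of-envelope arithmetic above can be spelled out step by step. A small sketch using the post's own numbers (these are the author's estimates, not new data):

```python
elasticity = -0.0035      # % change in expenditures per 1% change in the tax
pct_tax_increase = 2.3    # a one-cent rise ~= 2.3% of the average per-pack tax
n_states = 50

# Percent change in tobacco-related expenditures for a one-cent increase.
pct_change = elasticity * pct_tax_increase
print(round(pct_change, 5))     # -0.00805, i.e. the post's ~0.0081% decrease

# Scale the post's per-state savings figure up to a national total.
per_state_savings_musd = 19.29  # millions of dollars per state per year
national_musd = per_state_savings_musd * n_states
print(round(national_musd, 1))  # 964.5 -> roughly $1 billion per year

# As a share of the $3.3 trillion the US spent on health care in 2016.
share_pct = national_musd * 1e6 / 3.3e12 * 100
print(round(share_pct, 3))      # 0.029 -> the post's ~0.03% of all spending
```

Note that the per-state dollar figure itself depends on average state-level expenditures from the CDC data, which is why only the scaling steps are reproduced here.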
This is, of course, a back of the envelope calculation, and we could improve it significantly. Now, I don't know how much consumer surplus you would lose under such a plan, and I am not sure how much tax revenue it would raise. It is quite possible that the net present value of this harebrained scheme of imposing a national 1 cent tax on every pack of cigarettes could be negative. Also, when talking about health care costs, a billion dollars annually just isn't that much. The Centers for Medicare and Medicaid Services tell me that we spent $3.3 trillion in 2016. $1 billion amounts to a decrease in spending on health care of about 0.03% of all spending. You could easily see that get gobbled up by inflation next year. So we aren't talking about a huge gain here, but it could help a little.

What do you think? Leave me a comment below.
https://barnesanalytics.com/effect-cigarette-tax-health-expenditures-using-bayesian-panel-data-methods
Before everyone had a multitude of computers of their own, computers were rare, and if you wanted to use one you had to share it. Given that the demand for computers exceeded the supply, people had to share time on them. Initially you could do this with a stop watch, but it's better for the computer itself to be able to measure this time, since computers became more complicated:

Pre-emption, where one process can be interrupted to run another, means the time taken up by a program isn't just the difference between when the program started and ended.

Multi-threading, where a program can run multiple commands simultaneously, means you can use CPU time at a rate of more than one CPU second per second.

Computers became so pervasive that most computer users don't need to share, but virtual server providers also need to account for time used, and CPU time can also be used to measure how long it takes to perform an operation for profiling purposes, so when a program is slow you know which part is the most worth your time to optimise.

Getting CPU time

The CPU time is read in the same way as other clocks, with different clock IDs for each process or thread.

Current process with CLOCK_PROCESS_CPUTIME_ID.

int ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time);

Current thread with CLOCK_THREAD_CPUTIME_ID.

int ret = clock_gettime(CLOCK_THREAD_CPUTIME_ID, &time);

Another process with clock_getcpuclockid(3).

int pid_gettime(pid_t pid, struct timespec *tp)
{
	int ret;
	clockid_t clockid;

	ret = clock_getcpuclockid(pid, &clockid);
	if (ret != 0) {
		return ret;
	}

	ret = clock_gettime(clockid, tp);
	return ret;
}

Another thread with pthread_getcpuclockid(3).

int thread_gettime(pthread_t thread, struct timespec *tp)
{
	int ret;
	clockid_t clockid;

	ret = pthread_getcpuclockid(thread, &clockid);
	if (ret != 0) {
		return ret;
	}

	ret = clock_gettime(clockid, tp);
	return ret;
}

See gettime.c for an example program for reading the times, and Makefile for build instructions.
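Python exposes the same clocks, which makes for a quick way to see the process/thread distinction without writing C. A short sketch; time.clock_gettime and the CLOCK_*_CPUTIME_ID constants are POSIX-only, while process_time and thread_time are the portable equivalents:

```python
import time

def burn(n=200000):
    # Busy work so that some CPU time is actually consumed.
    total = 0
    for i in range(n):
        total += i * i
    return total

start_proc = time.process_time()  # CPU time of the whole process
start_thr = time.thread_time()    # CPU time of the calling thread only
burn()
proc_elapsed = time.process_time() - start_proc
thr_elapsed = time.thread_time() - start_thr

print(proc_elapsed > 0 and thr_elapsed > 0)  # True

# On Linux the raw clock IDs from clock_gettime(2) are exposed as well:
if hasattr(time, "CLOCK_PROCESS_CPUTIME_ID"):
    total_cpu = time.clock_gettime(time.CLOCK_PROCESS_CPUTIME_ID)
    print(total_cpu >= proc_elapsed)  # total includes our burn() time
```

In a single-threaded program the two elapsed values track each other; once extra threads do work, the process clock advances faster than any one thread's clock.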
Profiling

We can instrument code (see profile-unthreaded.c) to see how much time a section takes to run.

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>
    #include <time.h>

    static void print_time(FILE *f, struct timespec time)
    {
        fprintf(f, "%lld.%09lld\n", (long long)time.tv_sec,
                (long long)time.tv_nsec);
    }

    int main(int argc, char **argv)
    {
        enum {
            ITERATIONS = 1000000,
        };
        int ret, exit = 0;
        struct timespec start, end;

        ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
        if (ret != 0) {
            perror("clock_gettime");
            exit = 1;
            goto exit;
        }

        for (int i = 0; i < ITERATIONS; i++) {
            fprintf(stdout, "% 7d\n", i);
        }

        ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
        if (ret != 0) {
            perror("clock_gettime");
            exit = 1;
            goto exit;
        }

        /* print the elapsed CPU time, borrowing a second if needed */
        end.tv_sec -= start.tv_sec;
        end.tv_nsec -= start.tv_nsec;
        if (end.tv_nsec < 0) {
            end.tv_sec -= 1;
            end.tv_nsec += 1000000000;
        }
        print_time(stderr, end);

    exit:
        return exit;
    }

    $ make profile-unthreaded
    $ ./profile-unthreaded >/tmp/f
    0.073965395

We can make use of threads to try to speed this up (see profile-threaded.c).

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>
    #include <time.h>
    #include <unistd.h>

    #define ARRAY_SIZE(x) (sizeof(x) / sizeof(*x))

    static void print_time(FILE *f, struct timespec time)
    {
        fprintf(f, "%lld.%09lld\n", (long long)time.tv_sec,
                (long long)time.tv_nsec);
    }

    struct thread_args {
        int fd;
        int start;
        unsigned len;
    };

    void *thread_run(void *_thread_args)
    {
        struct thread_args *thread_args = _thread_args;
        char buf[9];

        for (int i = thread_args->start;
             i < thread_args->start + thread_args->len;
             i++) {
            ssize_t len = snprintf(buf, ARRAY_SIZE(buf), "% 7d\n", i);
            pwrite(thread_args->fd, buf, len, i * len);
        }
        return NULL;
    }

    int main(int argc, char **argv)
    {
        enum {
            ITERATIONS = 1000000,
            THREADS = 4,
        };
        int i, ret, exit = 0;
        struct timespec start, end;
        pthread_t threads[THREADS];
        struct thread_args thread_args[THREADS];

        ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
        if (ret != 0) {
            perror("clock_gettime");
            exit = 1;
            goto exit;
        }

        for (i = 0; i < ARRAY_SIZE(threads); i++) {
            thread_args[i].fd = 1;
            thread_args[i].start = ITERATIONS / THREADS * i;
            thread_args[i].len = ITERATIONS / THREADS;
            ret = pthread_create(&threads[i], NULL, thread_run,
                                 &thread_args[i]);
            if (ret != 0) {
                perror("pthread_create");
                exit = 1;
                break;
            }
        }
        if (exit != 0) {
            /* cancel only the threads that were actually created */
            for (i--; i >= 0; i--) {
                (void) pthread_cancel(threads[i]);
            }
            goto exit;
        }

        for (i = 0; i < ARRAY_SIZE(threads); i++) {
            ret = pthread_join(threads[i], NULL);
            if (ret != 0) {
                perror("pthread_join");
                exit = 1;
            }
        }
        if (exit != 0) {
            goto exit;
        }

        ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
        if (ret != 0) {
            perror("clock_gettime");
            exit = 1;
            goto exit;
        }

        /* print the elapsed CPU time, borrowing a second if needed */
        end.tv_sec -= start.tv_sec;
        end.tv_nsec -= start.tv_nsec;
        if (end.tv_nsec < 0) {
            end.tv_sec -= 1;
            end.tv_nsec += 1000000000;
        }
        print_time(stderr, end);

    exit:
        return exit;
    }

    $ make profile-threaded
    $ ./profile-threaded >/tmp/f
    3.185380729

By instrumenting we can tell that this actually made this section a lot slower.

Don't do this

Manually instrumenting things is a lot of work, which means you are only going to do it for bits you already suspect are slow.

GCC's -pg adds instrumentation to dump times in a format readable by gprof.

valgrind, when invoked like valgrind --tool=callgrind prog, records timings that kcachegrind can view. This runs your program on an emulated CPU, so it can use its own model of how long an operation takes for accounting time, so it is unaffected by the overhead of profiling.

perf makes use of CPU features to measure with minimum overhead.

    make CFLAGS=-ggdb command
    perf record --call-graph=dwarf ./command
    perf report

Previously I mentioned the Advent of Code as a possible thing you might want to look at for using as a way to learn a new language in a fun and exciting way during December. This year, it'll be running again, and I intend to have a go at it again in Rust because I feel like I ought to continue my journey into that language.

It's important to note, though, that I find it takes me between 30 and 90 minutes per day to engage properly with the problems, and frankly the 30 minute days are far more rare than the 90 minute ones. As such, I urge you to not worry if you cannot allocate the time to take part every day. Ditto if you start and then find you cannot continue, do not feel ashamed. Very few Yakking readers are as lucky as I am in having enough time to myself to take part.
However, if you can give up the time, and you do fancy it, then join in and if you want you can join my private leaderboard and not worry so much about competing with the super-fast super-clever people out there who are awake at midnight Eastern time (when the problems are published). If you want to join the leaderboard (which contains some Debian people, some Codethink people, and hopefully by now, some Yakking people) then you will need (after joining the AoC site) to go to the private leaderboard section and enter the code: 69076-d4b54074. If you're really enthusiastic, and lucky enough to be able to afford it, then support AoC via their AoC++ page with a few dollars too. Regardless of whether you join in with AoC or not, please remember to always take as much pleasure as you can in your coding opportunities, however they may present themselves.
https://yakking.branchable.com/archives/2017/11/
The code is not that hard to understand; the hard part is understanding how the serial communication works. So, I'm going to show you how to tell your Arduino to blink using your computer. Once you understand this you should be able to expand both the Python code and the Arduino code to fit your own projects.

Step 1: Sorting Out Python

Now, obviously, we're going to need Python if we want to do anything, so we better get that! If you don't have it installed, head over to the Python website and download it!

Once we have Python installed we're going to need a new library called PySerial. This is going to provide all the functions and methods we will need to talk to our Arduino! If you're using a Windows machine, check out their SourceForge page for the Windows installer. If you're using macOS/Linux you're going to have to look around the PySerial website. Also, if you're familiar with using Eclipse you might be interested in the Python add-on for Eclipse. Check it out if you would like to program in the Eclipse environment.

Now if everything is installed we can actually start writing our Python program!

Step 2: Python Code!

Now we can actually start programming! So, in order to actually use the PySerial methods we need to import the serial library before we try to use it.

Next I declare a variable that will act as a flag. When serial connections are opened with the Arduino it takes a little while to sort things out, so we won't try to send anything to the Arduino until it sends something to us.

Next we initialize a serial variable, "ser", that will be communicating with the Arduino. Two parameters are sent when initializing a serial variable. First you have the port that it will be communicating with. In my case it was COM11, but yours may differ.
To find out what port your Arduino is using, connect it to your computer and open up Device Manager. The Arduino IDE will also tell you which port it is using.

The second parameter that is sent is the baud rate. The baud rate is the speed that the serial controller will send and receive at; the important thing is this baud rate matches the baud rate you use in the Arduino sketch. I chose 9600 since it is a middle-of-the-road speed and we don't need anything too fast for this example. If you want to use a faster or slower speed, use Google to figure out which speeds to use.

We want to tell the Arduino to blink! So I have a write function that sends the number 1 to the Arduino. When the Arduino sees this it's going to blink twice! Now we want to wait until the Arduino tells us that it has blinked twice. By having the while loop the program will loop (do nothing) until it receives a message. If we were to leave this while loop out, the program would close the serial port and the Arduino would stop blinking. When we receive the message from the Arduino we can close the serial port and end the program. So that's all we need for the Python program, just 10 lines of code!

Step 3: Setting Up the Arduino

I'm going to assume everyone has the Arduino software installed and working. Since we just want to make a light blink we can just use the light that is on the Arduino, or connect an LED to pin 13 and ground. Like the circuit, the code isn't too scary either.

In void setup() we start the serial monitor with a baud rate of 9600. The rate doesn't matter, just make sure it matches the baud rate in the Python program. Next, we make an output pin that our LED is connected to. Lastly, we write something over serial so the Python program knows we are ready.

In void loop() we have one big if statement. Basically this is waiting for the Python program to send something over serial. Once the Arduino receives something the LED will blink twice.
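The handshake just described (wait for the Arduino's ready message, send a 1, then spin until it reports back) can be sketched without any hardware by standing in a fake for serial.Serial; the reply byte values here are assumptions for illustration, and with a real board you would pass serial.Serial("COM11", 9600) instead, using whatever port Device Manager or the Arduino IDE reports:

```python
class FakeSerial:
    """Stand-in for serial.Serial so the protocol logic can run anywhere."""
    def __init__(self):
        self.sent = []
        # What the sketch sends back: a ready marker, then "still busy",
        # then "finished". These particular bytes are assumptions.
        self._replies = [b"A", b"1", b"0"]

    def read(self, size=1):
        return self._replies.pop(0)

    def write(self, data):
        self.sent.append(data)

    def close(self):
        pass


def blink_once(ser):
    """The flow from Step 2: wait for ready, send 1, wait until done."""
    ser.read()               # block until the Arduino says it is ready
    ser.write(b"1")          # tell the Arduino to blink twice
    while ser.read() != b"0":
        pass                 # loop (do nothing) until it reports back
    ser.close()
    return ser.sent
```

Calling blink_once(FakeSerial()) sends exactly one command byte, mirroring the ten-line program the tutorial describes.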
After two blinks the Arduino sends a message back saying it is finished blinking. The Python program will see this and then stop.

Step 4: Test It Out!

Upload the sketch to your board and then run the Python program. If everything was done properly you should see the light blink twice!! If not, make sure the programs are the same as mine, or leave a comment here or message me, I'll be happy to help!

If you want to start messing around with the code, try adding in another LED and make them alternate, try sending different things over serial. If you get stuck just remember Google is your friend! Thanks for reading!

Question 1 year ago

While running these codes I am getting an error in the Python code given here. The error is:

    FFEBG36GY4RFYGO.py", line 24, in <module>
      ser.write("1")
    Python\Python37\lib\site-packages\serial\serialwin32.py", line 308, in write
      data = to_bytes(data)
    Python\Python37\lib\site-packages\serial\serialutil.py", line 63, in to_bytes
      raise TypeError('unicode strings are not supported, please encode to bytes: {!r}'.format(seq))
    TypeError: unicode strings are not supported, please encode to bytes: '1'

How to handle this error?

Answer 21 hours ago

ser.write(1); worked for me

6 months ago

Great tutorial! Thanks.

1 year ago

Check the following logo I created

3 years ago

Can we write Python code in the Arduino IDE? Please do reply. Thank you!

Reply 2 years ago

Not in the Arduino IDE, but you can use another IDE, Zerynth Studio: with Zerynth you can program Arduino (32-bit MCU) boards directly in Python

Reply 2 years ago

exclude 8bit boards :(

Reply 2 years ago

You can with microPython :

Reply 3 years ago

Currently, no. The only way is using pySerial (and compiling a .py script to .ino (not sure it's possible))

Reply 3 years ago

Hello, have a look at the Pumbaa project. It is available in the Arduino IDE.
2 years ago

The example gives errors:

    b.append(item) # this one handles int and str for our emulation and ints for Python 3.x
    TypeError: an integer is required

Sorry, too difficult for me. Thank you anyway.

3 years ago

The code that says "import" is the Python code. You run that on your PC using Python. The Arduino code is "sketch_SerialTest.ino"

Reply 3 years ago

Sorry I accidentally deleted my first post :/ Okay, so I set it up as a Python script and I ran it in the terminal and got this error:

    Traceback (most recent call last):
      File "SerialTest.py", line 8, in <module>
        import serial
      File "/home/echo/Desktop/serial.py", line 5, in <module>
        ser = serial.Serial("COM11", 9600)
    AttributeError: 'module' object has no attribute 'Serial'

So is it a problem in the code?

3 years ago

Hi, I have tried to make this project using a relay rather than an LED. I can get the relay to switch using Arduino and Python together, but then the Arduino continues to loop, switching the relay on and off, rather than ending once it has sent '0' back to Python to indicate it has done the loop. Any ideas why this may be? Thanks, Martin

3 years ago

I love the combination of Python and the Arduino. So I have created a collection about it. I have added your instructable, you can see the collection at: >>...

4 years ago

Hi guys, if you're looking for programming Arduino in Python, take a look at VIPER! website: instructables profile:

7 years ago on Introduction

At first I wanna thank you for submitting your tutorial here, it's very nice and helpful, however it has some flaws. This Arduino code will blink for infinity UNLESS you 'clear' memory (which stores one value at a time). One way is to read from serial (eg. in Arduino software) and the second one is replacing Serial.write('0'); with Serial.read(); After doing this it works like a charm!

Reply 4 years ago on Introduction

Thx for the fix !

4 years ago on Introduction

Thanks!!!! it helped me a lot....

4 years ago on Introduction

Thanks bra
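The TypeError in the first question above comes from Python 3, where pyserial's write() wants bytes rather than a str. A small sketch of the fix (the helper name is my own):

```python
def to_serial_bytes(data):
    """Encode str payloads for pyserial's write(), which needs bytes on Python 3."""
    if isinstance(data, str):
        return data.encode("ascii")
    return bytes(data)

# ser.write("1") fails on Python 3; send encoded bytes instead:
payload = to_serial_bytes("1")   # ready for ser.write(payload)
```

Equivalently, just write the literal as bytes: ser.write(b"1").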
https://www.instructables.com/id/Arduino-and-Python/
[solved] QHash's insert() wrong use? —> VS debugger showed botched data due to missing Qt add-in for VS

Hello peoplez, I have a beginner problem with my QHash. I built a controller, which handles my QHash. This controller has got a method:

    void addNewData(qint32 tID, QString tName, qint32 tCat, qint32 tCon)
    {
        SpecialObject tPI (tID, tName, tCat, tCon);  // tPI is set correctly
        this->myQHash.insert(tID, tPI);              // wrong values inserted into myQHash
    }

The object tPI is correct; all its attributes are correctly set. The error is in the next line, where the key/SpecialObject-value pair (tID, tPI) should be inserted into myQHash. It is not a real error, but rather an incorrect insertion. For testing purposes, I have elsewhere:

    pDataController.addNewData(2511, tr("Home"), 20481, 1272923);

But according to my debugger the following values are stored (1, BadPtr, 262148, 17) instead of (2511, tr("Home"), 20481, 1272923).

Does anybody know what mistake I have made? I am thankful for any advice.

Cheers
Huck

44 replies

You have to create your SpecialObject on the heap, because the way it is now it gets deleted after you leave the addNewData function.

Ok, and as far as I know every pointer created with new should be deleted by the programmer later on.

    void addNewData(qint32 tID, QString tName, qint32 tCat, qint32 tCon)
    {
        SpecialObject *tPI = new SpecialObject(tID, tName, tCat, tCon);
        this->myQHash.insert(tID, tPI);
        delete tPI;
    }

does not make any sense, right? Shall I delete all those SpecialObject pointers in the destructor later? How would you handle that?

A bit difficult to give advice with so little information. If SpecialObject is a small class, and it makes sense to pass SpecialObject objects around by value, then the code you have written should be OK. If not, then you must create the objects on the heap, using new, and then delete them when you no longer need them, e.g. when/before the QHash is destroyed.

@ludde: Yes, at that moment SpecialObject has got one QString and 5 qint32 attributes. However, this is going to be developed and thus rising.
@ZapB: That's why I wrote my own class for it; I know that I need a special format for output purposes later on. And that's why I haven't taken "struct". Anyway, creating my object on the heap does not solve my problem either. During debugging I can see bogus values in pDataController as on the screenshot above. And the same CXX0030 error.

Looks to me like you have not properly implemented the assignment operator and/or copy constructor of your SpecialObject class.

@Hunger I thought QHash took a copy of objects when you insert them, rather than using a reference to them as you are implying?

Correct! To quote from the docs about container classes [doc.qt.nokia.com]. And more important: If we don't provide a copy constructor or an assignment operator, C++ provides a default implementation that performs a member-by-member copy. These autogenerated operators and constructors are most likely not sufficient!

Here you have compilable but not yet linkable code. Linker error: see bottom.
DataItem.cpp

    #include "DataItem.h"
    #include <QString>

    DataItem::DataItem()
    {
        this->dataID = 0;
        this->dataCon = 0;
        this->dataCat = 0;
        this->dataName = "defName";
    }

    DataItem::DataItem(qint32 tId, QString tName, qint32 tCat, qint32 tCon)
    {
        this->dataID = tId;
        this->dataCon = tCon;
        this->dataCat = tCat;
        this->dataName = tName;
    }

    qint32 DataItem::getID() const { return this->dataID; }
    QString DataItem::getName() const { return this->dataName; }
    qint32 DataItem::getCat() const { return this->dataCat; }
    qint32 DataItem::getCon() const { return this->dataCon; }

    DataItem::~DataItem(void) {}

DataItem.h

    #pragma once
    #include <QString>

    class DataItem
    {
    public:
        DataItem();
        DataItem(qint32 tId, QString tName, qint32 tCat, qint32 tCon);
        ~DataItem(void);
        qint32 getID() const;
        QString getName() const;
        qint32 getCat() const;
        qint32 getCon() const;
    private:
        qint32 dataID;
        QString dataName;
        qint32 dataCat;
        qint32 dataCon;
    };

DataController.cpp

    #include <QHash>
    #include "DataController.h"
    #include "DataItem.h"

    //DataController::DataController(void)
    //{
    //
    //}

    //void DataController::addNew(DataItem * tPI) { this->myQHash.insert(tPI->getID(), *tPI); }

    void DataController::addNew(qint32 tID, QString tName, qint32 tCat, qint32 tCon)
    {
        //DataItem tPI (tID, tName, tCat, tCon);
        DataItem * tPI = new DataItem(tID, tName, tCat, tCon);
        this->myQHash.insert(tID, tPI);
    }

    DataController::~DataController(void) {}

DataController.h

    #pragma once
    #include <QString>
    #include <QHash>
    #include "DataItem.h"

    class DataController
    {
    public:
        //DataController(void);
        ~DataController(void);
        //void addNew(DataItem * tPI);
        void addNew(qint32 tID, QString tName, qint32 tCat, qint32 tCon);
    private:
        QHash<qint32, DataItem*> myQHash;
    };

MyDockWidget.cpp

    #include "MyDockWidget.h"
    #include "DataController.h"
    #include <QString>

    MyDockWidget::MyDockWidget( )
    {
        //DataController myDataController;
    }

    int MyDockWidget::main(int p_argsc, char *p_argsv[] )
    {
        //this->myDataController = new DataController();
        myDataController.addNew(2511, "Heim", 20481, 1272923);
        myDataController.addNew(2512, "Work", 20482, 1272963);
        // Breakpoint here
        return 1;
    }

MyDockWidget.h

    #ifndef MY_DOCK_WIDGET_H
    #define My_DOCK_WIDGET_H
    #include <QString>
    #include "DataController.h"

    class MyDockWidget
    {
        //Q_OBJECT
    public:
        /// @brief constructor
        MyDockWidget();
        DataController myDataController;
        int main( int p_argsc, char *p_argsv[] );
    };

    #endif // MY_DOCK_WIDGET_H

I now get a linker error:

    1>Linking...
    1>DataItem.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall QString::~QString(void)" (__imp_??1QString@@QAE@XZ) referenced in function __unwindfunclet$??0DataItem@@QAE@XZ$0

but I did not use any libs or DLLs?! How can that be?
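For what it's worth, the unresolved __imp_ QString symbols are a link-time problem, not a source problem: QString lives in QtCore, so the Qt libraries must be on the linker's input list. This is exactly what the Visual Studio Qt add-in normally sets up, as the thread title suggests. A minimal qmake project sketch that would pull in the right link flags (the .pro file name is an assumption; the file list is taken from the posts above):

```
# qhashdemo.pro -- qmake links QtCore by default, which provides QString
TEMPLATE = app
QT += core
SOURCES += DataItem.cpp DataController.cpp MyDockWidget.cpp
HEADERS += DataItem.h DataController.h MyDockWidget.h
```

Running qmake on this and building with the generated makefile (or VS project) resolves the QString symbols.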
http://qt-project.org/forums/viewthread/7419/
@types

Definitely Typed is definitely one of TypeScript's greatest strengths. The community has effectively gone ahead and documented the nature of nearly 90% of the top JavaScript projects out there. This means that you can use these projects in a very interactive and exploratory manner, with no need to have the docs open in a separate window while making sure you don't make a typo.

Using @types

Installation is fairly simple as it just works on top of npm. So as an example you can install type definitions for jquery simply as:

    npm install @types/jquery --save-dev

@types supports both global and module type definitions.

Global @types

By default any definitions that support global consumption are included automatically. E.g. for jquery you should be able to just start using $ globally in your project. However for libraries (like jquery) I generally recommend using modules:

Module @types

After installation, no special configuration is required really. You just use it like a module e.g.:

    import * as $ from "jquery";

    // Use $ at will in this module :)

Controlling Globals

As can be seen, having a definition that supports global leak-in automatically can be a problem for some teams, so you can choose to explicitly bring in only the types that make sense using the tsconfig.json compilerOptions.types, e.g.:

    {
        "compilerOptions": {
            "types" : [
                "jquery"
            ]
        }
    }

The above shows a sample where only jquery will be allowed to be used. Even if the person installs another definition like npm install @types/node, its globals (e.g. process) will not leak into your code until you add them to the tsconfig.json types option.
https://basarat.gitbooks.io/typescript/docs/types/@types.html
Button not initialized

964914 Dec 3, 2012 7:28 PM

I have a controller class which extends a simple Java class. Now inside the base class I declare a button whose name matches the fx:id of the button in the FXML file. Now when the FXML is loaded the controller is initialized; however, the button is not initialized and is still null. I presume the button should have been initialized by itself.

Any idea why it's not initialized? If the button is in the MyController class, then the buttons are initialized.

Sample code:

    public class TopClass {
        public TopClass () {
            super();
        }

        @FXML
        private Button button1;
        @FXML
        private Button button2;
    }

    public class MyController extends TopClass implements Initializable {
        @Override
        public void initialize(URL url, ResourceBundle rb) {
            // do some initlization work
            button1.setDisable(true); // It throws null pointer here.
        }
    }

FXML:

    <AnchorPane id="AnchorPane" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" prefHeight="500.0" prefWidth="500.0" xmlns:
      <children>
        <Button id="button1" fx:
        <Button id="button2" fx:
      </children>
    </AnchorPane>

1. Re: Button not initialized
edward17 Dec 3, 2012 7:53 PM (in response to 964914)
Because you have put them in the parent class as private? Change to protected (or let them default to package).

2. Re: Button not initialized
jsmith Dec 3, 2012 8:11 PM (in response to 964914)
Perhaps @FXML doesn't work with inheritance. I'd be interested to know if this works when you follow edward's advice and change the access permissions of the fields. If it doesn't, then you will need to define the superclass fields in the subclass to get their injection working.

3. Re: Button not initialized
edward17 Dec 3, 2012 8:28 PM (in response to jsmith)
Just dummied up a test and inheritance works fine for @FXML - i.e. I changed Button to protected and moved it to the parent class and it was populated.

4. Re: Button not initialized
964914 Dec 3, 2012 8:33 PM (in response to edward17)
I tried it and it worked. Thanks.
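A sketch of the working pattern from edward17's test: the superclass field is declared protected instead of private, so the FXMLLoader's injection can reach it from the subclass controller. (This assumes the classes live in separate files with the usual javafx.fxml and javafx.scene.control imports, and the same FXML as in the original question.)

```java
// TopClass.java
public class TopClass {
    @FXML
    protected Button button1;   // protected, not private: injection now reaches it
    @FXML
    protected Button button2;
}

// MyController.java
public class MyController extends TopClass implements Initializable {
    @Override
    public void initialize(URL url, ResourceBundle rb) {
        button1.setDisable(true);   // injected by the loader; no NullPointerException
    }
}
```

The only change from the original sample is the access modifier on the inherited fields.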
https://community.oracle.com/message/10727265