Let's say I have 15 elements. I want to group them in such a way that:
group1 = 1 - 5
group2 = 6 - 9
group3 = 10 - 12
group4 = 13 - 14
group5 = 15
group1 = 5
group2 = 4
group3 = 3
group4 = 2
group5 = 1
frames "loop" value
1 - 5 => 0
6 - 9 => 1
10 - 12 => 2
13 - 14 => 3
15 => 4
int loopint = 5 - @loop;
if (@Frame % loopint == 0)
@loop += 1;
If I understand this correctly, then
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(int argc, char *argv[])
{
    int n = atoi(argv[1]);
    for (int i = 1; i <= n; ++i) {
        printf("%d: %f\n", i, ceil((sqrt(8 * (n - i + 1) + 1) - 1) / 2));
    }
}
is an implementation in C.
The math behind this is as follows: The 1 + 2 + 3 + 4 + 5 you have there is a Gauß sum, which has a closed form S = n * (n + 1) / 2 for n terms. Solving this for n, we get
n = (sqrt(8 * S + 1) - 1) / 2
Rounding this upward would give us the solution if you wanted the short stretches at the beginning, that is to say 1, 2, 2, 3, 3, 3, ...
Since you want the stretches to become progressively shorter, we have to invert the order, so S becomes (n - S + 1). Therefore the formula up there.
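As a quick sanity check (sketched here in Python rather than C), evaluating the formula for n = 15 reproduces the grouping from the question — elements 1-5 get a stretch of size 5, elements 6-9 a stretch of size 4, and so on:

```python
import math

def stretch_size(i, n):
    # Size of the stretch that element i (1-based) of n belongs to,
    # with stretches shrinking towards the end: 5, 4, 3, 2, 1 for n = 15.
    return math.ceil((math.sqrt(8 * (n - i + 1) + 1) - 1) / 2)

sizes = [stretch_size(i, 15) for i in range(1, 16)]
print(sizes)  # → [5, 5, 5, 5, 5, 4, 4, 4, 4, 3, 3, 3, 2, 2, 1]
```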
EDIT: Note that unless the number of elements in your data set fits the n * (n+1) / 2 pattern precisely, you will have shorter stretches either at the beginning or at the end. This implementation places the irregular stretch at the beginning. If you want them at the end,
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(int argc, char *argv[])
{
    int n = atoi(argv[1]);
    int n2 = (int) ceil((sqrt(8 * n + 1) - 1) / 2);
    int upper = n2 * (n2 + 1) / 2;
    for (int i = 1; i <= n; ++i) {
        printf("%d: %f\n", i, n2 - ceil((sqrt(8 * (upper - i + 1) + 1) - 1) / 2));
    }
}
does it. This calculates the next such number beyond your element count, then calculates the numbers you would have if you had that many elements.
SOCKETPAIR(2) BSD Programmer's Manual SOCKETPAIR(2)
socketpair - create a pair of connected sockets
#include <sys/types.h>
#include <sys/socket.h>

int socketpair(int d, int type, int protocol, int *sv);
The socketpair() call creates an unnamed pair of connected sockets in the specified domain d, of the specified type, and using the optionally specified protocol. The descriptors used in referencing the new sockets are returned in sv[0] and sv[1]. The two sockets are indistinguishable.
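Many higher-level runtimes expose this same call. As an illustration of the "indistinguishable connected sockets" behaviour described above, here is a sketch using Python's socket.socketpair() wrapper (on Unix this calls socketpair(2) in the local domain):

```python
import socket

# Create an unnamed pair of connected sockets in the local (Unix) domain.
a, b = socket.socketpair()

# Data written on either socket can be read from the other — the two
# descriptors are interchangeable.
a.sendall(b"ping")
msg = b.recv(4)

b.sendall(b"pong")
reply = a.recv(4)

print(msg, reply)  # → b'ping' b'pong'

a.close()
b.close()
```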
A 0 is returned if the call succeeds, -1 if it fails.
The call succeeds unless:

[ENFILE]  The system file table is full.
pipe(2), read(2), write(2)
The socketpair() function conforms to X/Open Portability Guide Issue 4.2 ("XPG4.2").
The socketpair() function call appeared in 4.2BSD.
This call is currently implemented only for the LOCAL domain. Many operating systems only accept a protocol of PF_UNSPEC, so that should be used instead of PF_LOCAL for maximal portability.
Events and Properties¶
Events are an important part of Kivy programming. That may not be surprising to those with GUI development experience, but it’s an important concept for newcomers. Once you understand how events work and how to bind to them, you will see them everywhere in Kivy. They make it easy to build whatever behavior you want into Kivy.
The following illustration shows how events are handled in the Kivy framework.
Introduction to the Event Dispatcher¶
One of the most important base classes of the framework is the EventDispatcher class. This class allows you to register event types, and to dispatch them to interested parties (usually other event dispatchers). The Widget, Animation and Clock classes are examples of event dispatchers.
EventDispatcher objects depend on the main loop to generate and handle events.
Main loop¶
As outlined in the illustration above, Kivy has a main loop. This loop is running during all of the application’s lifetime and only quits when exiting the application.
Inside the loop, at every iteration, events are generated from user input, hardware sensors or a couple of other sources, and frames are rendered to the display.
Your application will specify callbacks (more on this later), which are called by the main loop. If a callback takes too long or doesn’t quit at all, the main loop is broken and your app doesn’t work properly anymore.
In Kivy applications, you have to avoid long/infinite loops or sleeping. For example the following code does both:
while True:
    animate_something()
    time.sleep(.10)
When you run this, the program will never exit your loop, preventing Kivy from doing all of the other things that need doing. As a result, all you’ll see is a black window which you won’t be able to interact with. Instead, you need to “schedule” your animate_something() function to be called repeatedly.
Scheduling a repetitive event¶
You can call a function or a method X times per second using schedule_interval(). Here is an example of calling a function named my_callback 30 times per second:
def my_callback(dt):
    print('My callback is called', dt)

Clock.schedule_interval(my_callback, 1 / 30.)
You have two ways of unscheduling a previously scheduled event. The first would be to use unschedule():
Clock.unschedule(my_callback)
Or, you can return False in your callback, and your event will be automatically unscheduled:
count = 0

def my_callback(dt):
    global count
    count += 1
    if count == 10:
        print('Last call of my callback, bye bye!')
        return False
    print('My callback is called')

Clock.schedule_interval(my_callback, 1 / 30.)
Scheduling a one-time event¶
Using schedule_once(), you can call a function “later”, like in the next frame, or in X seconds:
def my_callback(dt):
    print('My callback is called!')

Clock.schedule_once(my_callback, 1)
This will call my_callback in one second. The second argument is the amount of time to wait before calling the function, in seconds. However, you can achieve some other results with special values for the second argument:
- If X is greater than 0, the callback will be called in X seconds
- If X is 0, the callback will be called after the next frame
- If X is -1, the callback will be called before the next frame
The -1 is mostly used when you are already inside a scheduled event and want to schedule a call BEFORE the next frame happens.
A second method for repeating a function call is to first schedule a callback once with schedule_once(), and a second call to this function inside the callback itself:
def my_callback(dt):
    print('My callback is called!')
    Clock.schedule_once(my_callback, 1)

Clock.schedule_once(my_callback, 1)
While the main loop will try to keep to the schedule as requested, there is some uncertainty as to when exactly a scheduled callback will be called. Sometimes another callback or some other task in the application will take longer than anticipated and thus the timing can be a little off.
In the latter solution to the repetitive callback problem, the next iteration will be called at least one second after the last iteration ends. With schedule_interval() however, the callback is called every second.
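The difference can be sketched with plain arithmetic, no Kivy required. Assume (purely for illustration) that each callback body takes 0.3 seconds to run: a fixed interval keeps firing on the clock's grid, while rescheduling from inside the callback pushes every later call back by the body's running time:

```python
CALLBACK_COST = 0.3   # assumed: seconds the callback body takes to run
INTERVAL = 1.0        # requested delay between calls

# schedule_interval-style: firings stay on the one-second grid.
interval_times = [INTERVAL * k for k in range(1, 5)]

# schedule_once-from-inside-the-callback style: each body's running
# time is added before the next one-second wait begins.
reschedule_times = []
t = 0.0
for _ in range(4):
    t += INTERVAL                       # wait the requested second...
    reschedule_times.append(round(t, 1))
    t += CALLBACK_COST                  # ...then the body runs before rescheduling

print(interval_times)    # → [1.0, 2.0, 3.0, 4.0]
print(reschedule_times)  # → [1.0, 2.3, 3.6, 4.9]
```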
Trigger events¶
If you want to schedule a function to be called only once for the next frame, like a trigger, you might be tempted to achieve that like so:
Clock.unschedule(my_callback) Clock.schedule_once(my_callback, 0)
This way of programming a trigger is expensive, since you’ll always call unschedule, whether or not you’ve even scheduled it. In addition, unschedule needs to iterate the weakref list of the Clock in order to find your callback and remove it. Use a trigger instead:
trigger = Clock.create_trigger(my_callback) # later trigger()
Each time you call trigger(), it will schedule a single call of your callback. If it was already scheduled, it will not be rescheduled.
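Outside of Kivy, the pattern is easy to model: a trigger merely arms a pending flag, and the scheduler runs the callback at most once per frame no matter how many times the trigger was pulled. The following is a plain-Python sketch of the idea, not Kivy's actual implementation:

```python
class FrameScheduler:
    """Toy stand-in for Kivy's Clock: collects callbacks for the next frame."""
    def __init__(self):
        self.pending = []

    def create_trigger(self, callback):
        state = {"armed": False}
        def trigger():
            if not state["armed"]:          # already scheduled? do nothing
                state["armed"] = True
                self.pending.append((state, callback))
        return trigger

    def tick(self):
        """Run one frame: fire each armed callback exactly once."""
        for state, callback in self.pending:
            state["armed"] = False
            callback()
        self.pending = []

calls = []
clock = FrameScheduler()
trigger = clock.create_trigger(lambda: calls.append("fired"))

trigger(); trigger(); trigger()   # pulled three times before the frame...
clock.tick()
print(calls)                      # → ['fired'] — scheduled only once
```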
Widget events¶
A widget has 2 default types of events:
- Property event: if your widget changes its position or size, an event is fired.
- Widget-defined event: e.g. an event will be fired for a Button when it’s pressed or released.
For a discussion on how widget touch events are managed and propagated, please refer to the Widget touch event bubbling section.
Creating custom events¶
To create an event dispatcher with custom events, you need to register the name of the event in the class and then create a method of the same name.
See the following example:
class MyEventDispatcher(EventDispatcher):
    def __init__(self, **kwargs):
        self.register_event_type('on_test')
        super(MyEventDispatcher, self).__init__(**kwargs)

    def do_something(self, value):
        # when do_something is called, the 'on_test' event will be
        # dispatched with the value
        self.dispatch('on_test', value)

    def on_test(self, *args):
        print("I am dispatched", args)
Attaching callbacks¶
To use events, you have to bind callbacks to them. When the event is dispatched, your callbacks will be called with the parameters relevant to that specific event.
A callback can be any python callable, but you need to ensure it accepts the arguments that the event emits. For this, it’s usually safest to accept the *args argument, which will catch all arguments in the args list.
Example:
def my_callback(value, *args):
    print("Hello, I got an event!", args)


ev = MyEventDispatcher()
ev.bind(on_test=my_callback)
ev.do_something('test')
Please refer to the kivy.event.EventDispatcher.bind() method documentation for more examples on how to attach callbacks.
Introduction to Properties¶
Properties are an awesome way to define events and bind to them. Essentially, they produce events such that when an attribute of your object changes, all properties that reference that attribute are automatically updated.
There are different kinds of properties to describe the type of data you want to handle.
Declaration of a Property¶
To declare properties, you must declare them at the class level. The class will then do the work to instantiate the real attributes when your object is created. These properties are not attributes: they are mechanisms for creating events based on your attributes:
class MyWidget(Widget):
    text = StringProperty('')
When overriding __init__, always accept **kwargs and use super() to call the parent’s __init__ method, passing in your class instance:
def __init__(self, **kwargs):
    super(MyWidget, self).__init__(**kwargs)
Dispatching a Property event¶
Kivy properties, by default, provide an on_<property_name> event. This event is called when the value of the property is changed.
Note
If the new value for the property is equal to the current value, then the on_<property_name> event will not be called.
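The equality check can be modelled with an ordinary Python descriptor. This is only a sketch of the observable-property idea — not Kivy's StringProperty implementation — but it shows the "no event for an equal value" rule in isolation:

```python
class ObservableProperty:
    """Sketch: dispatches on_<name> only when the stored value changes."""
    def __init__(self, default):
        self.default = default

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.name, self.default)

    def __set__(self, obj, value):
        old = obj.__dict__.get(self.name, self.default)
        obj.__dict__[self.name] = value
        if value != old:                          # equal value → no event
            handler = getattr(obj, "on_" + self.name, None)
            if handler:
                handler(obj, value)               # (instance, value) arguments

class MyWidget:
    text = ObservableProperty("")
    def __init__(self):
        self.events = []
    def on_text(self, instance, value):
        self.events.append(value)

w = MyWidget()
w.text = "hello"   # changed → on_text fires
w.text = "hello"   # same value → no event
w.text = "bye"     # changed again → fires
print(w.events)    # → ['hello', 'bye']
```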
For example, consider the following code:

class CustomBtn(Widget):

    pressed = ListProperty([0, 0])

    def on_touch_down(self, touch):
        if self.collide_point(*touch.pos):
            self.pressed = touch.pos
            return True
        return super(CustomBtn, self).on_touch_down(touch)

    def on_pressed(self, instance, pos):
        print('pressed at {pos}'.format(pos=pos))
In the code above at line 3:
pressed = ListProperty([0, 0])
We define the pressed Property of type ListProperty, giving it a default value of [0, 0]. From this point forward, the on_pressed event will be called whenever the value of this property is changed.
At Line 5:
def on_touch_down(self, touch):
    if self.collide_point(*touch.pos):
        self.pressed = touch.pos
        return True
    return super(CustomBtn, self).on_touch_down(touch)
We override the on_touch_down() method of the Widget class. Here, we check for collision of the touch with our widget.
If the touch falls inside of our widget, we change the value of pressed to touch.pos and return True, indicating that we have consumed the touch and don’t want it to propagate any further.
Finally, if the touch falls outside our widget, we call the original event using super(...) and return the result. This allows the touch event propagation to continue as it would normally have occurred.
Finally on line 11:
def on_pressed(self, instance, pos):
    print('pressed at {pos}'.format(pos=pos))
We define an on_pressed function that will be called by the property whenever the property value is changed.
Note
This on_<prop_name> event is called within the class where the property is defined. To monitor/observe any change to a property outside of the class where it’s defined, you should bind to the property as shown below.
Binding to the property
How to monitor changes to a property when all you have access to is a widget instance? You bind to the property:
your_widget_instance.bind(property_name=function_name)
For example, consider the following code:
If you run the code as is, you will notice two print statements in the console. One from the on_pressed event that is called inside the CustomBtn class and another from the btn_pressed function that we bind to the property change.
The reason that both functions are called is simple. Binding doesn’t mean overriding. Having both of these functions is redundant and you should generally only use one of the methods of listening/reacting to property changes.
You should also take note of the parameters that are passed to the on_<property_name> event or the function bound to the property.
def btn_pressed(self, instance, pos):
The first parameter is self, which is the instance of the class where this function is defined. You can use an in-line function as follows:
The first parameter would be the instance of the class in which the property is defined.
The second parameter would be the value, which is the new value of the property.
Here is the complete example, derived from the snippets above, that you can use to copy and paste into an editor to experiment.
Running the code above will give you the following output:
Our CustomBtn has no visual representation and thus appears black. You can touch/click on the black area to see the output on your console.
Compound Properties¶
When defining an AliasProperty, you normally define a getter and a setter function yourself. Here, it falls to you to define when the getter and the setter functions are called, using the bind argument.
Consider the following code.
Here cursor_pos is an AliasProperty which uses the getter _get_cursor_pos with the setter part set to None, implying this is a read-only Property.
The bind argument at the end defines that the on_cursor_pos event is dispatched when any of the properties used in the bind= argument change.
Wrapper to integrate CGI scripts into django views.
Simple function to wrap old-style CGI scripts/binaries to integrate them into the views of a django app.
This method preserves the shortcomings of the CGI deployment, but may be adequate when the performance hit caused by CGI spawning a new process for each request is negligible, a legacy CGI executable needs to be embedded in a new application, or the ease of deployment is a priority.
Data returned by the CGI will be streamed to the client.
Example:
from django_cgi_wrap import cgi_wrap def example_view(request): return cgi_wrap(request, "/usr/bin/mapserv")
Also see the “tests” directory for a working example as well as the doc-strings of the module itself.
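The mechanics behind such a wrapper are straightforward. The sketch below is plain Python with illustrative names — it is not the package's actual code — but it shows the essential steps: run the CGI program with a CGI-style environment, then split its output into headers and body before building the HTTP response. (Real CGI output may use \r\n line endings; this sketch only handles \n for brevity.)

```python
import subprocess

def run_cgi(argv, extra_env=None):
    """Run a CGI executable and split its output into (headers, body)."""
    env = {"GATEWAY_INTERFACE": "CGI/1.1", "REQUEST_METHOD": "GET"}
    env.update(extra_env or {})
    out = subprocess.run(argv, env=env, capture_output=True).stdout
    raw_headers, _, body = out.partition(b"\n\n")
    headers = {}
    for line in raw_headers.decode().splitlines():
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return headers, body

# Stand-in CGI "script": prints a header block, a blank line, then the body.
headers, body = run_cgi(
    ["/bin/sh", "-c", 'printf "Content-Type: text/plain\\n\\nhello from cgi"']
)
print(headers, body)  # → {'Content-Type': 'text/plain'} b'hello from cgi'
```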
17 October 2011 08:31 [Source: ICIS news]
SINGAPORE (ICIS)--Shanxi Sanwei owns and operates a 100,000 tonne/year BDO plant at Hongdong in China's Shanxi province.
After expansion works are completed, the company’s BDO capacity will be increased to 250,000 tonnes/year, the source added.
The company also plans to bring onstream a 30,000 tonne/year polytetramethylene ether glycol (PTMEG) plant at the same time at Hongdong, which will raise its PTMEG capacity to 85,000 tonnes/year, the source said.
“Construction work on the BDO plant and the new PTMEG plant has started,” said the source.
Shanxi Sanwei announced its expansion plans in a statement to the Shenzhen Stock Exchange in mid-September.
The expansion work on the BDO plant and the construction of the PTMEG plant are expected to cost around yuan (CNY) 1.5bn ($235m).
($1 = CNY6.38)
We were telling “encryption jokes” (like ROT26) at the office, until a colleague mentioned that a part of the Windows Registry is ROT13 encrypted.
It appeared to be true, Windows Explorer will store info about the programs you run in registry key
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\{75048700-EF1F-11D0-9888-006097DEACF9}\Count.
The value names stored in this key are ROT13 encrypted.
Google for UserAssist and you’ll find several pages where this is explained in detail, like this one. Some of these pages mention programs to decrypt ROT13, but I didn’t find a program to display and manage the UserAssist data. So I decided to write my own, just for the fun of it.
I wanted this program to have a GUI and I didn’t want to spend much time coding low-level details, so I decided to code it with the Microsoft .NET 2.0 Framework.
I posted my program (source code and binaries) here on the gotdotnet site.
Download the ZIP file, you’ll have to extract UserAssist\UserAssist\bin\Release\UserAssist.exe to get my program. There is no setup, it’s just one executable.
I used Microsoft Visual C# 2005 Express Edition because it’s free, so you can examine and modify my program. But it’s not needed to run my program, you’ll only need the .NET Framework 2.0 runtime to run my program (download it only if you have a problem running my program, if you have an up-to-date version of Windows XP, the .NET 2.0 Framework will already be installed).
My program displays the decrypted UserAssist entries as a treeview:
ROT13 is a monoalphabetic substitution cipher, these ciphers are very easy to decrypt, e.g. by frequency analysis. The namespace System.Security.Cryptography in .NET 2.0 does not implement the ROT13 cipher, I had to implement my own method.
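For reference, ROT13 itself is only a few lines in any language. This is a Python sketch (not the C# method used in the tool): rotate each letter 13 places and leave every other character alone:

```python
def rot13(s):
    out = []
    for ch in s:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)   # digits, punctuation, backslashes: unchanged
    return "".join(out)

print(rot13("UEME_RUNPATH"))        # → 'HRZR_EHACNGU'
# Applying it twice gives the plaintext back: ROT13 is its own inverse.
print(rot13(rot13("notepad.exe")))  # → 'notepad.exe'
```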
While I developed my program, I became intrigued with the binary data. Because I had no access to the Internet at that time, I had to resort to the good old trial and error technique to discover the meaning of this data (I tested my program on Windows XP SP2).
For all entries starting with UEME_RUN, the binary data is 16 bytes long. The first 4 bytes remained zero at first, and the fifth byte increased by one each time I ran the corresponding program (like notepad.exe). Because the sixth, seventh and eighth bytes are zero, I surmised that the 4 bytes starting at byte 5 are a 32 bit counter. Data on Intel machines is usually stored in little-endian format, which means that the least significant byte is stored first, e.g. a 32 bit integer with value 9 is stored as 09 00 00 00. When I ran a program the first time, this counter was initialized to the value of 6. This was also the case when I deleted the entry and ran the program again.
The remaining 8 bytes changed apparently at random each time I ran the corresponding program, but in fact only the least significant bytes changed. I hypothesized that the remaining 8 bytes are a timestamp. I ran notepad, noted the value of the 8 bytes, ran notepad exactly one minute later and noted the new value of the 8 bytes. The difference was 0x23B9D020, or 599380000, which is almost equal to 60 seconds times 10.000.000. Hence it definitely was a timestamp, I tried to convert the 8 last bytes to a timestamp with the C# method DateTime.FromFileTime, and Bingo!, I got the date and time when I last ran notepad.
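The arithmetic checks out, since FILETIME values count 100-nanosecond ticks:

```python
delta = 0x23B9D020
print(delta)               # → 599380000 ticks of 100 ns
print(delta / 10_000_000)  # → 59.938 seconds — the observed one minute, near enough
print(60 * 10_000_000)     # → 600000000, what exactly sixty seconds would be
```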
Now back to the first 4 bytes. I noticed a trend: they are set to the value of the last 4 bytes of the UEME_CTLSESSION, and these 4 bytes appear to be a 32 bit counter that increases each time the user logs on (only after a reboot?). I need to analyze this further.
To summarize, the 16 data bytes are organized as:
• 4 bytes, meaning unknown, probably a 32 bit integer, appears to be a session counter
• 4 bytes, a 32 bit integer, counts the number of times the corresponding program was executed, is equal to 6 for the first run
• 8 bytes, a 64 bit timestamp, last time the corresponding program was executed
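Put together, decoding a UserAssist value takes only a couple of lines. The sketch below assumes the Windows XP layout reverse-engineered above — little-endian 32-bit session counter, 32-bit run counter, 64-bit FILETIME — and uses made-up bytes for illustration:

```python
import struct
from datetime import datetime, timedelta, timezone

def parse_userassist(data):
    # "<IIQ": little-endian uint32 session, uint32 count, uint64 FILETIME.
    session, count, filetime = struct.unpack("<IIQ", data)
    # FILETIME counts 100 ns ticks since 1601-01-01 (what DateTime.FromFileTime decodes).
    last_run = (datetime(1601, 1, 1, tzinfo=timezone.utc)
                + timedelta(microseconds=filetime // 10))
    return session, count, last_run

# 16 illustrative bytes: session 3, run count 6, FILETIME of the Unix epoch.
blob = struct.pack("<IIQ", 3, 6, 116_444_736_000_000_000)
print(parse_userassist(blob))
# → (3, 6, datetime.datetime(1970, 1, 1, 0, 0, tzinfo=datetime.timezone.utc))
```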
When you select an entry in the treeview, the binary data will be decoded and displayed at the bottom of the form.
My program should be self-explanatory.
Right-click an entry to clear it:
Clear All will delete the root keys, thus deleting all entries and also preventing Windows Explorer to record program execution until you perform a new logon (in fact, the entries are (re)created when Windows Explorer is started).
Caution: the UserAssist entries are used by Windows to display programs you use frequently in the Start menu:
Clearing the UserAssist entries will impact the Most Frequently Used Programs portion of your Start menu.
Reverse-Engineering the UserAssist entries was relatively easy, but I can’t explain why they are ROT13 encrypted!
I just found your blog linked from Raymond Chen’s where he’s describing the “Most Frequently Used Programs” list.
Anyway, I thought I would just point out that the usual reason for using something like ROT13 is search. For whatever reason, they didn’t want this registry key to show up when you did a search for “notepad.exe” or “Program Files” in the registry. I can think of a few reasons why they would want to do that, but then again, perhaps some clueless manager simply told the developers “this data must be encrypted for security reasons” and the developers (realising that this would be impossible anyway) just used ROT13…
Comment by Dean Harding — Thursday 21 June 2007 @ 6:35
I understand why the keys are encrypted (obfuscation), it’s just surprising it’s ROT13. Because I think that there was no ROT13 encryption/decryption function in the Windows library, so they had to add it for these keys. And why add a new function when you can use existing encryption functions?
Comment by Didier Stevens — Friday 22 June 2007 @ 6:18
ROT13 was probably used because it’s easy and cheap.
Comment by Null1024 — Friday 25 January 2008 @ 22:23
[…] Quickpost: Windows 7 Beta: ROT13 Replaced With Vigenère? Great Joke! Filed under: Encryption, Entertainment, Forensics, Quickpost, Windows 7 — Didier Stevens @ 23:17 Remember that the UserAssist keys are encrypted with ROT13? […]
Pingback by Quickpost: Windows 7 Beta: ROT13 Replaced With Vigenère? Great Joke! « Didier Stevens — Sunday 18 January 2009 @ 23:18
heh it’s another reason why people shouldn’t use windows crypto api, even microsoft doesn’t use it😛
Comment by bw — Sunday 26 April 2009 @ 21:38
You can turn the encryption off, set (DWORD) NoEncrypt to 1 in …\CurrentVersion\Explorer\UserAssist\Settings
Comment by windowssucks — Sunday 27 September 2009 @ 10:52
> You can turn the encryption off, set (DWORD) NoEncrypt to 1 in
This is not true for all Windows versions. My UserAssist tool has a function to do this.
Comment by Didier Stevens — Sunday 27 September 2009 @ 12:07
Windows 7 RTM still uses Rot13, FWIW
Comment by anon coward — Thursday 28 January 2010 @ 7:04
@anon coward: Take a look here:
Comment by Didier Stevens — Thursday 28 January 2010 @ 11:28
[…] […]
Pingback by User Activity Logging — Wednesday 31 March 2010 @ 16:45
Your thread is still useful ten years later, Didier. Yesterday, I discovered these “encrypted” keys in my Windows 7 registry. I understood it was rot13 (*.exe =>*.rkr!!) and was afraid of a virus, because I could not believe Microsoft used such a childish ciphering method. Thanks.
Comment by Francois_C — Saturday 16 July 2016 @ 11:13
Ok, as some of you might know, I'm working with Visual C++ 6 as my IDE.
As I read through books, tutorials, and blocks of code, I'm confused with the different styles of including headers in my programs. Let's say I want to include a basic iostream. What's the correct way to do it, and why? I've seen all these notations:
#include <iostream> // works with my compiler
#include <iostream.h> // doesn't work
#include "iostream" // works
#include "iostream.h" // doesn't work
I haven't found any definitive information about this yet. As far as I can tell, some of the formats might be for C (not C++), and some might be compiler-specific. But as it is, I'm completely baffled! I like to know exactly what's happening with a line of code before I include it in a program.
Also, what if I want to use a different header, like math or string? I know they exist, but I don't know where to find them, how to implement them, or what functions/data types/etc. they offer to me. Guess and check doesn't work here!
And finally, further down the road, I'd like to be able to write my own header files and include them in my CPP files. It sounds like there's a different protocol for doing this... am I on the right track?
Thanks very much for your help! You guys are great.
Hands-on Labs of Reactive Extensions for .NET (Rx) doesn't work
Hi,
I am using the Hands-on Labs of Reactive Extensions for .NET (Rx) PDF.
Most of the examples are not working anymore.
I'm getting some weird kind of errors.
Till Excersie 2, However I made it work.
But from Exercise 3 – Importing .NET events into Rx, it's not working anymore.
Say for the below example,
var lbl = new Label();
var frm = new Form { Controls = { lbl } };
frm.MouseMove += (sender, args) =>
{
    lbl.Text = args.Location.ToString();
};
Application.Run(frm);
The error it's giving is:
Error 1: A local variable named 'args' cannot be declared in this scope because it would give a different meaning to 'args', which is already used in a 'parent or current' scope to denote something else. C:\My Documents\Visual Studio 2010\Projects\ConsoleApplication1\ConsoleApplication1\Program.cs 15 37 ConsoleApplication1

And for the rest of the examples, I get more errors of this kind. Any help?
Thanks & Regards, Naimish Pandya [MCTS, MCPD]

Naimish,
I don't think you read the instructions carefully enough. The default `Program.cs` is replaced by the following code:
using System.Linq;
using System.Windows.Forms;

class Program
{
    static void Main()
    {
        var lbl = new Label();
        var frm = new Form { Controls = { lbl } };
        frm.MouseMove += (sender, args) =>
        {
            lbl.Text = args.Location.ToString(); // This has become a position-tracking label.
        };
        Application.Run(frm);
    }
}
In this code there is only one `args` so the code compiles.
Cheers.
James C-S
I've started working through the lab using Rx v1.0 but the document seems to be out of date, e.g. System.CoreEx no longer exists, and methods such as subscriber.Run and Observable.GenerateWithTime have disappeared. I don't see any point in proceeding with the lab if these problems are going to hinder my attempts at learning.
Is there an updated version anywhere, or can someone recommend a good Rx tutorial?
Thanks in advance
Andy
The other day I decided to start a program that solves sudoku puzzles on the account that I had just learned C++ and I thought it would be a good "out of the books" exercise. While writing some pseudocode I got bored and decided to run the following code to see what happens:
#include <iostream>
using namespace std;

int main() {
    int sudoku[10][10];
    cout << sudoku;
    cin.ignore();
    return 0;
}

When I ran the program I got this:
0x22fde0
Can someone tell me what that means? The curiosity will kill me if I don't find out.
How to compile QtDeclarative without QtXmlPatterns, QtSvg and QtSql
I want to slim my application down (again).
QtDeclarative depends on QtXmlPatterns, QtSvg and QtSql which I don't use at all, so I'd like to compile the library without them.
Has someone altered source-file for QtDeclarative (or a clue how to do this) ?
You might have a look if the configure script has options for excluding dependencies to these libraries.
If not, the common way is to wade through the sources and place #ifdef/#endif preprocessor directives around code fragments which cause the dependency. This allows for switching on and off the dependencies using a single #define preprocessor directive.
However, I would not bet that QtDeclarative can be built without those dependencies and I am worried it isn't possible at all. I would definitely wait until this is acknowledged by a troll. Otherwise you will have wasted hours and hours of work.
HTML Components: HTML Behaviors vs. HTC Behaviors
HTML Components
HTML Behaviors vs. HTC Behaviors
HTML Components are encapsulated HTML content that can be rendered in other HTML documents. Before HTML Components, the only way to create custom controls for use in HTML documents was through Microsoft ActiveX Controls. HTML authors had to use the OBJECT tag to embed them in their pages. An HTC file contains scripts and a set of HTC-specific custom elements. These elements expose properties, methods, and events that define the HTML component. All HTC elements are accessible from script as objects, using their ID attributes. This allows all attributes and methods of HTC elements to be dynamically manipulated through script.
You can use HTCs to implement behaviors that:
- Expose properties and methods. You define them via the
PROPERTYand
METHODelements.
- Expose custom events. These events are defined via the
EVENTelement and are fired back to the containing page, using the element's
fire()method. The environment of the event can be set by the
createEventObject()method.
- Access the containing page's DHTML object model. The object
elementin HTCs returns the element to which the behavior is attached. With this object, an HTC can access the containing document and its object model (properties, methods, and events).
- Receive notifications. When using the
ATTACHelement, the browser notifies the HTC of standard DHTML events as well as two HTC-specific events,
oncontentreadyand
ondocumentready.
HTCs encapsulate a behavior definition. Behaviors were first delivered in Internet Explorer 5.0. We have introduced behaviors in Column 22, Internet Explorer 5.0 Review, Part I and Column 23, Internet Explorer 5.0 Review, Part II. The advantage of behaviors encapsulated in HTC is that they cannot be removed from their element tags. In IE 5.0, a behavior can be detached from an element via script. In IE 5.5, the elements are always consistent in that they cannot be separated from their original behaviors.
Next: How to create custom tags and namespaces
Produced by Yehuda Shiran and Tomer Shiran
Created: July 3, 2000
Revised: July 3, 2000
URL: http://www.webreference.com/js/column64/2.html
Created
8 May 2009
This article shows you how Adobe Flash CS4 Professional, the new FLVPlayback 2.5 component for Flash, and Flash Media Server 3.5 now work together to accommodate the inevitable "bumps" in data streams. It's the seventh in a loose series of beginner's tutorials.
Though we have come to regard the term "information superhighway" as a tired description of the Internet, this would be a great place to start.
Let me explain. When I drive to the college where I teach, the trip usually takes 40 minutes. I start my trip on streets where the speed limit is 50 km/h. Next I travel up a road where the speed limit is 80 km/h, and from that road I spend the next 30 minutes on a highway where my average speed is around 120 km/h. This is on a good day, and it is important that you pay attention to these speed limits. They are going to play a huge role in the next section.
As I'm a Canadian, for me winter is an especially brutal time for traffic. If there is a lot of snow on the highway, the best highway speed I can hope to drive is 35 km/h as traffic slows down to accommodate the conditions. If there is an accident or a car has spun out into a ditch, the odds are about 100% that I will travel no faster than 20 km/h or slower as traffic bunches up. Once we get past the accident, I can inevitably accelerate back up to to the 100 km/h speed limit.
Now if we translate this snowy day situation into the universe of streaming bits, you can think of the cars as a data stream. On a clear day, data will merrily flow along the "information superhighway" at 120 Kbps, but add an accident or snow to the mix and your data stream will slow down. In the case of streaming video, this can be infuriating to the user—fits and starts in the stream—and, for you, a never-ending source of headaches over which you have little or no control—that is, until now.
Flash Media Server 3.5 enables you, essentially, to dynamically adjust the stream to accommodate driving conditions. Then it gets better: It is dead simple to accomplish this using the FLVPlayback component and, if you really want to geek out, you can team up the component with an external SMIL file to control multiple bit rates.
In Part 2 and Part 3 of this series, I showed you how you can use the FLVPlayback component to play a single video. In this article, I show you how to use that same component to dynamically change that video as traffic conditions in the stream change.
Why do you need this? The answer is self-evident. Today's Internet, especially when it comes to bandwidth and computing technologies, is vastly more powerful than when video first became viable in Flash 7. The introduction of HD video to Adobe Flash Player in Flash CS3 Professional was a game-altering addition. The fact that YouTube, among others, is adopting this format—and that users have come to regard HD video as a "must have" rather than a "nice to have"—tells you that the user is expecting a consistent video experience whether or not that data is being delivered wirelessly in a coffee shop, on a street corner, or through an Ethernet cable.
The problem, as I am fond of telling people, is that trying to push HD video encoded at 1500 Kbps through a wireless connection is like trying to push a watermelon through a worm. The stops and starts due to buffering issues are no longer regarded as a "necessary evil" of video. Being able to detect the available bandwidth and switch to the appropriate version of a video is a major step forward in providing the user with a consistent video experience regardless of where he or she is physically located and connected to the Internet.
How can you accomplish this? Essentially, all you have to do is write some ActionScript that functions as your "traffic spotter" and, when things get congested, switches the stream to a copy of the video encoded with a data rate that accommodates the "slowdown up ahead"—and, when you get through the bottleneck, switches to a video that has the appropriate data rate to accommodate the speed increase.
As you may have inferred from this, your workflow has somewhat increased. Instead of a single copy of a video encoded at a single data rate, you need to encode multiple copies of the same video and use different data rates, for each copy, as shown in Figure 1.
What are the rates? That is best left to your judgment of the conditions your users will encounter. Still, if you poke around the articles by myself, David Hassoun, and others here at the Flash Media Server Developer Center, you will see projects such as this one that have as few as three copies of the video, as well as others that have five copies at data rates of 160 Kbps, 500 Kbps, 700 Kbps, 1000 Kbps, and 1500 Kbps. It all depends on your best guess as to the bandwidth and/or viewing situation that your users will encounter.
In this exercise, you will use the FLVPlayback 2.5 component and write the ActionScript 3.0 code that will enable you to play multiple streams to accommodate changing bandwidth situations.
To start, download the exercise files at the beginning of the article. When you unzip the file, move the three videos in the MBR folder to the vod folder located at C:\Program Files\Adobe\Flash Media Server 3.5\applications\vod\media. Having done that, return to the desktop and create a folder named MBR that will be used for this exercise:
- Launch Flash CS4 Professional and create a new ActionScript 3.0 document.
- Select Modify > Document. When the Document Properties dialog box opens (see Figure 2), change width to 640 pixels and the height to 520 pixels. Click OK to accept the change and close the dialog box. You need this odd height because the videos are 640 × 480, but you need the extra 40 pixels to accommodate the height of the skin.
- When the dialog box closes, select Window > Components and drag a copy of the FLVPlayback 2.5 component, found in the Video group, to the Stage. Close the Components panel when the component appears on the Stage.
- Click the component once to select it, and then open the Properties panel. Use the following settings (also shown in Figure 3):
- Instance name: myVideo
- X: 0
- Y: 0
- W: 640
- H: 480
- Add a new Layer, name it actions, and save the file to the MBR folder on your desktop.
Now that the assets are in place, you can turn your attention to "wiring up" this project with ActionScript 3.0:
- Select the first frame in the actions layer and open the Actions panel by pressing either F9 (Windows) or Option+F9 (Mac), or by selecting Window > Actions.
- Click once in the Script pane and enter the following code:
import fl.video.*; var myDynamicStream:DynamicStreamItem = new DynamicStreamItem();
The example starts by importing all of the methods and properties of the video class. You need to do this when using ActionScript to manage the component. The next two lines make use of the two classes—DynamicStream and DynamicStreamItem—you downloaded and installed prior to starting this exercise.
The two classes hand you a relatively efficient way of doing what used to be regarded as the "heavy lifting" involved in writing an ActionScript-based custom solution to manage multiple-bit-rate scenarios.
The DynamicStream class extends the NetStream class and provides you with a solution to managing NetStreams. As you may recall, you only get one NetStream per video. For all intents and purposes, this new class, which is to be used as a substitute for the NetStream class, allows you to keep the stream flowing while videos are switched on the fly. The DynamicStream instance also contains a
startPlay() method to which you can pass an Array of streams encoded at differing bit rates in the form of a DynamicStreamItem.
The second line of the code tells Flash to hand off the lifting duties to the DynamicStream class; the third line creates a new
DynamicStreamItem object named
myDynamicStream.
- Press Return/Enter a couple of times and enter:
myDynamicStream.uri = "rtmp://localhost/vod/";
The major difference here is that, instead of using
NetConnection to connect to the server, you use the
uri (Uniform Resource Identifier) property of the
myDynamicStream object to make the connection.
Now that the stream is in place and Flash knows where the server is located, you simply need to add the video streams to be used.
- Press Return/Enter twice and add the following code:
myDynamicStream.addStream ("mp4:ChefSchool_500.f4v",500); myDynamicStream.addStream ("mp4:ChefSchool_700.f4v",700); myDynamicStream.addStream ("mp4:ChefSchool_1500.f4v",1500);
As you can see, you simply use the
addStream method on the
myDynamicStream object and pass it the name of the video and the bit rate.
Note: I would like to thank William Hanna, dean of the School of Media Studies and Information Technology at the Humber Institute of Advanced Learning and Technology in Toronto, for permission to use the videos included in this article.
- Now that Flash knows which videos to add and what bit rates to look for, all you need to do is to hook all of this into the component. Press Return/Enter twice and enter the following line of code:
myVideo.play2 (myDynamicStream);
The new method is
play2(), which is an enhancement of the
NetStream.play() method traditionally used to stream video. Use this with the FLVPlayback 2.5 component to pass a DynamicStreamItem containing a list of stream names and varying bit rates as well a
uri property that tells the component where the server is located.
- If your completed code is:
import fl.video.*; var myDynamicStream:DynamicStreamItem = new DynamicStreamItem(); myDynamicStream.uri = "rtmp://localhost/vod/"; myDynamicStream.addStream ("mp4:ChefSchool_500.f4v",500); myDynamicStream.addStream ("mp4:ChefSchool_700.f4v",700); myDynamicStream.addStream ("mp4:ChefSchool_1500.f4v",1500); myVideo.play2 (myDynamicStream);
Save the file and test it.
As you have learned, it is a relatively simple matter to use the FLVPlayback 2.5 component to accommodate varying bit rates. As is usual with stuff that is great, there is an equally nasty gotcha. In this case the videos are "hard wired" into the component. What if you discover you need to add 300 and 1,000 Kbps bit rates (and the their corresponding F4V files) to the lineup? If you have access to the FLA file, this is a non-issue but it does tend to go against the grain when it comes to working with Flash.
There is a solution. Instead of an XML file, Flash Media Server supports dynamic streaming from an SMIL file. The beauty of this is you now have the ability to make changes without digging into the FLA or AS files. Here's how:
- Open the dynamicStream.smil file found in the MBR/SMIL directory. You can use Dreamweaver CS4 or any word processor to open the file. When it opens you should see the following:
<smil> <head> <meta base="rtmp://localhost/vod/" /> </head> <body> <switch> <video src="mp4:ChefSchool_500.f4v" system- <video src="mp4:ChefSchool_700.f4v" system- <video src="mp4:ChefSchool1_1500.f4v" system- </switch> </body> </smil>
As you can see, the only major difference between the ActionScript written in the previous section and this file is in how the data rate is expressed. You need to enter the actual number, 500000, instead of the kilobit equivalent, 500.
- Close the SMIL file and open the MBR_SMIL.fla file found in the MBR/SMIL directory. Open the file, select the first frame of the actions layer, and then open the Actions panel.
- When the Script pane opens, click once in line 3 of the code and enter:
myVideo.source = "dynamicStream.smil"
That's all the code you need to add. At this point you can save and test the file.
You may have read that last line and wondered, "Hang on there, Tom. How does this get hooked into the FMS 3.5 server?" Good question: that information, contained in the RTMP address, is contained between the
<head>
</head> tags of the SMIL document. All you need to do is to enter the address and save the SMIL file to the same directory as the SWF.
This tutorial showed you how to manage multiple-bit-rate scenarios using Flash Media Server 3.5 and the FLVPlayback 2.5 component and a SMIL file. In many respects, this is just a primer and is intended to introduce you to this new feature rather than let you really dig into it and get covered with electrons up to your elbows. If this subject has caught your attention, here are some other articles that will get you going:
- Dynamic streaming in Flash Media Server 3.5 – Part 1: Overview of new capabilities: David Hassoun walks you through the entire subject, from using a video object to the FLVplayback2.5 component.
- Dynamic streaming in Flash Media Server 3.5 – Part 2: ActionScript 3.0 dynamic stream API: David gives you an extremely thorough exploration of the DynamicStream and DynamicStreamItem classes.
- Dynamic streaming on demand with Flash Media Server 3.5: Abhinav Kapoor, a senior computer scientist with the Flash Media Server team, reviews how to get the most out of dynamic streaming, from encoding to coding.
- ActionScript guide to dynamic streaming: Abhinav gives you an extensive overview of the ActionScript required for stream switching.
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License
More Like This
Tutorials & Samples Flash Media Server Forum More
Tutorials
Samples | https://www.adobe.com/devnet/adobe-media-server/articles/beginner_dynstream_fms.html | CC-MAIN-2017-43 | refinedweb | 2,334 | 61.67 |
ahhhnn.
Seitokai Yakuindomo Episode 4
I like the intro song.
I don’t like the outro song.
Demashita! Powerpuff Girls Z Episode 01
what is this i don’t eve– Blossom is cute.
*dropped* why did I even pick this up
Hitohira Episode 01
The main character is really original.
Hm, how would I describe this first episode. It’s like CLANNAD, without Tomoya and co.
Inukami! Episode 01
Somebody assassinate the lead male for not being in love with somebody voiced by Yui Horie.
Youko is cute though, so maybe I won’t drop this one. Hocchan~ >< damn this reminds me of how I failed to get her autograph at AX2010
Summary: Everything sucks about this anime but Yui Horie and her character. I think I’mma drop after all. I hate the main character’s guts.
Fairy Tail Episode 39
Yay, Mikuni Shimokawa’s ED single is out today! Back to the anime; last episode left us on quite a cliffhanger.
Etherion turned out lamer than I thought. The idea was cool.
“Breaking things happens to be Fairy Tail wizards’ specialty!”
Damn, cliffhanger.
11eyes Episode 13 [OVA]
Awesomesauce.
Kanokon Episode 01
Wtf awesome, Kawasumi Ayako voices the lead female. WTFFF awesome, Mamiko Noto voices the lead… MALE. WAHAHAHA ok I’ll stop here
The awesomesauce voices and art are awesomesauce, but what is with the rapid plot development. And sudden outbreaks of histronics. And the shouta little 5-year-old boy who’s “in high school” voiced by Mamiko Noto only complicates things. I keep imagining “ecchi-na-koto-wa-ikenai” Mahoro from Mahoromatic due to Kawasumi Ayako’s voice as well. Anyways, concerning the plot…
That made sense.
… NOT.
Asura Cryin’ Episode 01
This anime looks extremely promising. Watching with high expectations.
Damn. Best first episode of an anime I’ve watched in AGES.
Asura Cryin’ Episode 02
As epic as above.
Kanokon Episode 02
“I may get pregnant if I get too close to him.”
#include <stdharem.c>
Asura Cryin’ Episode 03 to 06
This is getting boring… wahh, why can’t I find any exciting animes anymore?
Nurarihyon no Mago Episode 04
Best anime this season, by far.
On an unrelated note, why doesn’t CoreAVC’s postprocessing get rid of that horrible debanding? Is free software really better than commercial software? I’m pretty sure ffdshow’s postprocessing far surpasses this horrible crap software. And plus, CoreAVC crashes like every other time I try to play a video, dammit. | http://blog.leafwood.net/2010/07/ | CC-MAIN-2018-05 | refinedweb | 415 | 77.03 |
How do I (or how can I) add a separator line in the ui editor?
Using the ui editor, is there a way to add a separator line such as you see on many settings pages in iOS (including, I believe, Pythonista's own settings page) ? I am creating a custom view which contains a number of settings with check boxes and toggle switches and other controls. I would like to add a separator line between various settings to break the ui into meaningful groups. As a hack I am using a label whose text attribute is a continuous underline character.
As far as I know Pythoniastandoes not expose a specific control (or View in Pyhthonistas terms) for that. There is also nothing like 4sided borders so that you could draw a (0,0,0,1) border. I did it recently by just adding an ui.View with thei height of 1 in the places I did need a border.
have you looked at the dialogs module? The form_dialog can make settings type ui's.
from dialogs import * dead={'type':'switch','title':'dead'} resting={'type':'switch','title':'resting'} stunned={'type':'switch','title':'just stunned'} section1=('Parrot',[dead, resting, stunned] ) spam={'type':'switch','title':'spam'} spamity={'type':'switch','title':'spam'} spaM={'type':'switch','title':'spam'} section2=('Spam',[spam, spamity, spaM],'SpammitySpam' ) f = form_dialog(title='Python Settings', sections= [section1, section2])
As to your original question, you can use an imageview with an image of a line that you stretch out. Also, you could use a custom view with drawing, but that requires code, cannot really layout in ui editor. | https://forum.omz-software.com/topic/4113/how-do-i-or-how-can-i-add-a-separator-line-in-the-ui-editor | CC-MAIN-2022-27 | refinedweb | 267 | 53.51 |
Hi,
I'm interested in fiddling around with the matplotlib source. Let's say we set up various things:
from matplotlib.figure import Figure()
from matplotlib.backends.backend_pdf import FigureCanvasPdf as FigureCanvas
fig = Figure()
canvas = FigureCanvas(fig)
ax = fig.add_subplot(1, 1, 1)
fig.savefig('asd.pdf', bbox_inches='tight')
I would like to know what exactly happens when bbox_inches='tight' is passed to savefig(). I've been searching in the figure.py source and nowhere can I see the bbox_inches='tight' keyword being tested for in the savefig() method. Having said that, all of the kwargs do get passed on to the canvas.print_figure() method, so I looked in the backend_pdf.py file but couldn't find a print_figure() method. Could someone point me in the right direction?
Regards,
-- Damon
···
--------------------------
Damon McDougall
Mathematics Institute
University of Warwick
Coventry
CV4 7AL
d.mcdougall@...230... | https://discourse.matplotlib.org/t/exploring/14065 | CC-MAIN-2021-43 | refinedweb | 143 | 53.47 |
A Mobile-IPv6 implementation for Linux 2.4.
Abstract Character Device Driver, may be used for fast implementing user-space device drivers for test purposes or working with non time critical devices. It's allow use device's I/O ports or memory areas directly, through standard device spec. files.
The SMBios Kernel Module provides access to the management information of SMBios structures in both human readable and binary form via the /proc file system.
This project aims at documenting the linux memory management to the smallest detail possible.
envfs is a virtual file system that provides namespace access to environment variables.
This interface will allow you to write code to retrieve and send promiscuous network packets from your Java program. It could provide a starting point for a java nmap or such...
Project to develop a driver for the Hauppauge WinTV PVR card
linux kernel driver (module) for 8 rc-servos directly connected to parallel port (nice if you want to build legs for your notebook...).
Functional and design specifications documents for Linux 2.4 kernel subsystems, services and the modules. This is a collaborative work by engineers in industry and in the open source community, with open peer review.
Fast, Rich, Secure, Free - DMGames OS
Fast, Rich, Secure, Free - DMGames OS | https://sourceforge.net/directory/system-administration/kernels/linux/developmentstatus%3Aprealpha/?sort=update&page=6 | CC-MAIN-2018-17 | refinedweb | 213 | 55.44 |
The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.
Introduction
Flask is a lightweight Python web framework that provides useful tools and features for creating web applications in the Python Language. It gives developers flexibility and is an accessible framework for new developers because you can build a web application quickly using only a single Python file. Flask is also extensible and doesn’t force a particular directory structure or require complicated boilerplate code before getting started.
Learning Flask will allow you to quickly create web applications in Python. You can take advantage of Python libraries to add advanced features to your web application, like storing your data in a database, or validating web forms.
In this tutorial, you’ll build a small web application that renders HTML text on the browser. You’ll install Flask, write and run a Flask application, and run the application in development mode. You’ll use routing to display various web pages that serve different purposes in your web application. You’ll also use view functions to allow users to interact with the application through dynamic routes. Finally, you’ll use the debugger to troubleshoot errors.
Prerequisites
Step 1 — Installing Flask
In this step, you’ll activate your Python environment and install Flask using the pip package installer.
First, activate your programming environment if you haven’t already:
Once you have activated your programming environment, install Flask using the
pip install command:
- pip install flask
Once the installation is complete, you will see a list of installed packages in the last parts of the output, similar to the following:
Output
...
Installing collected packages: Werkzeug, MarkupSafe, Jinja2, itsdangerous, click, flask
Successfully installed Jinja2-3.0.1 MarkupSafe-2.0.1 Werkzeug-2.0.1 click-8.0.1 flask-2.0.1 itsdangerous-2.0.1
This means that installing Flask also installed several other packages. These packages are dependencies Flask needs to perform different functions.
You’ve created the project folder, a virtual environment, and installed Flask. You can now move on to setting up a simple application.
Step 2 — Creating a Simple Application
Now that you have your programming environment set up, you’ll start using Flask. In this step, you’ll make a small Flask web application inside a Python file, in which you’ll write HTML code to display on the browser.
In your
flask_app directory, open a file named
app.py for editing; use
nano or your favorite text editor:
- nano app.py
Write the following code inside the
app.py file:
flask_app/app.py
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello():
    return '<h1>Hello, World!</h1>'
Save and close the file.
In the above code block, you first import the
Flask object from the
flask package. You then use it to create your Flask application instance, giving it the name
app. You pass the special variable
__name__, which holds the name of the current Python module. This name tells the instance where it’s located; you need this because Flask sets up some paths behind the scenes.
Once you create the
app instance, you use the
@app.route() decorator to turn the regular Python function
hello() into a view function. The value
'/' passed to the decorator means this function responds to requests for the main URL. When a browser requests that URL, Flask sends the function's return value, the string
'<h1>Hello, World!</h1>', as an HTTP response.
You now have a simple Flask application in a Python file called
app.py. In the next step, you will run the application to see the result of the
hello() view function rendered in a web browser.
Step 3 — Running the Application
After creating the file that contains the Flask application, you’ll run it using the Flask command line interface to start the development server and render on the browser the HTML code you wrote as a return value for the
hello() view function in the previous step.
First, while in your
flask_app directory with your virtual environment activated, tell Flask where to find the application (
app.py in your case) using the
FLASK_APP environment variable with the following command (on Windows, use
set instead of
export):
- export FLASK_APP=app
Then specify that you want to run the application in development mode (so you can use the debugger to catch errors) with the
FLASK_ENV environment variable:
- export FLASK_ENV=development
Lastly, run the application using the
flask run command:
- flask run
Once the application is running, the output will be something like this:
Output
 * Serving Flask app "app" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 296-353-699
The preceding output has several pieces of information, such as:
- The name of the application you’re running (
"app").
- The environment in which the application is being run (
development).
Debug mode: onsignifies that the Flask debugger is running. This is useful when developing because it provides detailed error messages when things go wrong, which makes troubleshooting easier.
- The application is running locally on the URL http://127.0.0.1:5000/.
127.0.0.1is the IP that represents your machine’s
localhostand
:5000is the port number.
Open a browser and type in the URL http://127.0.0.1:5000/. You will see the text
Hello, World! in an
<h1> heading as a response. This confirms that your application is successfully running.
If you want to stop the development server, press
CTRL+C.
Warning: Flask uses a simple web server to serve your application in a development environment, which also means that the Flask debugger is running to make catching errors easier. You should not use this development server in a production deployment. See the Deployment Options page on the Flask documentation for more information. You can also check out this Flask deployment tutorial with Gunicorn or this one with uWSGI or you can use DigitalOcean App Platform to deploy your Flask application by following the How To Deploy a Flask App Using Gunicorn to App Platform tutorial.
To continue developing the
app.py application, leave the development server running and open another terminal window. Move into the
flask_app directory, activate the virtual environment, set the environment variables
FLASK_ENV and
FLASK_APP, and continue to the next steps. (These commands are listed earlier in this step.)
Note: When opening a new terminal, or when you close the one you are running the development server on and want to rerun it, it is important to remember to activate the virtual environment and to set the environment variables
FLASK_ENV and
FLASK_APP for the
flask run command to work properly.
You only need to run the server once in one terminal window. To run two applications at the same time, you can pass a different port number to the
-p argument. For example, to run another application on port
5001, use the following command:
- flask run -p 5001
With this you can have one application running on http://127.0.0.1:5000 and another one on http://127.0.0.1:5001 if you want to.
You now have a small Flask web application. You’ve run your application and displayed information on the web browser. Next, you’ll learn about routes and how to use them to serve multiple web pages.
Step 4 — Routes and View Functions
In this step, you’ll add a few routes to your application to display different pages depending on the requested URL. You’ll also learn about view functions and how to use them.
A route is a URL you can use to determine what the user receives when they visit your web application on their browser. For example, http://127.0.0.1:5000/ is the main route that might be used to display an index page. The URL http://127.0.0.1:5000/about may be another route used for an about page that gives the visitor some information about your web application. Similarly, you can create a route that allows users to sign in to your application at http://127.0.0.1:5000/login.
Your Flask application currently has one route that serves users who request the main URL (http://127.0.0.1:5000/). To demonstrate how to add a new web page to your application, you will edit your application file to add another route that provides information on your web application at http://127.0.0.1:5000/about.
First, open your
app.py file for editing:
- nano app.py
Edit the file by adding the following highlighted code at the end of the file:
flask_app/app.py
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello():
    return '<h1>Hello, World!</h1>'


@app.route('/about/')
def about():
    return '<h3>This is a Flask web application.</h3>'
Save and close the file.
You added a new function called
about(). This function is decorated with the
@app.route() decorator that transforms it into a view function that handles requests for the http://127.0.0.1:5000/about endpoint.
With the development server running, visit the following URL using your browser:
http://127.0.0.1:5000/about
You will see the text
This is a Flask web application. rendered in an
<h3> HTML heading.
You can also use multiple routes for one view function. For example, you can serve the index page at both
/ and
/index/. To do this, open your
app.py file for editing:
- nano app.py
Edit the file by adding another decorator to the
hello() view function:
flask_app/app.py
from flask import Flask

app = Flask(__name__)


@app.route('/')
@app.route('/index/')
def hello():
    return '<h1>Hello, World!</h1>'


@app.route('/about/')
def about():
    return '<h3>This is a Flask web application.</h3>'
Save and close the file.
After adding this new decorator, you can access the index page at both http://127.0.0.1:5000/ and http://127.0.0.1:5000/index/.
You now understand what routes are, how to use them to make view functions, and how to add new routes to your application. Next, you’ll use dynamic routes to allow users to control the application’s response.
Step 5 — Dynamic Routes
In this step, you’ll use dynamic routes to allow users to interact with the application. You’ll make a route that capitalizes words passed through the URL, and a route that adds two numbers together and displays the result.
Normally, users don’t interact with a web application by manually editing the URL. Rather, the user interacts with elements on the page that lead to different URLs depending on the user’s input and action, but for the purposes of this tutorial, you will edit the URL to demonstrate how to make the application respond differently with different URLs.
First, open your
app.py file for editing:
- nano app.py
If you allow the user to submit something to your web application, such as a value in the URL as you are going to do in the following edit, you should always keep in mind that your app should not directly display untrusted data (data the user submits). To display user data safely, use the
escape() function that comes with the
markupsafe package, which was installed along with Flask.
Edit
app.py and add the following line to the top of the file, above the
Flask import:
flask_app/app.py
from markupsafe import escape
from flask import Flask

# ...
Then, add the following route to the end of the file:
flask_app/app.py
# ...

@app.route('/capitalize/<word>/')
def capitalize(word):
    return '<h1>{}</h1>'.format(escape(word.capitalize()))
Save and close the file.
This new route has a variable section
<word>. This tells Flask to take the value from the URL and pass it to the view function. The URL variable
<word> passes a keyword argument to the
capitalize() view function. The argument has the same name as the URL variable (
word in this case). With this you can access the word passed through the URL and respond with a capitalized version of it using the
capitalize() method in Python.
You use the
escape() function you imported earlier to render the
word string as text. This is important to avoid Cross Site Scripting (XSS) attacks. If the user submits malicious JavaScript instead of a word,
escape() will render it as text and the browser will not run it, keeping your web application safe.
To display the capitalized word inside an
<h1> HTML heading, you use the
format() Python method. For more on this method, see How To Use String Formatters in Python 3.
With the development server running, open your browser and visit the following URLs. You can replace the highlighted words with any word of your choice.
http://127.0.0.1:5000/capitalize/hello/
http://127.0.0.1:5000/capitalize/flask/
http://127.0.0.1:5000/capitalize/python/
You can see the word in the URL capitalized in an
<h1> tag on the page.
You can also use multiple variables in a route. To demonstrate this, you will add a route that adds two positive integer numbers together and displays the result.
Open your
app.py file for editing:
- nano app.py
Add the following route to the end of the file:
flask_app/app.py
# ...

@app.route('/add/<int:n1>/<int:n2>/')
def add(n1, n2):
    return '<h1>{}</h1>'.format(n1 + n2)
Save and close the file.
In this route, you use a special converter
int with the URL variable (
/add/<int:n1>/<int:n2>/) which only accepts positive integers. By default, URL variables are assumed to be strings and are treated as such.
With the development server running, open your browser and visit the following URL:
http://127.0.0.1:5000/add/5/5/
The result will be the sum of the two numbers (
10 in this case).
You now have an understanding of how to use dynamic routes to display different responses in a single route depending on the requested URL. Next, you’ll learn how to troubleshoot and debug your Flask application in case of an error.
Step 6 — Debugging A Flask Application
When developing a web application, you will frequently run into situations where the application displays an error instead of the behavior you expect. You may misspell a variable or forget to define or import a function. To make fixing these problems easier, Flask provides a debugger when running the application in development mode. In this step, you will learn how to fix errors in your application using the Flask debugger.
To demonstrate how to handle errors, you will create a route that greets a user from a list of usernames.
Open your
app.py file for editing:
Add the following route to the end of the file:
flask_app/app.py
# ... @app.route('/users/<int:user_id>/') def greet_user(user_id): users = ['Bob', 'Jane', 'Adam'] return '<h2>Hi {}</h2>'.format(users[user_id])
Save and close the file.
In the route above, the
greet_user() view function receives a
user_id argument from the
user_id URL variable. You use the
int converter to accept positive integers. Inside the function, you have a Python list called
users, which contains three strings representing usernames. The view function returns a string that is constructed depending on the provided
user_id. If the
user_id is
0, the response will be
Hi Bob in an
<h2> tag because
Bob is the first item in the list (the value of
users[0]).
With the development server running, open your browser and visit the following URLs:
You will receive the following responses:
OutputHi Bob Hi Jane Hi Adam
This works well so far, but it can go wrong when you request a greeting for a user who doesn’t exist. To demonstrate how the Flask debugger works, visit the following URL:
You’ll see a page that looks like this:
At the top, the page gives you the name of the Python exception, which is
IndexError, indicating that the list index (
3 in this case) is out of the list’s range (which is only from
0 to
2 because the list has only three items). In the debugger, you can see the traceback that tells you the lines of code that raised this exception.
The last two lines of the traceback usually give the source of the error. In your case the lines may be something like the following:
File "/home/USER/flask_app/app.py", line 28, in greet_user return '<h2>Hi {}</h2>'.format(users[user_id])
This tells you that the error originates from the
greet_user() function inside the
app.py file, specifically in the
return line.
Knowing the original line that raises the exception will help you determine what went wrong in your code, and decide what to do to fix it.
In this case you can use a simple
try...except clause to fix this error. If the requested URL has an index outside the list’s range, the user will receive a 404 Not Found error, which is an HTTP error that tells the user the page they are looking for does not exist.
Open your
app.py file for editing:
To respond with an HTTP 404 error, you will need Flask’s
abort() function, which can be used to make HTTP error responses. Change the second line in the file to also import this function:
flask_app/app.py
from markupsafe import escape from flask import Flask, abort
Then edit the
greet_user() view function to look as follows:
flask_app/app.py
# ... @app.route('/users/<int:user_id>/') def greet_user(user_id): users = ['Bob', 'Jane', 'Adam'] try: return '<h2>Hi {}</h2>'.format(users[user_id]) except IndexError: abort(404)
You use
try above to test the
return expression for errors. If there was no error, meaning that
user_id has a value that matches an index in the
users list, the application will respond with the appropriate greeting. If the value of
user_id is outside the list’s range, an
IndexError exception will be raised, and you use
except to catch the error and respond with an HTTP 404 error using the
abort() Flask helper function.
Now, with the development server running, visit the URL again:
This time you’ll see a standard 404 error page informing the user that the page does not exist.
By the end of this tutorial, your
app.py file will look like this:
flask_app/app.py
from markupsafe import escape from flask import Flask, abort app = Flask(__name__) @app.route('/') @app.route('/index/') def hello(): return '<h1>Hello, World!</h1>' @app.route('/about/') def about(): return '<h3>This is a Flask web application.</h3>' @app.route('/capitalize/<word>/') def capitalize(word): return '<h1>{}</h1>'.format(escape(word.capitalize())) @app.route('/add/<int:n1>/<int:n2>/') def add(n1, n2): return '<h1>{}</h1>'.format(n1 + n2) @app.route('/users/<int:user_id>/') def greet_user(user_id): users = ['Bob', 'Jane', 'Adam'] try: return '<h2>Hi {}</h2>'.format(users[user_id]) except IndexError: abort(404)
You now have a general idea of how to use the Flask debugger to troubleshoot your errors and help you determine the appropriate course of action to fix them.
Conclusion
You now have a general understanding of what Flask is, how to install it, and how to use it to write a web application, how to run the development server, and how to use routes and view functions to display different web pages that serve specific purposes. You’ve also learned how to use dynamic routes to allow users to interact with your web application via the URL, and how to use the debugger to troubleshoot errors.
If you would like to read more about Flask, check out the Flask topic page. | https://www.xpresservers.com/how-to-create-your-first-web-application-using-flask-and-python-3/ | CC-MAIN-2021-39 | refinedweb | 3,110 | 62.98 |
examples for beginners Struts tutorial and examples for beginners
Can you suggest any good book to learn struts
Can you suggest any good book to learn struts Can you suggest any good book to learn struts
struts
struts how to start struts?
Hello Friend,
Please visit the following links:
Here you will get lot of examples through which you... Hi,
I need the example programs for shopping cart using struts with my sql.
Please send the examples code as soon as possible.
please send it immediately.
Regards,
Valarmathi
Hi Friend,
Please
Struts - Struts
Struts Dear Sir ,
I am very new in Struts and want to learn about validation and custom validation. U have given in a such nice way... provide the that examples zip.
Thanks and regards
Sanjeev. Hi friend ebook
struts ebook please suggest a good ebook for struts>
example on struts - Struts
example on struts i need an example on Struts, any example.
Please help me out. Hi friend,
For more information,Tutorials and Examples on Struts visit to :... visit for more information.
Thanks
Struts Code - Struts
more information,Tutorials and Examples on struts visit to :
http...Struts Code How to add token , and encript token and how decript token in struts. Hi friend,
Using the Token methods
The methods 2.0- Deployment - Struts
Struts 2.0- Deployment Exception starting filter struts2... got screen also.
when I restarted, issue raised. I checked class path also.
please suggest me
Struts Tag Lib - Struts
.
JSP Syntax
Examples in Struts :
Description
The taglib...Struts Tag Lib Hi
i am a beginner to struts. i dont have..., sun, and sunw etc.
For more information on Struts visit to : How to retrive data from database by using Struts
STRUTS
STRUTS MAIN DIFFERENCES BETWEEN STRUTS 1 AND STRUTS 2
Struts
Struts how to learn struts
Struts
Struts what is SwitchAction
STRUTS Request context in struts?
SendRedirect () and forward how to configure in struts-config.xml
Struts 2 Date Examples
Struts 2 Date Examples
In this section we will discuss the date processing
functionalities available in the Struts 2... provided by Struts 2 Framework.
Date Format
Examples
java struts error - Struts
*;
import javax.servlet.http.*;
public class loginaction extends Action{
public...java struts error
my jsp page is
post the problem...*;
import javax.servlet.http.*;
public class loginform extends ActionForm{
private
auto suggest where the data should come from database - Ajax
auto suggest where the data should come from database auto suggest... the following link which contains an Auto completer example created in Struts.... You can also use DOJO for creating Auto Suggest box. Here is the link
struts tags
.
examples of struts tags are
Hi Friend,
Please visit the following links: tags I want develop web pages using struts tags please help
struts
struts how to write Dao to select data from the database
struts
struts why doc type is not manditory in struts configuration file
Struts 2 Format Examples
Struts 2 Format Examples
... Struts 2 Format Examples are very easy to grasp and you will
learn these concepts...
the output as string in your action class. But there is some drawback it like
what are Struts ?
and proven design patterns.
Struts Examples..
Hi... - Struts
more information,tutorials and examples on Struts with Hibernate visit to :
Thanks This link...Hi... Hi,
If i am using hibernet with struts then
and testing the example
Advance Struts Action
Struts Action... Action class
Add configuration in struts.xml file
Build... application
Miscellaneous Examples
Struts PDF Generating Example
Graphs - Struts
Graphs Hi,I have an application developed using struts framework.Now the requirement is for displaying graph in it.Can anyone help me with some code... implementation, I would suggest Google charts.
struts
struts i have one textbox for date field.when i selected date from datecalendar then the corresponding date will appear in textbox.i want code for this in struts.plz help me
Struts - Framework
using the View component. ActionServlet, Action, ActionForm and struts-config.xml... struts application ?
Before that what kind of things necessary...,
Struts :
Struts Frame work is the implementation of Model-View-Controller
Struts
;Basically in Struts we have only Two types of Action classes.
1.BaseActions... class indirectly.These action classes are available...Struts why in Struts ActionServlet made as a singleton what
Error - Struts
create the url for that action then
"Struts Problem Report
Struts has detected... to test the examples
Run Struts 2 Hello...Error Hi,
I downloaded the roseindia first struts example
the checkbox.i want code in struts
struts internationalization for Korean language
struts internationalization for Korean language Hi All,
Please suggest me an example where struts is implemented
for korean locale using struts internationalization concept.
Regards
Nagaraju
Struts -
*;
import org.apache.struts.action.*;
public class LoginAction extends Action...struts <p>hi here is my code can you please help me to solve...;
<p><html>
<body></p>
<form action="login.do">
struts
}//execute
}//class
struts-config.xml
<struts...struts <p>hi here is my code in struts i want to validate my...;gt;
<html:form
<pre>
struts - Struts
; Hi,Please check easy to follow example at dispatchaction vs lookupdispatchaction What is struts
struts - Struts
struts shud i write all the beans in the tag of struts-config.xml | http://www.roseindia.net/tutorialhelp/comment/15232 | CC-MAIN-2014-10 | refinedweb | 870 | 67.86 |
On Wed, Jul 21, 2010 at 3:49 AM, David Jencks <david_jencks@yahoo.com> wrote:
>
> On Jul 20, 2010, at 10:24 PM, Jarek Gawor wrote:
>
>> David,
>>
>> Most jndi lookups via the java namespace are supposed to return an
>> unique instance on each lookup. If we route the java namespace lookups
>> directly to service registry lookups we will no longer be returning
>> unique instances because service registry lookups are cached
>> automatically. So we really need to do what we have done before and
>> lookup a gbean in SR and call .$getResource() on it. That also means
>> that each gbean that implements ResourceSource interface must not
>> implement the ServiceFactory interface where getService() calls
>> $getResource().
>>
>
> Does this mean that if we want to make the datasources available directly in the osgi
or aries namespace, perhaps always returning the same instance, and also want to satisfy the
uniqueness requirement for looking it up in the java: namespace, we need to register 2 services
in the service registry?
Yes, I think so - if we want to lookup gbeans in service registry
instead of the kernel registry. We could also revert the code and go
back to old way of doing things. That is, lookup gbeans in kernel
registry for java namespace and register service factories for these
gbeans in service registry for osgi/aries namespace lookups.
Jarek | http://mail-archives.apache.org/mod_mbox/geronimo-dev/201007.mbox/%3CAANLkTikPhfx0_FFPll-qqBKakgkRhEb-ZRXboK1J47ER@mail.gmail.com%3E | CC-MAIN-2017-34 | refinedweb | 223 | 66.27 |
Stroustrup Says C++ Education Needs To Improve

'Given that the problems are not restricted to C++, I'm not alone in that. As far as I can see, every large programming community suffers, so the problem is one of scale.' We've discussed Stroustrup's views on C++ in the past."
I'm just glad they're teaching C++ actively again. (Score:2, Interesting)
One semester later I dropped out and never looked back to computer science again as a career choice. In fact, thoug
Re:I'm just glad they're teaching C++ actively aga (Score:5, Informative)
My favorite lecturer quote, "Oh, I don't really do any coding at all".
Re: (Score:3, Informative)
Where are they teaching it actively again?
My school is. In fact, C++ is the primary language you learn on. There are some Java classes to expose you to it and even brief exposures to some other misc. languages (ADA/LISP/PROLOG). We also program in C for some of the classes geared towards UNIX system programming. It's a nice balance because with Java EVERYTHING is an object and it's likely to confuse freshmen. Heck, it first confused me with these static mains coming from a C/C++ background. At least with C++ you can start with just functional p
Re: (Score:2)
Actually, one of the problems with Java is that not everything is an object. That's why I think Java will eventually be replaced by something much more Ruby-like (but with a bit more performance, please). But Java was written to replace C++ at the application level, and as such it does a tremendous job.
Re: (Score:2, Insightful)
Re:I'm just glad they're teaching C++ actively aga (Score:4, Insightful)
Yes, the lack of top-level functions in Java is well-known criticism (at least from the Python/Ruby crowd), but personally I've never had any problems with that, mostly because I don't use Java for quick scripts. The constant need to cast everything is a much bigger problem, IMO.
Re: (Score:3, Insightful)
Err, are you saying that you *have* to use an IDE to generate a class to run a Java app? The equivalent of your C++ main would be:
public class Test {
    // ...
    public static void main(String[] args) {
    }
}
Why do you need an IDE to write that?
Re: (Score:3, Informative)
I've been a professional Java programmer for 2 years now, and the more I use it, the less I like it. Java behaves like an advanced scripting language, and it's great as long as the programming tasks stay reasonably simple. But beyond some hard-to-define point, where I'm right now, Java just doesn't cut it. Too many pitfalls, too many workarounds.
In my experience there's no language that's more suitable for gigantic software projects with millions of dependencies than Java. Admittedly I don't have much experience with Python and Ruby yet, and while I see those two as advanced scripting languages, other people keep using them to build large software projects in less time than it takes in Java. But compared to C++, maintaining very large projects is much easier in Java.
If you can try to define your point, pitfalls and workarounds, perhaps someon
Re: (Score:2, Informative)
That's imperative programming.
You get functional programming with lisp, scheme, python, ocaml, haskell.
Re: (Score:2)
I believe you mean to say "procedural or imperative". You can use this paradigm before you start true OO programming in C++. For a functional language, try ML.
Re:I'm just glad they're teaching C++ actively aga (Score:5, Insightful)
Re: (Score:3, Funny)
Re: (Score:2, Informative)
Re:I'm just glad they're teaching C++ actively aga (Score:5, Funny)
I'm still astounded that a Computer Science curriculum includes any in-depth teaching of a programming language. Does the physics curriculum include courses on car repair? Does the biology curriculum include courses for the female students on how to land a good husband? Does the Literature curriculum include an in-depth study of calligraphy?
OK, I'm exaggerating a bit for effect, but seriously, most of computer science doesn't even require a computer, let alone an in-depth knowledge of any particular computer programming language. Some universities seem to have CS curriculum that would be more at home at DeVry's than at a university.
When you read Knuth, are you sitting in front of a computer? Gack, I bet that's what the kids nowadays do. The right way to read and study Knuth is sitting in a very fine leather chair, in front of a fireplace with a fire, with a large dog sleeping at your feet, a drink in one hand, and classical music playing softly--and it should take a long walk across a moor to reach the nearest computer.
Re: (Score:3, Insightful)
As background, I have worked with Java, C, and C++. I've also dealt with functional languages (Lisp, Haskell), scripting languages (perl, php, python), and a bunch of other stuff. In all my experiences, my response to
Re:I'm just glad they're teaching C++ actively aga (Score:5, Funny)
I'm a first year programming student at an Ivy League school and I've just finished my Visual Basic classes. This term I'll be moving onto C++. However I've noticed some issues with C++ that I'd like to discuss with the rest of the programming community. Please do not think of me as being technically ignorant. In addition to VB, I am very skilled at HTML programming, one of the most challenging languages out there! existence, it's starting to show its age, and I feel that it should be retired, as COBOL, ADA and Smalltalk seem to have been. Recently I've become acquainted clunky double-pluses.) Its biggest strength is that it abandons an OOPS-style of programming. No more awkward "objects" or "glasses". Instead C uses what are called structs. Vaguely similar to a C++ "glass", a struct does away with anachronisms like inheritance, stability rivaling language.
Thank you for your time. Your feedback is greatly appreciated.
Re: (Score:3, Funny)
myth (Score:5, Funny)
That's actually a myth.
Or, as Alan Kay, the guy who invented object oriented programming, said: "When I invented OOP, I did not have C++ in mind."
(He was trying to be diplomatic.)
Re: (Score:3, Interesting)
C++ might use classes and objects, but it basically makes the programmer think of things on a low level and program in a style that accepts the fact that computers do thing
Re: (Score:3, Informative)
Wrong.
C++, like every sufficiently useful programming language, is a multi-paradigm language. It is dominantly procedural, this is true, but it also has language support for OOAD (which is not the same thing as OOP, as you correctly pointed out), generic programming, generative programming and facilities for adding embedding different ki
bullshit (Score:3, Interesting)
Alan Kay didn't criticize C++ for being multi-paradigm or for being low-level. Alan Kay criticized C++ because C++ classes and the C++ type system are so mind-numbingly limited and poorly designed. No duck typing. No "become". No reflection. No meta-programming. Instead, you get a tar pit of source code dependency management, potenti
Java put you on Skid Row? (Score:5, Insightful)
I will grant you that if you or the parents are shelling out the Purdue tuition, maybe their CS department should find a better professor for their intro course. I am sorry to hear that this experience dissuaded you from completing a CS degree, and there is probably a lot more to your personal story than can be shared on Slashdot.
But I would like to communicate to others out there that you will have a few good teachers in your educational career who are really inspiring, a vast group of average teachers, and a number of who you consider to be really, really bad teachers. The "bad" teachers are that way (in your opinion) for a number of reasons -- they may be "nice guys or gals" who don't have enough preparation or smarts to teach, they may have admitted to you gaps in their preparation that you have taken upon yourself to hold them in disrespect for, or maybe they assign too much HW and work you too hard.
If one is going to take a passive approach, show up to class and demand, "Here, educate me", that is a good way to fail at getting a degree and also to fail at every other opportunity that presents itself down the road. If one is going to take an active approach, working as hard as one can at learning from all teachers, the good and the bad, supplementing gaps in instruction with self-study, working coding jobs, group study, one is going to be successful at college and everything else.
To suggest that a person can have one "bad" prof means that they are on the street drinking methyl antifreeze out of a jar wrapped in a paper bag, this suggests a very passive approach to not just education but life in all its aspects.
Re: (Score:2)
Everything is an exaggeration (Java isn't very big on low-level system/OS programming as far as I know), but on the application level, Java is the biggest, most succesful language ever, so he wasn't far off.
Visual J++ was an obvious dead end, however.
Computer science is about concepts, not syntax (Score:5, Insightful)
Isn't the whole idea of a Computer Science education to learn the underlying concepts of programming, not just the syntax or semantics of a particular language? The programming language used is merely a vehicle for conveying those concepts. The professor who was "learning [J++] with you as we go" was referring to the specifics of the new programming language, not the concepts that he was going to teach in the course. Presumably (unless he was unfit for his position), he knew those concepts well and was able to convey them to the class using whatever syntax and semantics were thrown in front of him.
The quote I'll never forget came from one of my professors during an advising session: "I'll often get calls from IT managers asking if we have any graduating students who know COBOL. I always tell them that ANY of our graduates could know COBOL - and ask if they are hiring someone for their intellect and understanding of programming concepts, or for their knowledge of a particular language."
I completely agree (Score:5, Insightful)
more to it (Score:5, Interesting)
The language is overly complex. The key advice from any C++ expert is "restrict yourself to a specific subset of C++". That's the bulk of the difficulty. If C++ were simplified to include only that subset, you'd have a lot less need for training.
Re:more to it (Score:5, Informative)
For example, most people don't use the SSE stuff or even know about it. You can, for example, make a vector with 4 numbers in it and multiply it with another vector with 4 numbers in it. The result is that the four multiplications are done simultaneously.
Most people won't use this functionality and thus don't even need to learn it, but when you need an algorithm to run fast, it is essential.
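The 4-wide vector multiply described above can be sketched with the standard SSE intrinsics from <xmmintrin.h> (a minimal illustration; the helper name mul4 is made up here):

```cpp
#include <xmmintrin.h>  // SSE intrinsics

// Multiply two 4-element float vectors element-wise.  _mm_mul_ps does
// all four multiplications with a single SSE instruction.
void mul4(const float a[4], const float b[4], float out[4]) {
    __m128 va = _mm_loadu_ps(a);     // load 4 floats (unaligned)
    __m128 vb = _mm_loadu_ps(b);
    __m128 vp = _mm_mul_ps(va, vb);  // 4 multiplies at once
    _mm_storeu_ps(out, vp);          // store 4 results
}
```

With a = {1, 2, 3, 4} and b = {5, 6, 7, 8}, out comes back as {5, 12, 21, 32}. Modern compilers will often auto-vectorize a plain loop into the same instruction, but the intrinsic form makes the 4-wide parallelism explicit.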
Re:more to it (Score:5, Informative)
I focus instead on restricting programmers to the tools they need, so they focus their creativity on algorithms instead of coding methodology. I've even codified it all, as an extension to C [sf.net], rather than C++. Works great for team programming. I had a guy last week write two IC placers: simulated annealing, and quadratic placement, in 5600 lines of hand written code, debugged and working. He did it in 6 days while supporting a difficult client, without working weekends or evenings. I'd estimate his productivity at 10X to 100X higher than average.
Re:more to it (Score:5, Interesting)
A simple example: given a vector of ints, calculate the average value, using standard idiomatic C++. Let's give it a try: a simple task, and a clean implementation, using the standard library as much as possible. No fancy language features used - no template metaprogramming, no pointers, no virtual inheritance. The kind of stuff a new C++ programmer might write after reading Stroustrup. And yet, it gives the wrong result for certain input data. Now, who'll be the first one to spot and explain the problem here, preferably without actually running it? Bonus points for explaining why we need "istream", "ostream" and "iomanip" headers here in addition to "iostream" (and we do, if we want this to be portable).
Re:more to it (Score:5, Interesting)
v.size() has vector<int>::size_type as its return type. Depending on which specific type, either that type will be converted to int, and all is fine, or the result of std::accumulate(v.begin(), v.end(), 0) is converted to an unsigned type, then divided, and then converted back to int. In the latter case, when you have a negative sum, the division gives the wrong result.
I think the <istream> and <ostream> headers are required because the types of std::cin and std::cout might otherwise be incomplete, meaning operator<< might be undeclared. I know the <ostream> header is also required so that std::endl is declared. I'm not seeing what could be in <iomanip> that's of interest to your program. Could you explain that last one?
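A fix along the lines of this diagnosis is to force the division into signed arithmetic (an illustrative sketch; the function name is made up):

```cpp
#include <numeric>
#include <vector>

// Average with the division done in int, so negative sums divide
// correctly.  Still assumes a non-empty vector and a sum that
// fits in an int.
int average(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0)
           / static_cast<int>(v.size());
}
```

Accumulating into a wider type (std::accumulate(v.begin(), v.end(), 0LL)) also sidesteps overflow for large sums.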
Re:more to it (Score:5, Interesting)
Why I quit using C++ (Score:3, Interesting)
I talked to another individual about C++, and what he hates about C++ is that you spend too much time thinking about infrastructure and not enough solving problems, whereas in Java you think more about solving problems and less about infrastructure.
Re:more to it (Score:5, Insightful)
Did I mention using the right tool for the job? Use perl for this example.
You guys are weird. Someone gave incorrect, convoluted code (in C++) for doing something, and you criticize C++ for this???
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
Yes, that subset is called C.
Seriously, at this point it is time to call a halt to C++ education. Treat it like COBOL, Fortran or any other legacy language that still has demand for programmers but is long since past utility. It is a good idea for students to understand that they have to keep current with programming languages and not expect to be employed as a programmer if they can only code in one. But that does not mea
Re:more to it (Score:5, Insightful)
There, fixed that for you.
Re: (Score:3, Funny)
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
And anyone who is saying 'learn to use the leak detection tool', testing programs for correctness is a
Re: (Score:3, Interesting)
Yeah. That subset could be called something then to distinguish it; say, "Java". And then you could have a slightly larger subset with more advanced stuff, like pointers and value types; let's call that something more C++-like to reflect it: hm, how about "C#"? And sure
Re:more to it (Score:5, Insightful)
The correct advice would be, "use precisely what you need and no more".
I have been programming in C++ for a very long time, and I have needed *every single tool* that some of you nay-sayers would say was unnecessary. And that's needed in the sense of "the best solution was achieved using these tools", not "I padded my CV by adding unnecesary complexity".
Re:I completely agree (Score:5, Interesting)
As for programming pedagogy, I think we'd do a lot better if the faculty of CS departments would migrate away from using Java/C++ as the introductory programming model because so much of what gets said initially just goes in one ear and out the other. I will admit to not remembering at all how typedefs or templates in C++ work, and I can't say it's harmed me much.
Python would be a much better choice in my view for a variety of reasons (and I say this though I'm a Perl nut!), or hell, if you teach them Lisp they'll be horribly screwed up for the rest of their lives but at least they'll understand how registers and OOP work.
In short, novice programmers are not going to learn anything useful if you use C++ as the prescriptive model for how a well-written computer program should look -- they're just going to hit the bottle earlier in life.
Re:I completely agree (Score:4, Insightful)
Hell, who am I kidding. Just teach them to code in brainfuck. Or maybe INTERCAL. Normal languages will seem much nicer after that.
Re: (Score:2)
As for programming pedagogy, I think we'd do a lot better if the faculty of CS departments would migrate away from using Java/C++ as the introductory programming model because so much of what gets said initially just goes in one ear and out the other.
Even better would be if they didn't get involved in ideological wars about languages, and focussed on the *concepts* they wanted to teach. If students really are coming out of these courses with certificates saying "computer science" which should read "Java forms design for data centers" then arguing the merits of Java,Lisp,C++ or Son Of The Return of the Bride of BASIC isn't going to fix anything. If you're gonna call the course "Computer Science" then students should come out knowing a decent wodge o
Re: (Score:2)
True, but Real Programmers also know not to use C for things it's not intended for. C is for high-performance systems programming; maintainable application programming is much easier and faster with a higher-level language.
So long its not Matlab (Score:5, Interesting)
Granted, Engineering always went for things that CS considered "brain dead" -- Basic, PC's, DOS, Windows. But Matlab is more brain dead than most.
What happened is that a lot of the current generation of Engineering profs cut their teeth on FORTRAN -- their Intro to Programming was in FORTRAN, whatever industrial job they had before getting a PhD had them compute things in FORTRAN. Few of them were ever comfortable in it and most of them spent hours in the computing center debugging programs dumped to massive punch card decks.
When Matlab came around, it was numerical Nirvana. It had this massive numeric library, so you didn't have to write your own Q-R linear equation solver or SVD subroutine, and you didn't have to go searching for this stuff either; it was all there. It had a command prompt that performed immediate execution, along with reasonably friendly error messages. And it acquired a thoroughly feature-full graphics package.
Don't get me wrong, Matlab is a very capable numerical applications language and even turns out to be one of the better Java scripting languages of all things. But it really falls down in terms of extensibility of its type system, and as far as what Mathworks tacked on for object-oriented programming, fuggedaboutit. It is also the Swiss Army knife of software for a whole bunch of people, and forget about introducing them to a socket wrench and handle that can apply serious torque to a bolt when they think they can get by with the pliers tool.
While people who know what they are doing can benefit from the convenience of the numeric and graphics libraries, the immediate mode, the verbose error handling and rare instances of complete crashes, if you don't know what you are doing (i.e. you are just learning), it can lead to as many hour-gobbling skull-cracking debug sessions as anything else. Our required Numerical Methods course is in CS, it uses Matlab, our faculty is complaining that the students are complaining that they hate the course because they are spinning their wheels trying to get programs to run (in Matlab of all things), and we have guys in our department we want to teach Numerical Methods (in Matlab, of course), in the context of a watered-down Intro to Engineering offering.
What the community needs right now is a Python distro with enough of a numerics and graphics package rolled in to do 90 percent of what is in Matlab (Are the Python people still hashing out that Numerics/Numpy divide? Is there an engineering graphics library that is Numerics/Numpy compatible? 99 percent of what you do in Matlab is that you have a Leatherman Tool of a 2-D array type (Matlab, Matrix Lab) along with all of the libraries being compatible with that type.) CS departments could teach their Intro to Programming along with their Numerical Methods courses using that Python distro, and we can save a generation of engineers from brain damage.
Re:So long its not Matlab (Score:5, Informative)
> numerics and graphics package rolled in to do 90 percent of what is in
> Matlab.
Good idea. This is what both Sage [sagemath.org] and the Enthought Python Distribution [enthought.com] are
shooting for.
> (Are the Python people still hashing out that Numerics/Numpy divide?
No that is done. And the lead developer of Numpy -- Travis Oliphant --
now gets to work full time on Python scientific computing, as an
employee of Enthought [enthought.com].
> Is there an engineering graphics library that is Numerics/Numpy compatible?
There is Matplotlib [sourceforge.net] for
matlab like numpy graphics, and Chaco [enthought.com] for more dynamic 2d graphics. MayaVi [enthought.com] and Sage [sagemath.org] both provide powerful 3d graphics.
Re: (Score:2)
That might be because it is rare to learn C++ outside of an organized course, and if you don't get such a thing in school, it's hard to do it on your own later. O'Reilly apparently publishes a self-teaching book called Practical C++ programming [amazon.com] , but it's not widely distributed compared to their interpreted language offerings. Every other C++ introduction I encounter seems to be designed for use in schools, not for autodidacts.
Re:I completely agree (Score:5, Informative)
I agree with the premise that C++ is a great language, that is poorly understood, and often mis-used. Education would seem to be the answer to that.
Re: (Score:2)
Re: (Score:3, Insightful)
I am not alone in my opinion. Consider the difference in rank o
Re: (Score:2)
I agree and I point the finger (I have to blame someone don't I?) at architects who don't understand what computer code is. Have you ever been presented the next amazing design for a solution and thinking, "am I a magician or a software developer?". If an architect has had some contact with code then the solutions tend to be a little more realistic.
Re:I completely agree (Score:4, Insightful)
I'm not sure if the problem is bad education or the lazy coders who expect everything to be easy (ie done for them) so they don't have to really think about what they're doing.
Even MS's clever chaps have this problem - "lets have C# and GC so no-one need think about memeory ever again" they cried. Then they realised that objects are more than just memory so you do have to worry about destruction of the non-memory resources held by an object (eg file handle, etc). Then they realised they were getting problems writing code that interacted with the OS, so they introduced reference counting objects [msdn.com] that they could put in their "deterministic" finalisation objects that they could put in their Garbage-collected objects.
Re: (Score:2)
Too many people these days have little or no exposure to C++, and never learn how programming in the absence of garbage collection works.
Most people don't need exposure to C++, because most people don't do any systems programming. C++ is for operating systems and drivers, and was never intended for application programming. I think it makes most sense for people to start on something like Java (or Lisp, perhaps), and once they've figured out they want to do systems programming, teach them C and C++. If, instead, they want to go towards application programming, teach them a higher level language like Python or Ruby.
It is especially problematic in our research labs, where computationally complex problems must be solved with very fast code, but the people writing it get completely confused by pointers and memory management. Worse is when a proof-of-concept is distributed, with horrible bugs and completely incomprehensible code.
Sounds like they cho
Maybe the real problem... (Score:3, Insightful)
P.S. Feel free to flame away at me, but not only have I developed professionally in C++, I've actually rescued a C++ project by (among other things) drafting C++ coding standards and guidelines for the 30 or so developers working on it.
Re: (Score:2, Interesting)
It reminds me of this:- [thedailywtf.com]
Re:Maybe the real problem... (Score:5, Insightful)
...and several previous generations of programmes roll over in their graves at the thought that C++ is a "lower-level language".
The thing is, C++ is huge. Just to have a solid working knowledge of the core language, you need to master whole rafts of things that have nothing whatsoever to do with the low-level operation of the machine, because even the core is a labyrinth of obscure corner cases that make language lawyers drool, and which, if expressed in pseudo-code, would be a bunch of gigantic switch statements with a couple dozen levels of ifs nested inside each case. Now, add the STL on top of that, and add common third-party bits like Boost on top of that, and you're left with a monstrosity. To really understand programming at a lower level, you need at best only a small subset of C++, and unfortunately for C++ that subset is properly called "C".
Re: (Score:3, Insightful)
C++ is also complex in and of itself, but ther
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Absolutely. This is fairly easy to check, too; here [open-std.org]'s the most
Re: (Score:3, Insightful)
No, it does not. I wasn't comparing C++ to anything (and there are more complicated industrial languages out there: Common Lisp comes to mind), merely pointing out that its sheer size and complexity is more than most can handle.
They don't, and that's part of the problem (because the books are inadequate; and the books are inadequate because any book that'd cover all the subtleties won't
Re:Maybe the real problem... (Score:4, Interesting)
Re:Maybe the real problem... (Score:4, Interesting)
It will take a lot of effort for someone to make good clean code, fast and efficient, easy to read, easy to update, well commented and bug free. C++ is only a good language if you have the tolerance to code that carefully over a long time.
For real life upper level languages the the job done faster better and easier.
For terms of Teaching Computer Science C++ isn't a good language at all. The problem is the students are focused more on getting the damn thing to compile finding where they missed the ";" vs. Understanding the logic and concepts being taught. I knew C before I started college so doing C++ wasn't much of a stretch, so I got more out of my education then some of the other students who didn't have the background their view is If it compiles then it is done.
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
To fully understand what Bruce means, read this [yosefk.com]
Education, even at Universities, needs to Improve (Score:5, Interesting)
I work in another department and sadly, without formal CS experience, I'm a better programmer than many (if not most) of the CS department's graduates. I don't think, however, that this problem is unique to my school. I've visited other US universities where the situation is very similar.
In fact, I recently took an informal survey of about a dozen CS seniors and found that none (yes, none) of them knew what K&R, the "white book", or the "Art of Computer Programming" were.
Re:Education, even at Universities, needs to Impro (Score:5, Interesting)
My university course spent about half an hour on pointers in a 3 year course. Most of that half hour was factually wrong: the slides were full of code samples that wouldn't compile or would always crash.
They did, however, spend two terms teaching Hoare logic. Or rather, they spent one term teaching it, and then repeated the same material in another term with a different lecturer, because their communication was so poor they never realised they had duplicated their teching.
Friends at other universities reported similar stupidities, though not always on the same scale.
C++ is a rather complex language, but simplifying it won't help. The problem is that low quality education is rampant.
Re: (Score:2)
I think you're having a "Tastes great / less filling" argument. Good education can somewhat prepare a person to learn and use an overly complex language. A bad education leaves a person less prepared to do so.
Having programmed in C++ for many years, and having lived through its evolution to include its nightmarish template system with its nearly incomprehensible error messages even fo
Re: (Score:3, Insightful)
Why the hell should they? Is knowing the title of a C book that's out of date (OK, the second edition is better, but K&R usually refers to the original edition) important?
Perhaps you don't understand what CS is. CS isn't about C - in fact, it's not even about programming at all. CS is about the theory of computation
Re: (Score:2)
I got my B.Sc. in Comp. Sci in the 1970s and ... (Score:5, Funny)
He should go work for Microsoft ... (Score:5, Funny)
They have a solution for that
Not just C++ (Score:2)
I came into IT through the back door. I was a science major in college, messed with computers all the time as I was growing up, and realized I could make a better living in IT than I could in science. So yes, I don't have a ton of programming experience. I have picked up a lot of information over the years on how operating systems actually work under the hood though.
Re: (Score:2)
Computers have gotten so fast and powerful that there's no need to optimize code anymore. This explains why everyone's programming in Java and
.NET.
I was with you up until the above. What? No, no. Java and
.Net are popular because you get the power and flexibility of C++ and other compiled OO languages, but you get the increased productivity that managed code gives you. It is a *lot* easier to create an application in Java or .Net that is good enough, runs fast enough, and has no memory leaks, than it is to create the same application in C++.
.Net, but for the majority of co
There is a bunch of stuff where C++ is a much, much better choice than Java or
Re: (Score:2)
As for
the thing is, if you tell a less-capable programmer that he doesn't have to worry about memory, he'll think christmas has come early and he will stop worrying about memory... as a result his code will become poorer, resource cons
level of abstraction complexity. (Score:2)
A trade off of higher learning curve for faster programming with more complex abstractions vs. lower learning curve for less abstract complex language application.
In the simplest example and using the basis of common computers:
When you mark a switch with a symbol of a "0" and a "1" what does it mean? On/Off or a variation inbetween these like standby, sleep, or any other thi
In other news... (Score:3, Funny)
OK I'll say it.. (Score:2)
I've written some educational C++ articles (Score:5, Informative)
The articles are:
Linus Torvalds on C++ (Score:4, Interesting)
For the record, I'm inclined to agree with Torvalds. The main problem with C++ is its insane levels of complexity and its unerring eye for adding subtle and difficult-to-diagnose problems once things like multiple inheritance get factored in.
Sadly... (Score:3, Funny)
Strostrup is the problem (Score:5, Interesting)
The big problem with C++ is Strostrup. He's in denial about the fact that the language is fundamentally broken. But he's still influential in C++ circles. Thus, no one else can fix the mess at the bottom.
The fundamental problem with C++ is that it has hiding ("abstraction") without memory safety. This is the cause of most of the world's buffer overflows. No other major language has that problem. C has neither hiding nor memory safety, so it is still vulnerable to buffer overflows, but they're to some extent visible at the place they occur. Pascal, Modula, Ada, Java, C#, and all the interpreted "scripting languages" have memory safety. C++ stands alone as a language where you can't see what's going on, and the compiler doesn't have enough information to check subscripts.
The reaction of the C++ standards committee has been to try to paper over the problems at the bottom with a template layer. That didn't work. The template classes just hide the mess underneath; they don't make the language memory safe. There are too many places that raw pointers leak out and break any protection provided by the templates. The template language itself is deeply flawed, and attempts to fix it have resulted in a collection of "l33t features" understood and used by few, and too dangerous to use in production code.
The fundamental cause of the trouble comes from C's "pointer=array" equivalence. That was a terrible mistake, borrowed from BCPL. The trouble is that the compiler knows neither which variables are arrays nor how big the arrays are. You can't even talk about arrays properly. I mean, of course,
int read(int fd, char* buf, size_t len);
That's just trouble waiting to happen. "read" has no information about how big "buf" is.
C++ added references to C, and should have added syntax like
int read(int fd, char& buf[len], size_t len);
to go along with it, so that arrays became first-class objects with sizes. But it didn't. There are some other things that have to be done to the language to make this concept work, but this is the general idea. This is the elephant in the living room of C++, and Strostrup is in denial about it.
Every time you have another crash from a buffer overflow, every time you install another patch to fix a buffer overflow, every time you have a security break-in from a buffer overflow, think of this.
Re: (Score:3, Insightful)
Yeah, array/pointer ambiguity is a key "broken" feature of C++, although at the same time it's exactly the kind of thing that makes it possible to use the same language for code running on a microcontroller and for a full-blown GUI. But yes, for most things it would be incredibly useful to have proper arrays with index checking and so on. Most templated solutions that I've seen (Boost?) are just butt-ugly and make the code that much more difficult
Re:Strostrup is the problem (Score:4, Interesting)
Yeah, array/pointer ambiguity is a key "broken" feature of C++, although at the same time it's exactly the kind of thing that makes it possible to use the same language for code running on a microcontroller and for a full-blown GUI.
Not really. I'm not asking for array descriptors. I'm not proposing to change the run-time representation of arrays. What I'm talking about is a generalization of array declaration syntax.
Until C99, array declarations always had to have constant array sizes. C99 introduced on-stack arrays, so in C99 you can write:
int fn(size_t n);
/* on-stack array, sized based on fn parameter */ ...}
{ float tab[n];
This was the first syntax which allowed a run-time variable in an array definition, and it was a big help, because it saved a malloc call inside of number-crunching libraries. (Numerical Recipes in C had this problem, and FORTRAN didn't.).
The next logical step is to allow that syntax in more places, like function parameters. This doesn't require passing an array descriptor. The programmer gets to specify the expression that defines the array size. So there's now a way to talk about array size that the compiler understands. Checking becomes possible.
Of course "sizeof" and "lengthof" should be extended to such arrays, so you can write:
int status = read(infd, buf, sizeof(buf));
Giving up pointer arithmetic is too much to ask C and C++ programmers, but restricting pointers to iterator syntax (valid values are NULL, a pointer to an element of the array, and one past the end of the array) makes them checkable. There are checking iterator implementations, so this is possible.
Once the compiler knows what's going on with arrays, subscript checking can be optimized. This is well understood. In particular, subscript checks within for loops can usually be optimized ("hoisted" is the term compiler writers use) so that there's one check at the beginning of the loop, and often that check can be folded into the loop termination condition. So the checking cost in matrix-oriented number-crunching code is usually zero.
There's more to this, and it isn't painless to remove this ambiguity, but if we had a "strict mode", where the checkable forms have to be used, there's a transition path.
If all code running as root had to be in "strict mode", we'd be way ahead.
Re: (Score:3, Insightful)
Give it up Bjarne (Score:4, Interesting)
Few could tell you why you necessarily want to make your destructors virtual, why not doing "delete [] array" is not necessarily a memory leak, where must references be initialized, why it's good practice to use (at the time) the new cast notation... the list went on.
It's been a decade, I've started to forget all that material. I followed the ANSI committee, read most issues of "C++ Report" and wrote some of my best code during my days at MS. Unfortunately I can't say I found many people who could relate with verve for putting out great code. (All you trolls, this is about a programming language, not about any specific product or company, go outside, run 'til your heart feels like it's going to give out so your thoughts gravitate elsewhere... better yet, let it give out)
Sayonara C++,
-M
PS: C++ has become niche Bjarne.
Re: (Score:3, Interesting)
Few could tell you why you necessarily want to make your destructors virtual, why not doing "delete [] array" is not necessarily a memory leak, where must references be initialized, why it's good practice to use (at the time) the new cast notation... the list went on.
This exactly highlights the problem of C++:
Either you are doing low-level system programming in which case C++'s OOP abstractions wont help you anyway.
Or you are doing high-level coding in which case C++ doesnt help you because its abstractions are so leaky, they arent making stuff easier but unfortunately even more complex (manual management of virtual/nonvirtuals, memory/resource management, etc.).
C++ always was a kludge, its only raison d'etre was that many other languages weren't there yet and oth
Is it because it is just a language? (Score:5, Interesting)
Reading the threads, many people are discussing the relative merits of the other programming languages/environments - Java/C#/Python/etc - but what do all of those have that C++ does not have?
More complete environments.
When you install a C++ compiler, you get a C++ compiler and the standard library. When you install Java or C# or Python you get libraries to support simplified Networking, IO, Database access, GUIs, Memory Management, Threading and more.
Now it is possible to find all that for C++, but they are all separate components that the developer needs to decide on and download. And the number of choices for each is large. Do you use wxWidgets or FLTK or GTK+ for GUI, for example.
The other environments actually reduce your options, and for projects on a timeline the less time you spend on determining what you need to accomplish the task, the sooner you finish. Yes you can bring in replacement libraries in Java or Python or C#, but few people do. The folks that wrote those libraries did a pretty reasonable job on them, and since they are bundled with the standard installers, unless there are really specific needs, there's rarely a reason to replace them.
Look as an example of this at the Mono project. It is an attempt to provide the C#/.Net environment outside of Windows, but it does not have as much traction as
.Net on Windows, why? in part because the .Net frameworks are more complete on Windows than in Mono. I not many .Net developers that use WindowsForms in every project. Without that piece of the eco-system already available, their project would take much longer. Mono basically provides C# for Linux, just another programming language.
I've watched over the years as some folks tried to assemble Java-like libraries for C++, but they didn't really take off.
This appears to me as why C++ has the reputation of being so hard to build applications in. The developer has to do so much extra work just to get to the point of assembling the program that the Java or Ruby or C# or Python crowd gets out of the box. Is this the fault of C++? Not the language, but perhaps it is something the steering committe should address. As someone pointed out in an earlier thread, the C++ standard group likes to make the comment that a particular given feature is not part of the language. Perhaps they should rethink that stand.
As point of background, I started working with C++ when it first appeared as a pre-processor that created C code that was compiled by a C compiler (when you had to use the keyword Overload). I later moved into Java and have made a good living doing Java development. Recently though I have gotten deep into programming in 3D graphics with OpenGL. I'm doing it both in Java (using jogl) and C++ (direct gl calls as well as engines). This is one area where there is not a clear choice for any platform, but because in the Java world I have the Networking and Threading, I was able to put a system together much quicker than I could in C++. Of course the Java approach has it's own problems because of the sheer volume of objects created/destroyed (imagine a 3D model made of Vector3D objects), so I end up using C++ approaches using float[] arrays (also an object, but only one).
Sorry for the ramble. Anyway, the point is, I personally think C++ would be more acceptable if it really was an eco-system and not just a programming language.
Re:C++ has issues (Score:5, Insightful)
Re:C++ has issues (Score:5, Interesting)
you have my full attention. Please, support your assertion that Java and C# suck rocks.
C++ can be fast at execution time but the development time is prohibitive in many applications where you need to be agile and actually ship code in a hurry. I try not to get hung up on all the esoteric points of different programming languages, although I am quite amused to read other's comments. Yet, I will hazard a post on this topic.
I learned C++, not all of it to be sure, but the portions 85% of us might need in a given project. It may be intellectually stimulating to code an app form a "purest perspective" but many of us have to earn a living and produce a lot of code in short order. C++ does not fit this bill. Most applications just have to work and work today, not next quarter. Then we have to extend the app after a few months. Since C++ is quite a bit harder to read and I have to learn code I did not write in short order to perform this maintenance, I enjoy Java and C# apps a lot more than ones coded in C++.
Please, tell me why Java and C# suck compared to C++ in the practical world. Nearly all of us are not writing low-level, time-critical code. Most of us write apps for business transactions. I happen to write business software that is widely distributed and the C++ performance boost is nullified by the latency of remote calls to distant servers.
Please, tell me the advantage of writing an app in a year vice 6 months.
There are many languages because there are many problem domains. C++ is not the best language. There is no best language, period.
Nearly all serious desktop software
Finally, it has been my observation most "serious" code is no longer constrained to the desktop.
Re: (Score:3, Insightful)
Re: (Score:2)
So let's see. While I will readily agree that C++ really needs a sane syntax, which would include having explicit the default, the real problem with the first program is poorly named functions and classes. Explicit would have saved you that time, but the poor naming would have killed you later anyway. The error in the second one is some programmer's inability to read warnings from the compiler, though again I agree that the default is a backwards. Yet, I have never been bitten by the either,eh, feature, th
Re: (Score:3, Insightful)
You can still use the === operator, or the kind_of? method. And I'm fairly certain metaprogramming is supported.
I've just found that static type checking is about all that's missing, and it isn't incredibly useful, especially when you're doing unit tests.
I'd assumed
Re: (Score:3, Interesting)
You can still use the === operator, or the kind_of? method. And I'm fairly certain metaprogramming is supported.
That would still be runtime. I want it to be compile time, so that I don't pay a runtime overhead for something that could be precomputed, and so that type errors and other such trivia is out of the way before I begin testing. It's like getting a free peer review
:o)
I've just found that static type checking is about all that's missing, and it isn't incredibly useful, especially when you're doing unit tests.
I think we just have to disagree there. Unit test doesn't always save you from odd corner cases. Strict and static typing helps there.
I'd assumed they did? After all, Ruby is from Japan...
It is kind of ironic, but until recently ruby's unicode support sucked.
I'm feeling a bit lost in this jargon... Can you give me some specific examples?
Hmm. I hate giving examples i
Re:C++ too ich and is fast becoming a niche langua (Score:2)
Well, I'm finally moving from working on a FORTRAN 77 project to a C++ project in a few months, so I'm curious what you mean there. Maybe that like FORTRAN, C++ will never go away no matter how much we wish it would. And even worse that after using it for a while you secretly think it does some thing better than the more dominant languages but people look at you like you are crazy if you say so. For example, FORTRAN 77 has a better system for simple file IO
Re: (Score:2)
Re: (Score:3)
Re: (Score:3, Interesting)
True. Some, however, are hideously worse than others.
Pointers are a powerful tool, in the very few cases where you actually need them. Otherwise, they're just a buffer overflow or segmentation fault waiting to happen. (And there are languages which can't have either.)
Re: (Score:3, Interesting)
Really? In what way is that valuable? I can see how it would be interesting, but I can't imagine how it would make my programs better in any measurable way.
Well, in those days, with absurdly small limits on program size, you absolutely needed to know how many bytes your function would take. It sometimes meant being more than the 2K eprom. That requirement of knowledge make for better engineers, IMHO.
C++ has its flaws, absolutely. However, if you u | http://developers.slashdot.org/story/08/03/30/1155216/stroustrup-says-c-education-needs-to-improve?sdsrc=rel | CC-MAIN-2015-35 | refinedweb | 8,271 | 60.65 |
For the openHAB namespace: Choose the option “openHAB 2 Add-ons” in your IDE setup, and go ahead and create a skeleton for your binding. For this, go into your git repository under
git/openhab2-addons/addons/binding and call the script
create_openhab_binding_skeleton.sh with a single parameter, which is your binding name in camel case (e.g. ‘ACMEProduct’ or ‘SomeSystem’). When prompted, enter your name as author and hit “Y” to start the skeleton generation.
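For example, assuming the repository is checked out under ~/git and the binding is called ACMEProduct (both the path and the name are illustrative):

```shell
cd ~/git/openhab2-addons/addons/binding
./create_openhab_binding_skeleton.sh ACMEProduct
```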
For the Eclipse SmartHome namespace: Choose the option “Eclipse SmartHome Extensions” in your IDE setup, and go ahead and create a skeleton for your binding. For this, go to
git/smarthome/tools/archetypeand run
mvn install in order to install the archetype definition in your local Maven repo. Now go to
git/smarthome/extensions/binding and call the script
create_esh_binding_skeleton.sh with a single parameter, which is your binding name in camel case (e.g. ‘ACMEProduct’ or ‘SomeSystem’). When prompted, enter your name as author and hit “Y” to start the skeleton generation.
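The equivalent steps for the Eclipse SmartHome namespace might look like this (the checkout path and binding name are illustrative):

```shell
cd ~/git/smarthome/tools/archetype
mvn install                                   # install the archetype into the local Maven repo
cd ../../extensions/binding
./create_esh_binding_skeleton.sh ACMEProduct
```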
Now switch to Eclipse and choose
File->Import->General->Existing Projects into Workspace, enter the folder of the newly created skeleton as the root directory, and press "Finish".
Note: Here you can find a screencast of the binding skeleton creation.
When implementing your code, please check the library recommendations at Eclipse SmartHome. This ensures that everyone uses the same libraries for, e.g., JSON and XML processing or for HTTP and WebSocket communication.
Setup and Run the Binding
To set up the binding, you need to configure at least one Thing and link an Item to it. In your workspace in
demo-resources/src/main/resources/things, you can define and configure Things in files with a
*.things extension. To have the binding picked up by the distro, you furthermore need to add it to the feature.xml, again at the alphabetically correct position. If you have a dependency on some
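Returning to the Thing definition mentioned above: a minimal *.things entry might look like the following sketch, where the binding id, thing type, and parameters are illustrative:

```
Thing acmeproduct:device:living "ACME Living Room Device" [ host="192.168.1.10", refreshInterval=60 ]
```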
The build includes tooling for static code analysis that validates your code against the openHAB Coding Guidelines and some additional best practices. Information about the checks can be found here.
The tool will generate an individual report for each binding, with findings grouped by priority.
Develop .NET Standard user-defined functions for Azure Stream Analytics Edge jobs (Preview)
Azure Stream Analytics offers a SQL-like query language for performing transformations and computations over streams of event data. There are many built-in functions, but some complex scenarios require additional flexibility. With .NET Standard user-defined functions (UDF), you can invoke your own functions written in any .NET standard language (C#, F#, etc.) to extend the Stream Analytics query language. UDFs allow you to perform complex math computations, import custom ML models using ML.NET, and use custom imputation logic for missing data. The UDF feature for Stream Analytics Edge jobs is currently in preview and shouldn't be used in production workloads.
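In shape, a .NET Standard UDF is just a public class exposing public static methods. The sketch below is illustrative — the namespace, class, and method names are not taken from any particular sample:

```csharp
// Illustrative sketch of a .NET Standard UDF class. Stream Analytics
// binds to public classes that expose public static methods.
namespace MyCompany.Udfs
{
    public class TemperatureFunctions
    {
        // Convert a Fahrenheit reading to Celsius.
        public static double FahrenheitToCelsius(double fahrenheit)
        {
            return (fahrenheit - 32.0) * 5.0 / 9.0;
        }
    }
}
```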
Overview
Visual Studio tools for Azure Stream Analytics make it easy for you to write UDFs, test your jobs locally (even offline), and publish your Stream Analytics job to Azure. Once published to Azure, you can deploy your job to IoT devices using IoT Hub.
There are three ways to implement UDFs:
- CodeBehind files in an ASA project
- UDF from a local project
- An existing package from an Azure storage account
Package path
Every UDF package uses the path
/UserCustomCode/CLR/*: Dynamic Link Libraries (DLLs) and resources are copied under that folder, which isolates user DLLs from system and Azure Stream Analytics DLLs. This package path is used for all functions, regardless of the method used to employ them.
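For example, a package containing a single UDF assembly would be laid out roughly as follows (the file names are illustrative):

```
UserCustomCode.zip
└── UserCustomCode/
    └── CLR/
        ├── MyUdfLibrary.dll
        └── (any referenced DLLs and resource files)
```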
Supported types and mapping
CodeBehind
You can write user-defined functions in the Script.asql CodeBehind. Visual Studio tools will automatically compile the CodeBehind file into an assembly file. The assemblies are packaged as a zip file and uploaded to your storage account when you submit your job to Azure. You can learn how to write a C# UDF using CodeBehind by following the C# UDF for Stream Analytics Edge jobs tutorial.
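Once the CodeBehind compiles, the function can be invoked from the query with the udf prefix — for example (the function, column, and stream names below are illustrative):

```sql
SELECT
    deviceId,
    udf.FahrenheitToCelsius(temperature) AS temperatureCelsius
INTO Output
FROM Input
```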
Local project
User-defined functions can be written in an assembly that is later referenced in an Azure Stream Analytics query. This is the recommended option for complex functions that require the full power of a .NET Standard language beyond its expression language, such as procedural logic or recursion. UDFs from a local project might also be used when you need to share the function logic across several Azure Stream Analytics queries. Adding UDFs to your local project gives you the ability to debug and test your functions locally from Visual Studio.
To reference a local project:
- Create a new class library in your solution.
- Write the code in your class. Remember that the classes must be defined as public and objects must be defined as static public.
- Build your project. The tools will package all the artifacts in the bin folder into a zip file and upload the zip file to the storage account. For external references, use an assembly reference instead of a NuGet package.
- Reference the new class in your Azure Stream Analytics project.
- Add a new function in your Azure Stream Analytics project.
- Configure the assembly path in the job configuration file,
JobConfig.json. Set the Assembly Path to Local Project Reference or CodeBehind.
- Rebuild both the function project and the Azure Stream Analytics project.
Example
In this example, UDFTest is a C# class library project and ASAEdgeUDFDemo is the Azure Stream Analytics Edge project, which will reference UDFTest.
Build your C# project, which will enable you to add a reference to your C# UDF from the Azure Stream Analytics query.
Add the reference to the C# project in the ASA Edge project. Right-click the References node and choose Add Reference.
Choose the C# project name from the list.
You should see the UDFTest listed under References in Solution Explorer.
Right click on the Functions folder and choose New Item.
Add a C# function SquareFunction.json to your Azure Stream Analytics project.
Double-click the function in Solution Explorer to open the configuration dialog.
In the C# function configuration, choose Load from ASA Project Reference and the related assembly, class, and method names from the dropdown list. To refer to the methods, types, and functions in the Stream Analytics Edge query, the classes must be defined as public and the objects must be defined as static public.
Existing packages
You can author .NET Standard UDFs in any IDE of your choice and invoke them from your Azure Stream Analytics query. First compile your code and package all the DLLs. The format of the package has the path
/UserCustomCode/CLR/*. Then, upload
UserCustomCode.zip to the root of the container in your Azure storage account.
Once assembly zip packages have been uploaded to your Azure storage account, you can use the functions in Azure Stream Analytics queries. All you need to do is include the storage information in the Stream Analytics Edge job configuration. You can't test the function locally with this option because Visual Studio tools will not download your package. The package path is parsed directly to the service.
To configure the assembly path in the job configuration file,
JobConfig.json:
Expand the User-Defined Code Configuration section, and fill out the configuration with the following suggested values:
Limitations
The UDF preview currently has the following limitations:
.NET Standard languages can only be used for Azure Stream Analytics on IoT Edge. For cloud jobs, you can write JavaScript user-defined functions. To learn more, visit the Azure Stream Analytics JavaScript UDF tutorial.
.NET Standard UDFs can only be authored in Visual Studio and published to Azure. Read-only versions of .NET Standard UDFs can be viewed under Functions in the Azure portal. Authoring of .NET Standard functions is not supported in the Azure portal.
The Azure portal query editor shows an error when using .NET Standard UDF in the portal.
Because the custom code shares context with Azure Stream Analytics engine, custom code can't reference anything that has a conflicting namespace/dll_name with Azure Stream Analytics code. For example, you can't reference Newtonsoft Json. | https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-edge-csharp-udf-methods | CC-MAIN-2018-51 | refinedweb | 996 | 65.42 |
With Windows XP Tablet PC Edition, Microsoft introduced more than just a powerful platform for users. Through the associated SDKs, Microsoft has also empowered developers to create advanced ink-enabled applications.
Recognizer objects provide the means to recognize handwriting as text. Similarly, gestures can be recognized and interpreted in any way the developer desires. Other objects allow for the division of digital ink into paragraphs, lines, and segments. The combination of these objects allow for the creation of very advanced applications. However, these options do not cover the complete range of features needed to create next-generation ink-enabled applications.
So what is missing? In short, the ability to add meaning to ink beyond the simple recognition of text. Users may add drawings, for instance. With ink recognizers, even a simple line is considered to be text and is thus recognized with very poor results. You need the ability to identify whether a part of a document is text or a drawing before recognition is applied.
Also, once a drawing element such as a line is identified, its meaning must also be interpreted. It could be meant as a true drawing, but it could be a line connecting an annotation with another part of a document, or it could be an underlined word. This sort of meaning can only be derived by using contextual analysis that looks at large sections of ink rather than just individual strokes that make up characters or words. And even if the line is ultimately identified as a true drawing, how can the system recognize drawing primitives such as circles, squares, or triangles?
The first-generation Ink SDK features do not make it easy to handle any of these scenarios. Furthermore, spatial analysis functionality is missing. Ink Divider objects can break digital ink into various segments, but logical context is limited. Segments are not the same as words, for instance. This makes it difficult to handle scenarios such as reflowing ink, which is necessary when ink-space gets resized. If a window containing ink changes its size, individual words of ink may need to move onto the next line and the rest of the ink has to move accordingly. Without the means to recognize concepts such as lines and words reliably, it is difficult to implement ink reflowing. Also, as ink moves, logically associated ink such as an underline or an annotation needs to move as well. To implement such a scenario successfully, relationships between different ink segments need to be recognized.
Another aspect of spatial analysis is the ability to assign names to certain sections of ink. In digital equivalents of paper forms, for instance, you would find different areas such as a space for a name or address.
The new InkAnalysis API (available in Windows Vista™ as well as downlevel to Windows XP through a redistributable) provides the means to handle all these scenarios with ease. This API is available as part of the new Windows® SDK, which contains the new WinFX® components (see msdn.microsoft.com/windowsvista/getthebeta). Note that the WinFX SDK has a dependency on the .NET Framework 2.0.
The InkAnalysis API supplements existing features that are still available. However, the API also supersedes the existing objects, especially the recognizer objects (specifically the Recognizer and RecognizerContext objects) and the ink Divider object. In essence, the InkAnalysis API combines the two different but related technologies that are required for handling digital ink: recognition and layout analysis and classification.
Ink recognition, often called handwriting recognition, is the task of analyzing strokes of ink in order to turn them into a text-based version of the ink that can be handled as standard strings. This is done based on specific language and culture assumptions for the ink that is to be recognized. In other words, ink recognition is the computer reading someone’s handwriting. Ink recognition can also be applied in a slightly different way. For instance, ink recognition could analyze sheet music for the purpose of turning it into some sort of standard data format, such as MIDI. Or ink recognition could be used to read Egyptian hieroglyphs. With the new InkAnalysis API, you can add drawing primitives and basic shapes to the list.
Ink layout analysis and classification, on the other hand, deals with the overall layout and spatial distribution of ink. Microsoft defines two areas of ink parsing: classification and layout analysis. Ink classification concerns itself with finding semantically meaningful parts such as paragraphs, words, or drawings. In many cases it is helpful to find such semantics before applying handwriting recognition. For instance, it makes no sense to apply handwriting recognition to drawings. The InkAnalysis API provides the means to do so using ink classification, a feature that was not available previously. Furthermore, the InkAnalysis API allows you to break ink into a hierarchical tree, which then directly provides the means to perform recognition on individual tree branches.
Layout analysis refers to the computational analysis of ink strokes and their positions relative to each other with the intent to find spatial and semantic meaning. This goes beyond the relatively simple task of finding lines and paragraphs. In simpler terms, layout analysis can find things such as annotations, bulleted lists, flow charts, and much more.
Basic Ink Analysis
To create a first ink analysis example, let’s start with a basic ink-enabled Windows Forms application just as if you were to create a conventional ink recognition example. If you are not familiar with ink development, here is the short version of how to create such a form.
Using the Microsoft® Tablet PC SDK, any form or control can be ink enabled. To do so, create a standard Windows Forms app and add an InkOverlay object. This object is part of the Microsoft.Ink namespace, exposed from Microsoft.Ink.dll, which needs to be added to the project references. (It is listed as Microsoft Tablet PC API in the Add References dialog box.) You can then link this object with any control to be ink-enabled by associating the ink overlay with the control’s window handle. When the InkOverlay’s Enabled property is set to true, ink collection and ink rendering are enabled on the control, including the ability to add, select, erase, and otherwise manipulate ink. The digital ink information is stored in an Ink object, accessible from the InkOverlay, where it can be accessed for recognition, programmatic manipulation, or storage.
To provide a good ink experience, it is important to create an inkable area of sufficient physical dimension. A good choice is a Panel control, perhaps with white background, that covers the majority of a window. Of course it is also possible to enable an entire window (Form) for inking, but in most scenarios other controls are needed in the interface, such as buttons. Therefore, the Panel approach is usually preferable.
To follow this Panel approach, create a new Form in the .NET-targeted language of your choice, drop in a Panel control, and name it inkPanel. To ink-enable the panel, add the following lines of code to the form:
private InkOverlay overlay; public Form1() { InitializeComponent(); this.overlay = new InkOverlay(this.inkPanel); this.overlay.Enabled = true; }
The InkAnalysis API operates based on ink and stroke information, which is collected in the InkOverlay’s Ink object. Fundamentally, ink analysis is as simple as creating an instance of an InkAnalyzer object, pointing it at ink, and calling the Analyze method (to use InkAnalyzer, you must add a reference to Microsoft.Ink.Analysis.dll, which, as of the February 2006 Community Technology Preview, is available in %ProgramFiles%\ReferenceAssemblies\Microsoft\TabletPC\vl.7\).
Here is the simple code needed to trigger ink analysis:
InkAnalyzer analyzer = new InkAnalyzer(this.overlay.Ink, this); analyzer.AddStrokes(this.overlay.Ink.Strokes); analyzer.Analyze();
This analyzes all the strokes of ink added to the analyzer. This configures the analyzer and causes it to create analysis results internally that you can subsequently access. To access the root of the hierarchical ink representation, start with the RootNode property of the analyzer after executing Analyze:
ContextNode root = analyzer.RootNode;
To perform text recognition, you use the GetRecognizedString method of either the type-specific Node or the analyzer itself:
string text = analyzer.GetRecognizedString(); // ... or ... string text = root.GetRecognizedString();
It is also possible to access all the hierarchical subelements of the root node individually. The individual elements found that way will vary with the exact ink the user writes. But, in general, immediate subnodes of the root node are likely to be paragraphs or drawings. The following code iterates over all child nodes and retrieves their handwriting recognition results one by one:
StringBuilder text = new StringBuilder(); foreach(ContextNode subNode in root.SubNodes) { text.Append(subNode.GetRecognizedString()); }
It is important to realize the hierarchical nature of the analysis tree. Performing recognition on the root node returns the complete recognized text for all the ink data added to the analyzer. Performing recognition on the subnodes returns subsections of the entire document, most likely paragraph by paragraph (again, depending on the exact data). Each paragraph can then be broken down further by accessing its children, and so forth.
It is relatively easy to walk the entire tree in a recursive fashion. The code in Figure 2 performs such a tree walk and shows the recognition results of each node in the tree, as well as the node type. The complete code for the form shown in Figure 1 is included with the code download for this article.
The result of walking through the analysis tree can be seen in Figure 1. If you look closely, you can see the individual nodes on the tree and their types, as well as the recognition results.
You may see ink examples that use the ToString method of a Strokes collection to retrieve recognition results. For example, instead of using GetRecognizedString as I did in the previous sample, I could have used:
string text = node.Strokes.ToString();
This is not recommended as it is a legacy feature of the ink API. Using Strokes.ToString in this fashion does not take advantage of the processing and contextual analysis performed by InkAnalyzer. Thus, the recognition results could vary depending on whether the recognizer looks at strokes making up a whole paragraph or line of ink, or whether it just looks at the strokes for an individual word. This is due to how the recognizer rates recognition quality. If a whole sentence is recognized, grammatical and other language rules can be used to aid recognition; while in scenarios with individual word recognition, only spelling can be used to verify which recognition results are likely. You should avoid these issues by instead using GetRecognizedString. Additionally, I’ve wrapped usage of the InkAnalyzer with a C# using statement in order to ensure that InkAnalyzer.Dispose is called when I’m done with it. Calling InkAnalyzer.Dispose is mandatory to avoid resource leaks.
Paragraphs, Lines, and Words
An important part of ink analysis is the ability to find logical segments of text such as paragraphs, lines, and words. The previous example looks at the whole logical tree of ink nodes, which includes this information. Often it is preferable to specifically look for certain types of text. This can be achieved either through the Type property of each node or through the FindNodesOfType method on the analyzer object. The code in Figure 3 looks at all paragraphs and, subsequently, at all lines within a node. This makes it possible to recognize handwriting and preserve line breaks at the same time.
Of course, a similar result (which can be seen in Figure 4) could have been achieved using the older Divider object, but it would have been somewhat more difficult. Also, this approach is more powerful than the ink divider. For instance, this example would completely ignore drawings, since the FindNodesOfType method is used to return paragraphs only. The ink Divider is not capable of such distinction. Ink analysis is even capable of identifying more sophisticated logical elements of ink, such as bulleted lists.
Analysis of bulleted lists requires the identification of bullets of any kind. Bullets exist within paragraphs. Whenever a bullet is found, it is the first element within a paragraph. The bullet itself does not have any text beyond the label of the bullet (such as *, or 1., or A). The text that goes along with the bulleted list item is the first line element within the paragraph. This is not a subnode of the bullet, but a sibling. Unfortunately, there is no way to find siblings easily. Therefore, the way to find bulleted-list elements is to look at paragraphs and then check whether the first item within the paragraph is a bullet.
Another approach is the use of the FindNodesOfType method to retrieve all bullets. From there on, you can look at the parent (which is the paragraph) and then look at the paragraph’s second item, which is the text that goes along with the bullet. Better approaches would depend on the specifics of the scenario at hand. If the goal is to extract bulleted lists from a large amount of ink that may include non-bulleted text, then it is better to first retrieve a list of bullets using FindNodesOfType. If the goal is to process a large amount of ink-text and simply handle bulleted lists appropriately if encountered, then the paragraph-by-paragraph approach is preferable.
Figure 5 shows an example of a numbered list. Figure 6 provides the code that produces this result. Note that this example performs recognition on the line of ink associated with the bullet, as well as the bullet itself, to retrieve the bullet label.
Analysis Hints
The InkAnalysis API does an amazing job out of the box, but it is possible to improve results further and make it easier to work with the API at the same time by using analysis hints. Analysis hints allow the developer to provide additional information to the recognizer and the analysis engine. For instance, you can set "factoids" to specify information about the type of text that is expected. This drastically improves recognition results for hard-to-recognize text such as e-mail addresses.
Factoids have been available on simple recognition objects since the first versions of the ink API. However, what makes analysis hints special is that separate areas on an ink canvas can have different factoids. For instance, it is possible to create an area on a form that is dedicated to receiving a specific kind of ink input. The following defines a rectangle within the ink panel that is dedicated to writing a name:
Rectangle nameRect = new Rectangle( 100, 10, inkPanel.Width-110, 60); AnalysisHintNode hint = analyzer.CreateAnalysisHint( GetInkSpaceRectangle(nameRect)); hint.Name = "Name";
In this example, a new node called Name is added to the analysis tree. The Name area is defined by a rectangle that begins 100 pixels from the left and 10 pixels from the top of the form, goes across almost the remaining width of the panel, and is 60 pixels tall. From this point on, the developer has the choice to refer to everything written in this area by name. It thus becomes easy to recognize ink entered in the Name area of the form. To do so, you simply access the Name node and retrieve all linked nodes in the following fashion:
ContextNodeCollection names = analyzer.GetAnalysisHints("Name"); if (names.Count > 0) { AnalysisHintNode hint = (AnalysisHintNode)names[0]; string name = hint.Links[0].DestinationNode.GetRecognizedString(); }
The linked node is generally a WritingRegionNode, which contains a paragraph and one or more lines. It is possible to either recognize the whole writing region or drill down further in a fashion identical to what you have seen in prior examples.
Note that the rectangle defining the input area has to be defined in ink space. Ink is collected and stored at a much higher resolution than it is displayed onscreen. Generally, developers are familiar with screen coordinates and thus like to define regions using that system. However, these coordinates have to be converted into ink coordinates before they can be used as analysis hints. A detailed discussion of ink versus screen coordinates is beyond the scope of this article, but here are two simple methods that convert screen X and Y coordinates to ink X and Y coordinates using a GDI+ Graphics object and the PixelToInkSpace method exposed by the Renderer object associated with the InkOverlay:
private int GetInkX(int screenX) { return GetInkPoint(screenX, 0).X; } private int GetInkY(int screenY) { return GetInkPoint(0, screenY).Y; } private Point GetInkPoint(int screenX, int screenY) { using(Graphics g = this.CreateGraphics()) { Point p = new Point(screenX, screenY); this.overlay.Renderer.PixelToInkSpace(g, ref p); return p; } }
The GetInkSpaceRectangle method seen previously uses these methods to convert a screen-space rectangle into an ink-space rectangle. The complete code used to convert the rectangle is included in the code download.
This analyzer hint is a rather basic one that serves the main purpose of creating a named, simple-to-access ink input area on a form. This is useful for the simulation of a paper form that has a number of distinct fields that need to be filled out. However, analysis hints offer a lot more than that. The following code example sets aside another part of the ink panel to allow for the entry of an e-mail address:
Rectangle emailRect = new Rectangle(100, 280, inkPanel.Width-110, 60); AnalysisHintNode hint3 = analyzer.CreateAnalysisHint(GetInkSpaceRectangle(emailRect)); hint3.Name = "Email"; hint3.Factoid = "EMAIL";
E-mail addresses are difficult to recognize because they use odd characters like @ and they often use sequences of characters that cannot be found in recognizer dictionaries. However, they also follow a specific pattern. They must have one and only one @ sign. They cannot have spaces, and they must have at least one dot after the @ sign. This is valuable information for the recognizer, so setting this factoid will dramatically increase the ability to recognize e-mail addresses.
There are a large number of factoids, many of which are culture specific. For an exact list of factoids, refer to the Mobile PC and Tablet PC Development Guide. Note that it is possible to set a factoid for a complete ink area and not just a specific region by creating an analysis hint without specifying a limiting rectangle. Only one such global analysis hint can exist for an ink analyzer object.
Another valuable analysis hint feature is the ability to set analysis guides. Guides define an area of ink, the number of lines and columns, and the height of each line. Finding the baseline of each line of text in a free-form ink area is an inexact science. Also, finding character heights is difficult and thus the distinction between uppercase and lowercase characters can be inaccurate. Using guides, you can define the height of the midline, an imaginary line along the top edge of a half-height character like "u."
The following code snippet defines an address area within the ink panel that contains three lines and no columns. The entire space is 150 pixels tall, with a little bit of margin at the top and bottom so the user isn’t immediately penalized if they happen to write outside the defined area. This leaves 45 pixels per line. Considering that the midline is usually a little higher than half the height of an uppercase character, I define the midline to be 25 pixels. The last two parameters passed to the RecognizerGuide object define the writing rectangle (the ideal writing rectangle for each individual line without the margin of error granted to the user) and the overall rectangle the guide is for (measured relative to the area assigned to the overall hint). Of course, all these screen coordinates have to be converted to ink space:
Rectangle addrRect = new Rectangle(100,100, inkPanel.Width-110, 150); AnalysisHintNode hint2 = analyzer.CreateAnalysisHint(GetInkSpaceRectangle(addrRect)); hint2.Name = "Address"; Rectangle drawnRect = new Rectangle(5, 10, addrRect.Width-20, 45); Rectangle writingRect = new Rectangle(0, 0, addrRect.Width, 50); hint2.Guide = new RecognizerGuide(3, 0, GetInkY(20), GetInkSpaceRectangle(writingRect), GetInkSpaceRectangle(drawnRect));
The trouble is that while you have now perfectly defined various subareas of the ink panel, the user does not know about these areas, since there is no visual indication of these areas in the user interface. It is therefore advisable to add such indicators. In general it is done with a few simple lines of GDI+ drawing code that is linked to the paint event of the ink-enabled control. Figure 7 shows one such paint event handler, and Figure 8 shows the result.
Drawing Recognition
Recognizing drawings has been traditionally difficult to implement with the Tablet PC SDK since there was no predefined functionality for this purpose. The InkAnalysis API takes a first step towards solving the problem by providing the ability to recognize drawing primitives such as rectangles, circles, triangles, and the like. The following code snippet extracts all drawing nodes and retrieves the names of the individual shapes:
StringBuilder text = new StringBuilder(); using(InkAnalyzer analyzer = new InkAnalyzer(this.overlay.Ink, this)) { analyzer.AddStrokes(this.overlay.Ink.Strokes); analyzer.Analyze(); ContextNodeCollection drawings = analyzer.FindNodesOfType(ContextNodeType.InkDrawing); foreach(InkDrawingNode drawing in drawings) { text.AppendLine(drawing.GetShapeName()); } } MessageBox.Show(text.ToString());
Figure 9 shows this code in action. Of course, just like with other nodes, each drawing node has a number of additional properties associated with itself, such as the position of each element, the bounding box, and drawing specific information such as the position of the four corners of a rectangle. These points are known as hot points. Each ink drawing node has an array of associated hot points. For most shapes, hot points are the start and end points of individual lines. Circles and ellipses are the exception, with hot points defining four points at 0, 90, 180, and 360 degree points.
Figure 9 also shows drawings analyzed for hot points. I created a GDI+ drawn overlay that renders red lines from hot point to hot point. The methods responsible for this behavior can be seen in Figure 10. Note that all hot points are defined in ink space coordinates and need to be converted to screen coordinates before they can be rendered with GDI+. The code for the coordinate conversion can also be found in Figure 10.
The Bigger Picture
Ink analysis really shines when it comes to analyzing large segments of ink for dependencies and semantic meaning. Not only does ink analysis break digital ink into a hierarchical tree, it also creates links between these hierarchical nodes. I have already shown an example of such links in the analysis hint examples, where a certain area within an ink surface is represented by a named node that links to ink nodes that fall within the on-screen area occupied by the analysis hint node. This is an example of a link that is introduced manually. But ink analysis can also find such links based on contextual meaning. Based on this ability, it is possible to find multiple segments of ink that form a logical unit, such as a word and an underline, as well as self-contained segments that relate to each other, such as a paragraph and an associated annotation, or two different shapes in a flow chart.
Links can be explored through the Links collection that is present on each context node in the analyzed hierarchy. For instance, a word node can be associated with an underline by a single element in the links collection. Figure 11 shows an example that draws colored bounding boxes around linked objects (blue for the link source and red for the link destination). In this example, the underline is linked with the word "brown." This example is a slightly more sophisticated version of the very basic ink context hierarchy example at the beginning of the article.
Instead of displaying the hierarchy in a messagebox, it is displayed in a treeview control. Each node in the treeview maintains a link to the ink context node created by the ink analyzer. The ink context node is assigned to the Tag property of each tree node, so it can be accessed at a later point in time. This enables the app to react to clicks on various TreeView nodes by selecting the ink and recognizing the handwritten text associated with each node:
private void treeView1_AfterSelect(object sender, TreeViewEventArgs e) { ContextNode inkNode = (ContextNode)e.Node.Tag; // Recognize the text this.textBox1.Text = inkNode.GetRecognizedString(); // Select the associated ink this.overlay.Selection = inkNode.Strokes; }
Also, the Paint event of the ink panel can be used to draw bounding boxes around linked objects for the current selection. The code in Figure 12 accomplishes that task. Note that once again, the bounding box coordinates need to be converted from ink space to screen space.
A slightly more complex version of this example is available in the code download. In that example, ink analysis is triggered automatically 2 seconds after the user writes the last stroke of ink. If more ink is added later, analysis is triggered again. This is accomplished by listening for the Stroke event on the InkOverlay, which fires every time a stroke of ink has been completed. The event handler code associated with this event resets a timer object with an interval of 2000 milliseconds (2 seconds). If another stroke occurs within those 2 seconds, the timer is reset. Otherwise, the timer fires and triggers the analysis, which populates a treeview as in the example just shown. Whenever the user selects a node in the treeview, the associated ink is selected, and bounding boxes are drawn to show the links. The code used to do so is almost identical to the snippets you just saw, with the exception of some more sophisticated screen refresh code. Also, the code includes a method that converts ink space coordinates to screen space coordinates.
Figure 13 shows another ink document analyzed by the same code. This time, ink analysis is used to find links between various elements of a flow chart. In particular, a line (the source object) has two separate links to connect two shapes in the chart. If we were to look at each of the two shapes by itself, we would find that it has links to the text within itself, and so forth.
Using link analysis, it is possible to create advanced behavior. For instance, linked objects can be moved automatically whenever one of the objects involved in the link scenario is moved around. This enables you to keep logical units, such as words and underlines, together. In other scenarios, ink may be manipulated in different ways. For instance, if the Step 2 shape from Figure 13 is moved down, the enclosed text has to move, but the line that connects the Yes/No shape with the Step 2 shape should not move, but stretch instead. With the basis laid by link analysis, this represents a relatively simple computational problem that can be resolved based on the number of linked objects and their positions.
Link analysis represents one of the biggest improvements in the Tablet PC SDK. Without ink analysis (and link analysis), such scenarios used to represent problems of such magnitude that it wasn’t feasible for the average developer to resolve them using old-style recognizers and ink dividers.
Other Technical Aspects
All the features introduced in this article so far are directly aimed at producing recognition and analysis results. However, there are a few other technical aspects that should be mentioned as well. For one, there is the aspect of performance. Any form of ink recognition and analysis is computationally expensive. This is especially true for large ink documents. For such scenarios, synchronous analysis may not be feasible. Instead, large ink documents need to be analyzed continuously in the background.
To perform background analysis, use the analyzer’s BackgroundAnalyze method instead of the Analyze method shown in this article. Since this method is executed asynchronously, it does not directly return a result. Instead, an event-driven model is used to indicate that analysis is complete, or new analysis results are available.
Another important new feature of ink analysis is the ability to store results to disk. Using conventional recognizer objects, recognition was performed in memory only. Of course, it was possible to retrieve recognition results and store them as text, but it was not possible to serialize the state of a recognizer object. If a large ink segment was reloaded later, all recognition had to be performed again, which can be a very time consuming task. Results may also vary depending on the configuration of the recognizer. Using ink analysis, the state of the analyzer can be serialized. The analyzer object provides SaveResults and Load methods for this purpose.
The ability to save the state of the analyzer is particularly useful since the analyzer can perform incremental analysis and recognition. This means that additional strokes of ink can be added for analysis and the analyzer will only have to analyze data associated with the new strokes. Depending on the exact scenario, only the added strokes may be analyzed, or a slightly larger set of strokes may be involved if the relationship between existing and new strokes needs to be analyzed. In almost all scenarios, the workload for the analyzer will be significantly reduced compared to re-analyzing a complete document. Performance reasons alone make InkAnalysis preferable over older ink objects.
If you are interested in Tablet PC development, then take some time and familiarize yourself with the new ink analysis services. This technology is currently available in beta, but it is at the very top of my "technologies I want to use as soon as possible" list, because it provides a number of key improvements over older Ink API components and technologies. | https://www.codemag.com/article/0606016 | CC-MAIN-2019-13 | refinedweb | 4,985 | 53.81 |
aCollection foreach {(elem) => // do something with elem println(elem) }For those not familiar with Scala, this would be a bit like doing something like this in Java:
import scala.*; // ... aCollection.foreach(new Function1<ElemType?, Unit> { Unit apply(ElemType? elem) { println(elem); return Unit.instance() } });Don't call us, indeed. This is idiomatic in Scala, so almost everything uses things like that. You do have to modify your element access to fit this type of iteration, and even then, it could decide to do it in random order, or not at all. (I'll ignore head and tail for a bit, because they don't technically have to be implemented by a Transversable collection.) Yes, it does think it knows better when to do whatever it is you want to do to each element than you. What of it?
Iterator<ElemType?> iter = aCollection.getIterator(); while(iter.hasNext()) { ElemType? elem = iter.next(); println(elem); }Or even (shudder):
for(int i = 0; i < aCollection.getSize(); i++) { ElemType? elem = aCollection.get(i); println(elem); }So, which is better? Iteration where you have control, or where the library does? Please give me an example of where the HollywoodPrinciple could make things harder to maintain. And I want code, please. Even if it's in Ook! representing objects with an incredibly complex message-passing system. | http://c2.com/cgi/wiki?DontCallMe | CC-MAIN-2014-41 | refinedweb | 219 | 59.8 |
Created on 2012-05-11 00:12 by jfunk, last changed 2015-06-15 03:02 by martin.panter. This issue is now closed.
OpenSSL provides a method, SSL_CTX_set_default_verify_paths(), for loading a default certificate store, which is used by many distributions.
In openSUSE, the default store is not a bundle, but a directory-based store, which is not supported at all by the SSL module in Python 2.7. A bug related to this was assigned to me here:
I created patches for the Python 2.7.3 and 3.2.3 SSL modules that will load the distribution-specific store if ca_certs is omitted.
Here's the patch for Python 3.
Well, this is basically a change in behaviour, so it could only go in the default branch (3.3). But 3.3 already has SSLContext.set_default_verify_paths(), so it's not obvious why the patch is needed.
(furthermore, automatically selecting the system-wide certificates could be surprising. There may be applications where you don't want to trust them.)
By the way, I see that the original bug is against the python-requests library? Perhaps it would be better to advocate the use of load_verify_locations() there, since it's a more adequate place for a policy decision.
(that said, Python's own urllib.request could perhaps grow a similar option)
load_verify_locations() is not available in Python 2.x. It was added in 3.x. Also, there is no way to load a directory-based certificate store at all in Python 2.x, which is why the bug was opened.
Sorry, by our policy this change is a new feature and cannot go into stable versions.
Fair enough. What about a patch to handle a directory store passed through the ca_certs parameter? As it stands now, it's impossible to load the distribution-supplied cert store on openSUSE.
> What about a patch to handle a directory store passed through the
> ca_certs parameter? As it stands now, it's impossible to load the
> distribution-supplied cert store on openSUSE.
I'm afraid it would still be a new feature, unsuitable for a bugfix release. Other distros simply have both a directory-based cert store and a cert bundle. In Mageia I see both /etc/pki/tls/rootcerts/ (a directory-based cert store) and /etc/pki/tls/certs/ca-bundle.crt (a single file cert bundle). (yes, I hope they're synchronized :))
Generally, the only reason we would add a new feature in a bugfix release is if it's necessary to fix a security issue (such as the hash randomization feature). Here it's not necessary: you could simply ship a cert bundle in addition to the cert store. I suppose its generation is easily automated with a script.
(and, yes, the ssl module has long lacked important features; its history is a bit bumpy)
Again, for 3.3, a patch allowing urllib.request to call load_default_verify_locations() could be a good idea.
Something like this perhaps?
--- a/Lib/urllib/request.py Fri May 11 13:11:02 2012 -0400
+++ b/Lib/urllib/request.py Fri May 11 11:03:02 2012 -0700
@@ -135,16 +135,19 @@
_opener = None
def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
- *, cafile=None, capath=None):
+ *, cafile=None, capath=None, cadefault=True):
global _opener
if cafile or capath:
if not _have_ssl:
raise ValueError('SSL support not available')
context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
context.options |= ssl.OP_NO_SSLv2
- if cafile or capath:
+ if cafile or capath or cadefault:
context.verify_mode = ssl.CERT_REQUIRED
- context.load_verify_locations(cafile, capath)
+ if cafile or capath:
+ context.load_verify_locations(cafile, capath)
+ else:
+ context.load_default_verify_locations()
check_hostname = True
else:
check_hostname = False
> Something like this perhaps?
For example, yes. Now we need to find a way of testing this...
Ok, here's a patch with a test and documentation updates.
> Ok, here's a patch with a test and documentation updates.
Ok, thanks! The only change I would make is that cadefault needs to be
False by default; particularly because some platforms don't have a
OpenSSL-compatible default CA store (Windows comes to mind) and all
HTTPS requests would then start failing.
You don't have to upload a new patch; I can change it when committing.
Oh, by the way, could you sign and send a contributor agreement? See
(it is not a copyright assignment, just a formal licensing agreement)
New changeset f2ed5de1c568 by Antoine Pitrou in branch 'default':
Issue #14780: urllib.request.urlopen() now has a `cadefault` argument to use the default certificate store.
Patch committed. I propose to close the issue, unless further enhancements are suggested.
Ok, perfect. I submitted a copy of the agreement.
The documentation of the default value “cadefault=False” was fixed in Issue 17977. A later change seems to have made this paramter redundant. Anyway I think this can be closed. | http://bugs.python.org/issue14780 | CC-MAIN-2017-04 | refinedweb | 800 | 58.89 |
Scroll down to the script below, click on any sentence (including terminal blocks!) to jump to that spot in the video!
If you liked what you've learned so far, dive in!
video, code and script downloads.
The service container is the magician's hat of Drupal: it's full of useful objects, I mean "services" - we're trying to sound sophisticated. By default, the container is filled will cool stuff, like a logger factory, a translator, a white rabbit and a database connection, just to name a few.
Head over to the terminal and run a new Drupal Console command:
drupal container:debug
This prints every single service in the container: over 400 tools that we have access to out-of-the-box. The nickname of each service is on the left: you'll use that to get that service. On the right, it tells you what type of object this will be.
Most of the services here you won't need to use directly: some like
cron,
database,
file_system and a few other ones might be really useful to you.
Before we get into how to use these services, I want to add our
RoarGenerator to
the service container. This means that instead of instantiating it directly, we'll
teach Drupal's container how to instantiate the
RoarGenerator for us. Then we'll
ask the container for the "roar generator" and it will create it for us.
Why would you do this? Hold that thought: you'll see the benefits soon.
To register a service, go to the root of your module and create a
dino_roar.services.yml
file. Inside, start with a
services key:
In order for the container to create the
RoarGenerator for us, it needs to know
two things: what's your spirit animal and your favorite color.
Scratch that: the first thing it needs to know is what "nickname" to use for the
service. This can be anything, how about
dino_roar.roar_generator:
The only rule is that this needs to be unique in your project and use lowercase and
alphanumeric characters with exceptions for
_ and
..
The second thing the container needs to know is class for the service. To tell it,
add a
class key below the nickname and type
RoarGenerator:
I'll hit tab because I'm super lazy and PhpStorm isn't. It gives me the fully qualified namespace. Thanks Storm!
Go to the terminal and rebuild the cache:
drupal cache:rebuild
Now run
container:debug and pipe it into grep for "dino":
drupal container:debug | grep dino
There it is! It's nickname is
dino_roar.roar_generator and it'll give us a
RoarGenerator
object. Yay! Um, but what now? How can we get that service from the container? Actually,
I have no idea. I'm kidding - next chapter! | https://symfonycasts.com/screencast/drupal8-under-the-hood/configure-service | CC-MAIN-2020-16 | refinedweb | 468 | 73.47 |
$ cnpm install petit-dom
A minimalist virtual DOM library.
$ npm install --save petit-dom
or
$ yarn add petit-dom
If you're using Babel you can use JSX syntax by putting a
/* @jsx h */ at the top of the source file.
/* @jsx h */ import { h, render } from "petit-dom"; // assuming your HTML contains a node with "root" id const parentNode = document.getElementById("root"); // mount render(<h1>Hello world!</h1>, parentNode); // patch render(<h1>Hello again</h1>, parentNode);
You can also use raw
h function calls if you want, see examples folder for usage.
petit-dom also supports render functions
/* @jsx h */ import { h, render } from "petit-dom"; function Box(props) { return ( <div> <h1>{props.title}</h1> <p>{props.children}</p> </div> ); } render(<Box title="Fancy box">Put your content here</Box>, parentNode);
render functions behave like React pure components. Patching with the same arguments will not cause any re-rendering. You can also attach a
shouldUpdate function to the render function to customize the re-rendering behavior (By default props are tested for shallow equality).
Besides HTML/SVG tag names and render fucntions, the 'h' function also accepts any object with the following signature
{ mount(props, stateRef, env) => DomNode patch(newProps, oldProps, stateRef, domNode, env) => DomNode unmount(stateRef, domNode, env) }
Each of the 3 functions (they are not methods, i.e. no
this) will be called by the library at the moment suggested by its name:
mount is called when the library needs to create a new DOM Node to be inserted at some palce into the DOM tree.
patch is called when the library needs to update the previously created DOM with new props.
unmount is called after the DOM node has been removed from DOM tree.
props,
newProps and
oldProps all refer to the properties provided to the
h function (or via JSX). The children are stored in the
children property of the props object.
stateRef is an object provided to persist any needed data between different invocations. As mentioned, the 3 functions above are not to be treated as instance methods (no
this) but as ordinary functions. Any instance specific data must be stored in the
stateRef object.
domNode is obviously the DOM node to be mounted or patched.
env is used internally by the mount/patch process; This argument must be forwarded to all nested
mount,
patch and
unmount calls (see below example).
Custom components are pretty raw, but they are also flexible and allow implementing higher-level solution (for example, render functions are implemented on top of them).
The examples folder contains simple (and partial) implementations of React like components and hooks using the custom component API.
h(type, props, ...children)
Creates a virtual node.
type: a string (HTML or SVG tag name), or a custom component (see above)
props: in the case of HTML/SVG tags, this corresponds to the attributes/properties to be set in the real DOM node. In the case of components,
{ ...props, children } is passed to the appropriate component function (
mount or
patch).
render(vnode, parentDom)
renders a virtual node into the DOM. The function will initially create a DOM node as specified the virtual node
vnode. Subsequent calls will update the previous DOM node (or replace it if it's a different tag).
There are also lower level methods that are typically used when implementing custom components:
mount(vnode, env)
Creates a real DOM node as specified by
vnode. The
env argument is optional (e.g. when called from top level), but typically you'll have to forward something passed from upstream (e.g. when called inside a custom component).
patch(newVNode, oldVNode, domNode, env)
Updates (or eventually replaces)
domNode based on the difference between
newVNode and
oldVNode.
unmount(vnode, domNode, env)
This is called after
domNode has been retired from the DOM tree. This is typically needed by custom components to implement cleanup logic. | https://developer.aliyun.com/mirror/npm/package/petit-dom | CC-MAIN-2020-24 | refinedweb | 646 | 54.52 |
This is a
playground to test code. It runs a full
Node.js environment and already has all of
npm’s 400,000 packages pre-installed, including
chance-dotfile with all
npm packages installed. Try it out:
require()any package directly from npm
awaitany promise instead of using callbacks (example)
This service is provided by RunKit and is not affiliated with npm, Inc or the package authors.
A Chance.js mixin to generate a dotfile.
$ npm i chance-dotfile
$ yarn add chance-dotfile
import Chance from 'chance'; import dotfile from 'chance-dotfile'; const chance = new Chance(); chance.mixin({ dotfile }); chance.dotfile();
By default,
chance-dotfile will return a randomly generated a dotfile.
Example:
.random
MIT © Michael Novotny | https://npm.runkit.com/chance-dotfile | CC-MAIN-2020-34 | refinedweb | 117 | 51.24 |
I came across this Reselect.js error earlier today:
Error: Selector creators expect all input-selectors to be functions, instead received the following types: [function, undefined]
Addressing this problem is a nightmare if you don't know what to look for. But lucky for you, I've dealt with this before.
I admit I wrote the following code -- well, the names have been changed to protect the innocent. Yes, here it looks stupid and obvious. But, in the real code base, I assure you it's much less obvious (but still stupid)
This is what the basic pattern looks like:
import { createSelector } from 'reselect';import { State } from './state';import { getA } from './otherSelectors.ts';export const getB = (state: State) => state.b;export const getD = createSelector( getA, getB, (a, b) => a + b; );
import { createSelector } from 'reselect';import { getB } from './someSelectors.ts';export const getA = (state: State) => state.a;export const getC = createSelector( getA, getB, (a, b) => a - b; );
import { getD } from './someSelectors.ts'console.log(getD(state)); // Error
The problem is a circular dependency. someSelectors.ts imports from otherSelectors.ts and vice versa. When dependentCode.ts imports from the circle, one of the files will not be able to be resolved.
Whichever import gets trashed will cause an
undefined to show up instead of the expected function.
Breaking this kind of circular dependency is as easy as reorganizing your files.
For instance, could move
getA and
getC into
someSelectors.ts.
Sometimes, however, it makes more sense to move both
getA and
getB into a new file of their own and leave
getC and
getD where they are.
That should fix the problem for 99% of the cases.
When processing JavaScript, the interpreter will read all functions in a scope before executing any statements.
It's important to note that if you use the ES6 style of defining functions as lambdas
const f = (x: number) => x + 42;
as opposed to
function f(x: number) { return x + 42;}
then your "functions" don't get processed until those assignment statements are executed.
You can read more about this phenomenon called Function Hoisting.
This means that you could still have circular dependencies even if you put all the dependent selectors in the same file.
If that's the case, you need to refactor out the core dependency. | https://decembersoft.com/posts/error-selector-creators-expect-all-input-selectors-to-be-functions/ | CC-MAIN-2018-43 | refinedweb | 381 | 57.16 |
> I really think that we need to avoid trying to have a single 'known good' > flag/generationnrwith the inode.I don't think we should have anything in the inode. We don't want tobloat inode objects for this cornercase.> if you store generation numbers for individual apps (in posix attributes > to pick something that could be available across a variety of > filesystems), you push this policy decision into userspace (where itAgreed> 1. define a tag namespace associated with the file that is reserved for > this purpose for example "scanned-by-*"What controls somewhat writing such a tag on media remotely ? Locally youcan do this (although you are way too specialized in design - an LSM hookfor controlling tag setting or a general tag reservation sysfs interfaceis more flexible than thinking just about scanners.> 2. have an kernel option that will clear out this namespace whenever a > file is dirtiedThat will generate enormous amounts of load if not carefully handled.> 3. have a kernel mechanism to say "set this namespace tag if this other > namespace tag is set" (this allows a scanner to set a 'scanning' tag when > it starts and only set the 'blessed' tag if the file was not dirtied while User space problem. Set flags 'dirty', then set bit 'scanning'clear 'dirty' then clear 'scanning' when finished. If the dirty flag gotset while you were scanning it will still be set now you've cleared youscanning flag. Your access policy depends upon your level of paranoia (eg"dirty|scanning == BAD")> programs can set the "scanned-by-*" flags on that the 'libmalware' library We've already proved libmalware doesn't make sense> L. the fact that knfsd would not use this can be worked around by running > FUSE (which would do the checks) and then exporting the result via knfsdwNot if you want to get any work done.> what did I over complicate in this design? 
or is it the minimum feature > set needed?> > are any of the features I list impossible to implement?Go write it and see, provide benchmarks ? I don't see from this how youhandled shared mmap ? | http://lkml.org/lkml/2008/8/16/65 | CC-MAIN-2017-13 | refinedweb | 352 | 59.84 |
The official source of product insight from the Visual Studio Engineering Team
In Walkthrough-- Publishing a Custom Web Control (Part 1 of 2) you learned to create and publish a custom web control. You created the extensibility project manually, a procedure with many steps. Now that you have an extensibility project template, you can publish it to the Visual Studio gallery. Anyone who wants to create and publish a custom web control can download and install your template and create the project in one step.
In the second. In addition, you must have completed the first part of this walkthrough.
To create an extensibility web control project template, start with the custom web control project that you created in the first part of this walkthrough. Because this project includes the ColorTextControl web control, you can use this template to create custom web controls that render as colored text.
To publish a project template to the Visual Studio gallery, you must create the project template as a VSIX extension and provide it with an icon and a screenshot. The easiest way to do this is to use the Export Template as VSIX wizard.
1. Open the MyWebControls project in Visual Studio.
2. Use the Extension Manager to download the Export Template as VSIX wizard from the Visual Studio gallery. This adds the Export Template as VSIX … item to the File menu when a project is open.
3. Select the Export Template as VSIX menu item.
4. In the Choose Template Type page, make sure that Project Template is selected, and that the MyWebControls.csproj check box is checked. Click Next.
5. In the Select Template Options page, set the Template Name to Extensibility Color Text Web Toolbox Control and the Template Description to Color text web control project that produces a VSIX extension.
6. In the Icon Image text box, Browse to and select the Color.bmp file. You must set the file filter to All Files (*.*) to see this file.
7. In the Preview Image text box, Browse to and select the ScreenShot.bmp file. You must set the file filter to All Files (*.*) to see this file. Click Next.
8. In the Select VSIX Options page, change the Product Name to Extensibility Color Text Web Control Template.
9. Change the Company Name, and so forth, as desired.
10. Uncheck the Automatically import the template into Visual Studio check box, then click Finish.
In a moment, the Windows Explorer opens to display the Extensibility Color Text Web Control Template.vsix file in the <Users>\My Documents\Visual Studio 2010\My Exported Templates folder.
You are now ready to publish your project template Project or Item Template, then click Next.
6. In Step 2: Upload, click the Browse button and select the Extensibility Color Text Web Control Template.vsix file located in the My Exported Templates folder. Click Next.
7. In Step 3: Basic Information, the information you entered into the Export Template as VSIX wizard appears.
8. Set the Category to ASP.NET and the Tags to toolbox, web control, templates.
9. Read and agree to the Contribution Agreement, then type the text image into the text box.
10. Click Create Contribution, then click Publish.
11. Search the Visual Studio Gallery for extensibility color text web control template. The new template listing appears.
Now that your web control project template is published, install it in Visual Studio and test it there.
1. Return to Visual Studio.
2. From the Tools menu, select Extension Manager.
3. Click Online Gallery, then search for extensibility color text web control template. The Extensibility Color Text Web Control Template listing appears.
4. Click the Download button. After the extension downloads, click the Install button. Your project template is now installed in Visual Studio.
You no longer have to create a custom web control the manual way, as you did in the first part of this walkthrough. Instead, you can use the Extensibility Color Text Web Control project template. In this section, you use the template to create a BlueColorTextControl web control.
1. Select the File/New Project menu item, then click the Online Templates tag in the left pane.
2. Select the ASP.NET node, then click Extensibility Color Text Web Control Template in the middle pane.
3. Set the Name to MoreWebControls, then click OK.
4. Rename the ColorTextControl.cs file to be BlueColorTextControl.cs.
5. Open the BlueColorTextControl.cs file.
6. In the ToolboxData attribute, replace both occurrences of ColorTextControl with BlueColorTextControl.
These values specify the opening and closing tags generated for the control when it is dragged from the Toolbox into a web page at design time. They must match the name of the control class, which is also the name of the control that appears in the toolbox.
The start of the control class source code now looks like this:
namespace MoreWebControls
{
[DefaultProperty("Text")]
[ToolboxData("<{0}:BlueColorTextControl runat=server></{0}:BlueColorTextControl>")]
[ProvideToolboxControl("MoreWebControls", false)]
public class BlueColorTextControl : WebControl
{
7. In the get method, Change the color “green” to “blue”.
get
{
String s = (String)ViewState["Text"];
return "<span style='color:blue'>' + s + "</span>";
}
This surrounds the text with a span tag that colors it blue.
8. Build the MoreWebControls project.
Do not press F5 to launch an experimental instance of Visual Studio. Instead, launch it by following the steps below. You will learn more about adding a debug action to a project template in a later section.
1. Launch an experimental instance of Visual Studio explicitly by selecting the menu item Start/All Programs/Microsoft Visual Studio 2010 SDK/Tools/Start Experimental Instance of Microsoft Visual Studio 2010. For convenience, you can create a shortcut to this executable.
2. Create a new web application project.
3. Open default.aspx in Source mode.
4. Open the toolbox. You should see BlueColorTextControl in the category MoreWebControls.
5. Drag a BlueColorTextControl to the body of the web page.
6. Add a Text attribute with the value Think Blue! to the BlueColorTextControl tag. The resulting tag should look like this:
<cc1:BlueColorTextControl
7. Press F5 to launch the ASP.NET Development Server.
The BlueColorTextControl should render something like this:
8. Close the ASP.NET Development Server.
9. Close the experimental instance of Visual Studio.
To add a debug action to your project template, you must delete the current template and recreate it. To delete the project template and the project it generated, follow these steps.
1. Return to your web browser.
2. Click the My Contributions link in the upper left-hand corner. The Extensibility Color Text Web Control Template listing appears.
3. Click Delete to permanently remove your project template from the Visual Studio Gallery.
4. Close the browser and return to Visual Studio.
5. From the Tools menu, select Extension Manager.
6. Select Extensibility Color Text Web Control Template, then click Uninstall.
7. Close the Extension Manager.
8. Close the MoreWebControls solution.
9. Close Visual Studio.
10. Delete the MoreWebControls project folder.
11. Start Visual Studio to complete the uninstall process.
If you press F5 while the MoreWebControls solution is open, you will see the error message “A project with an Output Type of Class Library cannot be started directly”. Your project template does not set a debug action because the location of the experimental instance of Visual Studio is unknown until the project template is installed on the target machine.
Microsoft extensibility project templates include an invisible wizard that runs during project template installation. This wizard determines the location of the experimental instance of Visual Studio and sets the debug action accordingly. You can create your own wizard to do the same. You only need to create this wizard once. You can use the same wizard with every extensibility project template you create.
The extensibility project template wizard must have a public implementation of Microsoft.VisualStudio.TemplateWizard.IWizard, and must be signed with a strong assembly name.
1. Create a new Visual C#/Windows/Class Library project named MyWizard.
2. Rename the file class1.cs to be MyWizard.cs.
3. Replace the MyWizard.cs file content with the following code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.VisualStudio.TemplateWizard;
using System.Globalization;
using EnvDTE;
namespace MyWizard
{
public class MyWizard : IWizard
{
public void BeforeOpeningFile(ProjectItem projectItem)
{
}
public void ProjectFinishedGenerating(Project project)
{
foreach (Configuration config in project.ConfigurationManager)
{
//Set up the debug options to run "devenv /rootsuffix Exp";
config.Properties.Item("StartAction").Value = 1;
//Get the full path to devenv.exe through DTE.FullName
config.Properties.Item("StartProgram").Value =
project.DTE.FullName;
config.Properties.Item("StartArguments").Value =
"/rootsuffix Exp";
}
}
public void ProjectItemFinishedGenerating(ProjectItem projectItem)
{
}
public void RunFinished()
{
}
public void RunStarted(object automationObject,
Dictionary<string, string> replacementsDictionary,
WizardRunKind runKind, object[] customParams)
{
}
public bool ShouldAddProjectItem(string filePath)
{
return true;
}
}
}
4. Add the following references to the project. If there is more than one choice, choose the reference that has a path to Visual Studio 2010:
5. In the Signing tab of the project properties dialog box, check the Sign the assembly check box.
6. In the Choose a strong name key file dropdown list, select <New…>. The Create Strong Name Key dialog box appears.
7. Set the Key file name to key.snk and uncheck the Protect my key file with a password check box.
8. Click OK. The key.snk file is added to the project.
9. Build the MyWizard project as a Release build. Your wizard is now ready to use.
10. Close the MyWizard solution.
To incorporate the wizard into the project template VSIX extension, you must backtrack and set the path to your wizard in the Export Template as VSIX wizard Wizard text box.
Follow the steps beginning with the section Creating and Publishing an Extensibility Web Control Template, with these exceptions:
When you reach the section Testing the new web control, you can now launch the experimental instance of Visual Studio by pressing F5.
Congratulations! You have come full circle, from creating a web control project from scratch, to creating an extensibility web control project template.
Note that in this walkthrough, you use the Export Template as VSIX wizard to simplify the task of creating and publishing a project template. If you need more control of the project template, for example, to choose the icon that appears in the New Project dialog box, you must explicitly create the project template and wrap it in a VSIX extension. For more information, see Aaron Marten’s Creating and Sharing Project and Item Templates.
Short Bio: Martin Tracy is a senior programmer writer for the Visual Studio Project and Build team. As part of VS 2010, he has documented numerous features of the MSBuild system and Visual Studio platform. His long term focus is on developing infrastructure to build.
Hi Martin,
This is what I am looking for quite a while :)
Could you please tell me, whether this will work with WPF as well ?
Thank you,
Karthik
It should, but you can also use the Extensibility WPF Toolbox Control template provided by the VS SDK to get started.
Thanks Martin, I will check out that.
Hello Martin,
Could you please tell me how to strong name a project created from the wizard using the DTE.
I have a file xyz.snk and i want all the projects created from the wizard to be signed by that file only.
Thanks..!! | http://blogs.msdn.com/b/visualstudio/archive/2010/06/01/walkthrough-publishing-an-extensibility-web-control-project-template-part-2-of-2.aspx | CC-MAIN-2014-15 | refinedweb | 1,891 | 58.89 |
Deep learning model to predict mRNA Degradation
This article was published as a part of the Data Science Blogathon
Designing a deep learning model that will predict degradation rates at each base of an RNA molecule using the Eterna dataset comprising over 3000 RNA molecules.
Introduction
Project Objectives
In this project, we are going to explore our dataset and then preprocess the sequence, structure, and predicted loop type features so that they can be used to train our deep learning GRU model. Finally, we will predict degradation values on the public and private test datasets.
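To preview where we are heading, the kind of GRU architecture used for this task can be sketched in a few lines of Keras. This is an illustrative sketch, not the final tuned model: the embedding size, hidden units, and the 14-token vocabulary (4 bases + 3 structure characters + 7 loop types) are assumptions you can adjust.

```python
import tensorflow as tf
import tensorflow.keras.layers as L

def build_gru_model(seq_len=107, pred_len=68, embed_dim=100, hidden_dim=128):
    # Three integer-encoded tracks per base: sequence, structure, loop type
    inputs = L.Input(shape=(seq_len, 3))
    # Embed each token, then concatenate the three tracks per position
    embed = L.Embedding(input_dim=14, output_dim=embed_dim)(inputs)
    reshaped = L.Reshape((seq_len, 3 * embed_dim))(embed)
    # Bidirectional GRUs read the RNA sequence in both directions
    hidden = L.Bidirectional(L.GRU(hidden_dim, return_sequences=True))(reshaped)
    hidden = L.Bidirectional(L.GRU(hidden_dim, return_sequences=True))(hidden)
    # Only the first pred_len bases are scored; one output per target column
    outputs = L.Dense(5, activation='linear')(hidden[:, :pred_len])
    return tf.keras.Model(inputs=inputs, outputs=outputs)

model = build_gru_model()
print(model.output_shape)  # (None, 68, 5)
```

The output shape matches the five target columns over the 68 scored positions, which is exactly the shape the MCRMSE metric operates on.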
Importing the Libraries necessary to Predict mRNA Degradation
We will be using TensorFlow as our main library to build and train our model and JSON/Pandas to ingest the data. For visualization, we are going to use Plotly and for data manipulation Numpy.
# Dataframe import json import pandas as pd import numpy as np # Visualization import plotly.express as px # Deeplearning import tensorflow.keras.layers as L import tensorflow as tf # Sklearn from sklearn.model_selection import train_test_split #Setting seeds tf.random.set_seed(2021) np.random.seed(2021)
Training Parameters
- Target columns: reactivity, deg_Mg_pH10, deg_Mg_50C, deg_pH10, deg_50C
- Model_Train: True if you want to Train a model which takes 1 hour to train.
# This will tell us the columns we are predicting target_cols = ['reactivity', 'deg_Mg_pH10', 'deg_Mg_50C', 'deg_pH10', 'deg_50C'] Model_Train = True # True if you want to Train model which take 1 hour to train.
Metric to Evaluate the mRNA Degradation Prediction
Our model performance metric is MCRMSE (Mean column-wise root mean squared error), which takes root mean square error of ground truth of all target columns.
where is the number of scored ground truth target columns, and are the actual and predicted values, respectively?
def MCRMSE(y_true, y_pred):## Monte Carlo root mean squared errors colwise_mse = tf.reduce_mean(tf.square(y_true - y_pred), axis=1) return tf.reduce_mean(tf.sqrt(colwise_mse), axis=1)
The mRNA degradation data is available on Kaggle.
Detailed Explanation of the Data
- id – Unique identifier sample.
- seq_scored – This should match the length of reactivity, deg_*, and error columns.
- seq_length – The length of the sequence.
- sequence – Describes the RNA sequence, a combination of A, G, U, and C for each sample.
- structure – An array of (, ), and. characters donate to the base is to be paired or unpaired.
- reactivity – These numbers are reactivity values for the first 68 bases used to determine the likely secondary structure of the RNA sample.
- deg_pH10 – The likelihood of degradation after incubating without magnesium on pH10 at base or linkage.
- deg_Mg_pH10 – The likelihood of degradation after incubating with magnesium on pH 10 at base or linkage.
- deg_50C – The likelihood of degradation after incubating without magnesium at 50 degrees Celsius at base or linkage.
- deg_Mg_50C – The likelihood of degradation after incubating with magnesium at 50 degrees Celsius at base or linkage.
- *_error_* – calculated errors in experimental values obtained in reactivity, and deg_* columns.
- predicted_loop_type – Loop types assigned by bpRNA from Vienna which suggests:
- S: paired Stem
- M: Multiloop
- I: Internal loop
- B: Bulge
- H: Hairpin loop
- E: dangling End
- X: external loop
Observing Data
data_dir = "stanford-covid-vaccine/" train = pd.read_json(data_dir + "train.json", lines=True) test = pd.read_json(data_dir + "test.json", lines=True) sample_df = pd.read_csv(data_dir + "sample_submission.csv")
we have sequence, structure, and predicted loop types that are in text formats. We will be converting them into numerical tokens so that they can be used to train deep learning models. Then we have arrays within columns from reactivity_error to deg_50C that we will be using as targets.
train.head(2)
print('Train shapes: ', train.shape) print('Test shapes: ', test.shape)
Train shapes: (2400, 19) Test shapes: (3634, 7)
Signal to noise distribution in mRNA Degradation Prediction Data
We can see the signal-to-noise distribution is between 0 to 15 and the majority of samples lie between 0-6. We have also negative values that we need to get rid of.
fig = px.histogram( train, "signal_to_noise", nbins=25, title='signal_to_noise distribution', width=800, height=400 ) fig.show()
Removing negative values
train = train.query("signal_to_noise >= 1")
Sequence Test length
After looking at sequence length distribution we know that we have two distinctive sequence lengths, one at 107 and another at 130.
fig = px.histogram( test, "seq_length", nbins=25, title='sequence_length distribution', width=800, height=400 ) fig.show()
.png)
Splitting Test into Public and Private DataFrame
Let’s split our test dataset based on sequence length. Doing this will improve the overall performance of our GRU model.
public_df = test.query("seq_length == 107")
private_df = test.query("seq_length == 130")
Pre-processing Data
token2int = {x: i for i, x in enumerate("().ACGUBEHIMSX")}
token2int
{'(': 0, ')': 1, '.': 2, 'A': 3, 'C': 4, 'G': 5, 'U': 6, 'B': 7, 'E': 8, 'H': 9, 'I': 10, 'M': 11, 'S': 12, 'X': 13}
Converting DataFrame to 3D Array
The function below takes a Pandas data frame and converts it into a 3D NumPy array. We will be using it to convert both training features and targets.
def dataframe_to_array(df): return np.transpose(np.array(df.values.tolist()), (0, 2, 1))
Tokenization of Sequence
The function below uses a string to integer dictionary that we had created early to convert training features into arrays containing integers. Then we will be using dataframe_to_array to convert our dataset into a 3D NumPy array.
def dataframe_label_encoding( df, token2int, cols=["sequence", "structure", "predicted_loop_type"] ): return dataframe_to_array( df[cols].applymap(lambda seq: [token2int[x] for x in seq]) ) ## tokenization of Sequence, Structure, Predicted loop
Preprocessing Features and Labels
- Using label endorsing function on our training features.
- Converting target data frame into a 3D array.
train_inputs = dataframe_label_encoding(train, token2int) ## Label encoding train_labels = dataframe_to_array(train[target_cols]) ## dataframe to 3D array to
Train & Validation split
Splitting our training data into train and validation sets. We are using signal to noise filter to equally distribute our dataset.
x_train, x_val, y_train, y_val = train_test_split( train_inputs, train_labels, test_size=0.1, random_state=34, stratify=train.SN_filter )
Preprocessing Public and Private Dataframe
Earlier we have split our test dataset into public and private based on sequence length now we are going to use dataframe_label_encoding to tokenized and reshape it into NumPy array as we have done the same with the training dataset.
public_inputs = dataframe_label_encoding(public_df, token2int) private_inputs = dataframe_label_encoding(private_df, token2int)
Training / Evaluating the mRNA Degradation Prediction Model
Build Model
Before jumping directly into the deep learning model, we have tested other gradient boosts such as Light GBM and CatBoost. Then as we were dealing with the sequence, I thought to experiment around BiLSTM model, but they all performed worst compared to the triple GRU model with linear activation.
This model is influenced by xhlulu initial models. I amazed me how simple the GRU layer can produce the best results possible without using data augmentation or feature engineering.
To learn more about RNNs, LSTM and GRU, please see this blog post.
def build_model( embed_size, # Length of unique tokens seq_len=107, # public dataset seq_len pred_len=68, # pred_len for public data dropout=0.5, # trying best dropout (general) sp_dropout=0.2, # Spatial Dropout embed_dim=200, # embedding dimension hidden_dim=256, # hidden layer units ): inputs = L.Input(shape=(seq_len, 3)) embed = L.Embedding(input_dim=embed_size, output_dim=embed_dim)(inputs) reshaped = tf.reshape( embed, shape=(-1, embed.shape[1], embed.shape[2] * embed.shape[3]) ) hidden = L.SpatialDropout1D(sp_dropout)(reshaped) # 3X BiGRU layers hidden = L.Bidirectional( L.GRU( hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer="orthogonal", ) )(hidden) hidden = L.Bidirectional( L.GRU( hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer="orthogonal", ) )(hidden) hidden = L.Bidirectional( L.GRU( hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer="orthogonal", ) )(hidden) # Since we are only making predictions on the first part of each sequence, # we have to truncate it truncated = hidden[:, :pred_len] out = L.Dense(5, activation="linear")(truncated) model = tf.keras.Model(inputs=inputs, outputs=out) model.compile(optimizer="Adam", loss=MCRMSE) # loss function as of Eval Metric return model
Building model to predict mRNA Degradation Prediction
building our model by adding embed size (14) and we are going to use default values for other parameters.
- sequence length: 107
- prediction length: 68
- dropout: 0.5
- spatial dropout: 0.2
- embedded dimensions: 200
- hidden layers dimensions: 256
model = build_model( embed_size=len(token2int) ## embed_size = 14 ) ## uniquie token in sequence, structure, predicted_loop_type model.summary()
Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 107, 3)] 0 _________________________________________________________________ embedding (Embedding) (None, 107, 3, 200) 2800 _________________________________________________________________ tf.reshape (TFOpLambda) (None, 107, 600) 0 _________________________________________________________________ spatial_dropout1d (SpatialDr (None, 107, 600) 0 _________________________________________________________________ bidirectional (Bidirectional (None, 107, 512) 1317888 _________________________________________________________________ bidirectional_1 (Bidirection (None, 107, 512) 1182720 _________________________________________________________________ bidirectional_2 (Bidirection (None, 107, 512) 1182720 _________________________________________________________________ tf.__operators__.getitem (Sl (None, 68, 512) 0 _________________________________________________________________ dense (Dense) (None, 68, 5) 2565 ================================================================= Total params: 3,688,693 Trainable params: 3,688,693 Non-trainable params: 0 _________________________________________________________________
Training the mRNA Degradation Prediction Model
We are going to train our model for 40 epochs and save the model checkpoint in the Model folder. We have experimented with batch sizes from 16, 32, 64 and by far 64 batch sizes produced better results and fast convergence.
As we can observe both training and validation loss (MCRMSE) is reducing with every iteration until 20 epochs and from there they start to diverge. For the next experimentation, we will be keeping the number of epochs limited to twenty to get fast and better results.
if Model_Train: history = model.fit( x_train, y_train, validation_data=(x_val, y_val), batch_size=64, epochs=40, verbose=2, callbacks=[ tf.keras.callbacks.ReduceLROnPlateau(patience=5), tf.keras.callbacks.ModelCheckpoint("Model/model.h5"), ], )
Epoch 1/40 30/30 - 69s - loss: 0.4536 - val_loss: 0.3796 Epoch 2/40 30/30 - 57s - loss: 0.3856 - val_loss: 0.3601 Epoch 3/40 30/30 - 57s - loss: 0.3637 - val_loss: 0.3410 Epoch 4/40 30/30 - 57s - loss: 0.3488 - val_loss: 0.3255 Epoch 5/40 30/30 - 57s - loss: 0.3357 - val_loss: 0.3188 Epoch 6/40 30/30 - 57s - loss: 0.3295 - val_loss: 0.3163 Epoch 7/40 30/30 - 57s - loss: 0.3200 - val_loss: 0.3098 Epoch 8/40 30/30 - 57s - loss: 0.3117 - val_loss: 0.2997 Epoch 9/40 30/30 - 57s - loss: 0.3046 - val_loss: 0.2899 Epoch 10/40 30/30 - 57s - loss: 0.2993 - val_loss: 0.2875 Epoch 11/40 30/30 - 57s - loss: 0.2919 - val_loss: 0.2786 Epoch 12/40 30/30 - 57s - loss: 0.2830 - val_loss: 0.2711 Epoch 13/40 30/30 - 57s - loss: 0.2777 - val_loss: 0.2710 Epoch 14/40 30/30 - 57s - loss: 0.2712 - val_loss: 0.2584 Epoch 15/40 30/30 - 57s - loss: 0.2640 - val_loss: 0.2580 Epoch 16/40 30/30 - 57s - loss: 0.2592 - val_loss: 0.2518 Epoch 17/40 30/30 - 57s - loss: 0.2540 - val_loss: 0.2512 Epoch 18/40 30/30 - 57s - loss: 0.2514 - val_loss: 0.2461 Epoch 19/40 30/30 - 57s - loss: 0.2485 - val_loss: 0.2492 Epoch 20/40 30/30 - 57s - loss: 0.2453 - val_loss: 0.2434 Epoch 21/40 30/30 - 57s - loss: 0.2424 - val_loss: 0.2411 Epoch 22/40 30/30 - 57s - loss: 0.2397 - val_loss: 0.2391 Epoch 23/40 30/30 - 57s - loss: 0.2380 - val_loss: 0.2412 Epoch 24/40 30/30 - 57s - loss: 0.2357 - val_loss: 0.2432 Epoch 25/40 30/30 - 57s - loss: 0.2330 - val_loss: 0.2384 Epoch 26/40 30/30 - 57s - loss: 0.2316 - val_loss: 0.2364 Epoch 27/40 30/30 - 57s - loss: 0.2306 - val_loss: 0.2397 Epoch 28/40 30/30 - 57s - loss: 0.2282 - val_loss: 0.2343 Epoch 29/40 30/30 - 57s - loss: 0.2242 - val_loss: 0.2392 Epoch 30/40 30/30 - 57s - loss: 0.2232 - val_loss: 0.2326 Epoch 31/40 30/30 - 57s - loss: 0.2207 - val_loss: 0.2318 Epoch 32/40 30/30 - 57s - loss: 0.2192 - val_loss: 0.2339 Epoch 33/40 30/30 - 57s - loss: 0.2175 - val_loss: 0.2287 Epoch 34/40 30/30 - 57s - loss: 0.2160 - val_loss: 0.2310 Epoch 35/40 30/30 - 57s - loss: 
0.2137 - val_loss: 0.2299 Epoch 36/40 30/30 - 57s - loss: 0.2119 - val_loss: 0.2288 Epoch 37/40 30/30 - 57s - loss: 0.2101 - val_loss: 0.2271 Epoch 38/40 30/30 - 57s - loss: 0.2088 - val_loss: 0.2274 Epoch 39/40 30/30 - 57s - loss: 0.2082 - val_loss: 0.2265 Epoch 40/40 30/30 - 57s - loss: 0.2064 - val_loss: 0.2276
Evaluate training history
Both validation and training loss were reduced until 20 epochs. The validation loss became flat after 35 so in my opinion, we should test results on both 20 and 35 epochs.
if Model_Train: fig = px.line( history.history, y=["loss", "val_loss"], labels={"index": "epoch", "value": "MCRMSE"}, title="History", ) fig.show()
.png)
The test dataset was divided into public and private sets that have different sequence lengths, so in order to predict degradation on different lengths, we need to build 2 different models and load our saved checkpoints. This is possible because RNN models can accept sequences of varying lengths as inputs. Artificial Intelligence()
model_public = build_model(seq_len=107, pred_len=107, embed_size=len(token2int)) model_private = build_model(seq_len=130, pred_len=130, embed_size=len(token2int)) model_public.load_weights("Model/model.h5") model_private.load_weights("Model/model.h5")
Prediction
We have successfully predicted for both public and private data sets. In the next step, we will be combining them using test id.
public_preds = model_public.predict(public_inputs) private_preds = model_private.predict(private_inputs)
private_preds.shape
(3005, 130, 5)
Post-processing and submit
- combining both private (df,prediction ) and public(df,prediction) data frame.
- Adding series of integers in front of id based on a sequence of single prediction for example
[id_00073f8be_0,id_00073f8be_1,id_00073f8be_2 ..]
- Concatenating all of the data into Pandas Dataframe and preparing for submission.
preds_ls = [] for df, preds in [(public_df, public_preds), (private_df, private_preds)]: for i, uid in enumerate(df.id): single_pred = preds[i] single_df = pd.DataFrame(single_pred, columns=target_cols) single_df["id_seqpos"] = [f"{uid}_{x}" for x in range(single_df.shape[0])] preds_ls.append(single_df) preds_df = pd.concat(preds_ls) preds_df.head()
reactivity deg_Mg_pH10 deg_Mg_50C deg_pH10 deg_50C id_seqpos 0 0.685760 0.703746 0.585288 1.857178 0.808561 id_00073f8be_0 1 2.158555 3.243329 3.443042 4.394709 3.012130 id_00073f8be_1 2 1.432280 0.674404 0.672512 0.662341 0.718279 id_00073f8be_2 3 1.296234 1.306208 1.898748 1.324560 1.827133 id_00073f8be_3 4 0.851104 0.670810 0.971952 0.573919 0.962205 id_00073f8be_4
Submission
Merging sample data frame with predicted on
id_seqpos to avoid repetition and make sure it follows submission format. Finally, save our data frame into .csv file.
submission = sample_df[["id_seqpos"]].merge(preds_df, on=["id_seqpos"]) submission.to_csv("Submission/submission.csv", index=False)
submission.head()
id_seqpos reactivity deg_Mg_pH10 deg_Mg_50C deg_pH10 deg_50C 0 id_00073f8be_0 0.685760 0.703746 0.585288 1.857178 0.808561 1 id_00073f8be_1 2.158555 3.243329 3.443042 4.394709 3.012130 2 id_00073f8be_2 1.432280 0.674404 0.672512 0.662341 0.718279 3 id_00073f8be_3 1.296234 1.306208 1.898748 1.324560 1.827133 4 id_00073f8be_4 0.851104 0.670810 0.971952 0.573919 0.962205
Private Score
After submission into OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction | Kaggle we got a private score of 0.2723 which is quite better. This result was produced on 50 epochs as I was trying different parameters to improve my score.
.png)
Conclusion
This was a unique experience for me as I was dealing with JSON files with multiple arrays within single samples. After figuring out how to use that data the challenge became quite simple and the Kaggle community have a bigger part in helping me solve this problem. This article was purely model-based and apart from it I have explored the dataset and used data analysis to make sense of some of the common patterns. I wanted to include my experiments with other gradient boosting and LSTM models, but I wanted to present the best possible model.
We have used JSON files and converted them into tokenized 3D Numpy arrays and then used the 3X GRU model. Finally, we have used saved weights to create distinct models for varying lengths of the RNA sequence. I will suggest you use my code and play around to check whether you can beat my score on the leader board.
Source Code
You can find the source code on dagshub and Deepnote Project.
Additional Data
How RNA vaccines work, and their issues:
Launch of the OpenVaccine challenge:
The impossibility of mass immunization:
Eterna, the crowdsourcing platform for RNA design:
References
- Image 1 –
- Image 2 -
- Data –
Author
Abid Ali Awan
I am a certified data scientist professional, who loves building machine learning models and research on the latest AI technologies. I am currently testing AI Products at PEC-PITC, which later get approved for human trials for example Breast Cancer Classifier.
You can follow me on LinkedIn, Twitter, and Polywork where I post my article on weekly basis.
The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
Leave a Reply Your email address will not be published. Required fields are marked * | https://www.analyticsvidhya.com/blog/2021/09/deep-learning-model-to-predict-mrna-degradation/ | CC-MAIN-2022-27 | refinedweb | 2,828 | 60.41 |
BugZappers/Meetings/Minutes-2009-Mar-17
From FedoraProject
< BugZappers | Meetings
Bug Triage Meeting :: 2009-Mar-17
Attendees
- tk009
- jds2001
- iarlyy
- beland
- poelcat (meeting lead)
- adamw
- SMParrish
- comphappy
Previous Meeting and Mailing list Follow-up
- Introduction Email SOP - After discussion, it was felt there might be some confusion with the use of "'fedorabugs' group" in the language. A second draft will be created with changes to the language removing "'fedorabugs' group" and any confusion it might cause new members. A new draft will be submitted for review on the mailing list once the status of JDS2001's proposed FAS patch is known.
- Wiki Front Page - The front page draft submitted by adamw was reviewed and approved for immediate implementation. Minor adjustments to the page will be made when required to align with supporting pages.
- Wiki Components_and_Triagers page - here: - This page was also reviewed and approved, with the provision that when the metrics become operational, statistical information will be removed from the page.
- Meeting Day and Time - There were insufficient numbers at the meeting to make a decision on changing the meeting schedule. It was decided to move the discussion to the list to finalize the change (if any) to be made. If you have not done so this is your last chance to add your preference to.
- Bug Work Flow png - No resolution, awaiting further information from John5342.
Added Agenda Item
- Triage Metrics - The metrics are awaiting bug fixes to other required packages, and not yet operational. The up-to-date Metrics code will be pushed to the triage git repository later today.
Follow-up Actions
- FAS Patch - JDS2001 will submit a patch to infrastructure, so membership in the BugZappers automatically adds 'fedorabugs' group permissions.
- Introduction Email SOP Draft - TK009 will create a second draft incorporating the changes specified in the meeting.
- Supporting wiki pages - beland and adamw will work on consolidating, revising and organizing the supporting pages within the BugZappers namespace.
- Bug Work Flow png - Need information from John5342 on the status.
- Triage Metrics Code - Comphappy will upload the up-to-date metrics code to the triage git repository.
IRC Transcript
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt! | http://fedoraproject.org/wiki/BugZappers/Meetings/Minutes-2009-Mar-17 | CC-MAIN-2014-52 | refinedweb | 366 | 52.9 |
4. Which of the following are assumptions of the capital asset pricing model? (Points : 3) Funds can be borrowed or lent in unlimited quantities at risk-free rate. The objective of all investors is to maximize their expected utility over the same one-period timeframe using the same basis for evaluating investments. There are no taxes or transaction costs associated with any investment. All of the above are correct assumptions. 5. (TCO 7) A good way to minimize risk and receive an optimum return on your portfolio is: (Points : 3) through diversification. to buy only risk-free securities. through blue-chip stock purchases only. through junk-bonds. 6. (TCO 7) Assume a portfolio has the possibility of returning 7%, 8%, 10%, or 12%, with the likelihood of 20%, 30%, 25%, and 25%, respectively. The expected value of the portfolio is: (Points : 3) 10.0%. 9.0%. 9.3%. 9.25%. 7. (TCO 7) If the market rate of return is 10% and the beta on a particular stock is 1.00, the return on the stock will be: (Points : 3) greater than 10%. 10%. less than 10%. dependent on some other factor. 8. (TCO 7) For two investments with a correlation coefficient (rij) less than +1, the portfolio standard deviation will be __________ the weighted average of the individual investments' standard deviation. (Points : 3) more than less than equal to zero compared to 9. (TCO 7) The capital asset pricing model (CAPM) takes off where the _________ concluded. (Points : 3) security market line capital market line efficient frontier and Markowitz portfolio theory arbitrage pricing theory 10. (TCO 7) Using the formula for the security market line (Formula 21-7), if the risk-free rate (RF) is 6%, the market rate of return (KM) is 12%, and the beta (bi) is 1.2, compute the anticipated return for stock i (Ki). 
(Points : 3) 20.4% 16.33% 13.64% 13.2% | http://www.chegg.com/homework-help/questions-and-answers/4-following-assumptions-capital-asset-pricing-model-points-3-funds-borrowed-lent-unlimited-q3732019 | CC-MAIN-2015-06 | refinedweb | 320 | 56.86 |
I have tested with OCR ABBY using sample code C# with sucess , but with the command :
ConsoleTest.exe --asMRZ C:\image.jpg C:\ (the result is file image.xml)
I want to integrate in my code C#, with function ProcessMRZ.
I do these steps:
1)I added two references: AbbyyOnlineSdk and NDesk.Options 2) I add namespace: using Abbyy.CloudOcrSdk;
3)I declare RESTClient: private Rest serviceclient RESTClient;
4)I called to the Test () function, with my infos details: name of my application and password
5)Now I want to apply asMRZ: Process Mrz, I call restClient.Process Mrz (@ "C: \ temp \ image.jpg"); Unfortunately, I have not the result ... I always ERROR!
Is Step 5) to call MRZ Process correct ? or how I should change it?
I also tried with: Process Mrz (@ "C: \ temp \ image.jpg" @ "C: \ temp \"); I have always not the result !!
Thanks | http://forum.ocrsdk.com/thread/5827-still-alwayas-problem-integrate-abbyy-code-in-my-applciatin-c/ | CC-MAIN-2017-30 | refinedweb | 145 | 70.39 |
I want to make a ball launch from a catapult mechanism when i press the space bar. How would i make it so that it has to be touching an object on the character so that it would move. Also how would I make it that it goes in the direction that the character is facing. This is my code so far. I am using C# using UnityEngine; using System.Collections;
public class shooting : MonoBehaviour {
Animator anim;
// Use this for initialization
void Start () {
anim = GetComponent<Animator> ();
}
// Update is called once per frame
void Update () {
if (Input.GetKey (KeyCode.Space))
rigidbody.AddForce (Vector3.right * 100);
if (Input.GetKey (KeyCode.Space))
rigidbody.AddForce (Vector3.up * 70);
}
}
Answer by hav_ngs_ru
·
Jun 28, 2014 at 10:08 AM
to move the ball at the direction the character facing use transform.forward vector instead of Vector3.forward.
Another way is to use rigidbody.AddForceRelative(Vector3.forward * 100f) - it will cause the same result (applies force in forward direction of the character).
Also its better to use just one AddForce / AddForceRelative call:
rigidbody.AddForceRelative(Vector3.forward 100f + Vector3.up 70f)
Distribute terrain in zones
2
Answers
Multiple Cars not working
1
Answer
camera script, allow free movement of the camera up to a certain distance.
1
Answer
Score Manager Help
0
Answers | http://answers.unity3d.com/questions/736340/how-would-i-make-addforce-only-apply-when-a-ball-i.html | CC-MAIN-2016-26 | refinedweb | 216 | 58.18 |
Produce the highest quality screenshots with the least amount of effort! Use Window Clippings.
ATL includes a lightweight regular expression implementation. Although originally part of Visual C++, it is now included with the ATL Server download.
The CAtlRegExp class template implements the parser and matching engine. Its single template argument specifies the character traits such as CAtlRECharTraitsW for Unicode and CAtlRECharTraitsA for ANSI. The template argument also has a default argument based on whether or not _UNICODE is defined.
The CAtlREMatchContext class template provides an array of match groups for a successful match of a regular expression. It has the same default template argument as CAtlRegExp.
In the example below, CAtlRegExp’s Parse method is used to convert the regular expression into an instruction stream that is then used by the Match method to efficiently match the input string. Each match group is defined by a start and end pointer, defining the range of characters so that a copy does not have to be made if it is not needed.
The regular expression grammar is defined at the top of the atlrx.h header file.
#include <atlrx.h>#include <atlstr.h>
#define ASSERT ATLASSERT
int main(){ CAtlRegExp<> regex; const REParseError status = regex.Parse(L"^/blog/{\\d\\d\\d\\d}/{\\d\\d?}/{\\d\\d?}/{\\a+}$"); ASSERT(REPARSE_ERROR_OK == status);
CAtlREMatchContext<> match;
if (regex.Match(L"/blog/2008/7/16/SomePost", &match)) { ASSERT(4 == match.m_uNumGroups);
PCWSTR start = 0; PCWSTR end = 0;
match.GetMatch(0, &start, &end); const UINT year = _wtoi(start);
match.GetMatch(1, &start, &end); const UINT month = _wtoi(start);
match.GetMatch(2, &start, &end); const UINT day = _wtoi(start);
match.GetMatch(3, &start, &end); const CString name(start, end - start);
wprintf(L"Year: %d\n", year); wprintf(L"Month: %d\n", month); wprintf(L"Day: %d\n", day); wprintf(L"Name: %s\n", name); }}
If you’re looking for one of my previous articles here is a complete list of them for you to browse through.
Why not install Feature Pack for Visual C++ 2008 and be done with it?
It does ship with a TR1 implementation which means basic_regex is available.
Tanveer Badar: The TR1 implementation is fantastic. I don’t however use exceptions in most of my native code projects and that is why I prefer ATL classes over STL classes. ATL doesn’t force you to adopt exceptions whereas the STL does.
Arguably, for new projects, using the C++ TR1 regex implementation, which will be part of C++0x, is more future-proof.
Arno
Arno Schoedl: absolutely. As I said, provided you’re using exceptions as part of the error handling strategy for your project it makes perfectly good sense to use the Standard C++ Library and the TR1 additions. A lot of developers however still use C++ without exceptions and for them a lightweight ATL alternative comes in handy.
I have to use ATL regexps for one of our work projects and the regexp syntactic differences are often a major annoyance. It's not too hard to get used to the syntax, but when you also do regexps in .NET (which uses a more universal style) it gets a little weird at times. So unless you really have to, I'd say stay away from ATL regexps.
Nish: have you compared the .NET and TR1 grammars?
Kenny : Nope, not yet. Why? Don't tell me they are quite unlike each other too!!!
Nish: Like night and day. Night and day. :) Seriously I haven’t played much with the TR1 impl so I can’t really judge yet. The only regex I’ve used extensively is the .NET one. I just like the ATL impl when I’m in a tight corner and can’t or don’t want to have any baggage just to parse a string in some interesting way. I have heard from Stephan that the TR1 impl is really kick ass though. | http://weblogs.asp.net/kennykerr/archive/2008/07/18/visual-c-in-short-regular-expressions.aspx | crawl-002 | refinedweb | 649 | 66.44 |
Delete-cmd
Delete the specified object.
This command works primarily on datasets and variables.
delete [option] <name>
OR
del [option] <name>
Syntax: del <name>
Script Examples:
del Col(B); // Deletes Column B of the active worksheet
range bb = 2;
del bb; // Same as 'del Col(2)'
range bc = 2[1:2];
del bc; // Deletes rows 1-2 of Column 2; shifts cells up.
Syntax: del @
Note that this switch differs from other switches in that you DO NOT precede it with a dash (-) character. When used by itself, as in ...
del @;
... ALL persistent variable values stored at HKEY_CURRENT_USER\SOFTWARE\OriginLab\SysVar will be deleted. To selectively delete persistent variable values from the registry, follow the "@" symbol with (a) a particular persistent variable or (b) use letters in combination with the asterisk (*) wildcard character, as in the following examples:
del @AWC // deletes the value of @AWC stored in the registry
del @A* // deletes the values of all registry-stored variables that begin with letter "A"
See the list @ command.
Syntax: del -a
Delete all the temporary datasets, including all the axes datasets. A temporary dataset has a name which starts with an underscore ("_") character, e.g., _temp.
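A minimal usage sketch:

```labtalk
// Remove all temporary datasets (names beginning with "_"),
// such as _temp, including all the axes datasets:
del -a;
```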
Syntax: del -al varName
Delete the local variable with name varName. If varName is of the string type, do not include the trailing $-sign.
double aa=col(2)[1];
string ss$="First cell of 2nd col is $(aa)";
type ss$+", and 3rd is $(col(2)[3])";
del -al aa;
del -al ss;
After running the above code, you can use the list a command to verify that both variables are gone.
Syntax: del -as
Delete all datasets that are not in any worksheet and not used in any graphs.
// Delete all the loose datasets. Use "list s" to check it.
create cc -wdn 10 a,b,c;
del -as;
Syntax: del -di
Delete all imported data from the project, including data imported and saved during previous sessions.
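For example (note that this discards imported data across the entire project):

```labtalk
// Discard all imported data in the project, including data
// imported and saved during previous sessions:
del -di;
```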
Syntax: del -f filepath
Delete the file specified by filepath.
The following will delete macros.cnf from the User Files Folder, if it exists:
del -f "%Ymacros.cnf";
Syntax: del -fs path
Silent delete. Pass in a full path with a file name containing wildcards to delete multiple files. Passing in a full path ending in '\' will delete the directory and all files and subdirectories in it.
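For example (the folder and file names below are hypothetical placeholders):

```labtalk
// Silently delete every .tmp file in a hypothetical data folder:
del -fs "C:\MyData\*.tmp";
// Silently delete the folder itself, with all files and
// subdirectories in it (note the trailing backslash):
del -fs "C:\MyData\";
```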
Syntax: del -fc
The user's Origin.ini file, saved to the User Files Folder, contains a [Column Custom Format List] section that, over time, accumulates all custom formats entered into dialog box custom format lists (e.g. Column Properties). As the list accumulates, these dialog box custom format lists become difficult to use. Use this command to clear the list.
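A minimal usage sketch:

```labtalk
// Clear the accumulated [Column Custom Format List] section
// of the user's Origin.ini file:
del -fc;
```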
Syntax: del -fi
Delete the file import information saved in the worksheet. This is useful if an analysis template originally imports data with an import X-Function and you want to use a Data Connector to import data later. That may be the case if the template was made in an older version such as Origin2019 and is reused in a newer version such as Origin2020b. You will need to run this command first to clear the old import info saved in the worksheet.
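A sketch of the workflow described above; it assumes the command acts on the currently active worksheet of the analysis template:

```labtalk
// Clear the old X-Function import information from the worksheet
// so that a Data Connector can be used for later imports:
del -fi;
```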
Syntax: del -i
If you run into trouble not finding the correct things by Origin's Start button, you can use this option to reset the indexing. Next time you click Start button, the search bar should be disabled and show the text "Indexing... please wait" until the indexing is done.
Syntax: del -m macroName
Delete the macro macroName.
def double { type "$(%1 * 2)"; }; // Define a macro.
del -m double;
The following script deletes the macro named AutoExec:
del -m AutoExec;
Syntax: del -oc
Delete all OCB files from the OC Temp folder. This is sometimes needed when source OC file has changed but with an older date and you want to force recomplile those OC files.
Syntax: del -ocb
del -ocb filepathname1.c
del -ocb filepathname1.ocw
del -ocb filepathname1.c filepathname2.c // delete multiple files
del -ocb %YOCTEMP\filename.c // use %Y to get to the Users Files Folder
For more information on it's use see LabTalk.CHM: Programming in LabTalk: Updating an Existing Origin C File
Syntax: del -oct
Reset OC user workspace to a new empty one.
Syntax: del -occlean
This command deletes all OCB, OP, OPH files and also to remove entire OCTemp folder. After this command, you must have all the original OC source code to properly compile Origin system OC files.
This command is stronger the -oc, so if -oc does not work, you can try this. This was added for Origin C developers during the days of the Developers Kit.
Syntax: del -op [UID]
Minimum Origin Version Required: 8.5.1
Without the UID, it will clean up all the pending recalculate operations which are without input or output. Or it will delete the pending recalculate operation with UID, including the output of the operation. To see the pending operations, including UID, please use the list opp command.
delete -op; /// clean up dangling operations with empty input or empty output
delete -op 779; /// delete operation with UID 779 and its outputs
Syntax: del -path [UFF]
This option can be used to reset the path of User Folder.
// Reset the User File Folder, next time you start Origin:
del -path;
// Reset the User File Folder the next time you start Origin:
del -path UFF;
Syntax: del -r
// Delete all DataRange, including Autofill and operations ranges etc.
del -r;
// Delete all orphaned operations ranges:
del -r 16;
// Delete all AutoFill ranges:
del -r 32;
Syntax: del -ra VarName
(Available in Version 8.0 SR3 and newer)
// Delete local variables 'aa':
del -ra aa;
// Delete local variables with names starting with the letter 'a'.
// The * is a wildcard character:
del -ra a*;
// Delete all local variables in the current scope:
del -ra *;
Note: del -ra * will not delete session-level constants, so for instance, pi will not be removed by del -ra *. But, if you enter del -rac * within session scope, pi will be deleted. At local level, locally-defined constants will be deleted by del -ra *.
Syntax: del -s name
Delete the given dataset or function. All dependent datasets are also deleted.
del -s col(C);
Syntax: del -sd name
Delete only the specified dataset and reset all other dependent datasets.
create temp -wd 50 a b c;
list s; //shows temp_a, temp_b, temp_c;
del temp_c;
list s; // only temp_c deleted
del temp_a;
list s;
// temp_b deleted as well since it has X dependency on temp_a
Now compare with the following:
create temp -wd 50 a b c;
list s;
del -sd temp_a;
list s;
// still shows temp_b and temp_c
Syntax: del -si
Delete all dataset with invalid name
Origin Version: 2017
Syntax: del -sw name
Delete a set of dataset whose name includes characters specified in name. Wildcard is supported
create temp -wd 50 aa ab c;
list s; //shows TEMP_AA, TEMP_AB, TEMP_C;
del -sw temp_a*; //delete datasets whose name includes "temp_a"
list s;
// TEMP_AA, TEMP_AB are deleted
Syntax: del -v varName
This command is used to delete the specified user-defined variables, but it won't touch the system variables.
// Define a user-defined variable numeric variable "ProNum"
ProNum = 100;
list v; // Check variable list to make sure it shows up
del -v ProNum; // Delete the variable named 'globalNum'
list v; // Re-check the list to make sure it is gone
del -v ex*; // Delete all user-defined variables that begin with "ex"
Please note, for the earlier versions than Origin 2020, delete -v *; will delete all variables including the system variables. You might need run list v; to check the variables and then execute the deletion more precisely.
delete -v *;
list v;
Syntax: del -vs stringVarName
// Omit the trailing $-sign from the string variable name:
// Define global/project variable globalStr$ = "Hello"
globalStr$ = "Hello";
list vs; // Check variable list to make sure it shows up
del -vs globalStr; // Delete the variable
list vs; // Re-check the list to make sure it is gone
Note: This command supports wildcard since Origin 9.0.
Syntax: del -web URL
delete -web "";
See LabTalk system variable @CFDT. | https://www.originlab.com/doc/LabTalk/ref/Delete-cmd | CC-MAIN-2021-25 | refinedweb | 1,361 | 67.89 |
OK, how about option 4 ... :-)
Add an isRandom column to your Car and delete those every time your game
starts (when you create a new ParkingLot). Sure, they'll persist, but
you really don't care if you can easily get rid of them (or just replace
their values with new random car values?). Or, I suppose you could
easily delete them prior to saving your parking lot. Something along
that line.
/dev/mrg
-----Original Message-----
From: Elia Morling [mailto:eli..amiljen.se]
Sent: Wednesday, June 30, 2004 1:05 PM
To: cayenne-use..bjectstyle.org
Subject: Re: Working with Transient Objects
I've tried Option 1 and it will give me errors as I can't do a
Car.getParkingLot() which is the case all over my code. ;)
Option 3 doesn't change that much as all newly created objects
are TRANSIENT per default. First registering and then unregistering
is the same as not adding it to the DataContext in the first place.
This leaves me to try option 2. I guess as long as the DataObject
is in the DataContext it will try to change its PersistenceState each
time I change the value of a field. So I will have to override each
set method in the DataObject and set its PersistenceState to TRANSIENT
after the field has been changed.
I will not be able to try this for abit, so if anyone has anything
valuable
to add that would be fine.
Elia
----- Original Message -----
From: "Gentry, Michael" <michael_gentr..anniemae.com>
To: "Elia Morling" <eli..amiljen.se>; <cayenne-user@objectstyle.org>
Sent: Wednesday, June 30, 2004 6:50 PM
Subject: RE: Working with Transient Objects
Option 3?
There is also a dataContext.unregisterObjects() call which supposedly
sets the collection of objects you pass in (your random cars) to a
TRANSIENT state, which might be better than setting them directly on the
object (in case the data context does some other stuff behind the
scenes).
/dev/mrg
-----Original Message-----
From: Gentry, Michael
Sent: Wednesday, June 30, 2004 12:46 PM
To: Elia Morling; cayenne-use..bjectstyle.org
Subject: RE: Working with Transient Objects
OK, two suggestions to try (since you obviously have a better test
environment for this than me).
1) Use transient variables like I suggested. Just keep the persistent
cars separate from the random cars. The persistent cars are obviously
in a data context, the random ones created outside of one (this might
not be possible, but I'd hope so).
public List allCars()
{
List results = new List();
results.addAll(savedCars());
results.addAll(randomCars());
return results;
}
(Add all the other plumbing you need to create the random cars, etc.)
2) Cheat.
import org.objectstyle.cayenne.PersistenceState;
randomCar.setPersistenceState(COMMITTED); // Make it think it's already
saved
Or, have you tried this one?
randomCar.setPersistenceState(TRANSIENT);
Let us know if either of these approaches work for you.
Thanks,
/dev/mrg
-----Original Message-----
From: Elia Morling [mailto:eli..amiljen.se]
Sent: Wednesday, June 30, 2004 12:34 PM
To: cayenne-use..bjectstyle.org
Subject: Re: Working with Transient Objects
Sorry, let me try to make a better example.
In the Cayenne Modeler I have a Car and a ParkingLot. In my
database I have stored 1 ParkingLot and 5 Car(s).
Now, I fire up my game. I get the ParkingLot and 5 Car(s) all stored
in the database. Now I want 5 randomly created computer-controlled
Car(s) for the duration of the game only. I want to use all the fields
in
my Car class as they are identical, except that I don't want to store
them in the database. Why? Because the next time I start the game I
want to randomly create 5 new ones. Meanwhile I want to commit any
changes made to the 5 Car(s) controlled by players. All I need is
a PersistenceState flag like NEW_DONT_COMMIT or similar on
the computer controlled Car(s) so that the DataContext doesn't store
them in the database.
Elia
----- Original Message -----
From: "Gentry, Michael" <michael_gentr..anniemae.com>
To: "Elia Morling" <eli..amiljen.se>; <cayenne-user@objectstyle.org>
Sent: Wednesday, June 30, 2004 6:05 PM
Subject: RE: Working with Transient Objects
I'm not sure if I understand what you are trying to do, but ...
You have two entities which you modeled in Cayenne Modeler, Car and
ParkingLot. You want to persist the ParkingLot entities, but not the
Car entities? Do you ever persist Car entities? It doesn't sound to me
like you want to persist them.
If you do not want to persist the Car, you shouldn't be modeling it in
Cayenne Modeler at all. What you should do is model the ParkingLot
entity and in the subclass of the ParkingLot class (where you add your
business logic), create your own setters, getters, and transient
instance variables. Something similar to this:
public class ParkingLot extends _ParkingLot
{
private List cars;
public void addToCarArray(Car car)
{
if (cars == null)
cars = new List();
cars.add(car);
}
public void removeFromCarArray(Car car)
{
if (cars != null)
cars.remove(car);
}
public List cars()
{
if (cars == null)
cars = new List();
return cars;
}
}
NOTE: I didn't try to compile any of the above, but the general idea is
to handle transient information in the subclass. You don't model
transient information in Cayenne Modeler (the Car -- make plain old java
class for it). It's also a good idea to use the subclass to handle
derived information, possibly even caching it for efficiency (if it
makes sense to do so -- you have to be careful if the parent values
change, which would change the derived value). For example, you could
have a fullName() method which returns firstName() + " " + middleName()
+ " " + lastName(). And, of course, any convenience methods go in the
subclass, too.
Hope that helps!
/dev/mrg
-----Original Message-----
From: Elia Morling [mailto:eli..amiljen.se]
Sent: Tuesday, June 29, 2004 5:21 PM
To: cayenne-use..bjectstyle.org
Subject: Working with Transient Objects
Hi,
Here's an example to illustrate my problem. I have a parking lot
with 5 cars stored in the database. Every time I start up my
application I need to create 5 new cars for the duration of the
run only. I don't want to store them in the database so I simply
create the objects without a context using regular new().
Now I can't make them work with the parking lot, when using:
parkinglot.addToCarArray(car);
OR
car.setParkinglot(parkinglot)
Why doesn't this work and what should I do about it? I need to
have database cars and non-database cars to co-exist in the
parkinglot.
Elia
This archive was generated by hypermail 2.0.0 : Wed Jun 30 2004 - 13:17:31 EDT | http://objectstyle.org/cayenne/lists/cayenne-user/2004/06/0232.html | CC-MAIN-2015-18 | refinedweb | 1,126 | 65.01 |
Here's what I did,
unzip etc
copy souper95.com c:\windows\command
(maybe souper95.exe c:\windows\command\souper.exe ?)
mkdir c:\home
created two files newsrc (list of groups I wanted) kill (the souper kill
file) in c:\home
install yarn as per instructions. Use adduser to set home as my home dir.
connect via DUN to my newserver
open a MS-DOS command prompt
entered the following commands:
cd c:\temp
set NNTPSERVER=news.foo.wherever.com
souper -N c:\home\newsrc -K c:\home\kill -k0 pop3.mail.server name password
import -u
yarn
Once it works might want to add the set NNTPSERVER to your autoexec.bat
or create a batch file containing these commands with a desktop shotcut
pointing to it. Also might need to fiddle with the msdos properties to
get something you like.
?
>
No, not blind. However, the command line switches are the same.
>TIA.
>
> James
> | http://www.vex.net/yarn/list/199608/0026.html | crawl-001 | refinedweb | 154 | 76.42 |
My temperature readings from the DS18B20 sensor is off by a large amount. How can it be calibrated?
Hi Jack.
To calibrate the sensor, you need to measure something with known temperature and check it against the value you get.
The easiest things are ice (0ºC) and boiling water (100ºC).
Take a look at this article that may help you with that:
Regards,
Sara
Then how do you incorporate that information into the code, so the output is corrected?
In other words, when we work with sensors in general, the code needs to include a means for approximating a standard curve, ie, a “reading” vs voltage or current.
Sometime a standard curve is not linear, so ideally there would be a means to use multiple samples or use a function other than y=mx+b.
Hi Jack.
I was thinking that y=mx+b is the easiest way to go. Unless you have other means to calibrate the sensor. With the water reference you’ll only get to values to adjust a calibrating function.
If you have another calibrated temperature sensor or a thermometer that you can compare the readings with, you can get a better adjusted function.
For example, there are software that adjusts a mathematical function to your readings, so you can use it to get the right temperature readings (you can do that with excel for example).
But I guess that the real issue is that usually people don’t have a calibrated source to use as reference.
Regards,
Sara
I was thinking ahead in terms of calibrating other sensors, such as gas sensors that might be curvilinear. For temp I will stick with y=mx+b.
Regardless, back to my question, I still do not understand how/where you insert the two calibration data points into the DS18B20 code?
Hi.
I’m not really sure that I understood your doubt. But here’s the steps:
First, you need to run the code to measure known temperatures. Get several data points and calculate the average value.
For example, imagine that when you measure the following temperatures, the mean values are:
- 0ºC(real value): 3 ºC (value measured with the sensor)
- 100ºC (real value): 95 ºC (value measured with the sensor)
Your function we’ll be something as follows:
y = 1.087x – 3.26
Where y is the real value and x is the value you get from the sensor.
So, then, you can incorporate that equation in your projects.
For example:
float temperature = sensors.getTempCByIndex(0);
float realTemperature = 1.087*temperature - 3.26;
I hope you understand.
Regards,
Sara
This makes sense, thanks. I asked because I don’t see this in any of the code in the esp32 book for Module 4, Units 8 and 11 or Module 5, Units 3-5. In fact, I didn’t see any discussion of ‘calibration’ for any of the sensors. Perhaps you might consider adding this in the next version?
By the way, I also see the use of “readTemp” and “getTemp”. How do they differ and when to use either?
You’re right, we don’t have any content about calibrating sensors.
Regarding the functions used, it depends on which sensor you are using. To interface with each sensor, you use different libraries. So, you use the functions provided by the library.
the readTemperature() is used with the BME sensor and the getTemp is used with the DS18B20.
Regards,
Sara
Great!
I’ll mark this issue as resolved.
If you need further help, you just need to open a new question.
Regards,
Sara
In ice: 0°C = -127 C
In boiling water = -127
3.3 v
Wire sda gpio 21
Start your code here
#include “Arduino.h”
#include <OneWire.h>
#include <DallasTemperature.h>
const int oneWireBus = 21;
OneWire oneWire(oneWireBus);
DallasTemperature sensors(&oneWire);
void setup() {
Serial.begin(115200);
}
void loop() {
sensors.requestTemperatures();
Serial.println(String(sensors.getTempCByIndex(0)));
delay(200);
}
It’s no better on pin 32
Regards
JPD
Hi.
The -127 is an error that usually happens when the sensor was not able to get readings.
Many times it is caused by “bad” wires or wrong connections. Additionally, if the wires are very long, it can also be an issue.
If the wiring is correct, you can try to add an if statement in your code that checks if the temperature is -127. If it is, try to get another reading.
You may also need to add a bigger delay time between readings in your loop.
Regards,
Sara | https://rntlab.com/question/esp32-module-5-unit-5-ds18b20-calibration/ | CC-MAIN-2021-10 | refinedweb | 751 | 66.13 |
I’m not only beginning python but also programming.
So so far i solve the problems but i’m trying to question the way i do it.
But it’s not easy without the experience…
So i’d like to start to ask comments about the way i solve problems.
Is it ok, can it be optimised, is it just “naive” and not the way a progammmer would do, is it “pythonic”
So for Reverse, this is my solution:
def reverse(text): txet = "" for i in range(len(text)): txet = txet + text[len(text)-(1+i)] print txet return txet
Thanks in advance for your time
[edit] I found some answers in this very interesting topic about the reverse problem: | https://discuss.codecademy.com/t/reverse-naive-solution/196536 | CC-MAIN-2018-39 | refinedweb | 120 | 60.69 |
Email sending in Reaction is handled by Nodemailer and the use of any SMTP server is supported. See the configuration documentation for details on how to set up mail in the admin dashboard.
All emails that are sent from Reaction are added to a job queue for both logging and failure handling (see vsivsi:job-collection for full API docs). While you can add jobs directly to the queue, it is recommended that you use the API outlined below to send emails.
APIAPI
All server side email methods (except Meteor methods) are available in the
Reaction.Email namespace.
Reaction.Email.getMailUrl()Reaction.Email.getMailUrl()
If mail is configured, returns an SMTP URL string.
Reaction.Email.getMailConfig()Reaction.Email.getMailConfig()
If mail is configured, returns a Nodemailer configuration object.
Example Nodemailer config
{ host: "smtp.mailgun.org", port: 587, secure: true, auth: { user: "someUsername", pass: "somePassword", } }
If no mail settings are found, a "direct" mail config will be returned and Nodemailer will attempt to connect directly to the destination SMTP server that the email is being sent to. This is purely a fallback option and is extremely unlikely to be a reliable way to send email. Many ISP's block the required ports and many mail servers filter incoming mail from unknown SMTP servers to the recipient's spam folder.
The "direct" config looks like this:
{ direct: true }
Reaction.Email.send(options)Reaction.Email.send(options)
Adds an email sending job to the queue. Jobs are processed immediately in the order they are added. Failures are retried 5 times, with a 3 minute wait between each try.
options
{Object} (required)
(all fields required, email job will fatally fail if any are missing)
to- email address to send to
from- email address that will appear to have sent the email
subject- the email subject
html- the HTML or plain text content of the email
Reaction.Email.getTemplate(template)Reaction.Email.getTemplate(template)
Returns an email template as a
String for server side rendering of an email body.
template
{String} (required)
The
template name passed in is used to find a template in either the
Templates collection in the database or the provided email templates in the filesystem. The convention is to name templates based on the folder/file structure relative to /private/email/templates. For example, to get the template used for inviting a shop member, you would use:
const tmpl = Reaction.Email.getTemplate("accounts/inviteShopMember");
That would first try to find a template in the
Templates collection with that name and the current locale/language like this:
Templates.findOne({ template: "accounts/inviteShopMember", language: "en" // shop locale checked to get this value });
If no template is found, it will fallback to the default template in the filesystem at /private/email/templates/accounts/inviteShopMember.html using Meteor's Assets API. | https://docs.reactioncommerce.com/docs/next/email-api | CC-MAIN-2018-43 | refinedweb | 466 | 53 |
zip_source_read man page
zip_source_read — read data from zip source
Library
libzip (-lzip)
Synopsis
#include <zip.h>
zip_int64_t
zip_source_read(zip_source_t *source, void *data, zip_uint64_t len);
Description
The function zip_source_read() reads up to len bytes of data from source at the current read offset into the buffer data.
The zip source source has to be opened for reading by calling zip_source_open(3) first.
Return Values
Upon successful completion the number of bytes read is returned. Upon reading end-of-file, zero is returned. Otherwise, -1 is returned and the error information in source is set to indicate the error.
See Also
libzip(3), zip_source(3), zip_source_seek(3), zip_source_tell(3), zip_source_write(3)
History
zip_source_read() was added in libzip 1.0.
Authors
Dieter Baron <dillo@nih.at> and Thomas Klausner <tk@giga.or.at>
Referenced By
zip_source_open(3), zip_source_seek(3), zip_source_tell(3). | https://www.mankier.com/3/zip_source_read | CC-MAIN-2018-17 | refinedweb | 138 | 59.4 |
I put together a small Python script to check the TankUtility propane tank gauge and put into an openHAB item which you can see below. I’ve just started using it, so I can’t say much about the script or the gauge reliability. I’ll post an update as I use it more.
Some notes:
- The gauge updates once per day, so there is no point in running this more than once per day.
- This is not local; it pulls the data from TankUtility’s server. If I find out how to get the data locally, I’ll change the script.
- You need an RRD gauge on your tank which TankUtility will sell you for $15 if you don’t already have this.
- I chose Python because I’m guessing that this could be converted to a Jython rule easily. I think this can be done fairly easily in rules DSL as well.
- I call the script from cron once per day.
- There is no error checking - that needs to be added.
The API is here. I am new to Python, so I’m sure this could use some improvement. There is other data that can be collected as well including the time of the reading and the temperature (of what? there’s no sensor in the tank - maybe it is the interface box’s temperature).
import requests from requests.auth import HTTPBasicAuth # change these appropriately openHABHostAndPort = '' TankUser = 'email@somewhere.com' TankPassword = 'whatever' def putToOpenHAB(item, itemData): ItemURL = openHABHostAndPort + '/rest/items/' + item + '/state' OpenHABResponse = requests.put(ItemURL, data = str(itemData).encode('utf-8'), allow_redirects = True) def Main(): jsonTokenResponse = requests.get('', auth=HTTPBasicAuth(TankUser, TankPassword)).json() jsonDeviceResponse = requests.get('' + jsonTokenResponse["token"]).json() # below is querying the first tank in the account - adjust the [0] if you want other tanks jsonTankDataResponse = requests.get('' + jsonDeviceResponse["devices"][0] + '?token=' + jsonTokenResponse["token"]).json() print(jsonTankDataResponse["device"]["lastReading"]["tank"]) putToOpenHAB('TankLevel', jsonTankDataResponse["device"]["lastReading"]["tank"]) if __name__ == '__main__': Main() | https://community.openhab.org/t/propane-tank-monitor-tankutility-python-script/91331 | CC-MAIN-2022-27 | refinedweb | 324 | 51.04 |
I have a question about Inheritance and Binding.
What if I create a new method in a subclass, and try to call it with the superclass reference?
I know that it will check first the type, and after that the object.
So according to the rules this is not going work, because this method is not declared in superclass.
But is there no way to overcome it?
I mean does Inheritance mean, that you must declare every single method in superclass, and if you would like to change something for subclass, you can only override it?
So if suddenly I realise, that one of my subclasses does needs a special method, or needs an overloading, then I eather forced to declare it in superclass first or forget about it at all?
So if suddenly I realise, that one of my subclasses does needs a special method, or needs an overloading, then I eather forced to declare it in superclass first or forget about it at all?
There is a third option. Declare the method in the subclass. In code that needs to call the method, cast the reference to the subclass type. If the reference does not really point to an object of that subclass, you will get a ClassCastException.
If you end up having to do this sort of thing you should take another look at it during your next refactoring pass to see if it can be smoothed out.
public class Test { public static void main(String[] args) { System.out.println(new Test().toString()); Test sub = new TestSub(); System.out.println(sub.toString()); ((TestSub)sub).specialMethod(); } @Override public String toString(){ return "I'm a Test"; } } class TestSub extends Test { void specialMethod() { System.out.println("I'm a TestSub"); } } | https://codedump.io/share/QQB0jbHk54no/1/java-superclass-calling-a-method-created-in-a-subclass | CC-MAIN-2018-05 | refinedweb | 290 | 72.05 |
Hello, To remove bottlenecks I usually instrument some functions in my application inside a dedicated test and set speed goals there until they are met. Then I leave the test to avoid speed regressions, when doable, by translating times in pystones. Unless I missed something in the standard library, I feel like there's a missing tool to do it simply: - the timeit module is nice to try out small code snippets but is not really adapted to manually profile the code of an existing application - the profile module is nice to profile an application as a whole but is not very handy to gather statistics on specific functions in their real execution context What about adding a decorator that fills a statistics mapping in memory (time+stones), like this: >=========== import time import sys import logging from test import pystone benchtime, stones = pystone.pystones() def secs_to_kstones(seconds): return (stones*seconds) / 1000 stats = {} def reset_stats(): global stats stats = {} def log_stats(): template = '%s : %.2f kstones, %.3f secondes' for key, v in stats.items(): logging.debug(template % (key, v['stones'], v['time'])) if sys.platform == 'win32': timer = time.clock else: timer = time.time def profile(name='stats', stats=stats): def _profile(function): def __profile(*args, **kw): start_time = timer() try: return function(*args, **kw) finally: total = timer() - start_time kstones = secs_to_kstones(total) stats[name] = {'time': total, 'stones': kstones} return __profile return _profile >=========== This allows instrumenting the application by decorating some functions, either inside the application, either in a dedicated test: >====== def my_test(): my.app.slow_stuff = profile('seem slow')(my.app.slow_stuff) my.app.other_slow_stuff = profile('seem slow too')(my.app.other_slow_stuff) # should not take more than 40k pystones ! 
assert stats['seem slow too']['profile'] < 40 # let's log them log_stats() >====== Regards, Tarek -- Tarek Ziadé | Association AfPy | Blog FR | Blog EN | -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/python-ideas/2008-June/001659.html | CC-MAIN-2016-44 | refinedweb | 303 | 50.67 |
Writing Tests for a Stored Proc Sure Feels Weird
Some times you are just stuck and have to write some weird test fixtures to get the level of confidence you need to move forward in a legacy system. You can’t simply throw the baby out with the bath water no matter how much you really, really want every one to agree that is the best course of action.
In that situation, it is just as important to stick to your guns and find a way to wrap a test around what you are working on. Case in point, the current project I am on is your traditional business logic in sprocs application. By mandate, all updates to data must happen in a sproc so that business rules can be “enforced”.
The team of developers I have joined have no faith in Agile practices and see unit testing as a drain on their time and resources for no value. Interestingly enough, when I joined the team the vast majority of sprint items were bug fixes to multi-hundred line sprocs where the fix might actually cause more bugs. There was no real way to gain any kind of confidence other than poking the application with a stick.
Enter the stored procedure unit test fixture.
[TestFixture] public class when_creating_a_new_research_item_and_an_open_research_item_already_exists : with_a_valid_security { private Execute statement; public override void Because_of() { statement = Execute.Proceedure("spResearchItem_Create") .WithParameter("@TableName", tableName) .WithParameter("@ColumnName", columnName) .WithParameter("@AssignedToUser", user) .WithParameter("@ItemId", recordId) .WithParameter("@Note", note); } [Test] public void it_should_refuse_to_create_the_record() { Assert.Throws(() => { statement.AsNonQuery(); }); } [Test] public void it_should_have_a_descriptive_error() { var error = Assert.Throws(() => { statement.AsNonQuery(); }); error.Message.ShouldContain("Open Research Item Already Exists"); } [Test] public void no_record_should_be_created() { var count = Execute.Statement( @"SELECT COUNT(*) FROM ResearchItem WHERE [email protected] AND [email protected] AND [email protected] AND IsOpen='Y'") .WithParameter("@p1", tableName) .WithParameter("@p2", columnName) .WithParameter("@p3", recordId) .AsValue(); count.ShouldBe(1); } }
This single fixture explicitly demonstrates a business rule, it can run with every build and we will get instant notification when this rule can be violated because of changes in the sproc. It is also nicely wrapped in a transaction that is automatically rolled back, so I can point it at any database and test its set of sprocs. It is not optimal, it is not pretty. But it does give you confidence to move forward.
Side Note: Don’t pay to much attention to the Execute class. It is simply a test helper to remove some of the tediousness of executing ADO code from the tests.
Share this story
About Author
I am a passionate engineer with an interest in shipping quality software, building strong collaborative teams and continuous improvement of my skills, team and the product. | https://iamnotmyself.com/2010/04/02/writing-tests-for-a-stored-proc-sure-feels-weird/ | CC-MAIN-2017-39 | refinedweb | 446 | 51.07 |
Created on 2012-10-11 21:30 by gvanrossum, last changed 2013-11-23 22:46 by serhiy.storchaka. This issue is now closed.
I've noticed a subtle bug in some of our internal code. Someone wants to ensure that a certain string (e.g. a URL path) matches a certain pattern in its entirety. They use re.match() with a regex ending in $. Fine. Now someone else comes along and modifies the pattern. Somehow the $ gets lost, or the pattern develops a set of toplevel choices that don't all end in $. And now things that have a valid *prefix* suddenly (and unintentionally) start matching.
I think this is a common enough issue and propose a new API: a fullmatch() function and method that work just like the existing match() function and method but also check that the whole input string matches. This can be implemented slightly awkwardly as follows in user code:
def fullmatch(regex, input, flags=0):
m = re.match(regex, input, flags)
if m is not None and m.end() == len(input):
return m
return None
(The corresponding method will have to be somewhat more complex because the underlying match() method takes optional pos and endpos arguments.)
+1. Note that this really can't be done in user-level code. For example, consider matching the pattern
a|ab
against the string
ab
Without being _forced_ to consider the "ab" branch, the regexp will match just the "a" branch. So, e.g., the example code you posted will say "nope, it didn't match (the whole thing)".
What will be with non-greedy qualifiers? Should '.*?' full match any string?
>>> re.match('.*?$', 'abc').group()
'abc'
>>> re.match('.*?', 'abc').group()
''
> Note that this really can't be done in user-level code.
Well, how about:
def fullmatch(regex, input, flags=0):
return re.match("(:?" + regex + ")$", input, flags)
Antoine, that's certainly the conceptual intent here. Can't say whether your attempt works in all cases. The docs don't guarantee it. For example, if the original regexp started with (?x), the docs explicitly say the effect of (?x) is undefined "if there are non-whitespace characters before the [inline (?x)] flag".
Sure, you could parse the regexp is user code too, and move an initial (?...x...) before your non-capturing group. For that matter, you could write your own regexp engine in user code too ;-)
The point is that it should be easy for the regexp engine to implement the desired functionality - and user attempts to "fake it" have pitfalls (even Guido didn't get it right - LOL ;-) ).
'$' will match at the end of the string or just before the final '\n':
>>> re.match(r'abc$', 'abc\n')
<_sre.SRE_Match object at 0x00F15448>
So shouldn't you be using r'\Z' instead?
>>> re.match(r'abc\Z', 'abc')
<_sre.SRE_Match object at 0x00F15410>
>>> re.match(r'abc\Z', 'abc\n')
>>>
And what happens if the MULTILINE flag is turned on?
>>> re.match(r'abc$', 'abc\ndef', flags=re.MULTILINE)
<_sre.SRE_Match object at 0x00F15448>
>>> re.match(r'abc\Z', 'abc\ndef', flags=re.MULTILINE)
>>>
Matthew, Guido wrote "check that the whole input string matches" (or slice if pos and (possibly also) endpos is/are given). So, yes, \Z is more to the point than $ if people want to continue wasting time trying to implement this as a Python-level function ;-)
I don't understand what you're asking about MULTILINE. What's the issue there? Focus on Guido's "whole input string matches", not on his motivational talk about "a regex ending in $". $ and/or \Z aren't the point here; "whole input string matches" is the point..
This means Tim and Guido are right that a dedicated fullmatch() method
is desireable.
Definitely this is not easy issue.
Serhiy, I expect this is easy to implement _inside_ the regexp engine. The complications come from trying to do it outside the engine. But even there, wrapping the original regexp <re> in
(?:<re>)\Z
is at worst very close. The only insecurity with that I've thought of concerns the doc's warnings about what can appear before an inline re.VERBOSE flag. It probably works fine even if <re> does begin with (?...x....).
It certainly appears to ignore the whitespace, even if the "(?x)" is at the end of the pattern or in the middle of a group.
Another point we need to consider is that the user might want to use a pre-compiled pattern.
I'm about to add this to my regex implementation and, naturally, I want it to have the same name for compatibility.
However, I'm not that keen on "fullmatch" and would prefer "matchall" instead.
What do you think?
I like "matchall" fine, but I can't channel Guido on names - he sometimes gets those wrong ;-)
re.matchall() would appear to be related to re.findall(), which it isn't.
The re2 package has a FullMatch method:
re2's FullMatch method contrasts with its PartialMatch method, which re doesn't have!
But my other argument stands.
OK, in order to avoid bikeshedding, "fullmatch" it is.
FWIW, I prefer "fullmatch" as well :)
Patch attached. I've taken a slightly different approach than what has been discussed here: rather than define a new fullmatch() function and method, I've defined a new re.FULLMATCH flag for match(). So an example would be
re.match('abc','abc',re.FULLMATCH)
The implementation is basically what has been discussed here, except done when the regular expression is compiled rather than at the user level.
Thanks for the patch. While an internal flag may be a reasonable implementation strategy, IMHO a dedicated method still makes sense: it's simply more readable than passing a flag.
As for the tests, they should probably exercise the interaction with re.MULTILINE - see MRAB's comment in msg172775.
I've attached a patch.
I did not analyze the patch deeply, only left on Rietveld comments on first sight. Need to update the documentation.
The patch doesn't seem to include failure cases for fullmatch (i.e. cases where fullmatch returns None where match or search would return a match).
3 of the tests expect None when using 'fullmatch'; they won't return None when using 'match'.
> 3 of the tests expect None when using 'fullmatch'; they won't return
> None when using 'match'.
Sorry, my bad. Like Serhiy, I can't comment on the changes to re internals, but we can trust you on this. The patch needs documentation, though.
I can't comment right now, but I am going inspect thoroughly re internals.
This is a new feature and we have enough time before 3.4 freeze.
Serhiy, sorry to ping you, but do you think you're gonna look at this?
I updated the patch to current tip, fixed three issues from the review, and added documentation updates.
New changeset b51218966201 by Georg Brandl in branch 'default':
Add re.fullmatch() function and regex.fullmatch() method, which anchor the
Sorry, accidental push, already reverted.
Patch updated to current tip. I have added some changes from the review and have added some tests.
Matthew, why change for SRE_OP_REPEAT_ONE is needed? Tests are passed without it.
Matthew, could you please answer my question?
I don't know that it's not needed.
New changeset 89dfa2671c83 by Serhiy Storchaka in branch 'default':
Issue #16203: Add re.fullmatch() function and regex.fullmatch() method,
Committed with additional test (re.fullmatch('a+', 'ab')) which proves that change for SRE_OP_REPEAT_ONE are needed.
Thank you Matthew for your contribution. | https://bugs.python.org/issue16203 | CC-MAIN-2017-22 | refinedweb | 1,259 | 68.57 |
NAME
Test::Stream - **DEPRECATED** See Test2-Suite instead
DEPRECATED
This distribution is deprecated in favor of Test2, Test2::Suite, and Test2::Workflow.
See Test::Stream::Manual::ToTest2 for a conversion guide.
***READ THIS FIRST***
This is not a drop-in replacement for Test::More.
Adoption of Test::Stream instead of continuing to use Test::More is a
choice. Liberty has been taken to make significant API changes. Replacing
use Test::More; with
use Test::Stream; will not work for more than the
most trivial of test files.
See Test::Stream::Manual::FromTestBuilder if you are coming from Test::More or Test::Simple and want a quick translation.
***COMBINING WITH OLD TOOLS***
At the moment you cannot use Test::Stream and Test::Builder based tools in the same test scripts unless you install the TRIAL Test::More version. Once the Test::More trials go stable you will be able to combine tools from both frameworks.
MANUAL
The manual is still being written, but a couple pages are already available.
Migrating from Test::More
Test::Stream::Manual::FromTestBuilder
How to write tools for Test::Stream
Test::Stream::Manual::Tooling
Overview of Test-Stream components
Test::Stream::Manual::Components
DESCRIPTION.
Bundles and plugins can be used directly, it is not necessary to use Test::Stream to load them.
SYNOPSIS
use Test::Stream -Classic; ok(1, "This is a pass"); ok(0, "This is a fail"); done_testing;
The '-' above means load the specified bundle, this is the same as:
use Test::Stream::Bundle::Classic; ok(1, "This is a pass"); ok(0, "This is a fail"); done_testing;
SUBCLASS
package My::Loader; use strict; use warnings; use parent 'Test::Stream'; # The 'default' sub just returns a list of import arguments to use byu # default. sub default { return qw{ -Bundle1 Plugin1 ... }; } 1;
IMPORTANT NOTE
use Test::Stream; will fail. You MUST specify at least one bundle or
plugin. If you do not specify any then none would be imported and that is
obviously not what you want. If you are new to Test::Stream then you should
probably start with one of the pre-made bundles:
'-Classic' - The 'Classic' bundle.
This one is probably your best bet when just starting out. This plugin closely resembles the functionality of Test::More.
See Test::Stream::Bundle::Classic.
'-V1' - The bundle used in Test::Streams tests.
This one provides a lot more than the 'Classic' bundle, but is probably not suited to begginers. There are several notable differences from Test::More that can trip you up if you do not pay attention.
See Test::Stream::Bundle::V1.
WHY NOT MAKE A DEFAULT BUNDLE OR SET OF PLUGINS?
Future Proofing. If we decide in the future that a specific plugin or tool is harmful we would like to be able to remove it. Making a tool part of the default set will effectively make it unremovable as doing so would break compatability. Instead we have the bundle system, and a set of starter bundles, if a bundle proves ot be harmful we can change the recommendation of the docs.
PLUGINS, BUNDLES, AND OPTIONS
Test::Stream tools should be created as plugins. This is not enforced, nothing prevents you from writing Test::Stream tools that are not plugins. However writing your tool as a plugin will help your module to play well with other tools. Writing a plugin also makes it easier for you to create private or public bundles that reduce your boilerplate.
Bundles are very simple. At its core a bundle is simply a list of other bundles, plugins, and arguments to those plugins. Much like hash declaration a 'last wins' approach is used; if you load 2 bundles that share a plugin with different arguments, the last set of arguments wins.
Plugins and bundles can be distinguished easily:
use Test::Stream( '-Bundle', # Bundle ('-') ':Project', # Project specific bundle (':') 'MyPlugin', # Plugin name (no prefix) '+Fully::Qualified::Plugin', # (Plugin in unusual path) 'SomePlugin' => ['arg1', ...], # (Plugin with args) '!UnwantedPlugin', # Do not load this plugin 'WantEverything' => '*', # Load the plugin with all options 'option' => ..., # Option to the loader (Test::Stream) );
Explanation:
'-Bundle',
The
-prefix indicates that the specified item is a bundle. Bundles live in the
Test::Stream::Bundle::namespace. Each bundle is an independant module. You can specify any number of bundles, or none at all.
':Project'
The ':' prefix indicates we are loading a project specific bundle, which means the module must be located in
t/lib/,
lib/, or the paths provided in the
TS_LB_PATHenvironment variable. In the case of ':Project' it will look for
Test/Stream/Bundle/Project.pmin
TS_LB_PATH,
t/lib/, then
lib/.
This is a good way to create bundles useful to your project, but not really worth putting on CPAN.
'MyPlugin'
Arguments without a prefix are considered to be plugin names. Plugins are assumed to be in
Test::Stream::Plugin::, which is prefixed automatically for you.
'+Fully::Qualified::Plugin'
If you write a plugin, but put it in a non-standard namespace, you can use the fully qualified plugin namespace prefixed by '+'. Apart from the namespace treatment there is no difference in how the plugin is loaded or used.
'SomePlugin' => \@ARGS
Most plugins provide a fairly sane set of defaults when loaded. However some provide extras you need to request. When loading a plugin directly these would be the import arguments. If your plugin is followed by an arrayref the ref contents will be used as load arguments.
Bundles may also specify arguments for plugins. You can override the bundles arguments by specifying your own. In these cases last wins, arguments are never merged. If multiple bundles are loaded, and several specify arguments to the same plugin, the same rules apply.
use Test::Stream( '-BundleFoo', # Arguments to 'Foo' get squashed by the next bundle '-BundleAlsoWithFoo', # Arguments to 'Foo' get squashed by the next line 'Foo' => [...], # These args win );
'!UnwantedPlugin'
This will blacklist the plugin so that it will not be used. The blacklist will block the plugin regardless of where it is listed. The blacklist only effects the statement in which it appears; if you load Test::Stream twice, the blacklist will only apply to the load in which it appears. You cannot override the blacklist items.
'WantEverything' => '*'
This will load the plugin with all options. The '*' gets turned into
['-all']for you.
'option' => ...
Uncapitalized options without a
+,
-, or
:prefix are reserved for use by the loader. Loaders that subclass Test::Stream can add options of their own.
To define an option in your subclass simply add a
sub opt_NAME()method. The method will receive several arguments:
sub opt_foo { my $class = shift; my %params = @_;
}
my $list = $params{list}; # List of remaining plugins/args my $args = $params{args}; # Hashref of {plugin => \@args} my $order = $params{order}; # Plugins to load, in order my $skip = $params{skip}; # Hashref of plugins to skip {plugin => $bool} # Pull our arguments off the list given at load time my $foos_arg = shift @$list; # Add the 'Foo' plugin to the list of plugins to load, unless it is # present in the $args hash in which case it is already in order. push @$order => 'Foo' unless $args{'Foo'}; # Set the args for the plugin $args->{Foo} = [$foos_arg]; $skip{Fox} = 1; # Make sure the Fox plugin never loads.
AVAILABLE OPTIONS
class => $CLASS
Shortcut for the Test::Stream::Plugin::Class plugin.
skip_without => $MODULE
- skip_without => 'v5.008'
skip_without => [$MODULE => $VERSION]
Shortcut for the Test::Stream::Plugin::SkipWithout plugin. Unlike normal specification of a plugin, this APPENDS arguments. This one can be called several time and the arguments will be appended.
Note: specifying 'SkipWithout' the normal way after a call to 'skip_without' will wipe out the argument that have accumulated so far.
srand => $SEED
Shortcut to set the random seed.
SEE ALSO
For more about plugins and bundles see the following docs:
plugins
Test::Stream::Plugin - Provides tools to help write plugins.
bundles
Test::Stream::Bundle - Provides tools to help write bundles.
EXPLANATION AND HISTORY
Test::Stream has learned from Test::Builder. For a time it was common for
people to write
Test::* tools that bundled other
Test::* tools with them
when loaded. For a short time this seemed like a good idea. This was quickly
seen to be a problem when people wanted to use features of multiple testing
tools that both made incompatible assumptions about other modules you might
want to load.
Test::Stream does not recreate this wild west approach to testing tools and bundles. Test::Stream recognises the benefits of bundles, but provides a much more sane approach. Bundles and Tools are kept separate, this way you can always use tools without being forced to adopt the authors ideal bundle.
ENVIRONMENT VARIABLES
This is a list of environment variables Test::Stream looks at:
- TS_FORMATTER="Foo"
TS_FORMATTER="+Foo::Bar"
This can be used to set the output formatter. By default Test::Stream::Formatter::TAP is used.
TS_KEEP_TEMPDIR=1
Some IPC drivers make use of temporary directories, this variable will tell Test::Stream to keep the directory when the tests are complete.
TS_LB_PATH="./:./lib/:..."
This allows you to provide paths where Test::Stream will search for project specific bundles. These paths are NOT added to
@INC.
TS_MAX_DELTA=25
This is used by the Test::Stream::Plugin::Compare plugin. This specifies the max number of differences to show when data structures do not match.
TS_TERM_SIZE=80
This is used to set the width of the terminal. This is used when building tables of diagnostics. The default is 80, unless Term::ReadKey is installed in which case the value is determined dynamically.
TS_WORKFLOW=42
TS_WORKFLOW="foo"
This is used by the Test::Stream::Plugin::Spec plugin to specify which test block should be run, only the specified block will be run.
TS_RAND_SEED=44523
This only works when used with the Test::Stream::Plugin::SRand plugin. This lets you specify the random seed to use.
HARNESS_ACTIVE
This is typically set by TAP::Harness and other harnesses. You should not need to set this yourself.
HARNESS_IS_VERBOSE
This is typically set by TAP::Harness and other harnesses. You should not need to set this yourself.
NO_TRACE_MASK=1
This variable is specified by Trace::Mask. Test::Stream uses the Trace::Mask specification to mask some stack frames from traces generated by Trace::Mask compliant tools. Setting this variable will force a full stack trace whenever a trace is produced.
SOURCE
The source code repository for Test::Stream | https://web-stage.metacpan.org/release/EXODIST/Test-Stream-1.302027/source/README.md | CC-MAIN-2021-31 | refinedweb | 1,721 | 56.45 |
Question: How can I parse GFF3 using BCBio?
0
I have the following code:
import sys from BCBio import GFF file = sys.argv[1] for record in GFF.parse(file): print([record.seqid,record.source, record.start, record.end, record.strand])
My question is how can extract the ids, the chromosome, the coordinates and the strand from a GFF3 files using python/biopython (I'm bounded to python because this is part of a bigger program writen in said language). I thought since record.seqid, record.source, record.start, record.end, record.strand is used to write new GFF, this could work, but clearly it doesn't.
I need this solved ASAP but also I need to learn, so please, anyone out there, please help.
Thanks in advance
Jenifer
Try this package in python -
This package creates a "local" database from your gff file that you can interact with in relatively easier manner. For instance, I was able to extract start stop position of gene along with exons coding for that gene and their start and stop positions. Hope this helps!
Ok, so now what I need is an iterator to get all CDS with all start, end and and strand for a chromosome. Does this library provide one, or how do I get this.
Context: For each chromosome I calculated the GC_Skew and now I need to show in that plot, the zones that have CDS, distinguishing the zones that have CDS and the strand. The function:
GC_Skew parameter is a list with the GC-skew calculation, window and overlap is the size of the window and the overlap, salida is the name of the chromosome (seqid) and anotation can be None or a created database like this:
anotation = gffutils.create_db(fn, dbfn = anotation_file)So the thing is I don't have a clear idea how gffutils works and I don't know if I can get all CDS to add that data to the plot If you can give me a hand in this, I will apreciate it
The tutorial () is very straightforward, for example let us presume that the local database that you have created is by the name db and the gene whose CDS you would like to extract is assigned name gene, then you can accomplish what you want to by the following:
See if this helps! To be honest I do not have an entire idea of what you are looking to do, if this does not fix your problem then maybe elaborate your next problem and we can see if we can help
Yes, I saw that, so now what I need to do is to, get a list of genes for one seqid (salida parameter), from the gff3 database. Then construct a list of tuples for each gene and then, add that data to the plot... Mmm I really need to meditate how to do this
EDIT: ok, thing is, I just have the chromosome (seqid) so in order for your solution to work, I need to get a list of the genes, but i don't know how to get it. The only thing I know is, for my script so work, is something like this:
The thing is, how do I get ge genes object/list?
Is it possible for you to share a short snapshot of your gff file? | https://www.biostars.org/p/418595/ | CC-MAIN-2020-50 | refinedweb | 560 | 69.75 |
public class Product { [Key] public int ProductId { get; set; } public string ProductName { get; set; } public string WarehouseLocation { get; set; } public string LastAuditedBy { get; set; } public int Quantity { get; set; } [ConcurrencyCheck] public byte[] TheRowversion { get; set; } }
The concurrent update scenario:
class Program { static void Main(string[] args) { var userA = new CatalogContext(); var userB = new CatalogContext(); // User A and User B happen to open the same record at the same time var a = userA.Products.Find(1); var b = userB.Products.Find(1); a.ProductName = "New & Improved" + a.ProductName + "!"; b.WarehouseLocation = "Behind appliances section"; b.LastAuditedBy = "Linus"; b.Quantity = 7; userA.SaveChanges(); // This will fail. As the rowversion of the same record since it was first loaded, had // already changed (the record was changed by User A first) when it's User B's time to save the same record. userB.SaveChanges(); }
Why a modal dialog is bad for editing a record? It's bad when you employ concurrency handling (a must) on records. Imagine your program is in kiosk mode, and let's say due to policy or technology constraints(say IE6), launching multiple instances of your app or using multiple tabs(e.g. IE6) isn't allowed/possible. How can we prevent User B from wasting his edits/time(imagine if there are 20 fields on a given record)? from losing his work? Let him open the latest changes of the record he happened to open at the same time with other users in a new tab(e.g. jQuery tab). With non-modal editing, User B can compare his changes against the latest changes of User A, and then he can copy-paste his changes to the latest changes, not much work will be wasted. You cannot facilitate this sort of thing with modal dialog editor, User B will be compelled to lose all his edits, and will just re-open the latest version from User A and re-input again all his concerning fields.
DDL:
create table Product ( ProductId int identity(1,1) not null primary key, ProductName nvarchar(100) not null unique, WarehouseLocation nvarchar(100) not null, LastAuditedBy nvarchar(100) not null, Quantity int not null, TheRowversion rowversion not null ) | http://www.ienablemuch.com/2011/05/entity-frameworks-concurrency-handling.html | CC-MAIN-2018-05 | refinedweb | 363 | 61.46 |
Use a label to determine whether to forward logs or not with Fluentd daemonset in K8s
While Implementing logging architecture in Kubernetes, we often run Fluentd as a daemonset to collect console logs from all pods and ship logs into EFK (or other storage). Usually, Fluentd daemonset is a cluster-level deployment, in which the config is shared by all namespaces and pods.
Under some circumstances, we’ll like to give the decision back to pods, so pod owners can decide whether they want their console logs to be shipped and stored.
To achieve this goal, pod owners should add a label to a pod, for example, we named it console_log_forward, with a value noforward, any other value strings will be ignored.
template:
metadata:
labels:
app: myapp
console_log_forward: noforward
And in Fluentd daemonset’s fluentd.config, add a new match paragraph with @type rewrite_tag_filter.
Explanation of this config is: When matching pattern (/^noforward$/) in the field ($.kubernetes.labels.console_log_forward), rewrite the log tag into “clear”.
And we need to add another match paragraph, when matching tag “clear”, use @type null to drop it.
<match kubernetes.**>
@type rewrite_tag_filter
<rule>
key $.kubernetes.labels.console_log_forward
pattern /^noforward$/
tag clear
</rule> # If there's other rewrite rules, add them here
</match><match clear>
@type null
</match>
In this example case,
- Pods with console_log_forward: noforward label => pod logs will NOT be forward by fluentd.
- Pods without console_log_forward label => pod logs will not enter the above <match>, logs with continually be handled by the rest of the fluentd.config
- Pods with console_log_forward label but with a value other than “noforward” => pod logs will not enter the above <match>, log with continually be handled by the rest of your fluentd.config. | https://rubyfaby.medium.com/use-a-label-to-determine-whether-to-forward-logs-or-not-with-fluentd-daemonset-in-k8s-f337b1b6af77?source=user_profile---------1---------------------------- | CC-MAIN-2022-33 | refinedweb | 284 | 50.26 |
Hello,
I have four children to a parent. They all stick together wherever they go. The children are placed on the right, left, up and down with respect to the parent. Due the rotation of the parent, at any point I would like to find the child which occupies the minimum/Maximum X position and the minimum/maximum Z Position among everything.
What is the best way to solve it or is there any in-built Unity3d function which could grant access to find my child position or even compare the position with all the children and find the min and max?
Any help or advice is highly appreciated.
Thank you Devs
GAME : Top-Down View. i.e. along x and z axis.
P.S. :If you devs recommend to keep it simple, first I would like to find the min then.
Answer by whydoidoit
·
Jun 14, 2012 at 07:38 PM
Can't test this as I'm currently reimporting all my game assets :( - But this should do it:
using System.Linq;
var leftMost = transform.Cast<Transform>().OrderBy(t=>t.position.x).First();
var topMost = transform.Cast<Transform>().OrderBy(t=>t.position.z).First();
Obviously use .Last() for the maximum position.
Just a curiousity question but...how's the compatibility for Linq across different platforms? I've been straying away from using it because my boss wants VERY high compatibility, and I can't seem to find any nice resources detailing it.
It's compatible on US and C# so long as you don't use the mini mscorlib.dll
See here supported on everything except micro
It will work find on an iPad - it certainly does for me :)
Sorry about the x/z mix.
The name 'Joystick' does not denote a valid type ('not found')
2
Answers
Children must not flip with parent.
3
Answers
Can someone help me fix my Javascript for Flickering Light?
6
Answers
Take array and waypoints from parent spawnner
2
Answers
Add Pointer Down event trigger only to parent object.
1
Answer | https://answers.unity.com/questions/267249/find-the-min-and-max-position-of-children.html?sort=oldest | CC-MAIN-2020-45 | refinedweb | 339 | 65.83 |
Shorter might be better
It's one of the most basic axioms of the hardcore computer programmer: because shorter equates to less typing, shorter must therefore be intrinsically better. This philosophy gave birth to IDEs like vi, where commands like
:wq! and
28G hold a wealth of meaning. It also led to some of the most cryptic code you'll ever come across -- variables like
ar for Agile Runner (or is it Argyle, or Atomic Reactor... well, you get the idea).
Sometimes, in an effort to be short, programmers leave clarity somewhere back in the rearview mirror. That said, the backlash against short has resulted in code that is so verbose as to be painful. Variables named
theAtomicReactorLocatedInPhiladelphia are just as annoying and unwieldy as those named
ar. There must be a happy medium, right?
That happy medium, at least as far as I'm concerned, is rooted in finding convenient ways to do things that aren't short just for the sake of being short. As a great example of just such a solution, Java 5.0 introduces a new version of the
for loop, which I refer to as
for/in. It's also called foreach, and sometimes enhanced for -- but these all refer to the same construct. Whatever you choose to call it,
for/in makes for simpler code, as you'll see in the examples throughout this article.
Losing an Iterator
The most fundamental difference in using
for/in, as opposed to "plain ol' for", is that you won't have to use a counter (usually called
i or
count) or
Iterator. Check out Listing 1, which shows a
for loop that uses an
Iterator:
Listing 1. for loops, old-school style
public void testForLoop(PrintStream out) throws IOException {
    List list = getList();    // initialize this list elsewhere

    for (Iterator i = list.iterator(); i.hasNext(); ) {
        Object listElement = i.next();
        out.println(listElement.toString());

        // Do something else with this list element
    }
}
Note: If you've been following my articles on the new features in Tiger (see Resources), you know that I often thank O'Reilly Media, Inc., which allows me to publish code samples from my Tiger book in this article format. That means you get code that has gone through more testing, and more commenting, than I could otherwise provide to you.
If you're expecting a lengthy description of how to convert this code over to the new
for/in loop, I'm about to disappoint you. Listing 2 shows the loop in Listing 1 using
for/in, and it's remarkably similar. Take a look, and I'll explain what's going on in as much detail as I can muster (which will still barely take a paragraph).
Listing 2. Converting over to for/in
public void testForInLoop(PrintStream out) throws IOException {
    List list = getList();    // initialize this list elsewhere

    for (Object listElement : list) {
        out.println(listElement.toString());

        // Do something else with this list element
    }
}
The basic syntax of the
for/in loop is shown in Listing 3. This may look a little odd if you're not used to reading specifications, but it's pretty easy when you walk through it piece by piece.
Listing 3. Basic for/in loop structure
for (declaration : expression)
    statement
declaration is just a variable, like
Object listElement. This variable should be typed so that it's compatible with each item in the list, array, or collection being iterated over. In the case of Listing 2,
list contains objects, so that's the type of
listElement.
expression is ... well ... an expression. It should evaluate to something that can be iterated over (more on what that means under the hood later). For now, just ensure that expression evaluates to a collection or an array. This expression can be as simple as a variable (as shown in Listing 2) or a method call (like
getList()), or a complex expression that involves boolean logic or a ternary operator. As long as it returns an array or collection, things will work fine.
statement is a placeholder for the contents of the loop, which operates on the variable defined in declaration; of course, this is a loop, so statement is applied to each item in the array or collection. And, with bracketing (
{ and
}), you can use multiple statements.
The long and the short of it is that you create a variable, indicate the array or collection to iterate over, and then operate upon the variable you defined. There is no assignment of each item in the list, as
for/in handles that for you. Of course, if this is still a little unclear, just keep reading -- there's a wealth of examples that will make this plenty clear.
Before moving on, though, I want to supply a few more specification-style illustrations of how
for/in works. Listing 4 shows the
for/in loop in action when a genericized type is supplied. This is actually what the statement looks like once the compiler converts it internally to a normal
for loop.
Did you catch that? The compiler actually turns this shorter, more convenient
for/in statement into a compiler-friendly
for loop -- and you don't bear the brunt of the work. That's why I call this convenient, rather than simply short.
Listing 4. for/in loop, in a transformed state, with an Iterable
for (Iterator<E> #i = (expression).iterator(); #i.hasNext(); ) {
    declaration = #i.next();
    statement
}
Listing 5 is another
for/in after compiler transformation, this time without a genericized type. The same sort of thing is going on, although it's a little simpler. In either case, though, you can see that it's easily to mentally (and programmatically) convert a
for/in statement to a normal
for statement. Once you can do this conversion mentally, things get awfully easy.
Listing 5. for/in loop, transformed, where there is no parameterized type
for (Iterator #i = (expression).iterator(); #i.hasNext(); ) {
    declaration = #i.next();
    statement
}
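To see that the conversion really is mechanical, here's an example of my own (not one of the numbered listings) with the two forms side by side. The hand-expanded version mirrors what the compiler generates, and both compute the same result:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class TransformDemo {

    // The convenient form
    public static int sumForIn(List<Integer> list) {
        int sum = 0;
        for (Integer n : list) {
            sum += n;
        }
        return sum;
    }

    // Roughly what the compiler turns the for/in loop into
    public static int sumDesugared(List<Integer> list) {
        int sum = 0;
        for (Iterator<Integer> i = list.iterator(); i.hasNext(); ) {
            Integer n = i.next();
            sum += n;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> primes = Arrays.asList(2, 3, 5, 7);
        System.out.println(sumForIn(primes) + " == " + sumDesugared(primes));
    }
}
```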
Working with arrays
Now that you've got the basic semantics down, you're ready to move to some more concrete examples. You've already seen how for/in works with lists; it's just as easy to work with arrays. Arrays, like collections, are assigned values (as shown in Listing 6), and then those values are routinely peeled off one by one, and operated upon.
Listing 6. Simple array initialization
int[] int_array = new int[4];
String[] args = new String[10];
float[] float_array = new float[20];
In situations where you might use for and a counter or index variable, you can now use for/in (assuming you're working with Tiger, of course). Listing 7 shows another simple example:
Listing 7. Looping over an array with for/in is a piece of cake
public void testArrayLooping(PrintStream out) throws IOException {
    int[] primes = new int[] { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 };

    // Print the primes out using a for/in loop
    for (int n : primes) {
        out.println(n);
    }
}
There shouldn't be anything particularly revelatory here -- this is about as basic as it gets. Arrays are typed, and so you know exactly what variable type is going to be needed for each item in the array. This example creates the variable (named n in this case), and then operates upon that variable. Pretty easy, isn't it? I told you there's nothing complicated here.
It doesn't matter what types are in the array -- you just choose the right type for your declaration. In Listing 8, an array contains Lists. So you've actually got an array of collections here. Still, using for/in is simple as can be.
Listing 8. It's also possible to loop over an object array with for/in
public void testObjectArrayLooping(PrintStream out) throws IOException {
    List[] list_array = new List[3];

    list_array[0] = getList();
    list_array[1] = getList();
    list_array[2] = getList();

    for (List l : list_array) {
        out.println(l.getClass().getName());
    }
}
You can even layer in your for/in loops, as shown in Listing 9:
Listing 9. There's nothing wrong with using for/in inside of for/in!
public void testObjectArrayLooping(PrintStream out) throws IOException {
    List[] list_array = new List[3];

    list_array[0] = getList();
    list_array[1] = getList();
    list_array[2] = getList();

    for (List l : list_array) {
        for (Object o : l) {
            out.println(o);
        }
    }
}
Working with collections
Once again, simplicity is the watchword. There's simply nothing tricky or complicated about iterating over collections using for/in -- it works just like you've already seen, with both lists and arrays. Listing 10 shows an example of iterating over both a List and a Set, and has little in the way of surprise to offer. Still, walk through the code, and make sure you're clear on what's going on.
Listing 10. Lots of simple loops in this program demonstrate how to use for/in
package com.oreilly.tiger.ch07;

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ForInDemo {

    public static void main(String[] args) {
        // These are collections to iterate over below
        List wordlist = new ArrayList();
        Set wordset = new HashSet();

        // Fill both collections from the command-line arguments
        System.out.println("Assigning arguments to lists...");
        for (String word : args) {
            System.out.print(word + " ");
            wordlist.add(word);
            wordset.add(word);
        }
        System.out.println();

        System.out.println("Printing words from wordlist (ordered, with duplicates)...");
        for (Object word : wordlist) {
            System.out.print((String)word + " ");
        }
        System.out.println();

        System.out.println("Printing words from wordset (unordered, no duplicates)...");
        for (Object word : wordset) {
            System.out.print((String)word + " ");
        }
    }
}
Listing 11 shows the output of this program (with some data thrown in on the command line for demonstration purposes):
Listing 11. The output does just what you would expect -- lots of printing!
run-ch07:
   [echo] Running Chapter 7 examples from Java 5.0 Tiger: A Developer's Notebook
   [echo] Running ForInDemo...
   [java] Assigning arguments to lists...
   [java] word1 word2 word3 word4 word1
   [java] Printing words from wordList (ordered, with duplicates)...
   [java] word1 word2 word3 word4 word1
   [java] Printing words from wordset (unordered, no duplicates)...
   [java] word4 word1 word3 word2
The pain of a typecast
So far, when dealing with collections, you've seen for/in used with a generic variable type, like Object. This is nice, but doesn't really take advantage of another important Tiger feature: generics (sometimes called parameterized types). I'll leave the details of generics for developerWorks' upcoming tutorial on the subject, but generics make for/in even more powerful.
Remember that the declaration portion of a for/in statement creates a variable of the type of each item in the collection being iterated over. In arrays, this was very specific, as arrays are strongly typed -- an int[] can only contain ints -- and therefore there are no typecasts in the loop. The same is possible when you use typed lists, via generics. Listing 12 shows a few simple parameterized collections:
Listing 12. Adding a parameter to collection types means typecasting can be avoided later on
List<String> wordlist = new ArrayList<String>();
Set<String> wordset = new HashSet<String>();
Now, your for/in loop can ditch plain old Object and be more specific. Listing 13 demonstrates this:
Listing 13. When you know the types within the collection, your loop body can be more type-specific
for (String word : wordlist) {
    System.out.print(word + " ");
}
As a more complete example, Listing 14 takes the program shown in Listing 10, and augments it with genericized lists and more specific for/in loops:
Listing 14. Listing 10 can be rewritten to take advantage of generics
package com.oreilly.tiger.ch07;

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ForInDemo {

    public static void main(String[] args) {
        // These are collections to iterate over below
        List<String> wordlist = new ArrayList<String>();
        Set<String> wordset = new HashSet<String>();

        // Fill both collections from the command-line arguments
        System.out.println("Assigning arguments to lists...");
        for (String word : args) {
            System.out.print(word + " ");
            wordlist.add(word);
            wordset.add(word);
        }
        System.out.println();

        System.out.println("Printing words from wordlist (ordered, with duplicates)...");
        for (String word : wordlist) {
            System.out.print(word + " ");
        }
        System.out.println();

        System.out.println("Printing words from wordset (unordered, no duplicates)...");
        for (String word : wordset) {
            System.out.print(word + " ");
        }
    }
}
Of course, the typecasting doesn't completely disappear in these cases. However, you're offloading the work onto the compiler (which is more or less what generics does, for those who are interested in that sort of thing). At compile-time, all of these types will be checked, and you'll get any errors accordingly. And if someone else can do this work -- well, that's good for everyone, right?
Integrating classes with for/in
So far, I've dealt only with iteration over Java's pre-packaged classes and types -- arrays, lists, maps, sets, and other collections. While that's a pretty good deal, the beauty of any programming language is the ability to define your own classes. Custom objects are the backbone of any large-scale application. This section deals with just that -- the concepts and steps involved in allowing your own objects to be used by the for/in construct.
A new interface
You should be familiar with the java.util.Iterator interface by now -- just in case you're not, Listing 15 shows this interface, as it appears in Tiger:
Listing 15. Iterator has long been a mainstay of the Java language
package java.util;

public interface Iterator<E> {
    public boolean hasNext();
    public E next();
    public void remove();
}
To take advantage of for/in, though, you'll need to add another interface to your domain knowledge -- java.lang.Iterable. This interface is shown in Listing 16:
Listing 16. The Iterable interface is the cornerstone of the for/in construct
package java.lang;

public interface Iterable<E> {
    public java.util.Iterator<E> iterator();
}
For your object or class to work with for/in, it needs to implement the Iterable interface. This leaves you with two basic scenarios:
- Extend an existing collection class that already implements Iterable (and therefore already supports for/in).
- Handle iteration manually, by defining your own implementation of Iterable.
Extending an existing collection class
If at all possible, I strongly recommend you have your custom objects extend an existing collection. Things are dramatically simpler, and you get to avoid all sorts of messy detail. Listing 17 shows a class that does just this:
Listing 17. Extending an existing collection is an easy way to take advantage of for/in
package com.oreilly.tiger.ch07;

import java.util.LinkedList;
import java.util.List;

public class GuitarManufacturerList extends LinkedList<String> {

    public GuitarManufacturerList() {
        super();
    }

    public boolean add(String manufacturer) {
        if (manufacturer.indexOf("Guitars") == -1) {
            return false;
        } else {
            super.add(manufacturer);
            return true;
        }
    }
}
Because LinkedList already works with for/in, you can use this new class with for/in with no special code. Listing 18 demonstrates this -- and how little work was required to make this possible:
Listing 18. Iterating over the custom list class requires no special code
package com.oreilly.tiger.ch07;

import java.io.IOException;
import java.io.PrintStream;

public class CustomObjectTester {

    /** A custom object that extends List */
    private GuitarManufacturerList manufacturers;

    public CustomObjectTester() {
        this.manufacturers = new GuitarManufacturerList();
    }

    public void testListExtension(PrintStream out) throws IOException {
        // Add some items for good measure
        manufacturers.add("Epiphone Guitars");
        manufacturers.add("Gibson Guitars");

        // Iterate with for/in
        for (String manufacturer : manufacturers) {
            out.println(manufacturer);
        }
    }

    public static void main(String[] args) {
        try {
            CustomObjectTester tester = new CustomObjectTester();
            tester.testListExtension(System.out);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Handling iteration manually
In certain unusual cases -- and I had a hard time thinking of many, to be honest -- you may need to perform specific behavior when your custom objects are iterated over. In these (rather unfortunate) cases, you'll have to take things into your own hands. Listing 19 shows you how this works -- it's not as complex as it is a lot of work, so I'll let you walk through this code on your own. This class provides a wrapper over a text file that lists each line of a file as it is iterated over.
Listing 19. With some patience, you can implement Iterable on your own, providing custom behavior in looping
package com.oreilly.tiger.ch07;

import java.util.Iterator;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

/**
 * This class allows line-by-line iteration through a text file.
 * The iterator's remove() method throws UnsupportedOperationException.
 * The iterator wraps and rethrows IOExceptions as IllegalArgumentExceptions.
 */
public class TextFile implements Iterable<String> {

    // Used by the TextFileIterator below
    final String filename;

    public TextFile(String filename) {
        this.filename = filename;
    }

    // This is the one method of the Iterable interface
    public Iterator<String> iterator() {
        return new TextFileIterator();
    }

    // This non-static member class is the iterator implementation
    class TextFileIterator implements Iterator<String> {

        // The stream being read from
        BufferedReader in;

        // Return value of next call to next()
        String nextline;

        public TextFileIterator() {
            // Open the file and read and remember the first line
            // Peek ahead like this for the benefit of hasNext()
            try {
                in = new BufferedReader(new FileReader(filename));
                nextline = in.readLine();
            } catch (IOException e) {
                throw new IllegalArgumentException(e);
            }
        }

        // If the next line is non-null, then we have a next line
        public boolean hasNext() {
            return nextline != null;
        }

        // Return the next line, but first read the line that follows it
        public String next() {
            try {
                String result = nextline;

                // If we haven't reached EOF yet...
                if (nextline != null) {
                    nextline = in.readLine(); // Read another line
                    if (nextline == null)
                        in.close();           // And close on EOF
                }

                // Return the line we read last time through
                return result;
            } catch (IOException e) {
                throw new IllegalArgumentException(e);
            }
        }

        // The file is read-only; we don't allow lines to be removed
        public void remove() {
            throw new UnsupportedOperationException();
        }
    }

    public static void main(String[] args) {
        String filename = "TextFile.java";
        if (args.length > 0)
            filename = args[0];

        for (String line : new TextFile(filename))
            System.out.println(line);
    }
}
The bulk of the work here is in implementing an Iterator, which is then returned by the iterator() method. Everything else is pretty straightforward. As you can see, though, it takes a lot more to implement the Iterable interface manually than simply extending a class that does this work for you.
What you can't do
As with all good things -- and I do think that for/in is one of them -- there are limitations. Because of how for/in is set up, and specifically because it leaves out explicit usage of Iterator, there are things you just can't get done with this new construct.
Positioning
The most notable limitation is the inability to determine the position you're in within a list or array (or custom object). To refresh your memory, Listing 20 shows how a typical for loop might be used -- note the usage of an index variable to not only move through the list, but also to indicate position:
Listing 20. Using the iteration variable in a "normal" for loop
List<String> wordList = new LinkedList<String>();

for (int i=0; i<args.length; i++) {
    wordList.add("word " + (i+1) + ": '" + args[i] + "'");
}
This isn't a fringe usage, either -- this is common programming. However, you can't do this very simple task with for/in, as Listing 21 illustrates:
Listing 21. It's impossible to access position in a for/in loop
public void determineListPosition(PrintStream out, String[] args) throws IOException {
    List<String> wordList = new LinkedList<String>();

    // Here, it's easy to find position
    for (int i=0; i<args.length; i++) {
        wordList.add("word " + (i+1) + ": '" + args[i] + "'");
    }

    // Here, it's not possible to locate position
    for (String word : wordList) {
        out.println(word);
    }
}
Without any type of counter variable (or Iterator), you're simply out of luck here. If you need position, stick with "ordinary" for. Listing 22 shows another common usage of position -- working with strings:
Listing 22. Another problem -- string concatenation
StringBuffer longList = new StringBuffer();

for (int i=0, len=wordList.size(); i < len; i++) {
    if (i < (len-1)) {
        longList.append(wordList.get(i))
                .append(", ");
    } else {
        longList.append(wordList.get(i));
    }
}

out.println(longList);
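If you still want the convenience of for/in, one workaround -- my own sketch, not from the article -- is simply to maintain the counter yourself alongside the loop. You keep the clean iteration syntax, though you're back to managing a variable by hand:

```java
import java.util.LinkedList;
import java.util.List;

public class PositionWorkaround {

    // Builds the same "word N: 'x'" entries as Listing 20,
    // but with for/in plus a manually maintained counter
    static List<String> numberWords(String[] args) {
        List<String> wordList = new LinkedList<String>();
        int i = 0;
        for (String arg : args) {
            i++;
            wordList.add("word " + i + ": '" + arg + "'");
        }
        return wordList;
    }

    public static void main(String[] args) {
        for (String entry : numberWords(new String[] { "alpha", "beta" })) {
            System.out.println(entry); // word 1: 'alpha' / word 2: 'beta'
        }
    }
}
```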
Removing items
Another limitation is in item removal. As Listing 23 shows, it's not possible to remove an item during list iteration:
Listing 23. It's impossible to remove an item in a for/in loop
public void removeListItems(PrintStream out, String[] args) throws IOException {
    List<String> wordList = new LinkedList<String>();

    // Assign some words
    for (int i=0; i<args.length; i++) {
        wordList.add("word " + (i+1) + ": '" + args[i] + "'");
    }

    // Remove all words with "1" in them. Impossible with for/in!
    for (Iterator i = wordList.iterator(); i.hasNext(); ) {
        String word = (String)i.next();
        if (word.indexOf("1") != -1) {
            i.remove();
        }
    }

    // You can print the words using for/in
    for (String word : wordList) {
        out.println(word);
    }
}
In the big picture, these aren't limitations as much as they are guidelines on when to use for and when to use for/in. A subtle thing, perhaps, but worth noting.
The bottom line is that you will never find a case -- at least that I'm aware of -- where you need for/in. Instead, consider it a nice convenience function that can be brought in to make code a little clearer, a little more concise, all without getting into the terse code that gives the reader a headache.
Resources
- Download Tiger and try it out for yourself.
- The official J2SE 5.0 home page is a comprehensive resource you won't want to miss.
- For specifics on Tiger, see John Zukowski's Taming Tiger series, which offers short tips on the additions and changes in J2SE 5.0.
- Brett McLaughlin has also contributed some articles on new features in Tiger:
- A two-part series on annotations in Tiger: "Annotations in Tiger, Part 1: Add metadata to Java code"; and "Annotations in Tiger, Part 2: Custom annotations"
- "Getting started with enumerated type"
- Java 1.5 Tiger: A Developer's Notebook (O'Reilly & Associates; 2004), by Brett McLaughlin and David Flanagan, covers almost all of Tiger's newest features -- including annotations -- in a code-centric, developer-friendly format.
- Find hundreds of articles about every aspect of Java programming. | http://www.ibm.com/developerworks/java/library/j-forin/index.html | CC-MAIN-2015-22 | refinedweb | 3,554 | 55.64 |
S. Somasegar is the corporate vice president of the Developer Division at Microsoft. Learn more about Somasegar.
I’m excited today to announce a new DevLabs project: Microsoft Codename “Casablanca”. You can learn more about the project and download the bits from the DevLabs site.
I’ve previously discussed some of the major trends that have influenced the direction we’ve taken for developer tools, with a key example being applications that connect devices to continuous services running in the cloud. In order to develop such applications efficiently, developers need productive, high-level programming models and APIs for connecting to and interacting with services. Similarly, in order to build those services in a scalable manner, developers need productive models that compose well and that are fundamentally asynchronous.
Take .NET as an example. C#, Visual Basic, and F# developers all have a robust and scalable networking stack, which has been made all the more productive with .NET 4.5 advances such as HttpClient, language support for asynchrony, and an ASP.NET Web API framework for easily building HTTP services. Or take Node.js, which, with the Windows Azure SDK, enables you to easily build fast and scalable network applications for the cloud using JavaScript. "Casablanca" is our incubation effort to bring that same kind of productivity to C++ developers targeting the cloud. Here is a glimpse of what code written with it looks like:
using namespace http;

int __cdecl wmain(int argc, wchar_t * argv[])
{
    // ...
}
Last week, I blogged about some of our efforts to “meet developers where they are.” Our work around C++ has long been a significant component of that, with so much software in the world developed in the language and with its heavy cross-platform adoption. Taking C++ to the cloud with “Casablanca” is another exciting step in that journey.
As with other DevLabs releases, this release of “Casablanca” is meant for you to experiment with and to provide feedback on.. We look forward to hearing from you in the forums.
Namaste! | http://blogs.msdn.com/b/somasegar/archive/2012/04/30/devlabs-c-cloud-services-and-you.aspx | CC-MAIN-2013-20 | refinedweb | 304 | 55.74 |
How to use case insensitive Filter in OpenUI5
Introduction
We have newcomers in our Techedge office in Lucca, and we're training them to become awesome OpenUI5 ninja devs (the path is long, but we believe in them).
They're fresh out of IT university and don't yet know how to get their hands dirty and develop with SAPUI5. So we gave them some good material to read and watch (the SAPUI5 Walkthrough tutorial and Developing Web Apps with SAPUI5).
After they’ve finished this introduction I would like to test what they’ve learned and their adaptive skills. So I’ve started to create some UI5 exercise examples that go from easy to hard.
In one of these example apps, I needed to filter the Business Partner's CompanyName attribute (I'm using the Netweaver Gateway Demo ES5).
I’ve added a SearchField, I’ve attached the search event to my implementation on the Controller and tested it.
So in the GIF, I'm trying to filter them with the string "SAP" and everything works as expected, but if I try with "Sap" I get no records.
For something like 10 long seconds I scratched my head asking "Why?"
Then the lightbulb lit up, and I remembered that I'd already handled something like this for a Trenitalia project, and that I'd also submitted a proposal on OpenUI5's GitHub issue system.
The problem is all about the case-insensitive filter.
The solution
Here’s my actual solution (pretty easy uh?)
/**
 * Method called by the FilterBar that will start a new search
 */
onSearch: function() {
    // I'm getting the search query like this because we're using a FilterBar
    // and not a SearchField, as I said in the blog post ;)
    var oFilter = this.getView().getModel("filters");
    var sPartnerName = oFilter.getProperty("/partnerName");
    var aFilters = [];

    if (sPartnerName) {
        aFilters.push(
            this.createFilter("CompanyName", FilterOperator.Contains, sPartnerName, true)
        );
    }

    this.getView().byId("businessPartnerTable").getBinding("items").filter(aFilters);
},

createFilter: function(key, operator, value, useToLower) {
    return new Filter(
        useToLower ? "tolower(" + key + ")" : key,
        operator,
        useToLower ? "'" + value.toLowerCase() + "'" : value
    );
}
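To see what the useToLower branch actually produces, here's a tiny plain-JavaScript sketch (no UI5 involved -- buildFilterParts is a made-up helper that mirrors createFilter's logic): when useToLower is true, both the filter path and the value are rewritten so the backend compares lowercase against lowercase.

```javascript
// Illustration only: mimics the path/value rewriting done in createFilter()
function buildFilterParts(key, value, useToLower) {
  return {
    path: useToLower ? "tolower(" + key + ")" : key,
    value1: useToLower ? "'" + value.toLowerCase() + "'" : value
  };
}

var parts = buildFilterParts("CompanyName", "Sap", true);
console.log(parts.path);   // tolower(CompanyName)
console.log(parts.value1); // 'sap'
```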
The implementation problem
So, if you’re backend system support the tolower operation (like HANA) you’re done.
Otherwise, you need to implement it on your own or you’ll get this error:
{ "error": { "code": "/IWBEP/CM_MGW_EXPR/000", "message": { "lang": "en", "value": "Function tolower is not supported." }, "innererror": { "application": { "component_id": "OPU-BSE-SDE", "service_namespace": "/IWBEP/", "service_id": "GWSAMPLE_BASIC", "service_version": "0001" }, "transactionid": "B9E0A2A100BE0080E005B9656F36B72B", "timestamp": "20180917175331.1571030", "Error_Resolution": { "SAP_Transaction": "Run transaction /IWFND/ERROR_LOG on SAP Gateway hub system and search for entries with the timestamp above for more details", "SAP_Note": "See SAP Note 1797736 for error analysis ()", "Additional_SAP_Note": "See SAP Note 1922246 (). This SAP Note contains specific error information.", "Batch_SAP_Note": "See SAP Note 1869434 for details about working with $batch ()" }, "errordetails": [{ "code": "/IWBEP/CX_MGW_EXPR_OSQL_EXCP", "message": "Function tolower is not supported", "propertyref": "", "severity": "error", "target": "" }] } } }
Happy programming!
Hi,

Thanks, that was really clear.

But in my case, it seems my backend system doesn't support the tolower operation.

So what do you mean by having to implement it on your own? A starting point or a hint on how to do it would be really great.

Regards.
Filter now contains a "caseSensitive" parameter which you can set to false.
Hi,

Can you kindly elaborate a little on how the solution can be achieved? Maybe a working sample would be really helpful for the community here.

Thanks & Regards,

Samson
Hi Samson Moses
As per input from Nils Vanderheyden, I just tried it out; it's working with the below line of code.
new Filter({
    path: 'name',
    caseSensitive: false,
    operator: FilterOperator.Contains,
    value1: sQuery
})
We don't need any configuration in the backend; we can achieve it through the UI5 filter itself.
Note:
Actually, tolower() works in OData V2 but not in OData V4; the caseSensitive approach above works in V4.
Just for others' reference.
How do you do case-insensitive filters in Fiori Elements (list report)?
You can create a custom method that fetches all binding filters from the list report grid table, loops through each filter, and modifies its sPath and value. This is not the ideal way, but just a workaround in Fiori List Report applications. You can call such a method in the onBeforeRebindTableExtension function, passing oEvent as an input parameter.
Java - Polymorphism
Polymorphism is the ability of an object to take on many forms. The most common use of polymorphism in OOP occurs when a parent class reference is used to refer to a child class object.
Any Java object that can pass more than one IS-A test is considered to be polymorphic. In Java, all objects are polymorphic, since any object will pass the IS-A test for its own type and for the class Object.
It is important to know that the only possible way to access an object is through a reference variable. A reference variable can be of only one type. Once declared, the type of a reference variable cannot be changed.
The reference variable can be reassigned to other objects provided that it is not declared final. The type of the reference variable would determine the methods that it can invoke on the object.
A reference variable can refer to any object of its declared type or any subtype of its declared type. A reference variable can be declared as a class or interface.
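As a quick sketch (the class names here are just for illustration), a Deer class that extends Animal and implements a Vegetarian interface passes several IS-A tests, so a single object can be reached through reference variables of four different declared types:

```java
// Hypothetical types to illustrate the IS-A relationships described above
interface Vegetarian {}
class Animal {}
class Deer extends Animal implements Vegetarian {}

public class IsADemo {
    public static void main(String[] args) {
        Deer d = new Deer();

        // One object, four reference variables of different declared types:
        Animal a = d;
        Vegetarian v = d;
        Object o = d;

        System.out.println(d instanceof Animal);     // true
        System.out.println(d instanceof Vegetarian); // true
        System.out.println(d instanceof Object);     // true
    }
}
```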
Virtual Methods:
In this section, I will show you how the behavior of overridden methods in Java allows you to take advantage of polymorphism when designing your classes.
We already have discussed method overriding, where a child class can override a method in its parent. An overridden method is essentially hidden in the parent class, and is not invoked unless the child class uses the super keyword within the overriding method.
/* File name : Employee.java */
public class Employee {
   private String name;
   private String address;
   private int number;

   public Employee(String name, String address, int number) {
      System.out.println("Constructing an Employee");
      this.name = name;
      this.address = address;
      this.number = number;
   }

   public void mailCheck() {
      System.out.println("Mailing a check to " + this.name + " " + this.address);
   }

   public String getName() {
      return name;
   }

   public String getAddress() {
      return address;
   }

   public int getNumber() {
      return number;
   }
}
Now suppose we extend the Employee class as follows:
/* File name : Salary.java */
public class Salary extends Employee {
   private double salary; // Annual salary

   public Salary(String name, String address, int number, double salary) {
      super(name, address, number);
      setSalary(salary);
   }

   public void mailCheck() {
      System.out.println("Within mailCheck of Salary class");
      System.out.println("Mailing check to " + getName() + " with salary " + salary);
   }

   public double getSalary() {
      return salary;
   }

   public void setSalary(double newSalary) {
      if (newSalary >= 0.0) {
         salary = newSalary;
      }
   }

   public double computePay() {
      System.out.println("Computing salary pay for " + getName());
      return salary / 52;
   }
}
Now study the following program carefully and try to determine its output:
/* File name : VirtualDemo.java */
public class VirtualDemo {

   public static void main(String[] args) {
      Salary s = new Salary("Mohd Mohtashim", "Ambehta, UP", 3, 3600.00);
      Employee e = new Salary("John Adams", "Boston, MA", 2, 2400.00);

      System.out.println("Call mailCheck using Salary reference --");
      s.mailCheck();

      System.out.println("\nCall mailCheck using Employee reference--");
      e.mailCheck();
   }
}

This would produce the following result:

Constructing an Employee
Constructing an Employee
Call mailCheck using Salary reference --
Within mailCheck of Salary class
Mailing check to Mohd Mohtashim with salary 3600.0

Call mailCheck using Employee reference--
Within mailCheck of Salary class
Mailing check to John Adams with salary 2400.0
Here we instantiate two Salary objects: one using a Salary reference s, and the other using an Employee reference e.
While invoking s.mailCheck() the compiler sees mailCheck() in the Salary class at compile time, and the JVM invokes mailCheck() in the Salary class at run time.
Invoking mailCheck() on e is quite different because e is an Employee reference. When the compiler sees e.mailCheck(), it resolves the call against the mailCheck() method in the Employee class.
Here, at compile time, the compiler used mailCheck() in Employee to validate this statement. At run time, however, the JVM invokes mailCheck() in the Salary class.
This behavior is referred to as virtual method invocation, and the methods are referred to as virtual methods. All methods in Java behave in this manner, whereby an overridden method is invoked at run time, no matter what data type of reference was used in the source code at compile time.
Hello.
Can anyone point me to a tutorial on how to properly organize header files?
What I have (this time, I really can't post the code itself) is an MFC document class that contains a vector of objects, each of which holds a pointer to a base class (defined in another header).
This base class is polymorphic with (currently) two derived classes. These classes need to have data type information from three header files that contain structs, macros, and externs that will apply to most of the different classes. However, I do not mean that the same structs and definitions are reused by the different derived classes, just that these headers have a huge number of typedefs that are not strongly related in code, but associate logically.
I have already wrapped each header in #ifndef/#define/#endif include guards, but there are still many "LNK2005: already defined in *.obj" errors and "LNK2001: unresolved external symbol" errors.
Here's what I mean, in pseudo code.
The files:
MFCDoc.h, .cpp:
BaseClass.h
Derived1.h, .cpp
Derived2.h, .cpp
Define1.h, Define2.h, Define3.h: The globals, typedef structs, macros and #defines.
//BaseClass.h
#ifndef BASECLASS
#define BASECLASS
#include "Define1.h"
#include "Define2.h"
#include "Define3.h"
#endif

//Derived1.h
#ifndef DERONE
#define DERONE
#include "BaseClass.h"
#endif

//Derived1.cpp
#include "Derived1.h"

//Derived2.h
#ifndef DERTWO
#define DERTWO
#include "BaseClass.h"
#endif

//Derived2.cpp
#include "Derived2.h"

//MFCDoc.h
#ifndef MFC_DOC
#define MFC_DOC
#include "BaseClass.h"
#include "Derived1.h"
#include "Derived2.h"
//Class1: Data class with pointer to baseclass type
//Class2: Document class, with vector of Class1 instances.
#endif

//MFCDoc.cpp
#include "MFCDoc.h"
Is this appropriate? It's been suggested that I try to leave two of the definition headers alone, so if there's an arrangement where I can have all those headers available to all the files without redefinition errors, I'd like to know how to do that.
Odoo Help
How to get total workers by button? [Closed]
The question has been closed.
My requirement is like this: when I add workers in the "select workers" tree view, the total of them needs to appear in the "total workers" field at the bottom right. It works when I save, or when I click the (update) button. I referred to the purchase module, but I can't find what function exactly is called when that button is clicked.
My whole code is uploaded here @ GitHub; refer to line 397 in bpl_view.xml and line 335 in bpl.py.
As per the purchase module, I wrote a function, but it has only a return statement. That also confused me.
def button_total(self, cr, uid, ids, context=None):
    return True
Please advise me on this issue, and please tell me why the records save automatically when the button is clicked -- it has only a `return True` statement?
Write this code:
def button_total(self, cr, uid, ids, context=None):
    tea_worker_line_ids = self.browse(cr, uid, ids[0], context=context).selected_tea_workers_line_ids or []
    total_tea_worker = len(tea_worker_line_ids)
    self.write(cr, uid, ids, {'total_workers': total_tea_worker}, context=context)
    return True
I am trying to find a way to get an Arduino to signal another Arduino, when the first one is finished, to start its code. Please let me know if you have any suggestions!
Depends on how far apart those two are.

You could use WLAN modules and communicate wirelessly.

You could use a single digital output as a "switch": connect it to the other Arduino's digital input via a cable and read for, let's say, HIGH. When the digital input is HIGH, toggle the code in loop() with a simple if.

I'm afraid you have to give more information, since your use case is too abstract to give a precise answer. There are possibilities, though.
The goal is for it to be in a terminal-style exhibit, so all devices will be close to each other, and I could do the digital option with a pin. I am staying away from wireless options, as I need it to be as reliable as possible. So here is one of the sketches:
#include <Adafruit_NeoPixel.h>

int Pins[13] = {22,23,24,26,27,28,30,31,32,34,35,36}; //14 is A0
//int Pins[13] = {14,12,11,10,9,8,2,3,4,5,6,7}; //14 is A0
//int Sequence[12] = {1,0,1,0,1,1,0,1,1,0,0,1};
int Sequence[4][13] = {
  {1,0,1,0,1,1,0,1,1,0,0,1},
  {1,1,0,0,1,1,0,1,1,1,0,0},
  {1,0,1,0,0,1,0,1,1,0,1,1},
  {0,1,1,0,1,0,1,0,0,1,1,0}};

int NeoPixel_Pin = 2; //15 is A1
int Relay_Pin = 4;    //16 is A2
int Seq = 0;
int Reset_Pin = 3;

Adafruit_NeoPixel strip = Adafruit_NeoPixel(12, NeoPixel_Pin, NEO_GRB + NEO_KHZ800);

void setup() {
  Serial.begin(9600);
  for(int x=0; x<13; x++) {
    pinMode(Pins[x], INPUT_PULLUP);
  }
  pinMode(Relay_Pin, OUTPUT);
  digitalWrite(Relay_Pin, LOW);
  pinMode(Reset_Pin, INPUT_PULLUP);
  strip.begin();
  strip.show(); // Initialize all pixels to 'off'
  delay(500);
  ResetMe();
}

void loop() {
  if(digitalRead(Reset_Pin) == 0) {
    ResetMe();
  }
  int Matched = 0;
  for(int x=0; x<12; x++) {
    if(1 - (digitalRead(Pins[x]) == Sequence[Seq][x])) {
      Matched++;
    }
  }
  //Matched = Matched - 1;
  Serial.println(Matched);
  if(Matched == 12) {
    SuccessMe();
  }
  strip.show();
}

void SuccessMe() {
  Serial.println("Success!");
  digitalWrite(Relay_Pin, HIGH);
  //delay(5000);
  //ResetMe();
}

void ResetMe() {
  Seq++;
  if(Seq > 3) {
    Seq = 0;
  }
  digitalWrite(Relay_Pin, LOW);
  for(int x=0; x<13; x++) {
    if(Sequence[Seq][x] == 0) {
      strip.setPixelColor(x, strip.Color(127, 0, 0));
    } else {
      strip.setPixelColor(x, strip.Color(0, 127, 0));
    }
  }
  strip.show();
  Serial.println("Reset");
  Serial.print("Next Sequence is: ");
  Serial.println(Seq);
  delay(1000);
}
Read the forum guidelines to see how to properly post code.
Please add your code in [.code.] tags (without the dots).
That being said: your best bet is communication via pins if it is just simple toggling (like a switch). For more complicated data transfer, check out the serial ports. Since all of the Arduinos are going to be close to each other, you could also control all of your stuff from a single Arduino, if the CPU power is enough for your application.
I need it to be as reliable as possible.
PS: I wish more escape room designers would think like that. Some dumbass thought it would be a fantastic idea to integrate shitty augmented reality. Only 1 device for 4 people. 100€ down the drain. Never going to go there again.
Please correct your post above and add code tags around your code:
[code]// your code is here[/code]
It should look like this:
// your code is here
(Also press ctrl-T (PC) or cmd-T (Mac) in the IDE before copying to indent your code properly)
————-
"Close" is very relative... my friends live 100m down the road, I would say they are close to me, almost next door given the gardens... so how close is "close" for you?
There is another escape-room-mission for you: learning to program
Take a look into this tutorial:
Arduino Programming Course
It is easy to understand and has a good mixture of explanations of important concepts and example code to get you going. So give it a try and report back with your opinion of the tutorial.
best regards Stefan | https://forum.arduino.cc/t/escape-room-project/686780 | CC-MAIN-2021-43 | refinedweb | 703 | 65.42 |
Radix tree (C)
Introduction
A Radix Tree, or Radix Trie, or Crit-bit Tree, or PATRICIA Trie, or Compact Prefix Tree, is a data structure used to search a set of bit sequences (generally character strings, but optionally other bit patterns like integers or IPv4 addresses). In that role, they can be used for the same applications as hash tables.
The advantages of radix trees over hash tables are that radix tries have worst-case lookup complexity O(k), where k is the key length (effectively constant for fixed-size keys), as opposed to a hash table's worst-case complexity of O(n) or O(log n), depending on whether the buckets are implemented as linked lists or as red-black trees, for example. Also, since radix trees are trees, the user can walk one and return all key/value pairs in lexicographic order, or obtain all keys with a given prefix. Finally, the space cost of radix trees is very deterministic: a radix tree with n items will have exactly n leaf nodes and n - 1 internal nodes, as befits a binary tree.
The advantage of a hash table over a radix tree is essentially that its buckets are all allocated at once, so for a hash table with no collisions the data are packed together and thus very cache-friendly. Radix trees, having their nodes allocated on demand, may have them scattered all over the heap, so one may expect to see many more cache misses, unless the tree is completely filled once and then treated as read-only.
Radix trees can contain leaves with only keys, in which case they work as sets rather than dictionaries. In this article, we'll write some routines to implement radix tries for a generic reference type, and then use those routines to implement both a set of ints and a map of char *s to void *.
Generic routines
The routines in this article are almost free-standing, depending only on the standard C99 header <stdint.h> for some datatype aliases. Besides that, a heap allocator is required, usually the system malloc(), whose header will be included later on if the specific implementation doesn't provide a different one.

<<Internal definitions>>=
#include <stdint.h>
The structure of the generic part of the implementation files looks like this:
<<Generic part>>=
<<Internal definitions>>
<<Internal structures>>
<<Generic helper functions>>
<<Generic algorithms>>
Data model
The radix tree is made up of three types of structure: one for holding the root of the tree (which will be called ROOTSTRUCT), one for representing the internal nodes, and one for representing the leaves. The leaf type varies according to what kind of data the tree accepts, and should ordinarily be a pointer to a structure, but can be a regular integer or a pointer to char in some cases. Our generic routines will use a preprocessor token, LEAFTYPE, to refer to this type, which will be defined in the concrete types.
When analysing radix trees, there are three major cases to consider: when the tree has no items, when it has a single item, and when it has more than one item. In the first case, the root object holds a NULL pointer; in the second case, it points directly to a leaf object; and in the third case, it points to an internal node. These three cases will have to be considered separately for each operation upon the tree.
Considering the above, the root struct must contain a reference to the root node of the tree, as well as a flag to tell if it is a leaf or an internal node. We will actually maintain this flag as a count of leaves within the tree, to help determine when the tree is empty in case of integer keys where zero is a valid key.
<<Internal structures>>=
struct ROOTSTRUCT {
    uint32_t leafcount;
    union {
        struct _internal_node * node;
        LEAFTYPE leaf;
    } root;
};
The internal nodes always contain references to two children which share a common key prefix up to a given bit b, called the critical bit for that subtree. Each child can be either a leaf, or an internal node with two or more leaves below it. We will need, then, two child references, two flags to tell whether each child is a leaf or an internal node, and the length of the common prefix. Here, we also define three preprocessor macros, IS_LEAF, SET_LEAF, and SET_NODE, to maintain the leaf flags.
<<Internal structures>>=
struct _internal_node {
    uint32_t is_leaf : 2;
    uint32_t critbit : 30;
    union {
        struct _internal_node * node;
        LEAFTYPE leaf;
    } child[2];
};
<<Internal definitions>>=
#define IS_LEAF(node, dir)  ((node)->is_leaf &   (1 << (dir)))
#define SET_LEAF(node, dir) ((node)->is_leaf |=  (1 << (dir)))
#define SET_NODE(node, dir) ((node)->is_leaf &= ~(1 << (dir)))
Finally, at this stage we know very little about the leaf type. The only things the generic algorithms need to be able to do with it, besides receiving it as a parameter and possibly returning it, are comparing two leaves to decide at which bit their keys first differ (or whether they differ at all), and retrieving the ith bit of the key for a given leaf. The algorithms might also need to refer to a non-existing leaf (for example, if the search asks for a key that doesn't exist in the tree), represented by the preprocessor token NO_LEAF.
The operation for comparison of two leaves will be called COMPARE, will receive two objects of type LEAFTYPE as parameters, and will return the first bit at which they differ or -1 if their keys are equivalent.
The operation for extracting a bit from a leaf's key will be called DECIDE and will receive a leaf, a bit position, and an extra parameter of type EXTRA_ARG, which is passed to the generic algorithms by their callers. For leaves with char * keys, this is generally the strlen() of the key to help determine where the key ends and avoid an illegal access, but can be whatever the concrete implementation of DECIDE requires. The operation returns either 0 or 1, which will be the value of the bit at the position requested.
Searching
The first operation we will consider is searching for a given key in a tree. We will assume, for now, that we already have:
- a tree which has been previously initialised and possibly populated;
- a LEAFTYPE object with the key we want to search for;
- a third argument of type EXTRA_ARG to be passed to DECIDE when it is called.
The operation will return the leaf which contains the requested key or NO_LEAF if no leaf with that key was found.
The three cases are:
- If the tree has no nodes, then it doesn't contain the requested key, whichever it may be. Return NO_LEAF;
- If the tree has a single node, a leaf, then COMPARE it with the key we are seeking and return it if it is the key we want, or return NO_LEAF if it isn't;
- If the tree has more than one node, walk down it, checking the critical bit for each internal node to know which subtree to look into, until you reach a leaf. Then treat that leaf as if it were the above case, COMPAREing with the desired key and returning it if it is equivalent.
<<Generic algorithms>>=
static LEAFTYPE _get(struct ROOTSTRUCT * tree, LEAFTYPE leaf, EXTRA_ARG aux)
{
    LEAFTYPE result;
    struct _internal_node * node;
    uint32_t dir;

    if (<<Check if tree is empty>>)
        return NO_LEAF;
    if (<<Check if root points to a leaf>>) {
        result = tree->root.leaf;
        <<Compare leaves and return>>
    }
    /* root points to a node */
    node = tree->root.node;
    <<Walk tree looking for leaf>>
    <<Compare leaves and return>>
}
We check whether a tree is empty by verifying if its leafcount is zero. If the leafcount is one, the root is pointing to a leaf.
<<Check if tree is empty>>=
tree->leafcount == 0
<<Check if root points to a leaf>>=
tree->leafcount == 1
The most basic operation when dealing with radix trees is walking down the internal nodes, comparing the critical bit for that internal node to the reference key, until a leaf is reached. The DECIDE operation is used here:
<<Walk tree looking for leaf>>=
while (1) {
    dir = DECIDE(leaf, node->critbit, aux);
    if (IS_LEAF(node, dir)) {
        result = node->child[dir].leaf;
        break;
    } else {
        node = node->child[dir].node;
    }
}
Finally, when we have a candidate leaf in result, we use COMPARE to check whether it is the leaf we want or not, and return either it or NO_LEAF.
<<Compare leaves and return>>=
if (COMPARE(result, leaf) == -1)
    return result;
else
    return NO_LEAF;
And with this, the generic part of the search algorithm is completed. The algorithm takes a tree, a leaf with a key somewhere within it, and returns the leaf with an equivalent key (where equivalency is defined by COMPARE returning -1) from the tree, if it exists.
Inserting
Now we will consider inserting a new leaf into a tree which has already been initialised. This operation may require, in case the tree is not already completely empty, a new internal node to be allocated. This requires us to consider that:
- the system might run out of memory, in which case we need to make sure both that the tree retains a valid structure and that the exceptional condition is properly signaled;
- the user might want to use a different allocator in place of the standard malloc().
The first issue will be solved by checking for the return value of the allocator before any changes are made to the tree and, if the allocation failed, setting a special reference parameter, similarly to what the strtof family of functions does with its second parameter. The second one will be dealt with by expecting a preprocessor token ALLOC which defaults to malloc, as well as a matching token DEALLOC which defaults to free for the deallocator.
<<Internal definitions>>=
#ifndef ALLOC
# include <stdlib.h>
# define ALLOC malloc
#endif
#ifndef DEALLOC
# include <stdlib.h>
# define DEALLOC free
#endif
The generic algorithm thus takes:
- a reference to a tree;
- the leaf to be added (the responsibility for allocating memory for the leaves is up to the specific functions, as they understand the requirements of each leaf type);
- an extra parameter of type EXTRA_ARG to be passed to the DECIDE function;
- a boolean value telling whether to replace or ignore the addition of a leaf if another is found with an equivalent key (collision policy);
- a pointer to an integer which will return -1 if an allocation error happened or 0 otherwise.
The algorithm will return a LEAFTYPE object which will usually be NO_LEAF, except when there already was a leaf with the same key as the one being added. In this case, the object returned is either the one that was replaced by the new leaf or the new leaf that didn't replace the existing one, depending on the collision policy parameter.
Here, the three cases are:
- when the tree is empty, simply add the new leaf at the root and update the leaf count;
- when the tree contains a single leaf, check which is the first bit where the new leaf and the existing one differ. If they don't differ at all, depending on the collision policy, either replace or don't replace the leaf and return the one that isn't connected to the tree for destruction by the specific code, if needed. If they do differ, allocate a new internal node and set its critical bit to the number above, and put both the new leaf and the existing leaf as its children. The root now points to the internal node and the leaf count is updated;
- when the tree contains more than one leaf, walk down it until reaching a leaf, then compare this leaf with the one to be added. If their keys are equivalent, replace it or not as per the collision policy parameter, returning the leaf which didn't make it into the tree, as above. If they differ, allocate a new internal node and place the new leaf in the appropriate child slot. Then walk down the tree again, looking for the first internal node whose critical bit is larger than the new node's; attach that node to the other slot of the new internal node, and splice the new node in where it used to hang.
<<Generic algorithms>>=
static LEAFTYPE _add(struct ROOTSTRUCT * tree, LEAFTYPE leaf, EXTRA_ARG aux,
                     int should_replace, int* error)
{
    LEAFTYPE result;
    struct _internal_node * node, * parent, * child;
    uint32_t dir, dir2;
    int32_t critbit;

    *error = 0;
    if (<<Check if tree is empty>>) {
        tree->root.leaf = leaf;
        tree->leafcount ++;
        return NO_LEAF;
    } else if (<<Check if root points to a leaf>>) {
        result = tree->root.leaf;
        critbit = COMPARE(leaf, result);
        if (critbit == -1) {
            /* collision at the root leaf: there is no internal node
               yet, so handle the replacement on the root pointer */
            if (should_replace) {
                tree->root.leaf = leaf;
                return result;
            } else
                return leaf;
        } else {
            <<Allocate new internal node and insert new leaf in it>>
            result = tree->root.leaf;
            <<Place previous leaf in the new node>>
            tree->root.node = node;
            tree->leafcount ++;
            return NO_LEAF;
        }
    }
    /* else */
    node = tree->root.node;
    <<Walk tree looking for leaf>>
    critbit = COMPARE(leaf, result);
    if (critbit == -1) {
        <<Treat collision in addition>>
    } else {
        <<Allocate new internal node and insert new leaf in it>>
        <<Walk tree and splice new node in>>
        tree->leafcount ++;
        return NO_LEAF;
    }
}
The treatment of collision in addition will depend on the value of the should_replace parameter. This is the only case where the addition algorithm returns something other than NO_LEAF.
<<Treat collision in addition>>=
if (should_replace) {
    node->child[dir].leaf = leaf;
    return result;
} else
    return leaf;
The allocation of the new node needs to check for consistency and return setting *error to -1 if it fails. Also, the new leaf must be inserted in the correct subtree and the correct leaf type must be set.
<<Allocate new internal node and insert new leaf in it>>=
node = (struct _internal_node *) ALLOC(sizeof (struct _internal_node));
if (node == NULL) {
    *error = -1;
    return NO_LEAF;
} else {
    node->critbit = critbit;
    dir = DECIDE(leaf, critbit, aux);
    node->child[dir].leaf = leaf;
    SET_LEAF(node, dir);
}
In the case where there is only a single leaf after the root, the newly allocated node goes between it and the root pointer.
<<Place previous leaf in the new node>>=
node->child[1 - dir].leaf = result; /* 'result' holds the displaced leaf */
SET_LEAF(node, 1 - dir);
In the other cases, the other branch of the new node will contain an internal node.
<<Insert child node into the new node>>=
node->child[1 - dir].node = child;
SET_NODE(node, 1 - dir);
In those cases, we must splice the new node in at a position chosen so that the critical bits still always increase as one walks down the tree.
<<Walk tree and splice new node in>>=
child = tree->root.node;
if (<<Check if node should be inserted before child>>) {
    <<Insert child node into the new node>>
    tree->root.node = node;
} else while (1) {
    parent = child;
    dir2 = DECIDE(leaf, parent->critbit, aux);
    if (IS_LEAF(parent, dir2)) {
        result = parent->child[dir2].leaf;
        <<Place previous leaf in the new node>>
        parent->child[dir2].node = node;
        SET_NODE(parent, dir2);
        break; /* the new node took the leaf's place; we are done */
    } else {
        child = parent->child[dir2].node;
        if (<<Check if node should be inserted before child>>) {
            <<Insert child node into the new node>>
            parent->child[dir2].node = node;
            break;
        }
    }
}
<<Check if node should be inserted before child>>=
node->critbit < child->critbit
Deleting
Next, we will consider deleting a leaf from a previously initialised and populated tree. As with inserting and searching, the generic algorithm receives:
- A reference to a tree;
- A leaf object containing the key to be sought and removed;
- An extra parameter of type EXTRA_ARG to pass to DECIDE.
It will return the leaf that was removed, if there is one which matches the key in the given leaf, so it may be destroyed by the specific code.
<<Generic algorithms>>=
static LEAFTYPE _del(struct ROOTSTRUCT * tree, LEAFTYPE leaf, EXTRA_ARG aux)
{
    LEAFTYPE result;
    struct _internal_node * node, * parent;
    uint32_t dir, dir2;

    if (<<Check if tree is empty>>)
        return NO_LEAF;
    if (<<Check if root points to a leaf>>) {
        result = tree->root.leaf;
        if (COMPARE(result, leaf) == -1) {
            tree->root.leaf = NO_LEAF;
            tree->leafcount --;
            return result;
        } else
            return NO_LEAF;
    }
    /* else */
    node = tree->root.node;
    <<Walk tree looking for leaf>>
    if (COMPARE(result, leaf) == -1) {
        if (node == tree->root.node) {
            /* the found leaf hangs directly off the root node, which
               has no parent: its sibling becomes the new root */
            if (IS_LEAF(node, 1 - dir))
                tree->root.leaf = node->child[1 - dir].leaf;
            else
                tree->root.node = node->child[1 - dir].node;
        } else {
            parent = tree->root.node;
            <<Walk tree and get parent of node>>
            <<Replace node with its other child>>
        }
        tree->leafcount --;
        DEALLOC(node);
        return result;
    } else
        return NO_LEAF;
}
When removing a leaf which isn't tied directly at the root, we would end up with an internal node with only one subtree, which would cause problems when walking. Since internal nodes represent a split in the keys of their subtrees at a given bit, we can just throw that node away and replace it by its other child, still maintaining the invariant that an internal node always has two children.
<<Replace node with its other child>>=
if (IS_LEAF(node, 1 - dir)) {
    parent->child[dir2].leaf = node->child[1 - dir].leaf;
    SET_LEAF(parent, dir2);
} else {
    parent->child[dir2].node = node->child[1 - dir].node;
}
To do that, however, we must find the parent of the internal node in order to replace it as a child. It is the same as walking the tree to find a leaf, but we will always deal with internal nodes only, and we will compare the children to the node we want to determine if we found the parent.
<<Walk tree and get parent of node>>=
while (1) {
    dir2 = DECIDE(leaf, parent->critbit, aux);
    if (parent->child[dir2].node == node)
        break;
    else
        parent = parent->child[dir2].node;
}
Iterating Forwards and in Reverse
This operation is not one which is usual for either sets or maps/dictionaries, but it is something that radix trees can provide. If the bits of the keys are indexed consistently by the critical-bit manipulating functions COMPARE and DECIDE, the keys assume in the tree a natural order which can be recovered by a simple depth-first search on the tree.
For Unicode string keys, that indexing requires the bits to be ordered in ascending order by character index, and within a character they must be sorted from most significant to least significant bit. For unsigned integer keys, they must be sorted from most significant to least significant bit.
If another collating sequence is required, then the process gets a bit more complicated. First, before any key is added, searched for or deleted, the key must be mapped into a canonical form where increasing bit values correspond to higher positions in the collating order; for example, if case independence is desired, then running toupper() or tolower() on the keys will make operations independent of case. However, when iterating, if one wants the callback to see the key as the user understands it, the reverse mapping must be run on the key in the tree before presenting it to the callback; if the mapping is not bijective (and toupper() and tolower() are not), then both the original and the canonical form must be stored in the leaf, and the original is only used when it is supplied to the iteration callbacks.
The generic algorithm, in contrast, is very simple, and doesn't even require the usual operations on LEAFTYPE, for which reason it only takes a tree and a callback to be called for each leaf. It does, however, use recursion, and for that reason we will move the depth-first search into a separate function so it may call itself:
<<Generic helper functions>>=
static uint32_t _dfs(struct _internal_node * node,
                     uint32_t (*leafcb)(LEAFTYPE),
                     uint32_t (*inodecb)(struct _internal_node *),
                     uint32_t direction, int32_t pre_post)
{
    uint32_t cur = direction;

    if ((pre_post == -1) && (inodecb != NULL) && (inodecb(node) == 0))
        return 0;
    <<Search the subtree in the `cur' direction>>
    if ((pre_post == 0) && (inodecb != NULL) && (inodecb(node) == 0))
        return 0;
    cur = 1 - cur; /* now the other child */
    <<Search the subtree in the `cur' direction>>
    if ((pre_post == 1) && (inodecb != NULL) && (inodecb(node) == 0))
        return 0;
    return 1;
}
The _dfs() function takes a callback to be called on leaves and another on internal nodes; a direction parameter which tells which child we should process first: 0 if we're iterating forwards, 1 if we're iterating in reverse; and a pre_post parameter which tells whether the internal node callback should be called before, after or in the middle of calling the processing of the subtrees.
<<Search the subtree in the `cur' direction>>=
if (IS_LEAF(node, cur)) {
    if (leafcb(node->child[cur].leaf) == 0)
        return 0;
} else {
    if (_dfs(node->child[cur].node, leafcb, inodecb, direction, pre_post) == 0)
        return 0;
}
Of note is the fact that the callback takes a leaf and returns an uint32_t which may be either 1 if the iteration should continue or 0 if the algorithm should break out of it, which is why the DFS returns 0 if either the callback or its recursive instances return 0.
<<Generic algorithms>>=
static void _fwd(struct ROOTSTRUCT * tree, uint32_t (*cb)(LEAFTYPE))
{
    <<Treat empty or single leaf case>>
    _dfs(tree->root.node, cb, NULL, 0, 0);
}

static void _rev(struct ROOTSTRUCT * tree, uint32_t (*cb)(LEAFTYPE))
{
    <<Treat empty or single leaf case>>
    _dfs(tree->root.node, cb, NULL, 1, 0);
}
The empty case and the single leaf case are identical for both forward and reverse iteration, and simply either do nothing or call the callback upon the single leaf. In both cases, we return from the function so we don't have to wrap the rest of the function in braces.
<<Treat empty or single leaf case>>=
if (<<Check if tree is empty>>)
    return;
if (<<Check if root points to a leaf>>) {
    cb(tree->root.leaf);
    return;
}
/* else */
Initialisation and Disposal
Initialisation of the radix tree is trivial. Allocate a new struct ROOTSTRUCT structure, set its leafcount to zero, and return it. As it depends on the allocator, it may fail, as well, in which case the algorithm returns NULL.
<<Generic algorithms>>=
static struct ROOTSTRUCT * _new(void)
{
    struct ROOTSTRUCT * result;

    result = (struct ROOTSTRUCT *) ALLOC(sizeof (struct ROOTSTRUCT));
    if (result != NULL)
        result->leafcount = 0;
    return result;
}
Destruction is more complex, since we will have to free all leaves and internal nodes, as well as the root structure. For that, we will repurpose the _dfs() helper function to walk the tree, freeing leaves and internal nodes as we go.
<<Generic helper functions>>=
static uint32_t _dealloc_internal_node(struct _internal_node * node)
{
    DEALLOC(node);
    return 1;
}
<<Generic algorithms>>=
static void _end(struct ROOTSTRUCT * tree, uint32_t (*cb)(LEAFTYPE))
{
    if (!(<<Check if tree is empty>>)) {
        if (<<Check if root points to a leaf>>) {
            cb(tree->root.leaf);
        } else {
            _dfs(tree->root.node, cb, _dealloc_internal_node, 0, 1);
        }
    }
    DEALLOC(tree);
}
Integer Set
So far, we have written a lot of generic code which cannot be used because it depends on a few definitions which haven't been provided yet. In this section, we will provide those definitions and generate two files (a header file and an implementation file) for a set of integers. Now, sets are trees whose leaves contain only their keys, so what would ordinarily be a get or search operation in a dictionary will be called a test operation here.
The header file will simply declare an opaque type uint32_set and six operations on it: uint32_set_new, uint32_set_free, uint32_set_add, uint32_set_test, uint32_set_remove, and uint32_set_each.
Note that we cannot declare uint32_set as a pointer to struct ROOTSTRUCT, since this preprocessor token won't be defined in the header file, so we just sub in the actual definition.
<<uint32_set.h>>=
#ifndef _UINT32_SET_H_
#define _UINT32_SET_H_

#include <stdint.h>

typedef struct _uint32_set * uint32_set;

uint32_set uint32_set_new(void);
void uint32_set_free(uint32_set set);
uint32_t uint32_set_add(uint32_set set, uint32_t val);
uint32_t uint32_set_test(uint32_set set, uint32_t val);
uint32_t uint32_set_remove(uint32_set set, uint32_t val);
void uint32_set_each(uint32_set set, uint32_t (*cb)(uint32_t));

#endif
Notice that the opaque type isn't declared as a pointer to struct ROOTSTRUCT; that struct cannot be visible outside of the implementation file or two different types based on radix trees won't be useable together. Therefore, we must remember to typedef into a unique name like _uint32_set.
<<uint32_set.c>>=
#include <stdlib.h>
#include <stdint.h>

#include "uint32_set.h"
After the includes, we will have to define the preprocessor tokens the generic part of the implementation requires: LEAFTYPE, NO_LEAF, COMPARE, DECIDE, EXTRA_ARG and possibly ALLOC and DEALLOC, if we don't want to use malloc() and free() for them.
<<uint32_set.c>>=
#define LEAFTYPE uint32_t
#define ROOTSTRUCT _uint32_set
#define NO_LEAF ((uint32_t) 0)
#define COMPARE integer_compare
#define DECIDE(n, b, dummy) (((n) >> (31 - (b))) & 1)
#define EXTRA_ARG uint32_t
Note that in this case we will simply ignore the extra argument to DECIDE, but we still have to define its type EXTRA_ARG. Since it can be anything, we give it a base integer type and pass zero into every generic algorithm call which calls for a parameter of that type.
Also, we defined DECIDE as a macro, but COMPARE was defined as a token integer_compare, which is the name of a function which will have to be defined. Here is its definition:
<<uint32_set.c>>=
static int32_t integer_compare(uint32_t a, uint32_t b)
{
    uint32_t xor, mask, result;

    xor = a ^ b;
    for (mask = 0x80000000, result = 0;
         result < 32 && ! (xor & mask);
         mask >>= 1, result ++)
        ; /* empty body */
    if (result == 32)
        return -1;
    else
        return result;
}
It simply XORs the two numbers together then checks, from most significant to least significant bit, until it finds a set bit. It then reports the number of the bit or -1 if it fell off the end of the number.
Having all this defined, we can finally include the generic part and define the _uint32_set type we refer to in the header:
<<uint32_set.c>>=
<<Generic part>>

typedef struct ROOTSTRUCT _uint32_set;
Now we can start on the implementation of the functions we defined on the header.
uint32_set_new() trivially calls _new():
<<uint32_set.c>>=
uint32_set uint32_set_new(void)
{
    return _new();
}
uint32_set_free() doesn't actually have to deal with freeing leaves, as they aren't allocated separately on the heap, so we can simply pass it a dummy function which takes an uint32_t and returns 1:
<<uint32_set.c>>=
static uint32_t dummy_free_leaf(uint32_t arg)
{
    return 1;
}

void uint32_set_free(uint32_set set)
{
    _end(set, dummy_free_leaf);
}
The lack of need for allocating leaves makes add similarly trivial; it is just convenient to remember that the third parameter is ignored in this specific case, so we just put in zero, and the fourth parameter, should_replace, is set to zero since there's no point in replacing one number with the exact same one. Finally, the error parameter is passed in by reference and checked for memory exhaustion errors:
<<uint32_set.c>>=
uint32_t uint32_set_add(uint32_set set, uint32_t val)
{
    int error;

    _add(set, val, 0, 0, &error);
    if (error == -1)
        return 0;
    else
        return 1;
}
test similarly just passes through to _get and returns 1 if it didn't return NO_LEAF:
<<uint32_set.c>>=
uint32_t uint32_set_test(uint32_set set, uint32_t val)
{
    return _get(set, val, 0) != NO_LEAF;
}
remove can also safely ignore the return value of _del, since it returns a leaf that need not be freed:
<<uint32_set.c>>=
uint32_t uint32_set_remove(uint32_set set, uint32_t val)
{
    _del(set, val, 0);
    return 1;
}
Finally, uint32_set_each receives the callback cb and passes it to _fwd trivially --- we could set up a flag to enumerate the numbers in decreasing order, but this will do for the purpose of this example:
<<uint32_set.c>>=
void uint32_set_each(uint32_set set, uint32_t (*cb)(uint32_t))
{
    _fwd(set, cb);
}
Type: Posts; User: Duoas
You've got a couple of significant syntax errors.
To declare a static field:
// foo.hpp
#ifndef FOO_HPP
#define FOO_HPP
You have opened file_in twice (lines 20 and 22). Be careful there.
@GCDEF, Paul McKenzie
I said more likely, not absolutely. I've known my share of 'real programmers' too...
Quite simply, much of the time people doing the hiring are mid-level managers who know...
Yes, but it is often indicative of one. The more a person know about a language's features the more time that person has probably spent using the language. And the more comprehensive that knowledge,...
You can also use the STL
#include <algorithm>
#include <fstream>
#include <string>
...
std::string mystring;
Good grief.
If yes, use 4-byte characters. If no, use 2-byte characters. String length is (size of string in bytes / size of character in bytes).
To determine the file type, open it and look for...
Ah, sorry.
Yes, never put unsafe (unknown) code anywhere in the WINDOWS directory tree.
FreeType2 is a popular library used by hundreds of thousands of people around the globe. I wouldn't exactly call it a "strange dll".
@STLdude
How is that any different than what I said?
@Rajesh
You did not clearly mention anything about function prototypes in your first post. You only asked about the difference with and...
The windows equivalents to those directories are:
/usr/include
C:\<compiler's base directory>\include
/usr/lib
C:\<compiler's base directory>\lib
/usr/bin
C:\WINDOWS\system32
People tend towards the first because it can be used to declare variables
Test testing123;
whereas the latter looks like a function prototype to the compiler
Test testing123();
@Mikau
Yeah, the MSYS/MinGW downloads are confusing.
For MinGW, make sure you get the following packages:
GCC Version 4
GNU Binutils
GNU Make
MinGW API for MS-Windows
MinGW Runtime
Both work just fine. The "best" one is the one that fits your requirements.
What exactly are you trying to do? (What is your data?)
Wow, that really is an obnoxious standard.
The choice of data size depends on whether or not you want to support CJK Extension B. See
You...
That is a typical *nix software setup. If you are trying to compile under Windows, you will need to get MSYS and MinGW and compile it under the bash command shell that comes with msys.
To do it,...
Does
while (p != NULL) {
app = p;
p = p->next;
free( app );
}
fix it?
You forgot to add a '\0' (nul) char at the end of your string.
...
char* test = new char[6];
...
test[5] = '\0';
cout << ...
Except that using bitfields doesn't guarantee in what order they are packed.
Just bit-shift the values together:
unsigned short pack565( unsigned char a, unsigned char b, unsigned char c )...
Please don't use MBCS. It is evil. It requires an excruciating amount of twiddling to use it (select code page, parse, convert, print, repeat ad-nauseum)
Use Unicode. It is simple, fast, and works...
[1]
You will need to become familiar with the C++ locale and facet objects.
Googling "c++ internationalization" gets some good hits too.
A book that might be worth your getting is Standard C++...
Are you sure the file is opened? (Because you didn't check in the code you posted.)
Try
string strInput;
ifstream inf( "su08_world1.dat" );
if (!inf)
cout << "fooey, the file...
Your friend is wrong.
In fact, you cannot do it any other way. Static members must be defined separate from the class declaration. Typically that means that the class declaration goes in a header...
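The usual split looks like this (`Widget` is an invented name). Note that since C++17 you can alternatively mark the member `inline static` and define it inside the class:

```cpp
// In the header: the class *declares* its static member.
struct Widget {
    static int count;   // declaration only -- no storage yet
};

// In exactly one .cpp file: the out-of-class *definition* provides storage.
int Widget::count = 0;
```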
That seems reasonable.
Your flag name fooled me for a minute. In addition to indicating whether or not the string is dynamically allocated, you should also indicate whether or not it is a temporary...
Did you just type this in by hand?
Or did you cut and paste?
Because it has some errors...
I think that the actual problem is up five lines where you say:
it=this->begin();
That should be
...
I'm confused. Are you counting permutations or combinations? They are different. | http://forums.codeguru.com/search.php?searchid=20489385 | CC-MAIN-2020-10 | refinedweb | 694 | 77.94 |
std::basic_ostream::basic_ostream
explicit basic_ostream( std::basic_streambuf<CharT, Traits>* sb );  (1)
basic_ostream( const basic_ostream& rhs ) = delete;                 (2)  (since C++11)
basic_ostream( basic_ostream&& rhs );                               (3)  (since C++11)
Parameters
sb - stream buffer to use as the output sequence
rhs - basic_ostream to initialize from (move constructor)
Example
#include <sstream>
#include <utility>
#include <iostream>

int main()
{
    // std::ostream myout(std::cout);         // ERROR: copy ctor is deleted
    std::ostream myout(std::cout.rdbuf());    // OK: shares buffer with cout

    // std::ostream s2(std::move(std::ostringstream() << 7.1));  // ERROR: move constructor
    //                                                           // is protected
    std::ostringstream s2(std::move(std::ostringstream() << 7.1));  // OK: move ctor called
                                                                    // through the derived class
    myout << s2.str() << '\n';
}
Output:
7.1 | http://en.cppreference.com/mwiki/index.php?title=cpp/io/basic_ostream/basic_ostream&oldid=46057 | CC-MAIN-2014-42 | refinedweb | 131 | 55.44 |
News
Abstract
MaaS360 TLS v1.2 Weak Cipher Deprecation - (Platform Deprecation set for 10.84 release on 11 December 2021)
Content
MaaS360 TLS v1.2 Weak Cipher Deprecation - (Platform Deprecation set for 10.84 release on 11 December 2021)
MaaS360 uses TLS (Transport Layer Security) to provide privacy and data integrity between devices and MaaS360 components. To keep devices secure, MaaS360 is deprecating the weak ciphers in TLS 1.2, which directly impacts the devices listed below.
- Android – Below version 5.0 (Lollipop)
- iOS – Below version 9.0
- macOS – Below version 10.11 (El Capitan)
As part of this deprecation, the devices on lower OS versions than those listed above will no longer be able to communicate with the MaaS360 platform. Therefore, these devices cannot be managed going forward by MaaS360.
Required action
MaaS360 recommends upgrading the OS of devices that will be impacted to continue communication with the platform. If the devices are unable to upgrade to one of the supported versions, please remove MaaS360 control before the deprecation to avoid complications. MaaS360 always recommends using the latest available OS versions, as they often feature patches and enhancements that improve device security.
What is TLS?
The primary purpose of the TLS protocol is to provide privacy and data integrity between two communicating applications. The protocol is composed of two layers: TLS Record Protocol and TLS Handshake Protocol. At the lowest level, layered on top of some reliable transport protocol (example, TCP), is the TLS Record Protocol. It is the most widely deployed security protocol used today. It is used for web browsers and other applications that require data to be securely exchanged over a network or internet. TLS ensures that a connection to a remote endpoint is the intended endpoint through encryption and endpoint identity verification. The versions of TLS available today are TLS 1.0, 1.1, 1.2, 1.3. The MaaS360 platform supports only TLS 1.2.
How this relates to the MaaS360 Product Suite
IBM MaaS360 will start deprecating support for TLS 1.2 weak Cipher and will disable encryption protocol across services.
Cipher Details
MaaS360 continues to align with the PCI security standards and ensure highest security and safety of your data. The deprecation will have impact on all MaaS360 customers currently using TLS 1.2, and it is advised that you check if you're going to be affected. MaaS360 solution contains the platform, on-premises agents and mobile apps; each component will have a different path of upgrade and the below information will outline the areas where this deprecation will be affected. After the deprecation occurs on the MaaS360 platform, any agent that has not been upgraded will no longer be able to connect and be managed by the platform.
MaaS360 TLS Platform deprecation will occur along with the 10.84 release on 11 December 2021. Please review section below for details.
Described below are the compatibilities across MaaS360 Apps, Agents, Web Services, and Web Browsers:
- Android Apps, SDK and App Wrapping
- iOS Apps, SDK and App Wrapping
- macOS Agents
- Cloud Extender and MEG Agents
- Windows/WinPhoneApps and Agents
- WebServices
Android Apps, SDK and App Wrapping
MaaS360 discontinues support for the devices running Android OS versions below 5.0 (Android L).
iOS Apps, SDK and App Wrapping
macOS Agents
MaaS360 discontinues support for the devices running macOS versions below 10.11.
Cloud Extender and MEG Agents
No Action required. The Cloud Extender(CE) and Mobile Enterprise Gateway (MEG) services are composed of two components: the core agent and modules. Neither the core agent or modules are impacted.
Windows/DTM Apps and Agents
No Action required. The Windows and DTM agents all work with no impact.
Web Services and 3rd Party Portal Integrations
For customers using Web Services/APIs, or who have enabled 3rd-party integrations using APIs on the MaaS360 platform, the API client used on the customer side may require adjustments or upgrades. Please check your client's documentation on how to upgrade to TLS 1.2 support.
Steps to check for API compatibility
- Set up an API client in a test environment. This could be any software or library that you are using to integrate to MaaS360 or any custom integration code that you have written. The examples cited in this write up uses python as a client language. This could be Java or any other language in your environment.
- A web service client usually makes GET and POST requests to servers.
- Using your client test environment, make a GET request to the following URL.
- Your version of client library should be able to make a successful GET request to the URL above and receive a result of "0". This response means that underlying TLS v1.2 with ciphers deprecated connection is successful.
- If you get anything other than "0" in the result, it would indicate that the client you have could not make a successful connection to our servers which has TLS v1.2 with ciphers deprecated. You need to upgrade your client library which supports TLS v1.2 ciphers and run the same test to confirm you are getting a result of "0".
An example of doing this in a python script is as follows:
import requests

url = ""
data = requests.get(url)._content
assert data == "0"
If you are using python for consuming MaaS360 web services then, run this code to see if your client connects to a URL that has TLS v1.2 with ciphers deprecated.
Note: If a different programming language is in use, similar code should be written in that environment's language and verified against the test URL to confirm that the client works with TLS v1.2 with the weak ciphers deprecated.
Change History
Document Information
Modified date:
22 December 2021
UID
ibm16439547 | https://www.ibm.com/support/pages/node/6439547 | CC-MAIN-2022-05 | refinedweb | 979 | 56.76 |
CodePlexProject Hosting for Open Source Software
Documentation
Features Refactoring
PTVS supports several different refactorings which provide automatic transformations to your source code. These include the Refactor->Rename command, which renames a method or variable name; the Extract Method command, which creates a new method from selected code; the Add Import command, which provides a smart tag to add a missing import; and the Remove Imports refactoring, which removes unused imports.
To rename a variable, class, method name, ... simply move the caret to the identifier and type Ctrl-R Ctrl-R (or use the Refactor->Rename menu). You'll be prompted to type a new name. You can then select which lines the changes will be applied to (All
by default):
Extract method works in a similar way: select the lines or expression you're interested in extracting, and type Ctrl-R Ctrl-M (or use the Refactor->Extract Method menu item). You'll get a pop up that enables you to specify a new method name, where to
extract the code to (eg Module, enclosing Method or Class), and optionally any closure variables (if not selected, they become parameters):
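As a before/after sketch of what the transformation produces (our own example, not taken from the PTVS documentation):

```python
# Before: the averaging logic is inline in report().
def report(scores):
    total = 0
    for s in scores:
        total += s
    return total / len(scores)

# After "Extract Method" on the selected lines: they become a new
# function, with 'scores' turned into a parameter (names are ours).
def average(scores):
    total = 0
    for s in scores:
        total += s
    return total / len(scores)

def report_extracted(scores):
    return average(scores)
```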
The add import feature is made available via a smart tag menu which pops up on an identifier which currently has no type information available via the analysis. When the caret is moved to the identifier a smart tag is offered to the user which can be invoked
using the mouse or the keyboard short cut. The user is then displayed a menu which includes a list of available names which can be imported. Selecting one of the options causes the import to be inserted at the top of the file after the other imports or into
an existing from ... import statement if the user's code is already importing from that module.
The smart tags will offer both imports and from ... import smart tags. Import completions will be offered for top-level packages and modules and will insert "import xyz" statements at the top of the file after the doc string and any existing imports.
"from ... import" completions will be offered for sub-modules and sub-packages as well as module members. This includes functions, classes, or exported data.
Both "import" and "from ... import" smart tags will be offered for both members in the users' project as well as members from the cached standard library.
PTVS attempts to filter out members which are not really defined in a module. This includes filtering out any modules which are imported into another module/package but aren't a child of the module/package doing the importing. For example the "sys"
module is frequently imported in lots of modules but you don't usually want to do "from xyz import sys", instead you want "import sys". Therefore PTVS will not offer a completion for importing "sys" from other modules even if
the modules are missing an __all__ member which excludes sys.
There's also a similar level of filtering for functions which are imported from other modules or from the built-in namespace. For example if a module imports the "settrace" function from the sys module then in theory you could import it from that
module. But the correct place to import settrace would be directly from the sys module. Therefore we don't offer imports of functions from one module when the function has been defined in another module.
Finally if something would be excluded due to the rules above but has other values that would be included (for example because the name was assigned a value in the module) the import will still be excluded. This is making the assumption that the value should
not be exported because it is defined in another module - the additional assignment is likely to be a dummy value which is also not exported.
This feature enables the user to remove unused imports from within a file. This feature exposes two new commands, one for removing imports from the current scope and one for removing imports from all scopes. These commands are exposed via the editor window
context menu "Remove Imports" which has two sub-items "Current Scope" and "All Scopes".
During the analysis this feature will only look at names which are imported and whether or not that name is used in any scope below where the import occurs. There will be no accounting for control flow - for example using a name before an import statement
will be treated as if the name was in fact used.
The analysis will ignore all "from __future__" imports, imports that are performed inside of a class definition, as well as from ... import * statements.
Last edited Jul 2, 2014 at 5:35 PM by pminaev, version 23 | https://pytools.codeplex.com/wikipage?title=Features%20Refactoring&version=23 | CC-MAIN-2015-48 | refinedweb | 788 | 57.71 |
Introduction: Each cXML document is constructed based on XML Document Type Definitions (DTDs). Acting as templates, DTDs define the content model of a cXML document, for example, the valid order and nesting of elements, and the data types of attributes.
You can find more information on cXML at
How to obtain the cXML DTD:
All versions of the cXML DTDs are available on the cxml.org website. Please see the table below for details:
Example:
I need version 1.2.011 in my project, so to get all the details I need to use the URL’s below:
Use any tools to convert the DTD into an XSD
In SAP PI, Import the XSD as an External Definition in the required namespace. I have imported it as cXML_1_2_011
When choosing the message in message mapping, choose cXML.
Disable nodes that you are not using in the mapping.
When selecting the Operation Interface, choose cXML.
The rest of the steps for building the interface are the same…
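For reference, below is a heavily abbreviated sketch of the shape of a cXML OrderRequest payload. The element names follow the cXML specification, but all attribute values here are placeholders, and real documents carry more required structure — always validate against the exact DTD version you imported:

```xml
<cXML payloadID="example-1" timestamp="2013-12-06T10:00:00-08:00">
  <Header>
    <From><Credential domain="NetworkID"><Identity>BUYER</Identity></Credential></From>
    <To><Credential domain="NetworkID"><Identity>SUPPLIER</Identity></Credential></To>
    <Sender>
      <Credential domain="NetworkID">
        <Identity>BUYER</Identity>
        <SharedSecret>secret</SharedSecret>
      </Credential>
    </Sender>
  </Header>
  <Request>
    <OrderRequest>
      <OrderRequestHeader orderID="PO-1001" orderDate="2013-12-06" type="new">
        <Total><Money currency="USD">100.00</Money></Total>
      </OrderRequestHeader>
      <ItemOut quantity="2">
        <ItemID><SupplierPartID>ABC-123</SupplierPartID></ItemID>
      </ItemOut>
    </OrderRequest>
  </Request>
</cXML>
```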
References:
Very Nice Information
Thanks for sharing it. Helpful information.
Hi Preetam,
Nice blog, with good explanation.
we have a requirement to send the purchase orders in CXML format.
i didn’t find the DTD for this, please help me.
Can we send the data in CXML format using PI standard adapters or we have to use ARIBA adapters?
Regards
Bhargava krishna
Hi Bhargava,
Was this problem solved? can you let me know the solution made for it?
Regards,
Siva.
Nice info…
We can alwayse use DTD as well… thought to mention this as you mentioned in a blog to convert DTD to XSD.
–Divyesh
Hi Preetam,
It helps a lot to understand the cxml concept and working in PI!
I am having the exact scenario as below url:
I am planning to send PO from SAP SRM to some third party vendor and they only accepted cXML format. once i converted the PO DTD to xsd (as you suggested above) and prepare final xml, how do i convert the message into cxml format? or the above inbound service will get cxml after message from source xml?
Appreciate your quick response – Thanks.
Regards,
Siva. | https://blogs.sap.com/2013/12/06/working-with-cxml/ | CC-MAIN-2018-34 | refinedweb | 361 | 70.33 |
Hi,
I am new to blogengine and I have created two domains in godaddy. I have configured one SQL server database. For the first domain, I have copied blogengine in the root directory and works fine. For the second domain - I have a directory but this gives a
500 internal server error. Even if I don't use the database & use only xmldbprovider to store I get the same error. Please tell me how to configure two domains with blogengine.
Thanks
Shankar
Shankar,
This sounds like a server configuration. I run several TLD blogs useing godaddy with no issuse.
please check the following
Hope it helps.
Craig
Craig,
Thanks for the help. I have my first website on the root (). The second website is configured in a directory inside the root ( - path is /two). I have copied the blogengine in the root folder as well as the 'two' folder. I have set
the <location path="." inheritInChildApplications="false"> for the system.web tab in the root directory web.config file.
In the child directory I have tried both options for the virtual path in the appSettings (<add key="BlogEngine.VirtualPath" value="~/"/> and <add key="BlogEngine.VirtualPath" value="~/two/"/> ). But
both seems not to solve the issue.
If I do not add the following in config - then I get blog:PostCalendar not found. But if I add the following, I always get the 500 internal server error.
<pages enableSessionState="false" enableViewStateMac="true" enableEventValidation="true">
<controls> <add namespace="Controls" tagPrefix="blog"/>
</controls> </pages>
Both the websites are as their own application. But still not able to find the correct configuration.
Shankar.
daemon()
Run a process in the background
Synopsis:
#include <stdlib.h> int daemon( int nochdir, int noclose );
Since:
BlackBerry 10.0.0
Arguments:
- nochdir
- If this argument is 0, the current working directory is changed to the root directory (/).
- noclose
- If this argument is 0, standard input, standard output, and standard error are redirected to /dev/null.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The daemon() function allows programs to detach themselves from the controlling terminal and run in the background as system daemons.
This function calls fork() and setsid().
The controlling terminal behaves as in Unix System V, Release 4. An open() on a terminal device not already associated with another session causes the device to become the controlling terminal for that process.
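A minimal usage sketch. The dry-run flag is our addition so the example can be exercised without actually detaching; a real daemon would simply call daemon() unconditionally early in main():

```c
#include <stdio.h>
#include <stdlib.h>   /* QNX declares daemon() here */
#include <unistd.h>   /* glibc declares daemon() here instead */

/* Detaches the calling process: nochdir = 0 changes directory to "/",
   noclose = 0 redirects stdin/stdout/stderr to /dev/null.
   Returns 0 on success, -1 on failure (e.g. ENOSYS when multi-threaded). */
int become_daemon(int dry_run) {
    if (dry_run)
        return 0;              /* skip detaching, e.g. for testing */
    if (daemon(0, 0) == -1) {
        perror("daemon");
        return -1;
    }
    return 0;
}
```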
Classification:
Caveats:
Currently, daemon() is supported only in single-threaded applications. If you create a thread and then call daemon(), the function returns -1 and sets errno to ENOSYS.
Last modified: 2014-06-24
ZK8 Series: Interact with Client Side Libaries using ZK's New Client Side Binding
Han Hsu, Potix Corporation
April 2, 2015
ZK 8.0.0.FL.20150331
Introduction
In a recent blog, we have introduced ZK 8's new client side binding and showed how we can use it to make a Polymer component work with ZK. In this smalltalk, we will have a more complete discussion of what will be included in the new client side binding up to date and how we can use these new methods to interact with a more complex client side library. In the second part of this post, we will present a simple demo along with its implementation to show you how to actually work with the new client side binding in real projects.
ZK 8's client side binding
The new client side binding provides 4 methods - 2 at the client side, and 2 at the server side. Their relationships can be illustrated by the following diagram:
At client
We have to get the client binder first in order to use the client side methods. To get the binder, simply use
var binder = zkbind.$('$id');
in the scripts. After we have our client binder, we could use the command and after method to interact with the view model back to our server.
binder.command(commandName, data)
This method is used to trigger a command we have at server by giving the command name as the first parameter. The second parameter is a JavaScript object, which is used to pass any information you want with the command.
Note: You could also pass ZK widgets in the data object and use @BindingParam to get the corresponding ZK component at the server.
binder.after(commandName, callback)
This method is used to place a callback at the client after a command gets executed at the server.
At server
Here, we are going to introduce two new annotations at the server side for the new client side binding. They should be placed at the beginning of the class declaration of our View Model.
@NotifyCommand(value="commandName", onChange="_vm_.expression")
The notify command annotation allows us to trigger a command whenever the given expression changes at the server. Notice the command which gets triggered is a command in our view model. The _vm_ here means the current view model.
@ToClientCommand(commandNames)
The client command annotation allows us to put which commands we want to notify the client after execution has been done. Notice only the commands we put inside this annotation will trigger the callback we put in binder.after at client.
Demo
In this demo, we will use full calendar, which is a JavaScript calendar library under MIT license. We are going to wire up the full calendar with some ZK components back to the server with a View Model and a Data Model. The goal here is to bind the events coming out from full calendar to the commands we have in our View Model so that we can sync whatever changes in the calendar directly to our data model.
Implementation
We will have four main parts in this demo.
- zul page, which is the view of our demo
- js file, which will contain our event handlers for full calendar and our client side bindings.
- view model, where we handle all the commands that come from the client.
- data model, which acts as a data-access object for our data source.
zul page
Our zul page is relatively simple. It has three parts, a container for full calendar and two popups. The container will be just a native div element with an ID:
<n:div
The popups will be ZK popups and they will be used to create and modify events. The modify event popup looks like:
<popup id="modifyEventPop"> <window title="Modify Event: " width="500px"> <grid form="@id('fx') @load(vm.tempEvent) @save(vm.tempEvent, before='modEvent')"> <rows> <row> <label value="Event ID: " /> <label id="modId" value="@load(fx.id)" /> </row> <row> <label value="Title:" /> <textbox id="modTitle" value="@bind(fx.title)" /> </row> <row> <label value="Start Date:" /> <datebox id="modStart" value="@bind(fx.start)" format="long+medium" /> </row> <row> <label value="End Date:" /> <datebox id="modEnd" value="@bind(fx.end)" format="long+medium" /> </row> </rows> </grid> <button id="modify" label="Modify" onClick="@command('modEvent', pop=modifyEventPop)" /> <button label="Cancel" onClick="modifyEventPop.close()" /> </window> </popup>
The popup shows when an eventClicked event is triggered. The data of the event will be pre-loaded before the popup shows. When the modify button gets clicked, a modEvent command will be fired back to our View Model, and we will update the modified event to our data model from there. The popup for creating event is very similar with the one above, we will omit its implementation here just to keep things simple.
js file
Our script is where the new client side binding begins to play a role at. The complete js file looks like:
zk.afterMount(function() { var binder = zkbind.$('$cal'), calConfig = {}; // day click handler calConfig.dayClick = function(data, jsEvent, view) { var popOffset = [jsEvent.clientX, jsEvent.clientY]; binder.command('doDayClicked', {dateClicked: data.toDate().getTime()}) .after(function() { var newPop = zk.$('$newEventPop'); newPop.open(newPop, popOffset); }); }; // event click handler calConfig.eventClick = function(event, jsEvent, view) { var popOffset = [jsEvent.clientX, jsEvent.clientY]; binder.command('doEventClicked', {evtId: event.id}) .after(function() { var modPop = zk.$('$modifyEventPop'); modPop.open(modPop, popOffset); }); } // event drop handler and event resize handler calConfig.eventResize = calConfig.eventDrop = function(event, delta, revertFunc, jsEvent, ui, view) { var startTime = event.start ? event.start.toDate().getTime() : 0, endTime = event.end ? event.end.toDate().getTime() : 0; binder.command('doEventChanged', {evtId: event.id, startTime: startTime, endTime: endTime}); } $('#cal').fullCalendar(calConfig); // the event handler of after 'doCommandChange' from server binder.after('doEventsChange', function(events) { $('#cal').fullCalendar('removeEvents'); $('#cal').fullCalendar('addEventSource', events); $('#cal').fullCalendar('rerenderEvents'); }); });
The first thing we do here at line 3 is to get our client binder. Then in our day clicked event handler, we use binder.command to trigger the doDayClicked command and pass the clicked date back to our view model at line 10. At line 11, we use binder.after to open our popup. Notice that when cascading binder.command and binder.after, the first argument in binder.after can be omitted. eventClick, eventDrop, and eventResize handlers follow the similar concept as well. Line 37 is where we initialize our full calendar and finally, begins at line 40 is where our doEventsChange callback. We use this callback to ensure that every time when events change at the View Model, they will be updated in our view.
View Model
Our view model is where we put all of our commands at. The structure of our view model looks like:
@NotifyCommand(value="doEventsChange", onChange="_vm_.events") @ToClientCommand({"doEventClicked", "doDayClicked", "doEventsChange"}) public class DemoViewModel { private EventsDataModel dataModel; private Collection<EventObject> events; private EventObject tempEvent; @Init public void init() throws GeneralSecurityException, IOException { // init event data model dataModel = new DemoDataModel(); events = dataModel.getEvents(); } @Command @NotifyChange("tempEvent") public void doDayClicked(@BindingParam("dateClicked") long dateClicked); @Command @NotifyChange("tempEvent") public void doEventClicked(@BindingParam("evtId") String evtId); @Command @NotifyChange("events") public void doEventChanged(@BindingParam("evtId") String evtId, @BindingParam("startTime") long startTime, @BindingParam("endTime") long endTime); @Command @NotifyChange("events") public void modEvent(@BindingParam("pop") Popup pop); @Command @NotifyChange("events") public void createEvent(@BindingParam("pop") Popup pop); }
Notice we omit command implementations here just to focus on the new client side binding. First we put our @NotifyCommand and @ToClientCommand on top of our class declaration.
At line 1,
@NotifyCommand(value="doEventsChange", onChange="_vm_.events")
We specify that our view model will trigger the doEventsChange command whenever events are changed. The _vm_ here stands for the current view model.
At line 2,
@ToClientCommand({"doEventClicked", "doDayClicked", "doEventsChange"})
We specify that every time these commands execute, ZK will notify our client, and if there is a binder.after callback at client, it will be invoked. Notice that we do not have a doEventsChange command in our view model, we put @NotifyCommand here just because we want to trigger the callback function at the client.
data model
The data model is used to access our data source. Here's the class diagram of our data model:
Since we can have different data sources, our data model will have different implementations depending on the data source. As shown in the demo above, we use a mock data source, which is just a map object in memory. If you check out the source code of this demo, we also have an implementation that uses Google Calendar's API.
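A sketch of the data-access shape being described (the names loosely follow the class diagram; this is our mock, not the demo's actual source — a Google Calendar variant would implement the same interface against the Calendar API):

```java
import java.util.ArrayList;
import java.util.Collection;

// The common interface every data source implements.
interface EventsDataModel {
    Collection<EventObject> getEvents();
    void addEvent(EventObject evt);
}

// A minimal event record for the sketch.
class EventObject {
    final String id;
    String title;
    EventObject(String id, String title) { this.id = id; this.title = title; }
}

// The in-memory "mock data source" variant used by the demo.
class InMemoryDataModel implements EventsDataModel {
    private final Collection<EventObject> events = new ArrayList<>();
    public Collection<EventObject> getEvents() { return events; }
    public void addEvent(EventObject evt) { events.add(evt); }
}
```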
Download
You can download the WAR file and all of the source code for this demo on GitHub.
Question
Explain how currency fluctuations affect the return on foreign investments.
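As a small worked sketch of the relationship (our own illustration, using the 20%/10% figures from the related exercise): the return to a home-currency investor compounds the local-market return with the currency move, so appreciation of the foreign currency boosts the realized return and depreciation erodes it.

```python
def dollar_return(local_return, currency_change):
    """Total return to a USD investor: (1 + r_local) * (1 + s_fx) - 1."""
    return (1 + local_return) * (1 + currency_change) - 1

# 20% local return; the foreign currency appreciates 10% against the dollar:
r = dollar_return(0.20, 0.10)   # 32% total, versus only 8% if it fell 10%
```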
Runtime Argument Passing In Java

Your Java application can accept any number of arguments from the command line. When you invoke an application, the runtime system passes the command-line arguments to the application's main method via an array of Strings; each element of the array contains one of the values specified on the command line. For example, if you were using DOS, you would invoke a Sort application on a data file like this:

    C:\> java Sort ListOfFriends

VM arguments, by contrast, are arguments such as system properties that are passed to the Java interpreter itself rather than to your program. (Classes such as System are in the java.lang package, so they are available without needing to specify any import statements in your Java programs.)

Consider a program ArgsTest.java:

    package test;

    public class ArgsTest {
        public static void main(String[] args) {
            System.out.println("Program Arguments:");
            for (String arg : args) {
                System.out.println("\t" + arg);
            }
        }
    }

If your program needs to support a numeric command-line argument, it must convert the String itself:

    int num;
    try {
        num = Integer.parseInt(args[0]);
    } catch (NumberFormatException nfe) {
        // The first argument isn't a valid integer.
    }

The number parsing functions throw a java.lang.NumberFormatException when parsing fails. Don't let the exception propagate; instead, catch it, print out a helpful message, and then exit your program with a nonzero error code. The parsing functions are static, so you don't need to create any new objects to use them. Note the difference between string concatenation and numeric addition: with String a = "12", b = "34", printing a + b gives "1234", while printing Integer.parseInt(a) + Integer.parseInt(b) gives 46 — and crashes with NumberFormatException if either string isn't a valid number.

To use the arguments outside of main — for example, in a constructor — pass them along explicitly, since main's parameter is local to main and main is static. For example, with a class like this:

    public class MyProgram {
        private final String[] args;

        public MyProgram(String[] args) {
            this.args = args;
        }

        // The entry-point for my program.
        public static void main(String[] args) {
            new MyProgram(args).run();
        }

        private void run() {
            // work with this.args here
        }
    }

Passing arguments in Eclipse: click on Run -> Run (not Run Last Launched). In the large subwindow of the Run window there is a set of tabs, labelled Main, Arguments, JRE, etc. Click on the Arguments tab; program arguments go after your Java class. Then just click Apply, followed by Run.

Beyond plain data arguments, many programs also accept options. The UNIX command that prints the contents of a directory — the ls utility program — accepts arguments that determine which file attributes to print and the order in which the files are printed; typically an application requires its data arguments to be last on the command line, but that doesn't always have to be the case. A small option-parsing framework can model this: rather than having elaborate checks in the constructor itself, provide a controlled set of addOption() methods (the constructor can also create an instance of java.util.regex.Pattern used to match arguments). Options taking a value also have a separator and might accept details; minData and maxData are the minimum and maximum number of acceptable data arguments for an option set, and this value is passed on to any option set and any option created subsequently. Enums such as Prefix and Separator can have their own constructors, allowing for the definition of an actual character representing each enum instance, and enums can also conveniently be used with genericized collections; this method design ties in directly with the multiplicity defined for each option. For more information about applet parameters, see Communicating With the User.
This action brings up the Program arguments window, where you should type in your input values. For example, a program might allow the user to specify verbose mode--that is, specify that the application display a lot of trace information--with the command line argument -verbose. Note that leading and trailing whitespace characters are removed from the values stored in the args array. Early on, I found it annoying to manually create and maintain the code for processing the various options. | http://dailyerp.net/command-line/runtime-argument-passing-in-java.html | CC-MAIN-2017-39 | refinedweb | 1,065 | 55.44 |
Namespace in PHP:
The term namespace is very much common in OOP based language, basically it is a collection of classes, objects and functions.
Namespaces are one of the major change in PHP 5.3.0. Naming collision of classes, functions and variables can be avoided.
We can put any code of PHP within a namespace, but in general class, functions, and constants are placed for easy identification and retrieval.
A namespace is declared with namespace keyword, but we should keep remember that namespace is introduced in PHP 5.3.0 and if we try this in older version of PHP then a error message would appear. The namespace should always declare on the top of the page.
PHP OOP Namespace Example:
<?php
namespace MyProject;
?>
Exaplanation:
We should declare the namespace as above and we can also place a backslash between multiple names of namespace.
Advertisements
Posted on: February | http://www.roseindia.net/tutorial/php/phpoop/php-namespace.html | CC-MAIN-2016-44 | refinedweb | 149 | 66.54 |
On 09/30/2011 04:06 PM, Tejun Heo wrote:> Hello, Jeremy.>> On Fri, Sep 30, 2011 at 09:28:15AM -0700, Jeremy Fitzhardinge wrote:>> Make stop_machine() safe to call early in boot, before SMP has been>> set up, by simply calling the callback function directly if there's>> only one CPU online.> ...>> @@ -485,6 +485,9 @@ int __stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus)>> .num_threads = num_online_cpus(),>> .active_cpus = cpus };>> >> + if (smdata.num_threads == 1)>> + return (*fn)(data);>> +> As others have pointed out, you'll need to call both local and hardirq> disables. Also, I think the description and the code are a bit> misleading. How aobut setting cpu_stop_initialized in cpu_stop_init()> and testing it from __stop_machine() instead? I think it would be> better to keep the behavior as uniform as possible once things are up> and running.Yes, I was wondering about that. Do you think the patch (with irq fixesin place) would affect the behaviour of an SMP kernel running UP? Jdiff --git a/kernel/stop_machine.c b/kernel/stop_machine.cindex f5855fe3..70b3be4 100644--- a/kernel/stop_machine.c+++ b/kernel/stop_machine.c@@ -41,6 +41,7 @@ struct cpu_stopper { }; static DEFINE_PER_CPU(struct cpu_stopper, cpu_stopper);+static bool stop_machine_initialized = false; static void cpu_stop_init_done(struct cpu_stop_done *done, unsigned int nr_todo) {@@ -386,6 +387,8 @@ static int __init cpu_stop_init(void) cpu_stop_cpu_callback(&cpu_stop_cpu_notifier, CPU_ONLINE, bcpu); register_cpu_notifier(&cpu_stop_cpu_notifier); + stop_machine_initialized = true;+ return 0; } early_initcall(cpu_stop_init);@@ -485,7 +488,7 @@ int __stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus) .num_threads = num_online_cpus(), .active_cpus = cpus }; - if (smdata.num_threads == 1) {+ if (!stop_machine_initialized) { /* * Handle the case where stop_machine() is called early in boot * before SMP startup. 
| https://lkml.org/lkml/2011/9/30/417 | CC-MAIN-2015-27 | refinedweb | 267 | 57.27 |
Alex Papadimoulis' .NET Blog: Alex's musings about .NET and other Microsoft technologies 2005-04-22T14:06:00Z

ScriptOnly - The Opposite of a NOSCRIPT

<P>Despite all of the advances in client-side scripting and the wonderful JavaScript libraries like Prototype and Scriptaculous, some visitors still browse with JavaScript turned off, and their experience needs to degrade gracefully.</P>

<P>There's a lot of ways that this can be accomplished, but one of the more popular ways is with the SCRIPT/NOSCRIPT combo...</P>

<BLOCKQUOTE><PRE>
<script type="text/javascript">
  document.write('Only users with JavaScript will see me.');
</script>
<noscript>
  Only users without JavaScript will see me.
</noscript>
</PRE></BLOCKQUOTE>

<P>While this works fine in a lot of scenarios, it can get especially tricky when you want to put server-side controls on the SCRIPT side of things. A lot of developers resort to something like this...</P>

<BLOCKQUOTE><PRE>
<div id="javaScriptOnly" style="display:none">
  Only users with JavaScript will see me.
  <asp:LinkButton runat="server" ... />
</div>
<div id="noJavaScript" style="display:block">
  Only users without JavaScript will see me.
  <asp:Button runat="server" ... />
</div>
<script type="text/javascript">
  document.getElementById('javaScriptOnly').style.display = 'block';
  document.getElementById('noJavaScript').style.display = 'none';
</script>
</PRE></BLOCKQUOTE>

<P>... and of course, things quickly get much uglier once you do this in the real world.</P>

<P>One solution that I use is a simple, custom control called ScriptOnly. It works just like this...</P>

<BLOCKQUOTE><PRE>
<inedo:ScriptOnly runat="server">
  Only users with JavaScript will see me.
  <asp:LinkButton runat="server" onClick="doSomething" ... />
</inedo:ScriptOnly>
<noscript>
  Only users without JavaScript will see me.
  <asp:Button runat="server" onClick="doSomething" ... />
</noscript>
</PRE></BLOCKQUOTE>

<P>Behind the scenes, ScriptOnly is a very simple control...</P>

<BLOCKQUOTE><PRE>
[ParseChildren(false)]
public class ScriptOnly : Control
{
    protected override void Render(HtmlTextWriter writer)
    {
        //Render contents to a StringWriter
        StringWriter renderedContents = new StringWriter();
        base.Render(new HtmlTextWriter(renderedContents));

        //write out the contents, line by line
        writer.WriteLine("<script type=\"text/javascript\">");
        StringReader sr = new StringReader(renderedContents.ToString());
        while (sr.Peek() >= 0)
        {
            // This could be optimized to write on one line; but
            // I've found this makes it easier to debug when
            // looking at a page's source
            writer.WriteLine(
                "document.writeln('{0}');",
                jsEscapeText(sr.ReadLine()).Trim());
        }
        writer.WriteLine("</script>");
    }

    private string jsEscapeText(string value)
    {
        if (string.IsNullOrEmpty(value)) return value;

        // This, too, could be optimized to replace character
        // by character; but this gives you an idea of
        // what to escape out
        return value
            /* \  -->  \\ */
            .Replace("\\", "\\\\")
            /* '  -->  \' */
            .Replace("'", "\\'")
            /* "  -->  \" */
            .Replace("\"", "\\\"")
            /* (newline) --> \n */
            .Replace("\n", "\\n")
            /* (creturn) --> \r */
            .Replace("\r", "\\r")
            /* </script> string */
            .Replace("</script>", "</scri'+'pt>");
    }
}
</PRE></BLOCKQUOTE>

<P>When pre-registered in your web.config, it works just as well as the NOSCRIPT tag.</P>

Alex Papadimoulis

Workaround For VirtualPath Weirdness With Custom VirtualPathProviders

<P>[[ <EM>Meta-blogging: as you may have noticed from the name/description change (and of course, this article) I've decided to shift the focus of this blog back to the "front lines" of Microsoft/.NET development technologies. All other rants and ramblings will go to Alex's Soapbox over at WTF</EM> ]]</P>

<P>If you've ever come across this error...</P>

<BLOCKQUOTE>The VirtualPathProvider returned a VirtualFile object with VirtualPath set to '/global/initrode/embeddedControl.ascx' instead of the expected '//global/initrode/embeddedControl.ascx'</BLOCKQUOTE>

<P>... then chances are you're implementing a VirtualPathProvider in order to serve up embedded Page/Control resources or something fun like that. Let's just hope you're not serving pages from a ZIP file. And if you have no idea what a VirtualPathProvider is, then do check out that MSDN article I linked to get an idea.</P>

<P>The reason behind this error is identified in Microsoft Bug #307978: ASP.NET is erroneously replacing page compilation errors with the bad virtual path error.
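For readers who haven't worked with one: a virtual path provider is just a class registered at application startup that intercepts path lookups. A minimal sketch follows; the class name, assembly name, and resource name are illustrative assumptions, not taken from the article.

```
using System;
using System.IO;
using System.Web.Hosting;

// Sketch of a VirtualPathProvider that serves one embedded user control.
// EmbeddedResourceProvider and the resource name are hypothetical.
public class EmbeddedResourceProvider : VirtualPathProvider
{
    public override bool FileExists(string virtualPath)
    {
        return IsEmbeddedPath(virtualPath) || Previous.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        if (IsEmbeddedPath(virtualPath))
            return new EmbeddedResourceFile(virtualPath);
        return Previous.GetFile(virtualPath);
    }

    private static bool IsEmbeddedPath(string virtualPath)
    {
        return virtualPath.EndsWith("embeddedControl.ascx",
            StringComparison.OrdinalIgnoreCase);
    }
}

public class EmbeddedResourceFile : VirtualFile
{
    // Passing the requested path straight through to base() matters;
    // normalizing it yourself is one way to trip the mismatch error above.
    public EmbeddedResourceFile(string virtualPath) : base(virtualPath) { }

    public override Stream Open()
    {
        return typeof(EmbeddedResourceFile).Assembly
            .GetManifestResourceStream("MyAssembly.embeddedControl.ascx");
    }
}
```

The provider would be registered in Application_Start with `HostingEnvironment.RegisterVirtualPathProvider(new EmbeddedResourceProvider())`.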
While ensuring that your virtual-pathed page will compile is a sure-fire way to fix the error, finding the compilation errors can be a bit of pain. When they do surface, they look something like this:</P>

<BLOCKQUOTE>error CS1002: ; expected</BLOCKQUOTE>

Alex Papadimoulis

Coghead: Web Applications for Dummies by Dummies

<p>A new startup called Coghead has been making a familiar promise:</p>

<blockquote><p><em>Coghead will enable nonprogrammers to rapidly create their own custom business software</em></p></blockquote>

<p>There's a bit of ambiguity surrounding what is and isn't a 4GL (Fourth Generation Language), so I'll stick with James Martin's characterization from his 1982 book, <em>Application Development Without Programmers</em>. The book's title should give you a good enough understanding of the goal of a 4GL: the ability to develop complex custom business software through the use of simple-to-use tools.</p>

<p>In the quarter-century since <em>Application Development Without Programmers</em>, products making the same promise have come and gone, and programmers have yet to be developed out of a job.</p>

<p>Like many other common-sense principles, the "software machine" is one that some programmers don't get. Be it with The Tool or The Customer Friendly System, these programmers believe they are so clever and so intelligent that they can program even themselves into obsolescence.</p>

<p>Some businesses don't get it, either. But they will eventually pay the price: the "mission critical" software they developed for themselves in Microsoft Access will become their albatross, costing time, opportunity, and, eventually, lots of money for real programmers to fix their mess.
</p><p: </p><blockquote><p><em>"anyone who can code a simple Excel macro should have little trouble using Coghead to create even sophisticated enterprise apps like logistics trackers, CRM programs, or project management systems.</em></p></blockquote><p.</p><blockquote><p><img src="" alt="" width="554" height="391" /><br /><em>the Coghead web IDE</em></p></blockquote><p.</p><p. </p><p.</p><img src="" width="1" height="1">Alex Papadimoulis Using Enterprise Manager! (Use DDL Instead)<P.</P> <P>Most applications exist in at least two different environments: a development environment and a production environment. Promoting changes to code from a lower level (development) to a higher level (production) is trivial. You just copy the executable code to the desired environment. </P> <UL> <LI>Click on the desired database. </LI> <LI>Click on Action, New, then Table. </LI> <LI>Add a column named "Shipper_Id" with a Data Type "char", give it a length of 5, and uncheck the "Allow Nulls" box. </LI> <LI>In the toolbar, click on the "Set Primary Key" icon. Then you skip 22 steps. </LI> <LI>In the toolbar, click on the "Manage Relationships…" button. </LI> <LI>Click on the New button, and then select "Shippers" as the Foreign key table. </LI> <LI>Select "Shipper_Id" on the left column and "Shipper_Id" on the right column. Skip the remaining steps. </LI></UL> <P>Not only is this process tedious, but you're prone to making errors and omissions when using it. Such errors and omissions leave the higher-level and lower-level databases out of sync. </P> <P>Fortunately, you can use an easier method to maintain changes between databases: Data Definition Language (DDL). The change described in the previous example can be developed in a lower-level environment and migrated to a higher-level environment with this simple script: </P> <BLOCKQUOTE><PRE) </PRE></BLOCKQUOTE> <P. </P> <P. </P> <P>Enterprise Manager helps you with this transition. 
Before making database changes, Enterprise Manager generates its own DDL script to run against the database. With the "Save Change Script" button, you can copy the generated DDL script to disk, instead of running it against the database. </P> <P. </P><img src="" width="1" height="1">Alex Papadimoulis Crap: I'm an Official MVP for MS Paint<p><img style="FLOAT: left" src="" /> <p>There are few emails that one will receive in his lifetime that will render him completely speechless. This past weekend, I received one such email. Its subject read <i>Congratulations on your MVP Award!</i> <p>I struggle with the words to describe how elated I am to be chosen for this award. Sure, I’ve worked my butt off in <font color="#0000ff"><u>microsoft.public.accessories.paint</u></font>, helping both newbies and vets solve their problems. But I never expected this. For me, it’s always been about my love of the Paint, and sharing my knowledge and expertise of Paint with the world. <p! <hr /> <p><b>Why are some of my edges jagged?</b><br /. <p><img src="" /> <p. <p><img src="" /> <p>And like magic, the jagged edge is no more! <p><b>How do I do shadows?</b><br />Shadows in Paint are incredibly easy to do:<br />1) Draw the shape you want to draw, but instead use black<br />2) Draw the shape you want to draw, using the colors you really want to use, but draw it at an angle slightly away from the black shape <p><img src="" /> <p>Look ma, a shadow! <p><b>How can I make realistic looking Hair?</b><br />This is one of the more difficult things to accomplish in Paint. But it’s certainly doable. First, you need to figure out what hair style you want to use. Once you figure that out, it’s just a matter of using the right tool. <p><i>Curl</i><br /><img src="" /><br />Believe it or not, this is a simple matter of using the wonderfully handy spray can tool. Just pick the hair color, and go crazy!!! 
<p><i>Baldy</i><br /><img src="" /><br />This hairstyle is so ridiculously simple you’ll wonder why more cartoons characters aren’t bald. Simply apply the ellipse tool twice, above each ear, and you’ve got yourself a bald guy! <p><i>Side Part</i><br /><img src="" /><br />When you want to make your character look neat and orderly, only the polygon tool will do. Here’s something funny: I like to part my own hair on the left, but draw it parted on the right. Funny, see, I told you! <p><i>Bed Head</i><br /><img src="" /><br />Oh no, caught red handed without a comb! You can easily achieve this look with the use of the paint brush tool. Don’t go too crazy, it’s pretty easy to slip and go through an eye. <hr /> <p>Be sure to congratulate Jason Mauss as well. He was <A href="">awarded</a> this year’s MSN Messenger MVP.</p><img src="" width="1" height="1">Alex Papadimoulis Agent for SQL Server Express: Jobs, Jobs, Jobs, and Mail<P>UPDATE: My appologies, but with the advent of relatively inexpensive commercial solutions avaiable, I've decided to suspend this project indefinitely. If I do need a solution for myself, I may take it up again. But until then, I would recommend getting a commercial version (<A href="" mce_href=""><STRIKE></STRIKE></A><STRIKE> is one source</STRIKE>) or using the Windows Task Manager to run batch files. </P> <P (<A href=""></A>) is out there - I have not used this, however.</P> <P>I was pretty excited to learn about <A href="" mce_href="">SQL Server: Express Edition</A>.. </P> <P. </P> <P. </P> <P>I'm developing an application that will fill this functionality gap: Express Agent. I was hoping to have this complete before the launch of SQL Server Express, but other priorities prevented this from happening. Express Agent strives to replace and improve upon the SQL Agent that was left out. </P> <P>Like the SQL Agent, Express Agent runs as a service. However, Express Agent can also be "plugged in" to a hosted web-application as a HttpHandler. 
This allows Express Agent agent to run as background thread, running jobs and sending email as needed. </P> <P. </P> <P. </P> <P ... </P> <P><IMG height=332 </P> <P><IMG height=415 </P> <P><IMG height=476 </P> <P.</P><img src="" width="1" height="1">Alex Papadimoulis' Down in Detroit: The 2005 Launch Party<p>I saw that Jason Mauss wrote about <a href="">his experience</a>. </p><p. </p><blockquote><table border="0"><tbody><tr><td><img height="200" src="" width="200" /></td><td><img height="203" src="" width="164" /></td></tr><tr><td colspan="2">Tax-free Booze</td></tr></tbody></table></blockquote><p. </p><blockquote><p><img height="368" src="" style="width: 292px; height: 368px" width="292" /><br />The Ren Cen</p></blockquote>. </p><p>Coming in so late offered one other large disadvantage: missing out on many of the cooler freebies given out by the vendors. Here's a quick classification/rarity guide on the Detroit Launch Event vendor free stuff: </p><ul><li><strong>Laser Pen</strong> (Rare) - Offered by Berbee, this was by far the coolest give-away. Only a few lucky attendies scored this combination pen/laser pointer. Surprisingly, no one abused these devices during the sessions. </li><li><strong>Blinking Yo-yo</strong> (Rare) - I somehow managed to get one of these. It was really cool until I realized it was not a "sleeper" yo-yo, so I gave it away to a colleague. </li><li><strong>Blinking HP Necklace</strong> (Uncommon) - About a third of the attendees had these, leading to two simultaneous yet conflicting feelings: "those are incredibly tacky" and "I wish I had one." </li><li><strong>Quest Software Weeble</strong> (Uncommon) - I don't know what these were called actually, but it was just a yellow cotton ball with paper feet and plastic eyes glued on. Despite having an uncommon rarity, no one really wanted these. </li><li><strong>Intel Mints</strong> (Uncommon) - These were in a neat, small metal container. 
They are borderline rare, mostly because you had to actually talk to the rep to get one. They were not just lying out like everything else. </li><li><strong>Pens</strong> (Common) - A handful of vendors were giving these away, giving to a good variety of pens. All however were cheap and plastic. </li><li><strong>Post-It Pads</strong> (Common) - Surprisingly, only one vendor was giving these away. Probably a good thing, just one less thing to end up in the landfill after the event. </li></ul><p. </p><blockquote><p><img height="298" src="" width="410" /><br />Too cute to eat</p></blockquote><p>After breakfast, there was the keynote speech and then a technical session. Not quite sure if there's anything more I can say about those. </p><p. </p><p>After lunch, they had another technical session. </p><p. </p><blockquote><p><img height="384" src="" width="640" /><br />Gives you the energy to throw it away</p></blockquote><p. </p>. </p>!</p><img src="" width="1" height="1">Alex Papadimoulis 5.0: Still A "Toy" RDBMS<p>"Ha," an email from a colleague started, "I think you can finally admit that MySQL is ready to compete with the big boys!" I rolled my eyes and let out a skeptical "uh huh." His email continued, "Check out Version 5. They now have views, stored procedures, and triggers." <p. <p. <p>The MySQL developers claim to have built a reliable RDBMS yet seem to lack a thorough understanding of RDBMS fundamentals, namely data integrity. Furthermore, they will often surrogate their ignorance with arrogance. Consider, for example, their documentation on <a href="">invalid data</a> [emphasis added]: <blockquote> <p>MySQL allows you to store certain incorrect date values into DATE and DATETIME columns (such as '2000-02-31' or '2000-02-00'). The idea is that <b>it's not the job of the SQL server [sic] to validate dates</b>.</p></blockquote> <p <em>point</em> of a DBMS; it ensures that data are typed and valid according to business rules (i.e. 
an employee can't have -65 dependents).

<p>But I digress. This is the 5.0 release. They've added views. They've added stored procedures. They've added triggers. Maybe things have changed.</p>

<p>After it installed, I fired up the MySQL prompt and started hackin' around.</p>

<blockquote><pre>
mysql> CREATE DATABASE ALEXP;
Query OK, 1 row affected (0.00 sec)

mysql> USE ALEXP;
Database changed

mysql> CREATE TABLE HELLO (
    ->   WORLD VARCHAR(15) NOT NULL PRIMARY KEY,
    ->   CONSTRAINT CK_HELLO CHECK (WORLD = 'Hello World')
    -> );
Query OK, 0 rows affected (0.14 sec)
</pre></blockquote>

<p>Wow! I'm impressed! MySQL 5.0 has check constraints! Maybe I was wrong about these guys ...</p>

<blockquote><pre>
mysql> INSERT INTO HELLO(WORLD) VALUES('Hi World');
Query OK, 1 row affected (0.05 sec)
</pre></blockquote>

<p>Err ... umm ... wait a minute. You did just see me put that check constraint on the HELLO table, right? It's not a very complicated check; maybe I did it wrong?</p>

<blockquote><pre>
mysql> SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
    -> WHERE TABLE_NAME='HELLO';
+--------------------+-------------------+-----------------+--------------+------------+-----------------+
| CONSTRAINT_CATALOG | CONSTRAINT_SCHEMA | CONSTRAINT_NAME | TABLE_SCHEMA | TABLE_NAME | CONSTRAINT_TYPE |
+--------------------+-------------------+-----------------+--------------+------------+-----------------+
| NULL               | alexp             | PRIMARY         | alexp        | hello      | PRIMARY KEY     |
+--------------------+-------------------+-----------------+--------------+------------+-----------------+
1 row in set (0.01 sec)
</pre></blockquote>

<p>The check constraint was parsed without complaint and then silently discarded.</p>
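For contrast, here is the same experiment against an engine that takes CHECK constraints seriously. This is an editorial illustration using SQLite through Python's standard library, not anything from the original post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE HELLO (
        WORLD VARCHAR(15) NOT NULL PRIMARY KEY,
        CONSTRAINT CK_HELLO CHECK (WORLD = 'Hello World')
    )
""")

# The valid row goes in fine...
conn.execute("INSERT INTO HELLO(WORLD) VALUES ('Hello World')")

# ...and the row MySQL 5.0 silently accepted is rejected outright.
try:
    conn.execute("INSERT INTO HELLO(WORLD) VALUES ('Hi World')")
except sqlite3.IntegrityError as e:
    print("Rejected:", e)
```

The constraint is enforced at the engine, so no client application, no matter how carelessly written, can slip invalid data into the table.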
<blockquote> <p>The trigger cannot use statements that explicitly or implicitly begin or end a transaction such as START TRANSACTION, COMMIT, or ROLLBACK. </p></blockquote> <p>Oh that's just lovely. Leave it to MySQL to drop the most important use of triggers (complex data validation) and encourage their most obnoxious use (difficult to maintain business logic). <p>As far as other features added in MySQL, I think they are definitely a step in the right direction. Stored Procedures are a key component in creating a clean interface with strong-cohesion to the data layer (see, <a href="">Strong vs Weak cohesion</a>). Views (virtual tables) are absolutely essential for creating an effective and maintainable data model. <p. </p><img src="" width="1" height="1">Alex Papadimoulis"When Should I Use SQL-Server CLR User Definied Types (UDT)?"<p>No one has asked me that question just yet, but with the release of SQL Server 2005 just around the corner, I'm sure a handful of people will. Unlike regular <A href="">User Defined Types</a>, CLR UDTs are a new feature of SQL Server 2005 that allows one to create a .NET class and use it as a column datatype. As long as a <a href="">few requirements</a> are followed, one can create any class with any number of properties and methods and use that class as a CLR UDT. <p>Generally, when a new feature is introduced with a product, it can be a bit of a challenge to know when and how to use that feature. Fortunately, with SQL Server's CLR UDTs, knowing when to use them is pretty clear: <p><b>Never.</b> <p>Let me repeat that. Never. You should never use SQL Server CLR User Defined Types. I'm pretty sure that this answer will just lead to more questions, so allow me to answer a few follow-up questions I'd anticipate. <p><u>Why Not?</u><br /. <p. <p. <p. <p><u>But wouldn't I want to share my .NET code so I don't have to duplicate logic?</u><br / <i>and</i> have some client-side code to validate it was entered as seven digits. 
If the system allows data entry in other places, by other means, that means more duplication of the "Account Number" logic. <p>By trying to share business logic between all of the tiers of the application, you end up with a tangled mess of a system. I have illustrated this in the diagram below. <blockquote> <p><img src="" /></p></blockquote> <p. <p><u>Never?!? How can there never, ever be an application of CLR UDTs?</u> needs to accomplish. If you can come up with an appropriate use of a CLR UDT in an information system, I'll buy you a <a href="">t-shirt or a mug</a>. <p><u>But what about the <a href="">samples</a> provided? That's a use, right there!</u><br />Allow me to address these ... <p><i>Supplementary-Aware String Manipulation / UTF8 String User-Defined Data Type</i><br />Both of these samples have to do with UTF-8 character encoding. Without getting into the <a href="">details</a>, UTF-8 encodes characters as one, two, three, or four 8-bit bytes, meaning you can not do anything with characters in the string (length, substring, etc) unless you read it byte-by-byte. This works great for preserving "funny characters" while transmitting data but is a poor choice for storage. UCS-2 uses a fixed-size character format of 16-bits per character and is what should be used for storing character data. <p><i>Calendar-Aware Date/Time UDTs</i><br />Let's think about this. A point in time is a point in time; how it's described varies by culture ("Monday", "Lunes"), time zone (+6:00 GMT, -3:00GMT), calendar (Gregorian, Aztek), and format (2005-08, Aug '05). Describing a point in time properly is essential when interfacing with people or other systems. The keyword in that last sentence was "interface;" such description is best done in the "interface" tier of a system, not in the data tier. Doing this makes as much sense as putting currency conversion and language translation in the database. 
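That separation (one stored instant, many renderings) is easy to see in code. The following sketch is an editorial illustration of the principle, not something from the original post:

```python
from datetime import datetime, timezone, timedelta

# One point in time, stored once, in UTC.
instant = datetime(2005, 8, 15, 18, 30, tzinfo=timezone.utc)

# How it is *described* is purely an interface-tier concern:
print(instant.isoformat())         # 2005-08-15T18:30:00+00:00
print(instant.strftime("%b '%y"))  # Aug '05

# A -3:00 GMT rendering of the very same instant:
print(instant.astimezone(timezone(timedelta(hours=-3))).isoformat())
```

The stored value never changes; only its presentation does, which is exactly why calendar and format awareness belong in the interface tier rather than in a database column type.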
<p><i>Multi-dimensional Points and Latitude/Longitude</i><br />A geospatial location is described with Latitude <i>and</i> Longitude. Not Lati-longi-tude. These are two separate attributes and putting them in the same column violates First Normal Form. The same goes for points and other "array-like" structures. <p><i>Imaginary Numbers</i><br />Seriously? Correct me if I'm wrong, but the only actual use for imaginary numbers is in solving of differential equations. If you're not sure why this invalidates the example, say these two phrases aloud: "solving differential equations" and "relational database." Didn't that feel just like saying "drilling a hole" and "hacksaw?" <p><i>But what about if I want to put down "SQL CLR UDTs" on my resume?</i><br />What's stopping you now? By reading this article, you know everything you will ever need to about CLR UDTs. With this on your resume, you will be able to use your expert knowledge on the topic to never use CLR UDT. <p>I hope that clears things up about CLR UDT. Hopefully now you look forward to not using them and strongly opposing anyone who suggests it. Oh, and I really am serious about sending The Daily WTF swag to whoever can come up with a use for these things. So think about a use; you may just get a free t-shirt. </p><img src="" width="1" height="1">Alex Papadimoulis"What's the Point of [SQL Server] User-Defined Types?"<p). </p> <p". </p> <p. </p> <p:</p> <blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> <p><font face="Courier New" size="2">CREATE TABLE [Transactions] (<br /> [Transaction_Id] INT IDENTITY(1,1) PRIMARY KEY NOT NULL,<br /> [Transaction_Type] VARCHAR(5) NOT NULL <br /> CHECK ([Transaction_Type] IN ('Debit','Credit','Escrow')),<br /> [Transaction_Amount] DECIMAL(4,2) NOT NULL<br /> CHECK ([Transaction_Amount] <> 0),<br /> [Reference_Code] CHAR(5)<br /> CHECK ([Reference_Code] LIKE '[A-Z][ A-Z][A-Z][A-Z][A-Z]'))<br />)</font></p></blockquote> <p.</p> .</p> <p.</p> <p>It's really easy to do. 
I'll use the SQL 2005 syntax, but you can do the same things in 2000 using sp_addtype and sp_addrule:</p> <blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> <p><font face="Courier New" size="2">CREATE TYPE USERNAME FROM VARCHAR(20)<br />GO</font></p> <p><font face="Courier New" size="2">CREATE RULE USERNAME_Domain<br /> AS @Username = LTRIM(RTRIM(@Username))<br /> AND LOWER(@Username) NOT IN ('admin','administrator','guest')<br />GO</font></p> <p><font face="Courier New" size="2">EXEC sp_bindrule 'USERNAME_Domain', 'USERNAME'<br />GO</font></p></blockquote> <p>And that's it. Now you can use the type throughout the database just as you normally would, and you'll never need to check or verify to make sure that someone slipped in an invalid value ...</p> <blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> <p><font face="Courier New" size="2">CREATE TABLE [User_Logons] (<br /> [Username] USERNAME NOT NULL,<br /> [Logon_Date] DATETIME NOT NULL,<br /> [Success_Indicator] CHAR(1) NOT NULL<br /> CHECK ([Success_Indicator] IN ('Y','N')),<br /> PRIMARY KEY ([Username],[Logon_Date])<br />)</font></p></blockquote><img src="" width="1" height="1">Alex Papadimoulis A Nail: Old Shoe or Glass Bottle?<blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> <p dir="ltr" style="MARGIN-RIGHT: 0px">"A client has asked me to build and install a custom shelving system. I'm at the point where I need to nail it, but I'm not sure what to use to pound the nails in. Should I use an old shoe or a glass bottle?</p></blockquote> <p>How would you answer the question?</p> <blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> <p.</p> <p.</p></blockquote> <p>I would hope that just about any sane person would choose something close to (b). Sure, it may seem a bit harsh, but think about it from the customer prospective: how would you feel if your carpenter asked such a question?</p> <p>I find it a bit disturbing, however, that this attitude is not prevalent in software development. 
In fact, from what I can tell, it seems to be discouraged. </p> <p).</p> <blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> <p><strong>Subject: Aggregates Help<br /></strong). </p> <p.</p> <p>I can figure out how to get an "array" in SQL with a table variable, but how do I make a jagged array? Any ideas?</p></blockquote> <p>Some of the folks on in the list took it as a fun challenge, going back and forth with how deficiencies are calculated, and providing some incredibly brilliant ways of solving this problem. </p> <p>Not I, though. My response was something to the effect of …</p> <blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> .</p> <p.</p></blockquote> <p>How do you think you would have responded to that post? Would you have taken the challenge to think about how to solve the problem or just take the opportunity to school the poster? </p> <p!</p> <p. </p> <p."</p> <p>Am I on the wrong side in this? Should we actively be encouraging new programmers to use their horrific techniques? Or am I just looking at this the totally wrong way?</p><img src="" width="1" height="1">Alex Papadimoulis, XP, SOA, ESB, ETC Are Dead; FAD is the Future<p>I've come across a truly revolutionary software development methodology called <strong>Front Ahead Design (FAD)</strong>. ...</p> <p><strong>I. Front Ahead Design<br /></strong <em>Do What It Takes</em> to fill in the functionality gaps.</p> <p><strong>II. Do What It Takes<br /></strong>Other <em>Do What It Takes</em>. Your customer will love you.</p> <p><strong>III. Code Light, Not "Right"<br /></strong <em>Do What It Takes</em> to add the functionality to your interface. No more.</p> <p><strong>IV. "Throw Away" Diagrams</strong><br />Think of all the Visio diagrams you've drawn over the years. Sequence diagrams, context diagrams, flow charts, and so on. Was that really productive? Did your customer ever see any of those? Were those diagrams even relevant after the system was finally developed? 
Didn't think so.</p> <p>In FAD, all diagrams are made on a disposable medium. Whiteboards, napkins, even your forearms work. And there is no formal modeling language to battle with: just <em>Do What It Takes</em> to draw and explain your design to other developers.</p> <p><strong>V. Life Is Short (a.k.a. Patchwork)</strong><br /. </p> <p>In FAD, this isn't even a concern. We know the short life span of a system and develop every feature (from the interface) as a patch. Maintenance programmers can come in and <em>Do What It Takes</em> to add their patches. In FAD, we don't even try to stop the aging process. We encourage it.</p> <hr /> .</p> <p>I also came across some community sites:<br /> <strong>FADblogs.com</strong> - .TEXT blogging site, open for anyone to blog about FAD<br /> <strong>FADisIn.com</strong> - general informational site with articles, help, discussion, and tools<br /> <strong>ItsAFAD.com</strong> - gallery of successful projects (from small business to enterprise) that have successfully used FAD</p> <p>They're all under construction, but I'm helping a lot with the FADblogs.com, so let me know if you'd like to be one of the FAD bloggers. </p> <p>Next article: a comparison of the FAD design tools, including H/YPe and nFAD.</p><img src="" width="1" height="1">Alex Papadimoulis Inflation<p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><?xml:namespace prefix = o<o:p> </o:p>My post the other day (<A href="">Computer Programmer Inflation</a>) got me thinking of another type of inflation that I've observed over the years: Comicality Inflation. 
Like other types of inflation, it's not as if things have gotten funnier, it’s just the terms we use to describe them.</p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt">Back in the day (and I'm talking <em>the </em>day; as in, before electricity and all that), if a friend sent us a letter that we found entertaining, we would simply compliment it when we penned our reply: <em>Archibald, I must concede your quip about poor <?xml:namespace prefix = st1<st1:place w:Rutherford</st1:place>’s embarrassing gaffe was quite witty and remarkably entertaining</em>. </p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt".</p> <ul> <li> <div class="MsoNormal" style="MARGIN: 0in 0in 0pt"><strong>ha</strong> - mildly amusing, possibly causing a very soft chuckle</div></li> <li> <div class="MsoNormal" style="MARGIN: 0in 0in 0pt"><strong>haha</strong> - funny, causing at a minimum a chuckle, but most likely a snort </div></li> <li> <div class="MsoNormal" style="MARGIN: 0in 0in 0pt">...</div></li> <li> <div class="MsoNormal" style="MARGIN: 0in 0in 0pt"><strong>a-hahahahahaha</strong> - busting out in full-blown laughter requiring a pre-laughter breath (hence, the "a-")</div></li></ul> <p class="MsoNormal" style="MARGIN: 0in 0in 0. </p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt">Anyway, it would seem that "haha" would be the perfect way of indicating the comicality of something you read.<span style="mso-spacerun: yes"> </span>After all, it expands as things get funnier and it even has lots of room for personalization ("ho-ha-ha","teehehe",etc). Alas, it is not; some one had to come along and create the acronym "LOL". 
</p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt">On its own, I don't think that LOL (Laugh Out Loud) is a terrible thing. After all, there's no official "ha" scale and it's quite hard to tell if "hahaha" means "I laughed out loud" or "I had a series of snickers."<span style="mso-spacerun: yes"> </span>It's the abuse and extension of LOL that really offends me.</p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt">First, let’s consider the abuse. How many times have you seen people reply with "LOL" and you know, for a fact, that what you wrote wasn’t possibly that funny? And if you think exaggerative flattery is excusable, consider the typical teenage instant message conversation:</p> <blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><strong><font color="#ff0000">princessGurl1924</font></strong>: heya becca, hows it goin, lol<br /><strong><font color="#0000ff">angelKitty77</font></strong>: omg, lol it's goin pretty good lol. u??<br /><font color="#ff0000"><strong>princessGurl1924</strong></font>: lol good good!~!!</p></blockquote> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt">If I knew anyone who laughed that much in real life, I would suggest that they have some serious mental disorder. Or that they are high on some cocktail of illicit narcotics. </p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt". </p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt">I’d have to say that the extensions of LOL offend me the most. 
Let's consider the most prevalent:</p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><strong>ROFL (Rolling On the Floor Laughing)</strong> -.</p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><strong>LMAO (Laughing My Ass Off)</strong> - Some might say that this is not fair game because the colloquialism "laughing my ass off" had existed prior to the Internet. No less, I still consider the acronymization to be a direct result of LOL. </p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><strong>ROFLMAO (Rolling On the Floor Laughing My Ass Off)</strong> - A total unnecessary expansion of an expansion. Since when was an uncontrollable fit of laughter causing one to drop to the floor *not* the most extreme reaction to humor possible? And what, precisely, is the difference between laughing while rolling on the floor and laughing your ass off while rolling on the floor? How possibly could one distinguish between the two?</p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><strong>ROFLOL (Rolling On the Floor Laughing Out Loud) </strong>- I'm guessing this falls somewhere between ROFL and ROLFMAO. But it’s just another pointless expansion. We already know that there is no difference between ROFL and ROFLMAO, but think about what ROFLOL implies. Someone who is rolling on the floor in laughter isn't doing so audibly, just quietly rolling around on the floor, laughing to themselves. </p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><strong>LOLOLOL (???) </strong>- I don't even know what this is. Laugh Out Loud Out Loud Out Loud? Laugh Out Laugh Out Loud Out Loud? 
How exactly did an acronym become an expandojective? It's "words" like this that make me long the good-ole-days before "haha" was created.</p></blockquote> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt">I know we've all used these new-aged acronyms (except LOLOLOL, I hope). And I know we probably won't stop. But, at least consider this rant next time you reply with ROFLMAO; it just may just enhance your fits of laughter while you're rolling around on the floor.</p><img src="" width="1" height="1">Alex Papadimoulis Programmer Inflation<p>I was thinking the other day about the changes over the years in what we call people who write computer programs. Back in the day, we called these folks <i>computer programmers</i>. A rather fitting title, one would suppose, for a person who programs computers. But one would suppose wrong. <p>Shortly after <i>computer programmer</i> became the "official" title, someone, somewhere, somehow decided that it wasn't enough. After all, computer programmers do more then program, they <i>analyze</i> a problem and <i>then</i> write a program. They should, therefore, be titled <i>programmer/analysts</i>. One would suppose that such analysis is an implicit part of the job, much like how writers need to <i>think</i> about something before they actually write it. But one would suppose wrong. <p>Unlike the <i>computer programmer</i> title, <i>programmer/analyst</i> seemed to stick around for quite a while. In fact, it was only fairly recently that we all became <i>software developers</i> (or, <i>developers</i> for short). The change this time around was all about image; you gotta admit how much sexier <i>developer</i> sounds over <i>programmer</i>. Certainly one would suppose it's pretty hard to "sex up" an industry whose Steves out number its women (see the <A href="">Steve Rule</a>). But one would suppose wrong. 
<p>Believe it or not, <i>developer</i> is on its way out and we're in the middle of yet another title change. If you think about it, the problem with <i>developer</i> is that, if any one asked what a "developer" is, you'd have to expand it to <i>software developer</i>. Software == Computers == Programming == Nerdy. We can't have that! <p>This is where the title <i>solution developer</i> comes in. We're the guys who you call when you have a problem. Doesn't matter what the problem is, we will <i>develop</i> a <i>solution</i>. Heck, we can even develop solutions (by programming a computer) for problems that don't exist. We're that good. <p>But where do we go from here? First, we need to reach the maximum level of ambiguity possible. I'm not an expert at coming up with job titles, but I suspect <i>solution specialist</i> is a step in the right direction. Of course, once we've gone all the way to one side, the only place we can really go is to other extreme: a way more overblown/descriptive/nerdy sounding name than needed. When <i>solution specialist</i> (or whatever) expires, I really hope the replacement will end with -ologist. I would really like to be an -ologist of some sort. You know you'd like it, too.</p><img src="" width="1" height="1">Alex Papadimoulis "Steve Rule" Proved True Again<p>Yesterday on <a href="">The Daily WTF</a>, I <a href="">mentioned</a> something I called the <strong>Steve Rule</strong>: <em>in a random sample of programmers, there will be more named Steve then there will be females</em>. </p> <p).</p> <p>I thought I'd test the "Steve Rule" once again using the <a href="">Weblogs.Asp.Net</a> bloggers. To do this, I looked at the <A href="">OPML</a> [XML], and used the title of 258 blogs. There were 139 I skipped because either I couldn't tell the gender (no offense, <A href="">Suresh Behera</a>, et al.) or the title didn't have a name (e.g. 
<A href="">Models and Hacks</a>).</p> <p>Of the blogs, I found six female bloggers:</p> <ul> <li><A href="">Gwen Zierdt</a></li> <li><A href="">Julia Lerman</a></li> <li><A href="">Kristian Rickard</a></li> <li><A href="">Datagrid Girl (Marcie)</a></li> <li><A href="">Racheal Reese</a></li> <li><A href="">Susan Warren</a></li></ul> <p>For the guys, I was only able to find four different Steves. But, there were <strong>seven</strong> different names (with slight variations) that outnumbered the girls</p> <ul> <li>Andrew (7)</li> <li>Chris (7)</li> <li>Dave (7)</li> <li>Jason (7)</li> <li>John (7)</li> <li>Robert (7)</li> <li>Scott (9).</li></ul> <p>So, for us Weblogs.Asp.Net bloggers, it looks like we are goverened by the "Scott Rule." How about that?</p><img src="" width="1" height="1">Alex Papadimoulis | http://weblogs.asp.net/alex_papadimoulis/atom.aspx | crawl-002 | refinedweb | 7,272 | 64.1 |
The last time I showed you the new programming model we introduced to JFace to make it feel like programming SWT.
But has this been the only reason that we decided to add this new API beside the existing one (ILabelProvider, ITableLabelProvider, IColorProvider, ITableColorProvider, IFontProvider, ITableFontProvider) .
Sure enough it was not the only reason. First of all many newcomers have been confused by all those NON MANDATORY interfaces to control different aspects of items and we faced the problem that whenever SWT–Provided a new feature e.g. Owner Draw Support in 3.2 we would have to add a new interface type and to support new features e.g. ToolTip support for table/tree cells we also had to provide a new interface. We decided that this is not the way to go for the future.
So we sat down and thought about a complete new structure below JFace tree and table support. We added the idea of rows (ViewerRow) and cells (ViewerCell) abstracting common things provided of TreeItem and TableItem and completely hiding the widget specific things into this class. The abstraction level this provided to us made it possible to provide a common class named ColumnViewer.
So what have we learned so far:
- We don‘t need all those interfaces any more
- We have a new abstraction level for column based viewers (ViewerRow, ViewerCell)
- We have a new base class named ColumnViewer
So you may ask what you should use if you can‘t use the old interfaces any more. Well the answer is ColumnLabelProvider or if you want to implement the whole cell behaviour your own it‘s abstract base class named CellLabelProvider. The new ColumnLabelProvider combines all currently know interfaces (ILabelProvider, IFontProvider, IColorProvider). There‘s no base class visible to you supporting ITable*–interfaces because those interface put limitations to tables I’ll show you later let‘s now explore how to use the new ColumnLabelProvider interface and how it is used.
TableViewer viewer = new TableViewer(parent,SWT.FULL_SELECTION); TableViewerColumn column = new TableViewerColumn(viewer,SWT.NONE); column.setLabelProvider(new StockNameProvider()); column.getColumn().setText("Name"); TableViewerColumn column</span> = new TableViewerColumn(viewer,SWT.NONE); column.setLabelProvider(new StockValueProvider</span>()); column.getColumn().setText("Modification"); public class StockNameProvider extends ColumnLableProvider { @Override public String getText(Object element) { return ((Stock)element).name; } } public class StockValueProvider extends ColumnLabelProvider { @Override public String getText(Object element) { return ((Stock)element).modification.doubleValue() + "%"; } @Override public Color getForeground(Object element) { if( ((Stock)element)modiciation.doubleValue() < 0 ) { return getDisplay().getSystemColor(SWT.COLOR_RED); } else { return getDisplay().getSystemColor(SWT.COLOR_GREEN); } } }
You may now argue that this was easier with the old ITable*-API and you are true but from the point of reusability this version is much more flexible and you can reuse your LabelProviders for many different TableViewers because they don‘t hold any index informations. Another thing is that you needed a bunch of custom code if you wanted the columns to be reordable with the old API this is not an issue any more with the new API because the LabelProvider is directly connected to the column and moves with it.
Next time I’ll continue with some nice new LabelProvider features like ToolTip support and OwnerDraw.
While your doing all of this work would you consider providing a version of the tree viewer where you store no state information?
In the product I’m building our engine keeps track of the open/close state, selection etc. In addition results are readonly – so open/close I need make a request of the engine. When its done the work it will hand back a new results set.
Currently when we use the treeviewer and the content provider we have about 150 lines of code just keep the tree state in sync with the results set.
Please give us a stateless version of the tree viewer.
Please file a bug report or CC to the bug report if there one already. If this does need an API-Change maybe I have time in M6 to work on it if we need new API I think its too late.
I will file a bug report. Mostly I mentioned this as food for thought. I’m hoping to stimulate your thinking as you move forward. Sometimes it would easier for us if your provided less instead of more.
Its been filed as bug: 171284
Has the new API been finalized? I really like this new design and it would help me out quite a bit. I don’t mind at all having to adjust for minor changes here and there as you near the final 3.3 release. I’d just like to avoid adopting it only to find out it’s been abandoned for something else. 🙂
Well the parts demonstrated here are fairly ready and I don’t expect things to change but to be sure you should wait for M5 which is API freeze. The only thing that is currently in dicussion and will add new API is but I don’t expect any of the demonstrated API go away. But I’m not the decision maker. | https://tomsondev.bestsolution.at/2007/01/18/the-new-faces-of-jface-part2/ | CC-MAIN-2018-39 | refinedweb | 848 | 62.98 |
This is my first attempt at creating a plugin for ReSharper. It seems very daunting at first. It's taken me a few hours to actually figure out the base of what I wanted to do.
This isn't due to poor documentation; but unfamiliarity with how. Once I see how; the documents make a lot of sense... Catch 22 in that area.
Aside from blogging helping cement stuff mentally for myself; I hope this can streamline someone else's efforts to create ReSharper plugins.
At work on the µObjects project, we use fakes. Largely because UWP doens't support mocking frameworks. This isn't to get into the Mock vs Fake argument; but I prefer fakes now. I feel they're keeping my unit tests much cleaner.
With the fakes we use at work; part of it is a
Builder for the fake to configure the interface. I'll try to get the supporting classes into a repo soon. Not sure I can do that, or if I need to re-implement at home.
Anyway - A fake implements an interface. Has some helper stuff in the builder. This takes us anywhere from 5 - 30 minutes to get into place. I want a plug in that I can
Alt-Enter and select "Create Fake" and have it all done for me.
We have templates for a lot of the pieces, and it still takes the 5-30 minutes. This will take seconds, potentially saving hours a week.
If this works well for us; I'll probably create more aspects to the plug in. It's really around the style we've developed so the plugin won't be available for broader use. I want the plugin for some more intelligence that the Live Templates aren't providing. Nor would I expect them to.
I'm following the ReSharper DevGuide as my starting point.
Starting at the How To links to a sample plugin which doesn't build for me. shrug These things happen; but it provided enough information to get me a step closer in some spikes.
I like including the steps from guides instead of a bunch of "Go Here". There's not a lot I add; and some of the steps are a bit long and detailed. I don't want to be copy/pasting things into this.
Skipping ahead a small amount - Create the Project!.
Package up the plugin - Create a NuGet Package for the Plugin.
Now to enable debugging - Set Up Environment.
I'm having a bit of trouble with the environment configuration.
Before you can debug the plugin, you must initially install it to the Visual Studio hive you use for debugging. Do this via ReSharper | Extension Manager…. Note that you will need to specify the folder with the plugin as a custom source. You can do this via ReSharper | Options… | Extension Manager | Add.
Which folder? The one with the
nupkg? Doing that doesn't get the plugin to show up in the Extension Manager...
Which seems to have been because I had the version wrong. Using TestCop as my example; the Dependency for the Nuget Package is
<dependency id="Wave" version="[11.0.0]"/>
And BAM! Shows up.
My plan
I'm going to create a Context Action that will trigger on interfaces... I hope.
That's phase one - Get my Context Action to show.
... Which I'm having a lot of trouble with...
Huh... I think I needed a
ZoneMarker. Which isn't clearly spelled out anywhere. Part of why I'm blogging this.
The information of a zone marker can be found in the guide at the How to Define a Zone Marker.
This was linked to from the TroubleShooting page.
It's a temp solution to put an empty class at the highest common namespace.
Ensure the plugin has a Zone marker defined in the project’s root namespace (e.g. if the project has code in the Foo.Bar and Foo.Bar.Quux namespaces, the zone marker should live in the Foo.Bar.ZoneMarker class).
The class can be empty; and looks like this for me.
[ZoneMarker] public class ZoneMarker { }
A quick reinstall as specified in the Set Up Environment
IMPORTANT! It’s not necessary to reinstall the plugin each time you introduce the changes to your plugin; simple copying of the .dll will be enough in most cases (see the next steps). Nevertheless, if your changes affect any integration points, such as actions, menus, or others, you MUST reinstall the plugin.
A Context Action is an action integration point it requires a reinstall.
Now that I've done that - IT SHOWS!
VERY EXCITING! ... for me. :)
Unless I'm doing something wrong - I think I need to uninstall/install my extension everytime to test it. Sad; but OK.
I'm really wishing there was a good reference of how to generate code... :\
I'm working on reverse engineering the "Create Derived Type" ContextAction that ReSharper has built in for reference on doing this myself. It does mostly what I want to do - at least at a base level; it is. I need to extend what it does; but it'll be a start once I understand it.
This is some black magic syntax. I feel like I need to read the Lexi section of a compiler book to understand it.
I've gotten back to the plugin when I uploaded IMock. The reason for this is that the generation of the code is a pain if you don't have at least templates. Even templates take a bit of time as each method has to be done via a template.
The Reshaper plugin (oh yeah) allows the entire builder class to be generated. If we compare to no templates - this will save days of work in a normal month using this Mock framework.
I know it'll probably count as a hit against IMock that it takes so much set up - but it's clean and makes test creation go really fast.
Over all - It saves time, even if you have to do it manually.
Back to the plugin - I was slogging through it... pain in the ass, didn't understand a bunch of what I had crammed in.
I spent hours getting it into microObjects - and it made sense. What it was doing and how was a lot easier to read. Maybe due to using the code more... I'm pretty sure it was around the fact that I only had to focus on one 'level' of the code at at time. I didn't even have to see the other levels.
This caused a much lighter cognitive load while thinking about the code. This makes it easier to understand; easier to see better solutions. I'm happy with the change over.
The change also allowed me to get the other 1/2 implemented in less than 2 hours. Took me about 3 to extract all the objects; and then 2 to get it finished.
That's right - Finished.
The plugin isn't in some partial state; it'll produce a full MockClass from an interface.
My team would have loved this 6 months ago... hehe
Anyway; I'll be working to get this compiled and uploaded as part of IMock, not a separate repo as was my original plan. It's tied pretty tightly to what that is.
Anyway - HELL YEAH! Complete! | https://quinngil.com/2018/05/13/tagalong-resharper-plugin/ | CC-MAIN-2018-47 | refinedweb | 1,234 | 76.01 |
Looking for a place to stay in Rio de Janeiro? Then look no further than Casa Francisco, a budget friendly bed and breakfast that brings the best of Rio de Janeiro to your doorstep.
Casa Francisco is a budget friendly bed and breakfast offering air conditioning, a kitchenette, and a refrigerator in the rooms.
While staying at Casa Francisco, visitors can check out Ilha Fiscal (1.2 mi) and AquaRio (1.8 mi), some of Rio de Janeiro's top attractions.
During your visit, be sure to check out one of Rio de Janeiro's popular barbeque restaurants such as Fogo de Chao, Assador, and VAMO, all a short distance from Casa Francisco.
Plus, during your trip, don’t forget to check out some of the popular historic sites, such as Parque Lage, Forte de Copacabana, and Quinta da Boa Vista.
We’re sure you’ll enjoy your stay at Casa Francisco as you experience everything Rio de Janeiro has to offer.
We chose Casa Francisco for it's location and uniqueness. We never ended up finalizing the booking as the owner cancelled. However, we got all the money back - not right away and not without back and forth, but we got it ALL back
This was an amazing place to spend our first week in Rio. Paula is warm, friendly but at the same time gave me and my friends enough space to just chill and do whatever we wanted It has amazing views, and everything in Casa Francisco is picturesque the area itself Santa Teresa has aa sort of bohemian, hippy area. For carnival, it was perfect street parties outside, yet when in Casa Franciso you can easily be away from the hustle and bustle of Rio. Would recommend!
A brilliant place to stay. The building is stunning (as is the Santa Teresa neighbourhood) - stylish, spacious and boasting an amazing roof terrace where we spent many an afternoon reading / evening relaxing. Paula felt more like a friend than a host, pointing us towards bars, parties and attractions we'd definitely have missed otherwise. The best place our group stayed on our trip by some distance.
Casa Francisco made my holiday to Rio. The hostel is beautiful and peaceful but in the best neighbourhood, there always seemed to be something going in the evenings that Paula could recommend. She took us out to street parties on a couple of evenings and I have never had so much fun. It was also great staying in and hanging out on the roof terrace, which has views as good as anywhere in the city. I had only planned to be there a couple of days but came back for another week and I very much hope I will be able to stay again in the future.
We were greeted by the welcoming Paula and her friendly dog Nina - amazing host with great local and authentic recommendations. Casa Francisco is a beautiful, homely getaway with panoramic terrace views over Rio. A fully functional kitchen, great sized bedrooms - all pristine and comfy. The Santa Teresa area is very chic, with lots of quirky shops. My 3 friends and I much preferred it to the Copacabana area. Would highly recommend going to Cafe do Alto for Brasillian cuisine and ask Paula for the nearest samba bars. Would 100% return (we extended our stay whilst in Rio). Thank you Paula for your great hospitality x
Own or manage this property? Claim your listing for free to respond to reviews, update your profile and much more.Claim Your Listing | https://www.tripadvisor.com/Hotel_Review-g303506-d11933670-Reviews-Casa_Francisco-Rio_de_Janeiro_State_of_Rio_de_Janeiro.html | CC-MAIN-2021-39 | refinedweb | 592 | 70.53 |
Linear Feedback Shift Registers for the Uninitiated, Part XIII: System Identification
Last time we looked at spread-spectrum techniques using the output bit sequence of an LFSR as a pseudorandom bit sequence (PRBS). The main benefit we explored was increasing signal-to-noise ratio (SNR) relative to other disturbance signals in a communication system.
This time we’re going to use a PRBS from LFSR output to do something completely different: system identification. We’ll show two different methods of active system identification, one using sine waves and the other using a PRBS, and look at the difference. We’ll also talk about some practical aspects of system identification using a PRBS, some of which are really easy to understand, and others which are difficult.
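As a quick refresher on the first ingredient, here is a minimal Galois-LFSR PRBS generator; the 5-bit register width and the primitive polynomial \( x^5+x^2+1 \) are an illustrative choice, not anything specific to this article:

```python
def lfsr_prbs(nbits=31, taps=0b10010, state=0b00001):
    """Output nbits of a PRBS from a 5-bit Galois LFSR.

    The default taps mask encodes the primitive polynomial
    x^5 + x^2 + 1, so the output is a maximal-length sequence
    with period 2^5 - 1 = 31.
    """
    out = []
    for _ in range(nbits):
        bit = state & 1          # output bit = LSB of the register
        out.append(bit)
        state >>= 1
        if bit:
            state ^= taps        # apply the feedback polynomial
    return out

print(lfsr_prbs())  # 31 bits containing 16 ones and 15 zeros
```

Nothing here depends on the register length; a longer LFSR just gives a longer period before the sequence repeats.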
What is System Identification, and Should We Let Jack Bauer Take Charge?
System identification is an area of engineering where the goal is to determine some unknown or uncertain information that can be used to model a system.
If we model a linear-time-invariant (LTI) system through its transfer function, then we probably want to identify that transfer function, either through poles and zeros, or as \( ABCD \) matrices in a state-space representation. Or maybe we have some more specific knowledge of a transfer function, like a low-pass filter with transfer function \( \frac{1}{RCs+1} \) where \( C \) is known but \( R \) is unknown, so we want to identify parameter \( R \). Or we have an inductor with impedance \( Ls + R_s \) where \( L \) is the intentional inductance and \( R_s \) is parasitic series resistance, and both are known only to be within certain bounds.
Nonlinear systems can also have parameters that need to be determined. Suppose you want to model the saturation of an inductor by writing its inductance as \( L=L_0 / \left(1+(I/I_1)^2\right) \) — that is, at small currents it has an inductance \( L_0 \), but the inductance drops by half if the amplitude of current is \( |I| = I_1 \). Or you have a motor which has some unknown amount of Coulomb friction torque \( T_{fr} \), which is very roughly a constant torque in the opposite direction of the motor’s rotation. These values \( I_1 \) and \( T_{fr} \) can be estimated as well, using system ID techniques.
System identification methods can be classified as active or passive:
Passive identification attempts to identify system parameters by analyzing observable input and output signals, without changing those signals. (observation only)
Active identification attempts to identify system parameters by injecting test signals and determining how the system responds to those test signals. (perturbation and observation)
Under active identification, the observable signals are always going to be “better” than those available under only passive identification, because we can control the test signals and tailor them to have specific properties that reduce the uncertainty of the parameters we are trying to estimate. But there’s a tradeoff, because injecting test signals changes the behavior of a system, and there is a potential conflict between the primary system behavior and the need to identify something about that system. Let’s say the system under investigation is a bank that is suspected of laundering money to terrorists. (I’ve been watching reruns of 24 recently.) Now, you might have three ideas for stopping both the bank and the terrorists:
- Edgar and Chloe could monitor the bank’s communications for a year and wait for them to slip up. (passive identification)
- Curtis could pose undercover as an investor with income from questionable sources, who tries delicately, over the course of several weeks, to get the bankers to do something illegal. (active identification with a very small perturbation)
- Jack Bauer could burst into the bank waving a gun and yelling at the bankers, saying he’s a federal agent and, dammit, we don’t have time for this, I need to know where those terrorists are NOW. (active identification with a very large perturbation)
The choice of method depends on how important it is for the system to operate normally. Jack’s method gets results quickly, but sometimes there are messy casualties.
A more realistic example might be a skid-prevention system in a car, where the unknown parameter might be the coefficient of friction between the tires and the road. Let’s say I’m driving along on a road on a night in late winter where it has been raining and the temperature’s getting towards freezing. I’m suspicious of the traction I’m getting. A passive technique is paying attention to the speed of the wheels, the speed of the vehicle, and how fast the wheel speed accelerates or decelerates when I press on the brake and accelerator. An active technique would be for me to check the tire traction by tapping delicately on the brake or accelerator, outside of my normal driving to get the car from point A to point B. (If I’m the one doing this, I’m going to choose an active technique. What if the car itself has the capability to detect skids? Would I want it to use an active technique that modifies the car’s braking force, for purposes other than braking? I’m not sure.)
Or maybe I’m designing a digitally-controlled DC/DC converter that regulates its output voltage, and I’m trying to figure out the impedance of the load. An active identification method would permit me to adjust the output voltage very slightly, while measuring the load current — but only if I keep the regulated voltage within tight bounds, and avoid emitting high-frequency content that would show up as electromagnetic interference to other systems. A passive identification method would only let me observe output voltage and load current.
The problem with passive identification is that you are dependent on whether the observed signals are “sufficiently rich” to give you the information you need. Let’s go back to that voltage regulator situation. Suppose the load can be modeled by its Norton equivalent of a current sink \( I_1 \) and a parallel resistance \( R_p \) (if you don’t remember Norton and Thevenin equivalents, one of my earlier articles was about them). The total load current \( I_L \) can be written as \( I_L = I_1 + V/R_p \); if \( V \) never changes, then we’ll never be able to distinguish a high-impedance load where \( I_L \approx I_1 \), from a purely resistive load where \( I_L = V/R_p \). The only way we can distinguish them is if \( I_L \) and \( V \) change over time, and we can try to correlate changes in \( I_L \) with changes in \( V \). Correlation is a very powerful technique, and we’re going to use it later.
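To make that concrete, here is a small standalone sketch (not part of this article's circuit; the load values and noise level are invented for illustration) showing that once \( V \) varies, a least-squares fit on correlated samples of \( V \) and \( I_L \) recovers both Norton parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical Norton load (values invented): I_L = I1 + V/Rp
I1_true = 3e-6       # 3uA current sink
Rp_true = 200e3      # 200K parallel resistance

# V must actually vary, or the two columns of A below become collinear
# and the two parameters can't be separated
V = 3.0 + 0.1*rng.standard_normal(10000)
IL = I1_true + V/Rp_true + 1e-7*rng.standard_normal(V.shape)   # + sensor noise

# least-squares fit of IL = I1 + (1/Rp)*V
A = np.column_stack([np.ones_like(V), V])
(I1_est, Gp_est), *_ = np.linalg.lstsq(A, IL, rcond=None)
Rp_est = 1.0/Gp_est

print("I1 = %.2f uA (true %.2f)" % (I1_est*1e6, I1_true*1e6))
print("Rp = %.1f Kohm (true %.1f)" % (Rp_est/1e3, Rp_true/1e3))
```

If \( V \) were held perfectly constant, the same fit would fail: the matrix would be rank-deficient and any split between \( I_1 \) and \( V/R_p \) would fit the data equally well.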
In this article we’re only going to be talking about LTI systems (those nonlinear systems are scary) using active identification techniques. We will try to keep our injected test signals small, and stay undercover, however, and avoid getting Jack Bauer involved.
Note: I need to state early on that I am not an expert in control systems or system identification. The experts in this field can toss around state-space matrices like a frisbee and bend them to their will, using quantitative measures of “sufficient richness” such as persistent excitation, which is defined by testing whether some special matrix is positive definite. I don’t quite understand the significance of persistent excitation, and can barely wrap my head around positive definite matrices (yes, I know all the eigenvalues are positive, but what does that really mean?). The best I can do is to say that persistent excitation is a measure of the degrees of freedom that a particular signal can provide — so if you’re trying to estimate 3 unknown parameters, then you need a test signal that can somehow provide independent information about each of those parameters.
Here we go!
Test system
We’re going to consider the simple system shown below:
Here we have two voltage sources, \( V_A \) and \( V_B \), a load current \( I_L \), and a capacitor \( C \) with voltage \( V_C \) that is our output. The voltage sources connect to the capacitor via resistances:
- \( R_A \) is an intentional resistance. Let’s just say \( R_A \) = 10KΩ ± 1%.
- \( R_B \) is a parasitic resistance; \( R_B \) = 730KΩ ± 20%.
- \( C \) = 0.11μF ± 5%.
- \( V_A \) is under our control, and is intended to produce a desired waveform across the capacitor.
- \( V_B \) is under our control, but it does something else (not shown in this circuit) and there’s an unavoidable leakage path to the capacitor.
- Both \( V_A \) and \( V_B \) can range anywhere from 0 - 3.3V
- \( I_L \) is an annoying and unpredictable load current:
- it is not under our control
- it varies unpredictably between 0 and 10μA.
- we can’t even observe it directly
- We measure \( V_C \) with a 12-bit analog-to-digital converter.
- Every 100μs we adjust \( V_A \) and \( V_B \) according to some kind of control loop: \( V_A \) to control \( V_C \), and \( V_B \) to control something else.
Now, here’s our problem: we want to improve the control of \( V_C \) by taking into account the effects of component tolerance in \( R_A \), \( R_B \), and \( C \), perhaps by tuning a control loop more aggressively and using feedforward to cancel the effect of \( V_B \) on \( V_C \). Who knows. That’s not the point of this article. In fact, we’re not even going to try to use a control loop in this article; it just makes things more complicated. All we’re going to do here is try to estimate the transfer functions from \( V_A \) to \( V_C \) and from \( V_B \) to \( V_C \). (There’s also a transfer function from \( I_L \) to \( V_C \) but we really don’t care about it.) We know these as a function of the three passive components:
$$\begin{align} V_C &= \frac{1}{1+\tau s}\left(\frac{R_B}{R_A + R_B}V_A + \frac{R_A}{R_A + R_B}V_B - \frac{R_AR_B}{R_A + R_B}I_L\right) \cr \tau &= (R_A || R_B)C = \frac{R_AR_B}{R_A + R_B}C \end{align}$$.
We can view this circuit as the following block diagram:
where \( V_{A0} \) and \( V_{B0} \) are the original commands for \( V_A \) and \( V_B \), \( p_A \) and \( p_B \) are the perturbations for \( V_A \) and \( V_B \), and the three transfer functions are
$$\begin{align} H_A(s) &= \frac{\alpha_A}{1+\tau s} \cr H_B(s) &= \frac{\alpha_B}{1+\tau s} \cr H_L(s) &= \frac{-R_L}{1+\tau s} \end{align}$$
with \( \alpha_A = \frac{R_B}{R_A + R_B}, \alpha_B=\frac{R_A}{R_A + R_B}, R_L = \frac{R_AR_B}{R_A + R_B} \).
The nominal values are \( \tau \) = 1085μs and \( V_C = \frac{1}{1+\tau s}\left(0.9865V_A + 0.0135V_B - 9.865K\Omega\cdot I_L\right) \).
Ra = 10e3
Rb = 730.0e3
C = 0.11e-6
tau = 1.0/(1.0/Ra + 1.0/Rb) * C
alpha_A = Rb/(Ra+Rb)
alpha_B = Ra/(Ra+Rb)
R_L = Ra*Rb/(Ra+Rb)
print("tau=%.1fus" % (tau*1e6))
print("alpha_A=%.4f" % alpha_A)
print("alpha_B=%.4f" % alpha_B)
print("R_L=%.3f Kohm" % (R_L/1e3))
tau=1085.1us
alpha_A=0.9865
alpha_B=0.0135
R_L=9.865 Kohm
For now, let’s just say that we keep \( V_A = 0.66V = 0.2V_{ref} \) (for \( V_{ref} = 3.3V \)) with no feedback to regulate \( V_C \), and that \( V_B \) is a 5Hz square wave between \( 0.2V_{ref} \) and \( 0.8V_{ref} \).
Let’s set up a simple simulation to show the nominal case:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec

def make_adcq(nbits):
    ''' quantize for adc '''
    B = 1.0 / (1 << nbits)
    def adcq(x):
        return np.floor(x/B)*B
    return adcq

def sysid_sim(Ra, Rb, C, pVa, pVb, N, dt=100e-6, seed=None, ilfunc=None):
    ''' simulation of Ra-Rb-C system

    Ra, Rb, C are component values
    pVa and pVb are perturbation signals to add in to Va and Vb
    N = number of steps
    seed can either be a seed to use with np.random.RandomState,
      or a np.random.RandomState object to handle several successive simulations

    Voltages are ratiometric to some Vref somewhere
    '''
    tau = 1.0/(1.0/Ra + 1.0/Rb) * C
    # match discrete (1-alpha)^n to continuous e^(-dt/tau*n)
    alpha = 1-np.exp(-dt/tau)
    alpha_A = Rb/(Ra+Rb)
    alpha_B = Ra/(Ra+Rb)
    R_L = Ra*Rb/(Ra+Rb)
    t = np.arange(N)*dt
    zero = np.zeros(N)
    # in case pVa and pVb are scalars, extend them to a vector
    pVa += zero
    pVb += zero
    # Vb is a square wave between 0.2 and 0.8*Vref
    # with T1 = 1msec
    T1 = 1e-3
    sqwave = (t/(200*T1)) % 1 < 0.5
    Va = np.ones(N) * 0.2 + pVa
    Vb = (1+3*sqwave)/5.0 + pVb
    Vc = np.zeros(N)
    if ilfunc is None:
        il = np.zeros_like(t)
    else:
        il = ilfunc(t, seed=seed)
    vref = 3.3
    Vck = 0
    # Iterative simulation
    for k in xrange(N):
        # find the DC equivalent of VC (if C were 0)
        Vc_dc = alpha_A * Va[k] + alpha_B * Vb[k] - R_L*il[k]/vref
        # now filter it with time constant tau
        Vck += alpha*(Vc_dc - Vck)
        Vc[k] = Vck
    return dict(t=t, Vb=Vb, il=il, Vc=Vc, Va=Va, dt=dt, tau=tau,
                alpha=alpha, alpha_A=alpha_A, alpha_B=alpha_B)

def ilfunc1(t, seed):
    # IL is a value k*2uA where k is a random integer between 0 and 5
    # that changes at random intervals with transition rate alpha_L
    # (total = 0-10uA)

    # Set up PRNG
    if isinstance(seed, np.random.RandomState):
        rs = seed
    else:
        rs = np.random.RandomState(seed=seed)
    alpha_L = 10.0   # average number of transitions/sec
    N = len(t)
    dt = (t[-1] - t[0]) / N
    il_transitions = rs.random_sample(N) < alpha_L*dt
    il_count = np.cumsum(il_transitions)
    il_values = 2e-6 * rs.randint(0, 6, size=il_count[-1]+1)
    il = il_values[il_count]
    return il

S = sysid_sim(Ra=Ra, Rb=Rb, C=C, pVa=0, pVb=0, N=20000, seed=1234,
              ilfunc=ilfunc1)
fig = plt.figure(figsize=(7,7))
gs = gridspec.GridSpec(3,1,height_ratios=[1,1,3])

def plot_V_B(S, ax):
    t_msec = S['t']/1e-3
    ax.plot(t_msec,S['Vb'],label='V_B')
    ax.legend(loc='best',labelspacing=0,fontsize=10)
    ax.set_ylim(0,1)
    ax.set_ylabel('V_B/Vref')

def plot_I_L(S, ax):
    t_msec = S['t']/1e-3
    ax.plot(t_msec, S['il']*1e6)
    ax.set_ylabel('I_L (uA)')
    ax.set_ylim(-1,11)

ax = fig.add_subplot(gs[0])
plot_V_B(S, ax)

def plot_V_C_V_A(S, ax):
    t_msec = S['t']/1e-3
    ax.plot(t_msec,S['Vc'],label='V_C')
    ax.plot(t_msec,S['Va'],label='V_A')
    ax.legend(loc='best',labelspacing=0,fontsize=10)
    ax.set_ylabel('V/Vref')

ax2 = fig.add_subplot(gs[1])
plot_I_L(S, ax2)
ax3 = fig.add_subplot(gs[2])
plot_V_C_V_A(S,ax3)
ax3.set_xlabel('t (msec)');
Okay, so that’s mildly interesting. Now let’s try applying some perturbation signals to \( V_A \) and \( V_B \) and see if that can help us understand the transfer functions from A and B to C.
Sinusoidal perturbation
Let’s try using a sinusoidal perturbation signal first. As we’ll see later, this will allow us to probe our system in the frequency-domain, at whatever frequency we use for the perturbation signal.
- Add \( V_{pA} = V_1 \cos \omega t \) as a perturbation signal to \( V_A \)
- Measure \( V_C \) and \( V_A \)
- Compute \( S(V) = \frac{2}{NV_1}\int V e^{-j \omega t}\, dt \) over some integer number \( N \) of full periods, for both \( V_C \) and \( V_A \). (In a discrete-time system, the integral becomes a sum.)
This essentially computes the transfer function of a sine wave input, considered only at that frequency. For example, if \( V_C = V_A = V_{pA} \) then it’s fairly easy to show that \( S_C = S_A =1 \). (dust off your calculus books!)
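As a quick standalone sanity check of that claim (the sampling rate, probe frequency, and amplitudes here are illustrative, not tied to the simulation), the discrete-time version of \( S(V) \) recovers the amplitude and phase of content at the probe frequency, while content at other harmonics integrates away over whole periods:

```python
import numpy as np

dt = 100e-6                       # illustrative 10kHz sampling
f = 100.0                         # probe frequency
Nstep = int(round(1.0/(f*dt)))    # samples per electrical cycle
N = Nstep*20                      # integrate over 20 whole periods
theta = 2*np.pi*f*np.arange(N)*dt
V1 = 0.005

def S_of(V):
    # discrete version of (2/(N*V1)) * sum of V*exp(-j*w*t)
    return 2.0/(N*V1) * np.sum(V*np.exp(-1j*theta))

print(S_of(V1*np.cos(theta)))             # ~ 1+0j
phi = np.deg2rad(30)
print(S_of(0.7*V1*np.cos(theta - phi)))   # ~ 0.7*exp(-j*30deg) = 0.606-0.350j
print(abs(S_of(0.01*np.cos(3*theta))))    # ~ 0: other harmonics vanish
```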
Also, for a moment we’re going to pretend that we can make perfect measurements with infinite resolution.
N = 20000
dt = 100e-6
t = np.arange(N)*dt
f1 = 100
Nstep = 1.0/(f1*dt)
Nstep_int = int(np.round(Nstep))
f = 1.0/Nstep_int/dt  # actual frequency -- restrict to integer # of steps
                      # per electrical cycle
theta = 2*np.pi*f*t
c = np.cos(theta)

pVa_amplitude = 0.005
pVb_amplitude = 0
S = sysid_sim(Ra=Ra, Rb=Rb, C=C,
              pVa=pVa_amplitude*np.cos(theta),
              pVb=pVb_amplitude*np.cos(theta),
              N=N, seed=1234, ilfunc=ilfunc1)
fig = plt.figure(figsize=(7,9))
gs = gridspec.GridSpec(3,1,height_ratios=[1,3,1])
ax1 = fig.add_subplot(gs[0])
plot_I_L(S, ax1)
ax2 = fig.add_subplot(gs[1])
plot_V_C_V_A(S, ax2)

def calcS(V,theta,Nstep_int):
    ejth = np.exp(-1j*theta)
    S = np.diff(np.cumsum(ejth*V)[::Nstep_int])/Nstep_int * 2
    return S

S['S_A'] = calcS(S['Va'],theta,Nstep_int)
S['S_C'] = calcS(S['Vc'],theta,Nstep_int)

def plot_S_C_S_A(S, ax):
    t_msec = S['t'] / 1e-3
    ax.set_xlabel('t (msec)');
    for Sx,l in [(S['S_C'],'S_C'),
                 (S['S_A'],'S_A')]:
        ax3.plot(t_msec[::Nstep_int][:-1], np.real(Sx),
                 drawstyle='steps', label=l)
    ax.legend(loc='best',labelspacing=0,fontsize=10)

ax3 = fig.add_subplot(gs[2])
plot_S_C_S_A(S, ax3)
ax3.set_ylim(0,0.01);
print "f=%.1fHz (%d steps per cycle)" % (f,Nstep_int)
f=100.0Hz (100 steps per cycle)
We made a 100Hz perturbation of 0.005 amplitude and essentially got what we asked for. The computed amplitudes \( S_A \) and \( S_C \) aren’t very far off from the real amplitudes. The computed amplitude \( S_C \) gets disturbed quite a bit whenever there are large changes in \( V_C(t) \), which makes sense, but it confounds some of our signal processing needs.
So we’re going to do something that’s easy in post-processing on a computer, but difficult to do in real-time on an embedded system — namely compute a robust mean by throwing out the most extreme 10% of the data, hoping that we can get enough “quiet time” to get a good estimate of signal amplitudes:
def robust_mean(x, tail=0.05):
    x = np.sort(x)
    N = len(x)
    Ntail = int(np.floor(N*tail))
    return x[Ntail:-Ntail-1].mean()

def plot_tf_robust(S, ax=None, ref=None, input_key='S_A', output_key='S_C'):
    if ax is None:
        fig = plt.figure()
        ax = fig.add_subplot(1,1,1)
    ax.set_xlabel('t (msec)')
    t_msec = S['t']/1e-3
    stats = {}
    for key in [input_key, output_key]:
        Sx = S[key]
        label = key
        line, = ax.plot(t_msec[::Nstep_int][:-1], np.real(Sx),
                        drawstyle='steps', label=label)
        mS = robust_mean(Sx)
        stats[key] = mS
        msg = 'robust mean({0}) = {1:.5f}'.format(label,mS)
        if ref is not None:
            msg += ' = ref*({0:.5f})'.format(mS/ref)
        print msg
        ax.plot([t_msec[0],t_msec[-1]],np.real([mS,mS]),color=line.get_color(),
                alpha=0.25, linewidth=4)
    print "ratio = {0:.5f}".format(stats[output_key]/stats[input_key])
    ax.legend(loc='best',labelspacing=0,fontsize=10);
    return ax

ax = plot_tf_robust(S, ref=0.005, input_key='S_A', output_key='S_C')
ax.set_ylim(0.0,0.01);
robust mean(S_A) = 0.00500-0.00000j = ref*(1.00000-0.00000j)
robust mean(S_C) = 0.00344-0.00218j = ref*(0.68807-0.43624j)
ratio = 0.68807-0.43624j
The robust-mean algorithm works fairly well to filter out those spikes; in the graph above, the thick transparent lines are the robust mean. For \( V_A \), the detected amplitude is 0.005, exactly the perturbation we applied as an input. For \( V_C \) we’re getting the perturbation amplitude 0.005 times (0.68807-0.43624j). The actual value of the transfer function is \( \alpha_A / (1 + \tau s) \) with \( \alpha_A = 0.9865 \) and \( \tau = \) 1085μs. Let’s evaluate at \( s = 2\pi jf \) for \( f=100 \) Hz:
print ('Actual transfer function at f={0:.1f}Hz: {1:.5f}'.format(
    f, alpha_A / (1 + tau * 2j * np.pi * f)))
Actual transfer function at f=100.0Hz: 0.67343-0.45915j
Hmm. Not quite the same as what we measured.
Now let’s put the ADC back into the picture and look at what happens when the input is quantized:
for resolution in [12, 10]:
    print "ADC resolution = %d:" % resolution
    adcq = make_adcq(resolution)
    S['S_A'] = calcS(adcq(S['Va']),theta,Nstep_int)
    S['S_C'] = calcS(adcq(S['Vc']),theta,Nstep_int)
    ax = plot_tf_robust(S, ref=0.005, input_key='S_A', output_key='S_C')
    ax.set_title('Sine-wave amplitude estimation with %d-bit ADC' % resolution)
    ax.set_ylim(0.0,0.01);
ADC resolution = 12:
robust mean(S_A) = 0.00500-0.00000j = ref*(1.00084-0.00000j)
robust mean(S_C) = 0.00344-0.00218j = ref*(0.68779-0.43578j)
ratio = 0.68721-0.43541j
ADC resolution = 10:
robust mean(S_A) = 0.00496-0.00000j = ref*(0.99169-0.00000j)
robust mean(S_C) = 0.00344-0.00217j = ref*(0.68726-0.43485j)
ratio = 0.69301-0.43849j
Yeah, the ADC resolution matters here. Look at those little shifts in the S_C waveform in the 10-bit ADC case, which are barely perceptible in the 12-bit ADC case. Real systems don’t do so well with detecting signals that aren’t that large compared to the ADC resolution.
Now let’s try doing this at a number of different frequencies:
def sysid_probe_1(dt, N, freqlist=[2.8,5,7,10,14,20,28,50,70,100,140,200,280,500],
                  seed=None, ilfunc=None, verbose=False, ntrials=1):
    t = np.arange(N)*dt
    tflist = {}
    rs = np.random.RandomState(seed) if seed is not None else None
    for which in ['A', 'B']:
        if verbose:
            print 'Perturbing V_%s' % which
        tflist[which] = []
        if which == 'A':
            pVa_amplitude = 0.005
            pVb_amplitude = 0
            lwhich = 'Va'
        else:
            pVb_amplitude = 0.005
            pVa_amplitude = 0
            lwhich = 'Vb'
        for f1 in freqlist:
            Nstep = 1.0/(f1*dt)
            Nstep_int = int(np.round(Nstep))
            f = 1.0/Nstep_int/dt  # actual frequency -- restrict to integer
                                  # # of steps per electrical cycle
            theta = 2*np.pi*f*t
            c = np.cos(theta)
            data = [f1]
            for itrial in xrange(ntrials):
                if verbose:
                    print "trial #%d" % (itrial+1)
                S = sysid_sim(Ra=Ra, Rb=Rb, C=C,
                              pVa=pVa_amplitude*np.cos(theta),
                              pVb=pVb_amplitude*np.cos(theta),
                              N=N, seed=rs, ilfunc=ilfunc)
                for k in 'a b c'.split():
                    V = S['V'+k]
                    Sx = calcS(V,theta,Nstep_int)
                    S['S'+k] = Sx
                k = 'Sa' if which == 'A' else 'Sb'
                Sx = S[k]
                Sy = S['Sc']
                H = robust_mean(Sy)/robust_mean(Sx)
                if verbose:
                    print 'f={0:5.1f}: Sc/{1}={2:.4f}'.format(f1, k, H)
                data.append(H)
            tflist[which].append(data)
        tflist[which] = np.array(tflist[which])
    return tflist

def plot_tflist(tflist, dt, N, ntrials=1):
    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)
    f1 = np.logspace(0,3,1000)
    legend_handles = []
    for which,color,marker in [('A','red','.'),
                               ('B','green','x')]:
        tfdata = tflist[which]
        f = np.abs(tfdata[:,0])
        H_est = tfdata[:,1:]
        H_est_dB = 20*np.log10(np.abs(H_est))
        H_theoretical = (alpha_A if which == 'A' else alpha_B)/(1+tau*2j*np.pi*f1)
        H_theo_dB = 20*np.log10(np.abs(H_theoretical))
        ax.semilogx(f1,H_theo_dB,linewidth=0.5,color=color)
        h = ax.semilogx(f,H_est_dB,marker, color=color)
        legend_handles.append(h[0])
    ax.set_title('Detected and theoretical amplitudes from sine-wave system ID'
                 +'\n(%d samples at %.1fkHz for each data point, %d %s)'
                   % (N, 0.001/dt, ntrials, 'trials' if ntrials > 1 else 'trial'),
                 fontsize=10)
    ax.legend(labels=['A','B'], handles=legend_handles, loc='best')
    ax.set_xlabel('f (Hz)')
    ax.set_ylabel('|H_est(f)| (dB)')
    ax.grid('on')
    ax.set_ylim(-60,10)

N = 10000
ntrials = 5
dt = 100e-6
tflist = sysid_probe_1(dt,N,seed=1234,ilfunc=ilfunc1,ntrials=ntrials)
plot_tflist(tflist, dt, N, ntrials)
Now we’re seeing a better picture here. The sine-wave system ID doesn’t do a bad job, at least at the higher frequencies and with \( H_A(s) \): there’s very good matching with the theoretical transfer function. At low frequencies, what’s happening is that there is energy from other unwanted signals, and there’s no way to separate out that unwanted frequency content from the information we actually want to measure, which is the amplitude and phase of the perturbation signal that makes it through from input to output. Also, it takes a lot of time to get this data: each plot point in the above graph represents probing with a different-frequency sine wave (14 different frequencies in the plot above), so to get all the data points we really had to take 280,000 different samples. (10000 samples × 14 test frequencies × 2 perturbation paths, A → C and B → C)
(The \( H_B(s) \) transfer function has higher relative error, because it’s a much smaller signal. Maybe we don’t need to know it precisely; maybe we do.)
We could do better estimating both transfer functions by taking even more samples, of course, so that the unwanted signals average out. Let’s bump things up by a factor of 10, for 2.8 million samples total. Note that this is going to take 280 seconds worth of data at 10kHz sampling rate, but we’d really like to get a good estimate of the transfer function:
N = 100000
ntrials = 5
dt = 100e-6
tflist = sysid_probe_1(dt,N,seed=1234,ilfunc=ilfunc1,ntrials=ntrials)
plot_tflist(tflist,dt,N,ntrials)
Now let’s change our test system a little bit by using a different load current \( I_L \). It’s still going to be 0-10μA, but this time we’re going to make it a frequency-modulated sine wave in the 182-222Hz range.
import scipy.signal

def ilfunc2(t, seed):
    # IL is a 0-10uA, frequency-modulated sine-wave
    # with frequency content ranging roughly from 182-222Hz.

    # Set up PRNG
    if isinstance(seed, np.random.RandomState):
        rs = seed
    else:
        rs = np.random.RandomState(seed=seed)
    fcarrier = 202
    fspread = 20
    tau = 0.02
    alpha = dt/tau
    fp = 2*np.pi*rs.randn(*t.shape)
    fpfilt = scipy.signal.lfilter([alpha],[1.0, alpha-1.0],fp)
    tmod = t+1.0*fspread/fcarrier*np.cumsum(dt*fpfilt)
    theta = 2*np.pi*fcarrier*tmod
    il = 10e-6 * (1+np.sin(theta))/2
    return il
df = 1/(t[-1]-t[0])
ilsample = ilfunc2(t,seed=1234)
ilspectrum = 20*np.log10(np.abs(np.fft.fft(ilsample)))
for k, nf in enumerate([len(ilspectrum)//2, 1000]):
    fig = plt.figure(figsize=(6,10))
    ax = fig.add_subplot(2,1,k+1)
    ax.plot(np.arange(nf)*df, ilspectrum[:nf])
    ax.grid('on')
    ax.set_title('FFT-computed spectrum of ilfunc2()')
    ax.set_ylabel('|I_L| (dB)')
    ax.set_xlabel('f (Hz)');
OK, now let’s run our test simulation with a sine-wave perturbation, using the 202Hz-centered ilfunc2 for the load current \( I_L \):
N = 100000
ntrials = 5
dt = 100e-6
tflist = sysid_probe_1(dt,N,seed=1234,ilfunc=ilfunc2,ntrials=ntrials)
plot_tflist(tflist,dt,N,ntrials=ntrials)
It works great… except at the 200Hz point we get a bunch of noise.
Correlating a measured output with a sine-wave perturbation signal is very useful. You can essentially read off the transfer function directly from each frequency used. The only issue is that there’s no way to distinguish the response from the perturbation frequency and the system’s normal content at that frequency.
System Identification with Spread-Spectrum Perturbation
Now we’re going to use the output from an LFSR as a perturbation signal. This acts as a pseudorandom bit sequence (PRBS) or pseudonoise (PN) sequence. We talked about the correlation properties of an LFSR-based PRBS in part XI, but let’s recap and be a little more careful. The cross-correlation \( R_{xy}[\Delta k] \) between any two repeating signals \( x[k] \) and \( y[k] \) with period \( n \) is defined as
$$R _ {xy}[\Delta k] = \sum\limits_{k=0}^{n-1} x[k]y[k-\Delta k] $$
where the index \( k-\Delta k \) wraps around because of the repeating signals. (For nonrepeating signals, the limits of the sum are from \( -\infty \) to \( \infty \).)
We’re going to use a cross-correlation normalized to the number of samples:
$$C _ {xy}[\Delta k] = \tfrac{1}{n}R _ {xy}[\Delta k] = \tfrac{1}{n}\sum\limits_{k=0}^{n-1} x[k]y[k-\Delta k] $$
If \( y[k] = x[k] \) then we can calculate \( C_{xx}[\Delta k] \) as the autocorrelation. If you remember from part XI, the correlation between the output of a maximum-length LFSR \( b[k] \) with values scaled to ±1, and any delayed version of that same output, is
$$C_{bb}[\Delta k] = \tfrac{1}{n}\sum\limits_{k=0}^{n-1} b[k]b[k-\Delta k] = \begin{cases} 1 & \Delta k = 0 \cr -\frac{1}{n} & \Delta k \ne 0 \end{cases}$$
for \( n=2^N - 1 \) in the case of an LFSR with primitive polynomial. (This two-valued behavior, where the correlation is close to zero for \( \Delta k \ne 0 \), is very useful, and we’ll make use of it in a bit.)
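We can verify this two-valued autocorrelation numerically with a tiny maximal-length LFSR. This sketch uses a degree-4 example with the primitive polynomial \( x^4+x+1 \) (chosen just for illustration) and implements the LFSR inline rather than relying on any particular library:

```python
import numpy as np

def lfsr_bits(poly=0b10011, degree=4, state=1):
    # Galois-style LFSR over GF(2); poly = x^4 + x + 1 is primitive,
    # so the output has period n = 2^4 - 1 = 15
    n = (1 << degree) - 1
    out = []
    for _ in range(n):
        out.append((state >> (degree-1)) & 1)
        state <<= 1
        if state >> degree:
            state ^= poly
    return np.array(out)

b = lfsr_bits()*2 - 1           # map {0,1} -> {-1,+1}
n = len(b)                      # 15

for dk in range(4):
    C = np.dot(b, np.roll(b, dk))/float(n)
    print("C_bb[%d] = %+.4f" % (dk, C))
# C_bb[0] = +1.0000; every nonzero shift gives -1/15 = -0.0667
```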
We can also find the cross-correlation \( C_{by}[\Delta k] \) between \( b[k] \) and some other signal \( y[k] \). So if we use \( b[k] \) as a perturbation signal, we can correlate the system response \( y[k] \) with \( b[k] \), calculating \( C_{by}[\Delta k] \). If \( \Delta k = 0 \) then the result will essentially tell us how much of the system response has no delay. Then we can correlate the system response with \( b[k-1] \) to find \( C_{by}[1] \), and the result will tell us how much of the system response has a delay of 1 step. Repeat, and we can get the whole time-domain response of the transfer function.
Huh?
Okay, let’s take that a little more slowly. Below is a plot of the first 200 samples of one LFSR’s output \( b[k] \), and we’re going to correlate it with various signals:
- \( b[k] \) (to show correlation of 1)
- time-shifted versions \( b[k-\Delta k] \)
- a repeating sequence of ones (to show correlation with the DC component of any signal)
- a random sequence of ±1
from libgf2.gf2 import GF2QuotientRing, checkPeriod

def lfsr_output(field, initial_state=1, nbits=None):
    n = field.degree
    if nbits is None:
        nbits = (1 << n) - 1
    u = initial_state
    for _ in xrange(nbits):
        yield (u >> (n-1)) & 1
        u = field.lshiftraw1(u)

f18 = GF2QuotientRing(0x40027)
assert checkPeriod(f18,262143) == 262143
pn18 = np.array(list(lfsr_output(f18))) * 2 - 1

fig = plt.figure(figsize=(6,2))
ax = fig.add_subplot(1,1,1)
ax.plot(pn18[:200],'-',drawstyle='steps')
ax.set_ylim(-1.1,1.1);
n = len(pn18)

def pncorrelate(x, pn):
    return np.sum(x*pn)*1.0/len(pn)

print "n=%d" % n
print "1/n=%g" % (1.0/n)
print "Pseudonoise correlation with various signals:"
print "self: ", pncorrelate(pn18,pn18)
for delta_k in xrange(1,5):
    print "self shifted by %d: %g" % (delta_k,
                                      pncorrelate(np.roll(pn18,delta_k),pn18))
print "ones: %g" % pncorrelate(np.ones(len(pn18)), pn18)
ntry = 1000
print "%d random trials (first 10 shown):" % ntry
np.random.seed(123)
clist = []
for itry in xrange(ntry):
    c = pncorrelate(np.random.randint(2, size=n)*2-1, pn18)
    if itry < 10:
        print "random +/-1: %8.5f" % c
    clist.append(c)
print "std dev: %.5f (%.5f expected)" % (np.std(clist), 1.0/np.sqrt(n))
n=262143
1/n=3.81471e-06
Pseudonoise correlation with various signals:
self:  1.0
self shifted by 1: -3.81471e-06
self shifted by 2: -3.81471e-06
self shifted by 3: -3.81471e-06
self shifted by 4: -3.81471e-06
ones: 3.81471e-06
1000 random trials (first 10 shown):
random +/-1: -0.00584
random +/-1:  0.00177
random +/-1: -0.00106
random +/-1:  0.00138
random +/-1:  0.00003
random +/-1: -0.00239
random +/-1:  0.00004
random +/-1: -0.00051
random +/-1: -0.00154
random +/-1:  0.00094
std dev: 0.00197 (0.00195 expected)
OK, great! We already talked about the autocorrelation \( C_{bb}[\Delta k] \), and sure enough we get 1 for \( \Delta k=0 \) and \( -1/n \) for a few values of \( \Delta k > 0 \).
The cross-correlation of a PRBS with a sequence of ones is \( 1/n \).
The cross-correlation of a PRBS with a random sequence of ±1 bits is a random variable that is approximately a normal distribution with zero mean and standard deviation of \( 1/\sqrt{n} \).
This \( 1/\sqrt{n} \) behavior is pretty common whenever we try to “defeat” the randomness of random noise by taking more and more samples. Yes, you can cut the amplitude by a factor of 2, but to do so you need to increase the number of samples by a factor of 4.
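Here is a quick numerical illustration of that scaling (the trial counts and sequence lengths are arbitrary choices): correlating two independent random ±1 sequences, the residual correlation shrinks by a factor of 2 each time the sample count grows by a factor of 4:

```python
import numpy as np

rng = np.random.default_rng(0)
ntrials = 500
stds = {}
for n in [1024, 4096, 16384]:
    # correlate two independent random +/-1 sequences, many times
    c = [np.mean(rng.choice([-1.0, 1.0], size=n) *
                 rng.choice([-1.0, 1.0], size=n))
         for _ in range(ntrials)]
    stds[n] = np.std(c)
    print("n=%6d: correlation std = %.5f (1/sqrt(n) = %.5f)"
          % (n, stds[n], 1.0/np.sqrt(n)))
```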
Anyway, now let’s use our PRBS for system identification:
def pncorrelate_array(x,y,karray):
    return np.array([pncorrelate(x,np.roll(y,k)) for k in karray])

def pn_sysid(pn, karray, ilfunc=ilfunc1,
             input_signal='Va', output_signal='Vc',
             input_amplitude=pVa_amplitude,
             filter=None):
    perturbation = input_amplitude*pn
    S = sysid_sim(Ra=Ra, Rb=Rb, C=C,
                  pVa=perturbation if input_signal == 'Va' else 0,
                  pVb=perturbation if input_signal == 'Vb' else 0,
                  N=len(pn), seed=1234, ilfunc=ilfunc)
    if filter is None:
        filter = lambda x: x
    Cin = pncorrelate_array(filter(S[input_signal]),pn,[0])
    Cout = pncorrelate_array(filter(S[output_signal]),pn,karray)
    return Cout, Cin, S

def correlation_waveforms(Cc, Ca, pn, field, karray, h, fig=None):
    kmin = min(karray)
    kmax = max(karray)
    k_theo = np.linspace(kmin,kmax,10000)
    if fig is None:
        fig = plt.figure(figsize=(6.8,4))
    ax = fig.add_subplot(1,1,1)
    ax.set_title('Unit impulse response (theoretical + from PRBS sys ID), '+
                 'n=%d, f=0x%x (N=%d)' % (len(pn), field.coeffs, field.degree),
                 fontsize=10)
    ax.plot(karray,Cc/Ca[0],'.')
    if h is not None:
        ax.plot(k_theo,h(k_theo))
    ax.set_xlim(kmin,kmax)
    ax.grid('on')

kmax = 500
karray = np.arange(-kmax, kmax)
Cc, Ca, S = pn_sysid(pn18,karray)

def h_theoretical(k,A=1.0):
    alpha = dt/tau
    kpos = k*(k>=0)
    return A*alpha*np.exp(-kpos*alpha)*(k>=0)

correlation_waveforms(Cc,Ca,pn18,f18,karray, h_theoretical)
Now that’s interesting. We do get a very nice response using PRBS for system ID, that looks like the impulse response, but it is shifted upwards with some noisy content. That noisy offset is essentially the energy in the undisturbed system output, scrambled in the frequency domain to be spread over the spectrum of the PRBS.
Why do we get the impulse response? The two-valued autocorrelation \( C_{bb}[\Delta k] = 1 \) for \( \Delta k = 0 \) and \( C_{bb}[\Delta k] = -1/n \approx 0 \) for \( \Delta k \ne 0 \) means that the set of delayed PRBS sequences \( b[k-\Delta k] \) can be used as a quasi-orthonormal basis. (A true orthonormal basis would have cross-correlation of 1 between identical signals and 0 between different signals.) And because our system is linear time-invariant, the output response \( y[k] \) we’re trying to analyze (for example, \( y = V_C \)) can be written as the following superposition over each possible delay \( \Delta k \):
$$\begin{align} y[k] &= y_0[k] + \sum\limits_{\Delta k = 0}^{n-1} b[k-\Delta k]h[\Delta k] \cr &= \sum\limits_{\Delta k = 0}^{n-1} b[k-\Delta k]\left(h[\Delta k] + Y_0[\Delta k]\right) \end{align}$$
where
- \( y_0[k] \) is the output that we would have observed if we hadn’t perturbed the input
- \( Y_0[\Delta k] \) is \( y_0[k] \) decomposed into components weighting \( b[k-\Delta k] \)
- each \( h[\Delta k] \) component is one sample of the impulse response.
If you think this looks similar to a Fourier transform, you’re right! Fourier transforms are only one example of the use of an orthonormal basis. Anytime you have signals with two-valued autocorrelation where the \( \Delta k = 0 \) term has correlation of 1, and the \( \Delta k \ne 0 \) term has correlation of 0, it has many properties similar to a Fourier transform, such as the decomposition into weighted sums of basis components, preservation of energy (Parseval’s theorem), etc. And so we have the Hadamard-Walsh transform which uses square-wave-like basis functions (the Walsh functions). The PRBS from an LFSR doesn’t have exactly zero correlation for \( \Delta k \ne 0 \), so it’s not quite a perfect orthonormal basis, but it’s close enough for large \( n \). (And those of you who are nitpicky linear algebra students can fix the errors in your spare time, using orthogonalization techniques. Have at it!)
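The two-valued autocorrelation is easy to verify numerically. Here's a quick sketch using a standalone 4-bit Galois LFSR as a stand-in for the article's `lfsr_output` (the polynomial \( x^4+x+1 \) and the bit-extraction details are my own choices for illustration):

```python
import numpy as np

def lfsr_prbs(poly=0x13, degree=4):
    """One full period of a maximal-length LFSR sequence as a +/-1
    array. Galois configuration, left-shifting; poly includes the
    x^degree term (0x13 = x^4 + x + 1, a primitive polynomial)."""
    n = (1 << degree) - 1
    state = 1
    out = []
    for _ in range(n):
        out.append((state >> (degree - 1)) & 1)  # output the top bit
        state <<= 1
        if state >> degree:          # reduce modulo the polynomial
            state ^= poly
    return 2 * np.array(out) - 1     # map {0,1} -> {-1,+1}

b = lfsr_prbs()
n = len(b)                           # n = 15
# circular autocorrelation, normalized by 1/n
C = np.array([np.dot(b, np.roll(b, k)) for k in range(n)]) / float(n)
print(C[0])      # exactly 1
print(C[1:])     # every off-peak value is exactly -1/15
```

For any maximal-length sequence the off-peak values are exactly \( -1/n \), which is what makes the delayed copies a quasi-orthonormal basis for large \( n \).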
What about the noise? We’ll talk about that in just a minute.
For now let’s step back for a second and look at the big picture.
System Identification — the Big Picture
System identification has a few major aspects, including:
- Creating a proposed model of the unknown part(s) of the system — maybe our model is a transfer function of known structure with unknown parameters like \( H(s) = 1/(\tau s + 1) \), or maybe it falls into one of many systematic approaches like ARMA or ARMAX
Experiment design — one of the terms used in this field; system ID is essentially running an experiment on a system to gain information about it. Experiment design involves determining things like
- what kinds of input disturbance signals to use
- how large of an amplitude should be used as a disturbance
- whether passive identification is sufficient
- how many samples to take
- how often to sample
- whether pre- or post-filtering is helpful
- how to handle closed-loop control systems
Implementing and executing the experiment
- Analyzing the results to estimate model parameters and their uncertainty
The core idea of using sine waves or PRBS as input disturbance signals is fairly simple, and as we’ve seen, using sine waves gets us samples of the transfer function in the frequency domain, whereas PRBS gives us samples of the impulse response in the time domain. But that’s only the beginning, and a whole bunch of other issues need to be resolved before you can actually get system identification working reliably. As I said before, I’m not an expert, so I don’t have answers for a lot of them, and there aren’t many books or articles written with the generalist engineer in mind; you have to be willing to wade through a bunch of equations to get somewhere useful.
Having said that, here are a few other practical thoughts:
Reflected Pseudonoise
Look at the last graph we drew here. It’s an impulse response with a noisy offset. I’m going to call it “reflected pseudonoise”, for reasons that may become clear later. Where does this noise come from, and how do we deal with it when we interpret the results?
Each of our estimates has some uncertainty, and the trick is being able to understand where the uncertainty comes from, in order to estimate how large the uncertainty is, and then determine how much of an impact that uncertainty has when we use those results.
For some kinds of measurements, this is fairly easy. Got an ADC input? If you have an ADC channel that is connected to a known constant voltage, you can get the uncertainty empirically by taking a bunch of samples and computing the standard deviation. Or you can bound the errors by checking the datasheets for the ICs you are using, and come up with worst-case bounds; this is usually much more conservative than using statistical measurements.
But computing uncertainty for derived calculations is difficult. What we’d like to do is try some things and see if we can justify an empirical relationship between aspects of our system and the parameter uncertainties — for example, maybe the amplitude of that noisy offset is closely correlated with the amplitude of the disturbance signal and the number of samples used.
Let’s do some experiments and look at some empirical observations.
We should expect the noisy offset to change with the period of the PRBS, so let’s try using a 12-bit, a 14-bit, and a 16-bit PRBS:
```python
f12 = GF2QuotientRing(0x1053)
p12 = (1<<12) - 1
assert checkPeriod(f12,p12) == p12
pn12 = np.array(list(lfsr_output(f12))) * 2 - 1

kmax = 500
karray = np.arange(-kmax, kmax)
Cc, Ca, S = pn_sysid(pn12,karray)
correlation_waveforms(Cc,Ca,pn12,f12,karray,h_theoretical)
```
```python
f14 = GF2QuotientRing(0x402b)
p14 = (1<<14) - 1
assert checkPeriod(f14,p14) == p14
pn14 = np.array(list(lfsr_output(f14))) * 2 - 1

kmax = 500
karray = np.arange(-kmax, kmax)
Cc, Ca, S = pn_sysid(pn14,karray)
correlation_waveforms(Cc,Ca,pn14,f14,karray,h_theoretical)
```
```python
f16 = GF2QuotientRing(0x1002d)
p16 = (1<<16) - 1
assert checkPeriod(f16,p16) == p16
pn16 = np.array(list(lfsr_output(f16))) * 2 - 1

kmax = 500
karray = np.arange(-kmax, kmax)
Cc, Ca, S = pn_sysid(pn16,karray)
correlation_waveforms(Cc,Ca,pn16,f16,karray,h_theoretical)
```
Now that’s not really fair; the smaller-size LFSRs have shorter periods, and if we use fewer samples from the signals in our system, we’re bound to get a poorer response. So let’s try to normalize and use approximately \( 2^{18} \) samples in each case:
```python
def pnr(field, n=None):
    """ pseudonoise, repeated to approx 2^n samples """
    n0 = field.degree
    if n is None:
        n = n0
    nbits = ((1<<n0)-1) * (1<<(n-n0))
    return np.array(list(lfsr_output(field, nbits=nbits))) * 2 - 1

pn12 = pnr(f12, 18)
kmax = 500
karray = np.arange(-kmax, kmax)
Cc, Ca, S = pn_sysid(pn12,karray)
correlation_waveforms(Cc,Ca,pn12,f12,karray,h_theoretical)
```
```python
pn14 = pnr(f14, 18)
kmax = 500
karray = np.arange(-kmax, kmax)
Cc, Ca, S = pn_sysid(pn14,karray)
correlation_waveforms(Cc,Ca,pn14,f14,karray,h_theoretical)
```
```python
pn16 = pnr(f16, 18)
kmax = 500
karray = np.arange(-kmax, kmax)
Cc, Ca, S = pn_sysid(pn16,karray)
correlation_waveforms(Cc,Ca,pn16,f16,karray,h_theoretical)
```
The response looks sort of the same in all cases, kind of like a random walk, but it’s not clear where this comes from. First let’s make one small change: in order to see the noise without the system response, we’ll run the same experiment as the graph above (approximately \( 2^{18} \) samples with a 16-bit LFSR) but use a disturbance signal of essentially zero:
```python
kmax = 500

def pn_sysid_nodisturbance(pn, karray, ilfunc=ilfunc1):
    # same as pn_sysid() but with no disturbance
    S = sysid_sim(Ra=Ra, Rb=Rb, C=C,
                  pVa=0,
                  pVb=0,
                  N=len(pn),seed=1234,ilfunc=ilfunc)
    Cc = pncorrelate_array(S['Vc'],pn,karray)
    Ca = [pVa_amplitude]  # amplitude which we were using in pn_sysid()
    return Cc, Ca, S

def rms(y):
    """ root-mean-square """
    return np.sqrt(np.mean(y**2))

def crosscorr_title(ax, pn, f):
    ax.set_title('Cross-correlation of Vc with PRBS, n=%d, f=0x%x (N=%d)'
                 % (len(pn), f.coeffs, f.degree), fontsize=10)

karray = np.arange(-kmax,kmax)
Cc, Ca, S = pn_sysid_nodisturbance(pn16, karray)
correlation_waveforms(Cc,[1.0],pn16,f16,karray,h=None)
crosscorr_title(plt.gca(), pn16, f16)
Vcmean = S['Vc'].mean()
print "mean(Vc) = ", Vcmean
print "rms(Vc) = ", rms(S['Vc']-Vcmean)
print "pVa_amplitude=", pVa_amplitude
```
```
mean(Vc) =  0.189627738772
rms(Vc) =  0.0110564161443
pVa_amplitude= 0.005
```
Okay, remember, we’re computing \( C_{by}[\Delta k] = \tfrac{1}{n}\sum\limits_{k=0}^{n-1}b[k]y_0[k-\Delta k] \) where \( b[k] \) is our pseudonoise waveform, and \( y_0[k] \) is the normal system signal without any perturbation: in this case, it’s the voltage \( V_C \) on the capacitor in the system we’re trying to identify.
Now, \( b[k] \) is our PRBS, which is always ±1 but looks like noise, relative to the signal, and it has a mean value of \( 1/(2^N-1) \) (very close to zero) and standard deviation of 1.0. Statistically, we can think of it as a sequence of independent identically-distributed random variables, which have these properties (in case you don’t remember your probability and statistics course):
- If \( b[k] \) has zero mean and standard deviation \( \sigma \) for all \( k \), and \( a \) is a constant, then \( ab[k] \) has zero mean and standard deviation \( a\sigma \)
- If \( b[k] \) has zero mean and standard deviation \( \sigma \) for all \( k \), then \( a_0b[0] + a_1b[1] + a_2b[2] + \ldots \) has zero mean and standard deviation \( \sigma \sqrt{a_0^2 + a_1^2 + a_2^2 + \ldots} \) — the amplitudes add in quadrature
- Even though \( b[k] \) isn’t Gaussian (it’s a discrete Bernoulli variable), large sums like \( a_0b[0] + a_1b[1] + a_2b[2] + \ldots \) tend to have a Gaussian distribution because of the central limit theorem
So we should expect that \( C_{by}[\Delta k] \), when \( b \) and \( y \) are independent, acts somewhat like a Gaussian random variable with close-to-zero mean and standard deviation \( \sigma_C \) as follows:
$$\begin{align} \sigma_C &= \frac{1}{n}\sqrt{\sum\limits_{k=0}^{n-1}y[k]^2} \cr &= \frac{1}{\sqrt{n}}\cdot\sqrt{\frac{1}{n}\sum\limits_{k=0}^{n-1}y[k]^2} \cr &= \frac{1}{\sqrt{n}}\cdot y_{\text{RMS}} \end{align}$$
In other words, the correlation has a bell-curve distribution with standard deviation proportional to the factor \( \tfrac{1}{\sqrt{n}} \) and to \( y_{\text{RMS}} \), the root-mean-square of the input sample data \( y[k] \). If we let \( n \) get really large, this term goes to zero, but with finite \( n \) we’re left with some residual noise. The noise has two components of interest:
- a mean value \( \bar{y} \) (approximately 0.19V in the case of the \( V_C \) waveform)
- RMS \( \sigma_y \) excluding the mean (approximately 0.011V for \( V_C \))
Why is it important to consider these separately? Because they have different gains: the PRBS has a nonzero mean of \( 1/(2^N-1) \) (where \( N \) is the bit size of the LFSR), so the correlation applies a gain of \( 1/(2^N-1) \) to the mean component, whereas the gain applied to the RMS component is \( 1/\sqrt{n} \) (where \( n \) is the number of samples taken).
In the case shown above, \( N=16 \) and \( n=4\cdot(2^{16}-1) \approx 2^{18} \), so we should expect noise that has a mean of roughly \( 2^{-16}\bar{y} \approx 2.90 \times 10^{-6} \) and a standard deviation of roughly \( 2^{-9}\sigma_y \approx 2.15 \times 10^{-5} \). This is in the same ballpark as the cross-correlation graph above, where the noise wanders around between \( 1 \times 10^{-5} \) and \( 2 \times 10^{-5} \).
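We can sanity-check the \( \sigma_C = y_{\text{RMS}}/\sqrt{n} \) estimate with a quick simulation. Here I use random ±1 signs as a stand-in for the PRBS, which sidesteps the small \( 1/(2^N-1) \) mean gain of a true PRBS — the statistical argument only needs a zero-mean ±1 sequence uncorrelated with \( y \). The sizes, seed, and tolerance are my own choices:

```python
import numpy as np

rng = np.random.RandomState(1234)
n = 4095
y = 0.5 * rng.randn(n)              # zero-mean "background" signal, rms ~ 0.5
b = rng.choice([-1.0, 1.0], n)      # stand-in pseudonoise, uncorrelated with y

# normalized cross-correlation at a few hundred lags
lags = np.arange(300)
C = np.array([np.mean(b * np.roll(y, k)) for k in lags])

y_rms = np.sqrt(np.mean(y**2))
predicted = y_rms / np.sqrt(n)      # sigma_C = y_RMS / sqrt(n)
print(np.std(C) / predicted)        # should come out close to 1
```

The empirical standard deviation of the correlation values lands near the \( y_{\text{RMS}}/\sqrt{n} \) prediction, confirming the scaling argument.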
The head-scratching part of this comes with the random-walk similarity. If we calculate \( C_{by}[\Delta k] \) and \( C_{by}[\Delta k + 1] \), they tend to be similar. The individual samples \( C_{by}[\Delta k] \) aren’t independent noise sources. What’s going on?
PRBS as an Orthonormal Basis
The answer comes when we consider PRBS as an orthonormal basis, and cross-correlation as a way to transform individual samples \( y_0[k] \) (remember, we’re using the 0 subscript to denote the signal without any perturbation input) into the “PRBS domain” as weighted coefficients \( Y_0[\Delta k] \). This is the reason I call it “reflected pseudonoise”, because it’s the unmodified signals \( y_0[k] \) transformed, or reflected, into “noise” by the operation of correlation.
Perhaps you’re acquainted enough with linear algebra that this idea makes sense. If not, let’s take another look. I’m going to use one of the shorter PRBS runs just to keep the computational load down to a reasonable number.
Here’s the “noise” seen in the cross-correlation from the first 16383 samples of Vc, with the PRBS from a 14-bit LFSR, and this time we’re going to compute all 16383 cross-correlation values:
```python
pn14s = pnr(f14)   # no repeats
kmax = 8192
karray = np.arange(-kmax+1, kmax)
Cc, Ca, S = pn_sysid_nodisturbance(pn14s, karray)
correlation_waveforms(Cc,[1.0],pn14s,f14,karray,h=None)
crosscorr_title(plt.gca(), pn14s, f14)
```
Now, one interesting aspect of correlating signals with a full period of LFSR output relates to the frequency domain.
Just as the convolution \( x * y \) in the time domain shows up as the product \( X(f)Y(f) \) in the frequency domain, following the Convolution Theorem, the correlation \( C_{xy} \) in the time domain shows up as \( X(f)\bar{Y}(f) \) in the frequency domain, where the bar over the Y is the complex conjugate, due to the Cross-Correlation Theorem. (There’s a nice explanation of this on dsp.stackexchange by Jason R, who is not me.)
What this means is that if we know the Fourier transforms of two signals X and Y, we can calculate the Fourier transform of their correlation without actually calculating the correlation itself. Equivalently, if we want to calculate all possible values of the correlation sequence, we can use FFTs to speed up the calculation.
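As a concrete check of the Cross-Correlation Theorem, here's a sketch showing that circular cross-correlation computed directly matches the FFT route. The conjugation convention is the part that's easy to get wrong, so the indexing below deliberately follows the \( C_{by}[\Delta k] = \tfrac{1}{n}\sum_k b[k]\,y[k-\Delta k] \) convention used in this article:

```python
import numpy as np

rng = np.random.RandomState(0)
n = 256
x = rng.randn(n)
y = rng.randn(n)

# direct circular cross-correlation: C[d] = (1/n) sum_k x[k] y[k-d]
# (note that np.roll(y, d)[k] == y[(k-d) % n])
C_direct = np.array([np.dot(x, np.roll(y, d)) for d in range(n)]) / n

# frequency-domain route: the DFT of C is X(f) * conj(Y(f)), scaled by 1/n
C_fft = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(y))).real / n

print(np.allclose(C_direct, C_fft))   # True
```

For long sequences the FFT route is far cheaper: \( O(n\log n) \) for all \( n \) lags instead of \( O(n^2) \).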
It gets even more interesting when you correlate with the PRBS from an LFSR, because we showed in part XII that the FFT of a PRBS has uniform amplitude:
Actually, the fine print is that the FFT is two-valued, but uniform over all frequencies except for DC:
$$\left|B(f)\right| = \begin{cases} 2^{N/2} & f \ne 0 \cr 1 & f = 0 \end{cases}$$
You can just barely see the dip at \( f=0 \) in the graph above, where the amplitude is shown in dB relative to the number of samples N.
The phase of the PRBS in the frequency domain, on the other hand, skips around seemingly randomly, with no discernible pattern.
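The two-valued FFT magnitude is easy to check on a small case. With a 4-bit LFSR (period \( n = 2^4-1 = 15 \)), \( |B(f)| \) should be \( 2^{4/2} = 4 \) at every nonzero frequency bin and 1 at DC. The little Galois LFSR here is a standalone stand-in for the article's `lfsr_output`:

```python
import numpy as np

def lfsr_prbs(poly=0x13, degree=4):
    """One period of a maximal-length LFSR output as +/-1 values
    (0x13 = x^4 + x + 1, a primitive polynomial)."""
    n = (1 << degree) - 1
    state = 1
    out = []
    for _ in range(n):
        out.append((state >> (degree - 1)) & 1)
        state <<= 1
        if state >> degree:
            state ^= poly
    return 2 * np.array(out) - 1

b = lfsr_prbs()
B = np.abs(np.fft.fft(b))
print(B[0])    # 1.0 at DC (the sequence has one extra +1)
print(B[1:])   # 4.0 at every other bin
```

The flat magnitude at nonzero frequencies follows directly from the two-valued autocorrelation: \( |B(f)|^2 \) is the DFT of the raw autocorrelation, which is \( n - (-1) = 2^N \) away from DC.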
So the key insight is that because of this weird uniform-amplitude property of the PRBS, and the Correlation Theorem, when you correlate some signal \( y[k] \) with an LFSR-based PRBS \( b[k] \), with an integral number of periods of the LFSR, then the frequency spectrum of the correlation \( C_{by}[\Delta k] \) has the same shape of amplitude (except for a relative attenuation at DC) as the frequency spectrum of \( y[k] \). The phase is scrambled, but the shape of the amplitude is left alone. We essentially get noise with the same power spectral density shape as the spectrum of the original signal. (The scaling is by \( 2^{N/2} \) if the correlation is raw, and approximately \( 2^{-N/2} \) if the correlation is normalized by a factor of \( 1/n=1/(2^N-1) \).)
Don’t believe me? Let’s try it!
```python
# Code from part XI: show_fft_real

def show_freq_spectra(V, Vtitle, subtitle=''):
    n = len(V)
    t = np.arange(n)/10.0e3
    fig = plt.figure()
    ax_time, ax_freq = show_fft_real(V,t,dbref=n,fig=fig,
                                     freq_only=True,linewidth=3,color='red')
    # Cc is normalized by 1/n, so we have to scale it up
    # by sqrt(n) to compare apples-to-apples.
    show_fft_real(Cc*np.sqrt(n),t,dbref=n,fig=fig,
                  freq_only=True,linewidth=0.8,color='yellow')
    ax_freq.set_xlim(0,1000)
    ax_freq.set_title(
        'Frequency spectra of y[k] = %s, and Y[$\\Delta$k] = correlation of y[k] with PRBS%s%s'
        % (Vtitle, '' if subtitle == '' else '\n', subtitle),
        fontsize=10);

show_freq_spectra(S['Vc'], 'Vc', 'ilfunc=ilfunc1')
```
They’re right on top of each other!
So if your original signal without perturbation \( y_0[k] \) has its largest energy in a particular frequency range, so will the correlation \( Y_0[\Delta k] \) between \( y_0[k] \) and an LFSR-based PRBS. (Again, with the caveat that you need to perform the correlation over an entire period of the LFSR.) If I have \( y_0[k] \) with mostly low-frequency content, the power spectral density of the PRBS correlation will be mostly at low-frequencies, and this means that neighboring correlation values don’t change quickly from one value of \( \Delta k \) to the next. If I have \( y_0[k] \) with mostly high-frequency content, so will the PSD of the PRBS correlation \( Y_0[\Delta k] \).
As an example of this high-frequency content, we can use the 202Hz-centered `ilfunc2` for the load current \( I_L \):
```python
pn14s = pnr(f14)   # no repeats
kmax = 8192
karray = np.arange(-kmax+1, kmax)
Cc, Ca, S = pn_sysid_nodisturbance(pn14s, karray, ilfunc=ilfunc2)
correlation_waveforms(Cc,[1.0],pn14s,f14,karray,h=None)
crosscorr_title(plt.gca(), pn14s, f14)
show_freq_spectra(S['Vc'], 'Vc', 'ilfunc=ilfunc2')
```
The signal \( y_0[k] \) is “reflected” into the “PRBS domain” as \( Y_0[\Delta k] \), and if \( y_0[k] \) is unrelated to the PRBS, then its correlation \( Y_0[\Delta k] \) looks like noise. (Reflected pseudonoise!) Whereas if we use the PRBS as an input perturbation, then the corresponding change in output will have a correlation \( Y[\Delta k] \) that looks like the impulse response. The challenge is separating the two, and there isn’t really any way to do so; the only things we can do are to understand and accept the magnitude of the reflected pseudonoise, or increase the magnitude of the perturbation signal so that the signal-to-noise ratio is greater.
Or is there?
Noise Shaping to Enhance PRBS-based System Identification
One thing we can do, if we know something about the spectral density of the background signal \( y_0[k] \), is to try to filter it out. In the frequency spectrum graph of the `ilfunc=ilfunc1` case above, most of the energy is in the lower frequencies, whereas the PRBS has energy across the spectrum.
So let’s try applying a high-pass filter to the signal we receive, before computing the correlation.
```python
import scipy.signal

alpha = 0.001

def filter_1pole(y,alpha):
    return scipy.signal.lfilter([alpha], [1,alpha-1], y)

def make_hpf(alpha):
    def f(y):
        y_lpf = filter_1pole(y,alpha)
        return y - y_lpf
    return f

kmax = 500
karray = np.arange(-kmax, kmax)

# no filter
Cc, Ca, S = pn_sysid(pn18,karray)
correlation_waveforms(Cc,Ca,pn18,f18,karray, h_theoretical)

# high pass filter
Cc_hpf, Ca_hpf, S = pn_sysid(pn18,karray,filter=make_hpf(alpha))
correlation_waveforms(Cc_hpf,Ca,pn18,f18,karray, h_theoretical)
```
Hmm. That really doesn’t help much, except to remove the DC offset.
Getting Those System Parameters, Finally
Now what were we trying to do in the first place? Oh, yeah, estimating these transfer functions:
$$\begin{align} H_A(s) &= \frac{\alpha_A}{1+\tau s} \cr H_B(s) &= \frac{\alpha_B}{1+\tau s} \cr \end{align}$$
with uncertain parameters \( \alpha_A \), \( \alpha_B \), \( \tau \).
A central topic of system identification is devoted to techniques for solving this type of problem, namely determining parameter estimates based on the data we get back from a perturbed system, whether it’s perturbed with sine waves or PRBS.
We’re going to muddle our way through this… again, I’m not an expert in system identification, and the main point of this article is the use of PRBS, which I’ve already shown. But let’s finish this example, by looking at data in a smaller range of \( \Delta k \):
```python
karray = np.arange(0, 32)
pert_ampl = pVa_amplitude
Cc, Ca, S = pn_sysid(pn18,karray,input_amplitude=pert_ampl)
correlation_waveforms(Cc,Ca,pn18,f18,karray, h_theoretical)
```
This is essentially a nonlinear least-squares problem: we are trying to find constants \( a,b,c \) such that the residual \( n[\Delta k] \) in \( C[\Delta k] = ae^{-b\Delta k} + c + n[\Delta k] \) has minimal mean square. The fact that there doesn’t seem to be much noise in the correlation data is encouraging.
On a PC where we have lots of computing power, estimating those constants is not too hard; `scipy.optimize.curve_fit` can help, and in fact the example in the documentation is this exact type of function, an exponential with a DC offset. Let’s try it!
```python
import scipy.optimize

def exp_ofs(x, a, b, c):
    return a*np.exp(-b*x) + c

h_measured = Cc/Ca[0]
popt, pcov = scipy.optimize.curve_fit(exp_ofs, karray, h_measured)
a,b,c=popt
print "a=%s, b=%s, c=%s" % (a,b,c)
correlation_waveforms(Cc,Ca,pn18,f18,karray, h_theoretical)
ax = plt.gca()
ax.plot(karray,exp_ofs(karray, a, b, c),'-',linewidth=0.5);
```
a=0.0866434478506, b=0.0927869720572, c=0.0104562117569
Hurray! We have a really good fit. Now let’s see how close we are to the actual values. First we have to derive the real system values \( \alpha_A \) and \( \tau \) from \( a \) and \( b \):
$$\begin{align} H(s) = \frac{\alpha_A}{1 + \tau s} &\Longleftrightarrow h(t) = \frac{\alpha_A}{\tau}e^{-t/\tau} \cr &\Longleftrightarrow h[\Delta k]=\frac{\alpha_A\Delta t}{\tau}e^{-(\Delta t/\tau)\Delta k} \end{align}$$
so that
$$\begin{align} a &= \alpha_A\Delta t/\tau \cr b &= \Delta t/\tau \cr \tau &= \Delta t/b \cr \alpha_A &= a/b \end{align}$$
Actually, this isn’t 100% accurate; in the equations above, we snuck in a sloppy continuous-to-discrete time conversion, and the \( h[\Delta k]=\frac{\alpha_A\Delta t}{\tau}e^{-(\Delta t/\tau)\Delta k} \) equation isn’t exactly correct. If we let \( u=e^{-\Delta t/\tau} \) then for \( \alpha_A=1 \) we should have \( h[\Delta k] = (1-u)\cdot u^{\Delta k} \) so that the sum \( \sum\limits_{\Delta k\ge 0} h[\Delta k] = 1 \). (If you don’t remember your z-transform theory, the DC gain is the final value of the step response as \( t \to \infty \), which is the sum of all the impulse response terms.) So the corrected equation is
$$h[\Delta k]=\alpha_A\left(1-e^{-\Delta t/\tau}\right)e^{-(\Delta t/\tau)\Delta k}$$
which means that
$$\begin{align} a &= \alpha_A\left(1-e^{-\Delta t/\tau}\right)\cr b &= \Delta t/\tau \cr \tau &= \Delta t/b \cr \alpha_A &= a/\left(1-e^{-b}\right) \end{align}$$
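A quick numerical check of the exact-discretization formula: the one-pole filter \( y[k] = (1-u)x[k] + u\,y[k-1] \) with \( u = e^{-\Delta t/\tau} \) has impulse response \( h[\Delta k] = (1-u)u^{\Delta k} \), which sums to the DC gain of 1. The particular \( \Delta t \) and \( \tau \) values below are arbitrary choices for illustration:

```python
import numpy as np
import scipy.signal

dt = 1e-4
tau = 1.08e-3        # arbitrary values for illustration
u = np.exp(-dt/tau)

# impulse response of y[k] = (1-u)*x[k] + u*y[k-1]
nk = 200
impulse = np.zeros(nk)
impulse[0] = 1.0
h = scipy.signal.lfilter([1-u], [1, -u], impulse)

k = np.arange(nk)
print(np.allclose(h, (1-u)*u**k))   # matches the closed form
print(h.sum())                      # close to 1 (DC gain), up to truncation
```

The sloppy approximation \( h[\Delta k] \approx \tfrac{\Delta t}{\tau}u^{\Delta k} \) differs from this by the ratio \( (1-u)/(\Delta t/\tau) \), which is what caused the few-percent estimation error mentioned below.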
```python
import pandas as pd

df_A = pd.DataFrame([dict(estimated = S['dt']/b, actual=S['tau']),
                     dict(estimated = a/(1-np.exp(-b)), actual=S['alpha_A'])],
                    index=['tau','alpha_A'])
df_A
```
Now for \( \alpha_B \). This is a little trickier; since the gain is expected to be smaller, we’ll have to bump up the perturbation amplitude in order for the output to be similar. Also, since we expect the same time constant, we can just reuse the one we identified for the A → C channel. This turns things into a linear least-squares problem: \( y = ae^{-bx}+c = au+c \), where \( u=e^{-bx} \), since we’re reusing the exponential constant \( b \). So we can use `numpy.linalg.lstsq`:
```python
karray = np.arange(0, 32)
pert_ampl = 0.02
Cc, Cb, S = pn_sysid(pn18,karray,input_amplitude=pert_ampl,input_signal='Vb')

def h_theo_b(x):
    return h_theoretical(x,A=alpha_B)

u = np.exp(-b*karray)
A = np.vstack([u,np.ones_like(u)]).T
a, c = np.linalg.lstsq(A,Cc/Cb[0])[0]
correlation_waveforms(Cc,Cb,pn18,f18,karray, h_theo_b)
ax = plt.gca()
ax.plot(karray,a*u+c,'-',linewidth=0.5);
```
```python
df_B = pd.DataFrame(dict(estimated = a/(1-np.exp(-b)), actual=S['alpha_B']),
                    index=['alpha_B'])
df = pd.concat([df_A,df_B])
df['error (%)'] = (df['estimated'] - df['actual'])/df['actual']*100
df
```
There! We’re within 1% of the actual values for the results of the “A” experiment, and almost within 5% of actual value for the “B” experiment. We could do better with larger amplitude perturbation. Also, it’s possible to perturb both A and B inputs at the same time, as long as the pseudonoise for each is as uncorrelated as possible; this lets us avoid having to do two separate experiments that take twice the time:
```python
karray = np.arange(0, 32)

def pn_sysid_doitall(karray,pVa,pVb,ilfunc=ilfunc1,seed=1234):
    n = len(pVa)
    assert n == len(pVb)
    S = sysid_sim(Ra=Ra, Rb=Rb, C=C,
                  pVa=pVa,
                  pVb=pVb,
                  N=n,seed=seed,ilfunc=ilfunc)
    # Estimate impulse response as ratio
    # between input and output correlation;
    # we only need input correlation with delay=0
    #
    # Do this for the Va input
    Ca_in = pncorrelate_array(S['Va'],pVa,[0])
    Ca_out = pncorrelate_array(S['Vc'],pVa,karray)
    ha_measured = Ca_out/Ca_in[0]
    # And then for the Vb input
    Cb_in = pncorrelate_array(S['Vb'],pVb,[0])
    Cb_out = pncorrelate_array(S['Vc'],pVb,karray)
    hb_measured = Cb_out/Cb_in[0]
    # Now estimate tau and alpha_A from the resulting correlation curve
    def exp_ofs(x, a, b, c):
        return a*np.exp(-b*x) + c
    popt, pcov = scipy.optimize.curve_fit(exp_ofs, karray, ha_measured)
    a,b,c=popt
    B = 1-np.exp(-b)
    df_A = pd.DataFrame([dict(estimated = S['dt']/b, actual=S['tau']),
                         dict(estimated = a/B, actual=S['alpha_A'])],
                        index=['tau','alpha_A'])
    # Now estimate alpha_B
    u = np.exp(-b*karray)
    A = np.vstack([u,np.ones_like(u)]).T
    a, c = np.linalg.lstsq(A,hb_measured)[0]
    df_B = pd.DataFrame(dict(estimated = a/B, actual=S['alpha_B']),
                        index=['alpha_B'])
    df = pd.concat([df_A,df_B])
    df['error (%)'] = (df['estimated'] - df['actual'])/df['actual']*100
    return df

pn_sysid_doitall(karray, pVa=0.02*pn18, pVb=0.1*np.roll(pn18,1000))
```
The use of `pVb=0.1*np.roll(pn18,1000)` for the “b” input is a version of the same pseudonoise used for the “A” input, but delayed by 1000 samples. As long as the impulse response of the “A” input doesn’t have any significant content after 1000 samples, this approach is safe to use, and we can use one LFSR to generate both pseudonoise sequences. We don’t actually have to keep a delayed copy of the pseudonoise sequence; if you were paying close attention in Part IX, I mentioned that any delayed version of the LFSR output can be expressed as an XOR of a particular subset of the LFSR state, so we can look for simple XOR combinations of a few of the LFSR state bits and try to find one that has a large delay relative to the normal LFSR output. The `libgf2.gf2.GF2TracePattern` class can help analyze this:
```python
import libgf2.gf2

trp1 = libgf2.gf2.GF2TracePattern.from_mask(f18,1 << 17)
trp2 = libgf2.gf2.GF2TracePattern.from_mask(f18,1 << 1)
rho = f18.wrap(trp1.pattern) / f18.wrap(trp2.pattern)
n = (1 << f18.degree)-1
print "Expected delay = %d (out of period %d)" % (
    rho.log, n)

bit17 = []
bit1 = []
S = 1
for _ in xrange(n):
    bit17.append((S >> 17) & 1)
    bit1.append((S >> 1) & 1)
    S = f18.mulraw(S, 2)   # shift left
bit17 = np.array(bit17)
bit1 = np.array(bit1)

def bitsof(array, a, b):
    return ''.join('01'[bit] for bit in array[a:b])

print "bit17[0:50] = %s" % bitsof(bit17,0,50)
print "bit1[0:50] = %s" % bitsof(bit1,0,50)
d = rho.log
print "bit1[%6d:%6d] = %s" % (d, d+50, bitsof(bit1,d,d+50))
```
```
Expected delay = 136155 (out of period 262143)
bit17[0:50] = 00000000000000000100000000000010011100000001000001
bit1[0:50] = 01000000000000000011000000000001101001000000110000
bit1[136155:136205] = 00000000000000000100000000000010011100000001000001
```
Whee! The magic of finite fields helps us again.
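The underlying fact — that any XOR combination of LFSR state bits reproduces the same m-sequence, just delayed — can be brute-force verified on a small LFSR, without the `GF2TracePattern` machinery. Here's a sketch with a standalone 4-bit Galois LFSR (my own minimal implementation, not the article's `lfsr_output`):

```python
import numpy as np

degree, poly = 4, 0x13     # x^4 + x + 1, primitive
n = (1 << degree) - 1

# record the full state at each step
states = []
S = 1
for _ in range(n):
    states.append(S)
    S <<= 1
    if S >> degree:
        S ^= poly

def parity(v):
    return bin(v).count('1') & 1

def mask_sequence(mask):
    """Bit sequence made by XORing together the state bits
    selected by mask, at each time step."""
    return np.array([parity(s & mask) for s in states])

base = mask_sequence(1 << (degree - 1))   # the "normal" output bit

# every nonzero mask gives some cyclic shift of the base sequence
delays = {}
for mask in range(1, 1 << degree):
    seq = mask_sequence(mask)
    for d in range(n):
        if np.array_equal(seq, np.roll(base, d)):
            delays[mask] = d
            break

print(len(delays))    # 15: every mask matched some delay
```

In fact the 15 nonzero masks map one-to-one onto the 15 possible delays, which is exactly why a discrete logarithm (the `rho.log` above) can tell you which mask yields which delay.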
Anyway, if we go to really large perturbation amplitudes, we should be able to get the error very low:
```python
pn_sysid_doitall(karray, pVa=10.0*pn18, pVb=1000.0*np.roll(pn18,1000))
```
Of course, this is kind of silly; remember, we’re trying to avoid using very large amplitudes. But it does give good evidence that the math is right. (The first time I tried this, I was getting 4-5% error, and finally traced it down to the error in my continuous-to-discrete approximation, hence these lines in `sysid_sim()`:
```python
# match discrete (1-alpha)^n to continuous e^(-dt/tau*n)
alpha = 1-np.exp(-dt/tau)
```
Normally I gloss over that kind of math, because it’s close enough to just approximate a first-order filter by sticking in \( \alpha \approx \Delta t/\tau \), but when you’re doing system ID and want to be super-accurate, then it’s important to be rigorous.)
Just to make sure we do a little due diligence, let’s look at `ilfunc2`, the 202Hz-centered load current:
```python
pn_sysid_doitall(karray, pVa=0.02*pn18, pVb=0.1*np.roll(pn18,1000),
                 ilfunc=ilfunc2)
```
It’s the same order-of-magnitude error.
Error analysis
We can be a little more rigorous by repeating it 10 times with different random seeds, and seeing how the statistics work out:
```python
from IPython.display import display

def error_analysis(rel_magn_list, npoints, ilfunc, pn, vbdelay=1000):
    for relative_magn in rel_magn_list:
        print "relative magnitude rho =", relative_magn
        print "normalized errors (rho * % error of actual value) :"
        errors = []
        for seed in (np.arange(npoints) + 1234):
            df=pn_sysid_doitall(karray,
                                pVa=relative_magn*0.02*pn,
                                pVb=relative_magn*0.1*np.roll(pn,vbdelay),
                                ilfunc=ilfunc,
                                seed=seed)
            errors.append(df['error (%)'])
        norm_errors = pd.DataFrame(errors) * relative_magn
        norm_error_stats = pd.DataFrame(dict(mean=norm_errors.mean(),
                                             std=norm_errors.std()))
        display(norm_error_stats)

error_analysis(rel_magn_list=[0.25, 1, 4.0], npoints=10,
               pn=pn18, ilfunc=ilfunc1)
```
relative magnitude rho = 0.25 normalized errors (rho * % error of actual value) :
relative magnitude rho = 1 normalized errors (rho * % error of actual value) :
relative magnitude rho = 4.0 normalized errors (rho * % error of actual value) :
The errors are essentially inversely proportional to the perturbation amplitude: if we increase the perturbation amplitude by a factor of 4, then we decrease the error by a factor of 4; if we decrease the perturbation amplitude by a factor of 4, then we increase the error by a factor of 4.
Here’s the same kind of experiment for the 202Hz load current, `ilfunc2`:
```python
error_analysis(rel_magn_list=[0.25, 1, 4.0], npoints=10,
               pn=pn18, ilfunc=ilfunc2)
```
relative magnitude rho = 0.25 normalized errors (rho * % error of actual value) :
relative magnitude rho = 1 normalized errors (rho * % error of actual value) :
relative magnitude rho = 4.0 normalized errors (rho * % error of actual value) :
The numbers are different (we get a larger error for `ilfunc2`) but the behavior is the same: increase the signal amplitude by a factor of K, and the error goes down by a factor of K.
Why are the estimation errors larger for `ilfunc2`? Because it has most of its energy at higher frequencies, and this means the PSD of reflected pseudonoise (when we correlate the system with pseudonoise) also has its content at higher frequencies. Whereas the current in `ilfunc1`, even though it has about the same amplitude, has most of its energy at low frequencies, and this means the PSD of reflected pseudonoise has most of its content at low frequencies.
So what? Well, we’re using a parameter estimation technique to estimate \( a, b, c \) of \( h[\Delta k] = ae^{-b\Delta k}+c \), which does a very good job of disregarding the DC offset \( c \), and because we’re working over a small number of samples of the impulse response (32 out of 262143 in the examples in this article), only the high-frequency content of the reflected pseudonoise contributes significantly towards estimation error; the low-frequency content of the reflected pseudonoise looks like DC, and we can remove it as an unknown DC offset. So `ilfunc1`‘s reflected pseudonoise gets rejected for the most part, whereas `ilfunc2`‘s does not.
What this means is that if you’re working with systems that have most of their frequency content at frequencies much lower than the Nyquist frequency (half the sampling frequency), then the technique given in this article should work reasonably well. Whereas if there’s lots of content at higher frequencies then it’s not so good.
Now let’s look at the effect on the estimated parameters if we change the number of samples.
Here’s the same kind of thing with `ilfunc1` using one cycle of a 16-bit LFSR. I would expect the errors to be twice as bad as with an 18-bit LFSR, since our error for \( n \) samples should be proportional to \( 1/\sqrt{n} \). For four cycles of a 16-bit LFSR I would expect performance similar to one cycle of an 18-bit LFSR.
```python
print "One cycle of %s:" % f16
pn16 = pnr(f16, 16)
error_analysis(rel_magn_list=[0.25, 1, 4.0], npoints=10,
               pn=pn16, ilfunc=ilfunc1)
print "Four cycles of %s:" % f16
pn16 = pnr(f16, 18)
error_analysis(rel_magn_list=[0.25, 1, 4.0], npoints=10,
               pn=pn16, ilfunc=ilfunc1)
```
One cycle: normalized errors (rho * % error of actual value) :
Four cycles: normalized errors (rho * % error of actual value) :
Hmm. It’s not clear what’s going on here. We saw earlier that the reflected pseudonoise definitely has that \( 1/\sqrt{n} \) behavior, but how that impacts the parameter estimation is not really clear, and it looks like there are some nonlinearities, because the mean value of the error is not zero. (Which technically means that the estimation technique I used was not an unbiased estimator, and we should be able to do better… but my brain is really starting to hurt here, so I’ll just let it go.)
What About Second-Order Systems? Closed-loop Systems?
We looked at a first-order system. What about a second-order system? I’m going to leave that exploration to the reader; the only difference is that you’d need to analyze the impulse response to estimate the poles and DC gain. This isn’t trivial, especially if the system behavior has wide dynamic range and the poles are very far apart. For example, even a 2nd-order system with a pole at 1Hz and the other at 1000Hz presents problems, since the transfer function amplitude at the 1000Hz pole is 1000 times smaller than at the 1Hz pole — the behavior at the higher-frequency pole is very hard to pick up.
Closed-loop systems are a challenge as well; if you perturb the system, the controller will tend to try to cancel out that perturbation. So it can be difficult in some frequency ranges to apply a perturbation signal under closed-loop control.
What About Embedded Systems?
Some of these calculations are feasible to implement on resource-limited embedded systems, and others aren’t. Here’s my take on it:
- LFSR pseudonoise generation: yes
- Computation of correlation: yes (for a limited number of samples of the impulse response, like 16 or 32; all we’re really doing is either adding or subtracting from an accumulator, depending on whether the pseudonoise bit is 1 or 0.)
- Estimation of parameters: tricky – nonlinear least squares is probably not feasible, but some kind of simplified recursive-least squares analysis is probably doable, especially if the uncertain parameters are known to lie within a narrow band, for example, 1 millisecond ± 10%. (Watch out though; recursive algorithms can have stability issues.) Or the clever engineer might be able to think of a simpler algorithm that is not as accurate but requires less computation.
I’d even note that the LFSR pseudonoise and correlation computations can be done by a low-end 8-bit processor! All that&rsqursquo;s necessary are bit shifts, XORs, addition, and subtraction, although the variables containing accumulated cross-correlation signals may need to be extended to 16- or 32-bit values to avoid overflow.
Wrapup
Whew! We covered quite a bit of ground on the topic of system identification using a pseudorandom bit sequence (PRBS) from an LFSR. (And yet, as far as system identification goes, this is just scratching the surface!)
System identification attempts to estimate uncertain or unknown parameters by observing system inputs and outputs.
In many cases the observed signals can be enhanced by perturbing some of those inputs.
Sine-wave perturbations can be used to estimate frequency-domain response, by correlating system inputs and outputs, one frequency at a time.
PRBS from an LFSR can be used as a perturbation signal to estimate impulse response in the time domain, by correlating system inputs and outputs. This works because the cross-correlation between the PRBS and a time-shifted version of the same sequence is nearly zero, forming a quasi-orthonormal basis.
The amplitudes of the frequency spectra of the following two signals are the same (except at DC):
- Any waveform \( y[k] \)
- The cross-correlation of \( y[k] \) with a PRBS over an integer number of periods
The cross-correlation of the system output with a PRBS perturbation has two parts:
- “Reflected pseudonoise” — this is the cross-correlation between PRBS and the “normal” system output (without perturbation)
- Impulse response cross-correlation — this represents cross-correlation between PRBS and the system output’s response to that perturbation, and it approximates the impulse response of the system transfer function
The amplitude of “reflected pseudonoise” is proportional (relative to the impulse response cross-correlation) to \( 1/\sqrt{n} \) where \( n \) is the number of samples.
Parameter estimation of a first-order system from the impulse response cross-correlation can be computed using nonlinear least squares.
Increasing the amplitude of the PRBS perturbation by a factor of \( K \) increases the impulse response cross-correlation by the same factor \( K \), and decreases the relative error of estimated parameters by approximately the same factor \( K \)
One more time: I’m not an expert in the field of system identification, so if one of you reads this article and finds some silly mistakes or inaccurate statements, please let me know about them, so I can improve it.
References
Unfortunately I have not found any really good references for this application of linear-feedback shift registers. The “classic” in the field of system identification is Ljung:
- Lennart Ljung, System Identification: Theory for the User, Prentice-Hall, 2nd edition, 1999.
Not only is this literally the book that the MATLAB System Identification Toolbox is based on, but Professor Ljung was the primary author of this toolbox. I don’t find it very readable at all; perhaps if you are a graduate student in control theory and estimation you will gobble up this material like you would a bowl full of potato chips, but I find it very impenetrable. Here’s an example:
Also, it has an awful name; the word “user” is primarily applied to drug addicts and customers of computer software, neither of which applies here. I bought a used copy, and the one saving grace that prevented me from angrily chucking the thing at the trash can — besides the fact that I did spend money on it — was Chapter 14, “Experiment Design”, which does have some useful practical advice, if you can get past all the \( \operatorname{arg} \operatorname{min} \) equations and those arcane cursive math fonts like \( \mathscr{ H D } \) to represent some abstract universe of things. (\( \mathscr{ D } \) = {all design variables A to D}, according to p. 340, following a long list of implementation choices with sections A - D.) Chapter 14 goes into topics like pretreatment of data with filtering, or choice of sampling interval. Unfortunately it’s a very short chapter.
Sigh. But maybe you’ll get something valuable out of reading Ljung.
Let me know if you find any other good references on the topic!
Next time we’ll cover one of the communications applications of LFSRs, namely Gold codes. And I promise it will be a much shorter article.
Previous post by Jason Sachs:
A Wish for Things That Work
Next post by Jason Sachs:
Linear Feedback Shift Registers for the Uninitiated, Part XIV: Gold Codes
- Write a CommentSelect to add a comment
The practical application of this is described in "
TMS320f28069-Based Impedance Spectroscopy With Binary
Excitation" by Rist, Reidla, Min, Parve, Martens and Land.
It might be under patent. | https://www.embeddedrelated.com/showarticle/1142.php | CC-MAIN-2018-26 | refinedweb | 12,369 | 54.22 |
Segmentation fault on SIGINT interception when trying to exit the QCoreApplication
Hi,
I want to use a cleanup function when my QCoreApplication is destroyed (intercepting a SIGINT).
I've tried two methods:
First to connect the QCoreApplication::aboutToQuit to a slot that would do the job but I don't end up in the slot when the application is closed by a SIGINT or even if close manually the app (launched via QT Creator)...
So my second option would be to use signal interceptions. Here is what I do:
#include <csignal> Proxy *theProxy = NULL; QCoreApplication *app = NULL; void handleShutDown(int signal){ Proxy::shutDown(); std::cout << "Try to exit the event loop\n"; app->exit(0); } int main(int argc, char *argv[]) { app = new QCoreApplication(argc, argv); signal(SIGINT, &handleShutDown);// shut down on ctrl-c signal(SIGTERM, &handleShutDown);// shut down on killall theProxy = Proxy::getInstance(); // QObject::connect(app, &QCoreApplication::aboutToQuit, theProxy, &Proxy::deleteInstance); if (!theProxy->start()){ std::cout << "Error starting Proxy...\n"; return 1; } app->exec(); // Run the event loop std::cout << "Event loop ended..."; return 0; }
So here is what I get when I stop my app with a SIGINT:
^CTry to exit the event loop Segmentation fault
I can see that the cleanup funtion is well executed but then there is a segmentation fault when I try to exit the QCoreApplication.
I've no clue how I could debug this...
Any idea what I'm doing wrong or how I could get traces to understand more the issue?
Thanks.
- kshegunov Qt Champions 2017
Hello,
Firstly, do not create the
QCoreApplicationinstance in the heap, you gain only headaches. Secondly, if you really need a global static variable (which I believe you don't) you should use Q_GLOBAL_STATIC. You would want to intercept the signals only if you have some very specific setup where you need to cleanup some global resources that don't belong to your application (i.e. a shutdown routine for a driver). Global variables are like premature optimization, they're troublesome and in most cases purely unnecessary, don't use them unless you're forced to.
Kind regards.
- SGaist Lifetime Qt Champion
Hi,
To add to @kshegunov, you can access the instance of QCoreApplication through qApp.
This post is deleted!
Ok, I've let the QCoreApplication on the stack and don't use anymore a global variable for my Proxy. I don't really need to as I'm using the singleton pattern. Both constructor and destructor are private.
I'm using a static method to get/create the instance and another one to delete it.
Here is the new code.
#include <csignal> void handleShutDown(int signal){ std::cout << "Try to shutdown the proxy\n"; Proxy::shutDown(); std::cout << "Try to exit the event loop\n"; qApp->exit(0); } int main(int argc, char *argv[]) { QCoreApplication app(argc, argv); signal(SIGINT, &handleShutDown);// shut down on ctrl-c signal(SIGTERM, &handleShutDown);// shut down on killall Proxy *theProxy = Proxy::getInstance(); // QObject::connect(app, &QCoreApplication::aboutToQuit, theProxy, &Proxy::deleteInstance); if (!theProxy->start()){ std::cout << "Error starting Proxy...\n"; return 1; } app.exec(); // Run the event loop std::cout << "Event loop ended..."; return 0; }
I'm still having the same segmentation fault. Any idea why? The object I'm deleting inherit from QTcpServer and is just listening.
I'm having the crash without having done anything.
As I said, the object is destructed and the crash seems to happen after on the exit call to stop the event loop.
Server started, listening on port: 11111 ^CTry to shutdown the proxy Try to exit the event loop Segmentation fault
PS: I need to have a cleanup function. In most cases, my program will be stopped by an interruption and I want to make sure it releases the resources properly and take the time to write final logs before shutting down.
Alright the handling of the shutdown interruption is working fine.
I've managed to find what was causing the segmentation fault. It is the construction/deletion of my SQL class that do a QSqlDatabase::addDatabase...
I'm going to try to understand what could be the problem and open another post if I don't manage to get what is wrong.
- kshegunov Qt Champions 2017
@mbruel
Is your SQL class by any chance a singleton as well? If you answer yes, then this is one of the reasons you should not use global variables (a singleton is pretty much the same as a global variable).
- SGaist Lifetime Qt Champion
Also, what backtrace to you get from the debugger ? | https://forum.qt.io/topic/62147/segmentation-fault-on-sigint-interception-when-trying-to-exit-the-qcoreapplication | CC-MAIN-2018-34 | refinedweb | 756 | 53.61 |
Wow, this almost looks like a real flamefest. ("Flame" being defined as the presence of metacomments.) (In the following, s is an 8-bit string, u is a Unicode string, and e is an encoding name.) The original design of the encode() methods of string and Unicode objects (in 2.0 and 2.1) is asymmetric, and clearly geared towards Unicode codecs only: to decode an 8-bit string you *have* to use unicode(s, encoding) while to encode a Unicode string into a specific 8-bit encoding you *have* to use u.encode(e). 8-bit strings also have an encode() method: s.encode(e) is the same as unicode(s).encode(e). (This is useful since code that expects Unicode strings should also work when it is passed ASCII-encoded 8-bit strings.) I'd say there's no need for s.decode(e), since this can already be done with unicode(s, e) -- and to me that API looks better since it clearly states that the result is Unicode. We *could* have designed the encoding API similarly: str(u, e) is available, symmetric with unicode(s, e), and a logical extension of str(u) which uses the default encoding. But I accept the argument that u.encode(e) is better because it emphasizes the encoding action, and because it means no API changes to str(). I guess what I'm saying here is that 'str' does not give enough of a clue that an encoding action is going on, while 'unicode' *does* give a clue that a decoding action is being done: as soon as you read "Unicode" you think "Mmm, encodings..." -- but "str" is pretty neutral, so u.encode(e) is needed to give a clue. Marc-Andre proposes (and has partially checked in) changes that stretch the meaning of the encode() method, and add a decode() method, to be basically interfaces to anything you can do with the codecs module. The return type of encode() and decode() is now determined by the codec (formerly, encode() always returned an 8-bit string). Some new codecs have been added that do things like gzip and base64. Initially, I liked this, and even contributed a codec. 
But questions keep coming up. What is the problem being solved? True, the codecs module has a clumsy interface if you just want to invoke a codec on some data. But that can easily be remedied by adding convenience functions encode() and decode() to codecs.py -- which would have the added advantage that it would work for other datatypes that support the buffer interface, e.g. codecs.encode(myPILobject, "base64"). True, the "codec" pattern can be used for other encodings than Unicode. But it seems to me that the entire codecs architecture is rather strongly geared towards en/decoding Unicode, and it's not clear how well other codecs fit in this pattern (e.g. I noticed that all the non-Unicode codecs ignore the error handling parameter or assert that it is set to 'strict'). Is it really right that x.encode("gzip") and x.encode("utf-8") look similar, while the former requires an 8-bit string and the latter only makes sense if x is a Unicode string? Another (minor) issue is that Unicode encoding names are an IANA namespace. Is it wise to add our own names to this? I'm not forcing a decision here, but I do ask that we consider these issues before forging ahead with what might be a mistake. A PEP would be most helpful to focus the discussion. --Guido van Rossum (home page:) | http://mail.python.org/pipermail/python-dev/2001-June/015404.html | crawl-002 | refinedweb | 601 | 73.17 |
Character set is a set of valid characters that a language can recognize. Java uses the Unicode character set.
Unicode is a 16-bit character code set that has characters representing almost all characters in almost all human alphabets.
Token is the smallest individual unit in a program. Keywords, identifiers, literals, punctuators and operators are various types of tokens.
Keywords are the words that convey a special meaning to the language compiler.
Identifiers are used as the general terminology for the names given to different parts of the program.
Rules for naming identifiers:
a) It can include alphabets, digits, underscore and dollar sign.
b) It must not be a keyword.
c) It must not begin with a digit.
d) It can be of any length.
e) It is case-sensitive.
Literals are constants, which are data values that are fixed.
Java supports three types of integer literals: decimal, octal and hexadecimal. Octal literals are preceded with 0. Hexadecimal literals are preceded with 0x or 0X.
Real literals in exponent for has two parts: a mantissa and an exponent. The mantissa can be an integer or real literal. The exponent must be an integer.
Boolean literal can be either true or false.
Character literal is a single character, enclosed in single quotes. Escape sequence is a non-graphic character that cannot be typed directly from keyboard. An escape sequence is always preceded by a backslash.
String literal is a sequence of zero or more characters, enclosed in double quotes.
Data types are means to identify the type of data and its associated operations. There are two types of data types in Java:
a) Primitive data type
b) Reference data type
Primitive data types are basic data types. Java provides eight primitive data types:
byte (1 byte)
short (2 bytes)
int (4 bytes)
long (8 bytes)
float (4 bytes)
double (8 bytes)
char (2 bytes)
boolean (1 byte)
float data type has a precision of up to 6 digits, whereas double has a precision of up to 15 digits.
Reference data types are constructed from primitive data types. They mainly store memory addresses. Examples include classes, arrays, interfaces.
Variable is a named memory location which holds a data value of a particular data type.
A class variable of boolean type has the default value true. A char variable will have a default value ‘\u0000’. All reference types are initialized with null. Other numeric variables are initialized with 0.
The keyword final makes a variable constant.
Operators represent the operations being carried out in an expression.
Arithmetic operators allows us to perform arithmetic operations. +, -, *, / and % are arithmetic operators.
Unary operators are operators that act on one operand. Binary operators are operators that act on two operands.
The + operator with strings is used for concatenating strings.
The increment operator (++) adds 1 to the operand, whereas the decrement operator (–) subtracts 1 from the operand. Both of them have two variations: prefix (change then use) and postfix (use then change).
Relational operators determine the relation between different operands. <, <=, >, >=, == and != are relational operators.
Logical operators are also known as conditional operators. They allow us to construct complex decision making expressions. &&, || and ! are logical operators.
Conditional operator (?:) is also known as ternary operator because it requires three operands. It is a shorthand alternative for if-else statement.
The [] operator is used to declare arrays. The . (dot) operator allows us to access members of an object or a class. The () operator is used in methods. The (type) operator is used in type-casting. The new operator is used to create a new object for a class, or a new array.
Operator precedence determines the order in which expressions are evaluated. Associativity rules determine grouping of operands and operators in an expression with more than one operator of the same precedence.
An expression in Java is any valid combination of operators, constants and variables.
Arithmetic expressions can be either pure or mixed. In pure expressions, all the operands are of same type. In mixed expressions, the operands are of different data types.
Math class is in java.lang package that provides us with several mathematical functions. Following are some of the commonly used mathematical functions:
a) sin(x) returns the sine of the angle x in radians.
b) cos(x) returns the cosine of the angle x in radians.
c) tan(x) returns the tangent of the angle x in radians.
d) asin(y) returns the angle whose sine is y.
e) acos(y) returns the angle whose cosine is y.
f) atan(y) returns the angle whose tangent is y.
g) atan2(x, y) returns the angle whose tangent is x / y.
h) pow(x, y) returns x raised to y.
i) exp(x) returns e raised to x.
j) log(x) returns the natural logarithm of x.
k) sqrt(x) returns the square root of x.
l) ceil(x) returns the smallest whole number greater than or equal to x.
m) floor(x) returns the largest whole number less than or equal to x.
n) rint(x) returns the rounded value of x.
o) abs(x) returns the absolute value of x.
p) max(a, b) returns the greater value among a and b.
q) min(a, b) returns the smaller value among a and b.
Type casting is the process of converting one predefined type into another. Implicit type conversion is performed by the compiler without programmer’s intervention. Explicit type conversion is user-defined that forces an expression to be of specific type. The implicit type conversion in which data types are promoted is known as coercion. You cannot typecast boolean type to another primitive type and vice-versa.
A statement forms a complete unit of execution. They always terminate with a semicolon. Assignment expressions, using ++ or –, method calls, object creation expressions are various kinds of statements.
Without classes, there can be no objects, and without objects, no computation can take place. Thus, class forms the basis of all computation.
A class is declared using the keyword class. Java file can only have one public class. A class variable/static variable is one which is shared by all objects of that class type. Instance variable is one that is created for every object of the class. | https://www.happycompiler.com/java-fundamentals/ | CC-MAIN-2020-24 | refinedweb | 1,040 | 60.61 |
Section A (40 Marks)
Question 1
(a) Define the term Bytecode.
Bytecode is a machine instruction for a Java processor chip called Java Virtual Machine (JVM).
(b) What do you understand by type conversion? How is implicit conversion different from explicit conversion?
The process of converting one predefined type into another is called type conversion.
Implicit conversion takes place automatically.
Explicit conversion is done forcefully.
(c) Name two jump statements and their use.
The break statement is used to exit from a loop or a switch case.
The continue statement is used to move to the next iteration of the loop.
(d) What is Exception? Name two exception handling blocks.
An exception is an abnormal condition that arises in a code sequence at run-time.
Two exception handling blocks are try and catch.
(e) Write two advantages of using functions in a program.
Functions lets us reuse code.
Functions reduce code complexity.
Question 2
(a) State the purpose and return data type of the following String functions:
(i)
indexOf()
Purpose: to find the index of the first occurrence of a given character or string.
Return data type:
int
(ii)
compareTo()
Purpose: to compare the contents of two strings.
Return data type:
int
(b) What is the result stored in x, after evaluating the following expression?
int x = 5;
x = x++ * 2 + 3 * --x;
= 5 * 2 + 3 * 5
= 10 + 15
= 25.
(c) Differentiate between static and non-static data members.
The static data member is common for all the objects for a given class.
The non-static data members are created separately for each object for a given class.
(d) Write the difference between
length and
length().
The
length is used to find the size of an array.
The
length() is used to find the length of a string object.
(e) Differentiate between private and protected visibility modifiers.
The private data members can only be accessed from within the class.
The protected data members can be accessed from within the class as well as from the sub-classes.
Question 3
(a) What do you understand by the term data abstraction? Explain with an example.
The act of representing essential features without including background details is known as data abstraction.
Example:
class Rectangle{
private double length;
private double width;
public double area(){
return length * width;
}
}
In the above code, the inner details of the class are hidden through encapsulation, which leads to data abstraction.
(b) What will be the output of the following code?
(i)
int m = 2;
int n = 15;
for(int i = 1; i < 5; i++);
m++;
--n;
System.out.println("m = " + m);
System.out.println("n = " + n);
m = 3
n = 14
(ii)
char x = 'A';
int m;
m = (x == 'a')? 'A' : 'a';
System.out.println("m = " + m);
m = 97
(c) Analyze the following program segment and determine how many times the loop will be executed and what will be the output of the program segment.
int k = 1, i = 2;
while(++i < 6)
k *= i;
System.out.println(k);
The loop executes 3 times.
OUTPUT: 60
(d) Give the prototype of a function check which receives a character ch and an integer n and returns true or false.
boolean check(char ch, int n)
(e) State two features of a constructor.
Constructors have the same name as their class name.
Constructors don’t have any return type (not even void).
(f) Write a statement each to perform the following task on a string:
(i) Extract the second-last character of a word stored in the variable wd.
wd.charAt(wd.length() - 2)
(ii) Check if the second character of a string str is in uppercase.
Character.isUpperCase(str.charAt(1))
(g) What will the following function return when executed?
(i)
Math.max(-17, -19)
-17
(ii)
Math.ceil(7.8)
8.0
(h) (i) Why is an object called an instance of a class?
An object is called an instance of a class because an object materialize the abstraction represented by the class. Every object will have its own state, even though they have same structure and behavior.
(ii) What is the use of the keyword
import?
The import keyword is used to include packages and interfaces into the current file.
Section B (60 Marks)
Question 4
Write a program to perform binary search on a list of integers given below, to search for an element input by the user. If it is found, display the element along with its position, otherwise display the message “Search element not found”.
5, 7, 9, 11, 15, 20, 30, 45, 89, 97.
import java.io.*; class BinarySearch{ public static void main(String args[])throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); int a[] = {5, 7, 9, 11, 15, 20, 30, 45, 89, 97}; System.out.print("Element to search: "); int key = Integer.parseInt(br.readLine()); int low = 0; int high = a.length - 1; int mid = (low + high) / 2; while(low <= high){ if(key == a[mid]) break; else if(key < a[mid]) high = mid - 1; else low = mid + 1; mid = (low + high) / 2; } if(low > high) System.out.println("Search element not found."); else System.out.println(key + " found at index " + mid); } }
Question 5
Define a class Student as given below:
Data members/instance variables:
name: to store the student’s name.
age: to store the age.
m1, m2, m3: to store the marks in three subjects.
maximum: to store the highest marks among three subjects.
average: to store the average marks.
Member functions:
(i) A parameterized constructor to initialize the data members.
(ii) To accept the details of a student.
(iii) To compute the average and the maximum out of three marks.
(iv) To display all the details of the student.
Write a main() method to create an object of the class and call the above methods accordingly to enable the task.
import java.io.*; class Student{ private String name; private int age; private int m1; private int m2; private int m3; private int maximum; double average; public Student(String n, int a, int x, int y, int z){ name = n; age = a; m1 = x; m2 = y; m3 = z; maximum = m1; average = 0.0; } public void input()throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.print("Name: "); name = br.readLine(); System.out.print("Age: "); age = Integer.parseInt(br.readLine()); System.out.print("Marks 1: "); m1 = Integer.parseInt(br.readLine()); System.out.print("Marks 2: "); m2 = Integer.parseInt(br.readLine()); System.out.print("Marks 3: "); m3 = Integer.parseInt(br.readLine()); } public void compute(){ average = (m1 + m2 + m3) / 3.0; maximum = Math.max(m1, m2); maximum = Math.max(maximum, m3); } public void display(){ System.out.println("Name: " + name); System.out.println("Age: " + age); System.out.println("Marks 1: " + m1); System.out.println("Marks 2: " + m2); System.out.println("Marks 3: " + m3); System.out.println("Average: " + average); System.out.println("Maximum: " + maximum); } public static void main(String args[])throws IOException{ Student obj = new Student("", 0, 0, 0, 0); obj.input(); obj.compute(); obj.display(); } }
Question 6
Shasha Travels Pvt. Ltd. gives the following discount to its customers:
Ticket Amount Discount
Above Rs. 70000 18%
Rs. 55001 to Rs. 70000 16%
Rs. 35001 to Rs. 55000 12%
Rs. 25001 to Rs. 35000 10%
Less than Rs. 25001 2%
Write a program to input the name and ticket amount for the customer and calculate the discount amount and net amount to be paid. Display the output in the following format for each customer:
SNo. Name Ticket charges Discount Net amount
____ ______ _______________ _________ ____________
Assume that there are 15 customers, first customer is given the serial number 1, next customer 2… and so on.
import java.io.*; class Ticket{ public static void main(String args[])throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); String name[] = new String[15]; double amt[] = new double[15]; double dp[] = new double[15]; double net[] = new double[15]; for(int i = 0; i < 15; i++){ System.out.print("Name: "); name[i] = br.readLine(); System.out.print("Ticket amount: "); amt[i] = Double.parseDouble(br.readLine()); double da = 0.0; if(amt[i] > 70000) dp[i] = 18.0; else if(amt[i] > 55000 && amt[i] <= 70000) dp[i] = 16.0; else if(amt[i] > 35000 && amt[i] <= 55000) dp[i] = 12.0; else if(amt[i] > 25000 && amt[i] <= 35000) dp[i] = 10.0; else dp[i] = 2.0; da = dp[i] / 100.0 * amt[i]; net[i] = amt[i] - da; } System.out.println("SNo. Name Ticket Charges Discount Net Amount"); for(int i = 0; i < 15; i++) System.out.println((i + 1) + "\t" + name[i] + "\t" + amt[i] + "\t" + dp[i] + "%\t" + net[i]); } }
Question 7
Write a menu-driven program to accept a number and check and display whether it is a prime number or not, or an automorphic number or not. Use switch-case statement.
(a) Prime number: A number is said to be a prime number if it is divisible only by 1 and itself and not by any other number.
Example: 3, 5, 7, 11, 13, etc.
(b) Automorphic number: An automorphic number is the number which is contained in the last digit(s) of its square.
Example: 25 is an automorphic number as its square is 625 and 25 is present as the last two digits.
import java.io.*; class Menu{ public static void main(String args[])throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.println("1. Prime Number"); System.out.println("2. Automorphic Number"); System.out.print("Enter your choice: "); int choice = Integer.parseInt(br.readLine()); switch(choice){ case 1: System.out.print("N = "); int n = Integer.parseInt(br.readLine()); int f = 0; for(int i = 1; i <= n; i++){ if(n % i == 0) f++; } if(f == 2) System.out.println(n + " is prime."); else System.out.println(n + " is not prime."); break; case 2: System.out.print("N = "); n = Integer.parseInt(br.readLine()); int square = n * n; String a = Integer.toString(square); String b = Integer.toString(n); if(a.endsWith(b)) System.out.println(n + " is automorphic."); else System.out.println(n + " is not automorphic."); break; default: System.out.println("Invalid input!"); } } }
Question 8
Write a program to store six elements in an array P, and four elements in an array Q and produce a third array R, containing all elements of arrays P and Q. Display the resultant array.
Example:
P[] = {4, 6, 1, 2, 3, 10}
Q[] = {19, 23, 7, 8}
R[] = {4, 6, 1, 2, 3, 10, 19, 23, 7, 8}
import java.io.*; class MyList{ public static void main(String args[])throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); int p[] = new int[6]; int q[] = new int[4]; int r[] = new int[10]; System.out.println("Elements in 1st array:"); for(int i = 0; i < p.length; i++) p[i] = Integer.parseInt(br.readLine()); System.out.println("Elements in 2nd array:"); for(int i = 0; i < q.length; i++) q[i] = Integer.parseInt(br.readLine()); int index = 0; for(int i = 0; i < p.length; i++) r[index++] = p[i]; for(int i = 0; i < q.length; i++) r[index++] = q[i]; System.out.println("Resultant array:"); for(int i = 0; i < r.length; i++) System.out.print(r[i] + "\t"); } }
Question 9
Write a program to input a string in uppercase and print the frequency of each character.
Example:
INPUT: COMPUTER HARDWARE
OUTPUT:
Characters Frequency
A 2
C 1
D 1
E 2
H 1
M 1
O 1
P 1
R 3
T 1
U 1
W 1
import java.io.*; class Frequency{ public static void main(String args[])throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.print("String: "); String s = br.readLine().toUpperCase(); for(char ch = 'A'; ch <= 'Z'; ch++){ int count = 0; for(int i = 0; i < s.length(); i++){ char x = s.charAt(i); if(x == ch) count++; } if(count > 0) System.out.println(ch + "\t" + count); } } } | https://www.happycompiler.com/class-10-2010/ | CC-MAIN-2020-24 | refinedweb | 1,990 | 60.92 |
This post gives a short introduction to how to build a Google Wave robot. The goal is to build a robot that takes user input as a Groovy script, evaluates it and displays the result back in the wave. To avoid any misunderstanding, the robot itself is not written in Groovy but in Java (shame on me ;-) and uses GroovyShell to evaluate Groovy code. For the impatient (with a Google Wave account) who want to try the robot, its address is groovybot@appspot.com. See the resources section for source code.
Introduction to Google Wave
Google Wave is a communication and collaboration platform, where people can participate in conversations. It's kind of like an e-mail conversation, but in real-time like a chat, and much more interactive, with many more features. A conversation, consisting of one or more participants, is called a wave. A wave is a container for wavelets, which are threaded conversations, spawned from a wave. Wavelets are a container for messages, called blips. For more details, see the Google Wave documentation.
Google Wave Extensions
There are two types of Google Wave extensions - gadgets and robots. A gadget is basically a component containing some HTML and JavaScript, that can be included in a wave conversation. But as GroovyBot currently does not make use of gadgets (yet), I won't consider gadgets any further in this post. The other type of wave extension is robots, and as the name suggests, GroovyBot is of this type of extension. A robot is a certain kind of web application that can participate in a wave conversation just as a human participant. And just as a human participant, it can be added to a conversation and then react to certain kinds of events in the wave, e.g. when it's added to the wave or somebody submits a message (a blip). In response, it can then modify the content of the conversation, e.g. by appending content or replacing content.
Getting Started
Currently robots have to be deployed to the Google App Engine. Fortunately, it's really easy to get started with Google App Engine. First you'll need to sign up for an account. Then, with the Eclipse plugin for the App Engine, create an application, just like shown in this post. This will create a simple Servlet web application, which can immediately be deployed on App Engine. But we don't want a simple servlet application. Let's see how to build a wave robot.
Building a Robot
First of all, we need the Google Wave Java Client API, which consists of four jar files, and (for this particular robot) the Groovy jar groovy-all-1.6.5.jar. Place these in the war/lib directory and add them to the build path of your Eclipse project. Then we create the robot, which is simply a class extending AbstractRobotServlet. We'll name it GroovyBotServlet:
public class GroovyBotServlet extends AbstractRobotServlet {

    @Override
    public void processEvents(RobotMessageBundle bundle) {
        // TODO
    }
}
As said, a robot is basically a wave participant, which can react to certain events happening in the wave conversation. The event handling is done in the processEvents method. All information about the event and the context in which the event occurred, like the wavelet, the blip, the participants and so on, is contained in the RobotMessageBundle parameter. The robot servlet has to be mapped to the URL path /_wave/robot/jsonrpc in the web.xml.
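The mapping itself is ordinary servlet configuration. A minimal sketch (the servlet name below is an arbitrary choice; the URL pattern is fixed by the Wave robot protocol):

```xml
<servlet>
  <servlet-name>GroovyBot</servlet-name>
  <servlet-class>GroovyBotServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>GroovyBot</servlet-name>
  <url-pattern>/_wave/robot/jsonrpc</url-pattern>
</servlet-mapping>
```

Wave delivers all robot events as JSON-RPC posts to that one path, which is why a single servlet handles everything.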
The first thing we'd like to do is to show a simple message in the wave when the robot is added as a participant. The wave API provides a convenience method called wasSelfAdded to check for this particular event:
public void processEvents(RobotMessageBundle bundle) {
if (bundle.wasSelfAdded()) {
final Blip blip = bundle.getWavelet().appendBlip();
final TextView textView = blip.getDocument();
textView.append("GroovyBot added!");
}
}
So, if the robot was just added to the wave, we get the wavelet in which the event occurred, append a blip to it and add the message to the blip's document. The result is a new blip in the wave reading "GroovyBot added!".
This was easy, and to make the robot run a Groovy script and display the result back in the wave is not much harder.
Executing Groovy Scripts
GroovyBot should not execute all messages submitted to the wave as Groovy scripts. Instead, if the user wants the content of a blip to be executed as a script, the blip has to start with the prefix '!groovy', followed by the code.
The event we want to react to is a BLIP_SUBMITTED event. We have to tell wave that our robot is interested in events of this type by specifying this in the file war/_wave/capabilities.xml:
<?xml version="1.0" encoding="utf-8"?>
<w:robot xmlns:
  <w:capabilities>
    <w:capability
  </w:capabilities>
  <w:version>0.1</w:version>
</w:robot>
Then we modify the processEvents method to handle these events:
public void processEvents(final RobotMessageBundle bundle) {
    for (final Event e : bundle.getEvents()) {
        if (e.getType() == EventType.BLIP_SUBMITTED) {
            // Get input
            final Blip inputBlip = e.getBlip();
            final TextView inputDocument = inputBlip.getDocument();
            final String inputText = inputDocument.getText();
            if (inputText.startsWith("!groovy")) {
                final String script = inputText.substring("!groovy".length());
                // Execute script
                final GroovyShell shell = new GroovyShell();
                final StringBuilder result = new StringBuilder("Result: ");
                try {
                    result.append(shell.evaluate(script));
                } catch (final Exception ex) {
                    result.append(ex.toString()); // report the exception, not the event
                }
                // Create response
                final Blip blip = bundle.getWavelet().appendBlip();
                blip.getDocument().append(result.toString());
            }
        }
    }
}
This code first checks if there is an event of type BLIP_SUBMITTED. If so, it gets the blip's contents and checks for the prefix "!groovy". Then it removes the prefix, executes the remaining content as a Groovy script with GroovyShell, and appends a blip with the result, prefixed with "Result: ".
Conclusion
That's basically all it takes to build a simple Groovy robot. The code above shows a simplified version of GroovyBot which is currently deployed, though. It could be improved in many ways, for example System.out could be captured and also shown in the output. Also, the result could be displayed in another way for a better user experience, for example as a child blip, an inline blip or even as a gadget. But I ran into some issues trying this, for example inline blips were not displayed, or child blips could not be removed and re-added. There are some known bugs in the client API, but all in all, I had a really good experience with wave and its client API. There are many things left I'd like to do, for example syntax highlighting, using a gadget or making some predefined scripts available, e.g. some DSLs. If you've got some ideas, for example what would make a better user experience, you are welcome. You can start trying GroovyBot right now. Its address is groovybot@appspot.com. Why don't you start with this example, taken from the Practically Groovy series:
!groovy
def zip = "80020"
def addr = "{zip}"
def rss = new XmlSlurper().parse(addr)
def results = rss.channel.item.title
results << "\n" + rss.channel.item.condition.@text
results << "\nTemp: " + rss.channel.item.condition.@temp
println results
Resources
[1] Google Wave
[2] GroovyBot at GitHub (Source Code)
[3] Google Wave Java Robots Tutorial
[4] Google App Engine
3 comments:
Nick,
Very nice article. Exactly what I was looking for :)
However, I am struggling with it. All I get on my wave is this:
java.lang.NoClassDefFoundError: groovy/lang/GroovyShell
...
Caused by: java.lang.ClassNotFoundException: groovy.lang.GroovyShell
at com.google.appengine.runtime.Request.process-dd513586ce376f48(Request.java)
I had to add some stacktrace facility to your code in order to get a few more clues on what was going on, as I was not getting any blip from the robot :(
To me it seems like their java does not implement GroovyShell, but what I do not understand is, how come you got it to work. I have to be missing something.
Thank you for your advice,
David
@David: Seems to me that you've forgotten to include the Groovy jars, i.e. groovy-all.jar. Sorry, forgot to mention that in the post.
I guess it's worth mentioning that Grails has standard plugins allowing it to run in AppEngine. So you could write your robot as a Grails app if it suited you.
Note: This post uses the Couchbase Analytics Data Definition Language as of the version 5.5 preview release. For updates and information on breaking changes in newer versions, please refer to Changes to the Couchbase Analytics Service.
The application built for the Couchbase Connect Silicon Valley conference last fall incorporates dynamic N1QL queries, offline mobile, IoT sensors, ad hoc queries with analytics, cross-data center replication, failover, fuzzy text matching, and a slew of other features.
You can watch the keynote demonstration here, find out about the high-level architecture here, and watch a full walk-through of how to set up the demo here.
In this post, I’ll go through all the steps to configure and run the demo. We’ll do this through the command line. (Some steps require the cURL tool.)
Throughout I include direct links to video that goes over similar material.
Clone Repository and Set Working Directory
Configuring Couchbase Server
The demo project depends on new capabilities slated for Couchbase 5.5. As of this writing, 5.5 is in beta. Download and install Couchbase 5.5 for your platform here. Start a node running.
Preparing
The following assumes you're running on localhost. You can adjust to a remote host as necessary. The command executables are included in your Couchbase installation. You may want to add the tools directory to your command path. For example, on a Mac, you can do something like this.
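A sketch of what that could look like (the path below is the usual Mac install location, but verify it against your own installation):

```shell
# Put the Couchbase CLI tools on the PATH for this shell session
export PATH="/Applications/Couchbase Server.app/Contents/Resources/couchbase-core/bin:$PATH"
```

Add the same line to your shell profile if you want it to persist across sessions.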
Initialize the Node
To initialize the first node, we need three steps.
- Basic setup of the administrative user, services to run, and memory allocations
- Creation of the main application bucket
- Adding a separate user for role-based access control.
Run the following commands to perform these steps.
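A sketch of what those commands could look like with couchbase-cli (the bucket name health, the credentials, and the RAM sizes here are placeholder assumptions; verify the flag names against couchbase-cli's help for your version):

```shell
couchbase-cli cluster-init -c 127.0.0.1 \
  --cluster-username Administrator --cluster-password password \
  --services data,index,query,fts,eventing,analytics \
  --cluster-ramsize 256 --cluster-index-ramsize 256 --cluster-fts-ramsize 256 \
  --cluster-eventing-ramsize 256 --cluster-analytics-ramsize 1024

couchbase-cli bucket-create -c couchbase://127.0.0.1 -u Administrator -p password \
  --bucket health --bucket-type couchbase --bucket-ramsize 100

couchbase-cli user-manage -c couchbase://127.0.0.1 -u Administrator -p password \
  --set --rbac-username demo --rbac-password password \
  --roles bucket_full_access[health] --auth-domain local
```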
Most of the parameters for these commands should be fairly straight-forward to figure out.
Couchbase Server nodes can be dedicated to running a subset of all available services, part of what’s referred to as Multidimensional Scaling. Here, we’re configuring the node with all the services needed by the application.
The RAM allocations are all set to their minimum values. These aren’t typically what you’d use in production, but are plenty for data included here.
Note the cluster specification uses a custom URL,
couchbase://127.0.0.1. There are other ways to specify it, but this is the easiest. This is telling the commands to connect to Couchbase Server on the local machine.
After running these commands, it will typically take a few moments to warm the node up. Give it a short pause before proceeding with the next steps.
Related video:
Data
We generated realistic synthetic patient data using the Synthea tool (Copyright © 2018 The MITRE Corporation). Synthea is open source and free to use. The full demo at Connect 2017 used hundreds of millions of records. A dataset more suitable for running on a single machine has been included in the demo source repository. To load it, execute the following.
The options tell
cbimport to expect one record per line, and to auto-generate the document key from the id field of each record.
Related video:
Eventing Service
The Couchbase Eventing Service is one of the new features being added in release 5.5. It allows you to monitor changes in the database and run JavaScript functions in response.
In this application we use it to monitor incoming patient data. This lets us push the data to the web app, rather than relying on polling. This happens using the cURL capabilities built into N1QL.
To do this, we need to set up a JavaScript function that watches changes in the database. Currently the Eventing Service needs its own meta-data bucket as well. Configure this part with the following commands.
Here’s the actual JavaScript.
This function only processes “Observation” documents. It extracts a few elements we want to display on in the web console. It then uses a
cURL call to post that data to a REST endpoint on the web server.
Related video:
Query Indexes
The application relies on a number of N1QL queries. These would run if you define a primary index, but they would be slow. Instead, add three Global Secondary Indexes.
The data used in the application follows the FHIR specification. Records include a
resourceType field. For example, “Observation”, “Condition”, and “Practitioner” are all resource types. The first index optimizes queries against this field.
In the dashboard we show graphs of temperatures. The are recorded as “Observation” documents. The second index pulls out the main information we care about displaying. (Compare the fields here with what gets pushed by the Eventing code.) This makes lookups and retrieval of the relevant data fast, since everything we need is stored in the index.
The last index is built against “Location” records. This is used to allow us to connect patients with their nearest hospital.
Related video:
Analytics
The application can examine some case history data. This kind of free-form analysis is better suited to the new Couchbase Analytics Service (currently in preview).
The Analytics Service works by automatically importing data from the operational database buckets into its own special-purpose buckets. You then define what are known as shadow datasets. Finally, you must issue an instruction to connect the analytics to your operational data. After that, queries are done using SQL++, an superset of SQL similar to N1QL. You can read more in this tutorial.
All the configuration is done by issuing commands through the analytics query engine. Configure the necessary pieces for the demo with the following command.
Here are the actual commands being run.
You could do this just as well from the query interface on the Couchbase Server web console, using the lines exactly as shown above.
Related video:
Full-Text Search
Couchbase Server Full-Text Search allows language aware matching based both on indexed fields and the input search terms. It provides a great deal of power when searching free-form text. For this application, we use it to pull out records based on just that: entries in FHIR documents set aside for unstructured notes.
Configure the needed indexes as follows.
The index is too complicated to go through here in full. I’ll just touch on some main points.
Most importantly, we’re doing language aware analysis of the
note field of an “Observation” document, and the
reason field of an “Encounter” document. These are the fields where a care provider might enter free form text.
Other entries are there to pull data in simply to display or for use in faceting. Faceting lets a user restrict and refine searches. It’s a powerful way to allow structured drilling down into data.
Look for a future post going through the index and the FTS code for more details.
Related video:
cURL Access Restrictions
N1QL queries can include cURL-style network calls. We use these for pushing updates to the web app, and for obtaining geomapping data via Google.
Since these calls originate from a query service node, they have important security implications. Therefore, the capability is turned off by default.
We need to authorize calls to the Google endpoint and to an API on the web server. Do that with the following command.
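A sketch of that authorization step against the query settings REST endpoint (the credentials and the web server URL are placeholders):

```shell
curl -u Administrator:password -X POST \
  http://localhost:8091/settings/querySettings/curlWhitelist \
  -d '{"all_access": false, "allowed_urls": ["https://maps.googleapis.com", "http://localhost:8080"]}'
```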
Related video:
Sync Gateway
Sync Gateway provides the glue between Couchbase Server and the mobile application. It also provides some of the business logic. In this case, we simply need to connect to Couchbase Server, configure basic authentication, and create a channel for our primary user.
Run Sync Gateway directly as follows.
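A minimal Sync Gateway config of the shape described (bucket name, credentials, and the user's channel are placeholder assumptions):

```json
{
  "databases": {
    "health": {
      "server": "http://localhost:8091",
      "bucket": "health",
      "username": "admin",
      "password": "password",
      "users": {
        "demo_user": {
          "password": "password",
          "admin_channels": ["health"]
        }
      }
    }
  }
}
```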
We’re not using a filtered document import here, so this can take a little while the first time to create all the needed meta-data.
Related video:
Web Client and Server
Install Node.js. The server requires version 7 or higher. I recommend using nvm to manage Node versions if you have an existing installation. (The nvm installation guide can be found here.)
Configuring the Web Client
The web client code is under web/client. Under src/config in the client code, update serverURI in the index.js file to point to your web server. This is the host the web server is running on. This may be different from where you run Couchbase. By default it uses localhost, so if you plan to run everything on one machine you can just leave it as is.
Building the Client
Change directories to web/client. Install the Node packages.
You can run in one of two modes, development or production. Development allows easier debugging, and supports hot reloading, but is more complicated to set up. This mode requires running separate servers, one that serves the client pages and the other to expose the API we need.
Here I’m only going to describe running in production mode. For the client, this just means running a build.
When finished, this will copy the final client files over to a subdirectory of the server directory. The Node server will pull app content from there.
Configuring and Running the Web Server
Change directories to web/server. Install the Node packages.
The server has a few parameters that need setting. I included a package that will pull these either from environment variables or a file named .env in the server directory. The parameters break into two groups, those needed for Urban Airship push notifications, and those needed to connect to Couchbase.
The parameters are
- An Urban Airship application key
- An Urban Airship master secret
- The Couchbase Server cluster URL
- The username and password of a user on the CB Server cluster with appropriate privileges
- A URL for connecting to a Couchbase Server Analytics Service node.
This set of Bash shell commands will create a template for you.
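A sketch of such a template (the variable names here are assumptions; match them to whatever the server code actually reads):

```shell
# Write a .env template with placeholder values for the five parameters above
cat > .env <<'EOF'
UA_APP_KEY=your-urban-airship-app-key
UA_MASTER_SECRET=your-urban-airship-master-secret
COUCHBASE_CLUSTER=couchbase://localhost
COUCHBASE_USER=admin
COUCHBASE_PASSWORD=password
CBAS_URL=http://localhost:8095
EOF
```

Edit the placeholders before starting the server.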
Alternatively, you can just set environment variables. For example, you could leave the Urban Airship parameters out, and configure them like this instead.
If you don’t want to use the Urban Airship push notification feature, set the UA keys to something arbitrary.
You can now run the server.
Open a browser and navigate to localhost:8080. You should see the web console.
Related video:
Android Mobile Application
Open mobile/android/CBCHealth in Android Studio to build the application.
The Android mobile app uses Urban Airship for push notifications. If you want to include this feature, you must fill out the Urban Airship configuration with your own keys. See mobile/android/CBCHealth/app/src/main/assets/airshipconfig.properties.sample.
If you don’t want to include push notifications, remove the following line from the application’s
AndroidManifest.xml file.
By default, this builds a version of the application for use with the Android emulator. There’s a soft entry dialog for entering temperature readings.
If you want to use the actual patch with a real device, you need to make two changes.
- In mobile/android/CBCHealth/app/build.gradle change def parameters = ".EMULATOR" to def parameters = ".DEFAULT" (see mobile/android/CBCHealth/app/src/main/java/com/couchbase/mobile/app/launch/Parameters.java for definitions of these entries)
- In mobile/android/CBCHealth/app/src/main/resources/META-INF/services/com.couchbase.mobile.collectors.Collector change com.couchbase.mobile.collectors.temperature.ManualEntry to com.couchbase.mobile.collectors.temperature.RF430_TMPSNS_EVM
Related video:
Wrapping up
There’s a lot going on here. The purpose of this post is primarily to get you up and running with a full-featured app you can use to try various aspects of Couchbase out.
Look for deeper dives into the code and configuration coming up.
Postscript
Couchbase is open source and free to try out.
Find other resources on our developer portal.
You can post questions on our forums.
We actively participate on Stack Overflow.
Hit me up on Twitter with any questions, comments, topics you’d like to see, etc. @HodGreeley
One Comment
05 July 2012 15:47 [Source: ICIS news]
LONDON (ICIS)--Indorama Ventures has delayed a maintenance shutdown at its
"We are not going to take
The 198,000 tonne/year plant was scheduled to go down for maintenance in October 2012.
"For supply reasons to the market … and because of a lack of imports," the outage was postponed, Short added.
The scheduled shutdowns at Indorama's 155,000 tonne/year Workington PET plant in the UK and the 140,000 tonne/year PET plant in Wlocklawek, Poland, are on plan for September/October, Short said.
Indorama's 200,000 tonne/year expansion at its
"We are testing kit … It is not fully up yet," Short said.
The European PET market is in a state of flux after mixed pricing messages from upstream sectors resulted in some cheap PET offers disappearing this week, according to buyers and sellers.
Plant capacities are according to ICIS data.
($1 = €0.80)
Investors in Honeywell International Inc (Symbol: HON) saw new options begin trading today, for the August 2nd expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the HON options chain for the new August 2nd contracts and identified one put and one call contract of particular interest.
The put contract at the $160.00 strike price has a current bid of 84 cents. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $160.00, but will also collect the premium, putting the cost basis of the shares at $159.16 (before broker commissions). To an investor already interested in purchasing shares of HON, that could represent an attractive alternative to paying $172.60/share today.
Because the $160.00 strike represents an approximate 7% discount to the current trading price of the stock (in other words, it is out-of-the-money by that percentage), there is also the possibility that the put contract would expire worthless. Should that happen, the premium would represent a 0.52% return on the cash commitment, or 3.83% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for Honeywell International Inc, and highlighting in green where the $160.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $175.00 strike price has a current bid of $3.25. If an investor was to purchase shares of HON stock at the current price level of $172.60/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $175.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 3.27% if the stock gets called away at the August 2nd expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if HON shares really soar, which is why looking at the trailing twelve month trading history for Honeywell International Inc, as well as studying the business fundamentals becomes important. Below is a chart showing HON's trailing twelve month trading history, with the $175.00 strike highlighted in red:
Considering the fact that the $175.00 strike represents an approximate 1% premium to the current trading price of the stock (in other words, it is out-of-the-money by that percentage), there is also the possibility that the covered call contract would expire worthless, in which case the investor would keep both their shares of stock and the premium collected. Should that happen, the premium would represent a 1.88% boost of extra return to the investor, or 13.75% annualized, which we refer to as the YieldBoost.
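The percentages quoted above can be reproduced with a little arithmetic; the 50-day holding period below is an inference from the article's timing and the August 2nd expiration, not a figure taken from the text:

```python
# Reproduce the YieldBoost figures for the HON contracts discussed above.
price = 172.60                     # current share price
put_strike, put_bid = 160.00, 0.84
call_strike, call_bid = 175.00, 3.25
days = 50                          # assumed days until the Aug 2nd expiration

cost_basis = put_strike - put_bid                 # effective purchase price if assigned
put_yield = put_bid / put_strike                  # return on cash committed (~0.52%)
put_annualized = put_yield * 365 / days           # ~3.83% annualized

call_yield = call_bid / price                     # premium as % of share price (~1.88%)
call_annualized = call_yield * 365 / days         # ~13.75% annualized
total_if_called = (call_strike - price + call_bid) / price  # ~3.27% if called away

print(round(cost_basis, 2), round(put_annualized, 4),
      round(call_annualized, 4), round(total_if_called, 4))
```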
The implied volatility in the put contract example is 27%, while the implied volatility in the call contract example is 21%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $172.60) to be 18%. For more put and call options contract ideas worth looking at, visit StockOptionsChannel.com.
Top YieldBoost Calls of Stocks Analysts Like »
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
I decided to go through a short tutorial on how to build a basic movie database using ASP.NET's MVC framework. As I am reading it, I can already see the differences between both frameworks.
The first thing I notice is that a default project layout in ASP.NET does not have a concept of individual apps, which can be easily reused between multiple projects. The folder structure is geared more towards a monolithic application: a single folder contains all the models, controllers, and views. There is no separation of apps. This can be a mixed bag, depending on the kind of programmer you are; personally, I like to keep my ideas separate, and easily usable among several projects.
The next thing I notice is the way the models and databases are built. Like most frameworks on the market, you need to specifically create the database table, then separately create a model in your application to reference the database table. I prefer Django's method: all I need to do is tell Django where my database is (connection URI info), and build my entire database in the models.py for my particular app. Once I'm ready to create the tables in the database, all I need to run is manage.py syncdb, or if I'm using South, it will differ a little, but there is still minimal database work. It just works, a term Microsoft has been struggling with since they pushed out Windows '95. Django does indeed just work.
The template systems also differ, depending on how you build the templates in ASP.NET. The tutorial I am reading through uses Razor. It seems to embed actual source code within the template... If you are developing an application where you have a separate HTML/CSS designer, this is no good. A web designer should not need to know how to program or what objects are and how to embed links into the page. A web designer's purpose is to create a nicely polished website, with perhaps some basic logic or special tags. In Django we can add a URL link to a page using this:
<a class="nav-link" href="{% url 'contact-list' %}">Contacts</a>
For a designer to look at the above code, he/she can easily learn and understand what it does. It creates a URL to a specific page in the app called contact-list. Here is the same code in Razor:
@Html.ActionLink("Contacts", "Contacts")
When a web designer sees this, he will wonder how he can easily add a link using his fancy WYSIWYG editor, and which parameter is actually the target and which is what is displayed to the visitor. This format is not very web designer friendly, as it cannot be easily integrated into WYSIWYG editors. The Django alternative can be easily put into the link href using any editor, and it won't complain.
Finally we come to forms, not Web Forms, which is a completely separate platform (talk about confusing). ASP.NET MVC does not have a Form class which can be called into any template with ease. Instead the form is created directly in the HTML, along with the validation information. This is very messy coding, and does not properly separate the components for easy debugging and moving around. This also destroys the DRY principle; DRY stands for Don't Repeat Yourself. In Django, if you want to use a form multiple times, or in multiple applications, it is literally easy-peasy. You create a simple, straightforward class, and can either base the class on an existing database model, or write the fields from scratch. Once the class is done, send it off to the template as a context to be rendered. Django offers many rendering methods, to make it really easy to test forms and fine tune them. Here is a simple form in Django:
from django import forms
from django.forms.widgets import PasswordInput, Textarea

class ContactForm(forms.Form):
    first_name = forms.CharField()
    last_name = forms.CharField()
    password = forms.CharField(widget=PasswordInput)
    confirm_password = forms.CharField(widget=PasswordInput)
    profile = forms.CharField(widget=Textarea(attrs={'cols': '60', 'rows': '10'}))
    newsletter = forms.BooleanField(label='Receive Newsletter?', required=False)

    def clean_confirm_password(self):
        password1 = self.cleaned_data.get("password", "")
        password2 = self.cleaned_data["confirm_password"]
        if password1 != password2:
            raise forms.ValidationError("The two password fields didn't match.")
        return password2
This class can be added into any other application you create to provide the same functionality. The class also contains the validation of the 2 password fields. You may notice the usage of widgets; this allows one to further customize and reuse widgets between many apps with ease. Here is the simple template which can render the above:
<form method="post">
    {{form.as_p}}
    <input type="submit" value="Register" />
</form>
That's all there is to it, even a web designer should be able to understand what it does. If one does not pass as_p to form, it will render as a table. Just for kicks, here is the Django view which would send the form out as a context and check for validation errors:
from django.shortcuts import render

def contact_us(req):
    if req.method == 'POST':
        form = ContactForm(req.POST)
        if form.is_valid():
            # No validation errors.
            pass
    else:
        form = ContactForm()
    return render(req, "contact_form.html", {'form': form})
Pretty straightforward function. Here is the same contact form in ASP.NET:
[The ASP.NET markup sample, which interleaved inline C# render expressions with the HTML for each field and its validation message, was garbled; only the fragments <%= and <%} survive.]
This is pretty messy compared to the Django code, and this code doesn't actually validate much, besides the point that all fields are filled in. If you are a web designer reading this, which one of these would you prefer to work with? Remember the Django code can be expanded to this:
<form>
    <p>
        <label for="FirstName">First Name:</label>
        {{form.first_name}} {{form.first_name.errors}}
    </p>
    <p>
        <label for="LastName">Last Name:</label>
        {{form.last_name}} {{form.last_name.errors}}
    </p>
    <p>
        <label for="Password">Password:</label>
        {{form.password}} {{form.password.errors}}
    </p>
    <p>
        <label for="Password">Confirm Password:</label>
        {{form.confirm_password}} {{form.confirm_password.errors}}
    </p>
    <p>
        <label for="Profile">Profile:</label>
        {{form.profile}}
    </p>
    <p>
        {{form.newsletter}}
        <label for="ReceiveNewsletter" style="display:inline">Receive Newsletter?</label>
    </p>
    <p>
        <input type="submit" value="Register" />
    </p>
</form>
Django is built from the ground up for customization, as you can see. You can even sub-class the form to alter it for a different purpose. Django takes full advantage of object-oriented programming. My next comparison article will be about Web Forms, which is another development component of ASP.NET. I may not be able to compare it with Django, and if not, I will see what in the Python world will compare with Web Forms.
I will let the reader decide which framework they will choose for their next MVC/MTV web project.
numpy.prod() method in Python
In this article, we will learn about numpy.prod() method in Python.
Introduction:- numpy.prod() returns the product of an array with certain parameters defined.
Syntax:- numpy.prod(a, axis=None, dtype=None, out=None, keepdims=<bool_value>)
where:-
1. a = array_like — the input array.
2. axis = None, int or tuple of ints (optional) — specifies the axis or axes along which the product is performed.
None — computes the product of all the elements in the array.
int — the product is computed along that axis; if negative, it counts from the last to the first axis.
tuple of ints — the product is performed over all the axes given in the tuple.
3. dtype = dtype (optional) — the type of the returned array, as well as of the accumulator in which the elements are multiplied. By default the dtype of a is used, unless a has an integer dtype of less precision than the default platform integer, in which case the default platform integer is used.
4. out = ndarray (optional) — an alternative output array in which to place the result. It must have the same shape as the expected output, but the output values will be cast to another dtype if necessary.
5. keepdims = bool (optional) — if set to True, the axes which are reduced are left in the result as dimensions with size one, so the result will broadcast correctly against the input array. If the default value is passed, keepdims will not be passed through to the prod method of sub-classes of ndarray, but any non-default value will be.
Examples of numpy.prod() method in Python
- To begin with, let’s print the product of the 1d array:-
import numpy as np

a = [4,5]
b = np.prod(a) # product of a
print(b)
As a result, the following output is obtained: –
C:\Users\KIRA\Desktop>py 1d.py 20
- Likewise, print the product of a 2d array:-
import numpy as np

a = [[4,5],[2,3]]
b = np.prod(a) # product of all elements of the 2d matrix
print(b)
output:-
C:\Users\KIRA\Desktop>py 2d.py 120
- Similarly, print the product of a 2d array with axis 1, which gives one product per row:-

import numpy as np

a = [[4,5],[2,3]]
b = np.prod(a,axis=1) # axis=1 computes the product of each row
print(b)
output:-
C:\Users\KIRA\Desktop>py axis.py [20 6]
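- In the same way, axis=0 computes one product per column. This extra example is not from the original article, but follows the same pattern:-

```python
import numpy as np

a = [[4, 5], [2, 3]]
b = np.prod(a, axis=0)  # axis=0 computes the product down each column
print(b)
```

output:-

[ 8 15]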
- In addition, print the data type of the resultant array:-
import numpy as np

a = np.array([10,20,30],dtype= np.int32) # keeping int32 as data type
b = np.prod(a)
print(b.dtype)
output:-
C:\Users\KIRA\Desktop>py dtype.py int32
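- Also, here is a short example of the keepdims parameter described earlier (an extra illustration, not from the original article):-

```python
import numpy as np

a = np.array([[4, 5], [2, 3]])
b = np.prod(a, axis=1, keepdims=True)  # the reduced axis is kept with size one
print(b.shape)  # (2, 1), so b broadcasts correctly against a
print(b)
```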
The Numpy module has many other functions for programming too. | https://www.codespeedy.com/numpy-prod-method-in-python/ | CC-MAIN-2021-10 | refinedweb | 407 | 66.44 |
Just back from 6 nights at the Pacific Sutera and cannot think of one negative thing to say about it. We had a great stay in a Luxury Sea View room which was very clean and spacious with fantastic views and a large bathroom. The pool area was lovely, and again very clean, the restaurant large and airy with a huge selection at the breakfast buffet. There is a regular shuttle bus to KK centre for just 3RM return (about 60p), and practically on-site is the island boat trip center. Everywhere was clean and fresh, the staff were excellent, with a special mention for Nels - the Beverage Manager, he always made a point of saying hello and stopping for a chat. All in all we had a fantastic trip and really enjoyed our stay.
USEUNIT (C++ Builder) Equivalent...
Hi All.
I am trying to appropriate some code written in C++ builder.
within the main.cpp code it has a USEUNIT command, followed by a class, i.e.
USEUNIT("dsnotifydlg.cpp");
//____________________________________________________
WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    try
    {
        Application->Initialize();
        Application->CreateForm(__classid(TForm1), &Form1);
        Application->Run();
    }
    catch (exception &exception)
    {
        Application->ShowException(&exception);
    }
    return 0;
}
Could anyone tell me the equivalent to USEUNIT in Qt?
Thanks
J
I should add this code is from 15 years ago. I have seen comments suggesting it has been replaced with #include so I will try that and report back!
- mrjj Qt Champions 2017
Hi
You dont need it in Qt.
Also there is no concept of auto created forms so a plain main looks like
#include "mainwindow.h" #include <QApplication> int main(int argc, char *argv[]) { QApplication a(argc, argv); MainWindow w; // same as CreateForm (ish) w.show(); return a.exec(); // same as Application->Run(); }
- mrjj Qt Champions 2017
Yes, you can say useunit is now just plain .h includes.
Also notice that Widgets/controls can be local variables so this is perfectly valid.
(unlike builder where you MUST new them)
void showdialog()
{
    MyDialog dia;
    dia.exec(); // blocking so safe to use non-pointers
}
Yep I thought as much! Thanks for the help! | https://forum.qt.io/topic/89314/useunit-c-builder-equivalent | CC-MAIN-2018-43 | refinedweb | 216 | 58.99 |
Setup
try:
    import cirq
    import cirq_google as cg
except ImportError:
    print("installing cirq-google...")
    !pip install --quiet cirq-google --pre
    print("installed cirq-google.")

import cirq
import cirq_google as cg
Quantum Computing Service enables researchers to run quantum programs on Google's quantum processors. This notebook is a tutorial to get you started with the typical setup, using the open source Python framework Cirq, in the free cloud Jupyter notebook environment, Google Colab.
Access is currently restricted to those in an approved group, and you must be in that group before running this tutorial.
You can find more about running this in colaboratory in the Colab documentation or in our Cirq-specific guide to running in Colab. You can download this notebook from the GitHub repository.
Before you begin
- First, decide which project you will use the Quantum Computing Services from. All of your quantum programs and results will live under a project which you specify when creating and running these programs using Quantum Engine. You can use an existing project or create a new project. Learn more about creating a project.
- Log in and agree to Terms of Service.
- Follow this link to enable the Quantum Engine API in your Google Cloud Platform project.
After the API is enabled, you should be redirected to the Quantum Engine console and it should look like the following screenshot.
Enter your project id into the input text box below. To find your project id, click on the project menu in the blue bar at the top of the console. This will open a menu that displays your project name (e.g. "My project") and unique project id (e.g. my-project-1234). Enter the project id into the input below. (Help on finding your project id.)
Run the code in the next block (the one with the text box), which will prompt you to authenticate Google Cloud SDK to use your project. You can run the code by either clicking the play button (pointed by arrow below) or by selecting the block and pressing CTRL-ENTER. After running the block you will see a link which you should click. This will open a new browser window. Follow the authentication flow for this window. After you authenticate and allow access to your project, you will be given a string which you should enter into the text box that appears in the run area (and then press return). If you see "Authentication complete" you have done this step successfully. If this fails, make sure that you have cut and paste the string correctly (e.g. the clipboard button seems to not work for some browser/OS combinations).
# The Google Cloud Project id to use.
project_id = ""
processor_id = ""

from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook

# For real engine instances, delete 'virtual=True' below.
qcs_objects = get_qcs_objects_for_notebook(project_id, processor_id, virtual=True)

project_id = qcs_objects.project_id
processor_id = qcs_objects.processor_id
engine = qcs_objects.engine

if not qcs_objects.signed_in:
    print("ERROR: Please setup project_id in this cell or set the `GOOGLE_CLOUD_PROJECT` env var to your project id.")
    print("Using noisy simulator instead.")
Available processors: ['rainbow', 'weber'] Using processor: rainbow ERROR: Please setup project_id in this cell or set the `GOOGLE_CLOUD_PROJECT` env var to your project id. Using noisy simulator instead.
Authentication details: double-clicking on the project_id block above exposes the code that is run when you run that block. This code uses the colabtools auth module to ensure that Application Default Credentials are set, and then creates a variable colab_auth which can be used in Cirq to authenticate your calls to Quantum Computing Service.
If you are going to run code outside of colab and want to authenticate, see the below section on running from the command-line. (Note that this colab's automated outputs use a noisy simulator instead of authenticating to the production service).
Create a circuit
Now that you've enabled Quantum Computing Service and configured the notebook, let's create a basic program with Cirq. After reviewing the code, run this block to run a circuit, and print a circuit diagram and results. To learn more, refer to the Cirq overview and Cirq basics pages.
# Define a qubit at an arbitrary grid location.
qubit = cirq.GridQubit(0, 0)

# Create a circuit (qubits start in the |0> state).
circuit = cirq.Circuit(
    cirq.X(qubit),                     # NOT gate.
    cirq.measure(qubit, key='result')  # Measurement.
)

print("Circuit:")
print(circuit)
Circuit: (0, 0): ───X───M('result')───
Simulate the circuit using Cirq
Let's quickly use Cirq to simulate the circuit above.
# Simulate the circuit, repeating 1000 times.
print("Simulating circuit using Cirq...\n")

results = cirq.sample(circuit, repetitions=1000)

print("Measurement results:")
print(results)
Simulating circuit using Cirq...
Run on quantum hardware
Approved users can access quantum hardware in two modes. First, all approved users have access to a processor in "open-swim", which is a first-in-first-out queue with a fairness algorithm that balances jobs across users in the queue. Second, processors can be reserved in hourly blocks if the user is approved. You can learn more about the reservation system on the concepts page. We'll use the processor pacific in this demo.
Create a Quantum Engine client
Interactions with hardware are facilitated by the Quantum Computing Service. A client must first be initialized with your Google Cloud project to perform these interactions.
There are two ways to create the engine client:
cg.Engine(project_id=YOUR_PROJECT_ID)
or you can use:
engine = cg.get_engine()
View the processor's topology
Each processor has a set of available qubits laid out on a grid with limited couplings between qubits. The device specification can be printed to inspect the topology of a processor.
processor = engine.get_processor(processor_id)

# Print the device showing qubit connectivity.
device = processor.get_device()
print(device)
(3, 2) │ │ (4, 1)───(4, 2)───(4, 3) │ │ │ │ │ │ (5, 0)───(5, 1)───(5, 2)───(5, 3)───(5, 4) │ │ │ │ │ │ │ │ (6, 1)───(6, 2)───(6, 3)───(6, 4)───(6, 5) │ │ │ │ │ │ │ │ (7, 2)───(7, 3)───(7, 4)───(7, 5)───(7, 6) │ │ │ │ │ │ (8, 3)───(8, 4)───(8, 5) │ │ (9, 4)
Note that the qubit that we used for the simulation above, (0, 0), does not exist on the hardware. Since the grid of available qubits may change over time, we'll programmatically select a valid qubit by inspecting device.qubits. We then use the transform_qubits() method to remap the circuit onto that qubit.
In order to run on hardware, we must also ensure that the circuit only contains gates that the hardware supports. The basic gates used here are always available, so this circuit can be run without any further changes, but in general you may need to apply additional transformations before running arbitrary circuits. See the best practices guide for more information about running circuits on hardware.
valid_qubit = sorted(device.metadata.qubit_set)[0]

# Transform circuit to use an available hardware qubit.
hw_circuit = circuit.transform_qubits(lambda q: valid_qubit)

print(hw_circuit)
(3, 2): ───X───M('result')───
Create a job on the Quantum Engine
Cirq circuits are represented in the Quantum Computing Service as Programs. To run a Program, you must create a Job that specifies details about the execution, e.g. the processor to use and the number of times to repeat the experiment. This enables a single circuit to be run multiple times in different configurations. For a one-off use, these steps can be done together by using the engine.run_sweep utility to create both a Program and a Job.
A new Job will be scheduled on the requested hardware as it becomes available. The execution of your Job will likely be completed within a few seconds and the results will be displayed below. The output will include a link to the console, where you can view the status and results of your jobs.
print("Uploading program and scheduling job on the Quantum Engine...\n")

# Upload the program and submit jobs to run in one call.
job = processor.run_sweep(
    program=hw_circuit,
    repetitions=1000)

print("Scheduled. View the job at:"
      "programs/{}?&project={}".format(job.id(), project_id))

# Print out the results. This blocks until the results are returned.
results = job.results()
print("\nMeasurement results:")
for result in results:
    print(result)
Uploading program and scheduling job on the Quantum Engine... Scheduled. View the job at:
Running from the command line
If you are planning to access Quantum Computing Service from the command line, follow these instructions to get started. If you plan on executing all of your programs from an ipython notebook, you can skip this section.
Setup Cirq
Follow the Cirq Install page to install Cirq locally. We highly recommend that you set up a virtual environment for this installation to isolate your development stack from your overall system installations. Make sure to set up the virtual environment for Python 3 and not Python 2.
Setup Google Cloud authentication
In this quickstart we will authenticate using the gcloud command line tool. To do this, one must first install gcloud by following its installation instructions. We will authenticate using Application Default Credentials. To do this, simply run the following on your shell command line:
gcloud auth application-default login
This will open up a browser window or give you a link to a webpage you can navigate to in order to go through an authentication flow. Complete this using your Google account. After this command is run, credentials will be stored on your local machine. If at any point you want to revoke these credentials, you can run gcloud auth application-default revoke.
Write and run a short quantum program
Using your favorite IDE or editor, read and then paste the following hello_qubit program into a file called hello_qubit.py. Make sure to replace the 'your-project-id' string with the project id you created above.
import cirq
import cirq_google as cg


def example(engine, processor_id, project_id):
    """Hello qubit example run against a quantum processor."""
    # Define a qubit.
    qubit = cirq.GridQubit(5, 2)

    # Create a circuit (qubits start in the |0> state).
    circuit = cirq.Circuit(
        cirq.X(qubit)**0.5,                # Square root of NOT.
        cirq.measure(qubit, key='result')  # Measurement.
    )

    print("Uploading program and scheduling job on Quantum Engine...\n")

    processor = engine.get_processor(processor_id)

    # Upload the program and submit jobs to run in one call.
    job = processor.run_sweep(
        program=circuit,
        repetitions=1000)

    print("Scheduled. View the job at:"
          f"programs/{job.program().id()}/jobs/{job.id()}"
          f"/overview?project={project_id}")

    # Print out the results. This blocks until the results are returned.
    results = job.results()
    print("\nMeasurement results:")
    for result in results:
        print(result)


if __name__ == '__main__':
    virtual = False

    # Needed for colab only. Can remove in stand-alone.
    import os
    if 'COLAB_GPU' in os.environ:
        print("Running on Colab, using a virtual engine instead.")
        virtual = True

    # Set up QCS objects.
    from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook
    qcs_objects = get_qcs_objects_for_notebook(virtual=True)
    processor_id = qcs_objects.processor_id
    project_id = qcs_objects.project_id
    engine = qcs_objects.engine

    example(engine, processor_id, project_id)
Available processors: ['rainbow', 'weber'] Using processor: rainbow Uploading program and scheduling job on Quantum Engine... Scheduled. View the job at: Measurement results: result=0111111001000000000110000111011100111101100010011110010110001000110000001010100010110110111100000001011110011011100110101000111101111001111100001111111001100110000011111100101000101010010100011000011000111110101011111001100111010011010001101111101100101011101000110100011010001100000110000011100011001000001010100010001100011100101111110011111111010000011110111011011011110001110000110101101001101000001001100011010100001110111111111010101011101110011111001110001111100101101001010101011101010111000111111100000010001010110101110000100111001010011000100110010001001100100000111101010100001001010001001000000001011101001010110010010111100001011000111110011000111100100111110010001101010001000110011011001111011010010000110001100101100010010011101111110000110110110110001101011100101010110000101111100111010101100010000100010100000100111001110110101011111110011011001011111111101111111110110111001100110011110110010000010010110101111001100101001001101001101111001110110110010011110010000101011000000111
You should then be able to run this program from your command line using:
python hello_qubit.py
You should be able to see output like that shown above.
Next steps
- Use this template colab as a base for your own explorations.
- Explore best practices for getting circuits to run on hardware. | https://quantumai.google/cirq/tutorials/google/start | CC-MAIN-2022-40 | refinedweb | 1,875 | 58.08 |
Task (Elixir v1.14.0-dev)
Conveniences for spawning and awaiting tasks. We will explore the most common usage scenarios next.
Async and await

Rather than using async and await directly, you will often want to use supervised tasks, described next.
Dynamically supervised tasks
And now you can use async/await by passing the name of the supervisor instead of the pid:
Task.Supervisor.async(MyApp.TaskSupervisor, fn ->
  # Do something
end)
|> Task.await()
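The example above assumes a Task.Supervisor is already running; a minimal sketch of starting one under a supervision tree (the name MyApp.TaskSupervisor is illustrative) looks like:

```elixir
children = [
  {Task.Supervisor, name: MyApp.TaskSupervisor}
]

Supervisor.start_link(children, strategy: :one_for_one)
```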
We encourage developers to rely on supervised tasks as much as possible. Supervised tasks improve the visibility of how many tasks are running at a given moment and enable a huge variety of patterns that give you explicit control on how to handle the results, errors, and timeouts. Here is a summary:
- Using Task.Supervisor.start_child/2 allows you to start a fire-and-forget task when you don't care about its results or whether it completes successfully.

- Using Task.Supervisor.async/2 + Task.await/2 allows you to execute tasks concurrently and retrieve their results. If a task fails, the caller will also fail.

- Using Task.Supervisor.async_nolink/2 + Task.yield/2 + Task.shutdown/2 allows you to execute tasks concurrently and retrieve their results or the reason they failed within a given time frame. If a task fails, the caller won't fail; you will receive the error reason either on yield or shutdown.
Furthermore, the supervisor guarantees all tasks terminate, within a configurable shutdown period, when your application shuts down. See the Task.Supervisor module for details on the supported operations.
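As a sketch of the async_nolink + yield + shutdown pattern summarized above (the supervisor name, the do_work/0 function, and the timeout are all illustrative, and a Task.Supervisor named MyApp.TaskSupervisor is assumed to be running):

```elixir
# Start a task we are not linked to; a crash in the task will not crash us.
task = Task.Supervisor.async_nolink(MyApp.TaskSupervisor, fn -> do_work() end)

# Wait up to five seconds, then shut the task down if it has not replied.
case Task.yield(task, 5_000) || Task.shutdown(task) do
  {:ok, result} -> {:done, result}
  {:exit, reason} -> {:failed, reason}
  nil -> :timed_out
end
```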
Distributed tasks
With
Task.Supervisor, it is easy to dynamically start tasks across nodes:
# On the remote node named :remote@local
Task.Supervisor.start_link(name: MyApp.DistSupervisor)

# On the client
supervisor = {MyApp.DistSupervisor, :remote@local}
Task.Supervisor.async(supervisor, MyMod, :my_fun, [arg1, arg2, arg3])
Note that, when working with distributed tasks, one should use the Task.Supervisor.async/5 function that expects explicit module, function, and arguments, instead of Task.Supervisor.async/3 that works with anonymous functions. That's because anonymous functions expect the same module version to exist on all involved nodes. Check the Agent module documentation for more information on distributed processes, as the limitations described there apply to the whole ecosystem.
Statically supervised tasks
The Task module implements the child_spec/1 function, which allows it to be started directly under a regular Supervisor, instead of a Task.Supervisor, by passing a tuple with a function to run:
Supervisor.start_link([
  {Task, fn -> :some_work end}
], strategy: :one_for_one)
This is often useful when you need to execute some steps while setting up your supervision tree. For example: to warm up caches, log the initialization status, and such.
If you don't want to put the Task code directly under the Supervisor, you can wrap the Task in its own module, similar to how you would do with a GenServer or an Agent.

By default, the functions Task.start/1 and Task.start_link/1 are for fire-and-forget tasks, where you don't care about the results or whether the task completes successfully.
Ancestor and Caller Tracking

This means that, although your code is the one invoking the task, the actual ancestor of the task is the supervisor that effectively starts it.
If a task crashes, the callers field is included as part of the log message metadata under the :callers key.
Summary
Functions
Awaits replies from multiple tasks and returns them.
Returns a specification to start a task under a supervisor.
Starts a task that immediately completes with the given result.
Ignores an existing task.
Unlinks and shuts down the task, and then checks for a reply.
Starts a task.
Starts a task.
Starts a task as part of a supervision tree with the given fun.

Starts a task as part of a supervision tree with the given module, function, and args.
Temporarily blocks the caller process waiting for a task reply.
Yields to multiple tasks in the given time interval.
Types
The Task type. See %Task{} for information about each field of the structure.
Functions
%Task{} (struct)
The Task struct.
It contains these fields:
async(fun)
Starts a task that must be awaited on.
fun must be a zero-arity anonymous function. This function spawns a process that is linked to and monitored by the caller process. A Task struct is returned containing the relevant information. Developers must eventually call Task.await/2 or Task.yield/2 followed by Task.shutdown/2 on the returned task.

Read the Task module documentation for more information about the general usage of async tasks.
Linking

The spawned task is linked to the caller process, so it terminates if the caller process dies.
Metadata

The task created with this function stores :erlang.apply/2 in its :mfa metadata field, which is used internally to apply the anonymous function. Use async/3 if you want another function to be used as metadata.
async(module, function_name, args)
Starts a task that must be awaited on.
Similar to async/1 except the function to be started is specified by the given module, function_name, and args.

The module, function_name, and its arity are stored as a tuple in the :mfa field for reflection purposes.
async_stream(enumerable, fun, options \\ []) (since 1.4.0)
The tasks are spawned by the caller process, similarly to async/1.
Example

async_stream(enumerable, module, function_name, args, options \\ []) (since 1.4.0)
Those tasks will be linked to an intermediate process that is then linked to the caller process. This means a failure in a task terminates the caller process and a failure in the caller process terminates all tasks.

When streamed, each task will emit {:ok, value} upon successful completion or {:exit, reason} if the caller is trapping exits. It's possible to have {:exit, {element, reason}} for exits using the :zip_input_on_exit option.

To ensure errors in the tasks do not terminate the caller process, consider using Task.Supervisor.async_stream_nolink/6 to start tasks that are not linked to the caller process.
Options
- :max_concurrency - sets the maximum number of tasks to run at the same time. Defaults to System.schedulers_online/0.

- :ordered - whether the results should be returned in the same order as the input stream. When the output is ordered, Elixir may need to buffer results to emit them in the original order. Setting this option to false disables the need to buffer at the cost of removing ordering. This is also useful when you're using the tasks only for the side effects. Note that regardless of what :ordered is set to, the tasks will process asynchronously. If you need to process elements in order, consider using Enum.map/2 or Enum.each/2 instead. Defaults to true.

- :timeout - the maximum amount of time (in milliseconds or :infinity) each task is allowed to execute for. Defaults to 5000.

- :on_timeout - what to do when a task times out. The possible values are:
  - :exit (default) - the caller (the process that spawned the tasks) exits.
  - :kill_task - the task that timed out is killed. The value emitted for that task is {:exit, :timeout}.

- :zip_input_on_exit - (since v1.14.0) adds the original input to :exit tuples. The value emitted for that task is {:exit, {input, reason}}, where input is the collection element that caused an exit during processing. Defaults to false.
Example
First async tasks to complete
You can also use async_stream/3 to execute M tasks and find the first N tasks to complete. For example:
[
  &heavy_call_1/0,
  &heavy_call_2/0,
  &heavy_call_3/0
]
|> Task.async_stream(fn fun -> fun.() end, ordered: false, max_concurrency: 3)
|> Stream.filter(&match?({:ok, _}, &1))
|> Enum.take(2)
In the example above, we are executing three tasks and waiting for the first 2 to complete. We use Stream.filter/2 to restrict ourselves only to successfully completed tasks, and then use Enum.take/2 to retrieve N items. Note it is important to set both ordered: false and max_concurrency: M, where M is the number of tasks, to make sure all calls execute concurrently.
Attention: unbound async + take
If you want to potentially process a high number of items and keep only part of the results, you may end-up processing more items than desired. Let's see an example:
1..100
|> Task.async_stream(fn i ->
  Process.sleep(100)
  IO.puts(to_string(i))
end)
|> Enum.take(10)
Running the example above in a machine with 8 cores will process 16 items, even though you want only 10 elements, since async_stream/3 processes items concurrently. That's because it will process 8 elements at once. Then all 8 elements complete at roughly the same time, causing 8 more elements to be kicked off for processing. Out of these extra 8, only 2 will be used, and the rest will be terminated.
Depending on the problem, you can filter or limit the number of elements upfront:
1..100
|> Stream.take(10)
|> Task.async_stream(fn i ->
  Process.sleep(100)
  IO.puts(to_string(i))
end)
|> Enum.to_list()
In other cases, you likely want to tweak :max_concurrency to limit how many elements may be over-processed, at the cost of reducing concurrency. You can also set the number of elements to take to be a multiple of :max_concurrency. For instance, setting max_concurrency: 5 in the example above.
await(task, timeout \\ 5000)
Awaits a task reply and returns it.
In case the task process dies, the caller process will exit with the same reason as the task.
A timeout, in milliseconds or :infinity, can be given with a default value of 5000. If the timeout is exceeded, then the caller process will exit. If the task process is linked to the caller process, which is the case when a task is started with async, then the task process will also exit. If the task process is trapping exits or is not linked to the caller process, it will continue to run.
Examples

iex> task = Task.async(fn -> 1 + 1 end)
iex> Task.await(task)
2
Compatibility with OTP behaviours
It is not recommended to await a long-running task inside an OTP behaviour such as GenServer. Instead, you should match on the message coming from a task inside your GenServer.handle_info/2 callback.

A GenServer will receive two messages on handle_info/2:
- {ref, result} - the reply message, where ref is the monitor reference returned by task.ref and result is the task result

- {:DOWN, ref, :process, pid, reason} - since all tasks are also monitored, you will also receive the :DOWN message delivered by Process.monitor/1. If you receive the :DOWN message without a reply, it means the task crashed
Another consideration to have in mind is that tasks started by Task.async/1 are always linked to their callers and you may not want the GenServer to crash if the task crashes. Therefore, it is preferable to instead use Task.Supervisor.async_nolink/3 inside OTP behaviours. For completeness, here is an example of a GenServer that starts tasks and handles their results:
defmodule GenServerTaskExample do
  use GenServer

  def start_link(opts) do
    GenServer.start_link(__MODULE__, :ok, opts)
  end

  def init(_opts) do
    # We will keep all running tasks in a map
    {:ok, %{tasks: %{}}}
  end

  # Imagine we invoke a task from the GenServer to access a URL...
  def handle_call(:some_message, _from, state) do
    url = ...
    task = Task.Supervisor.async_nolink(MyApp.TaskSupervisor, fn -> fetch_url(url) end)

    # After we start the task, we store its reference and the url it is fetching
    state = put_in(state.tasks[task.ref], url)
    {:reply, :ok, state}
  end

  # If the task succeeds...
  def handle_info({ref, result}, state) do
    # The task succeeded so we can cancel the monitoring and discard the DOWN message
    Process.demonitor(ref, [:flush])
    {url, state} = pop_in(state.tasks[ref])
    IO.puts "Got #{inspect(result)} for URL #{inspect url}"
    {:noreply, state}
  end

  # If the task fails...
  def handle_info({:DOWN, ref, _, _, reason}, state) do
    {url, state} = pop_in(state.tasks[ref])
    IO.puts "URL #{inspect url} failed with reason #{inspect(reason)}"
    {:noreply, state}
  end
end
With the server defined, you will want to start the task supervisor above and the GenServer in your supervision tree:
children = [
  {Task.Supervisor, name: MyApp.TaskSupervisor},
  {GenServerTaskExample, name: MyApp.GenServerTaskExample}
]

Supervisor.start_link(children, strategy: :one_for_one)
await_many(tasks, timeout \\ 5000) (since 1.11.0)
Awaits replies from multiple tasks and returns them.
This function receives a list of tasks and waits for their replies in the given time interval. It returns a list of the results, in the same order as the tasks supplied in the tasks input argument.

If any of the task processes dies, the caller process will exit with the same reason as that task.

A timeout, in milliseconds or :infinity, can be given with a default value of 5000. If the timeout is exceeded, then the caller process will exit.

Any task processes that are linked to the caller process (which is the case when a task is started with async) will also exit. Any task processes that are trapping exits or not linked to the caller process will continue to run.

This function assumes the tasks' monitors are still active or the monitor's :DOWN message is in the message queue. If any tasks have been demonitored, or the message already received, this function will wait for the duration of the timeout.

This function can only be called once for any given task. If you want to be able to check multiple times if a long-running task has finished its computation, use yield_many/2 instead.
Compatibility with OTP behaviours
It is not recommended to await long-running tasks inside an OTP behaviour such as GenServer. See await/2 for more information.
Examples
iex> tasks = [
...>   Task.async(fn -> 1 + 1 end),
...>   Task.async(fn -> 2 + 3 end)
...> ]
iex> Task.await_many(tasks)
[2, 5]
child_spec(arg) (since 1.5.0)
completed(result) (since 1.13.0)
Starts a task that immediately completes with the given result.

Unlike async/1, this task does not spawn a linked process. It can be awaited or yielded like any other task.
Usage
In some cases, it is useful to create a "completed" task that represents a task that has already run and generated a result. For example, when processing data you may be able to determine that certain inputs are invalid before dispatching them for further processing:
def process(data) do
  tasks =
    for entry <- data do
      if invalid_input?(entry) do
        Task.completed({:error, :invalid_input})
      else
        Task.async(fn -> further_process(entry) end)
      end
    end

  Task.await_many(tasks)
end
In many cases, Task.completed/1 may be avoided in favor of returning the result directly. You should generally only require this variant when working with mixed asynchrony, when a group of inputs will be handled partially synchronously and partially asynchronously.
ignore(task) (since 1.13.0)
Ignores an existing task.
This means the task will continue running, but it will be unlinked and you can no longer yield, await or shut it down.
Returns {:ok, reply} if the reply is received before ignoring the task, {:exit, reason} if the task died before ignoring it, otherwise nil.
Important: avoid using Task.async/1,3 and then immediately ignoring the task. If you want to start tasks whose results you don't care about, use Task.Supervisor.start_child/2 instead.
Requires Erlang/OTP 24+.
shutdown(task, shutdown \\ 5000)
Unlinks and shuts down the task, and then checks for a reply.
Returns
{:ok, reply} if the reply is received while shutting down the task,
{:exit, reason} if the task died, otherwise
nil. Once shut down,
you can no longer await or yield it.

async(fun)

Starts a task.

fun must be a zero-arity anonymous function.

async(module, function_name, args)

Starts a task.

start_link(fun)
Starts a task as part of a supervision tree with the given
fun.
fun must be a zero-arity anonymous function.
This is used to start a statically supervised task under a supervision tree.
start_link(module, function, args)
Starts a task as part of a supervision tree with the given
module,
function, and
args.
This is used to start a statically supervised task under a supervision tree.
yield(task, timeout \\ 5000)
Temporarily blocks the caller waiting for a task reply. Returns {:exit, reason} if at least one of the conditions below apply:
- the task process exited with the reason
:normal
- the task isn't linked to the caller (the task was started with
Task.Supervisor.async_nolink/2 or
Task.Supervisor.async_nolink/4)
- the caller is trapping exits
If you intend to check on the task but leave it running after the timeout,
you can chain this together with
ignore/1, like so:
case Task.yield(task, timeout) || Task.ignore(task)

yield_many(tasks, timeout \\ 5000)
Example
Task.yield_many/2 allows developers to spawn multiple tasks
and retrieve the results received in a given timeframe.
If we combine it with
Task.shutdown/2 (or
Task.ignore/1),
it allows us to gather those results and cancel (or ignore) the tasks that have not replied in time.
I have installed Symfony2 on a XAMPP server with PHP 5.3.8 and everything works OK (the PHP, the Symfony demo page).

I tried to create my own helloWorld, as the tutorial says:
could not open input file : app/console
To execute command you should move to root directory of your project in terminal/CMD.
Please note that in
version 2.5 some changes has been made so command will not work with
app/console
Note: From 2.5
app/console is replaced by
bin/console.
Please check here for the changes. Also check this for more details about the difference.
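To make the version difference concrete, here is a small, hypothetical shell helper (the function name `find_console` is my own, not part of Symfony) that picks whichever console script the project actually ships:

```shell
# Run this from the project root. Newer Symfony projects ship
# bin/console, older ones ship app/console; pick whichever exists.
find_console() {
  if [ -f bin/console ]; then
    echo bin/console
  elif [ -f app/console ]; then
    echo app/console
  else
    echo "no console script found - are you in the project root?" >&2
    return 1
  fi
}

# Usage:
#   php "$(find_console)" generate:bundle --namespace=Acme/HelloBundle --format=yml
```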
csPen Class Reference
A pen implementation with which you can do various kinds of 2D rendering. More...
#include <cstool/pen.h>
Detailed Description
A pen implementation with which you can do various kinds of 2D rendering.
Definition at line 152 of file pen.h.
Member Function Documentation
Adds a texture coordinate.
Worker, adds thick line points.
A thick point is created when the pen width is greater than 1. It uses polygons to simulate thick lines.
Adds a vertex.
Clears the given flag.
Clears the current transform, resets to identity.
Clip a line to the given canvas size.
Returns true if the line is not empty.
Draws a single line.
Draws a series of lines.
Worker, draws the mesh.
Draws a mitered rectangle.
The miter value should be between 0.0 and 1.0, and determines how much of the corner is mitered off and beveled.
Draws a single point.
Draws a rectangle.
Draws a rounded rectangle.
The roundness value should be between 0.0 and 1.0, and determines how much of the corner is rounded off.
Draws a triangle around the given vertices.
Pops the transform stack.
The top of the stack becomes the current transform.
Pushes the current transform onto the stack.
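To illustrate the push/pop semantics described above, here is a small self-contained C++ sketch. This is illustrative code of my own, not part of Crystal Space; the real csPen transform is a full matrix, reduced here to a plain 2D translation:

```cpp
#include <stack>

// Toy stand-in for csPen's transform stack: PushTransform saves the
// current transform, PopTransform makes the saved one current again.
struct Transform { float dx = 0.0f, dy = 0.0f; };

class SketchPen {
  Transform current;            // the current transform
  std::stack<Transform> saved;  // the transform stack
public:
  void Translate(float x, float y) { current.dx += x; current.dy += y; }
  void PushTransform() { saved.push(current); }
  void PopTransform() {
    if (!saved.empty()) { current = saved.top(); saved.pop(); }
  }
  Transform Current() const { return current; }
};
```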
Rotates by the given angle.
Set an active pen cache.
After this call all drawing calls to this pen will be cached until you clear the pen cache again. At that point the pen cache can be used to do cached drawings. Note that the pen cache does not work for text. Also note that the pen cache can be updated later with new commands if you set it again on a pen. It is also safe to use the same pen cache with different pens.
Definition at line 280 of file pen.h.
Worker, sets up the pen to do auto texturing.
Sets the current color.
Sets the given flag.
Sets the given mix (blending) mode.
Sets the origin of the coordinate system.
Sets the width of the pen for line drawing.
Sets the texture handle.
Set a transform.
Worker, sets up the mesh with the vertices, color and other information.
Initializes our working objects.
Swaps the current color and the alternate color.
Translates by the given vector.
Writes text in the given font at the given location.
Writes text in the given font, in the given box.
The alignment specified in h_align and v_align determine how it should be aligned.
Writes multiple lines of text in the given font at the given location.
Writes multiple lines of text in the given font, in the given box.
The alignment specified in h_align and v_align determine how it should be aligned.
The documentation for this class was generated from the following file:
Generated for Crystal Space 2.1 by doxygen 1.6.1 | http://www.crystalspace3d.org/docs/online/api/classcsPen.html | CC-MAIN-2016-30 | refinedweb | 476 | 80.07 |
Let’s take a JSON file called properties.json that contains the following data:
{
    "name": "JSONParser",
    "version": "1.0.0",
    "description": "Parse some JSON data",
    "keywords": [
        "json",
        "parse",
        "request"
    ]
}
This file contains a JSON object that also contains a JSON array. To keep things simple in this example, we are only going to print it out.
It is now time to create our Java project. If you keep up with my other tutorials, you’ll know we aren’t going to be using an IDE such as Eclipse. Instead we are going to be using a text editor and a command prompt. Our project structure is going to look like the following:
JSONParser
    src
        org
            json
                [ JSON Library Files Here ]
        jsonparser
            MainDriver.java
    lib
    build.xml
    properties.json
If you’re a Java veteran, you’ll know right away after seeing build.xml that we’ll be using Apache Ant to build our project. Feel free to change it to your preference.
First we want to download the JSON library files, and place them all in the src/org/json directory. With this done, open your MainDriver.java file and add the following two functions:
package jsonparser;

import java.io.*;
import org.json.*;

public class MainDriver {

    public static void main(String[] args) {

    }

    public static String readFile(String filename) {

    }

}
The
readFile(String filename) function will read a text file and return it as a single string. The
main(String[] args) function will be where we fiddle with all the JSON data.
Using this as a reference, make your
readFile(String filename) function look like the following:
public static String readFile(String filename) {
    String result = "";
    try {
        BufferedReader br = new BufferedReader(new FileReader(filename));
        StringBuilder sb = new StringBuilder();
        String line = br.readLine();
        while (line != null) {
            sb.append(line);
            line = br.readLine();
        }
        result = sb.toString();
    } catch(Exception e) {
        e.printStackTrace();
    }
    return result;
}
Again, the above code will only read a text file and return it as string data.
Now take a look at the
main(String[] args) function which will take care of various JSON work:
public static void main(String[] args) {
    String jsonData = readFile("properties.json");
    JSONObject jobj = new JSONObject(jsonData);
    JSONArray jarr = new JSONArray(jobj.getJSONArray("keywords").toString());
    System.out.println("Name: " + jobj.getString("name"));
    for(int i = 0; i < jarr.length(); i++) {
        System.out.println("Keyword: " + jarr.getString(i));
    }
}
In the above code you can see that we’ve gotten our JSON string, converted the string into a
JSONObject and then extracted the keywords array. Finally the data we obtain is printed out. There are many other great JSON functions to make use of, all of which can be found in the Javadocs for the library.
If you’re not sure how to make an Apache Ant build file, just use the following for our project:
<project>
    <property name="lib.dir" value="libs" />
    <property name="jar.dir" value="build/jar" />
    <property name="jar.name" value="JSONParser.jar" />
    <target name="build">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
        <mkdir dir="${jar.dir}"/>
        <jar destfile="${jar.dir}/${jar.name}" basedir="build/classes">
            <manifest>
                <attribute name="Main-Class" value="jsonparser.MainDriver"/>
            </manifest>
        </jar>
    </target>
    <target name="run">
        <java jar="${jar.dir}/${jar.name}" fork="true"/>
    </target>
    <target name="buildandrun" depends="build, run" />
</project>
To test everything out, just run
ant buildandrun from the command line or terminal. It will build a Java Archive (JAR) file and run it.
This is just one of many ways to parse JSON data using Java. I chose to use this library set because it closely resembles how you would parse JSON in native Android. Another common way which I might explain in a later tutorial is with JSON Simple. | https://www.thepolyglotdeveloper.com/2015/03/parse-json-file-java/ | CC-MAIN-2022-21 | refinedweb | 582 | 66.74 |
inotify_add_watch()
Add or update a watch for filesystem events associated with a path
Synopsis:
#include <sys/inotify.h> int inotify_add_watch( int fd, const char *pathname, int mask );
Since:
BlackBerry 10.0.0
Arguments:
- fd
- A valid file descriptor returned by inotify_init().
- path
- The path whose filesystem events you want to monitor.
- mask
- A bitmask that specifies the events you want to watch. The bits include:
- IN_ACCESS — the file was read.
- IN_MODIFY — the file was written to.
- IN_ATTRIB — the attributes of the file changed.
- IN_CLOSE_WRITE — a file that was opened for writing was closed.
- IN_CLOSE_NOWRITE — a file that was opened not for writing was closed.
- IN_CLOSE — the same as (IN_CLOSE_WRITE | IN_CLOSE_NOWRITE).
- IN_OPEN — the file was opened.
- IN_MOVED_FROM — the file was moved or renamed away from the item being watched.
- IN_MOVED_TO — the file was moved or renamed to the item being watched.
- IN_MOVE — the same as (IN_MOVED_FROM | IN_MOVED_TO).
- IN_CREATE — a file was created in a watched directory.
- IN_DELETE — a file or directory was deleted.
- IN_DELETE_SELF — the file or directory being monitored was deleted.
- IN_MOVE_SELF — the file or directory being monitored was moved or renamed.
IN_ALL_EVENTS is a bitwise-OR of all the event types.
You can OR the following into the event type:
- IN_ONESHOT — remove the watch during the generation of the first event.
- IN_ONLYDIR — watch the pathname only if it's a directory.
- IN_DONT_FOLLOW — don't dereference the pathname if it's a symbolic link.
- IN_EXCL_UNLINK — don't generate events for children after they've been unlinked from the watched directory.
- IN_MASK_ADD — add events to the watch mask (if one already exists) for this pathname, instead of replacing it.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The inotify_add_watch() function starts watching for filesystem events associated with the given path.
For an overview of inotify, see the article in the Linux Journal, but note that there are differences between the Linux and BlackBerry 10 OS implementations.
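A minimal usage sketch (my own, not from the QNX documentation) looks like this; error handling is kept short and the event-reading loop is omitted:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

/* Start watching `path` for creations, deletions and writes.
   Returns the watch descriptor (non-negative) on success, -1 on error. */
int watch_dir(const char *path) {
    int fd = inotify_init();
    if (fd == -1) {
        perror("inotify_init");
        return -1;
    }
    int wd = inotify_add_watch(fd, path, IN_CREATE | IN_DELETE | IN_MODIFY);
    if (wd == -1) {
        perror("inotify_add_watch");
        close(fd);
        return -1;
    }
    /* ... read(fd, buf, len) here to receive inotify_event records ... */
    close(fd);
    return wd;
}
```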
Last modified: 2014-06-24
by Ondřej Polesný
How to generate a static website with Vue.js in no time
You have decided to build a static site, but where do you start? How do you select the right tool for the job without previous experience? How can you ensure that you succeed the first time, while avoiding tools that won’t help you in the end?
In this article, you will learn how to adjust a Vue.js website to be automatically generated as a static site.
Introduction
I summarized the key differences between an API based website and static sites in my previous article. As a quick reminder, static sites are:
- Blazing fast
- Secure (as they are just a set of static pages)
- Regenerated every time editors update the content
- Compatible with additional dynamic functionality
What is a Static Site Generator?
A static site generator is a tool that generates a static website from a website’s implementation and content.
Content can come from a headless content management system, through a REST API. The website implementation uses one of the JavaScript frameworks like Vue.js or React. The output of a static site generator is a set of static files that form the website.
Static Site Implementation
I chose Vue.js as the JavaScript framework to use. Therefore I will be working with Nuxt.js, which is a static site generator for Vue.js.
If you are using a different framework, look for a static site generator built on top of that framework (for example Gatsby for React.js).
Essentially, Nuxt is a combination of multiple tools that together enable you to create static sites. The tools include:
- Vue2 — Core Vue.js library.
- Vue Router — Handles URL routing for pages within the website.
- Vuex — Memory store for data that are shared by components.
- Vue Server Renderer — Enables server side rendering of pages before the actual static files generation
- Vue-Meta — Manages page metadata info
Nuxt also defines how the website needs to be built in order to generate static files.
Installation
In order to start building websites with Nuxt, you need to install it. See detailed installation instructions on the Nuxt.js webpage. In a nutshell, you need
npx (shipped with NPM by default) installed and run:
npx create-nuxt-app <website-name>
You can just use default selections, unless you have preferences otherwise.
Components
In one of my previous articles I explained how to create a template layout and components. All of them were defined within single file
components.js. That needs to be changed with Nuxt. All components need to be extracted from
components.js file into separate files under folder
components. Take a look at my
blog-list component and its previous implementation:
Vue.component('blog-list', {
  props: ['limit'],
  data: function(){
    return {
      articles: null
    }
  },

  created: function(){
    var query = deliveryClient
      .items()
      .type('blog_post')
      .elementsParameter(['link', 'title', 'image_url', 'image', 'teaser'])
      .orderParameter('elements.published', SortOrder.desc);
    if (this.limit){
      query = query.limitParameter(this.limit);
    }
    query
      .getPromise()
      .then(response => this.$data.articles = response.items.map(item => ({
        url: item.link.value,
        header: item.title.value,
        image: item.image_url.value != '' ? item.image_url.value : item.image.assets[0].url,
        teaser: item.teaser.value
      })));
  },

  template: `
    <section class="features">
      <article v- ... </article>
    </section>
  `
});
To separate it, you also need to change the component’s syntax to match the following template:
<template>
  HTML of the component
</template>

<script>
  export default {
    Vue.js code
  }
</script>
Therefore my adjusted
Blog-list component looks like this:
<template>
  <section class="features">
    <article v- ... </article>
  </section>
</template>

<script>
  export default {
    props: ['limit'],
    computed: {
      blogPosts: function(){
        return this.$store.state.blogPosts && this.limit && this.$store.state.blogPosts.length > this.limit
          ? this.$store.state.blogPosts.slice(0, this.limit)
          : this.$store.state.blogPosts;
      }
    }
  }
</script>
You see the template stayed the same. What changed is the implementation that is now within
export default section. Also, there used to be a function gathering data from headless CMS Kentico Cloud.
That content is now stored within Vuex store. I will explain this part later. Convert all of your components this way, to contain template within
<template> tags and implementation w
ithin <script> tags. You should end up with a similar file structure as I have:
Note that I have two new components here — Menu and Header. As my aim was to also improve performance, I decided to remove jQuery from my website. jQuery is quite a large and heavy library that was used only for small UI effects. I was able to recreate them using just Vue.js. Therefore, I converted the static HTML representing header to component. I also added the UI related functionality into
mounted function of this component.
mounted: function(){
  window.addEventListener('scroll', this.scroll);
  …
},
methods: {
  scroll: function(){
    …
  }
}
Handling Vue.js Events with Nuxt
I used to leverage Vue events in my website. The main reason was reCaptcha control used for form validation. When it was initialized, it would broadcast the event so that form component can unlock the submit button of the contact form.
With Nuxt, I no longer use
app.js or
components.js files. Therefore I created a new
recaptcha.js file that contains a simple function emitting the event when reCaptcha gets ready. Note that instead of creating new Vue.js instance just for events (
let bus = new Vue(); in my old code), it is possible to simply use
this.$nuxt the same way.
var recaptchaLoaded = function(){
  this.$nuxt.$emit('recaptchaLoaded');
}
Layout and Pages
The main frame of the page was
index.html, and each page defined its own layout in constants that were handed over to Vue router.
With Nuxt, the main frame including
<html>
tag, meta tags and other essentials of any HTML page are handled by Nuxt. The actual website implementation is handling only content w
ithin <body> tags. Move the layout that is common for all your
pages into layouts/default.vue and respect the same template as with components. My layout looks like this:
<template>
  <div>
    <Menu></Menu>
    <div id="page-wrapper">
      <Header></Header>
      <nuxt/>
      <section id="footer">
        <div class="inner">
          …
          <ContactForm></ContactForm>
          …
        </div>
      </section>
    </div>
  </div>
</template>

<script>
  import ContactForm from '~/components/Contact-form.vue'
  import Menu from '~/components/Menu.vue'
  import Header from '~/components/Header.vue'

  export default {
    components: {
      ContactForm,
      Menu,
      Header
    }
  }
</script>
The layout is basically the HTML markup of my old
index.html. However, note the
<script> section. All of the components I want to use within this layout template need to be imported from their location and specified in exported object.
Page components were previously defined in
app.js as constants. Take a look at my old Home page for example:
const Home = {
  template: `
    <div>
      <banner></banner>
      <section id="wrapper">
        <about-overview></about-overview>
        ...
        <blog-list></blog-list>
        <ul class="actions">
          <li><a href="/blog" class="button">See all</a></li>
        </ul>
        ...
      </section>
    </div>
  `
}
All these pages need to be defined in separate files within
pages folder. Main page is always called
index.vue. This is how my new
pages/index.vue (my Homepage) looks like:
<template>
  <div>
    <Banner></Banner>
    <section id="wrapper">
      <AboutOverview></AboutOverview>
      ...
      <BlogList limit="4"></BlogList>
      <ul class="actions">
        <li><a href="/blog" class="button">See all</a></li>
      </ul>
      ...
    </section>
  </div>
</template>

<script>
  import Banner from '~/components/Banner.vue'
  import AboutOverview from '~/components/About-overview.vue'
  import BlogList from '~/components/Blog-list.vue'

  export default {
    components: {
      Banner,
      AboutOverview,
      BlogList
    },
  }
</script>
Where to Store Assets
On every website there are assets like CSS stylesheets, images, logos, and JavaScripts. With Nuxt, all these static files need to be stored under folder static. So the folder structure currently looks like this:
When you reference any resources from CSS stylesheets like fonts or images, you need to use static folder as a root:
background-image: url("~/assets/images/bg.jpg");
Getting Content
With all the components and pages in place, we finally get to it: getting content into components.
Getting content using Nuxt is a bit different than it used to be. The important aspect of this process when using a static site generator is that the content needs to be gathered before all the pages are generated. Otherwise you will end up with a static website, but requests for content would still be dynamic, hitting the headless CMS from each visitor’s browser and you would lose the main benefit.
Nuxt contains two special methods that handle asynchronous data fetching at the right time, that is before the pages are generated. These methods are
asyncData and
fetch. They are available only to page components (that is files within
pages folder) and their purpose is the same, but
asyncData will automatically store received data within the component dataset.
This can be beneficial if you have many components on a single page using the same set of data. In my case, I even have multiple pages with many components that need to share the same data. Therefore I would either need to request the same data on each page or use a shared space that all pages and components could access.
I chose the latter. Nuxt makes it very easy for us. It automatically includes Vuex store that enables our components and pages access any data that are within the store. To start using the store all you need to do is create an
index.js file within the
store folder.
import Vuex from 'vuex'
const createStore = () => {
  return new Vuex.Store({
    state: () => ({}),
    mutations: {},
    actions: {},
  })
}

export default createStore
You see the instance has a few properties:
- State
State is similar to data in components. It contains definition of data fields that are used to store data.
- Mutations
Mutation are special functions that are permitted to change data in State (mutate the state).
- Actions
Actions are simple methods that enable you to, for example, implement content gathering logic.
Let’s get back to the
Blog-list component. This component needs an array of blog posts in order to render its markup. Therefore blog posts need to be stored within Vuex store:
…
const createStore = () => {
  return new Vuex.Store({
    state: () => ({
      blogPosts: null
    }),
    mutations: {
      setBlogPosts(state, blogPosts){
        state.blogPosts = blogPosts;
      }
    },
    actions: {
      getBlogPosts (context) {
        logic to get content from Kentico Cloud
      }
    },
  })
}
After adjusting Vuex store this way, the
Blog-list component can use its data:
<article v- …</article>
I already shared the whole implementation of this component above. If you noticed, it uses
computed function as a middle layer between component markup and Vuex store. That middle layer ensures the component displays only a specific amount of articles as configured in the
limit field.
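Stripped of the Vuex plumbing, that middle layer boils down to a tiny function (extracted here for illustration; the name `limitPosts` is mine):

```javascript
// Return at most `limit` posts; when there is no limit (or not enough
// posts), hand back the store data unchanged - the same behavior as
// the computed property above applies to this.$store.state.blogPosts.
function limitPosts(blogPosts, limit) {
  if (blogPosts && limit && blogPosts.length > limit) {
    return blogPosts.slice(0, limit);
  }
  return blogPosts;
}
```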
Querying the Headless CMS
Maybe you remember the
deliveryClient was used to get data from Kentico Cloud into the components.
Disclaimer: I work for Kentico, a CMS vendor that provides both traditional (coupled) CMS and a new cloud-first headless CMS — Kentico Cloud.
The very same logic can be used here in the Vuex store actions with a little tweak. Kentico Cloud has a module for Nuxt.js, install it using following command:
npm i kenticocloud-nuxt-module --save
npm i rxjs --save
With this module you can keep using
deliveryClient like before, just with a
$ prefix. So in my case the logic to get blog posts looks like this:
…
getBlogPosts (context) {
  return this.$deliveryClient
    .items()
    ...
    .orderParameter('elements.published', SortOrder.desc)
    .getPromise()
    .then(response => {
      context.commit('setBlogPosts', response.items.map(item => ({
        url: item.link.value,
        header: item.title.value,
        image: item.image_url.value != '' ? item.image_url.value : item.image.assets[0].url,
        teaser: item.teaser.value
      })))
    });
},
…
If you want to use ordering using the
orderParameter, you may need to include another import in the
store/index.js file:
import { SortOrder } from 'kentico-cloud-delivery'
Now when the content gathering logic is implemented, it’s time to use the special asynchronous function fetch that I mentioned before. See my implementation in the
index.vue page:
async fetch ({store, params}) {
  await store.dispatch('getBlogPosts')
}
The call to
store.dispatch automatically invokes
getBlogPosts action. Within the
getBlogPosts implementation note the call for
context.commit.
context refers to Vuex store and
commit will hand over blog posts data to
setBlogPosts mutation. Updating the data set with blog posts changes the state of the store (mutates it). And we are almost finished!
Other content storage options
I used Kentico Cloud headless CMS and its API here, as I am using it throughout all articles in this series. If you want to also check out other options, you can find an independent list of headless CMSs and their features at headlesscms.org.
If you don’t want to use a headless CMS and its API, you may decide to store your content in some other way — usually as a set of markdown files directly stored within your project or Git repository. You can find a nice example of this approach in nuxt-markdown-example GitHub repository.
Nuxt Configuration
The whole application needs to be properly configured using
Nuxt.config.js file. This file contains information about used modules, their configuration and site essentials like title or SEO tags. The configuration of my website looks like this:
export default {
  head: {
    title: 'Ondrej Polesny',
    meta: [
      { charset: 'utf-8' },
      ...
      { hid: 'description', name: 'description', content: 'Ondrej Polesny — Developer Evangelist + dog lover + freelance bus driver' }
    ],
    script: [
      { src: '', type: "text/javascript" },
      { src: 'assets/js/recaptcha.js', type: "text/javascript" }
    ],
  },
  modules: [
    'kenticocloud-nuxt-module'
  ],
  kenticocloud: {
    projectId: '*KenticoCloud projectId*',
    enableAdvancedLogging: false,
    previewApiKey: ''
  },
  css: [
    {src: 'static/assets/css/main.css'},
  ],
  build: {
    extractCSS: {
      allChunks: true
    }
  }
}
The head section describes website essentials like
title and
meta tags you want to include in header.
Note the
modules and
kenticocloud configuration. The first one lists all modules your application depends on and the second one is specific module configuration. This is the place where you need to put your project API key.
To see all the options for meta tags, please refer to the vue-meta documentation.
Running and Generating
Static sites need to be generated before anyone can access them or upload them to an FTP server. However, it would be very time consuming to regenerate the site every single time a change has been made during the development phase. Therefore, you can run the application locally using:
npm run dev
This will create a development server for you and enable you to access your website on (or similar). While you keep your console running this command, every change you make in your scripts will have immediate effect on the website.
To generate a true static site, execute following command:
npx nuxt generate
The output, that is your static site, will be in
dist folder. Feel free to open any page in your favorite text editor and see if the source code contains content from the headless CMS. If it does, it was successfully fetched.
Conclusion
Having a generated static site will greatly improve the website’s performance. Compared to traditional sites, the webserver does not need to perform any CPU heavy operations. It only serves static files.
Compared to API based websites, the clients receive requested data instantly with the very first response. That’s what makes them all that fast — they do not need to wait for external content to be delivered via additional asynchronous requests. The content is already there in the first response from the server. That dramatically improves user experience.
Converting the site from Vue.js implementation to Nuxt definitions is very straightforward and does not require deep knowledge of all used components and packages.
Have you successfully created your first static site? Have you experienced any struggles? Please leave a comment.
In the next article I will focus on automated regeneration of static sites and where to host them, so stay tuned. | https://www.freecodecamp.org/news/how-to-generate-a-static-website-with-vue-js-in-no-time-e74e7073b7b8/ | CC-MAIN-2021-04 | refinedweb | 2,588 | 57.87 |
According to the C Standard, 6.8.4.2, paragraph 4 [ISO/IEC 9899:2011],
A switch statement causes control to jump to, into, or past the statement that is the switch body, depending on the value of a controlling expression, and on the presence of a default label and the values of any case labels on or in the switch body.
If a programmer declares variables, initializes them before the first case statement, and then tries to use them inside any of the case statements, those variables will have scope inside the
switch block but will not be initialized and will consequently contain indeterminate values.
Noncompliant Code Example
This noncompliant code example declares variables and contains executable statements before the first case label within the
switch statement:
#include <stdio.h>

extern void f(int i);

void func(int expr) {
  switch (expr) {
    int i = 4;
    f(i);

  case 0:
    i = 17;
    /* Falls through into default code */
  default:
    printf("%d\n", i);
  }
}
Implementation Details
When the preceding example is executed on GCC 4.8.1, the variable
i is instantiated with automatic storage duration within the block, but it is not initialized. Consequently, if the controlling expression
expr has a nonzero value, the call to
printf() will access an indeterminate value of
i. Similarly, the call to
f() is not executed.
Compliant Solution
In this compliant solution, the statements before the first case label occur before the
switch statement:
#include <stdio.h>

extern void f(int i);

int func(int expr) {
  /*
   * Move the code outside the switch block; now the statements
   * will get executed.
   */
  int i = 4;
  f(i);

  switch (expr) {
  case 0:
    i = 17;
    /* Falls through into default code */
  default:
    printf("%d\n", i);
  }
  return 0;
}
Risk Assessment
Using test conditions or initializing variables before the first case statement in a
switch block can result in unexpected behavior and undefined behavior.
Automated Detection
Related Vulnerabilities
Search for vulnerabilities resulting from the violation of this rule on the CERT website.
Related Guidelines
4 Comments
Robert Seacord (Manager)
Should this be cross referenced with avoid dead code?
David Svoboda
Offhand, I think this should be incorporated into that rule. MSC07-C. Detect and remove dead code. (I'd still consider this a complete rule for the purposes of this assignment.)
Also, the 'Implementation Details' section needs to specify which platform produced the results shown here.
Pennie Walters
Under Implementation Details, should this sentence
Similarly, the call to the function will never be executed either.
say
Similarly, the call to function f will never be executed either.
instead?
Robert Seacord (Manager)
He is definitely referring to the function
f(). I'm not sure about the sentence. Maybe something like "Similarly, the call to
f()is not executed.". | https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87152307 | CC-MAIN-2019-22 | refinedweb | 461 | 51.28 |
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
> As many on this list already know, Google has long maintained an
> internal version of glibc, under the name GRTE, that is the main C
> library for binaries running on production servers. There have been
> sporadic releases of the sources in the past, for instance at
> , but we
> would like to use git to set up a higher-bandwidth conduit for
> changes, as the set of local patches is now non-trivial and should be
> more visible.

I applaud this effort!

> So to start with, it seems useful to define a namespace:
>
> google/*

Approved to have one, but I think vapier and I should be on the owners list too. I'd quibble a bit with the layout underneath google/, but it's only we Google folks who need to agree about that (and we can discuss that either here or internally). Just in anticipation of unrelated Google branches that might come along in the future, I'd suggest that you use google/grte/* for the layout you've described.

Now you just need to proceed with updating and the other steps listed there. Step 3 is already done, since I just had you added to the group and everybody else was already there. You can't edit the wiki yet because you don't have an account. Get one here:

Then I can add you to EditorGroup so you can actually edit the wiki, which is what steps 2, 4, and 5 require.

Thanks,
Roland